Submitting a pricing task

Let’s now look at the client code for our pricing engine application.

Pricing Engine Client

The client for the pricing application is at https://github.com/awslabs/aws-htc-grid/blob/main/examples/client/python/portfolio_pricing_client.py.

The client takes a few arguments, such as:

  • --workload_type: Determines how tasks are generated. Can be “single_trade” or “random_portfolio”.
  • --portfolio_size: Overrides the size of the sample portfolio (10 by default).
  • --trades_per_worker: Defines how many tasks/trades each worker evaluates (i.e., the batching per worker). For example, if we have 5 trades and trades per worker set to 2, then the total number of tasks is 3: [1,2] [3,4] [5] (see the sketch after this list).
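
The batching described above can be illustrated in a few lines of Python. This is a minimal sketch of the idea, not code from the repository:

# Minimal sketch (not from the repository): split a portfolio of trades
# into per-worker batches, matching the [1,2] [3,4] [5] example above.
def chunk_trades(trades, trades_per_worker):
    """Group trades into batches of at most trades_per_worker items."""
    return [trades[i:i + trades_per_worker]
            for i in range(0, len(trades), trades_per_worker)]

print(chunk_trades([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4], [5]]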

The main function that performs the submission, evaluate_trades_on_grid (lines 105-134 of the client), is shown below.

def evaluate_trades_on_grid(grid_tasks):
    """This method simply passes the list of grid_tasks to the grid for the execution and then awaits the results
    Args:
        grid_tasks (list of dict): grid_tasks
    Returns:
        dict: final response from the get_results function
    """

    gridConnector = AWSConnector()

    try:
        username = os.environ['USERNAME']
    except KeyError:
        username = ""
    try:
        password = os.environ['PASSWORD']
    except KeyError:
        password = ""

    gridConnector.init(client_config_file,
                       username=username, password=password)
    gridConnector.authenticate()

    submission_resp = gridConnector.send(grid_tasks)
    logging.info(submission_resp)

    results = gridConnector.get_results(submission_resp, timeout_sec=FLAGS.timeout_sec)
    logging.info(results)

    return results
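
As an aside, the two try/except blocks above read optional credentials from the environment, falling back to empty strings. The same pattern can be written more compactly with os.environ.get; this is a behavior-preserving sketch, not the repository’s code:

import os

# Equivalent to the try/except blocks in evaluate_trades_on_grid:
# fall back to an empty string when the variable is not set.
username = os.environ.get('USERNAME', '')
password = os.environ.get('PASSWORD', '')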

Pricing a single Option trade

To launch the client, we will use the same approach as before: a container has been created with the application, ready to be executed as a Kubernetes Job.

cd ~/environment/aws-htc-grid
kubectl apply -f ~/environment/aws-htc-grid/generated/portfolio-pricing-single-trade.yaml

You can see the logs generated by the client using the following command:

kubectl logs job.batch/portfolio-pricing-single-trade -f

It may take a few seconds for the portfolio-pricing-single-trade Job to be deployed to the Kubernetes cluster. During that time the kubectl logs command may fail with an error similar to the one below. Since the container only takes a few seconds to start, re-run the command and you should get the logs.

Error from server (BadRequest): container "generator" in pod "portfolio-pricing-single-trade-fkmmc" is waiting to start: ContainerCreating

Remember, to repeat this exercise you first need to remove the Kubernetes Job once it has completed, by running the following command:

kubectl delete -f ~/environment/aws-htc-grid/generated/portfolio-pricing-single-trade.yaml

Submitting tasks from your own environment

So far, all the tasks we have submitted have been launched using Kubernetes from within the cluster. However, as long as your clients can route to the right endpoints and the security groups allow them to reach those endpoints, you can run the clients from wherever you need to. It is common to have clients running on premises.

As you have seen in the client above, clients need to provide a configuration file to the AWSConnector. There is a configuration file that you have been using throughout this workshop; run the following command to display it:

cat $AGENT_CONFIG_FILE

This should display a JSON configuration file similar to this one:

{
  "region": "eu-west-1",
  "sqs_endpoint": "https://sqs.eu-west-1.amazonaws.com",
  "sqs_queue": "htc_task_queue-main",
  "sqs_dlq": "htc_task_queue_dlq-main",
  "redis_url": "htc-data-cache-main.xxxxxx.0001.euw1.cache.amazonaws.com",
  "redis_password": "123456789101112",
  "cluster_name": "htc-main",
  "ddb_state_table" : "htc_tasks_state_table-main",
  "empty_task_queue_backoff_timeout_sec" : 0.5,
  "work_proc_status_pull_interval_sec" : 0.5,
  "task_ttl_expiration_offset_sec" : 30,
  "task_ttl_refresh_interval_sec" : 5,
  "dynamodb_results_pull_interval_sec" : 0.5,
  "agent_task_visibility_timeout_sec" : 3600,
  "task_input_passed_via_external_storage" : 1,
  "lambda_name_ttl_checker": "ttl_checker-main",
  "lambda_name_submit_tasks": "submit_task-main",
  "lambda_name_get_results": "get_results-main",
  "lambda_name_cancel_tasks": "cancel_tasks-main",
  "s3_bucket": "htc-data-bucket-main20210809182129987200000002",
  "grid_storage_service" : "REDIS",
  "htc_path_logs" : "logs/",
  "error_log_group" : "grid_errors-main",
  "error_logging_stream" : "lambda_errors-main",
  "metrics_are_enabled": "1",
  "metrics_grafana_private_ip": "influxdb.influxdb",
  "metrics_submit_tasks_lambda_connection_string": "influxdb 8086 measurementsdb submit_tasks",
  "metrics_cancel_tasks_lambda_connection_string": "influxdb 8086 measurementsdb cancel_tasks",
  "metrics_pre_agent_connection_string": "influxdb 8086 measurementsdb agent_pre",
  "metrics_post_agent_connection_string": "influxdb 8086 measurementsdb agent_post",
  "metrics_get_results_lambda_connection_string": "influxdb 8086 measurementsdb get_results",
  "metrics_ttl_checker_lambda_connection_string": "influxdb 8086 measurementsdb ttl_checker",
  "agent_use_congestion_control": "0",
  "user_pool_id": "eu-west-1_P1FJAzyzz",
  "cognito_userpool_client_id": "xXXXXXXXXXXXXXXXXXXXX",
  "public_api_gateway_url": "https://3v94pleei0.execute-api.eu-west-1.amazonaws.com/v1",
  "private_api_gateway_url": "https://dakrks3g9j.execute-api.eu-west-1.amazonaws.com/v1",
  "api_gateway_key": "xXXXXXXXXXXXXXXXXXXXX",
  "enable_xray" : "0"
}
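
To run a client outside the cluster, you point it at a copy of this file, just as the gridConnector.init call does in the client above. As a minimal sketch (assuming the AGENT_CONFIG_FILE environment variable points at the file, as it does in this workshop), you can load and inspect the configuration with standard Python:

import json
import os

# AGENT_CONFIG_FILE is the environment variable used throughout this
# workshop to locate the agent configuration shown above.
config_path = os.environ["AGENT_CONFIG_FILE"]
with open(config_path) as f:
    config = json.load(f)

# A couple of the values a remote client needs to reach the grid.
print(config["public_api_gateway_url"])
print(config["grid_storage_service"])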

Additionally, in the case of Python, you can distribute the Python whl packages created within the ~/environment/aws-htc-grid/dist directory by uploading them to your local Nexus or Artifactory Python repositories. To verify that the Python client library can be installed with the usual Python commands, you can run the following. This will install the libraries within the Python virtual environment and download all the required dependencies.

cd ~/environment/aws-htc-grid
pip install ~/environment/aws-htc-grid/dist/*
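
Once the wheels are installed, a quick smoke test is to import the connector used by the client shown earlier. The module path below matches the import in portfolio_pricing_client.py, but verify it against your installed release:

# Smoke test: import the connector class used by the client above.
from api.connector import AWSConnector

connector = AWSConnector()
print(type(connector).__name__)  # AWSConnector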

While we have used the Cloud9 environment we created for admin tasks on both HTC-Grid and the Kubernetes/EKS cluster, that Cloud9 environment was created in the default VPC and, as a result, cannot reach some of the components without changes.