Before we proceed with the final step, we need to build a few HTC-Grid artifacts. These include: Python packages (for the HTC-Connector-Library), Docker images (for deploying example applications), and configuration files for HTC-Grid and Kubernetes.
To build and install these, run:

```
make happy-path TAG=$TAG REGION=$HTCGRID_REGION
```
A few notes on this command:
- If `TAG` is omitted, `mainline` is used as the default value.
- If `REGION` is omitted, `eu-west-1` is used.

Once the command above has executed, a folder named `generated` will be created at `~/environment/aws-htc-grid/generated`. This folder contains some important files, including the following:
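The defaulting behaviour described above can be sketched with standard shell parameter expansion (a sketch only, assuming the Makefile applies its defaults this way):

```shell
# Sketch of the fallback values applied when TAG/REGION are not set.
# (Assumption: the Makefile uses standard ${VAR:-default} defaulting.)
TAG="${TAG:-mainline}"                          # default tag: mainline
HTCGRID_REGION="${HTCGRID_REGION:-eu-west-1}"   # default region: eu-west-1
echo "make happy-path TAG=${TAG} REGION=${HTCGRID_REGION}"
```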
The `~/environment/aws-htc-grid/generated/grid_config.json` file contains the configuration that we will use to deploy HTC-Grid. Let's explore a few of its sections:
Using EKS as the Compute Plane allows us to use EC2 Spot Instances. Amazon EC2 Spot Instances offer spare compute capacity available in the AWS cloud at steep discounts compared to On-Demand instances. Spot Instances enable you to optimize your costs on the AWS cloud and scale your application’s throughput up to 10X for the same budget.
Given that we will use the Kubernetes Cluster Autoscaler, we create several node groups, each with instances of the same size. You can read more about EKS and Spot best practices here.
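To make this concrete, a node-group definition in `grid_config.json` might look roughly like the following. This is an illustrative sketch only; the key names, instance types, and sizes shown here are assumptions, so check the generated file for the exact schema:

```json
{
  "eks_worker_groups": [
    {
      "node_group_name": "worker-spot-xlarge",
      "capacity_type": "SPOT",
      "instance_types": ["m5.xlarge", "m4.xlarge", "m5a.xlarge"],
      "min_size": 0,
      "desired_size": 1,
      "max_size": 5
    }
  ]
}
```

Each node group mixes several instance types of the same size, which follows the Spot best practice of diversifying across capacity pools while keeping Cluster Autoscaler's scaling decisions predictable.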
As this is a test deployment, we will just use the default values. You may need to scale these values up depending on your workload. When you do, you will also need to consider the `max_htc_agents` and `min_htc_agents` settings, as well as `dynamodb_default_read_capacity` and `dynamodb_default_write_capacity`.
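For example, these settings appear in `grid_config.json` along the following lines. The key names are taken from the text above, but the values shown are placeholders rather than recommendations:

```json
{
  "max_htc_agents": 100,
  "min_htc_agents": 1,
  "dynamodb_default_read_capacity": 100,
  "dynamodb_default_write_capacity": 100
}
```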
Finally, the last section of the file: we have highlighted the part that defines how much memory and CPU each worker gets. In this case we have allocated ~1 vCPU and ~2 GB of RAM for each of the workers. Note also how the location of the Lambda function points to the `lambda.zip` that we just created by executing the `make` command above.
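As a rough illustration, the worker resource section might look like the following. The key names, the resource values, and the S3 path are all assumptions for illustration; refer to the generated `grid_config.json` for the real structure:

```json
{
  "agent_configuration": {
    "lambda": {
      "minCPU": "800",
      "maxCPU": "900",
      "minMemory": "1200",
      "maxMemory": "1900",
      "location": "s3://<your-bucket>/lambda.zip"
    }
  }
}
```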