First, we will update our ECS cluster to enable the Fargate capacity providers. Because the cluster already exists, we will do it via the CLI, as this currently can't be done in the console for existing clusters.
Using the AWS CLI, run the following command:
aws ecs put-cluster-capacity-providers \
  --cluster container-demo \
  --capacity-providers FARGATE FARGATE_SPOT \
  --default-capacity-provider-strategy \
    capacityProvider=FARGATE,weight=1,base=1 \
    capacityProvider=FARGATE_SPOT,weight=4
With this command, we’re adding the Fargate and Fargate Spot capacity providers to our ECS Cluster. Let’s break it down by each parameter:
--cluster: the name of the cluster whose capacity provider strategy we want to update.
--capacity-providers: the capacity providers we want enabled on the cluster. Since we are not using EC2-backed ECS tasks, we don't need to create a capacity provider beforehand; with Fargate, FARGATE and FARGATE_SPOT are the only two options.
--default-capacity-provider-strategy: this sets a default strategy on the cluster, meaning that if a task or service is deployed to the cluster without a strategy or launch type set, it will default to this strategy. Let's break down base and weight to get a better understanding.
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined.
The weight value designates the relative percentage of the total number of launched tasks that should use the specified capacity provider. For example, if you have a strategy that contains two capacity providers, and both have a weight of 1, then when the base is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1 for capacityProviderA and a weight of 4 for capacityProviderB, then for every one task that is run using capacityProviderA, four tasks would use capacityProviderB.
In the command we ran, we stated that we want a minimum of one Fargate task as our base, and after that, for every one task running on Fargate, four tasks will run on Fargate Spot.
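To make the base/weight arithmetic concrete, here is a small Python sketch that approximates how a desired count gets split across a strategy. It mirrors the documented base-then-weight behavior; ECS's real scheduler may round edge cases slightly differently, so treat this as an illustration, not the exact placement algorithm.

```python
def split_tasks(total, providers):
    """Approximate how ECS distributes `total` tasks across a capacity
    provider strategy. `providers` is a list of (name, weight, base)
    tuples. This mirrors the documented base-then-weight behavior;
    the real scheduler may round slightly differently."""
    counts = {}
    remaining = total
    # Satisfy each base first (only one provider may define a base).
    for name, _, base in providers:
        counts[name] = min(base, remaining)
        remaining -= counts[name]
    # Distribute the rest proportionally to the weights, giving any
    # leftover tasks to the largest fractional shares.
    total_weight = sum(w for _, w, _ in providers)
    shares = [(name, remaining * w / total_weight) for name, w, _ in providers]
    for name, share in shares:
        counts[name] += int(share)
    leftover = remaining - sum(int(s) for _, s in shares)
    for name, share in sorted(shares, key=lambda s: s[1] - int(s[1]), reverse=True)[:leftover]:
        counts[name] += 1
    return counts

# Our strategy: FARGATE weight=1 base=1, FARGATE_SPOT weight=4, desired_count=10
print(split_tasks(10, [("FARGATE", 1, 1), ("FARGATE_SPOT", 4, 0)]))
# → {'FARGATE': 3, 'FARGATE_SPOT': 7} under this approximation
```

With a desired count of 10, the base claims one Fargate task, and the remaining nine split roughly 1:4, landing on about three Fargate and seven Fargate Spot tasks.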
Next, let's navigate to the service repo and into the fargate directory. This is where we'll do the rest of the work.
The application we are deploying is a simple API that returns the ARNs of the tasks running in the cluster, along with the capacity provider each one is using. It also reports the ARN of the container that served the request and its provider. It's a simple application that lets us see the strategy in action in real time.
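For a rough idea of what the app does under the hood, here is a sketch of the kind of transformation it performs on ECS API data. The input mimics the tasks list returned by the ECS DescribeTasks API (taskArn and capacityProviderName are real fields in that response); the function name and the output shape are illustrative assumptions, not the demo app's actual code.

```python
def summarize_tasks(described_tasks, this_task_arn):
    # `described_tasks` mimics the `tasks` list from the ECS
    # DescribeTasks API; `taskArn` and `capacityProviderName` are real
    # fields in that response. The output shape is an assumption about
    # the demo app, not its exact code.
    tasks = [
        {"arn": t["taskArn"], "provider": t.get("capacityProviderName", "UNKNOWN")}
        for t in described_tasks
    ]
    this_task = next((t for t in tasks if t["arn"] == this_task_arn), None)
    return {"all_tasks": tasks, "this_task": this_task}


sample = [
    {"taskArn": "arn:aws:ecs:us-east-1:111122223333:task/abc", "capacityProviderName": "FARGATE"},
    {"taskArn": "arn:aws:ecs:us-east-1:111122223333:task/def", "capacityProviderName": "FARGATE_SPOT"},
]
print(summarize_tasks(sample, sample[1]["taskArn"])["this_task"]["provider"])  # FARGATE_SPOT
```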
Here is what we should see when we hit the load balancer URL after we deploy the application:
Like our previous services, we are using the CDK to deploy. Let's go ahead and deploy it, and then dive into the code and review it!
Review what changes are being proposed:
Deploy the service
cdk deploy --require-approval never
As we've gone over in other sections, we import platform-related items using the BasePlatform construct. For the sake of time, we will skip that review.
This deployment pattern looks very similar to the frontend service we deployed earlier. We are using a high-level CDK construct that builds all of the resources needed to connect our application to a public-facing load balancer.
class CapacityProviderFargateService(core.Stack):

    def __init__(self, scope: core.Stack, id: str, **kwargs):
        super().__init__(scope, id, **kwargs)

        self.base_platform = BasePlatform(self, self.stack_name)

        self.task_image = aws_ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
            image=aws_ecs.ContainerImage.from_registry("adam9098/ecsdemo-capacityproviders:latest"),
            container_port=5000,
        )

        self.load_balanced_service = aws_ecs_patterns.ApplicationLoadBalancedFargateService(
            self, "FargateCapacityProviderService",
            service_name='ecsdemo-capacityproviders-fargate',
            cluster=self.base_platform.ecs_cluster,
            cpu=256,
            memory_limit_mib=512,
            desired_count=10,
            public_load_balancer=True,
            task_image_options=self.task_image,
            platform_version=aws_ecs.FargatePlatformVersion.VERSION1_4,
        )
The only difference you may notice in the code is that we remove the default launch type of Fargate. The reason is that we want the cluster to choose the default capacity provider strategy (which we defined earlier). By removing the launch type, the cluster's default capacity provider strategy decides where the tasks run.
We are also creating an IAM policy statement and attaching it to the service's task role. This is another one of the native integrations ECS has with AWS: the policy allows the containers to list and describe ECS tasks in the AWS account. This ensures that every task spun up by the service has the IAM permissions to make those calls to AWS resources.
# Grab the underlying CloudFormation resource for the ECS service so we
# can remove the LaunchType property, letting the cluster's default
# capacity provider strategy take over.
self.cfn_resource = self.load_balanced_service.service.node.default_child
self.cfn_resource.add_deletion_override("Properties.LaunchType")

self.load_balanced_service.task_definition.add_to_task_role_policy(
    aws_iam.PolicyStatement(
        actions=[
            'ecs:ListTasks',
            'ecs:DescribeTasks'
        ],
        resources=['*']
    )
)
Once the deployment is finished, copy the load balancer URL, and paste it into your browser. The output should look something like this:
You can go directly to the URL in the browser to see the JSON response, or, if you want to see it on the command line, you can curl the load balancer.
Here is what to run to see the output from the command line:
curl -s <paste-load-balancer-url-here> | jq
The command line output should look something like this:
Whether you are in the browser or on the command line, go ahead and refresh a few times. You should see that you are routed to a different container via the load balancer on each new request, and that the responding containers are running on either the Fargate or Fargate Spot capacity provider.
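If you want to quantify the split rather than eyeball it, you could collect a number of responses and tally which provider served each request. A minimal sketch, assuming each parsed JSON response carries a this_task object with a provider field (the demo app's exact field names may differ):

```python
from collections import Counter

def tally_providers(responses):
    # Each item in `responses` is assumed to be a parsed JSON response
    # from the demo app containing {"this_task": {"provider": ...}};
    # the real app's field names may differ.
    return Counter(r["this_task"]["provider"] for r in responses)


sample = [{"this_task": {"provider": p}}
          for p in ["FARGATE", "FARGATE_SPOT", "FARGATE_SPOT", "FARGATE_SPOT"]]
print(tally_providers(sample))  # Counter({'FARGATE_SPOT': 3, 'FARGATE': 1})
```

Over enough requests, the tally should trend toward the 1:4 weighting we configured (after the one-task base is accounted for).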
Here’s what we accomplished in this section of the workshop:
Run the cdk command to delete the service (and dependent components) that we deployed.
cdk destroy -f
Next, go back to the ECS cluster in the console. In the top right, select Default capacity provider strategy, then click the x next to each of the strategies until there are none left to remove. Once you've done that, save your changes.
In the next section, we’re going to: