Navigate to the nodejs service repo and populate the git hash file which is required for our microservice.
cd ~/environment/ecsdemo-nodejs
git rev-parse --short=7 HEAD > code_hash.txt
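If you want to sanity-check the result, here is a quick sketch (run from inside any git repository) that confirms the file contains a short hex hash:

```shell
# Write the short (7+ character) commit hash to code_hash.txt,
# then confirm it looks like a lowercase hex string.
git rev-parse --short=7 HEAD > code_hash.txt
grep -Eq '^[0-9a-f]{7,}$' code_hash.txt && echo "hash OK: $(cat code_hash.txt)"
```

Note that --short=7 produces at least 7 characters; git will emit more if needed to keep the hash unambiguous.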
In the previous section, we deployed our application, test environment, and the frontend service.
To start, we need to create our nodejs service in the ecsworkshop application.
The following command will open a prompt for us to add our service to the application.
We will be prompted with a series of questions related to the application, environment, and the service we want to deploy. Answer the questions as follows:
After you answer the questions, it will begin the process of creating some baseline resources for your service. This also includes the manifest file which defines the desired state of this service. For more information on the manifest file, see the copilot-cli documentation.
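For reference, a Backend Service manifest generated by copilot looks roughly like the following. This is only an illustrative sketch; the authoritative schema lives in the copilot-cli documentation, and the values here are hypothetical:

```yaml
# Hypothetical example of copilot/ecsdemo-nodejs/manifest.yml
name: ecsdemo-nodejs
type: Backend Service

image:
  build: Dockerfile   # build the image from the service's Dockerfile
  port: 3000          # port the container listens on

cpu: 256       # CPU units for the task
memory: 512    # memory in MiB
count: 1       # desired number of tasks
```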
Next, you will be prompted to deploy a test environment. An environment encompasses all of the resources that are required to support running your containers in ECS. This includes the networking stack (VPC, Subnets, Security Groups, etc), the ECS Cluster, Load Balancers (if required), and more.
Type “y”, and hit enter. Given that a test environment already exists, copilot will continue on to build the Docker image, push it to ECR, and deploy the backend service.
Below is an example of what the cli interaction will look like:
That’s it! When the deployment is complete, navigate back to the frontend URL and you should now see the backend Nodejs service in the diagram.
Let’s check out the ecsworkshop application details.
copilot app show ecsworkshop
The result should look like this:
We can see that our recently deployed Nodejs service is shown as a Backend Service in the ecsworkshop application.
Given that we deployed the test environment when creating our nodejs service, let’s show the details of the test environment:
copilot env show -n test
We now can see our newly deployed service in the test environment!
Let’s now check the status of the nodejs service.
copilot svc status -n ecsdemo-nodejs
We can see that we have one active running task, along with some additional details.
Let’s scale our task count up! To do this, we will update the manifest file that was created when we initialized our service earlier. Open the manifest file (./copilot/ecsdemo-nodejs/manifest.yml) and change the value of the count key from 1 to 3. This declares the desired state of the service to be 3 tasks instead of 1. Feel free to explore the rest of the manifest file to familiarize yourself with it.
# Number of tasks that should be running in your service.
count: 3
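If you prefer to make the edit non-interactively, a one-liner like the following would work (an assumption here: the manifest still contains the default `count: 1` line produced at init time):

```shell
# Replace the default task count with 3 and show the resulting line.
sed -i 's/^count: 1$/count: 3/' ./copilot/ecsdemo-nodejs/manifest.yml
grep '^count:' ./copilot/ecsdemo-nodejs/manifest.yml
```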
Once you are done and save the changes, run the following:
copilot svc deploy
Copilot does the following with this command:
To confirm the deploy, let’s first check our service details via the copilot-cli:
copilot svc status -n ecsdemo-nodejs
You should now see three tasks running!
Now go back to the load balancer URL, and you should see the diagram alternate between the three nodejs tasks.
The services we deploy via copilot automatically ship their logs to CloudWatch Logs by default. Rather than navigating the console to review logs, we can use the copilot cli to view them locally. Let’s tail the logs for the nodejs service.
copilot svc logs -a ecsworkshop -n ecsdemo-nodejs --follow
Note that if you are in the directory of the service whose logs you want to review, you can simply type the command below. If you want to review logs for the service in a particular environment, pass the -e flag with the environment name.
copilot svc logs
One last thing to bring up: you aren’t limited to live-tailing logs. Type

copilot svc logs --help

to see the different ways to review logs from the command line.
We have officially completed deploying our nodejs backend service. In the next section, we will extend our application by adding the crystal backend service.
cdk deploy --require-approval never
As we mentioned in the platform build, we are defining our deployment configuration via code. Let’s look through the code to better understand how cdk is deploying.
Because we built the platform in its own stack, there are certain environmental values that we will need to reuse amongst all services being deployed. In this custom construct, we are importing the VPC, ECS Cluster, and Cloud Map namespace from the base platform stack. By wrapping these into a custom construct, we are isolating the platform imports from our service deployment logic.
class BasePlatform(core.Construct):

    def __init__(self, scope: core.Construct, id: str, **kwargs):
        super().__init__(scope, id, **kwargs)

        # The base platform stack is where the VPC was created, so all we need
        # is the name to do a lookup and import it into this stack for use
        self.vpc = aws_ec2.Vpc.from_lookup(
            self, "ECSWorkshopVPC",
            vpc_name='ecsworkshop-base/BaseVPC'
        )

        # Importing the service discovery namespace from the base platform stack
        self.sd_namespace = aws_servicediscovery.PrivateDnsNamespace.from_private_dns_namespace_attributes(
            self, "SDNamespace",
            namespace_name=core.Fn.import_value('NSNAME'),
            namespace_arn=core.Fn.import_value('NSARN'),
            namespace_id=core.Fn.import_value('NSID')
        )

        # Importing the ECS cluster from the base platform stack
        self.ecs_cluster = aws_ecs.Cluster.from_cluster_attributes(
            self, "ECSCluster",
            cluster_name=core.Fn.import_value('ECSClusterName'),
            security_groups=[],
            vpc=self.vpc,
            default_cloud_map_namespace=self.sd_namespace
        )

        # Importing the security group that allows frontend to communicate with backend services
        self.services_sec_grp = aws_ec2.SecurityGroup.from_security_group_id(
            self, "ServicesSecGrp",
            security_group_id=core.Fn.import_value('ServicesSecGrp')
        )
For the backend service, we simply want to run a container from a docker image, but still need to figure out how to deploy it and get it behind a scheduler. To do this on our own, we would need to build a task definition, ECS service, and figure out how to get it behind CloudMap for service discovery. To build these components on our own would equate to hundreds of lines of CloudFormation, whereas with the higher level constructs that the cdk provides, we are able to build everything with 30 lines of code.
class NodejsService(core.Stack):

    def __init__(self, scope: core.Stack, id: str, **kwargs):
        super().__init__(scope, id, **kwargs)

        # Importing our shared values from the base stack construct
        self.base_platform = BasePlatform(self, self.stack_name)

        # The task definition is where we store details about the task
        # that will be scheduled by the service
        self.fargate_task_def = aws_ecs.TaskDefinition(
            self, "TaskDef",
            compatibility=aws_ecs.Compatibility.EC2_AND_FARGATE,
            cpu='256',
            memory_mib='512',
        )

        # The container definition defines the container(s) to be run
        # when the task is instantiated
        self.container = self.fargate_task_def.add_container(
            "NodeServiceContainerDef",
            image=aws_ecs.ContainerImage.from_registry("brentley/ecsdemo-nodejs"),
            memory_reservation_mib=512,
            logging=aws_ecs.LogDriver.aws_logs(
                stream_prefix='ecsworkshop-nodejs'
            )
        )

        # Serve this container on port 3000
        self.container.add_port_mappings(
            aws_ecs.PortMapping(
                container_port=3000
            )
        )

        # Build the service definition to schedule the container in the shared cluster
        self.fargate_service = aws_ecs.FargateService(
            self, "NodejsFargateService",
            task_definition=self.fargate_task_def,
            cluster=self.base_platform.ecs_cluster,
            security_group=self.base_platform.services_sec_grp,
            desired_count=1,
            cloud_map_options=aws_ecs.CloudMapOptions(
                cloud_map_namespace=self.base_platform.sd_namespace,
                name='ecsdemo-nodejs'
            )
        )
log_group=$(awslogs groups -p ecsworkshop-nodejs)
awslogs get -G -S --timestamp --start 1m --watch $log_group
First, we will navigate to ECS in the console and drill down into our service to get detailed information. As you can see, there is a lot of information that we can gather around the service itself, such as Service Discovery details, number of tasks running, as well as logs. Click the logs tab to review the logs for the running service.
Next, we can review our service logs in near real time. You can go back in time as far as one week, or drill down to the past 30 seconds. In the example below, we select 30 seconds.
In app.py, change the desired count from 1 to 3:
self.fargate_service = aws_ecs.FargateService(
    self, "NodejsFargateService",
    task_definition=self.fargate_task_def,
    cluster=self.base_platform.ecs_cluster,
    security_group=self.base_platform.services_sec_grp,
    desired_count=3,
    #desired_count=1,
    cloud_map_options=aws_ecs.CloudMapOptions(
        cloud_map_namespace=self.base_platform.sd_namespace,
        name='ecsdemo-nodejs'
    )
)
Using the editor of your choice, open ~/environment/ecsdemo-nodejs/cdk/app.py.
Search for "Enable Service Autoscaling" to find the code that will enable autoscaling for the service.
Remove the comments (#) from the code for self.autoscale and below. Once you remove them, it should look like the following:
# Enable Service Autoscaling
self.autoscale = self.fargate_service.auto_scale_task_count(
    min_capacity=1,
    max_capacity=10
)

self.autoscale.scale_on_cpu_utilization(
    "CPUAutoscaling",
    target_utilization_percent=50,
    scale_in_cooldown=core.Duration.seconds(30),
    scale_out_cooldown=core.Duration.seconds(30)
)
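To build intuition for what target tracking at 50% CPU does, here is a rough back-of-the-envelope sketch. The real decision logic belongs to Application Auto Scaling; this only illustrates the proportional idea behind target tracking:

```shell
# If 3 tasks are running at ~90% average CPU against a 50% target,
# capacity grows roughly proportionally: ceil(3 * 90 / 50) = 6 tasks.
current=3; actual_cpu=90; target_cpu=50
needed=$(( (current * actual_cpu + target_cpu - 1) / target_cpu ))
echo "scale to: $needed tasks"
# → scale to: 6 tasks
```

The min_capacity and max_capacity bounds above would then clamp that result to the 1-10 range.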
Now that you have the autoscaling code in place, let’s deploy it!
Let’s see a diff of our present state vs. the proposed changes to our environment. Run the following:

cdk diff

Once you have reviewed the proposed changes, deploy them:

cdk deploy --require-approval never
In order to introduce load to the Nodejs Fargate service, we need to have the ability to reach its service endpoint. Since the service is in a private subnet, we will use the EC2 instance that was deployed within the same VPC. The instance was created in the Platform/Build Environment steps.
Once you have deployed the autoscaling changes, copy the instance id created during the platform deployment and start a session on the EC2 instance via the SSM agent, or use the following commands:
ec2InstanceId=$(aws cloudformation describe-stacks --stack-name ecsworkshop-base --query "Stacks" --output json | jq -r '.[].Outputs[] | select(.OutputKey | contains("StressToolEc2Id")) | .OutputValue')
aws ssm start-session --target "$ec2InstanceId"
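Because `--query "Stacks"` returns a JSON array of stacks, the jq filter walks it with `.[].Outputs[]`. You can try the extraction locally against hypothetical stack output (the values below are made up) before touching your account:

```shell
# Hypothetical describe-stacks output with two stack outputs.
sample='[{"Outputs":[{"OutputKey":"StressToolEc2Id","OutputValue":"i-0123456789abcdef0"},{"OutputKey":"VpcId","OutputValue":"vpc-123"}]}]'
echo "$sample" | jq -r '.[].Outputs[] | select(.OutputKey | contains("StressToolEc2Id")) | .OutputValue'
# → i-0123456789abcdef0
```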
siege -c 100 -i http://ecsdemo-nodejs.service:3000 &
watch -d -n 3 echo `aws ecs describe-services --cluster container-demo --services ecsdemo-nodejs | jq '.services[] | "Tasks Desired: \(.desiredCount) vs Tasks Running: \(.runningCount)"'`
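Since `.services` in the describe-services response is an array, the filter iterates it with `.services[]`. Here is the string interpolation tried locally against hypothetical output (with -r added for unquoted text):

```shell
# Hypothetical describe-services output captured mid scale-out.
sample='{"services":[{"desiredCount":3,"runningCount":2}]}'
echo "$sample" | jq -r '.services[] | "Tasks Desired: \(.desiredCount) vs Tasks Running: \(.runningCount)"'
# → Tasks Desired: 3 vs Tasks Running: 2
```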
Now that we’ve seen the service scale out, let’s stop the running watch command. Simply press control + c to cancel.
Time to cancel the load test. By appending & to the siege command, we instructed it to run in the background. Bring it back to the foreground with fg, and stop it by typing the following:
control + c
NOTE: To ensure application availability, the service scales out proportionally to the metric as fast as it can, but scales in more gradually. For more information, see the documentation