Crystal Backend API

Deploy the Crystal backend service

Navigate to the crystal service repo.

cd ~/environment/ecsdemo-crystal

In the previous sections, we deployed our application, test environment, frontend service, and the nodejs service.

Like we’ve done in previous sections, we will first need to create our crystal service in the ecsworkshop application.

The following command will open a prompt for us to add our service to the application.

copilot init

We will be prompted with a series of questions related to the application, environment, and the service we want to deploy. Answer the questions as follows:

  • Would you like to use one of your existing applications? “Y”
  • Which existing application do you want to add a new service to? Select “ecsworkshop”, hit enter
  • Which service type best represents your service’s architecture? Select “Backend Service”, hit enter
  • What do you want to name this Backend Service? ecsdemo-crystal
  • Dockerfile: ./Dockerfile

After you answer the questions, it will begin the process of creating some baseline resources for your service. This also includes the manifest file which defines the desired state of this service. For more information on the manifest file, see the copilot-cli documentation.
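
For reference, the generated manifest for a Backend Service looks roughly like the following. Treat this as a sketch: the exact fields and values (for example the exposed port) depend on your Dockerfile and copilot version.

# copilot/ecsdemo-crystal/manifest.yml (abridged sketch)
name: ecsdemo-crystal
type: Backend Service

image:
  # Path to your service's Dockerfile.
  build: ./Dockerfile
  # Port exposed by the container (the crystal app listens on 3000).
  port: 3000

# Number of CPU units and MiB of memory for the task.
cpu: 256
memory: 512

# Number of tasks that should be running in your service.
count: 1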

Next, you will be prompted to deploy a test environment. An environment encompasses all of the resources that are required to support running your containers in ECS. This includes the networking stack (VPC, Subnets, Security Groups, etc), the ECS Cluster, Load Balancers (if required), and more.

Type “y”, and hit enter. Given that a test environment already exists, copilot will continue on and build the docker image, push it to ECR, and deploy the backend service.

The CLI will print its progress as it works through these steps.

The crystal service is now deployed! Navigate back to the frontend load balancer url, and you should now see the crystal service. You may notice that the diagram is not yet working fully as expected. As we’ve experienced with the previous services, this is because the service needs an environment variable as well as an IAM role addon to fully function as expected. Run the commands below to add the environment variable and create the IAM role in the addons path.

mkdir -p copilot/ecsdemo-crystal/addons
cat << EOF > copilot/ecsdemo-crystal/addons/task-role.yaml
# You can use any of these parameters to create conditions or mappings in your template.
Parameters:
  App:
    Type: String
    Description: Your application's name.
  Env:
    Type: String
    Description: The environment name your service, job, or workflow is being deployed to.
  Name:
    Type: String
    Description: The name of the service, job, or workflow being deployed.

Resources:
  SubnetsAccessPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      PolicyDocument:
        Version: 2012-10-17
        Statement:
          - Sid: EC2Actions
            Effect: Allow
            Action:
              - ec2:DescribeSubnets
            Resource: "*"

Outputs:
  # You also need to output the IAM ManagedPolicy so that Copilot can inject it to your ECS task role.
  SubnetsAccessPolicyArn:
    Description: "The ARN of the Policy to attach to the task role."
    Value: !Ref SubnetsAccessPolicy
EOF

cat << EOF >> copilot/ecsdemo-crystal/manifest.yml

variables:
  AWS_DEFAULT_REGION: $(echo $AWS_REGION)
EOF

git rev-parse --short=7 HEAD > code_hash.txt

Now, let’s redeploy the service:

copilot svc deploy

Interacting with the application

Let’s check out the ecsworkshop application details.

copilot app show ecsworkshop

The output shows the environments and services that make up the application.

We can see that our recently deployed crystal service is shown as a Backend Service in the ecsworkshop application.

Interacting with the environment

Given that we deployed the test environment when creating our frontend service, let’s show the details of the test environment:

copilot env show -n test

We now can see our newly deployed service in the test environment!

Interacting with the crystal service

Let’s now check the status of the crystal service.

Run:

copilot svc status -n ecsdemo-crystal

We can see that we have one active running task, along with some additional details.

Scale our task count

Let’s scale our task count up! To do this, we are going to update the manifest file that was created when we initialized our service earlier. Open the manifest file (./copilot/ecsdemo-crystal/manifest.yml), and change the value of the count key from 1 to 3. This declares that the desired state of the service is 3 tasks instead of 1. Feel free to explore the rest of the manifest file to familiarize yourself with it.

# Number of tasks that should be running in your service.
count: 3

Once you are done and save the changes, run the following:

copilot svc deploy

Copilot does the following with this command:

  • Builds your image locally
  • Pushes it to your service’s ECR repository
  • Converts your manifest file to CloudFormation
  • Packages any additional infrastructure into CloudFormation
  • Deploys your updated service and resources to CloudFormation

To confirm the deploy, let’s first check our service details via the copilot-cli:

copilot svc status -n ecsdemo-crystal

You should now see three tasks running!

Now go back to the load balancer url, and you should see the diagram alternate between the three crystal tasks.

Review the service logs

The services we deploy via copilot automatically ship logs to CloudWatch Logs by default. Rather than navigating the console to review logs, we can use the copilot cli to see those logs locally. Let’s tail the logs for the crystal service.

copilot svc logs -a ecsworkshop -n ecsdemo-crystal --follow

Note that if you are in the same directory as the service you want to review logs for, you can simply run the command below. If you want to review logs for the service in a particular environment, pass the -e flag with the environment name.

copilot svc logs
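
For example, to follow this service’s logs in the test environment we deployed earlier, from within the service directory:

copilot svc logs -e test --follow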

One last thing to bring up: you aren’t limited to live tailing logs. Type copilot svc logs --help to see the different ways to review logs from the command line.
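
For instance, recent versions of the CLI let you look back over a time window instead of tailing; check --help for the exact flags available in your version:

copilot svc logs --since 1h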

Next steps

We did it! We have successfully deployed a three-tier, polyglot, microservice application to ECS!

Validate deployment configuration

cd ~/environment/ecsdemo-crystal/cdk

Confirm that the cdk can synthesize the assembly CloudFormation templates

cdk synth

Review what the cdk is proposing to build and/or change in the environment

cdk diff

Deploy the crystal backend service

cdk deploy --require-approval never
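
The --require-approval never flag tells the cdk not to pause for interactive confirmation when a deployment includes security-sensitive changes (such as new IAM policies or security group rules); without it, cdk deploy would prompt before applying those changes.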

Code Review

As we mentioned in the platform build, we are defining our deployment configuration via code. Let’s look through the code to better understand how the cdk is deploying this service.

Importing base configuration values from our base platform stack

Because we built the platform in its own stack, there are certain environmental values that we will need to reuse amongst all services being deployed. In this custom construct, we are importing the VPC, ECS Cluster, and Cloud Map namespace from the base platform stack. By wrapping these into a custom construct, we are isolating the platform imports from our service deployment logic.

# aws_cdk (v1) modules used by the constructs below
from aws_cdk import core, aws_ec2, aws_ecs, aws_servicediscovery


class BasePlatform(core.Construct):
    
    def __init__(self, scope: core.Construct, id: str, **kwargs):
        super().__init__(scope, id, **kwargs)

        # The base platform stack is where the VPC was created, so all we need is the name to do a lookup and import it into this stack for use
        self.vpc = aws_ec2.Vpc.from_lookup(
            self, "ECSWorkshopVPC",
            vpc_name='ecsworkshop-base/BaseVPC'
        )
        
        # Importing the service discovery namespace from the base platform stack
        self.sd_namespace = aws_servicediscovery.PrivateDnsNamespace.from_private_dns_namespace_attributes(
            self, "SDNamespace",
            namespace_name=core.Fn.import_value('NSNAME'),
            namespace_arn=core.Fn.import_value('NSARN'),
            namespace_id=core.Fn.import_value('NSID')
        )
        
        # Importing the ECS cluster from the base platform stack
        self.ecs_cluster = aws_ecs.Cluster.from_cluster_attributes(
            self, "ECSCluster",
            cluster_name=core.Fn.import_value('ECSClusterName'),
            security_groups=[],
            vpc=self.vpc,
            default_cloud_map_namespace=self.sd_namespace
        )

        # Importing the security group that allows frontend to communicate with backend services
        self.services_sec_grp = aws_ec2.SecurityGroup.from_security_group_id(
            self, "ServicesSecGrp",
            security_group_id=core.Fn.import_value('ServicesSecGrp')
        )

Crystal backend service deployment code

For the backend service, we simply want to run a container from a docker image, but we still need to deploy it and get it behind a scheduler. To do this on our own, we would need to build a task definition and an ECS service, and register the service with Cloud Map for service discovery. Building these components ourselves would equate to hundreds of lines of CloudFormation, whereas with the higher level constructs that the cdk provides, we are able to build everything in roughly 30 lines of code.

class CrystalService(core.Stack):
    
    def __init__(self, scope: core.Stack, id: str, **kwargs):
        super().__init__(scope, id, **kwargs)

        # Importing our shared values from the base stack construct
        self.base_platform = BasePlatform(self, self.stack_name)

        # The task definition is where we store details about the task that will be scheduled by the service
        self.fargate_task_def = aws_ecs.TaskDefinition(
            self, "TaskDef",
            compatibility=aws_ecs.Compatibility.EC2_AND_FARGATE,
            cpu='256',
            memory_mib='512',
        )
        
        # The container definition defines the container(s) to be run when the task is instantiated
        self.container = self.fargate_task_def.add_container(
            "CrystalServiceContainerDef",
            image=aws_ecs.ContainerImage.from_registry("brentley/ecsdemo-crystal"),
            memory_reservation_mib=512,
            logging=aws_ecs.LogDriver.aws_logs(
                stream_prefix='ecsworkshop-crystal'
            )
        )
        
        # Serve this container on port 3000
        self.container.add_port_mappings(
            aws_ecs.PortMapping(
                container_port=3000
            )
        )

        # Build the service definition to schedule the container in the shared cluster
        self.fargate_service = aws_ecs.FargateService(
            self, "CrystalFargateService",
            task_definition=self.fargate_task_def,
            cluster=self.base_platform.ecs_cluster,
            security_group=self.base_platform.services_sec_grp,
            desired_count=1,
            cloud_map_options=aws_ecs.CloudMapOptions(
                cloud_map_namespace=self.base_platform.sd_namespace,
                name='ecsdemo-crystal'
            )
        )
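
For reference, the stacks above are wired together in the cdk app’s entry point. Below is a minimal sketch of that wiring; the stack name and environment lookup are illustrative assumptions, and the repo’s actual app.py may differ:

#!/usr/bin/env python3
import os

from aws_cdk import core

# BasePlatform and CrystalService are the constructs defined above.

# Vpc.from_lookup requires a concrete account and region, so the stack
# is given an explicit environment from the CLI's default credentials.
_env = core.Environment(
    account=os.environ.get("CDK_DEFAULT_ACCOUNT"),
    region=os.environ.get("CDK_DEFAULT_REGION")
)

app = core.App()
CrystalService(app, "ecsworkshop-crystal", env=_env)
app.synth()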

Review service logs

Review the service logs from the command line:

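One way to do this, assuming the AWS CLI v2 is installed. The log group name is generated by the cdk, so look it up first; the placeholder below is not a real name:

# List log groups to find the one created for the crystal task definition
aws logs describe-log-groups --query "logGroups[].logGroupName" --output table

# Tail it, substituting the log group name found above
aws logs tail <crystal-log-group-name> --follow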

Review the service logs from the console:

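In the CloudWatch console, open Log groups, select the log group created for the crystal task definition, and choose a log stream; the streams are prefixed with ecsworkshop-crystal, matching the stream_prefix configured in the log driver above.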

Scale the service

Manually scaling

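One straightforward approach with the code we reviewed above: change desired_count on the CrystalFargateService from 1 to 3, run cdk diff to confirm that the only change is the service’s desired count, and then run cdk deploy to apply it.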

Autoscaling

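One possible approach uses the Application Auto Scaling support built into the cdk’s Fargate service construct. The sketch below would be added inside CrystalService after the fargate_service is created; the construct id, capacity bounds, 50% CPU target, and cooldowns are illustrative assumptions, not values from the workshop.

# Register the service with Application Auto Scaling (1 to 6 tasks).
self.autoscale = self.fargate_service.auto_scale_task_count(
    min_capacity=1,
    max_capacity=6
)

# Target-tracking policy: add or remove tasks to hold average CPU near 50%.
self.autoscale.scale_on_cpu_utilization(
    "CPUAutoscaling",
    target_utilization_percent=50,
    scale_in_cooldown=core.Duration.seconds(60),
    scale_out_cooldown=core.Duration.seconds(60)
)

As with the manual change, run cdk diff and cdk deploy to apply it.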

Let’s bring up the Crystal Backend API!

Deploy our crystal application:

cd ~/environment/ecsdemo-crystal
envsubst < ecs-params.yml.template > ecs-params.yml

ecs-cli compose --project-name ecsdemo-crystal service up \
    --create-log-groups \
    --private-dns-namespace service \
    --enable-service-discovery \
    --cluster-config container-demo \
    --vpc $vpc

Here, we change directories into our crystal application code directory. The envsubst command templates our ecs-params.yml file with our current environment values. We then launch our crystal service on our ECS cluster (with a default launch type of Fargate).
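
For context, ecs-params.yml.template follows the ECS CLI’s parameter file format with shell-style placeholders that envsubst replaces with environment variable values. A rough sketch of such a template (the placeholder names below are illustrative and may not match the ones in the repo):

version: 1
task_definition:
  task_execution_role: ${ECS_TASK_EXECUTION_ROLE}
  ecs_network_mode: awsvpc
  task_size:
    mem_limit: 0.5GB
    cpu_limit: 256
run_params:
  network_configuration:
    awsvpc_configuration:
      subnets:
        - ${PRIVATE_SUBNET_ONE}
        - ${PRIVATE_SUBNET_TWO}
      security_groups:
        - ${SECURITY_GROUP}
      assign_public_ip: DISABLED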

Note: ecs-cli will take care of creating our private DNS namespace for service discovery, as well as the log group in CloudWatch Logs.

View the running container, and store the task id as an env variable for later use:

ecs-cli compose --project-name ecsdemo-crystal service ps \
    --cluster-config container-demo

task_id=$(ecs-cli compose --project-name ecsdemo-crystal service ps --cluster-config container-demo | awk -F \/ 'FNR == 2 {print $2}')

We should have one task registered.

Check reachability (open url in your browser):

alb_url=$(aws cloudformation describe-stacks --stack-name container-demo-alb --query 'Stacks[0].Outputs[?OutputKey==`ExternalUrl`].OutputValue' --output text)
echo "Open $alb_url in your browser"

This command looks up the URL for our ingress ALB, and outputs it. You should be able to click to open, or copy-paste into your browser.

View logs:

# Referencing task id from above ps command
ecs-cli logs --task-id $task_id \
    --follow --cluster-config container-demo

To view logs, find the task id from the earlier ps command, and use it in this command. You can follow a task’s logs also.

Scale the tasks:

ecs-cli compose --project-name ecsdemo-crystal service scale 3 \
    --cluster-config container-demo
ecs-cli compose --project-name ecsdemo-crystal service ps \
    --cluster-config container-demo

We can see that our containers have now been evenly distributed across all 3 of our availability zones.