Frontend Rails App

Deploy our application, service, and environment

Navigate to the frontend service repo.

cd ~/environment/ecsdemo-frontend

To start, we will initialize our application, and create our first service. In the context of copilot-cli, the application is a group of related services, environments, and pipelines. Run the following command to get started:

copilot init

We will be prompted with a series of questions related to the application, and then our service. Answer the questions as follows:

  • Application name: ecsworkshop
  • Service Type: Load Balanced Web Service
  • What do you want to name this Load Balanced Web Service: ecsdemo-frontend
  • Dockerfile: ./Dockerfile

After you answer the questions, it will begin the process of creating some baseline resources for your application and service. This includes the manifest file for the frontend service, which defines the desired state of your service deployment. For more information on the Load Balanced Web Service manifest, see the copilot documentation.

Next, you will be prompted to deploy a test environment. An environment encompasses all of the resources that are required to support running your containers in ECS. This includes the networking stack (VPC, Subnets, Security Groups, etc), the ECS Cluster, Load Balancers (if required), service discovery namespace (via CloudMap), and more.

Type “y”, and hit enter. This part will take a few minutes because of all of the resources being created. This is not an action you run every time you deploy your service; it is a one-time step to get your environment up and running.

Below is an example of what the CLI interaction will look like:

(image: copilot deployment output)

Ok, that’s it! With one command and answering a few questions, we have our frontend service deployed to an environment!

Grab the load balancer URL and paste it into your browser.

copilot svc show -n ecsdemo-frontend --json | jq -r .routes[].url
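The jq filter above pulls every route URL out of the JSON that `copilot svc show --json` prints. If you prefer to do the same extraction in Python, here is a minimal sketch of the equivalent of `.routes[].url`; the sample payload is a hypothetical illustration shaped like the fields the filter relies on, not real copilot output:

```python
import json

def route_urls(svc_show_json: str) -> list:
    """Extract every route URL from `copilot svc show --json` output,
    mirroring the jq filter `.routes[].url`."""
    doc = json.loads(svc_show_json)
    return [route["url"] for route in doc.get("routes", [])]

# Hypothetical sample payload for illustration only.
sample = '{"routes": [{"environment": "test", "url": "http://example-alb.us-east-1.elb.amazonaws.com"}]}'
print(route_urls(sample))  # ['http://example-alb.us-east-1.elb.amazonaws.com']
```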

You should see the frontend service up and running. The app may look strange or appear broken. This is because our service relies on AWS services that it presently doesn't have access to. The app should show an architectural diagram with details of which Availability Zones the services are running in. We will address this fix later in the chapter. Now that we have the frontend service deployed, how do we interact with our environment and service? Let's dive in and answer those questions.

Interacting with the application

To interact with our application, run the following in the terminal:

copilot app

This will bring up a help message that looks like the below image.

(image: copilot app help output)

We can see the available commands, so let’s first see what applications we have deployed.

copilot app ls

The output should show one application named “ecsworkshop”, which we chose when we ran copilot init earlier. When you start managing multiple applications with copilot, this will serve as a single command to get insight into all of them.

(image: copilot app ls output)

Now that we see our application, let’s get a more detailed view into what environments and services our application contains.

copilot app show ecsworkshop

The result should look like this:

(image: copilot app show output)

Reviewing the output, we see the environments and services deployed under the application. In a real-world scenario, we would want to deploy a production environment that is completely isolated from test, ideally in a separate account as well. With this view, we can see which accounts and regions our application is deployed to.

Interacting with the environment

Let’s now look deeper into our test environment. To interact with our environments, we will use the copilot env command.

(image: copilot env help output)

To list the environments, run:

copilot env ls

The response will come back with test, so let’s get more details on the test environment by running:

copilot env show -n test

(image: copilot env show output)

With this view, we’re able to see all of the services deployed to our application’s test environment. As we add more services, we will see this grow. A couple of neat things to point out here:

  • The tags associated with our environment. The default tags include the application name as well as the environment name.
  • The details about the environment, such as account ID, region, and whether the environment is considered production.

Interacting with the frontend service

To see the available commands for interacting with our service, run:

copilot svc

(image: copilot svc help output)

There is a lot of power with the copilot svc command. As you can see from the above image, there is quite a bit that we can do when interacting with our service.

Let’s look at a couple of the commands:

  • package: The copilot-cli uses CloudFormation to manage the state of the environment and services. If you want to get the CloudFormation template for the service deployment, you can simply run copilot svc package. This can be especially helpful if you decide to move to CloudFormation to manage your deployments on your own.
  • deploy: To put it simply, this will deploy your service. For local development, this enables one to locally push their service changes up to the desired environment. Of course when it comes time to deploy to production, a proper git workflow integrated with CI/CD would be the best path forward. We will deploy a pipeline later!
  • status: This command will give us a detailed view of the service. This includes health information and task information, as well as the active task count with details.
  • logs: Lastly, this is an easy way to view your service logs from the command line.

Let’s now check the status of the frontend service.

Run:

copilot svc status -n ecsdemo-frontend

(image: copilot svc status output)

We can see that we have one task running, along with its details.

Scale our task count

One thing we haven't discussed yet is how to manage and control our service configuration. This is done via the manifest file. The manifest is a declarative YAML template that defines the desired state of our service. It was created automatically when we ran through the setup wizard (running copilot init), and includes details such as the Docker image, port, load balancer requirements, environment variables/secrets, as well as resource allocation. Copilot populates this file dynamically based on the Dockerfile as well as opinionated, sane defaults.

Open the manifest file (./copilot/ecsdemo-frontend/manifest.yml), and change the value of the count key from 1 to 3. This declares that the desired state of the service should change from 1 task to 3. Feel free to explore the manifest file to familiarize yourself.

# Number of tasks that should be running in your service.
count: 3
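If you would rather script the change than edit the file by hand, here is a minimal Python sketch that rewrites the top-level count value. The regex-based edit is an illustration, not part of copilot itself:

```python
import re

def set_count(manifest_text: str, count: int) -> str:
    """Rewrite the top-level `count:` value in a copilot manifest."""
    # (?m) makes ^ match at each line start, so only a line beginning
    # with "count:" is rewritten.
    return re.sub(r"(?m)^count:\s*\d+", f"count: {count}", manifest_text)

manifest = "name: ecsdemo-frontend\ncount: 1\ncpu: 256\n"
print(set_count(manifest, 3))
```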

Once you are done and save the changes, run the following:

copilot svc deploy

Copilot does the following with this command:

  • Builds your image locally
  • Pushes it to your service's ECR repository
  • Converts your manifest file to CloudFormation
  • Packages any additional infrastructure into CloudFormation
  • Deploys your updated service and resources via CloudFormation

To confirm the deploy, let’s first check our service details via the copilot-cli:

copilot svc status -n ecsdemo-frontend

You should now see three tasks running! Now go back to the load balancer URL, and you should see the service showing different IP addresses based on which frontend task responds to the request. Note, it's still not showing the full diagram; we're going to fix this shortly.

Review the service logs

The services we deploy via copilot automatically ship logs to CloudWatch Logs by default. Rather than navigating and reviewing logs via the console, we can use the copilot CLI to see those logs locally. Let's tail the logs for the frontend service.

copilot svc logs -a ecsworkshop -n ecsdemo-frontend --follow

Note that if you are in the same directory as the service whose logs you want to review, you can simply run the command below. If you want to review logs for a service in a particular environment, pass the -e flag with the environment name.

copilot svc logs

One last thing to bring up: you aren't limited to live tailing logs. Type copilot svc logs --help to see the different ways to review logs from the command line.

Create a CI/CD Pipeline


Next steps

We have officially completed deploying our frontend. In the next section, we will extend our application by adding two backend services.

Validate deployment configuration

cd ~/environment/ecsdemo-frontend/cdk

Confirm that the cdk can synthesize the assembly CloudFormation templates:

cdk synth

Review what the cdk is proposing to build and/or change in the environment:

cdk diff

Deploy the frontend web service:

cdk deploy --require-approval never

Once the deployment is complete, there will be two outputs. Look for the frontend URL output, and open that link in a new tab. At this point you should see the frontend website up and running. Below is an example output:

(image: frontend URL output)

Code Review

As we mentioned in the platform build, we are defining our deployment configuration via code. Let’s look through the code to better understand how cdk is deploying.

Importing base configuration values from our base platform stack

Because we built the platform in its own stack, there are certain environmental values that we will need to reuse amongst all services being deployed. In this custom construct, we are importing the VPC, ECS Cluster, and Cloud Map namespace from the base platform stack. By wrapping these into a custom construct, we are isolating the platform imports from our service deployment logic.

class BasePlatform(core.Construct):
    
    def __init__(self, scope: core.Construct, id: str, **kwargs):
        super().__init__(scope, id, **kwargs)

        # The base platform stack is where the VPC was created, so all we need is the name to do a lookup and import it into this stack for use
        self.vpc = aws_ec2.Vpc.from_lookup(
            self, "ECSWorkshopVPC",
            vpc_name='ecsworkshop-base/BaseVPC'
        )
        
        # Importing the service discovery namespace from the base platform stack
        self.sd_namespace = aws_servicediscovery.PrivateDnsNamespace.from_private_dns_namespace_attributes(
            self, "SDNamespace",
            namespace_name=core.Fn.import_value('NSNAME'),
            namespace_arn=core.Fn.import_value('NSARN'),
            namespace_id=core.Fn.import_value('NSID')
        )
        
        # Importing the ECS cluster from the base platform stack
        self.ecs_cluster = aws_ecs.Cluster.from_cluster_attributes(
            self, "ECSCluster",
            cluster_name=core.Fn.import_value('ECSClusterName'),
            security_groups=[],
            vpc=self.vpc,
            default_cloud_map_namespace=self.sd_namespace
        )

        # Importing the security group that allows frontend to communicate with backend services
        self.services_sec_grp = aws_ec2.SecurityGroup.from_security_group_id(
            self, "ServicesSecGrp",
            security_group_id=core.Fn.import_value('ServicesSecGrp')
        )

Frontend service deployment code

For the frontend service, there are quite a few components that have to be built to serve it up as a frontend service. Those components are an Application Load Balancer, Target Group, ECS Task Definition, and an ECS Service. To build these components on our own would equate to hundreds of lines of CloudFormation, whereas with the higher level constructs that the cdk provides, we are able to build everything with 18 lines of code.

class FrontendService(core.Stack):
    
    def __init__(self, scope: core.Stack, id: str, **kwargs):
        super().__init__(scope, id, **kwargs)

        self.base_platform = BasePlatform(self, self.stack_name)

        # This defines some of the components required for the docker container to run
        self.fargate_task_image = aws_ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
            image=aws_ecs.ContainerImage.from_registry("brentley/ecsdemo-frontend"),
            container_port=3000,
            environment={
                "CRYSTAL_URL": "http://ecsdemo-crystal.service:3000/crystal",
                "NODEJS_URL": "http://ecsdemo-nodejs.service:3000"
            },
        )

        # This high level construct will build everything required to ensure our container is load balanced and running as an ECS service
        self.fargate_load_balanced_service = aws_ecs_patterns.ApplicationLoadBalancedFargateService(
            self, "FrontendFargateLBService",
            cluster=self.base_platform.ecs_cluster,
            cpu=256,
            memory_limit_mib=512,
            desired_count=1,
            public_load_balancer=True,
            cloud_map_options=self.base_platform.sd_namespace,
            task_image_options=self.fargate_task_image
        )

        # Utilizing the connections method to connect the frontend service security group to the backend security group
        self.fargate_load_balanced_service.service.connections.allow_to(
            self.base_platform.services_sec_grp,
            port_range=aws_ec2.Port(protocol=aws_ec2.Protocol.TCP, string_representation="frontendtobackend", from_port=3000, to_port=3000)
        )

Review service logs

Review the service logs from the command line:


Review the service logs from the console:


Scale the service

Manually scaling


Autoscaling


Set environment variables from our build

export clustername=$(aws cloudformation describe-stacks --stack-name container-demo --query 'Stacks[0].Outputs[?OutputKey==`ClusterName`].OutputValue' --output text)
export target_group_arn=$(aws cloudformation describe-stack-resources --stack-name container-demo-alb | jq -r '.[][] | select(.ResourceType=="AWS::ElasticLoadBalancingV2::TargetGroup").PhysicalResourceId')
export vpc=$(aws cloudformation describe-stacks --stack-name container-demo --query 'Stacks[0].Outputs[?OutputKey==`VpcId`].OutputValue' --output text)
export ecsTaskExecutionRole=$(aws cloudformation describe-stacks --stack-name container-demo --query 'Stacks[0].Outputs[?OutputKey==`ECSTaskExecutionRole`].OutputValue' --output text)
export subnet_1=$(aws cloudformation describe-stacks --stack-name container-demo --query 'Stacks[0].Outputs[?OutputKey==`PrivateSubnetOne`].OutputValue' --output text)
export subnet_2=$(aws cloudformation describe-stacks --stack-name container-demo --query 'Stacks[0].Outputs[?OutputKey==`PrivateSubnetTwo`].OutputValue' --output text)
export subnet_3=$(aws cloudformation describe-stacks --stack-name container-demo --query 'Stacks[0].Outputs[?OutputKey==`PrivateSubnetThree`].OutputValue' --output text)
export security_group=$(aws cloudformation describe-stacks --stack-name container-demo --query 'Stacks[0].Outputs[?OutputKey==`ContainerSecurityGroup`].OutputValue' --output text)
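Each export above follows the same pattern: look up a single stack output by its key. Here is a hedged Python sketch of that lookup logic, mirroring the JMESPath query `Stacks[0].Outputs[?OutputKey==\`<key>\`].OutputValue`; the response dict is a hypothetical sample shaped like a describe-stacks result, not live AWS data:

```python
def stack_output(describe_stacks_response: dict, output_key: str) -> str:
    """Find a stack output value by key, as the --query expressions above do."""
    outputs = describe_stacks_response["Stacks"][0]["Outputs"]
    for output in outputs:
        if output["OutputKey"] == output_key:
            return output["OutputValue"]
    raise KeyError(output_key)

# Hypothetical describe-stacks response for illustration only.
response = {"Stacks": [{"Outputs": [
    {"OutputKey": "ClusterName", "OutputValue": "container-demo-cluster"},
    {"OutputKey": "VpcId", "OutputValue": "vpc-0123456789abcdef0"},
]}]}
print(stack_output(response, "ClusterName"))  # container-demo-cluster
```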

cd ~/environment

Configure ecs-cli to talk to your cluster:

ecs-cli configure --region $AWS_REGION --cluster $clustername --default-launch-type FARGATE --config-name container-demo

We set a default region so we can reference it when we run our commands.

Authorize traffic:

aws ec2 authorize-security-group-ingress --group-id "$security_group" --protocol tcp --port 3000 --source-group "$security_group"

We know that our containers talk on port 3000, so we authorize that traffic on our security group.

Deploy our frontend application:

cd ~/environment/ecsdemo-frontend
envsubst < ecs-params.yml.template >ecs-params.yml

ecs-cli compose --project-name ecsdemo-frontend service up \
    --create-log-groups \
    --target-group-arn $target_group_arn \
    --private-dns-namespace service \
    --enable-service-discovery \
    --container-name ecsdemo-frontend \
    --container-port 3000 \
    --cluster-config container-demo \
    --vpc $vpc
    

Here, we change directories into our frontend application code directory. The envsubst command templates our ecs-params.yml file with our current values. We then launch our frontend service on our ECS cluster (with a default launch type of Fargate).
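The envsubst templating step can be approximated in a few lines of Python: replace each $VAR or ${VAR} reference with the matching environment value. This is a minimal sketch of that substitution, run against a hypothetical template fragment rather than the real ecs-params.yml.template:

```python
import os
import re

def envsubst(template: str, env=os.environ) -> str:
    """Replace $VAR and ${VAR} references with values from the environment,
    approximating what the envsubst command does."""
    pattern = re.compile(r"\$\{(\w+)\}|\$(\w+)")
    def replace(match):
        name = match.group(1) or match.group(2)
        return env.get(name, "")  # envsubst substitutes unset vars as empty
    return pattern.sub(replace, template)

# Hypothetical template fragment for illustration only.
template = "subnets:\n  - ${subnet_1}\n  - ${subnet_2}\n"
env = {"subnet_1": "subnet-aaa", "subnet_2": "subnet-bbb"}
print(envsubst(template, env))
```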

Note: ecs-cli will take care of creating our private DNS namespace for service discovery, as well as the log group in CloudWatch Logs.

View the running container, and store the task ID as an environment variable for later use:

ecs-cli compose --project-name ecsdemo-frontend service ps \
    --cluster-config container-demo

task_id=$(ecs-cli compose --project-name ecsdemo-frontend service ps --cluster-config container-demo | awk -F \/ 'FNR == 2 {print $2}')
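The awk one-liner above takes the second line of the ps output and, splitting on `/`, keeps the second field: the task ID. Here is a hedged Python equivalent, run against a hypothetical sample of that output (the exact column layout of ecs-cli's ps output is assumed for illustration):

```python
def task_id_from_ps(ps_output: str) -> str:
    """Mimic `awk -F / 'FNR == 2 {print $2}'`: take the second line of the
    ps output and return the second /-separated field (the task ID)."""
    second_line = ps_output.splitlines()[1]
    return second_line.split("/")[1]

# Hypothetical ps output: a header row followed by one task row.
sample = (
    "Name                                          State    Ports  TaskDefinition\n"
    "container-demo/1234abcd5678/ecsdemo-frontend  RUNNING  ...    ecsdemo-frontend:1\n"
)
print(task_id_from_ps(sample))  # 1234abcd5678
```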

We should have one task registered.

Check reachability (open the URL in your browser):

alb_url=$(aws cloudformation describe-stacks --stack-name container-demo-alb --query 'Stacks[0].Outputs[?OutputKey==`ExternalUrl`].OutputValue' --output text)
echo "Open $alb_url in your browser"

This command looks up the URL for our ingress ALB, and outputs it. You should be able to click to open, or copy-paste into your browser.

View logs:

# Referencing task id from above ps command
ecs-cli logs --task-id $task_id \
    --follow --cluster-config container-demo

To view logs, find the task ID from the earlier ps command, and use it in this command. You can also follow a task's logs as they are written.

Scale the tasks:

ecs-cli compose --project-name ecsdemo-frontend service scale 3 \
    --cluster-config container-demo
ecs-cli compose --project-name ecsdemo-frontend service ps \
    --cluster-config container-demo

We can see that our containers have now been evenly distributed across all 3 of our availability zones.

Set environment variables from our build

export clustername=$(aws cloudformation describe-stacks --stack-name container-demo --query 'Stacks[0].Outputs[?OutputKey==`ClusterName`].OutputValue' --output text)
export target_group_arn=$(aws cloudformation describe-stack-resources --stack-name container-demo-alb | jq -r '.[][] | select(.ResourceType=="AWS::ElasticLoadBalancingV2::TargetGroup").PhysicalResourceId')
export vpc=$(aws cloudformation describe-stacks --stack-name container-demo --query 'Stacks[0].Outputs[?OutputKey==`VpcId`].OutputValue' --output text)
export ecsTaskExecutionRole=$(aws cloudformation describe-stacks --stack-name container-demo --query 'Stacks[0].Outputs[?OutputKey==`ECSTaskExecutionRole`].OutputValue' --output text)
export subnet_1=$(aws cloudformation describe-stacks --stack-name container-demo --query 'Stacks[0].Outputs[?OutputKey==`PrivateSubnetOne`].OutputValue' --output text)
export subnet_2=$(aws cloudformation describe-stacks --stack-name container-demo --query 'Stacks[0].Outputs[?OutputKey==`PrivateSubnetTwo`].OutputValue' --output text)
export subnet_3=$(aws cloudformation describe-stacks --stack-name container-demo --query 'Stacks[0].Outputs[?OutputKey==`PrivateSubnetThree`].OutputValue' --output text)
export security_group=$(aws cloudformation describe-stacks --stack-name container-demo --query 'Stacks[0].Outputs[?OutputKey==`ContainerSecurityGroup`].OutputValue' --output text)

cd ~/environment

Configure ecs-cli to talk to your cluster:

ecs-cli configure --region $AWS_REGION --cluster $clustername --default-launch-type EC2 --config-name container-demo

We set a default region so we can reference it when we run our commands.

Authorize traffic:

aws ec2 authorize-security-group-ingress --group-id "$security_group" --protocol tcp --port 3000 --source-group "$security_group"

We know that our containers talk on port 3000, so we authorize that traffic on our security group.

Deploy our frontend application:

cd ~/environment/ecsdemo-frontend
envsubst < ecs-params.yml.template >ecs-params.yml

ecs-cli compose --project-name ecsdemo-frontend service up \
    --create-log-groups \
    --target-group-arn $target_group_arn \
    --private-dns-namespace service \
    --enable-service-discovery \
    --container-name ecsdemo-frontend \
    --container-port 3000 \
    --cluster-config container-demo \
    --vpc $vpc
    

Here, we change directories into our frontend application code directory. The envsubst command templates our ecs-params.yml file with our current values. We then launch our frontend service on our ECS cluster (with a default launch type of EC2).

Note: ecs-cli will take care of creating our private DNS namespace for service discovery, as well as the log group in CloudWatch Logs.

View the running container, and store the task ID as an environment variable for later use:

ecs-cli compose --project-name ecsdemo-frontend service ps \
    --cluster-config container-demo

task_id=$(ecs-cli compose --project-name ecsdemo-frontend service ps --cluster-config container-demo | awk -F \/ 'FNR == 2 {print $2}')

We should have one task registered.

Check reachability (open the URL in your browser):

alb_url=$(aws cloudformation describe-stacks --stack-name container-demo-alb --query 'Stacks[0].Outputs[?OutputKey==`ExternalUrl`].OutputValue' --output text)
echo "Open $alb_url in your browser"

This command looks up the URL for our ingress ALB, and outputs it. You should be able to click to open, or copy-paste into your browser.

View logs:

# Referencing task id from above ps command
ecs-cli logs --task-id $task_id \
    --follow --cluster-config container-demo

To view logs, find the task ID from the earlier ps command, and use it in this command. You can also follow a task's logs as they are written.

Scale the tasks:

ecs-cli compose --project-name ecsdemo-frontend service scale 3 \
    --cluster-config container-demo
ecs-cli compose --project-name ecsdemo-frontend service ps \
    --cluster-config container-demo

We can see that our containers have now been evenly distributed across all 3 of our availability zones.