Frontend Rails App

Deploy our application, service, and environment

Navigate to the frontend service repo.

cd ~/environment/ecsdemo-frontend

To start, we will initialize our application, and create our first service. In the context of copilot-cli, the application is a group of related services, environments, and pipelines. Run the following command to get started:

copilot init

We will be prompted with a series of questions related to the application, and then our service. Answer the questions as follows:

  • Application name: ecsworkshop
  • Service Type: Load Balanced Web Service
  • What do you want to name this Load Balanced Web Service: ecsdemo-frontend
  • Dockerfile: ./Dockerfile

After you answer the questions, it will begin the process of creating some baseline resources for your application and service. This includes the manifest file for the frontend service, which defines the desired state of your service deployment. For more information on the Load Balanced Web Service manifest, see the copilot documentation.
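
As a point of reference, a generated Load Balanced Web Service manifest typically looks something like the trimmed example below (exact contents vary by copilot version and by the answers you gave):

# copilot/ecsdemo-frontend/manifest.yml (abridged)
name: ecsdemo-frontend
type: Load Balanced Web Service

http:
  path: '/'

image:
  build: ./Dockerfile
  port: 3000

cpu: 256
memory: 512
count: 1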

Next, you will be prompted to deploy a test environment. An environment encompasses all of the resources that are required to support running your containers in ECS. This includes the networking stack (VPC, Subnets, Security Groups, etc), the ECS Cluster, Load Balancers (if required), service discovery namespace (via CloudMap), and more.

Before we proceed, we need to do a couple of things for our application to work as we expect. First, we need to define the backend services’ URLs as environment variables, as this is how the frontend will communicate with them. These URLs are created when the backend services are deployed by copilot, which uses service discovery with AWS Cloud Map. The manifest file is where we can make changes to the deployment configuration for our service. Let’s update the manifest file with the environment variables.

cat << EOF >> copilot/ecsdemo-frontend/manifest.yml
variables:
  CRYSTAL_URL: "http://ecsdemo-crystal.test.ecsworkshop.local:3000/crystal"
  NODEJS_URL: "http://ecsdemo-nodejs.test.ecsworkshop.local:3000"
EOF

Next, the application presents the git hash to show the version of the application that is deployed. All we need to do is run the below command to put the hash into a file for the application to read on startup.

git rev-parse --short=7 HEAD > code_hash.txt

We’re now ready to deploy our environment. Run the following command to get started:

copilot env init --name test --profile default --default-config

This part will take a few minutes because of all of the resources that are being created. This is not an action you run every time you deploy your service; it’s a one-time step to get your environment up and running.

Next, we will deploy our service!

copilot svc deploy

At this point, copilot will build our Dockerfile and deploy the necessary resources for our service to run.


Ok, that’s it! By simply answering a few questions, we have our frontend service deployed to an environment!

Grab the load balancer URL and paste it into your browser.

copilot svc show -n ecsdemo-frontend --json | jq -r .routes[].url

You should see the frontend service up and running. The app may look strange or like it’s not working properly. This is because our service relies on the ability to talk to AWS services that it presently doesn’t have access to. The app should be showing an architectural diagram with the details of which Availability Zones the services are running in. We will address this later in the chapter. Now that we have the frontend service deployed, how do we interact with our environment and service? Let’s dive in and answer those questions.

Interacting with the application

To interact with our application, run the following in the terminal:

copilot app

This will bring up a help message listing the available subcommands for managing applications.

We can see the available commands, so let’s first see what applications we have deployed.

copilot app ls

The output should show one application, and it should be named “ecsworkshop”, which we named when we ran copilot init earlier. When you start managing multiple applications with copilot, this will serve as the single command to get insight into all of them.


Now that we see our application, let’s get a more detailed view into what environments and services our application contains.

copilot app show ecsworkshop


Reviewing the output, we see the environments and services deployed under the application. In a real world scenario, we would want to deploy a production environment that is completely isolated from test. Ideally that would be in another account as well. With this view, we see what accounts and regions our application is deployed to.

Interacting with the environment

Let’s now look deeper into our test environment. To interact with our environments, we will use the copilot env command.


To list the environments, run:

copilot env ls

The response will come back with test, so let’s get more details on the test environment by running:

copilot env show -n test


With this view, we’re able to see all of the services deployed to our application’s test environment. As we add more services, we will see this grow. A couple of neat things to point out here:

  • The tags associated with our environment. The default tags have the application name as well as the environment.
  • The details about the environment, such as account ID, region, and whether the environment is considered production.

Interacting with the frontend service

There is a lot of power in the copilot svc command. Run copilot svc on its own to bring up its help message; there is quite a bit that we can do when interacting with our service.

Let’s look at a couple of the commands:

  • package: The copilot-cli uses CloudFormation to manage the state of the environment and services. If you want to get the CloudFormation template for the service deployment, you can simply run copilot svc package (see the example after this list). This can be especially helpful if you decide to move to CloudFormation to manage your deployments on your own.
  • deploy: To put it simply, this will deploy your service. For local development, this enables one to locally push their service changes up to the desired environment. Of course when it comes time to deploy to production, a proper git workflow integrated with CI/CD would be the best path forward. We will deploy a pipeline later!
  • status: This command will give us a detailed view of the service. This includes health information, task information, as well as active task count with details.
  • logs: Lastly, this is an easy way to view your service logs from the command line.
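
For example, to write the CloudFormation template and parameters for the frontend service in the test environment to a local directory, you could run something like the following (the output directory is just an illustration; run copilot svc package --help to confirm the available flags):

copilot svc package -n ecsdemo-frontend -e test --output-dir ./infrastructure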

Let’s now check the status of the frontend service.

Run:

copilot svc status -n ecsdemo-frontend


We can see that we have one active running task, along with its details.

Scale our task count

One thing we haven’t discussed yet is how to manage and control our service configuration. This is done via the manifest file. The manifest is a declarative yaml template that defines the desired state of our service. It was created automatically when we ran through the setup wizard (running copilot init), and includes details such as the Docker image, port, load balancer requirements, environment variables/secrets, as well as resource allocation. Copilot dynamically populates this file based on the Dockerfile as well as opinionated, sane defaults.

Open the manifest file (./copilot/ecsdemo-frontend/manifest.yml), and change the value of the count key from 1 to 3. This declares that the desired state of our service should change from 1 task to 3. Feel free to explore the rest of the manifest file to familiarize yourself with it.

# Number of tasks that should be running in your service.
count: 3

Once you are done and save the changes, run the following:

copilot svc deploy

Copilot does the following with this command:

  • Builds your image locally
  • Pushes it to your service’s ECR repository
  • Converts your manifest file to CloudFormation
  • Packages any additional infrastructure into CloudFormation
  • Deploys your updated service and resources via CloudFormation

To confirm the deploy, let’s first check our service details via the copilot-cli:

copilot svc status -n ecsdemo-frontend

You should now see three tasks running! Now go back to the load balancer URL, and you should see the service showing different IP addresses based on which frontend task responds to the request. Note that it’s still not showing the full diagram; we’re going to fix this shortly.

Review the service logs

The services we deploy via copilot automatically ship logs to CloudWatch Logs by default. Rather than navigating to and reviewing logs in the console, we can use the copilot-cli to see those logs locally. Let’s tail the logs for the frontend service.

copilot svc logs -a ecsworkshop -n ecsdemo-frontend --follow

Note that if you are in the directory of the service whose logs you want to review, simply type the below command. Of course, if you wanted to review logs for a service in a particular environment, you would pass the -e flag with the environment name.

copilot svc logs

One last thing: you aren’t limited to live tailing logs. Type copilot svc logs --help to see the different ways to review logs from the command line.
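
For example, to review the last hour of logs without tailing (the time window and limit here are arbitrary illustrations; confirm the flags with copilot svc logs --help):

copilot svc logs -n ecsdemo-frontend -e test --since 1h --limit 100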

Create a CI/CD Pipeline

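If you want to try it, a minimal flow looks roughly like the sketch below. The commit message is illustrative, and older copilot releases use copilot pipeline update in place of copilot pipeline deploy:

copilot pipeline init
git add copilot/ && git commit -m "add copilot pipeline" && git push
copilot pipeline deploy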

Next steps

We have officially completed deploying our frontend. In the next section, we will extend our application by adding two backend services.

Install awslogs and siege

pip3 install --user awslogs
sudo yum install -y siege
cd ~/environment/ecsdemo-frontend/cdk
pip install -r requirements.txt --user

Confirm that the cdk can synthesize the assembly CloudFormation templates

cdk synth

Review what the cdk is proposing to build and/or change in the environment

cdk diff

Deploy the frontend web service

cdk deploy --require-approval never
  • Once the deployment is complete, there will be two outputs. Look for the frontend URL output, and open that link in a new tab. At this point you should see the frontend website up and running.


Code Review

As we mentioned in the platform build, we are defining our deployment configuration via code. Let’s look through the code to better understand how the cdk deploys our service.

Importing base configuration values from our base platform stack

Because we built the platform in its own stack, there are certain environmental values that we will need to reuse amongst all services being deployed. In this custom construct, we are importing the VPC, ECS Cluster, and Cloud Map namespace from the base platform stack. By wrapping these into a custom construct, we are isolating the platform imports from our service deployment logic.

# Imports this construct relies on (assuming the CDK v2 Python module layout)
import aws_cdk as cdk
from aws_cdk import aws_ec2 as ec2, aws_ecs as ecs, aws_servicediscovery as servicediscovery
from constructs import Construct


class BasePlatform(Construct):
    
    def __init__(self, scope: Construct, id: str, **kwargs):
        super().__init__(scope, id, **kwargs)
        environment_name = 'ecsworkshop'

        # The base platform stack is where the VPC was created, so all we need is the name to do a lookup and import it into this stack for use
        self.vpc = ec2.Vpc.from_lookup(
            self, "VPC",
            vpc_name='{}-base/BaseVPC'.format(environment_name)
        )

        self.sd_namespace = servicediscovery.PrivateDnsNamespace.from_private_dns_namespace_attributes(
            self, "SDNamespace",
            namespace_name=cdk.Fn.import_value('NSNAME'),
            namespace_arn=cdk.Fn.import_value('NSARN'),
            namespace_id=cdk.Fn.import_value('NSID')
        )

        self.ecs_cluster = ecs.Cluster.from_cluster_attributes(
            self, "ECSCluster",
            cluster_name=cdk.Fn.import_value('ECSClusterName'),
            security_groups=[],
            vpc=self.vpc,
            default_cloud_map_namespace=self.sd_namespace
        )
        
        self.services_sec_grp = ec2.SecurityGroup.from_security_group_id(
            self, "ServicesSecGrp",
            security_group_id=cdk.Fn.import_value('ServicesSecGrp')
        )

Frontend service deployment code

For the frontend service, there are quite a few components that have to be built to serve it up as a frontend service. Those components are an Application Load Balancer, Target Group, ECS Task Definition, and an ECS Service. To build these components on our own would equate to hundreds of lines of CloudFormation, whereas with the higher level constructs that the cdk provides, we are able to build everything with 18 lines of code.

# Imports this stack relies on (assuming the CDK v2 Python module layout);
# BasePlatform is the custom construct shown above
import os

from aws_cdk import Stack, aws_ec2 as ec2, aws_ecs as ecs, aws_ecs_patterns as ecs_patterns


class FrontendService(Stack):

    def __init__(self, scope: Stack, id: str, **kwargs):
        super().__init__(scope, id, **kwargs)

        self.base_platform = BasePlatform(self, "BasePlatform")

        self.fargate_task_image = ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
            image=ecs.ContainerImage.from_registry("public.ecr.aws/aws-containers/ecsdemo-frontend"),
            container_port=3000,
            environment={
                "CRYSTAL_URL": "http://ecsdemo-crystal.service.local:3000/crystal",
                "NODEJS_URL": "http://ecsdemo-nodejs.service.local:3000",
                "REGION": os.getenv('AWS_DEFAULT_REGION')
            },
        )

        # This high level construct will build everything required to ensure our container is load balanced and running as an ECS service
        self.fargate_load_balanced_service = ecs_patterns.ApplicationLoadBalancedFargateService(
            self, "FrontendFargateLBService",
            service_name='ecsdemo-frontend',
            cluster=self.base_platform.ecs_cluster,
            cpu=256,
            memory_limit_mib=512,
            desired_count=1,
            public_load_balancer=True,
            cloud_map_options=ecs.CloudMapOptions(
                cloud_map_namespace=self.base_platform.sd_namespace
                ),
            task_image_options=self.fargate_task_image
        )

        # Utilizing the connections method to connect the frontend service security group to the backend security group
        self.fargate_load_balanced_service.service.connections.allow_to(
            self.base_platform.services_sec_grp,
            port_range=ec2.Port(protocol=ec2.Protocol.TCP, string_representation="frontendtobackend", from_port=3000, to_port=3000)
        )

Review service logs

Review the service logs from the command line:

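One way to do this from the terminal is with the awslogs tool installed earlier. This is a sketch rather than an official solution; replace the log group placeholder below with the group created for the frontend service, which you can find by listing the groups first:

awslogs groups | grep -i ecsdemo-frontend
awslogs get <your-log-group-name> ALL --watch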

Review the service logs from the console:


Scale the service

Manually scaling

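One way to do this manually, assuming the FrontendService stack shown above, is to change the desired_count parameter and redeploy with the cdk:

# In the FrontendService stack, change the desired task count, e.g. desired_count=3
cd ~/environment/ecsdemo-frontend/cdk
cdk diff
cdk deploy --require-approval never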

Autoscaling

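Below is a minimal sketch of target-tracking autoscaling added to the FrontendService stack above; the capacity range and CPU target are illustrative assumptions rather than the workshop’s exact values:

        # Scale the service between 1 and 6 tasks, targeting 50% average CPU utilization
        self.autoscale = self.fargate_load_balanced_service.service.auto_scale_task_count(
            min_capacity=1,
            max_capacity=6
        )

        self.autoscale.scale_on_cpu_utilization(
            "CPUAutoscaling",
            target_utilization_percent=50
        )

After redeploying with cdk deploy, you can use the siege tool installed earlier to generate load against the frontend URL and watch additional tasks come online.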