AWS Fargate Tutorial (CloudShell)

Configuring Fargate to deploy a containerized application using CloudShell.

Published 2025-01-12
Estimated completion time: 60 minutes

This tutorial guides you through deploying a containerized application using AWS Fargate, with no local Docker installation required. We'll use AWS CloudShell, which saves us the time of installing Docker and the AWS CLI on our local machine and is a really useful tool for quickly experimenting with AWS services.

All you will need is a web browser logged into your AWS account.

Low Cost Expectation, with Caution
Following this tutorial is expected to incur minimal costs, between $0.00 and $0.10 USD or equivalent in your local currency. This estimate assumes that the tutorial is fully completed and all resources are cleaned up on the same day.
Leaving resources running can become expensive, especially if larger resources are configured than are needed. Accuracy of this information is not guaranteed. You should verify the costs yourself before proceeding and if you have any doubts, do not follow this tutorial on the live system.

Part 1: Prepare the Environment

1.1. Sign in and open CloudShell:

As we are using the web-based CloudShell, we just need to log into AWS via the regular AWS Management Console.

  • Sign in to AWS Management Console
  • Click the CloudShell icon (>_) in the top navigation bar
  • Wait for the environment to initialize
  • You'll see a welcome message when ready

1.2. Check the needed commands are available.

Our CloudShell environment should have these already installed, but let's check both commands are available:

$ docker --version
$ aws --version

1.3. Create Working Directory:

To keep things tidy, and make it easier to clean up after we are done, let's create a working directory:

$ mkdir gn-fargate-tutorial
$ cd gn-fargate-tutorial

CloudShell has an inactivity timeout, so if you leave it idle for a while you may need to log back in and return to this directory. You may also need to re-run the commands that set environment variables later in the tutorial.
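
To make reconnecting easier, one optional approach (just a sketch; gn-env.sh is a file name we've invented for this) is to append each export command to a small script in your home directory, which persists between CloudShell sessions, and re-source it after reconnecting:

$ echo "export REPO_URI=$REPO_URI" >> ~/gn-env.sh  # repeat for each variable as you set it
$ source ~/gn-env.sh                               # run after reconnecting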

Part 2: Create Our Application

2.1. Create our Python source code file.

We will create a small Python application to be served from a container. You don't need to know Python to follow along. The code starts Python's built-in HTTP server and responds to any standard GET request with "Hello from your python app!".

Copy and paste this whole code block into your CloudShell environment:

  • Create app.py:
$ cat > app.py << 'EOL'
from http.server import HTTPServer, BaseHTTPRequestHandler

class SimpleHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/plain')
        self.end_headers()
        self.wfile.write(b'Hello from your python app!')

def run(server_class=HTTPServer, handler_class=SimpleHandler):
    # Listen on all interfaces (0.0.0.0) for container compatibility
    server_address = ('', 3000)
    httpd = server_class(server_address, handler_class)
    print('Server running at http://localhost:3000/')
    httpd.serve_forever()

if __name__ == '__main__':
    run()
EOL

Check the file has been created by displaying its contents:

$ cat app.py
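
If you'd like to try the script directly before containerizing it (entirely optional), CloudShell has Python 3 pre-installed:

$ python3 app.py &              # start the server in the background
$ curl http://localhost:3000    # should print: Hello from your python app!
$ kill %1                       # stop the background server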

2.2. Create our Dockerfile.

Now we'll do the same to create our Dockerfile:

$ cat > Dockerfile << 'EOL'
FROM python:3.9-slim
WORKDIR /app
COPY app.py .
EXPOSE 3000
CMD ["python", "app.py"]
EOL

And check the file has been created:

$ cat Dockerfile

This gives us everything we need to build a self-contained package that runs our application. If you are not familiar with Dockerfiles, this one is pretty simple:

   FROM python:3.9-slim

We've specified that we want to use a lightweight (slim) Python 3.9 base image.

   WORKDIR /app
   COPY app.py .

We've instructed the build tool to set the working directory to /app (the directory inside the container where subsequent commands run and where our application will live), and then to copy our web-server Python script into the container.

   EXPOSE 3000

We've exposed port 3000 so clients can communicate with the application (EXPOSE is documentation; the actual port publishing happens when the container is run).

   CMD ["python", "app.py"]

Finally we've set the command to actually run the script/start the server.

2.3. Build Docker Image:

Now we can build our Docker image. We'll also tag it with the name gn-fargate-tutorial for easier identification:

$ docker build -t gn-fargate-tutorial .
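
You can confirm the image was built by listing it:

$ docker images gn-fargate-tutorial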

2.4. Run and test the Application locally

It's always a good idea to test the application locally before deploying it to AWS, so let's run it in the background:

$ docker run -d -p 3000:3000 --name test-container gn-fargate-tutorial

As we are running it in the CloudShell environment, we won't be able to access it directly from our local web browser, but we can use curl on the command line to test it:

$ curl http://localhost:3000

You should see the output: Hello from your python app!

Ok, all looks good. Let's quickly clean up our running test container so we don't leave any lingering resources; we can always re-run the docker run command to start it again later if we need to:

$ docker stop test-container
$ docker rm test-container

Part 3: Fargate Preparations

Now we've tested our application locally, we need to work on getting it deployed to AWS Fargate.

Fargate will need access to our Docker image in order to deploy it, and it does that by pulling it from a container registry. AWS provides a container registry service called Amazon ECR (Elastic Container Registry) so we'll use that.

We'll start by creating a new repository, giving it the same name as our Docker image to help us to identify it again later.

$ aws ecr create-repository --repository-name gn-fargate-tutorial

Using the CloudShell environment we don't have to worry about setting up any environment variables for credentials as this is all configured automatically.

The container registry (ECR) is region specific, and has been created in the same region as our CloudShell environment.
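
CloudShell sets the AWS_REGION environment variable, so you can confirm which region that is with:

$ echo $AWS_REGION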

Take a look at the output of the command, it will provide you with the URL of the repository along with other information:

# Example output:
{
    "repository": {
        "repositoryArn": "arn:aws:ecr:eu-west-2:************:repository/gn-fargate-tutorial",
        "registryId": "************",
        "repositoryName": "gn-fargate-tutorial",
        "repositoryUri": "************.dkr.ecr.eu-west-2.amazonaws.com/gn-fargate-tutorial",
        "createdAt": "2024-12-15T10:39:41.974000+00:00",
        "imageTagMutability": "MUTABLE",
        "imageScanningConfiguration": {
            "scanOnPush": false
        },
        "encryptionConfiguration": {
            "encryptionType": "AES256"
        }
    }
}

You can retrieve this information again later using:

$ aws ecr describe-repositories --repository-names gn-fargate-tutorial

Or to view all repositories:

$ aws ecr describe-repositories

You can see the repositoryUri has the following format:

<aws_account>.dkr.ecr.<aws_region>.amazonaws.com/<repository_name>

In order to push our Docker image to the ECR repository, we first need to tag it with this repository URI along with the tag name.

Let's first capture the URI into a variable to avoid any typos later on:

$ export REPO_URI=$(aws ecr describe-repositories --repository-names gn-fargate-tutorial \
  --query 'repositories[0].repositoryUri' --output text)

And check it is set correctly:

$ echo $REPO_URI

Now let's tag our Docker image with the repository URI and the tag name latest.

$ docker tag gn-fargate-tutorial:latest $REPO_URI:latest
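
Tagging doesn't copy anything; both names now point at the same image ID, which you can verify with:

$ docker images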

Before we can push to our ECR repository, we need to log into it. aws ecr get-login-password retrieves an authentication token, which we pipe to the docker login command. We also need to specify the registry server name, which we can extract from the repository URI.

To understand this better in case you are not familiar with the shell syntax, type:

$ echo $REPO_URI
$ echo ${REPO_URI%/*}

You'll see that the second form simply removes the last slash and everything after it from the end of the URI, leaving just the server name.

So, type this to log in:

$ aws ecr get-login-password | docker login --username AWS --password-stdin ${REPO_URI%/*}

CloudShell's short-lived internal credentials are used automatically, so you should see Login Succeeded in the output.

Now we can push our image with:

$ docker push $REPO_URI:latest
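
Once the push completes, you can confirm the image has arrived in the repository:

$ aws ecr list-images --repository-name gn-fargate-tutorial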

Part 4: Create Task Definition

To summarise what we have done so far, we have created a simple Python application, created a Dockerfile to package it into a container, built the container image, tagged it with the repository URI and pushed it to the ECR repository so that Fargate can use it to deploy our application.

Now we need to create an IAM role that will allow Fargate to pull our image from ECR. First we create a trust policy JSON file:

$ cat > task-execution-role-trust.json << EOL
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOL

This is a simple trust policy that allows ECS tasks to assume the role; the permissions to actually pull our image from ECR come from the managed policy we will attach shortly.

So let's create it:

$ aws iam create-role \
  --role-name gn-fargate-tutorial-role \
  --assume-role-policy-document file://task-execution-role-trust.json

You should see information about the role being created, returned as JSON similar to this:

{
    "Role": {
        "Path": "/",
        "RoleName": "gn-fargate-tutorial-role",
        "RoleId": "*********************",
        "Arn": "arn:aws:iam::************:role/gn-fargate-tutorial-role",
        "CreateDate": "2024-12-15T14:26:46+00:00",
        "AssumeRolePolicyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Principal": {
                        "Service": "ecs-tasks.amazonaws.com"
                    },
                    "Action": "sts:AssumeRole"
                }
            ]
        }
    }
}

We'll need the role ARN shortly, so let's capture it into a variable:

$ export TASK_EXECUTION_ROLE=$(aws iam get-role --role-name gn-fargate-tutorial-role --query 'Role.Arn' --output text)

And let's attach the AWS managed policy:

$ aws iam attach-role-policy \
  --role-name gn-fargate-tutorial-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
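
You can confirm the policy is attached with:

$ aws iam list-attached-role-policies --role-name gn-fargate-tutorial-role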

Now let's get the task definition prepared for Fargate. We'll include an awslogs log configuration so the container's output is sent to CloudWatch Logs, which we'll use later to view the application logs. Note that the heredoc expands ${AWS_REGION}, which CloudShell sets automatically.

Copy/paste this:

$ cat > task-definition.json << EOL
{
  "family": "gn-fargate-tutorial",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "${TASK_EXECUTION_ROLE}",
  "containerDefinitions": [
    {
      "name": "gn-fargate-tutorial",
      "image": "${REPO_URI}:latest",
      "portMappings": [
        {
          "containerPort": 3000,
          "protocol": "tcp"
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/gn-fargate-tutorial",
          "awslogs-region": "${AWS_REGION}",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ]
}
EOL

Now type cat task-definition.json to check it; you should see the repository URI in the image field, the execution role ARN, and your region in the log configuration.
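
The awslogs driver needs the log group to exist before the task starts (the managed execution role policy we attached can create log streams but not log groups), so let's create it now:

$ aws logs create-log-group --log-group-name /ecs/gn-fargate-tutorial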

We should now have everything set up, so let's register the task definition with Fargate next:

$ aws ecs register-task-definition --cli-input-json file://task-definition.json

Note: If you want to check how things are looking from the web console, you can see our new task definition by going to the ECS console and clicking on Task definitions.
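
You can do the same from the command line:

$ aws ecs describe-task-definition --task-definition gn-fargate-tutorial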

Part 5: Create the Cluster and Deploy

We'll first create a cluster for our application with the same name as we've used to identify our repository. Again, this helps us keep things organised and easier to clean up later:

$ aws ecs create-cluster --cluster-name gn-fargate-tutorial

Now for the networking setup. This is a critical part that depends on your AWS account's configuration. In most cases we can use the first option, our default VPC. If you are using an AWS account where the default VPC has been removed or changed, use the second option. If in doubt, try the first option first.

Option 1: Using Default VPC (Simplest, but may not be available)

Check if you have a default VPC:

$ aws ec2 describe-vpcs --filters Name=isDefault,Values=true --query 'Vpcs[*].VpcId' --output text

If this returns a VPC ID, we can extract it along with its public subnets and set them as variables:

# Get default VPC ID
$ export VPC_ID=$(aws ec2 describe-vpcs --filters Name=isDefault,Values=true --query 'Vpcs[0].VpcId' --output text)

# Get public subnets (assumes default VPC subnets are public)
$ export SUBNET_IDS=$(aws ec2 describe-subnets \
    --filters Name=vpc-id,Values=$VPC_ID Name=map-public-ip-on-launch,Values=true \
    --query 'Subnets[*].SubnetId' --output text | tr '\t' ',')

This should have set two variables, VPC_ID and SUBNET_IDS, let's check them:

$ echo "Using VPC: $VPC_ID"
$ echo "Using Subnets: $SUBNET_IDS"

If either of these is empty, you will need to manually select pre-configured subnets and should continue with Option 2 instead. Otherwise, all looks good and you can skip ahead to creating the security group.

Option 2: List Available VPCs and Choose One

If Option 1 hasn't been possible, we'll establish which VPCs are available and choose one.

$ aws ec2 describe-vpcs --query 'Vpcs[*].[VpcId,Tags[?Key==`Name`].Value|[0],CidrBlock]' --output table

Then list subnets for your chosen VPC (replace VPC_ID with your choice):

$ aws ec2 describe-subnets \
    --filters Name=vpc-id,Values=YOUR_VPC_ID \
    --query 'Subnets[*].[SubnetId,AvailabilityZone,MapPublicIpOnLaunch,Tags[?Key==`Name`].Value|[0]]' \
    --output table

Then set your variables:

$ export VPC_ID=YOUR_VPC_ID
$ export SUBNET_IDS=SUBNET_ID1,SUBNET_ID2

Create Security Group

Let's create our security group in the VPC we have now identified:

$ export SG_ID=$(aws ec2 create-security-group \
    --group-name gn-fargate-tutorial-sg \
    --description "Security group for Fargate tutorial" \
    --vpc-id $VPC_ID \
    --query 'GroupId' --output text)

Our security group is now created and we should have it stored in the SG_ID variable:

$ echo $SG_ID

We can allow inbound traffic to our application on port 3000 from any IP address:

$ aws ec2 authorize-security-group-ingress \
    --group-id $SG_ID \
    --protocol tcp \
    --port 3000 \
    --cidr 0.0.0.0/0

The security group acts as a virtual firewall, and in its default state it would block traffic to our application; we have now allowed inbound traffic on port 3000 from any IP address. Security groups are stateful, so by allowing traffic in to our application we also allow return traffic out over the same established connection.

In addition to the security group, the network access control list (NACL) needs to allow inbound and outbound traffic to and from our application on port 3000. The default VPC NACLs allow this, so we won't go through the configuration here; it is assumed you already have the knowledge to make the changes yourself if you are using a non-default VPC. Remember, though, that NACLs are stateless, so there must be rules allowing traffic in both directions.

Create the Fargate Service

Finally we are ready to create the Fargate service. We'll set the desired count to just 1 for our testing, to keep costs to a minimum:

$ aws ecs create-service \
    --cluster gn-fargate-tutorial \
    --service-name gn-fargate-tutorial \
    --task-definition gn-fargate-tutorial \
    --desired-count 1 \
    --launch-type FARGATE \
    --network-configuration \
       "awsvpcConfiguration={subnets=[${SUBNET_IDS%,}],securityGroups=[$SG_ID],assignPublicIp=ENABLED}"

Our service is now created; Fargate will pull the image from ECR and launch a task for us.
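
If you'd like to watch the rollout from the command line, one option is to inspect the service's deployment state:

$ aws ecs describe-services --cluster gn-fargate-tutorial --services gn-fargate-tutorial \
    --query 'services[0].deployments'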

Test the Service

After creating the service, it will take a minute or two for the task to start up. We can use this command to keep checking until we have a running task:

# Wait for task to be running (repeat until you see a task ARN)
$ aws ecs list-tasks --cluster gn-fargate-tutorial --service-name gn-fargate-tutorial --desired-status RUNNING

Once you have a task, we'll capture the task ARN into a variable to monitor it later:

$ export TASK_ARN=$(aws ecs list-tasks --cluster gn-fargate-tutorial --service-name gn-fargate-tutorial \
    --desired-status RUNNING --query 'taskArns[0]' --output text)

We should now test this in our web browser, although it's not immediately obvious how to find the public IP address of our service. We'll first get the ID of the task's elastic network interface (ENI):

$ export ENI_ID=$(aws ecs describe-tasks --cluster gn-fargate-tutorial --tasks $TASK_ARN \
    --query 'tasks[0].attachments[0].details[?name==`networkInterfaceId`].value' --output text)

And from that we can find the public IP address attached:

$ aws ec2 describe-network-interfaces --network-interface-ids $ENI_ID \
    --query 'NetworkInterfaces[0].Association.PublicIp' --output text

Now you can visit http://public-ip:3000 in your web browser (substituting the address from the previous command) and you will hopefully see "Hello from your python app!"
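
You can also test it from CloudShell itself; PUBLIC_IP here is just a convenience variable of our own:

$ export PUBLIC_IP=$(aws ec2 describe-network-interfaces --network-interface-ids $ENI_ID \
    --query 'NetworkInterfaces[0].Association.PublicIp' --output text)
$ curl http://$PUBLIC_IP:3000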

Let's take a look at the logs. With the awslogs configuration we used, log stream names follow the pattern ecs/<container-name>/<task-id>, and the task ID is the last segment of the task ARN, so we can build the stream name like this:

$ export LOG_STREAM=ecs/gn-fargate-tutorial/${TASK_ARN##*/}

Now we can view the logs:

$ aws logs get-log-events \
    --log-group-name /ecs/gn-fargate-tutorial \
    --log-stream-name $LOG_STREAM

That was a lot of work for our tiny application, but hopefully you can see how powerful this scalable serverless framework could be when running a large, complex application.

Part 6: Clean Up

The following commands will clean up all resources we created in this tutorial.

First, delete the ECS service by setting the desired count to 0 and then removing it:

$ aws ecs update-service --cluster gn-fargate-tutorial --service gn-fargate-tutorial --desired-count 0
$ aws ecs delete-service --cluster gn-fargate-tutorial --service gn-fargate-tutorial --force

Delete the ECS cluster:

$ aws ecs delete-cluster --cluster gn-fargate-tutorial

Remove the ECR repository and all its images:

$ aws ecr delete-repository --repository-name gn-fargate-tutorial --force
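
Delete the CloudWatch log group we created for the task logs:

$ aws logs delete-log-group --log-group-name /ecs/gn-fargate-tutorial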

Clean up the IAM role by first detaching the policy:

$ aws iam detach-role-policy \
  --role-name gn-fargate-tutorial-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy

Then delete the role:

$ aws iam delete-role --role-name gn-fargate-tutorial-role

Remove the security group (if this fails with a DependencyViolation error, the task's network interface may still exist; wait a minute or two and try again):

$ aws ec2 describe-security-groups --filters Name=group-name,Values=gn-fargate-tutorial-sg \
   --query 'SecurityGroups[0].GroupId' --output text | xargs -I {} aws ec2 delete-security-group \
   --group-id {}

Finally, clean up the local working directory:

$ cd ~
$ rm -rf gn-fargate-tutorial

© 2025 Goldnode. All rights reserved.