Setting Up AWS ECS with Codeship

Jonathan Chao
5 min read · Dec 15, 2018


Recently I’ve been working on a project in which I have to host a Docker container on AWS ECS. We already use Codeship for continuous integration, so this requires some additional setup. Below is the list of steps I went through. Hopefully it’s detailed enough that I can refer back to it in the future without too much troubleshooting.

Sure, these tasks can all be achieved through the API, but I’ve done everything through the UI. Linking the two shouldn’t be too hard.

Step 1: Create new load balancer

We already have an existing cluster, so I’m utilizing that. Otherwise there would be one additional step to create the cluster.

And yes, it is a bit counterintuitive, as we don’t start in the ECS service but in EC2.

Go to EC2 -> click Load Balancers -> Create Load Balancer -> HTTP/HTTPS -> Create -> enter the name of the load balancer (I usually follow the format of the service name followed by “-lb” at the end) -> internet-facing -> ipv4 -> listener set to HTTP:80 -> availability zones set to the VPC where your service resides (this setup should be identical to your ECS setup, unless you have different requirements) -> click Next twice to go to Configure Security Groups

Set up security groups the same as your ECS setup -> set up target group -> “New target group” -> Name should follow the format of the load balancer, but with “-tg” instead of “-lb” -> Target type = Instance -> Protocol = HTTP -> Port = 80 (these should be the defaults already) -> the health check should point to your health check page (say your health check page is http://your-domain/somestring/healthcheck/ ; then the path here should be “/somestring/healthcheck/”, with the leading slash) -> skip registering targets for now
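For reference, the same two resources can be created with the AWS CLI. This is only a sketch of the console steps above; all names and IDs in angle brackets are placeholders:

```shell
# Create an internet-facing ALB (the HTTP:80 listener is attached in a
# separate call, aws elbv2 create-listener, using the ARNs returned here)
aws elbv2 create-load-balancer \
  --name your-service-lb \
  --scheme internet-facing \
  --ip-address-type ipv4 \
  --subnets <subnet-1> <subnet-2> \
  --security-groups <sg-id>

# Create the matching target group with the health check path
# (note: for tasks launched with the FARGATE launch type, the target
# type must be "ip" rather than "instance")
aws elbv2 create-target-group \
  --name your-service-tg \
  --protocol HTTP \
  --port 80 \
  --target-type instance \
  --vpc-id <vpc-id> \
  --health-check-path /somestring/healthcheck/
```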

Step 2: Set up Task Definition and all setup files in your code

The load balancer gives you a DNS name you can call. Requests then go to the target group, which routes them to the task/service you choose. These tasks have their own public and private IPs. To finish setting up the target, we need to create the task first.

For this project we are using Django, but the type of project shouldn’t matter.

First, create a Dockerfile inside the directory where your requirements.txt lives, with the following content:

FROM python:3.6

WORKDIR /opt/yourdir
ENV PYTHONPATH="/opt/yourdir"

RUN useradd -ms /bin/bash adminuser

COPY requirements.txt /opt/yourdir/requirements.txt
RUN pip install pip -U && \
    pip install -r requirements.txt

# now we need to copy the source code to the directory
COPY django_project /opt/yourdir/django_project

# we will set up the bootstrap.sh and everything else later
COPY ops/bootstrap.sh /opt/yourdir/bootstrap.sh

ENTRYPOINT ["/bin/bash", "bootstrap.sh"]

Since we are using AWS, we will need the credential information written down.

Create a file called aws.env

AWS_REGION=us-east-1
AWS_DEFAULT_REGION=us-east-1
AWS_ACCESS_KEY_ID=<your key>
AWS_SECRET_ACCESS_KEY=<your secret>

Now, this file cannot be uploaded directly; we need to encrypt it first. That requires two things: the Jet CLI and codeship.aes.

The Jet CLI can be installed in various ways (see Codeship’s installation docs), and codeship.aes can be found on the Codeship page under your project -> General.

After these two are set in place, simply run

jet encrypt aws.env aws_env.encrypted

This will create a file called aws_env.encrypted for Codeship to decrypt and use. DO NOT commit the unencrypted file to GitHub or anywhere else.

Then we need to set up bootstrap.sh:

#!/bin/bash

APP_NAME=your_app
ENV=prod  # or whatever you want it to be, just make sure this matches what you put on ECS later

# Here you can set your variables, or you can create a file and source it
FILEPATH=blahblah/blah
# source some_file.txt

# actual execution script
case "$1" in
run)
    # we start django here
    python3 django_project/manage.py migrate  # migrate the models
    # bind to 0.0.0.0:80 so the server is reachable from outside the
    # container (the default 127.0.0.1:8000 would not be, and 80 matches
    # the target group port)
    python3 django_project/manage.py runserver 0.0.0.0:80
    exit
    ;;
*)
    # the command is something like bash. Just run it in the right env
    exec "$@"
    ;;
esac
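The case/exec pattern in bootstrap.sh is worth calling out: a known subcommand (run) triggers a fixed action, while anything else is executed verbatim, which is what lets you `docker run` the image with `bash` for debugging. Here is a minimal, self-contained illustration (the `dispatch` function name exists only for this sketch):

```shell
#!/bin/bash

# Known subcommands run a fixed action; anything else is executed as-is.
dispatch() {
  case "$1" in
  run)
    # in bootstrap.sh, this is where the Django server would start
    echo "starting app"
    ;;
  *)
    # fall through: run whatever command was passed
    # (bootstrap.sh uses exec here to replace the shell entirely)
    "$@"
    ;;
  esac
}

dispatch run          # prints "starting app"
dispatch echo hello   # prints "hello"
```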

Then we set up aws_deployment.sh

#!/bin/bash

source /deploy/ops/your_source_file  # source any variables you want

pip install awscli
pip install jinja2-cli  # watch for versions for these

for COMMAND in "run"
do
    jinja2 /deploy/ops/tasks/task_def.json.j2 -D branch=<your_branch> -D env=<your_env> -D command=$COMMAND > /task_def_$COMMAND.json
    cat /task_def_$COMMAND.json
    # register a new version of the task defined in the json and update the service
    aws ecs register-task-definition --cli-input-json file:///task_def_$COMMAND.json
    aws ecs update-service --cluster <your_cluster> --service <your_service> --task-definition <your_task>
done

Phew, quite some work, but we haven’t really tied it all together yet.

We need the task_def.json.j2 now.

This will be a JSON file that follows the task definition format in AWS ECS. You can follow this guide: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-task-definition.html

Since we specified the path as ops/tasks/task_def.json.j2, store it accordingly.
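As an illustration only, a minimal Fargate-compatible template might look like the following. The family, container name, image URL, role ARN, and resource sizes are all placeholders you would replace with your own; the three -D variables passed by aws_deployment.sh appear as Jinja2 expressions:

```json
{
  "family": "your_app-{{ env }}",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::<account-id>:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "your_app",
      "image": "<account-id>.dkr.ecr.us-east-1.amazonaws.com/<your-repo>:{{ branch }}",
      "essential": true,
      "command": ["{{ command }}"],
      "portMappings": [
        { "containerPort": 80, "protocol": "tcp" }
      ]
    }
  ]
}
```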

Okay, back to ECS

Step 3: create task in ECS

We already have an existing cluster we want to use, so I’ll skip the cluster-creation part; otherwise, you’d need to create your cluster first.

Create repository:

Go to ECS -> click Repositories on the left panel -> Create repository -> enter the repo name you choose
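If you prefer the CLI, the equivalent is a single command (repository name is a placeholder):

```shell
aws ecr create-repository --repository-name <your-repo>
```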

Create service:

Go to ECS -> under the cluster you wanna use -> click Create under the “Service” tab -> Launch type = FARGATE -> Task Definition (Family) = the task you wanna use -> Cluster = the cluster you’re in -> Number of tasks = 1 (or however many you need) -> Deployment type = Rolling update -> Click Next

Cluster VPC = VPC you wanna use, be consistent with Load Balancer -> Subnets = subnets you wanna use, same idea -> Security Groups, same idea -> Auto-assign public IP=ENABLED -> Load Balancer = Application Load Balancer -> Load balancer name = <the -lb you created> -> Container name=<there should be only one choice> -> Add to Load Balancer -> Production listener port: 80:HTTP -> Target group name=create new: the -tg you created -> Next Step

Set Auto Scaling= Do not adjust the service’s desired count -> Next -> Create Service
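The whole service-creation wizard can likewise be scripted. A sketch only, with placeholder IDs; the FARGATE launch type and assignPublicIp=ENABLED mirror the console choices above:

```shell
aws ecs create-service \
  --cluster <your_cluster> \
  --service-name <your_service> \
  --task-definition <your_task> \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[<subnet-id>],securityGroups=[<sg-id>],assignPublicIp=ENABLED}" \
  --load-balancers "targetGroupArn=<target-group-arn>,containerName=<container-name>,containerPort=80"
```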

At this point, you should be able to run

jet steps --tag=prod --push

or something similar to make the push to ECS

Step 4: Codeship integration

Essentially, Codeship just uses jet to do the work, with some wrapping around it. So make sure your jet command above works first.

You essentially need 2 additional files: codeship-steps.yml and codeship-services.yml

In codeship-services.yml, we have

your_app:
  build:
    image: <the link to the repo you created>
    dockerfile_path: Dockerfile
  encrypted_env_file: aws_env.encrypted
  environment:
    - AWS_DEFAULT_REGION=us-east-1

awsdockerconfig:
  image: codeship/aws-ecr-dockercfg-generator
  add_docker: true
  encrypted_env_file: aws_env.encrypted

awsdeployment:
  image: codeship/aws-deployment
  add_docker: true
  encrypted_env_file: aws_env.encrypted
  volumes:
    - ./:/deploy
  environment:
    - AWS_DEFAULT_REGION=us-east-1

And finally codeship-steps.yml

- name: Build my app and push to AWS
  service: your_app
  type: push
  tag: "^(dev|prod)$" # this is optional, it tells codeship when to execute this step
  image_name: <the repo you created>
  image_tag: "{{.Branch}}"
  registry: <your registry>
  dockercfg_service: awsdockerconfig
- name: Deploy to ECS
  type: serial
  steps:
    - service: awsdeployment
      tag: "^(dev|prod)$"
      command: /deploy/ops/aws_deployment.sh
      dockercfg_service: awsdockerconfig

More Codeship doc: https://documentation.codeship.com/pro/continuous-deployment/aws/#codeship-aws-deployment-container

Now, when you push a change to any branch specified in Codeship, the steps will be executed, and you should find yourself a new task on your ECS!


Jonathan Chao

I am a software developer who has been in this industry for close to a decade. I share my experience with people who are in, or want to get into, the industry.