Use AWS ECS with EFS for Persistent Storage

This post is about using EFS as a persistent volume for ECS containers.

Here are a few things I wish the AWS documentation were clearer about:

  • You don’t need to install any dependencies in your Docker image to be able to mount EFS inside the container (plain python:3.8-slim worked for me);
  • The ECS container and the EFS filesystem (mount target) don’t have to share the same subnet; they do, however, have to be in the same Availability Zone;
  • The VPC DNS hostnames option has to be enabled, because ECS uses the EFS DNS name to connect.

Keywords: AWS, ECS, EFS, Fargate, persistent storage, volume, Terraform

Overview

Here’s what needs to be done:

  1. Create an EFS filesystem and a mount target;
  2. Create an ECS task definition that uses EFS;
  3. Configure Security Groups;

I used Fargate to run the container, but the EC2 launch type should work the same. The Fargate platform has to be version 1.4.0 or higher.
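Since EFS support is tied to the platform version, it may help to pin it explicitly on the service. Here’s a minimal sketch; the service name, cluster reference, and network IDs are placeholders, and the cluster resource is assumed to be defined elsewhere:

```hcl
# Hypothetical service wiring; cluster, subnet, and SG IDs are placeholders.
# EFS volumes require Fargate platform version 1.4.0 or higher.
resource "aws_ecs_service" "this" {
  name            = "ecs-efs-test"
  cluster         = aws_ecs_cluster.this.id  # assumed to exist elsewhere
  task_definition = aws_ecs_task_definition.this.arn
  desired_count   = 1

  launch_type      = "FARGATE"
  platform_version = "1.4.0"  # "LATEST" also works; EFS needs >= 1.4.0

  network_configuration {
    subnets          = ["subnet-xxxxxxxxxx"]
    security_groups  = ["sg-xxxxxxxxxx"]
    assign_public_ip = true
  }
}
```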

EFS

The EFS part is rather simple. You have to create an EFS filesystem and a mount target. Here’s what it may look like in Terraform:

resource "aws_efs_file_system" "this" {
  availability_zone_name = "us-east-1a"
  creation_token         = "ecs-efs-test"
  encrypted              = false
  performance_mode       = "generalPurpose"
  throughput_mode        = "bursting"

  tags = {
    Name = "ecs-efs-test"
  }
}

resource "aws_efs_mount_target" "this" {
  file_system_id  = aws_efs_file_system.this.id
  subnet_id       = "subnet-xxxxxxxxxx"
  security_groups = ["sg-xxxxxxxxxx"]
}

ECS

For the ECS container, you have to do two things: 1) define a volume in the task definition; 2) use that volume in the task’s container definition. Terraform:

resource "aws_ecs_task_definition" "this" {
  # unrelated arguments skipped
  volume {
    name = "ecs-test"
    efs_volume_configuration {
      file_system_id = aws_efs_file_system.this.id
      root_directory = "/"
    }
  }

  container_definitions = jsonencode([
    {
      # unrelated arguments skipped
      mountPoints = [
        {
          sourceVolume  = "ecs-test"
          containerPath = "/app/efs"
        }
      ]
    }
  ])
}

Note that volume.name and mountPoints[].sourceVolume must match. This configuration maps the EFS root directory / to /app/efs inside the container.

Security Groups

You have to allow NFS traffic from the container to the EFS filesystem. For this, you need one Security Group for the ECS container and another for the EFS mount target. Create an ingress rule in the EFS SG that allows TCP traffic on port 2049 (NFS) from the container’s SG and you are good to go.

Terraform example:

module "fargate_security_group" {
  source  = "terraform-aws-modules/security-group/aws"
  version = "4.7.0"

  name        = "ecs-fargate-sg"
  vpc_id      = "vpc-xxxxxxxxxx"

  egress_with_cidr_blocks = [
    {
      rule        = "all-all"
      cidr_blocks = "0.0.0.0/0"
    }
  ]
}

module "efs_security_group" {
  source  = "terraform-aws-modules/security-group/aws"
  version = "4.7.0"

  name        = "efs-mount-target-sg"
  vpc_id      = "vpc-xxxxxxxxxx"

  ingress_with_source_security_group_id = [
    {
      from_port                = 2049
      to_port                  = 2049
      protocol                 = "tcp"
      description              = "ECS container to NFS port"
      source_security_group_id = module.fargate_security_group.security_group_id
    }
  ]

  egress_with_cidr_blocks = [
    {
      rule        = "all-all"
      cidr_blocks = "0.0.0.0/0"
    }
  ]
}
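If you’d rather not pull in the module, the same ingress rule can be sketched with plain resources. This is illustrative only; the resource names are hypothetical and the source SG ID is a placeholder:

```hcl
# Plain-Terraform equivalent of the EFS mount target SG above (illustrative).
resource "aws_security_group" "efs_mount_target" {
  name   = "efs-mount-target-sg"
  vpc_id = "vpc-xxxxxxxxxx"
}

# Allow NFS (TCP 2049) from the ECS container's security group.
resource "aws_security_group_rule" "nfs_from_ecs" {
  type                     = "ingress"
  from_port                = 2049
  to_port                  = 2049
  protocol                 = "tcp"
  description              = "ECS container to NFS port"
  security_group_id        = aws_security_group.efs_mount_target.id
  source_security_group_id = "sg-xxxxxxxxxx"  # the Fargate container's SG
}
```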

That concludes the configuration: your ECS container should be able to mount the EFS filesystem on startup.

Troubleshooting

Generally, this AWS post should help if you run into trouble. However, it doesn’t directly mention that one cause of the following error may be disabled DNS hostnames in your VPC:

ResourceInitializationError: failed to invoke EFS utils commands to set up EFS volumes: stderr: Failed to resolve "fs-xxxxxxxxxxx.efs.us-east-1.amazonaws.com" - check that your file system ID is correct.

Make sure the VPC DNS hostnames are enabled.
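In Terraform, that comes down to two flags on the VPC. A minimal sketch, with a placeholder CIDR block:

```hcl
# Both DNS support and DNS hostnames need to be on for the EFS DNS name
# to resolve from inside the VPC.
resource "aws_vpc" "this" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true  # true by default
  enable_dns_hostnames = true  # false by default for non-default VPCs
}
```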