CDN for a multi-tenant SaaS application with free Let’s Encrypt SSL certificates, Part 1

Problems

Sign up on example.com and you get either a subdomain, e.g. yours.example.com, or a directory, e.g. example.com/yours/, for free. If you want to use your own domain, you point it to our IP or create a CNAME record.

 

We wanted to generate a free Let’s Encrypt certificate for your domain.

CDN: we wanted to serve content to users from the nearest geographical location and also use edge caching.

 

Existing solutions:

We first explored some of the existing solutions, but they were either too expensive or did not support SSL certificates for multiple domains.

 

Two of the solutions worth mentioning are Cloudflare and AWS CloudFront. Cloudflare does provide a solution, but it is too expensive for most startups. AWS CloudFront has a lot of restrictions in place, like a limited number of distributions per account, and it also becomes expensive as you scale.

 

So we decided to build our own CDN.

This blog post will walk you through the step-by-step process of setting up automatic SSL certificate generation for a multi-tenant application.

 

This blog post assumes some knowledge of AWS Route53, AWS VPC, subnets, AWS SNS and SQS, AWS ECS Fargate, and AWS Elastic Load Balancers. Additionally, you should be familiar with the Redis server and the AWS IAM permissions system.

 

Tools and their purpose

 

AWS Route53 for geolocation-based DNS resolution. OpenResty with lua-resty-auto-ssl to generate SSL certificates on demand. Redis for certificate storage. All of this set up in Docker containers running on ECS Fargate for horizontal scaling. AWS SNS and SQS for communicating with our containers across regions.

 

Overall Game Plan

 

Write geolocation-based rules on Route53 to send traffic to the nearest regional servers. For our edge servers, we’ll set up Docker containers for OpenResty, Redis, a listener for Redis, and a listener for AWS SQS. Then we will deploy these containers on AWS ECS Fargate for horizontal scaling and set up SNS and SQS notifications for cross-region communication.

 

Geolocation-based DNS resolution:

 

We needed a DNS provider that supports geolocation-based DNS resolution. AWS Route53 is an excellent choice for this purpose.
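As an illustration, a geolocation routing record can be created with the AWS CLI. This is a hypothetical sketch: the hosted zone ID, domain, set identifier, and IP address below are placeholders, not values from our setup.

```shell
# Hypothetical example: route European users to an EU edge server.
# Z123EXAMPLE, cdn.example.com, "eu-edge", and 203.0.113.10 are placeholders.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z123EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "cdn.example.com",
        "Type": "A",
        "SetIdentifier": "eu-edge",
        "GeoLocation": {"ContinentCode": "EU"},
        "TTL": 60,
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    }]
  }'
```

You would repeat this for each region, and Route53 also expects a default record (with `"GeoLocation": {"CountryCode": "*"}`) for users who match no geolocation rule.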

 


 

Tools that were used

 

We were using Python (Django) for our backend server, but this task can be performed with any backend for which AWS provides an SDK, such as JavaScript, PHP, Ruby on Rails, etc.

 

Installing OpenResty in a Docker container

 

For this container, we will be using “openresty/openresty:alpine-fat”. In the Dockerfile we will do the following:

  1. Make a directory for the Nginx cache, “/var/cache/nginx”
  2. Install the dependencies for lua-resty-auto-ssl:
    • bash
    • curl
    • diffutils
    • grep
    • sed
    • openssl
  3. Make the directory “/etc/resty-auto-ssl”
  4. Give the directory appropriate permissions
  5. Run the following command to install lua-resty-auto-ssl:

 

/usr/local/openresty/luajit/bin/luarocks install lua-resty-auto-ssl \
   && rm -rf /usr/local/openresty/nginx/conf/*

  6. To start Nginx we have to add a backup certificate. Our system will serve this certificate if the domain is not registered in our system or a certificate couldn’t be generated for the domain.
  7. Last but not least, copy your nginx.conf file to “/usr/local/openresty/nginx/conf/nginx.conf”
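The steps above can be sketched as a Dockerfile like this. It is a sketch, not our exact file: the fallback certificate paths and subject are assumptions for illustration.

```dockerfile
FROM openresty/openresty:alpine-fat

# Steps 1-5: cache directory, dependencies, auto-ssl directory and install
RUN mkdir -p /var/cache/nginx \
    && apk add --no-cache bash curl diffutils grep sed openssl \
    && mkdir -p /etc/resty-auto-ssl \
    && chown -R nobody /etc/resty-auto-ssl \
    && /usr/local/openresty/luajit/bin/luarocks install lua-resty-auto-ssl \
    && rm -rf /usr/local/openresty/nginx/conf/*

# Step 6: self-signed backup certificate, served when a domain has no
# real certificate (output paths and CN are placeholder assumptions)
RUN openssl req -new -newkey rsa:2048 -days 3650 -nodes -x509 \
    -subj '/CN=sni-support-required-for-valid-ssl' \
    -keyout /etc/ssl/resty-auto-ssl-fallback.key \
    -out /etc/ssl/resty-auto-ssl-fallback.crt

# Step 7: our Nginx configuration
COPY nginx.conf /usr/local/openresty/nginx/conf/nginx.conf
```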

 

You can write your nginx.conf according to your own requirements and routing, but there are some additional snippets that need to be added to the file for auto-ssl to work.

 

First, we will add the following lines to the http section:

 

lua_shared_dict auto_ssl 1m;

lua_shared_dict auto_ssl_settings 64k;

resolver 8.8.8.8;

 

This is Google’s DNS resolver, but you can use your system’s default DNS resolver or a custom one if you have one. Next, we will initialize our Lua block using the following snippet:

 

init_by_lua_block {
  auto_ssl = (require "resty.auto-ssl").new()
  auto_ssl:set("storage_adapter", "resty.auto-ssl.storage_adapters.redis")
  auto_ssl:set("redis", {host = "127.0.0.1", prefix = "<any>"})
  auto_ssl:init()
}
init_worker_by_lua_block {
  auto_ssl:init_worker()
}

 

So first of all, we initialize the variable “auto_ssl”. We are using Redis as the storage adapter for auto-ssl. There are two storage adapters you can choose from according to your needs: file and Redis. You can read more about storage adapters in the lua-resty-auto-ssl documentation.

 

Next, we are setting up the configuration variables for Redis. The host is the hostname of your Redis server. In this case, we have set it to 127.0.0.1, the loopback address. This is because we will be running the Redis server in the same task as our Docker containers, and according to AWS ECS container networking, containers in a task can use localhost to communicate with each other. The prefix is the key prefix Redis will use to store and retrieve your domain’s certificate. For example, if you set it to “certificate”, then auto-ssl will store your certificate under the key certificate:yourdomain.com. You can customize this to your liking.

 

Now one question arises here: this would generate a certificate for any domain that is pointed at our CDN, and we want to create certificates only for the domains we allow. Well, auto-ssl provides an allow_domain function, which is called each time a new certificate is about to be generated. This function should return true if the domain is allowed to get a certificate and false otherwise. For more information, see the lua-resty-auto-ssl documentation.
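As a sketch, allow_domain can be set inside the init_by_lua_block shown above. The Redis set name “allowed_domains” is an assumption for illustration; your backend would add a domain to this set when a tenant connects it.

```nginx
# Hypothetical sketch, added inside the init_by_lua_block shown above.
# The Redis set name "allowed_domains" is an assumption.
auto_ssl:set("allow_domain", function(domain)
  local redis = require "resty.redis"
  local red = redis:new()
  local ok = red:connect("127.0.0.1", 6379)
  if not ok then
    return false  -- fail closed if Redis is unreachable
  end
  local allowed = red:sismember("allowed_domains", domain)
  red:set_keepalive()
  return allowed == 1
end)
```

Failing closed here means an unreachable Redis blocks new certificates rather than issuing them for arbitrary domains.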

 

Then the next block, init_worker_by_lua_block, simply initializes the auto-ssl worker.

Our next phase in the configuration is to set up caching in Nginx; for details, see the Nginx proxy caching documentation. The cache is an important part of a CDN: our CDN will cache the static website and serve the cached copy to all requests, so it does not have to fetch a fresh version for each request. That’s the beauty of a CDN. Don’t worry, we will also implement a technique to delete the cache when a new version of the website is uploaded.
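A minimal caching setup in the http section could look like this. The zone name, sizes, and validity times are assumptions to adapt to your own needs; the cache directory matches the one created in the Dockerfile.

```nginx
# Hypothetical cache configuration; tune sizes and times to your traffic.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cdn_cache:10m
                 max_size=1g inactive=60m use_temp_path=off;

# Then, inside the location that proxies to the origin:
# proxy_cache cdn_cache;
# proxy_cache_valid 200 301 302 10m;
# proxy_cache_key $scheme$host$request_uri;
```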

 

After caching, we will set up the Nginx server blocks. The first block will listen for HTTPS requests. In that block, add the following snippet:

ssl_certificate_by_lua_block {
  auto_ssl:ssl_certificate()
}

This will perform all the certificate-related operations when a request hits us.

After that, we will use proxy_pass to forward all requests to S3 static website hosting.
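Putting these pieces together, the HTTPS server block could look like the sketch below. The fallback certificate paths and the S3 website endpoint are placeholder assumptions, not our actual values.

```nginx
server {
  listen 443 ssl;

  # Backup certificate, required so Nginx can start; auto-ssl swaps in
  # the real per-domain certificate during the TLS handshake.
  ssl_certificate /etc/ssl/resty-auto-ssl-fallback.crt;
  ssl_certificate_key /etc/ssl/resty-auto-ssl-fallback.key;

  ssl_certificate_by_lua_block {
    auto_ssl:ssl_certificate()
  }

  location / {
    # Nginx sends the upstream host as the Host header by default,
    # which is what the S3 website endpoint expects.
    proxy_pass http://example-bucket.s3-website-us-east-1.amazonaws.com;
  }
}
```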

 

Next, we will also listen for HTTP requests on port 80. Here, if you want, you can add a redirect to HTTPS. Most importantly, though, we will use this block to perform domain verification with Let’s Encrypt.

 

location /.well-known/acme-challenge/ {
  content_by_lua_block {
    auto_ssl:challenge_server()
  }
}

Here, we will also configure a /purge route for cache purging. Just write a couple of lines of Lua to delete the cache folder you configured for Nginx caching.

 

content_by_lua_block {
    os.execute("rm -r /usr/local/cache/*")
}
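A fuller sketch of the purge location might look like this. The access-control ranges are assumptions; you should restrict this route so only your own infrastructure can trigger it, and the cache path must match the directory you configured for Nginx caching.

```nginx
# Hypothetical /purge location; the allow range is a placeholder
# (e.g. your VPC CIDR), and the cache path must match your setup.
location /purge {
  allow 10.0.0.0/8;
  deny all;
  content_by_lua_block {
    os.execute("rm -rf /var/cache/nginx/*")
    ngx.say("cache purged")
  }
}
```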

 

There will also be an internal server running for auto-ssl’s own operations. You have to configure it like this:

 

server {
  listen 127.0.0.1:8999;
  client_body_buffer_size 128k;
  client_max_body_size 128k;
  location / {
    content_by_lua_block {
      auto_ssl:hook_server()
    }
  }
}

These were the last bits of configuration for our OpenResty Docker container. Believe me, the next parts will not be as complicated. This is the core of our task, so we wanted to explain everything in it to avoid any confusion. Now let’s move on to our next Docker container.

 

Redis Docker container

 

There is not much in this container. We will just be using redis:6.2.1 for our Redis server, with very little configuration. Just remember that AWS ECS containers in the same task use 127.0.0.1 to communicate with each other, so you have to bind Redis to this address in the configuration. Next, in the Dockerfile, copy the configuration to “/usr/local/etc/redis/redis.conf”. Copying the configuration alone won’t be enough: you have to run “redis-server /usr/local/etc/redis/redis.conf” to tell Redis to use this configuration file, and you will be good to go.
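The whole container can be sketched in a few lines. The image tag, config path, and startup command come from the text above; the bind directive is the one setting your redis.conf needs at a minimum.

```dockerfile
FROM redis:6.2.1

# redis.conf should contain at least:
#   bind 127.0.0.1
# so other containers in the same ECS task can reach Redis on localhost.
COPY redis.conf /usr/local/etc/redis/redis.conf

CMD ["redis-server", "/usr/local/etc/redis/redis.conf"]
```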

Continue to Part 2

 
