How to Host Multiple Static Websites Under One S3 Bucket

Hello and welcome to my blog! Today, I’ll guide you through the process of hosting multiple static websites on different subdomains under a single S3 bucket. I implemented this for my postcard project, and I’m excited to share the steps with you.

Here’s the result we aim to achieve. The first website will be accessible at the subdomain alan.postcard-gift.site:

First example of the website

And the second website will be accessible at lumberj3ck.postcard-gift.site:

Second example of the website

Initially, I considered using the Vercel API for this purpose. However, I realized it might not be suitable due to potential usage limits. Since Vercel is essentially a wrapper around AWS, I decided it would be more cost-effective to use AWS directly and cut out that layer of overhead. Moreover, because our use case doesn’t need features like build pipelines or asset bundling, plain AWS is the ideal fit.

Architecture

The architecture for our setup will look like this:

S3-and-Lambda-Edge architecture

The magic lies in the combination of CloudFront and a Lambda@Edge function, which ensures that each URL correctly maps to the corresponding S3 folder.
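
To make this concrete, here is a minimal sketch in plain JavaScript of the rewrite the Lambda@Edge function performs (the full code appears in step 5; the function name here is just for illustration):

// Illustrative sketch: the request's Host header picks the S3 key prefix.
function mapHostToKey(host, uri) {
    const [subdomain, folderName] = host.split('.');
    // Directory-style URIs fall back to index.html
    const path = uri === '/' || uri.endsWith('/') ? `${uri}index.html` : uri;
    return `/${folderName}/${subdomain}${path}`;
}

console.log(mapHostToKey('alan.postcard-gift.site', '/'));
// -> /postcard-gift/alan/index.html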

Steps to Set Up

1. Create an S3 Bucket

First, create an S3 bucket without enabling public access. Since CloudFront will handle access to the bucket internally, there’s no need to make the bucket publicly accessible.

Create bucket

Next, enable Static Website Hosting in the bucket properties.

Enable static website hosting

Configure the following options:

Static website hosting options
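
If you’re unsure what to enter here, a typical configuration for this kind of setup (my assumption; adjust it to your own site) is:

Hosting type: Host a static website
Index document: index.html
Error document: optional, e.g. error.html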

Now, upload your two static websites into the S3 bucket, ensuring that the folder names match the desired subdomains.

Note: Folder names should correspond to subdomains. For example, the folder alan matches the subdomain alan.postcard-gift.site.

Two websites in S3
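
For reference, this is the bucket layout the setup implies, given the URI rewrite performed by the Lambda function in step 5 (folder names taken from my project):

valetine-postcard-websites/
└── postcard-gift/
    ├── alan/
    │   └── index.html
    └── lumberj3ck/
        └── index.html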

2. Set Up AWS Certificate Manager

Next, create an SSL certificate in AWS Certificate Manager for your domain. This ensures secure HTTPS access to your CloudFront distribution.

Note: A single certificate can include multiple domains, which is useful if you’re hosting several subdomains. For example, adding a wildcard name such as *.postcard-gift.site alongside postcard-gift.site covers alan.postcard-gift.site, lumberj3ck.postcard-gift.site, and any future subdomain.


ACM domains

Add the generated CNAME records to your DNS provider to validate domain ownership. If your DNS provider doesn’t support this, you can use Cloudflare, as I did.
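
For reference, a validation record looks roughly like this (the actual name and value are unique strings generated by ACM; these are placeholders):

Type: CNAME
Name: _1a2b3c4d5e6f7890.postcard-gift.site
Value: _0987f6e5d4c3b2a1.abcdefghij.acm-validations.aws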

Note: I had an issue with Namecheap: it wouldn’t let me add the CNAME value from AWS Certificate Manager, which is why I used Cloudflare instead. Moving your domain’s DNS management over to Cloudflare is pretty straightforward.

Cloudflare DNS settings

3. Configure CloudFront

Create a new CloudFront distribution with your S3 bucket as the origin. Use the bucket’s default (REST) endpoint rather than the static website hosting endpoint; for the bucket above, that is valetine-postcard-websites.s3.eu-central-1.amazonaws.com, not valetine-postcard-websites.s3-website.eu-central-1.amazonaws.com. Since we don’t want to leave the bucket public, choose Origin Access Control (OAC) for origin access; AWS will then generate a bucket policy, which we must apply to the bucket (see step 4).

CloudFront creation

Set the Cache Policy to CachingOptimized:

Cache policy for CloudFront

Add your subdomains (here, alan.postcard-gift.site and lumberj3ck.postcard-gift.site) as alternate domain names (CNAMEs) and select the SSL certificate created in AWS Certificate Manager. Finally, point each subdomain at the distribution in your DNS provider with a CNAME record targeting the distribution’s domain name (the dxxxxxxxxxxxx.cloudfront.net address).

Distribution alternative names

4. Configure S3 Bucket Policy

Update the S3 bucket policy with the policy generated by CloudFront so that the distribution is allowed to read objects from the bucket.

S3 bucket policy
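
For reference, the policy CloudFront generates for an Origin Access Control looks like this (the account ID and distribution ID below are placeholders; use the policy generated for your own distribution):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontServicePrincipal",
            "Effect": "Allow",
            "Principal": {
                "Service": "cloudfront.amazonaws.com"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::valetine-postcard-websites/*",
            "Condition": {
                "StringEquals": {
                    "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"
                }
            }
        }
    ]
}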

5. Create a Lambda@Edge Function

Finally, create a Lambda@Edge function to handle URL routing. This function modifies requests to point to the correct S3 folder based on the subdomain.

Lambda function creation

By default, the Lambda function’s execution role must have permission to write logs to CloudWatch. In my case this didn’t work out of the box and the function couldn’t write its logs to CloudWatch, so I created a new role for the function. You can check its permissions on the Configuration tab of your Lambda function.
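
If you hit the same problem, check two things on the role. Its trust policy must allow both the lambda.amazonaws.com and edgelambda.amazonaws.com service principals, which is a Lambda@Edge requirement, and the role needs the usual CloudWatch Logs permissions (the AWSLambdaBasicExecutionRole managed policy covers them). A working trust policy looks like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "lambda.amazonaws.com",
                    "edgelambda.amazonaws.com"
                ]
            },
            "Action": "sts:AssumeRole"
        }
    ]
}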

Here’s the JavaScript code for the Lambda@Edge function:

export const handler = async (event) => {
    // The CloudFront event is passed to the handler as its only parameter
    const request = event.Records[0].cf.request;
    // Logs are accessible in AWS CloudWatch
    console.log('Original request:', JSON.stringify(request, null, 2));
    const headers = request.headers;
    // Read the Host header to see which domain the request was made to
    const host = headers.host[0].value;
    console.log('Host:', host);

    // Extract the domain parts
    const parts = host.split('.');
    let subdomain, folderName;
    
    // Map the host name onto the S3 folder structure:
    // e.g. alan.postcard-gift.site -> subdomain 'alan', folder 'postcard-gift'
    if (parts.length > 2) {
        subdomain = parts[0];
        folderName = parts[1];
    } else {
        subdomain = '';
        folderName = parts[0];
    }

    console.log('Subdomain:', subdomain);
    console.log('Folder name:', folderName);

    // Hardcoded S3 bucket name: update this to your own bucket
    const bucketName = 'valetine-postcard-websites';

    // Point the request's origin at the bucket's S3 REST endpoint
    request.origin = {
        s3: {
            authMethod: 'none',
            domainName: `${bucketName}.s3.eu-central-1.amazonaws.com`,
            path: ''
        }
    };

    // Modify the URI to include the folder and subdomain path
    if (request.uri === '/' || request.uri.endsWith('/')) {
        request.uri = `/${folderName}/${subdomain}/index.html`;
    } else {
        request.uri = `/${folderName}/${subdomain}${request.uri}`;
    }

    console.log('Modified request:', JSON.stringify(request, null, 2));
    // The request now resolves to the correct folder in our S3 bucket
    return request;
};
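
Before deploying, you can sanity-check the handler locally with Node.js by feeding it a minimal fake CloudFront event (the file name index.mjs is an assumption; use whatever your handler file is called):

import { handler } from './index.mjs'; // adjust to your handler file

// Trimmed-down version of the event CloudFront sends on a viewer request
const fakeEvent = {
    Records: [{
        cf: {
            request: {
                uri: '/',
                headers: { host: [{ key: 'Host', value: 'alan.postcard-gift.site' }] }
            }
        }
    }]
};

const result = await handler(fakeEvent);
console.log(result.uri); // expected: /postcard-gift/alan/index.html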

Deploy the function to Lambda@Edge and associate it with the appropriate CloudFront event: select Viewer request. Note that Lambda@Edge functions must be created in the us-east-1 (N. Virginia) region, and CloudFront replicates a published version of the function to its edge locations.

Deploy to Lambda@Edge

Conclusion

Once everything is set up, your subdomains will correctly route to their respective folders in the S3 bucket. If you followed this guide step by step, you should now have a scalable and cost-effective way to host multiple static websites on AWS.