⏲️ Configuring UK Regional Settings on Windows Servers with PowerShell

When building out cloud-hosted or automated deployments of Windows Servers, especially for UK-based organisations, it’s easy to overlook regional settings. But these seemingly small configurations — like date/time formats, currency symbols, or keyboard layouts — can have a big impact on usability, application compatibility, and user experience.

In this post, I’ll show how I automate this using a simple PowerShell script that sets all relevant UK regional settings in one go.


🔍 Why Regional Settings Matter

Out of the box, Windows often defaults to en-US settings:

  • Date format becomes MM/DD/YYYY
  • Decimal separators switch to . instead of ,
  • Currency symbols use $
  • Time zones default to US-based settings
  • Keyboard layout defaults to US (which can be infuriating!)

For UK-based organisations, this can:

  • Cause confusion in logs or spreadsheets
  • Break date parsing in scripts or apps expecting DD/MM/YYYY
  • Result in the wrong characters being typed (e.g., @ vs ")
  • Require manual fixing after deployment

Automating this ensures consistency across environments, saves time, and avoids annoying regional mismatches.
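The date-parsing risk is easy to demonstrate. Here's a quick Python illustration (not part of the deployment script) of how the same string means two different dates under US and UK conventions:

```python
from datetime import datetime

raw = "03/04/2025"  # the same string, two different dates depending on locale

us_date = datetime.strptime(raw, "%m/%d/%Y")  # en-US reading: 4 March 2025
uk_date = datetime.strptime(raw, "%d/%m/%Y")  # en-GB reading: 3 April 2025

print(us_date.strftime("%Y-%m-%d"))  # 2025-03-04
print(uk_date.strftime("%Y-%m-%d"))  # 2025-04-03
```

A script or app that assumes the wrong convention will silently produce dates like these that are both "valid", just wrong, which is why standardising the locale at build time matters.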


🔧 Script Overview

I created a PowerShell script that:

  • Sets the system locale and input methods
  • Configures UK date/time formats
  • Applies the British English language pack (if needed)
  • Sets the time zone to GMT Standard Time (London)

The script can be run manually, included in provisioning pipelines, or dropped into automation tools like Task Scheduler or cloud-init processes.


✅ Prerequisites

To run this script, you should have:

  • Administrator privileges
  • PowerShell 5.1+ (default on most supported Windows Server versions)
  • Optional: Internet access (if language pack needs to be added)

🔹 The Script: Set-UKRegionalSettings.ps1

# Set system locale and formats to English (United Kingdom)
Set-WinSystemLocale -SystemLocale en-GB
Set-WinUserLanguageList -LanguageList en-GB -Force
Set-Culture en-GB
Set-WinHomeLocation -GeoId 242
Set-TimeZone -Id "GMT Standard Time"

# Optional reboot prompt
Write-Host "UK regional settings applied. A reboot is recommended for all changes to take effect."

🚀 How to Use It

✈️ Option 1: Manual Execution

  1. Open PowerShell as Administrator
  2. Run the script:
.\Set-UKRegionalSettings.ps1

🔢 Option 2: Include in Build Pipeline or Image

For Azure VMs or cloud images, consider running this as part of your deployment process via:

  • Custom Script Extension in ARM/Bicep
  • cloud-init or Terraform provisioners
  • Group Policy Startup Script

⚡ Quick Tips

  • Reboot after running to ensure all settings apply across UI and system processes.
  • For non-UK keyboards (like US physical hardware), you may also want to explicitly set InputLocale.
  • Want to validate the settings? Use:
Get-WinSystemLocale
Get-Culture
Get-WinUserLanguageList
Get-TimeZone

📂 Registry Verification: Per-User and Default Settings

[Image: Registry Editor screenshot]

If you're troubleshooting or validating the configuration for specific users, regional settings are stored in the Windows Registry under:

👤 For Each User Profile

HKEY_USERS\<SID>\Control Panel\International

You can find the user SIDs by looking under HKEY_USERS or using:

Get-ChildItem Registry::HKEY_USERS

🧵 For New Users (Default Profile)

HKEY_USERS\.DEFAULT\Control Panel\International

This determines what settings new user profiles inherit on first logon.

You can script changes here if needed, but always test carefully to avoid corrupting profile defaults.


🌟 Final Thoughts

Small tweaks like regional settings might seem minor, but they go a long way in making your Windows Server environments feel localised and ready for your users.

Automating them early in your build pipeline means one less thing to worry about during post-deployment configuration.

Let me know if you want a version of this that handles multi-user scenarios or works across multiple OS versions!


🕵️ Replacing SAS Tokens with User Assigned Managed Identity (UAMI) in AzCopy for Blob Uploads

Using Shared Access Signature (SAS) tokens with azcopy is common — but rotating tokens and handling them securely can be a hassle. To improve security and simplify our automation, I recently replaced SAS-based authentication in our scheduled AzCopy jobs with Azure User Assigned Managed Identity (UAMI).

In this post, I’ll walk through how to:

  • Replace AzCopy SAS tokens with managed identity authentication
  • Assign the right roles to the UAMI
  • Use azcopy login to authenticate non-interactively
  • Automate the whole process in PowerShell

🔍 Why Remove SAS Tokens?

SAS tokens are useful, but:

  • 🔑 They’re still secrets — and secrets can be leaked
  • 📅 They expire — which breaks automation when not rotated
  • 🔐 They grant broad access — unless scoped very carefully

Managed Identity is a much better approach when the copy job is running from within Azure (like an Azure VM or Automation account).

📖 This post is part of my Managed Identity Series — replacing secrets with identity-based authentication across Azure services.


🌟 Project Goal

Replace the use of SAS tokens in an AzCopy job that uploads files from a local UNC share to Azure Blob Storage — by using a User Assigned Managed Identity.


✅ Prerequisites

To follow along, you’ll need:

  • A User Assigned Managed Identity (UAMI)
  • A Windows Server or Azure VM to run the copy job
  • Access to a local source folder or UNC share (e.g., \\fileserver\data\export\)
  • AzCopy v10.7+ installed on the machine
  • Azure RBAC permissions to assign roles

ℹ️ Check AzCopy Version: Run azcopy --version to ensure you're using v10.7.0 or later, which is required for --identity-client-id support.


🔧 Step-by-Step Setup

🛠️ Step 1: Create the UAMI

✅ CLI
az identity create \
  --name my-azcopy-uami \
  --resource-group my-resource-group \
  --location <region>
✅ Portal
  1. Go to Managed Identities in the Azure Portal
  2. Click + Create and follow the wizard

🖇️ Step 2: Assign the UAMI to the Azure VM

AzCopy running on a VM must be able to assume the identity. Assign the UAMI to your VM:

✅ CLI
az vm identity assign \
  --name my-vm-name \
  --resource-group my-resource-group \
  --identities my-azcopy-uami
✅ Portal
  1. Navigate to the Virtual Machines blade
  2. Select the VM running your AzCopy script
  3. Under Settings, click Identity
  4. Go to the User assigned tab
  5. Click + Add, select your UAMI, then click Add

🔐 Step 3: Assign RBAC Permissions to UAMI

For AzCopy to function correctly with a UAMI, the following role assignments are recommended:

  • Storage Blob Data Contributor: Required for read/write blob operations
  • Storage Blob Data Reader: (Optional) For read-only scenarios or validation scripts
  • Reader: (Optional) For browsing or metadata-only permissions on the storage account

RBAC Tip: It may take up to 5 minutes for role assignments to propagate fully. If access fails initially, wait and retry.
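In automation, that propagation delay is best handled with a retry loop rather than a fixed wait. A minimal Python sketch of the pattern (the `try_upload` function here is a hypothetical stand-in for your actual AzCopy invocation):

```python
import time

def with_retries(operation, attempts=5, base_delay=2.0):
    """Retry an operation with exponential backoff, re-raising on final failure."""
    for attempt in range(attempts):
        try:
            return operation()
        except PermissionError:
            if attempt == attempts - 1:
                raise
            delay = base_delay * (2 ** attempt)  # 2s, 4s, 8s, ...
            time.sleep(delay)

# Stand-in for an AzCopy call that fails until RBAC has propagated.
state = {"calls": 0}
def try_upload():
    state["calls"] += 1
    if state["calls"] < 3:
        raise PermissionError("403 AuthorizationPermissionMismatch")
    return "uploaded"

result = with_retries(try_upload, base_delay=0.01)
print(result)  # uploaded
```

The same shape works in PowerShell with a `for` loop and `Start-Sleep`; the key point is backing off and retrying on 403s for the first few minutes after a role assignment.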

✅ CLI
az role assignment create \
  --assignee <client-id-or-object-id> \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<storage-account>/blobServices/default/containers/<container-name>"

az role assignment create \
  --assignee <client-id-or-object-id> \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<storage-account>"

az role assignment create \
  --assignee <client-id-or-object-id> \
  --role "Reader" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
✅ Portal
  1. Go to your Storage Account in the Azure Portal
  2. Click on the relevant container (or stay at the account level for broader scope)
  3. Open Access Control (IAM)
  4. Click + Add role assignment
  5. Repeat for each role, selecting it, assigning it to your UAMI, and clicking Save:
       • Storage Blob Data Contributor
       • Storage Blob Data Reader
       • Reader

🧪 Step 4: Test AzCopy Login Using UAMI

$clientId = "<your-uami-client-id>"
& "C:\azcopy\azcopy.exe" login --identity --identity-client-id $clientId

You should see a confirmation message that AzCopy has successfully logged in.

🔍 To verify how AzCopy is sourcing credentials, you can run:

azcopy env

This lists the environment variables AzCopy reads (such as AZCOPY_AUTO_LOGIN_TYPE), which helps confirm whether authentication is coming from the Managed Identity rather than a SAS token.


📁 Step 5: Upload Files Using AzCopy + UAMI

Here's the PowerShell script that copies all files from a local share to the Blob container:

$clientId = "<your-uami-client-id>"

# Login with Managed Identity
& "C:\azcopy\azcopy.exe" login --identity --identity-client-id $clientId

# Run the copy job
# Run the copy job
& "C:\azcopy\azcopy.exe" copy `
  "\\fileserver\data\export\" `
  "https://<your-storage-account>.blob.core.windows.net/<container-name>" `
  --overwrite=true `
  --from-to=LocalBlob `
  --blob-type=Detect `
  --put-md5 `
  --recursive `
  --log-level=INFO

💡 Continuation Note: PowerShell uses the backtick (`) for line continuation; the backslash continuation seen in many AzCopy examples is Bash syntax. UNC paths need only single backslashes in PowerShell strings.

This script can be scheduled using Task Scheduler or run on demand.


⏱️ Automate with Task Scheduler (Optional)

To automate the job:

  1. Open Task Scheduler on your VM
  2. Create a New Task (not a Basic Task)
  3. Under General, select "Run whether user is logged on or not"
  4. Under Actions, add a new action to run powershell.exe
  5. Set the arguments to point to your .ps1 script
  6. Ensure the AzCopy path is hardcoded in your script

🚑 Troubleshooting Common Errors

❌ 403 AuthorizationPermissionMismatch
  • Usually means the identity doesn’t have the correct role or the role hasn’t propagated yet
  • Double-check that:
       • The UAMI is assigned to the VM
       • The UAMI has Storage Blob Data Contributor on the correct container
  • Wait 2–5 minutes and try again
❌ azcopy : The term 'azcopy' is not recognized
  • AzCopy is not in the system PATH
  • Solution: Use the full path to azcopy.exe, like C:\azcopy\azcopy.exe

🛡️ Benefits of Switching to UAMI

  • ✅ No secrets or keys stored on disk
  • ✅ No manual token expiry issues
  • ✅ Access controlled via Azure RBAC
  • ✅ Easily scoped and auditable

🧼 Final Thoughts

Replacing AzCopy SAS tokens with UAMI is one of those small wins that pays dividends over time. Once set up, it's secure, robust, and hands-off.

Let me know if you'd like a variant of this that works from Azure Automation or a hybrid worker!



Replacing SQL Credentials with User Assigned Managed Identity (UAMI) in Azure SQL Managed Instance

Storing SQL usernames and passwords in application configuration files is still common practice — but it poses a significant security risk. As part of improving our cloud security posture, I recently completed a project to eliminate plain text credentials from our app connection strings by switching to Azure User Assigned Managed Identity (UAMI) authentication for our SQL Managed Instance.

In this post, I’ll walk through how to:

  • Securely connect to Azure SQL Managed Instance without using usernames or passwords
  • Use a User Assigned Managed Identity (UAMI) for authentication
  • Test this connection using the new Go-based sqlcmd CLI
  • Update real application code to remove SQL credentials

🔐 Why Replace SQL Credentials?

Hardcoded SQL credentials come with several downsides:

  • Security risk: Stored secrets can be compromised if not properly secured
  • Maintenance overhead: Rotating passwords across environments is cumbersome
  • Audit concerns: Plain text credentials often trigger compliance red flags

Azure Managed Identity solves this by providing a token-based, identity-first way to connect to services — no secrets required.

📖 This post is part of my Managed Identity Series — replacing secrets with identity-based authentication across Azure services.


⚙️ What is a User Assigned Managed Identity?

There are two types of Managed Identities in Azure:

  • System-assigned: Tied to the lifecycle of a specific resource (like a VM or App Service)
  • User-assigned: Standalone identity that can be attached to one or more resources

For this project, we used a User Assigned Managed Identity (UAMI) to allow our applications to authenticate against SQL without managing secrets.


🌟 Project Objective

Replace plain text SQL credentials in application connection strings with User Assigned Managed Identity (UAMI) for secure, best-practice authentication to Azure SQL Managed Instances.


✅ Prerequisites

To follow this guide, you’ll need:

  • An Azure SQL Managed Instance with Microsoft Entra (AAD) authentication enabled
  • A User Assigned Managed Identity (UAMI)
  • An Azure VM or App Service to host your app (or test client)
  • The Go-based sqlcmd CLI installed
    Install guide

🔧 Setting Up the User Assigned Managed Identity (UAMI)

Before connecting to Azure SQL using UAMI, ensure the following steps are completed:

  • Create the UAMI
  • Assign the UAMI to the Virtual Machine(s)
  • Configure Microsoft Entra authentication on the SQL Managed Instance
  • Grant SQL access to the UAMI

These steps can be completed via Azure CLI, PowerShell, or the Azure Portal.


🛠️ Step 1: Create the User Assigned Managed Identity (UAMI)

✅ CLI
az identity create \
  --name my-sql-uami \
  --resource-group my-rg \
  --location <region>

Save the Client ID and Object ID — you’ll need them later.

✅ Portal
  1. Go to Azure Portal → Search Managed Identities
  2. Click + Create
  3. Choose Subscription, Resource Group, and Region
  4. Name the identity (e.g., my-sql-uami)
  5. Click Review + Create

🖇️ Step 2: Assign the UAMI to a Virtual Machine

Attach the UAMI to:

  • The VM(s) running your application code
  • The VM used to test the connection
✅ CLI
az vm identity assign \
  --name my-vm-name \
  --resource-group my-rg \
  --identities my-sql-uami
✅ Portal
  1. Go to Virtual Machines → Select your VM
  2. Click Identity under Settings
  3. Go to the User assigned tab
  4. Click + Add → Select the UAMI
  5. Click Add

🔑 Step 3: Configure SQL Managed Instance for Microsoft Entra Authentication

  1. Set an Entra admin:
       • Go to your SQL MI → Azure AD admin blade
       • Click Set admin and choose a user or group
       • Save changes
  2. Ensure Directory Reader permissions:
       • Your SQL MI’s managed identity needs Directory Reader access
       • You can assign this role via Entra ID → Roles and administrators → Directory Readers

More details: Configure Entra authentication


📜 Step 4: (Optional) Assign Azure Role to the UAMI

This may be needed if the identity needs to access Azure resource metadata or use Azure CLI from the VM.

✅ CLI
az role assignment create \
  --assignee-object-id <uami-object-id> \
  --role "Reader" \
  --scope /subscriptions/<sub-id>/resourceGroups/<rg-name>
✅ Portal
  1. Go to the UAMI → Azure role assignments
  2. Click + Add role assignment
  3. Choose role (e.g., Reader)
  4. Set scope
  5. Click Save

🔑 Step 5: Grant SQL Access to the UAMI

Once the UAMI is assigned to the VM and Entra auth is enabled on SQL MI, log in with an Entra admin account and create a contained database user for the identity. The bracketed name is the UAMI's display name (e.g., my-sql-uami), not its client ID:

CREATE USER [my-sql-uami] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [my-sql-uami];
ALTER ROLE db_datawriter ADD MEMBER [my-sql-uami];

🧪 Step 6: Test the Connection Using sqlcmd

sqlcmd \
  -S <your-sql-mi>.database.windows.net \
  -d <database-name> \
  --authentication-method ActiveDirectoryManagedIdentity \
  -U <client-id-of-uami>

If successful, you’ll see the 1> prompt where you can execute SQL queries.


📊 Step 7: Update Application Code

Update your app to use the UAMI for authentication.

Example connection string for UAMI in C#:

string connectionString = @"Server=tcp:<your-sql-mi>.database.windows.net;" +
                          "Authentication=Active Directory Managed Identity;" +
                          "Encrypt=True;" +
                          "User Id=<your-uami-client-id>;" +
                          "Database=<your-db-name>;";

Make sure your code uses Microsoft.Data.SqlClient with AAD token support.

Or retrieve and assign the token programmatically:

var credential = new DefaultAzureCredential(new DefaultAzureCredentialOptions
{
    // Target the UAMI explicitly; otherwise DefaultAzureCredential may pick
    // the system-assigned identity or another credential in its chain
    ManagedIdentityClientId = "<your-uami-client-id>"
});
var token = await credential.GetTokenAsync(new TokenRequestContext(
    new[] { "https://database.windows.net/.default" }));

var connection = new SqlConnection("Server=<your-sql-mi>; Database=<your-db-name>; Encrypt=True;");
connection.AccessToken = token.Token;

🔒 Security Benefits

  • 🔐 No credentials stored
  • 🔁 No password rotation
  • 🛡️ Entra-integrated access control and auditing

✅ Summary

By switching to User Assigned Managed Identity, we removed credentials from connection strings and aligned SQL access with best practices for cloud identity and security.

Comments and feedback welcome!


Cloud Resume Challenge with Terraform: Final Reflections & Future Directions 🎯

Journey Complete: What We've Built 🏗️

Over the course of this blog series, we've successfully completed the Cloud Resume Challenge using Terraform as our infrastructure-as-code tool. Let's recap what we've accomplished:

  1. Set up our development environment with Terraform and AWS credentials
  2. Deployed a static website using S3, CloudFront, Route 53, and ACM
  3. Built a serverless backend API with API Gateway, Lambda, and DynamoDB
  4. Implemented CI/CD pipelines with GitHub Actions for automated deployments
  5. Added security enhancements like OIDC authentication and least-privilege IAM policies

The final architecture we've created looks like this:

[Image: basic project architecture diagram]

The most valuable aspect of this project is that we've built a completely automated, production-quality cloud solution. Every component is defined as code, enabling us to track changes, rollback if needed, and redeploy the entire infrastructure with minimal effort.

Key Learnings from the Challenge 🧠

Technical Skills Gained 💻

Throughout this challenge, I've gained significant technical skills:

  1. Terraform expertise: I've moved from basic understanding to writing modular, reusable infrastructure code
  2. AWS service integration: Learned how multiple AWS services work together to create a cohesive system
  3. CI/CD implementation: Set up professional GitHub Actions workflows for continuous deployment
  4. Security best practices: Implemented OIDC, least privilege, encryption, and more
  5. Serverless architecture: Built and connected serverless components for a scalable, cost-effective solution

Unexpected Challenges & Solutions 🔄

The journey wasn't without obstacles. Here are some challenges I faced and how I overcame them:

1. State Management Complexity

Challenge: As the project grew, managing Terraform state became more complex, especially when working across different environments.

Solution: I restructured the project to use workspaces and remote state with careful output references between modules. This improved state organization and made multi-environment deployments more manageable.

2. CloudFront Cache Invalidation

Challenge: Updates to the website weren't immediately visible due to CloudFront caching.

Solution: Implemented proper cache invalidation in the CI/CD pipeline and set appropriate cache behaviors for different file types.

3. CORS Configuration

Challenge: The frontend JavaScript couldn't connect to the API due to CORS issues.

Solution: Added comprehensive CORS handling at both the API Gateway and Lambda levels, ensuring proper headers were returned.
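To make the Lambda side of that concrete, a handler in Python might return the CORS headers like this. This is a hedged sketch, not the project's actual visitor_counter.py; the allowed origin, the hardcoded count, and the response shape are illustrative assumptions:

```python
import json

def lambda_handler(event, context):
    # Return the visitor count along with the CORS headers the browser needs.
    # In the real function the count would come from DynamoDB; it is
    # hardcoded here purely for illustration.
    return {
        "statusCode": 200,
        "headers": {
            "Access-Control-Allow-Origin": "https://resume.example.com",  # placeholder domain
            "Access-Control-Allow-Methods": "GET,OPTIONS",
            "Access-Control-Allow-Headers": "Content-Type",
        },
        "body": json.dumps({"count": 123}),
    }

response = lambda_handler({}, None)
print(response["headers"]["Access-Control-Allow-Origin"])
```

Matching headers also need to be enabled at the API Gateway level (including the OPTIONS preflight), since either layer can block the browser on its own.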

4. CI/CD Authentication Security

Challenge: Initially used long-lived AWS credentials in GitHub Secrets, which posed security risks.

Solution: Replaced with OIDC for keyless authentication between GitHub Actions and AWS, eliminating credential management concerns.

Real-World Applications of This Project 🌐

The skills demonstrated in this challenge directly translate to real-world cloud engineering roles:

1. Infrastructure as Code Expertise

The ability to define, version, and automate infrastructure is increasingly essential in modern IT environments. This project showcases expertise with Terraform that can be applied to any cloud provider or on-premises infrastructure.

2. DevOps Pipeline Creation

Setting up CI/CD workflows that automate testing and deployment demonstrates key DevOps skills that organizations need to accelerate their development cycles.

3. Serverless Architecture Design

The backend API implementation shows understanding of event-driven, serverless architecture patterns that are becoming standard for new cloud applications.

4. Security Implementation

The security considerations throughout the project - from IAM roles to OIDC authentication - demonstrate the ability to build secure systems from the ground up.

Maintaining Your Cloud Resume 🔧

Now that your resume is live, here are some tips for maintaining it:

1. Regular Updates

Set a schedule to update both your resume content and the underlying infrastructure. I recommend:

  • Monthly content refreshes to keep your experience and skills current
  • Quarterly infrastructure reviews to apply security patches and update dependencies
  • Annual architecture reviews to consider new AWS services or features

2. Cost Management

While this solution is relatively inexpensive, it's good practice to set up AWS Budgets and alerts to monitor costs. My current monthly costs are approximately:

  • S3: ~$0.10 for storage
  • CloudFront: ~$0.50 for data transfer
  • Route 53: $0.50 for hosted zone
  • Lambda: Free tier covers typical usage
  • DynamoDB: Free tier covers typical usage
  • API Gateway: ~$1.00 for API calls
  • Total: ~$2.10/month

3. Monitoring and Alerting

I've set up CloudWatch alarms for:

  • API errors exceeding normal thresholds
  • Unusual traffic patterns that might indicate abuse
  • Lambda function failures

Consider adding application performance monitoring tools like AWS X-Ray for deeper insights.

Future Enhancements 🚀

There are many ways to extend this project further:

1. Content Management System Integration

Add a headless CMS like Contentful or Sanity to make resume updates easier without needing to edit HTML directly:

module "contentful_integration" {
  source = "./modules/contentful"

  api_key     = var.contentful_api_key
  space_id    = var.contentful_space_id
  environment = var.environment
}

resource "aws_lambda_function" "content_sync" {
  function_name = "resume-content-sync-${var.environment}"
  handler       = "index.handler"
  runtime       = "nodejs20.x"
  role          = aws_iam_role.content_sync_role.arn

  environment {
    variables = {
      CONTENTFUL_API_KEY = var.contentful_api_key
      CONTENTFUL_SPACE_ID = var.contentful_space_id
      S3_BUCKET = module.frontend.website_bucket_name
    }
  }
}

2. Advanced Analytics

Implement sophisticated visitor analytics beyond simple counting:

resource "aws_kinesis_firehose_delivery_stream" "visitor_analytics" {
  name        = "resume-visitor-analytics-${var.environment}"
  destination = "extended_s3"

  extended_s3_configuration {
    role_arn   = aws_iam_role.firehose_role.arn
    bucket_arn = aws_s3_bucket.analytics.arn

    processing_configuration {
      enabled = "true"

      processors {
        type = "Lambda"

        parameters {
          parameter_name  = "LambdaArn"
          parameter_value = aws_lambda_function.analytics_processor.arn
        }
      }
    }
  }
}

resource "aws_athena_workgroup" "analytics" {
  name = "resume-analytics-${var.environment}"

  configuration {
    result_configuration {
      output_location = "s3://${aws_s3_bucket.analytics_results.bucket}/results/"
    }
  }
}

3. Multi-Region Deployment

Enhance reliability and performance by deploying to multiple AWS regions:

module "frontend_us_east_1" {
  source = "./modules/frontend"

  providers = {
    aws = aws.us_east_1
  }

  # Configuration for US East region
}

module "frontend_eu_west_1" {
  source = "./modules/frontend"

  providers = {
    aws = aws.eu_west_1
  }

  # Configuration for EU West region
}

resource "aws_route53_health_check" "primary_region" {
  fqdn              = module.frontend_us_east_1.cloudfront_domain_name
  port              = 443
  type              = "HTTPS"
  resource_path     = "/"
  failure_threshold = 3
  request_interval  = 30
}

resource "aws_route53_record" "global" {
  zone_id = data.aws_route53_zone.selected.zone_id
  name    = var.domain_name
  type    = "CNAME"

  failover_routing_policy {
    type = "PRIMARY"
  }

  health_check_id = aws_route53_health_check.primary_region.id
  set_identifier  = "primary"
  records         = [module.frontend_us_east_1.cloudfront_domain_name]
  ttl             = 300
}

4. Infrastructure Testing

Add comprehensive testing using Terratest:

package test

import (
    "testing"
    "github.com/gruntwork-io/terratest/modules/terraform"
    "github.com/stretchr/testify/assert"
)

func TestResumeFrontend(t *testing.T) {
    terraformOptions := terraform.WithDefaultRetryableErrors(t, &terraform.Options{
        TerraformDir: "../modules/frontend",
        Vars: map[string]interface{}{
            "environment": "test",
            "domain_name": "test.example.com",
        },
    })

    defer terraform.Destroy(t, terraformOptions)
    terraform.InitAndApply(t, terraformOptions)

    // Verify outputs
    bucketName := terraform.Output(t, terraformOptions, "website_bucket_name")
    assert.Contains(t, bucketName, "resume-website-test")
}

Career Impact & Personal Growth 📈

Completing this challenge has had a significant impact on my career development:

Technical Growth

I've moved from basic cloud knowledge to being able to architect and implement complex, multi-service solutions. The hands-on experience with Terraform has been particularly valuable, as it's a highly sought-after skill in the job market.

Portfolio Enhancement

This project now serves as both my resume and a demonstration of my cloud engineering capabilities. I've included the GitHub repository links on my resume, allowing potential employers to see the code behind the deployment.

Community Engagement

Sharing this project through blog posts has connected me with the broader cloud community. The feedback and discussions have been invaluable for refining my approach and learning from others.

Final Thoughts 💭

The Cloud Resume Challenge has been an invaluable learning experience. By implementing it with Terraform, I've gained practical experience with both AWS services and infrastructure as code - skills that are directly applicable to professional cloud engineering roles.

What makes this challenge particularly powerful is how it combines so many aspects of modern cloud development:

  • Front-end web development
  • Back-end serverless APIs
  • Infrastructure as code
  • CI/CD automation
  • Security implementation
  • DNS configuration
  • Content delivery networks

If you're following along with this series, I encourage you to customize and extend the project to showcase your unique skills and interests. The foundational architecture we've built provides a flexible platform that can evolve with your career.

For those just starting their cloud journey, this challenge offers a perfect blend of practical skills in a realistic project that demonstrates end-to-end capabilities. It's far more valuable than isolated tutorials or theoretical knowledge alone.

The cloud engineering field continues to evolve rapidly, but the principles we've applied throughout this project - automation, security, scalability, and operational excellence - remain constants regardless of which specific technologies are in favor.

What's Next? 🔮

While this concludes our Cloud Resume Challenge series, my cloud learning journey continues. Some areas I'm exploring next include:

  • Kubernetes and container orchestration
  • Infrastructure testing frameworks
  • Cloud cost optimization
  • Multi-cloud deployments
  • Infrastructure security scanning
  • Service mesh implementations

I hope this series has been helpful in your own cloud journey. Feel free to reach out with questions or to share your own implementations of the challenge!


This post concludes our Cloud Resume Challenge with Terraform series. Thanks for following along!

Want to see the Cloud Resume Challenge in action? Visit my resume website and check out the GitHub repositories for the complete code.


Cloud Resume Challenge with Terraform: Automating Deployments with GitHub Actions ⚡

In our previous posts, we built the frontend and backend components of our cloud resume project. Now it's time to take our implementation to the next level by implementing continuous integration and deployment (CI/CD) with GitHub Actions.

Why CI/CD Is Critical for Cloud Engineers 🛠️

When I first started this challenge, I manually ran terraform apply every time I made a change. This quickly became tedious and error-prone. As a cloud engineer, I wanted to demonstrate a professional approach to infrastructure management by implementing proper CI/CD pipelines.

Automating deployments offers several key benefits:

  • Consistency: Every deployment follows the same process
  • Efficiency: No more manual steps or waiting around
  • Safety: Automated tests catch issues before they reach production
  • Auditability: Each change is tracked with a commit and workflow run

This approach mirrors how professional cloud teams work and is a crucial skill for any cloud engineer.

CI/CD Architecture Overview 🏗️

Here's a visual representation of our CI/CD pipelines:

┌─────────────┐          ┌─────────────────┐          ┌─────────────┐
│             │          │                 │          │             │
│  Developer  ├─────────►│  GitHub Actions ├─────────►│  AWS Cloud  │
│  Workstation│          │                 │          │             │
└─────────────┘          └─────────────────┘          └─────────────┘
       │                          │                          ▲
       │                          │                          │
       ▼                          ▼                          │
┌─────────────┐          ┌─────────────────┐                 │
│             │          │                 │                 │
│   GitHub    │          │  Terraform      │                 │
│ Repositories│          │  Plan & Apply   ├─────────────────┘
│             │          │                 │
└─────────────┘          └─────────────────┘

We'll set up separate workflows for:

  1. Frontend deployment: Updates the S3 website content and invalidates CloudFront
  2. Backend deployment: Runs Terraform to update our API infrastructure
  3. Smoke tests: Verifies that both components are working correctly after deployment
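The smoke-test step boils down to asserting on the API's response after each deploy. A minimal Python sketch of that check (the endpoint response shape with a `count` field is a hypothetical example, not the project's actual contract):

```python
import json

def check_counter_response(status_code: int, body: str) -> int:
    """Validate a visitor-counter API response and return the count."""
    assert status_code == 200, f"unexpected status: {status_code}"
    payload = json.loads(body)
    assert "count" in payload, "missing 'count' field"
    count = int(payload["count"])
    assert count >= 0, "count should be non-negative"
    return count

# Simulated response for demonstration; in the real smoke test you would
# fetch this from the deployed API with urllib.request or requests.
sample = check_counter_response(200, '{"count": 42}')
print(sample)  # 42
```

Wired into the workflow as a post-deploy job, a failure here fails the pipeline before a broken deployment goes unnoticed.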

Setting Up GitHub Repositories 📁

For this challenge, I've created two repositories:

  • cloud-resume-frontend: Contains HTML, CSS, JavaScript, and frontend deployment workflows
  • cloud-resume-backend: Contains Terraform configuration, Lambda code, and backend deployment workflows

Repository Structure

Here's how I've organized my repositories:

Frontend Repository:

cloud-resume-frontend/
├── .github/
│   └── workflows/
│       └── deploy.yml
├── website/
│   ├── index.html
│   ├── styles.css
│   ├── counter.js
│   └── error.html
├── tests/
│   └── cypress/
│       └── integration/
│           └── counter.spec.js
└── README.md

Backend Repository:

cloud-resume-backend/
├── .github/
│   └── workflows/
│       └── deploy.yml
├── lambda/
│   └── visitor_counter.py
├── terraform/
│   ├── modules/
│   │   ├── backend/
│   │   │   ├── api_gateway.tf
│   │   │   ├── dynamodb.tf
│   │   │   ├── lambda.tf
│   │   │   ├── variables.tf
│   │   │   └── outputs.tf
│   ├── environments/
│   │   ├── dev/
│   │   │   └── main.tf
│   │   └── prod/
│   │       └── main.tf
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
├── tests/
│   └── test_visitor_counter.py
└── README.md

Securing AWS Authentication in GitHub Actions 🔒

Before setting up our workflows, we need to address a critical security concern: how to securely authenticate GitHub Actions with AWS.

In the past, many tutorials recommended storing AWS access keys as GitHub Secrets. This approach works but has significant security drawbacks:

  • Long-lived credentials are a security risk
  • Credential rotation is manual and error-prone
  • Access is typically overly permissive

Instead, I'll implement a more secure approach using OpenID Connect (OIDC) for keyless authentication between GitHub Actions and AWS.

Setting Up OIDC Authentication

First, create an IAM OIDC provider for GitHub in your AWS account:

# oidc-provider.tf
resource "aws_iam_openid_connect_provider" "github" {
  url             = "https://token.actions.githubusercontent.com"
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = ["6938fd4d98bab03faadb97b34396831e3780aea1"]
}

Then, create an IAM role that GitHub Actions can assume:

# oidc-role.tf
resource "aws_iam_role" "github_actions" {
  name = "github-actions-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRoleWithWebIdentity"
        Effect = "Allow"
        Principal = {
          Federated = aws_iam_openid_connect_provider.github.arn
        }
        Condition = {
          StringEquals = {
            "token.actions.githubusercontent.com:aud" = "sts.amazonaws.com"
          }
          StringLike = {
            "token.actions.githubusercontent.com:sub" = "repo:${var.github_org}/${var.github_repo}:*"
          }
        }
      }
    ]
  })
}
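
As an aside, the trust policy above references `var.github_org` and `var.github_repo`, which aren't defined anywhere in this post. Minimal definitions (names illustrative, matching the references above) would look like:

```hcl
# variables.tf (illustrative)
variable "github_org" {
  description = "GitHub organisation or user that owns the repository"
  type        = string
}

variable "github_repo" {
  description = "Repository whose workflows may assume the GitHub Actions role"
  type        = string
}
```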

# Attach policies to the role
resource "aws_iam_role_policy_attachment" "terraform_permissions" {
  role       = aws_iam_role.github_actions.name
  policy_arn = aws_iam_policy.terraform_permissions.arn
}

resource "aws_iam_policy" "terraform_permissions" {
  name        = "terraform-deployment-policy"
  description = "Policy for Terraform deployments via GitHub Actions"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "s3:*",
          "cloudfront:*",
          "route53:*",
          "acm:*",
          "lambda:*",
          "apigateway:*",
          "dynamodb:*",
          "logs:*",
          "iam:GetRole",
          "iam:PassRole",
          "iam:CreateRole",
          "iam:DeleteRole",
          "iam:PutRolePolicy",
          "iam:DeleteRolePolicy",
          "iam:AttachRolePolicy",
          "iam:DetachRolePolicy"
        ]
        Effect   = "Allow"
        Resource = "*"
      }
    ]
  })
}

For a production environment, I would use more fine-grained permissions, but this policy works for our demonstration.
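A note on the StringLike condition in the trust policy: its `*` is a shell-style wildcard over the token's `sub` claim, so any ref or event from the named repository is accepted while other repositories are rejected. Python's `fnmatch` mimics the matching (org/repo names below are placeholders, not values from this post):

```python
from fnmatch import fnmatchcase

# Hypothetical subject (sub) claims GitHub could mint for workflow runs
claims = [
    "repo:my-org/cloud-resume-backend:ref:refs/heads/main",
    "repo:my-org/cloud-resume-backend:pull_request",
    "repo:other-org/unrelated-repo:ref:refs/heads/main",
]

# The StringLike condition's trailing * behaves like a shell wildcard
pattern = "repo:my-org/cloud-resume-backend:*"

allowed = [c for c in claims if fnmatchcase(c, pattern)]
assert allowed == claims[:2]  # only this repository's runs may assume the role
```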

Implementing Frontend CI/CD Workflow 🔄

Let's create a GitHub Actions workflow for our frontend repository. Create a file at .github/workflows/deploy.yml:

name: Deploy Frontend

on:
  push:
    branches:
      - main
    paths:
      - 'website/**'
      - '.github/workflows/deploy.yml'

  workflow_dispatch:

permissions:
  id-token: write
  contents: read

jobs:
  deploy:
    name: 'Deploy to S3 and Invalidate CloudFront'
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Repository
        uses: actions/checkout@v3

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: arn:aws:iam::${{ secrets.AWS_ACCOUNT_ID }}:role/github-actions-role
          aws-region: us-east-1

      - name: Deploy to S3
        run: |
          aws s3 sync website/ s3://${{ secrets.S3_BUCKET_NAME }} --delete

      - name: Invalidate CloudFront Cache
        run: |
          aws cloudfront create-invalidation --distribution-id ${{ secrets.CLOUDFRONT_DISTRIBUTION_ID }} --paths "/*"

  test:
    name: 'Run Smoke Tests'
    needs: deploy
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Repository
        uses: actions/checkout@v3

      - name: Install and Run Cypress Tests
        uses: cypress-io/github-action@v5
        with:
          install-command: npm install
          config: baseUrl=${{ secrets.WEBSITE_URL }}

This workflow:

  1. Authenticates using OIDC
  2. Syncs website files to the S3 bucket
  3. Invalidates the CloudFront cache
  4. Runs Cypress tests to verify the site is working

Creating a Cypress Test for the Frontend

Let's create a simple Cypress test to verify that our visitor counter is working. First, create a package.json file in the root of your frontend repository:

{
  "name": "cloud-resume-frontend",
  "version": "1.0.0",
  "description": "Frontend for Cloud Resume Challenge",
  "scripts": {
    "test": "cypress open",
    "test:ci": "cypress run"
  },
  "devDependencies": {
    "cypress": "^12.0.0"
  }
}

Then create a Cypress test at tests/cypress/integration/counter.spec.js:

describe('Resume Website Tests', () => {
  beforeEach(() => {
    // Visit the home page before each test
    cy.visit('/');
  });

  it('should load the resume page', () => {
    // Check that we have a title
    cy.get('h1').should('be.visible');

    // Check that key sections exist
    cy.contains('Experience').should('be.visible');
    cy.contains('Education').should('be.visible');
    cy.contains('Skills').should('be.visible');
  });

  it('should load and display the visitor counter', () => {
    // Check that the counter element exists
    cy.get('#count').should('exist');

    // Wait for the counter to finish loading. Don't assert that the text
    // excludes the digit 0 — a legitimate count such as 10 or 205 contains it.
    cy.get('#count', { timeout: 10000 })
      .should('not.contain', 'Loading');

    // Verify the counter shows a number
    cy.get('#count').invoke('text').then(parseFloat)
      .should('be.gt', 0);
  });
});

Implementing Backend CI/CD Workflow 🔄

Now, let's create a GitHub Actions workflow for our backend repository. Create a file at .github/workflows/deploy.yml:

name: Deploy Backend

on:
  push:
    branches:
      - main
    paths:
      - 'lambda/**'
      - 'terraform/**'
      - '.github/workflows/deploy.yml'

  pull_request:
    branches:
      - main

  workflow_dispatch:

permissions:
  id-token: write
  contents: read
  pull-requests: write

jobs:
  test:
    name: 'Run Python Tests'
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Repository
        uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.9'

      - name: Install Dependencies
        run: |
          python -m pip install --upgrade pip
          pip install pytest boto3 moto

      - name: Run Tests
        run: |
          python -m pytest tests/

  validate:
    name: 'Validate Terraform'
    runs-on: ubuntu-latest
    needs: test

    steps:
      - name: Checkout Repository
        uses: actions/checkout@v3

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.2.0

      - name: Terraform Format
        working-directory: ./terraform
        run: terraform fmt -check

      - name: Terraform Init
        working-directory: ./terraform
        run: terraform init -backend=false

      - name: Terraform Validate
        working-directory: ./terraform
        run: terraform validate

  plan:
    name: 'Terraform Plan'
    runs-on: ubuntu-latest
    needs: validate
    if: github.event_name == 'pull_request' || github.event_name == 'push' || github.event_name == 'workflow_dispatch'
    environment: dev

    steps:
      - name: Checkout Repository
        uses: actions/checkout@v3

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: arn:aws:iam::${{ secrets.AWS_ACCOUNT_ID }}:role/github-actions-role
          aws-region: us-east-1

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.2.0

      - name: Terraform Init
        working-directory: ./terraform
        run: terraform init -backend-config="bucket=${{ secrets.TF_STATE_BUCKET }}" -backend-config="key=${{ secrets.TF_STATE_KEY }}" -backend-config="region=us-east-1"

      - name: Terraform Plan
        id: plan
        working-directory: ./terraform
        run: terraform plan -var="environment=dev" -var="domain_name=${{ secrets.DOMAIN_NAME }}" -out=tfplan

      - name: Comment Plan on PR
        uses: actions/github-script@v6
        if: github.event_name == 'pull_request'
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |
            const output = `#### Terraform Plan 📖\`${{ steps.plan.outcome }}\`

            <details><summary>Show Plan</summary>

            \`\`\`terraform
            ${{ steps.plan.outputs.stdout }}
            \`\`\`

            </details>`;
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: output
            })

      - name: Upload Plan Artifact
        uses: actions/upload-artifact@v3
        with:
          name: tfplan
          path: ./terraform/tfplan

  apply:
    name: 'Terraform Apply'
    runs-on: ubuntu-latest
    needs: plan
    if: (github.event_name == 'push' && github.ref == 'refs/heads/main') || github.event_name == 'workflow_dispatch'
    environment: dev

    steps:
      - name: Checkout Repository
        uses: actions/checkout@v3

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: arn:aws:iam::${{ secrets.AWS_ACCOUNT_ID }}:role/github-actions-role
          aws-region: us-east-1

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.2.0

      - name: Terraform Init
        working-directory: ./terraform
        run: terraform init -backend-config="bucket=${{ secrets.TF_STATE_BUCKET }}" -backend-config="key=${{ secrets.TF_STATE_KEY }}" -backend-config="region=us-east-1"

      - name: Download Plan Artifact
        uses: actions/download-artifact@v3
        with:
          name: tfplan
          path: ./terraform

      - name: Terraform Apply
        working-directory: ./terraform
        run: terraform apply -auto-approve tfplan

  test-api:
    name: 'Test API Deployment'
    runs-on: ubuntu-latest
    needs: apply
    environment: dev

    steps:
      - name: Checkout Repository
        uses: actions/checkout@v3

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: arn:aws:iam::${{ secrets.AWS_ACCOUNT_ID }}:role/github-actions-role
          aws-region: us-east-1

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.2.0
          terraform_wrapper: false  # plain stdout so command substitution captures the value

      - name: Fetch API Endpoint
        working-directory: ./terraform
        # The stack is managed by Terraform, not CloudFormation, so read the
        # endpoint from Terraform outputs (assumes an "api_endpoint" output)
        run: |
          terraform init -backend-config="bucket=${{ secrets.TF_STATE_BUCKET }}" -backend-config="key=${{ secrets.TF_STATE_KEY }}" -backend-config="region=us-east-1"
          API_ENDPOINT=$(terraform output -raw api_endpoint)
          echo "API_ENDPOINT=$API_ENDPOINT" >> $GITHUB_ENV

      - name: Test API Response
        run: |
          response=$(curl -s "$API_ENDPOINT/count")
          echo "API Response: $response"

          # Check if the response contains a count field. Test inside the if
          # so a grep miss doesn't abort the step under the shell's -e flag.
          if echo "$response" | grep -q '"count":'; then
            echo "API test successful"
          else
            echo "API test failed"
            exit 1
          fi

This workflow is more complex and includes:

  1. Running Python tests for the Lambda function
  2. Validating Terraform syntax and formatting
  3. Planning Terraform changes (with PR comments for review)
  4. Applying Terraform changes to the environment
  5. Testing the deployed API to ensure it's functioning
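
The Python tests run in step 1 live at tests/test_visitor_counter.py, which isn't reproduced here. One dependency-free way to exercise the counter logic is to inject a fake table in place of the real DynamoDB resource; the sketch below uses illustrative names, not the actual handler (the real suite uses moto, installed in the workflow above):

```python
class FakeTable:
    """Mimics the subset of boto3 Table.update_item the counter logic uses."""
    def __init__(self):
        self.items = {"total": {"visit_id": "total", "count": 0}}

    def update_item(self, Key, UpdateExpression, ExpressionAttributeNames,
                    ExpressionAttributeValues, ReturnValues):
        item = self.items[Key["visit_id"]]
        item["count"] += ExpressionAttributeValues[":inc"]
        return {"Attributes": dict(item)}

def increment_count(table):
    """Core of the visitor-counter logic: atomically bump and return the count."""
    response = table.update_item(
        Key={"visit_id": "total"},
        UpdateExpression="ADD #c :inc",
        ExpressionAttributeNames={"#c": "count"},
        ExpressionAttributeValues={":inc": 1},
        ReturnValues="UPDATED_NEW",
    )
    return response["Attributes"]["count"]

table = FakeTable()
assert increment_count(table) == 1
assert increment_count(table) == 2
```

Because the fake exposes the same `update_item` signature, the test stays valid whether the handler later talks to moto or to real DynamoDB.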

Implementing Multi-Environment Deployments 🌍

One of the most valuable CI/CD patterns is deploying to multiple environments. Let's modify our backend workflow to support both development and production environments:

# Additional job for production deployment after dev is successful
  promote-to-prod:
    name: 'Promote to Production'
    runs-on: ubuntu-latest
    needs: test-api
    environment: production
    if: github.event_name == 'workflow_dispatch'

    steps:
      - name: Checkout Repository
        uses: actions/checkout@v3

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: arn:aws:iam::${{ secrets.AWS_ACCOUNT_ID }}:role/github-actions-role
          aws-region: us-east-1

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.2.0
          terraform_wrapper: false  # plain stdout so command substitution captures values

      - name: Terraform Init
        working-directory: ./terraform/environments/prod
        run: terraform init -backend-config="bucket=${{ secrets.TF_STATE_BUCKET }}" -backend-config="key=${{ secrets.TF_STATE_KEY_PROD }}" -backend-config="region=us-east-1"

      - name: Terraform Plan
        working-directory: ./terraform/environments/prod
        run: terraform plan -var="environment=prod" -var="domain_name=${{ secrets.DOMAIN_NAME_PROD }}" -out=tfplan

      - name: Terraform Apply
        working-directory: ./terraform/environments/prod
        run: terraform apply -auto-approve tfplan

      - name: Test Production API
        working-directory: ./terraform/environments/prod
        run: |
          # Terraform manages this stack, so read the endpoint from its outputs
          # (assumes an "api_endpoint" output in the prod configuration)
          API_ENDPOINT=$(terraform output -raw api_endpoint)
          response=$(curl -s "$API_ENDPOINT/count")
          echo "API Response: $response"

          # Check if the response contains a count field
          if echo "$response" | grep -q '"count":'; then
            echo "Production API test successful"
          else
            echo "Production API test failed"
            exit 1
          fi

Terraform Structure for Multiple Environments

To support multiple environments, I've reorganized my Terraform configuration:

terraform/
├── modules/
│   ├── backend/
│   │   ├── api_gateway.tf
│   │   ├── dynamodb.tf
│   │   ├── lambda.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
├── environments/
│   ├── dev/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   └── prod/
│       ├── main.tf
│       ├── variables.tf
│       └── outputs.tf

Each environment directory contains its own Terraform configuration that references the shared modules.
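
For example, environments/dev/main.tf can be little more than a module call; a sketch (variable and module names are illustrative):

```hcl
# environments/dev/main.tf (illustrative)
terraform {
  backend "s3" {}  # bucket, key, and region supplied via -backend-config in CI
}

module "backend" {
  source      = "../../modules/backend"
  environment = "dev"
  domain_name = var.domain_name
}
```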

Implementing GitHub Security Best Practices 🔒

To enhance the security of our CI/CD pipelines, I've implemented several additional measures:

1. Supply Chain Security with Dependabot

Create a file at .github/dependabot.yml in both repositories:

version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
    open-pull-requests-limit: 10

  # For frontend
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
    open-pull-requests-limit: 10

  # For backend
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"
    open-pull-requests-limit: 10

This configuration automatically updates dependencies and identifies security vulnerabilities.

2. Code Scanning with CodeQL

Create a file at .github/workflows/codeql.yml in each repository, trimming the language matrix to the languages that repository actually contains:

name: "CodeQL"

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
  schedule:
    - cron: '0 0 * * 0'  # Run weekly

jobs:
  analyze:
    name: Analyze
    runs-on: ubuntu-latest
    permissions:
      actions: read
      contents: read
      security-events: write

    strategy:
      fail-fast: false
      matrix:
        language: [ 'python', 'javascript' ]

    steps:
    - name: Checkout repository
      uses: actions/checkout@v3

    - name: Initialize CodeQL
      uses: github/codeql-action/init@v2
      with:
        languages: ${{ matrix.language }}

    - name: Perform CodeQL Analysis
      uses: github/codeql-action/analyze@v2

This workflow scans our code for security vulnerabilities and coding problems.

3. Branch Protection Rules

I've set up branch protection rules for the main branch in both repositories:

  • Require pull request reviews before merging
  • Require status checks to pass before merging
  • Require signed commits
  • Do not allow bypassing the above settings

Adding Verification Tests to the Workflow 🧪

In addition to unit tests, I've added end-to-end integration tests to verify that the frontend and backend work together correctly:

1. Frontend-Backend Integration Test

Create a file at tests/integration-test.js in the frontend repository:

const axios = require('axios');
const assert = require('assert');

// URLs to test - these should be passed as environment variables
const WEBSITE_URL = process.env.WEBSITE_URL || 'https://resume.yourdomain.com';
const API_URL = process.env.API_URL || 'https://api.yourdomain.com/count';

// Test that the API returns a valid response
async function testAPI() {
  try {
    console.log(`Testing API at ${API_URL}`);
    const response = await axios.get(API_URL);

    // Verify the API response contains a count
    assert(response.status === 200, `API returned status ${response.status}`);
    assert(response.data.count !== undefined, 'API response missing count field');
    assert(typeof response.data.count === 'number', 'Count is not a number');

    console.log(`API test successful. Count: ${response.data.count}`);
    return true;
  } catch (error) {
    console.error('API test failed:', error.message);
    return false;
  }
}

// Test that the website loads and contains necessary elements
async function testWebsite() {
  try {
    console.log(`Testing website at ${WEBSITE_URL}`);
    const response = await axios.get(WEBSITE_URL);

    // Verify the website loads
    assert(response.status === 200, `Website returned status ${response.status}`);

    // Check that the page contains some expected content
    assert(response.data.includes('<html'), 'Response is not HTML');
    assert(response.data.includes('id="count"'), 'Counter element not found');

    console.log('Website test successful');
    return true;
  } catch (error) {
    console.error('Website test failed:', error.message);
    return false;
  }
}

// Run all tests
async function runTests() {
  const apiResult = await testAPI();
  const websiteResult = await testWebsite();

  if (apiResult && websiteResult) {
    console.log('All integration tests passed!');
    process.exit(0);
  } else {
    console.error('Some integration tests failed');
    process.exit(1);
  }
}

// Run the tests
runTests();

Then add a step to the workflow:

- name: Run Integration Tests
  run: |
    npm install axios
    node tests/integration-test.js
  env:
    WEBSITE_URL: ${{ secrets.WEBSITE_URL }}
    API_URL: ${{ secrets.API_URL }}

Implementing Secure GitHub Action Secrets 🔐

For our GitHub Actions workflows, I've set up the following repository secrets:

  • AWS_ACCOUNT_ID: The AWS account ID used for OIDC authentication
  • S3_BUCKET_NAME: The name of the S3 bucket for the website
  • CLOUDFRONT_DISTRIBUTION_ID: The ID of the CloudFront distribution
  • WEBSITE_URL: The URL of the deployed website
  • API_URL: The URL of the deployed API
  • TF_STATE_BUCKET: The bucket for Terraform state
  • TF_STATE_KEY: The key for Terraform state (dev)
  • TF_STATE_KEY_PROD: The key for Terraform state (prod)
  • DOMAIN_NAME: The domain name for the dev environment
  • DOMAIN_NAME_PROD: The domain name for the prod environment

These secrets are protected by GitHub and only exposed to authorized workflow runs.

Managing Manual Approvals for Production Deployments 🚦

For production deployments, I've added a manual approval step using GitHub Environments:

  1. Go to your repository settings
  2. Navigate to Environments
  3. Create a new environment called "production"
  4. Enable "Required reviewers" and add yourself
  5. Configure "Deployment branches" to limit deployments to specific branches

Now, production deployments will require explicit approval from an authorized reviewer.

Monitoring Deployment Status and Notifications 📊

To stay informed about deployment status, I've added notifications to the workflow:

- name: Notify Deployment Success
  if: success()
  uses: rtCamp/action-slack-notify@v2
  env:
    SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
    SLACK_TITLE: Deployment Successful
    SLACK_MESSAGE: "✅ Deployment to ${{ github.workflow }} was successful!"
    SLACK_COLOR: good

- name: Notify Deployment Failure
  if: failure()
  uses: rtCamp/action-slack-notify@v2
  env:
    SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
    SLACK_TITLE: Deployment Failed
    SLACK_MESSAGE: "❌ Deployment to ${{ github.workflow }} failed!"
    SLACK_COLOR: danger

This sends notifications to a Slack channel when deployments succeed or fail.

Implementing Additional Security for AWS CloudFront 🔒

To enhance the security of our CloudFront distribution, I've added a custom response headers policy:

resource "aws_cloudfront_response_headers_policy" "security_headers" {
  name = "security-headers-policy"

  security_headers_config {
    content_security_policy {
      content_security_policy = "default-src 'self'; img-src 'self'; script-src 'self'; style-src 'self'; object-src 'none';"
      override = true
    }

    content_type_options {
      override = true
    }

    frame_options {
      frame_option = "DENY"
      override = true
    }

    referrer_policy {
      referrer_policy = "same-origin"
      override = true
    }

    strict_transport_security {
      access_control_max_age_sec = 31536000
      include_subdomains = true
      preload = true
      override = true
    }

    xss_protection {
      mode_block = true
      protection = true
      override = true
    }
  }
}

Then reference this policy in the CloudFront distribution:

resource "aws_cloudfront_distribution" "website" {
  # ... other configuration ...

  default_cache_behavior {
    # ... other configuration ...
    response_headers_policy_id = aws_cloudfront_response_headers_policy.security_headers.id
  }
}

Lessons Learned 💡

Implementing CI/CD for this project taught me several valuable lessons:

  1. Start Simple, Then Iterate: My first workflow was basic - just syncing files to S3. As I gained confidence, I added testing, multiple environments, and security features.

  2. Security Is Non-Negotiable: Using OIDC for authentication instead of long-lived credentials was a game-changer for security. This approach follows AWS best practices and eliminates credential management headaches.

  3. Test Everything: Automated tests at every level (unit, integration, end-to-end) catch issues early. The time invested in writing tests paid off with more reliable deployments.

  4. Environment Separation: Keeping development and production environments separate allowed me to test changes safely before affecting the live site.

  5. Infrastructure as Code Works: Using Terraform to define all infrastructure components made the CI/CD process much more reliable. Everything is tracked, versioned, and repeatable.

My Integration Challenges and Solutions 🧩

During implementation, I encountered several challenges:

  1. CORS Issues: The API and website needed proper CORS configuration to work together. Adding the correct headers in both Lambda and API Gateway fixed this.

  2. Environment Variables: Managing different configurations for dev and prod was tricky. I solved this by using GitHub environment variables and separate Terraform workspaces.

  3. Cache Invalidation Delays: Changes to the website sometimes weren't visible immediately due to CloudFront caching. Adding proper cache invalidation to the workflow fixed this.

  4. State Locking: When multiple workflow runs executed simultaneously, they occasionally conflicted on Terraform state. Using DynamoDB for state locking resolved this issue.
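
For the state-locking fix, the S3 backend just needs to point at a DynamoDB table whose hash key is the string attribute LockID. A sketch, with placeholder bucket and table names:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-tf-state-bucket"        # placeholder
    key            = "resume-backend/dev/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-locks"     # table with a "LockID" string hash key
    encrypt        = true
  }
}
```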

DevOps Mod: Multi-Stage Pipeline with Pull Request Environments 🚀

To extend this challenge further, I implemented a feature that creates temporary preview environments for pull requests:

  create_preview:
    name: 'Create Preview Environment'
    runs-on: ubuntu-latest
    if: github.event_name == 'pull_request'

    steps:
      - name: Checkout Repository
        uses: actions/checkout@v3

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: arn:aws:iam::${{ secrets.AWS_ACCOUNT_ID }}:role/github-actions-role
          aws-region: us-east-1

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.2.0

      - name: Generate Unique Environment Name
        env:
          # Pass the branch name via env rather than inline interpolation,
          # so a crafted branch name can't inject shell commands
          HEAD_REF: ${{ github.head_ref }}
        run: |
          PR_NUMBER=${{ github.event.pull_request.number }}
          BRANCH_NAME=$(echo "$HEAD_REF" | tr -cd '[:alnum:]' | tr '[:upper:]' '[:lower:]')
          ENV_NAME="pr-${PR_NUMBER}-${BRANCH_NAME}"
          echo "ENV_NAME=${ENV_NAME}" >> $GITHUB_ENV

      - name: Terraform Init
        working-directory: ./terraform
        run: terraform init -backend-config="bucket=${{ secrets.TF_STATE_BUCKET }}" -backend-config="key=preview/${{ env.ENV_NAME }}/terraform.tfstate" -backend-config="region=us-east-1"

      - name: Terraform Apply
        working-directory: ./terraform
        run: |
          terraform apply -auto-approve \
            -var="environment=${{ env.ENV_NAME }}" \
            -var="domain_name=pr-${{ github.event.pull_request.number }}.${{ secrets.DOMAIN_NAME }}"

      - name: Comment Preview URL
        uses: actions/github-script@v6
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |
            const output = `## 🚀 Preview Environment Deployed

            Preview URL: https://pr-${{ github.event.pull_request.number }}.${{ secrets.DOMAIN_NAME }}

            API Endpoint: https://api-pr-${{ github.event.pull_request.number }}.${{ secrets.DOMAIN_NAME }}/count

            This environment will be automatically deleted when the PR is closed.`;

            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: output
            })

And add a cleanup job to delete the preview environment when the PR is closed (note that the workflow's pull_request trigger must include types: [closed] for this job to run):

  cleanup_preview:
    name: 'Cleanup Preview Environment'
    runs-on: ubuntu-latest
    if: github.event_name == 'pull_request' && github.event.action == 'closed'

    steps:
      # Similar to create_preview but with terraform destroy
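
A sketch of what those steps could look like, assuming the environment name is recomputed exactly as in create_preview:

```yaml
      - name: Checkout Repository
        uses: actions/checkout@v3

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: arn:aws:iam::${{ secrets.AWS_ACCOUNT_ID }}:role/github-actions-role
          aws-region: us-east-1

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.2.0

      - name: Recompute Environment Name
        env:
          HEAD_REF: ${{ github.head_ref }}
        run: |
          BRANCH_NAME=$(echo "$HEAD_REF" | tr -cd '[:alnum:]' | tr '[:upper:]' '[:lower:]')
          echo "ENV_NAME=pr-${{ github.event.pull_request.number }}-${BRANCH_NAME}" >> $GITHUB_ENV

      - name: Terraform Init
        working-directory: ./terraform
        run: terraform init -backend-config="bucket=${{ secrets.TF_STATE_BUCKET }}" -backend-config="key=preview/${{ env.ENV_NAME }}/terraform.tfstate" -backend-config="region=us-east-1"

      - name: Terraform Destroy
        working-directory: ./terraform
        run: |
          terraform destroy -auto-approve \
            -var="environment=${{ env.ENV_NAME }}" \
            -var="domain_name=pr-${{ github.event.pull_request.number }}.${{ secrets.DOMAIN_NAME }}"
```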

Security Mod: Implementing AWS Secrets Manager for API Keys 🔐

To enhance the security of our API, I added API key authentication using AWS Secrets Manager:

# Create a secret to store the API key
resource "aws_secretsmanager_secret" "api_key" {
  name        = "resume-api-key-${var.environment}"
  description = "API key for the Resume API"
}

# Generate a random API key
resource "random_password" "api_key" {
  length  = 32
  special = false
}

# Store the API key in Secrets Manager
resource "aws_secretsmanager_secret_version" "api_key" {
  secret_id     = aws_secretsmanager_secret.api_key.id
  secret_string = random_password.api_key.result
}

# Add API key to API Gateway
resource "aws_api_gateway_api_key" "visitor_counter" {
  name = "visitor-counter-key-${var.environment}"
}

resource "aws_api_gateway_usage_plan" "visitor_counter" {
  name = "visitor-counter-usage-plan-${var.environment}"

  api_stages {
    api_id = aws_api_gateway_rest_api.visitor_counter.id
    stage  = aws_api_gateway_deployment.visitor_counter.stage_name
  }

  quota_settings {
    limit  = 1000
    period = "DAY"
  }

  throttle_settings {
    burst_limit = 10
    rate_limit  = 5
  }
}

resource "aws_api_gateway_usage_plan_key" "visitor_counter" {
  key_id        = aws_api_gateway_api_key.visitor_counter.id
  key_type      = "API_KEY"
  usage_plan_id = aws_api_gateway_usage_plan.visitor_counter.id
}

# Update the Lambda function to verify the API key
resource "aws_lambda_function" "visitor_counter" {
  # ... existing configuration ...

  environment {
    variables = {
      DYNAMODB_TABLE = aws_dynamodb_table.visitor_counter.name
      ALLOWED_ORIGIN = var.website_domain
      API_KEY_SECRET = aws_secretsmanager_secret.api_key.name
    }
  }
}

Then, modify the Lambda function to retrieve and validate the API key:

import boto3
import json
import os

# Initialize Secrets Manager client
secretsmanager = boto3.client('secretsmanager')

# Cache the key between invocations so warm Lambdas skip the Secrets Manager call
_cached_api_key = None

def get_api_key():
    """Retrieve the API key from Secrets Manager (cached for warm invocations)"""
    global _cached_api_key
    if _cached_api_key is None:
        secret_name = os.environ['API_KEY_SECRET']
        response = secretsmanager.get_secret_value(SecretId=secret_name)
        _cached_api_key = response['SecretString']
    return _cached_api_key

def lambda_handler(event, context):
    # Verify API key; header casing varies by client, so normalise to lowercase
    headers = {k.lower(): v for k, v in (event.get('headers') or {}).items()}
    api_key = headers.get('x-api-key')
    expected_api_key = get_api_key()

    if api_key != expected_api_key:
        return {
            'statusCode': 403,
            'headers': {
                'Content-Type': 'application/json'
            },
            'body': json.dumps({
                'error': 'Forbidden',
                'message': 'Invalid API key'
            })
        }

    # Rest of the function...
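
One refinement worth considering for the key check: a plain != comparison can, in principle, leak how many leading characters matched through timing differences. The standard library's hmac.compare_digest compares in constant time; a small sketch:

```python
import hmac

def keys_match(provided, expected):
    """Constant-time comparison; avoids leaking matching prefixes via timing."""
    return hmac.compare_digest(provided or "", expected)

assert keys_match("abc123", "abc123")
assert not keys_match("abc124", "abc123")
assert not keys_match(None, "abc123")  # a missing header is rejected, not an error
```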

Next Steps ⏭️

With our CI/CD pipelines in place, our Cloud Resume Challenge implementation is complete! In the final post, we'll reflect on the project as a whole, discuss lessons learned, and explore potential future enhancements.


Up Next: [Cloud Resume Challenge with Terraform: Final Thoughts & Lessons Learned] 🔗


Cloud Resume Challenge with Terraform: Building the Backend API 🚀

In our previous posts, we set up the frontend infrastructure for our resume website using Terraform. Now it's time to build the backend API that will power our visitor counter.

Backend Architecture Overview 🏗️

Let's take a look at the serverless architecture we'll be implementing:

┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│             │     │             │     │             │
│ API Gateway ├─────► Lambda      ├─────► DynamoDB    │
│             │     │             │     │             │
└─────────────┘     └─────────────┘     └─────────────┘
       │                   │                   │
       │                   │                   │
       ▼                   ▼                   ▼
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│             │     │             │     │             │
│ CloudWatch  │     │ CloudWatch  │     │ CloudWatch  │
│   Logs      │     │   Logs      │     │   Logs      │
│             │     │             │     │             │
└─────────────┘     └─────────────┘     └─────────────┘

This architecture includes:

  1. API Gateway: Exposes our Lambda function as a REST API
  2. Lambda Function: Contains the Python code to increment and return the visitor count
  3. DynamoDB: Stores the visitor count data
  4. CloudWatch: Monitors and logs activity across all services

My Approach to DynamoDB Design 💾

Before diving into the Terraform code, I want to share my thought process on DynamoDB table design. When I initially approached this challenge, I had to decide between two approaches:

  1. Single-counter approach: A simple table with just one item for the counter
  2. Visitor log approach: A more detailed table that logs each visit with timestamps

I chose the second approach for a few reasons:

  • It allows for more detailed analytics in the future
  • It provides a history of visits that can be queried
  • It demonstrates a more realistic use case for DynamoDB

Here's my table design:

Attribute    Type    Description
visit_id     String  Primary key (UUID)
timestamp    String  ISO 8601 timestamp of the visit
visitor_ip   String  Hashed IP address for privacy
user_agent   String  Browser/device information
path         String  Page path visited

This approach gives us flexibility while keeping the solution serverless and cost-effective.
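Concretely, a logged visit item built to this schema might look like the following sketch (the IP and user agent are hypothetical values):

```python
import hashlib
import uuid
from datetime import datetime, timezone

# Hypothetical visit item matching the table design above.
visit_item = {
    "visit_id": str(uuid.uuid4()),                               # partition key
    "timestamp": datetime.now(timezone.utc).isoformat(),         # ISO 8601
    "visitor_ip": hashlib.sha256("203.0.113.7".encode()).hexdigest(),
    "user_agent": "Mozilla/5.0",
    "path": "/",
}
```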

Implementing the Backend API with Terraform 🛠️

Now, let's start implementing our backend infrastructure using Terraform. We'll create modules for each component, starting with DynamoDB.

1. DynamoDB Table for Visitor Counting 📊

Create a file at modules/backend/dynamodb.tf:

resource "aws_dynamodb_table" "visitor_counter" {
  name           = "ResumeVisitorCounter-${var.environment}"
  billing_mode   = "PAY_PER_REQUEST"  # On-demand capacity for cost savings
  hash_key       = "visit_id"

  attribute {
    name = "visit_id"
    type = "S"
  }

  # Add TTL for automatic data cleanup after 90 days
  ttl {
    attribute_name = "expiration_time"
    enabled        = true
  }

  point_in_time_recovery {
    enabled = true  # Enable PITR for recovery options
  }

  # Use server-side encryption
  server_side_encryption {
    enabled = true
  }

  tags = {
    Name        = "Resume Visitor Counter"
    Environment = var.environment
    Project     = "Cloud Resume Challenge"
  }
}

# Initialize the "total" counter item
resource "aws_dynamodb_table_item" "counter_init" {
  table_name = aws_dynamodb_table.visitor_counter.name
  hash_key   = aws_dynamodb_table.visitor_counter.hash_key

  # Initialize the counter with a value of 0
  item = jsonencode({
    "visit_id": {"S": "total"},
    "count": {"N": "0"}
  })

  # Only create this item on initial deployment
  lifecycle {
    ignore_changes = [item]
  }
}

I've implemented several enhancements:

  • Point-in-time recovery for data protection
  • TTL for automatic cleanup of old records
  • Server-side encryption for security
  • An initial counter item so the first request starts from a known value of 0

2. Lambda Function for the API Logic 🏗️

Now, let's create our Lambda function. First, we'll need the Python code. Create a file at modules/backend/lambda/visitor_counter.py:

import boto3
import json
import os
import uuid
import logging
from datetime import datetime, timedelta
import hashlib
from botocore.exceptions import ClientError

# Set up logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Initialize DynamoDB client
dynamodb = boto3.resource('dynamodb')
table_name = os.environ['DYNAMODB_TABLE']
table = dynamodb.Table(table_name)

def lambda_handler(event, context):
    """
    Lambda handler to process API Gateway requests for visitor counting.
    Increments the visitor counter and returns the updated count.
    """
    logger.info(f"Processing event: {json.dumps(event)}")

    try:
        # Extract request information
        request_context = event.get('requestContext', {})
        http_method = event.get('httpMethod', '')
        path = event.get('path', '')
        headers = event.get('headers', {})
        ip_address = request_context.get('identity', {}).get('sourceIp', 'unknown')
        user_agent = headers.get('User-Agent', 'unknown')

        # Generate a unique visit ID
        visit_id = str(uuid.uuid4())

        # Hash the IP address for privacy
        hashed_ip = hashlib.sha256(ip_address.encode()).hexdigest()

        # Get current timestamp
        timestamp = datetime.utcnow().isoformat()

        # Calculate expiration time (90 days from now)
        expiration_time = int((datetime.utcnow() + timedelta(days=90)).timestamp())

        # Log the visit
        table.put_item(
            Item={
                'visit_id': visit_id,
                'timestamp': timestamp,
                'visitor_ip': hashed_ip,
                'user_agent': user_agent,
                'path': path,
                'expiration_time': expiration_time
            }
        )

        # Update the total counter
        response = table.update_item(
            Key={'visit_id': 'total'},
            UpdateExpression='ADD #count :incr',
            ExpressionAttributeNames={'#count': 'count'},
            ExpressionAttributeValues={':incr': 1},
            ReturnValues='UPDATED_NEW'
        )

        count = int(response['Attributes']['count'])

        # Return the response
        return {
            'statusCode': 200,
            'headers': {
                'Content-Type': 'application/json',
                'Access-Control-Allow-Origin': os.environ['ALLOWED_ORIGIN'],
                'Access-Control-Allow-Methods': 'GET, OPTIONS',
                'Access-Control-Allow-Headers': 'Content-Type'
            },
            'body': json.dumps({
                'count': count,
                'message': 'Visitor count updated successfully'
            })
        }

    except ClientError as e:
        logger.error(f"DynamoDB error: {e}")
        return {
            'statusCode': 500,
            'headers': {
                'Content-Type': 'application/json',
                'Access-Control-Allow-Origin': os.environ.get('ALLOWED_ORIGIN', '*')
            },
            'body': json.dumps({
                'error': 'Database error',
                'message': str(e)
            })
        }
    except Exception as e:
        logger.error(f"General error: {e}")
        return {
            'statusCode': 500,
            'headers': {
                'Content-Type': 'application/json',
                'Access-Control-Allow-Origin': os.environ.get('ALLOWED_ORIGIN', '*')
            },
            'body': json.dumps({
                'error': 'Server error',
                'message': str(e)
            })
        }

def options_handler(event, context):
    """
    Handler for OPTIONS requests to support CORS
    """
    return {
        'statusCode': 200,
        'headers': {
            'Access-Control-Allow-Origin': os.environ.get('ALLOWED_ORIGIN', '*'),
            'Access-Control-Allow-Methods': 'GET, OPTIONS',
            'Access-Control-Allow-Headers': 'Content-Type'
        },
        'body': ''
    }

Now, let's create the Lambda function using Terraform. Create a file at modules/backend/lambda.tf:

# Archive the Lambda function code
data "archive_file" "lambda_zip" {
  type        = "zip"
  source_file = "${path.module}/lambda/visitor_counter.py"
  output_path = "${path.module}/lambda/visitor_counter.zip"
}

# Create the Lambda function
resource "aws_lambda_function" "visitor_counter" {
  filename         = data.archive_file.lambda_zip.output_path
  function_name    = "resume-visitor-counter-${var.environment}"
  role             = aws_iam_role.lambda_role.arn
  handler          = "visitor_counter.lambda_handler"
  source_code_hash = data.archive_file.lambda_zip.output_base64sha256
  runtime          = "python3.9"
  timeout          = 10  # Increased timeout for better error handling
  memory_size      = 128

  environment {
    variables = {
      DYNAMODB_TABLE = aws_dynamodb_table.visitor_counter.name
      ALLOWED_ORIGIN = var.website_domain
    }
  }

  tracing_config {
    mode = "Active"  # Enable X-Ray tracing
  }

  tags = {
    Name        = "Resume Visitor Counter Lambda"
    Environment = var.environment
    Project     = "Cloud Resume Challenge"
  }
}

# Create an IAM role for the Lambda function
resource "aws_iam_role" "lambda_role" {
  name = "resume-visitor-counter-lambda-role-${var.environment}"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "lambda.amazonaws.com"
        }
      }
    ]
  })
}

# Create a custom policy for the Lambda function with least privilege
resource "aws_iam_policy" "lambda_policy" {
  name        = "resume-visitor-counter-lambda-policy-${var.environment}"
  description = "IAM policy for the visitor counter Lambda function"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "dynamodb:GetItem",
          "dynamodb:PutItem",
          "dynamodb:UpdateItem"
        ]
        Effect   = "Allow"
        Resource = aws_dynamodb_table.visitor_counter.arn
      },
      {
        Action = [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ]
        Effect   = "Allow"
        Resource = "arn:aws:logs:*:*:*"
      },
      {
        Action = [
          "xray:PutTraceSegments",
          "xray:PutTelemetryRecords"
        ]
        Effect   = "Allow"
        Resource = "*"
      }
    ]
  })
}

# Attach the policy to the IAM role
resource "aws_iam_role_policy_attachment" "lambda_policy_attachment" {
  role       = aws_iam_role.lambda_role.name
  policy_arn = aws_iam_policy.lambda_policy.arn
}

# Create a CloudWatch log group for the Lambda function
resource "aws_cloudwatch_log_group" "lambda_log_group" {
  name              = "/aws/lambda/${aws_lambda_function.visitor_counter.function_name}"
  retention_in_days = 30

  tags = {
    Environment = var.environment
    Project     = "Cloud Resume Challenge"
  }
}

# Create a Lambda function for handling OPTIONS requests (CORS)
resource "aws_lambda_function" "options_handler" {
  filename         = data.archive_file.lambda_zip.output_path
  function_name    = "resume-visitor-counter-options-${var.environment}"
  role             = aws_iam_role.lambda_role.arn
  handler          = "visitor_counter.options_handler"
  source_code_hash = data.archive_file.lambda_zip.output_base64sha256
  runtime          = "python3.9"
  timeout          = 10
  memory_size      = 128

  environment {
    variables = {
      ALLOWED_ORIGIN = var.website_domain
    }
  }

  tags = {
    Name        = "Resume Options Handler Lambda"
    Environment = var.environment
    Project     = "Cloud Resume Challenge"
  }
}

I've implemented several security and operational improvements:

  • Least privilege IAM policies
  • X-Ray tracing for performance monitoring
  • Proper CORS handling with a dedicated OPTIONS handler
  • CloudWatch log group with retention policy
  • Privacy-enhancing IP address hashing

3. API Gateway for Exposing the Lambda Function 🔗

Create a file at modules/backend/api_gateway.tf:

# Create the API Gateway REST API
resource "aws_api_gateway_rest_api" "visitor_counter" {
  name        = "resume-visitor-counter-${var.environment}"
  description = "API for the resume visitor counter"

  endpoint_configuration {
    types = ["REGIONAL"]
  }

  tags = {
    Name        = "Resume Visitor Counter API"
    Environment = var.environment
    Project     = "Cloud Resume Challenge"
  }
}

# Create a resource for the API
resource "aws_api_gateway_resource" "visitor_counter" {
  rest_api_id = aws_api_gateway_rest_api.visitor_counter.id
  parent_id   = aws_api_gateway_rest_api.visitor_counter.root_resource_id
  path_part   = "count"
}

# Create a GET method for the API
resource "aws_api_gateway_method" "get" {
  rest_api_id   = aws_api_gateway_rest_api.visitor_counter.id
  resource_id   = aws_api_gateway_resource.visitor_counter.id
  http_method   = "GET"
  authorization = "NONE"

  # Add API key requirement if needed
  # api_key_required = true
}

# Create an OPTIONS method for the API (for CORS)
resource "aws_api_gateway_method" "options" {
  rest_api_id   = aws_api_gateway_rest_api.visitor_counter.id
  resource_id   = aws_api_gateway_resource.visitor_counter.id
  http_method   = "OPTIONS"
  authorization = "NONE"
}

# Set up the GET method integration with Lambda
resource "aws_api_gateway_integration" "lambda_get" {
  rest_api_id = aws_api_gateway_rest_api.visitor_counter.id
  resource_id = aws_api_gateway_resource.visitor_counter.id
  http_method = aws_api_gateway_method.get.http_method

  integration_http_method = "POST"
  type                    = "AWS_PROXY"
  uri                     = aws_lambda_function.visitor_counter.invoke_arn
}

# Set up the OPTIONS method integration with Lambda
resource "aws_api_gateway_integration" "lambda_options" {
  rest_api_id = aws_api_gateway_rest_api.visitor_counter.id
  resource_id = aws_api_gateway_resource.visitor_counter.id
  http_method = aws_api_gateway_method.options.http_method

  integration_http_method = "POST"
  type                    = "AWS_PROXY"
  uri                     = aws_lambda_function.options_handler.invoke_arn
}

# Create a deployment for the API
resource "aws_api_gateway_deployment" "visitor_counter" {
  depends_on = [
    aws_api_gateway_integration.lambda_get,
    aws_api_gateway_integration.lambda_options
  ]

  rest_api_id = aws_api_gateway_rest_api.visitor_counter.id
  stage_name  = var.environment

  lifecycle {
    create_before_destroy = true
  }
}

# Add permission for API Gateway to invoke the Lambda function
resource "aws_lambda_permission" "api_gateway_lambda" {
  statement_id  = "AllowAPIGatewayInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.visitor_counter.function_name
  principal     = "apigateway.amazonaws.com"

  # The /* allows invocation from any stage, scoped to the GET method
  # on the /count resource path
  source_arn = "${aws_api_gateway_rest_api.visitor_counter.execution_arn}/*/${aws_api_gateway_method.get.http_method}${aws_api_gateway_resource.visitor_counter.path}"
}

# Add permission for API Gateway to invoke the OPTIONS Lambda function
resource "aws_lambda_permission" "api_gateway_options_lambda" {
  statement_id  = "AllowAPIGatewayInvokeOptions"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.options_handler.function_name
  principal     = "apigateway.amazonaws.com"

  source_arn = "${aws_api_gateway_rest_api.visitor_counter.execution_arn}/*/${aws_api_gateway_method.options.http_method}${aws_api_gateway_resource.visitor_counter.path}"
}

# Enable CloudWatch logging for API Gateway
resource "aws_api_gateway_account" "main" {
  cloudwatch_role_arn = aws_iam_role.api_gateway_cloudwatch.arn
}

resource "aws_iam_role" "api_gateway_cloudwatch" {
  name = "api-gateway-cloudwatch-role-${var.environment}"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "apigateway.amazonaws.com"
        }
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "api_gateway_cloudwatch" {
  role       = aws_iam_role.api_gateway_cloudwatch.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonAPIGatewayPushToCloudWatchLogs"
}

# Set up method settings for logging and throttling
resource "aws_api_gateway_method_settings" "settings" {
  rest_api_id = aws_api_gateway_rest_api.visitor_counter.id
  stage_name  = aws_api_gateway_deployment.visitor_counter.stage_name
  method_path = "*/*"

  settings {
    metrics_enabled        = true
    logging_level          = "INFO"
    data_trace_enabled     = true  # Logs full request/response bodies; consider disabling in prod
    throttling_rate_limit  = 100
    throttling_burst_limit = 50
  }
}

# Create a custom domain for the API
resource "aws_api_gateway_domain_name" "api" {
  domain_name              = "api.${var.domain_name}"
  regional_certificate_arn = var.certificate_arn

  endpoint_configuration {
    types = ["REGIONAL"]
  }

  tags = {
    Name        = "Resume API Domain"
    Environment = var.environment
    Project     = "Cloud Resume Challenge"
  }
}

# Create a base path mapping for the custom domain
resource "aws_api_gateway_base_path_mapping" "api" {
  api_id      = aws_api_gateway_rest_api.visitor_counter.id
  stage_name  = aws_api_gateway_deployment.visitor_counter.stage_name
  domain_name = aws_api_gateway_domain_name.api.domain_name
}

# Create a Route 53 record for the API domain
resource "aws_route53_record" "api" {
  name    = aws_api_gateway_domain_name.api.domain_name
  type    = "A"
  zone_id = var.hosted_zone_id

  alias {
    name                   = aws_api_gateway_domain_name.api.regional_domain_name
    zone_id                = aws_api_gateway_domain_name.api.regional_zone_id
    evaluate_target_health = false
  }
}

The API Gateway configuration includes several enhancements:

  • CloudWatch logging and metrics
  • Rate limiting and throttling to prevent abuse
  • Custom domain for a professional API endpoint
  • Proper Route 53 DNS configuration

4. Variables and Outputs 📝

Create files at modules/backend/variables.tf and modules/backend/outputs.tf:

variables.tf:

variable "environment" {
  description = "Deployment environment (e.g., dev, prod)"
  type        = string
  default     = "dev"
}

variable "website_domain" {
  description = "Domain of the resume website (for CORS)"
  type        = string
}

variable "domain_name" {
  description = "Base domain name for custom API endpoint"
  type        = string
}

variable "hosted_zone_id" {
  description = "Route 53 hosted zone ID"
  type        = string
}

variable "certificate_arn" {
  description = "ARN of the ACM certificate for the API domain"
  type        = string
}

outputs.tf:

output "api_endpoint" {
  description = "Endpoint URL of the API Gateway"
  value       = aws_api_gateway_deployment.visitor_counter.invoke_url
}

output "api_custom_domain" {
  description = "Custom domain for the API"
  value       = aws_api_gateway_domain_name.api.domain_name
}

output "dynamodb_table_name" {
  description = "Name of the DynamoDB table"
  value       = aws_dynamodb_table.visitor_counter.name
}
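To wire the module into the rest of the configuration, the root main.tf might call it like this (all values below are placeholders for your own domain, zone ID, and certificate):

```hcl
module "backend" {
  source = "./modules/backend"

  environment     = "prod"
  website_domain  = "https://resume.example.com"  # CORS origin
  domain_name     = "example.com"                 # base domain for api.example.com
  hosted_zone_id  = "Z0EXAMPLE"                   # placeholder Route 53 zone ID
  certificate_arn = var.api_certificate_arn       # regional ACM cert for the API domain
}
```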

5. Source Control for Backend Code 📚

An important aspect of the Cloud Resume Challenge is using source control. We'll create a GitHub repository for our backend code. Here's how I organize my repository:

resume-backend/
├── .github/
│   └── workflows/
│       └── deploy.yml  # GitHub Actions workflow (we'll create this in the next post)
├── lambda/
│   └── visitor_counter.py
├── terraform/
│   ├── modules/
│   │   ├── backend/
│   │   │   ├── api_gateway.tf
│   │   │   ├── dynamodb.tf
│   │   │   ├── lambda.tf
│   │   │   ├── variables.tf
│   │   │   └── outputs.tf
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
├── tests/
│   └── test_visitor_counter.py  # Python unit tests
└── README.md

Implementing Python Tests 🧪

For step 11 of the Cloud Resume Challenge, we need to include tests for our Python code. Create a file at tests/test_visitor_counter.py:

import unittest
import json
import os
import sys
from unittest.mock import patch, MagicMock

# The handler module reads DYNAMODB_TABLE and creates the boto3 table
# resource at import time, so configure the environment before importing
os.environ.setdefault('DYNAMODB_TABLE', 'test-table')
os.environ.setdefault('AWS_DEFAULT_REGION', 'us-east-1')

# Add lambda directory to the path so we can import the function
sys.path.append(os.path.join(os.path.dirname(__file__), '..', 'lambda'))

import visitor_counter

class TestVisitorCounter(unittest.TestCase):
    """Test cases for the visitor counter Lambda function."""

    @patch('visitor_counter.table')
    def test_lambda_handler_success(self, mock_table):
        """Test successful execution of the lambda_handler function."""
        # Mock the DynamoDB responses
        mock_put_response = MagicMock()
        mock_update_response = {
            'Attributes': {
                'count': 42
            }
        }
        mock_table.put_item.return_value = mock_put_response
        mock_table.update_item.return_value = mock_update_response

        # Set required environment variables
        os.environ['DYNAMODB_TABLE'] = 'test-table'
        os.environ['ALLOWED_ORIGIN'] = 'https://example.com'

        # Create a test event
        event = {
            'httpMethod': 'GET',
            'path': '/count',
            'headers': {
                'User-Agent': 'test-agent'
            },
            'requestContext': {
                'identity': {
                    'sourceIp': '127.0.0.1'
                }
            }
        }

        # Call the function
        response = visitor_counter.lambda_handler(event, {})

        # Assert response is correct
        self.assertEqual(response['statusCode'], 200)
        self.assertEqual(response['headers']['Content-Type'], 'application/json')
        self.assertEqual(response['headers']['Access-Control-Allow-Origin'], 'https://example.com')

        # Parse the body and check the count
        body = json.loads(response['body'])
        self.assertEqual(body['count'], 42)
        self.assertEqual(body['message'], 'Visitor count updated successfully')

        # Verify that DynamoDB was called correctly
        mock_table.put_item.assert_called_once()
        mock_table.update_item.assert_called_once_with(
            Key={'visit_id': 'total'},
            UpdateExpression='ADD #count :incr',
            ExpressionAttributeNames={'#count': 'count'},
            ExpressionAttributeValues={':incr': 1},
            ReturnValues='UPDATED_NEW'
        )

    @patch('visitor_counter.table')
    def test_lambda_handler_error(self, mock_table):
        """Test error handling in the lambda_handler function."""
        # Simulate a DynamoDB error
        mock_table.update_item.side_effect = Exception("Test error")

        # Set required environment variables
        os.environ['DYNAMODB_TABLE'] = 'test-table'
        os.environ['ALLOWED_ORIGIN'] = 'https://example.com'

        # Create a test event
        event = {
            'httpMethod': 'GET',
            'path': '/count',
            'headers': {
                'User-Agent': 'test-agent'
            },
            'requestContext': {
                'identity': {
                    'sourceIp': '127.0.0.1'
                }
            }
        }

        # Call the function
        response = visitor_counter.lambda_handler(event, {})

        # Assert response indicates an error
        self.assertEqual(response['statusCode'], 500)
        self.assertEqual(response['headers']['Content-Type'], 'application/json')

        # Parse the body and check the error message
        body = json.loads(response['body'])
        self.assertIn('error', body)
        self.assertIn('message', body)

    def test_options_handler(self):
        """Test the OPTIONS handler for CORS support."""
        # Set required environment variables
        os.environ['ALLOWED_ORIGIN'] = 'https://example.com'

        # Create a test event
        event = {
            'httpMethod': 'OPTIONS',
            'path': '/count',
            'headers': {
                'Origin': 'https://example.com'
            }
        }

        # Call the function
        response = visitor_counter.options_handler(event, {})

        # Assert response is correct for OPTIONS
        self.assertEqual(response['statusCode'], 200)
        self.assertEqual(response['headers']['Access-Control-Allow-Origin'], 'https://example.com')
        self.assertEqual(response['headers']['Access-Control-Allow-Methods'], 'GET, OPTIONS')
        self.assertEqual(response['headers']['Access-Control-Allow-Headers'], 'Content-Type')

if __name__ == '__main__':
    unittest.main()

This test suite covers:

  • Successful API calls
  • Error handling
  • CORS OPTIONS request handling

To run these tests, run the following from the repository root:

python -m unittest tests/test_visitor_counter.py

Testing the API Manually 🧪

Once you've deployed the API, you can test it manually using tools like cURL or Postman. Here's how to test with cURL:

# Get the current visitor count
curl -X GET https://api.yourdomain.com/count

# Test CORS pre-flight request
curl -X OPTIONS https://api.yourdomain.com/count \
  -H "Origin: https://yourdomain.com" \
  -H "Access-Control-Request-Method: GET" \
  -H "Access-Control-Request-Headers: Content-Type"

For Postman:

  1. Create a new GET request to your API endpoint (https://api.yourdomain.com/count)
  2. Send the request and verify you get a 200 response with a JSON body
  3. Create a new OPTIONS request to test CORS
  4. Add headers: Origin: https://yourdomain.com, Access-Control-Request-Method: GET
  5. Send the request and verify you get a 200 response with the correct CORS headers
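Whichever tool you use, the success body should parse to the shape returned by the Lambda. A quick sanity check in Python, using a captured response string rather than a live call (the count value here is illustrative):

```python
import json

# Example success body as returned by the Lambda (count is illustrative)
body = '{"count": 42, "message": "Visitor count updated successfully"}'

data = json.loads(body)
# The frontend counter only needs an integer count field
assert isinstance(data["count"], int)
print(data["count"])
```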

Setting Up CloudWatch Monitoring and Alarms ⚠️

Adding monitoring and alerting is a critical part of any production-grade API. Let's add CloudWatch alarms to notify us if something goes wrong:

# Add to modules/backend/monitoring.tf

# Alarm for Lambda errors
resource "aws_cloudwatch_metric_alarm" "lambda_errors" {
  alarm_name          = "lambda-visitor-counter-errors-${var.environment}"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 1
  metric_name         = "Errors"
  namespace           = "AWS/Lambda"
  period              = 60
  statistic           = "Sum"
  threshold           = 0
  alarm_description   = "This alarm monitors for errors in the visitor counter Lambda function"

  dimensions = {
    FunctionName = aws_lambda_function.visitor_counter.function_name
  }

  # Add SNS topic ARN if you want notifications
  # alarm_actions     = [aws_sns_topic.alerts.arn]
  # ok_actions        = [aws_sns_topic.alerts.arn]
}

# Alarm for API Gateway 5XX errors
resource "aws_cloudwatch_metric_alarm" "api_5xx_errors" {
  alarm_name          = "api-visitor-counter-5xx-errors-${var.environment}"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 1
  metric_name         = "5XXError"
  namespace           = "AWS/ApiGateway"
  period              = 60
  statistic           = "Sum"
  threshold           = 0
  alarm_description   = "This alarm monitors for 5XX errors in the visitor counter API"

  dimensions = {
    ApiName = aws_api_gateway_rest_api.visitor_counter.name
    Stage   = aws_api_gateway_deployment.visitor_counter.stage_name
  }

  # Add SNS topic ARN if you want notifications
  # alarm_actions     = [aws_sns_topic.alerts.arn]
  # ok_actions        = [aws_sns_topic.alerts.arn]
}

# Dashboard for monitoring the API
resource "aws_cloudwatch_dashboard" "api_dashboard" {
  dashboard_name = "visitor-counter-dashboard-${var.environment}"

  dashboard_body = jsonencode({
    widgets = [
      {
        type   = "metric"
        x      = 0
        y      = 0
        width  = 12
        height = 6
        properties = {
          metrics = [
            ["AWS/ApiGateway", "Count", "ApiName", aws_api_gateway_rest_api.visitor_counter.name, "Stage", aws_api_gateway_deployment.visitor_counter.stage_name]
          ]
          period = 300
          stat   = "Sum"
          region = "us-east-1"
          title  = "API Requests"
        }
      },
      {
        type   = "metric"
        x      = 12
        y      = 0
        width  = 12
        height = 6
        properties = {
          metrics = [
            ["AWS/ApiGateway", "4XXError", "ApiName", aws_api_gateway_rest_api.visitor_counter.name, "Stage", aws_api_gateway_deployment.visitor_counter.stage_name],
            ["AWS/ApiGateway", "5XXError", "ApiName", aws_api_gateway_rest_api.visitor_counter.name, "Stage", aws_api_gateway_deployment.visitor_counter.stage_name]
          ]
          period = 300
          stat   = "Sum"
          region = "us-east-1"
          title  = "API Errors"
        }
      },
      {
        type   = "metric"
        x      = 0
        y      = 6
        width  = 12
        height = 6
        properties = {
          metrics = [
            ["AWS/Lambda", "Invocations", "FunctionName", aws_lambda_function.visitor_counter.function_name],
            ["AWS/Lambda", "Errors", "FunctionName", aws_lambda_function.visitor_counter.function_name]
          ]
          period = 300
          stat   = "Sum"
          region = "us-east-1"
          title  = "Lambda Invocations and Errors"
        }
      },
      {
        type   = "metric"
        x      = 12
        y      = 6
        width  = 12
        height = 6
        properties = {
          metrics = [
            ["AWS/Lambda", "Duration", "FunctionName", aws_lambda_function.visitor_counter.function_name]
          ]
          period = 300
          stat   = "Average"
          region = "us-east-1"
          title  = "Lambda Duration"
        }
      }
    ]
  })
}

Debugging Common API Issues 🐛

During my implementation, I encountered several challenges:

  1. CORS Issues: The most common problem was with CORS configuration. Make sure your API Gateway and Lambda function both return the proper CORS headers.

  2. IAM Permission Errors: Initially, I gave my Lambda function too many permissions, then too few. The policy shown above represents the minimal set of permissions needed.

  3. DynamoDB Initialization: The counter needs to be initialized with a value. I solved this by adding an item to the table during deployment.

  4. API Gateway Integration: Make sure your Lambda function and API Gateway are correctly integrated. Check for proper resource paths and method settings.

Lessons Learned 💡

  1. DynamoDB Design: My initial design was too simple. Adding more fields like timestamp and user-agent provides valuable analytics data.

  2. Error Handling: Robust error handling is critical for serverless applications. Without proper logging, debugging becomes nearly impossible.

  3. Testing Strategy: Writing tests before implementing the Lambda function (test-driven development) helped me think through edge cases and error scenarios.

  4. Security Considerations: Privacy is important. Hashing IP addresses and implementing proper IAM policies ensures we protect user data.

API Security Considerations 🔒

Security was a primary concern when building this API. Here are the key security measures I implemented:

  1. Least Privilege IAM Policies: The Lambda function has only the minimal permissions needed.

  2. Defensive Input Handling: The Lambda function reads request fields with .get() defaults, so missing or malformed event fields can't raise unhandled KeyErrors.

  3. Rate Limiting: API Gateway is configured with throttling to prevent abuse.

  4. HTTPS Only: All API endpoints use HTTPS with modern TLS settings.

  5. CORS Configuration: Only the resume website domain is allowed to make cross-origin requests.

  6. Privacy Protection: IP addresses are hashed to protect visitor privacy.

These measures help protect against common API vulnerabilities like injection attacks, denial of service, and data exposure.
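To illustrate point 6, SHA-256 turns an IP address into a fixed-length digest that can't be trivially reversed, using the same scheme as the Lambda handler above:

```python
import hashlib

def hash_ip(ip_address: str) -> str:
    """Hash an IP address with SHA-256, as done in the Lambda handler."""
    return hashlib.sha256(ip_address.encode()).hexdigest()

# Deterministic for a given IP, always 64 hex characters
digest = hash_ip("203.0.113.7")
print(digest)
```

One caveat: the IPv4 address space is small enough that unsalted hashes can be brute-forced, so a salt or HMAC with a secret key would strengthen the privacy guarantee.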

Enhancements and Mods 🚀

Here are some ways to extend this part of the challenge:

Developer Mod: Schemas and Dreamers

Instead of using DynamoDB, consider implementing a relational database approach:

resource "aws_db_subnet_group" "database" {
  name       = "resume-database-subnet-group"
  subnet_ids = var.private_subnet_ids
}

resource "aws_security_group" "database" {
  name        = "resume-database-sg"
  description = "Security group for the resume database"
  vpc_id      = var.vpc_id

  ingress {
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [aws_security_group.lambda.id]
  }
}

resource "aws_db_instance" "postgresql" {
  allocated_storage      = 20
  storage_type           = "gp2"
  engine                 = "postgres"
  engine_version         = "13.4"
  instance_class         = "db.t3.micro"
  db_name                = "resumedb"
  username               = "postgres"
  password               = var.db_password
  parameter_group_name   = "default.postgres13"
  db_subnet_group_name   = aws_db_subnet_group.database.name
  vpc_security_group_ids = [aws_security_group.database.id]
  skip_final_snapshot    = true
  multi_az               = false

  tags = {
    Name        = "Resume Database"
    Environment = var.environment
  }
}

This approach introduces interesting networking challenges and requires modifications to your Lambda function to connect to PostgreSQL.

DevOps Mod: Monitor Lizard

Enhance monitoring with X-Ray traces and custom CloudWatch metrics:

# Add to Lambda function configuration
tracing_config {
  mode = "Active"
}

# Add X-Ray policy
resource "aws_iam_policy" "lambda_xray" {
  name        = "lambda-xray-policy-${var.environment}"
  description = "IAM policy for X-Ray tracing"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "xray:PutTraceSegments",
          "xray:PutTelemetryRecords"
        ]
        Effect   = "Allow"
        Resource = "*"
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "lambda_xray" {
  role       = aws_iam_role.lambda_role.name
  policy_arn = aws_iam_policy.lambda_xray.arn
}

Then modify your Lambda function to emit custom metrics:

import boto3
from aws_xray_sdk.core import xray_recorder
from aws_xray_sdk.core import patch_all

# Patch all supported libraries for X-Ray
patch_all()

cloudwatch = boto3.client('cloudwatch')

# Inside lambda_handler
cloudwatch.put_metric_data(
    Namespace='ResumeMetrics',
    MetricData=[
        {
            'MetricName': 'VisitorCount',
            'Value': count,
            'Unit': 'Count'
        }
    ]
)

Security Mod: Check Your Privilege

Implement AWS WAF to protect your API from common web attacks:

resource "aws_wafv2_web_acl" "api" {
  name        = "api-waf-${var.environment}"
  description = "WAF for the resume API"
  scope       = "REGIONAL"

  default_action {
    allow {}
  }

  rule {
    name     = "AWSManagedRulesCommonRuleSet"
    priority = 0

    override_action {
      none {}
    }

    statement {
      managed_rule_group_statement {
        name        = "AWSManagedRulesCommonRuleSet"
        vendor_name = "AWS"
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = true
      metric_name                = "AWSManagedRulesCommonRuleSetMetric"
      sampled_requests_enabled   = true
    }
  }

  rule {
    name     = "RateLimit"
    priority = 1

    action {
      block {}
    }

    statement {
      rate_based_statement {
        limit              = 100
        aggregate_key_type = "IP"
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = true
      metric_name                = "RateLimitMetric"
      sampled_requests_enabled   = true
    }
  }

  visibility_config {
    cloudwatch_metrics_enabled = true
    metric_name                = "APIWebACLMetric"
    sampled_requests_enabled   = true
  }
}

resource "aws_wafv2_web_acl_association" "api" {
  resource_arn = aws_api_gateway_stage.visitor_counter.arn
  web_acl_arn  = aws_wafv2_web_acl.api.arn
}

Next Steps ⏭️

With our backend API completed, we're ready to connect it to our frontend in the next post. We'll integrate the JavaScript visitor counter with our API and then automate the deployment process using GitHub Actions.

Stay tuned to see how we bring the full stack together!


Up Next: [Cloud Resume Challenge with Terraform: Automating Deployments with GitHub Actions] 🔗


Cloud Resume Challenge with Terraform: Deploying the Static Website 🚀

Introduction 🌍

In the previous post, we set up our Terraform environment and outlined the architecture for our Cloud Resume Challenge project. Now it's time to start building! In this post, we'll focus on deploying the first component: the static website that will host our resume.

Frontend Architecture Overview 🏗️

Let's look at the specific architecture we'll implement for our frontend:

┌───────────┐     ┌────────────┐     ┌──────────┐     ┌────────────┐
│           │     │            │     │          │     │            │
│  Route 53 ├─────► CloudFront ├─────►    S3    │     │    ACM     │
│           │     │            │     │          │     │ Certificate│
└───────────┘     └────────────┘     └──────────┘     └────────────┘
      ▲                                    ▲                 │
      │                                    │                 │
      └────────────────────────────────────┴─────────────────┘
           DNS & Certificate Validation

The frontend consists of:

  1. S3 Bucket: Hosts our HTML, CSS, and JavaScript files
  2. CloudFront: Provides CDN capabilities for global distribution and HTTPS
  3. Route 53: Manages our custom domain's DNS
  4. ACM: Provides SSL/TLS certificate for HTTPS

My HTML/CSS Resume Design Approach 🎨

Before diving into Terraform, I spent some time creating my resume in HTML and CSS. Rather than starting from scratch, I decided to use a minimalist approach with a focus on readability.

Here's a snippet of my HTML structure:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Matthew's Cloud Resume</title>
    <link rel="stylesheet" href="styles.css">
</head>
<body>
    <header>
        <h1>Matthew Johnson</h1>
        <p>Cloud Engineer</p>
    </header>

    <section id="contact">
        <!-- Contact information -->
    </section>

    <section id="skills">
        <!-- Skills list -->
    </section>

    <section id="experience">
        <!-- Work experience -->
    </section>

    <section id="education">
        <!-- Education history -->
    </section>

    <section id="certifications">
        <!-- AWS certifications -->
    </section>

    <section id="projects">
        <!-- Project descriptions including this challenge -->
    </section>

    <section id="counter">
        <p>This page has been viewed <span id="count">0</span> times.</p>
    </section>

    <footer>
        <!-- Footer content -->
    </footer>

    <script src="counter.js"></script>
</body>
</html>

For CSS, I went with a responsive design that works well on both desktop and mobile devices:

:root {
    --primary-color: #0066cc;
    --secondary-color: #f4f4f4;
    --text-color: #333;
    --heading-color: #222;
}

body {
    font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
    line-height: 1.6;
    color: var(--text-color);
    max-width: 800px;
    margin: 0 auto;
    padding: 1rem;
}

header {
    text-align: center;
    margin-bottom: 2rem;
}

h1, h2, h3 {
    color: var(--heading-color);
}

section {
    margin-bottom: 2rem;
}

/* Responsive design */
@media (max-width: 600px) {
    body {
        padding: 0.5rem;
    }
}

These files will be uploaded to our S3 bucket once we've provisioned it with Terraform.

Deploying the Static Website with Terraform 🌐

Now, let's implement the Terraform code for our frontend infrastructure. We'll create modules for each component, starting with S3.

1. S3 Module for Website Hosting 📂

Create a file at modules/frontend/s3.tf:

resource "aws_s3_bucket" "website" {
  bucket = var.website_bucket_name

  tags = {
    Name        = "Resume Website"
    Environment = var.environment
    Project     = "Cloud Resume Challenge"
  }
}

resource "aws_s3_bucket_website_configuration" "website" {
  bucket = aws_s3_bucket.website.id

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "error.html"
  }
}

resource "aws_s3_bucket_cors_configuration" "website" {
  bucket = aws_s3_bucket.website.id

  cors_rule {
    allowed_headers = ["*"]
    allowed_methods = ["GET", "HEAD"]
    allowed_origins = ["*"]  # In production, restrict to your domain
    expose_headers  = ["ETag"]
    max_age_seconds = 3000
  }
}

# Note: this public-read policy is superseded once CloudFront is added —
# S3 allows only one bucket policy per bucket, so remove it in favour of
# the CloudFront-restricted policy defined in cloudfront.tf.
resource "aws_s3_bucket_policy" "website" {
  bucket = aws_s3_bucket.website.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "PublicReadGetObject"
        Effect    = "Allow"
        Principal = "*"
        Action    = "s3:GetObject"
        Resource  = "${aws_s3_bucket.website.arn}/*"
      }
    ]
  })
}

# Enable versioning for rollback capability
resource "aws_s3_bucket_versioning" "website" {
  bucket = aws_s3_bucket.website.id
  versioning_configuration {
    status = "Enabled"
  }
}

# Add encryption for security
resource "aws_s3_bucket_server_side_encryption_configuration" "website" {
  bucket = aws_s3_bucket.website.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

Notice that I've included CORS configuration, which will be essential later when we integrate with our API. I also added encryption and versioning for better security and disaster recovery.

2. ACM Certificate Module 🔒

Create a file at modules/frontend/acm.tf:

resource "aws_acm_certificate" "website" {
  domain_name       = var.domain_name
  validation_method = "DNS"

  subject_alternative_names = ["www.${var.domain_name}"]

  lifecycle {
    create_before_destroy = true
  }

  tags = {
    Name        = "Resume Website Certificate"
    Environment = var.environment
  }
}

resource "aws_acm_certificate_validation" "website" {
  certificate_arn         = aws_acm_certificate.website.arn
  validation_record_fqdns = [for record in aws_route53_record.certificate_validation : record.fqdn]

  # Wait for DNS propagation
  timeouts {
    create = "30m"
  }
}

3. Route 53 for DNS Configuration 📡

Create a file at modules/frontend/route53.tf:

data "aws_route53_zone" "selected" {
  name         = var.root_domain_name
  private_zone = false
}

resource "aws_route53_record" "website" {
  zone_id = data.aws_route53_zone.selected.zone_id
  name    = var.domain_name
  type    = "A"

  alias {
    name                   = aws_cloudfront_distribution.website.domain_name
    zone_id                = aws_cloudfront_distribution.website.hosted_zone_id
    evaluate_target_health = false
  }
}

resource "aws_route53_record" "www" {
  zone_id = data.aws_route53_zone.selected.zone_id
  name    = "www.${var.domain_name}"
  type    = "A"

  alias {
    name                   = aws_cloudfront_distribution.website.domain_name
    zone_id                = aws_cloudfront_distribution.website.hosted_zone_id
    evaluate_target_health = false
  }
}

resource "aws_route53_record" "certificate_validation" {
  for_each = {
    for dvo in aws_acm_certificate.website.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }

  allow_overwrite = true
  name            = each.value.name
  records         = [each.value.record]
  ttl             = 60
  type            = each.value.type
  zone_id         = data.aws_route53_zone.selected.zone_id
}

4. CloudFront Distribution for CDN and HTTPS 🌍

Create a file at modules/frontend/cloudfront.tf:

resource "aws_cloudfront_distribution" "website" {
  origin {
    domain_name = aws_s3_bucket.website.bucket_regional_domain_name
    origin_id   = "S3-${var.website_bucket_name}"

    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.website.cloudfront_access_identity_path
    }
  }

  enabled             = true
  is_ipv6_enabled     = true
  default_root_object = "index.html"
  aliases             = [var.domain_name, "www.${var.domain_name}"]
  price_class         = "PriceClass_100"

  default_cache_behavior {
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "S3-${var.website_bucket_name}"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "redirect-to-https"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
  }

  # Cache behaviors for specific patterns
  ordered_cache_behavior {
    path_pattern     = "*.js"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "S3-${var.website_bucket_name}"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "redirect-to-https"
    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
  }

  ordered_cache_behavior {
    path_pattern     = "*.css"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "S3-${var.website_bucket_name}"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "redirect-to-https"
    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
  }

  # Restrict access to North America and Europe
  restrictions {
    geo_restriction {
      restriction_type = "whitelist"
      locations        = ["US", "CA", "GB", "DE", "FR", "ES", "IT"]
    }
  }

  viewer_certificate {
    acm_certificate_arn      = aws_acm_certificate.website.arn
    ssl_support_method       = "sni-only"
    minimum_protocol_version = "TLSv1.2_2021"
  }

  # Add custom error response
  custom_error_response {
    error_code            = 404
    response_code         = 404
    response_page_path    = "/error.html"
    error_caching_min_ttl = 10
  }

  tags = {
    Name        = "Resume Website CloudFront"
    Environment = var.environment
  }

  depends_on = [aws_acm_certificate_validation.website]
}

resource "aws_cloudfront_origin_access_identity" "website" {
  comment = "Access identity for Resume Website CloudFront"
}

# Restrict bucket access to the CloudFront origin access identity.
# S3 allows only one bucket policy, so this replaces the public-read
# policy in s3.tf. (The service-principal/SourceArn form is for origin
# access control; the origin access identity used above needs its IAM
# ARN as the principal instead.)
resource "aws_s3_bucket_policy" "cloudfront_access" {
  bucket = aws_s3_bucket.website.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = "AllowCloudFrontOAIRead"
        Effect = "Allow"
        Principal = {
          AWS = aws_cloudfront_origin_access_identity.website.iam_arn
        }
        Action   = "s3:GetObject"
        Resource = "${aws_s3_bucket.website.arn}/*"
      }
    ]
  })
}

I've implemented several security enhancements:

  • Using an origin access identity so only CloudFront can read the S3 bucket
  • Restricting the content to specific geographic regions
  • Setting TLS to more modern protocols
  • Creating custom error pages
  • Adding better cache controls for different file types

5. Variables and Outputs 📝

Create files at modules/frontend/variables.tf and modules/frontend/outputs.tf:

variables.tf:

variable "website_bucket_name" {
  description = "Name of the S3 bucket to store website content"
  type        = string
}

variable "domain_name" {
  description = "Domain name for the website"
  type        = string
}

variable "root_domain_name" {
  description = "Root domain name to find Route 53 hosted zone"
  type        = string
}

variable "environment" {
  description = "Deployment environment (e.g., dev, prod)"
  type        = string
  default     = "dev"
}

outputs.tf:

output "website_bucket_name" {
  description = "Name of the S3 bucket hosting the website"
  value       = aws_s3_bucket.website.id
}

output "cloudfront_distribution_id" {
  description = "ID of the CloudFront distribution"
  value       = aws_cloudfront_distribution.website.id
}

output "website_domain" {
  description = "Domain name of the website"
  value       = var.domain_name
}

output "cloudfront_domain_name" {
  description = "CloudFront domain name"
  value       = aws_cloudfront_distribution.website.domain_name
}

6. Main Module Configuration 🔄

Now, let's create the main configuration in main.tf that uses our frontend module:

provider "aws" {
  region = "us-east-1"
}

module "frontend" {
  source = "./modules/frontend"

  website_bucket_name = "my-resume-website-${var.environment}"
  domain_name         = var.domain_name
  root_domain_name    = var.root_domain_name
  environment         = var.environment
}

In variables.tf at the root level:

variable "environment" {
  description = "Deployment environment (e.g., dev, prod)"
  type        = string
  default     = "dev"
}

variable "domain_name" {
  description = "Domain name for the website"
  type        = string
}

variable "root_domain_name" {
  description = "Root domain name to find Route 53 hosted zone"
  type        = string
}

7. Uploading Content to S3 📤

We can use Terraform to upload our website files to S3:

# Add to modules/frontend/s3.tf
resource "aws_s3_object" "html" {
  bucket       = aws_s3_bucket.website.id
  key          = "index.html"
  source       = "${path.module}/../../website/index.html"
  content_type = "text/html"
  etag         = filemd5("${path.module}/../../website/index.html")
}

resource "aws_s3_object" "css" {
  bucket       = aws_s3_bucket.website.id
  key          = "styles.css"
  source       = "${path.module}/../../website/styles.css"
  content_type = "text/css"
  etag         = filemd5("${path.module}/../../website/styles.css")
}

resource "aws_s3_object" "js" {
  bucket       = aws_s3_bucket.website.id
  key          = "counter.js"
  source       = "${path.module}/../../website/counter.js"
  content_type = "application/javascript"
  etag         = filemd5("${path.module}/../../website/counter.js")
}

resource "aws_s3_object" "error_page" {
  bucket       = aws_s3_bucket.website.id
  key          = "error.html"
  source       = "${path.module}/../../website/error.html"
  content_type = "text/html"
  etag         = filemd5("${path.module}/../../website/error.html")
}
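Declaring one `aws_s3_object` per file works for a small site, but if the site grows, a `for_each` over `fileset()` uploads everything under `website/` in a single resource. Here's a sketch — the MIME-type map only covers the extensions used in this project:

```hcl
# Hypothetical alternative: upload every file under website/ in one resource.
locals {
  mime_types = {
    html = "text/html"
    css  = "text/css"
    js   = "application/javascript"
  }
}

resource "aws_s3_object" "site" {
  for_each     = fileset("${path.module}/../../website", "**")
  bucket       = aws_s3_bucket.website.id
  key          = each.value
  source       = "${path.module}/../../website/${each.value}"
  content_type = lookup(local.mime_types, regex("[^.]+$", each.value), "binary/octet-stream")
  etag         = filemd5("${path.module}/../../website/${each.value}")
}
```

This also pairs nicely with the static site generator mod later in the series, where the generated output directory contains many files.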

Testing Your Deployment 🧪

After applying these Terraform configurations, you'll want to test that everything is working correctly:

# Initialize Terraform
terraform init

# Plan the deployment
terraform plan -var="domain_name=resume.yourdomain.com" -var="root_domain_name=yourdomain.com" -var="environment=dev"

# Apply the changes
terraform apply -var="domain_name=resume.yourdomain.com" -var="root_domain_name=yourdomain.com" -var="environment=dev"
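Rather than repeating the `-var` flags on every command, the same values can live in a `terraform.tfvars` file, which Terraform loads automatically (values below are placeholders):

```hcl
# terraform.tfvars — picked up automatically by terraform plan/apply
domain_name      = "resume.yourdomain.com"
root_domain_name = "yourdomain.com"
environment      = "dev"
```

With this file in place, plain `terraform plan` and `terraform apply` are enough.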

Once deployment is complete, verify:

  1. Your domain resolves to your CloudFront distribution
  2. HTTPS is working correctly
  3. Your resume appears as expected
  4. The website is accessible from different locations

Troubleshooting Common Issues ⚠️

During my implementation, I encountered several challenges:

  1. ACM Certificate Validation Delays: It can take up to 30 minutes for certificate validation to complete. Be patient or use the AWS console to monitor progress.

  2. CloudFront Distribution Propagation: CloudFront changes can take 15-20 minutes to propagate globally. If your site isn't loading correctly, wait and try again.

  3. S3 Bucket Policy Conflicts: If you receive errors about conflicting bucket policies, ensure that you're not applying multiple policies to the same bucket.

  4. CORS Configuration: Without proper CORS headers, your JavaScript won't be able to communicate with your API when we build it in the next post.

CORS Configuration for API Integration 🔄

The Cloud Resume Challenge requires a JavaScript visitor counter that communicates with an API. To prepare for this, I've added CORS configuration to our S3 bucket. When we implement the API in the next post, we'll need to ensure it allows requests from our domain.

Here's the JavaScript snippet we'll use for the counter (to be implemented fully in the next post):

// counter.js
document.addEventListener('DOMContentLoaded', function() {
  // We'll need to fetch from our API
  // Example: https://api.yourdomain.com/visitor-count

  // For now, just a placeholder
  document.getElementById('count').innerText = 'Loading...';

  // This will be implemented fully when we create our API
  // fetch('https://api.yourdomain.com/visitor-count')
  //   .then(response => response.json())
  //   .then(data => {
  //     document.getElementById('count').innerText = data.count;
  //   })
  //   .catch(error => console.error('Error fetching visitor count:', error));
});

Lessons Learned 💡

  1. Domain Verification: I initially struggled with ACM certificate validation. The key lesson was to ensure that the Route 53 hosted zone existed before attempting to create validation records.

  2. Terraform State Management: When modifying existing resources, it's important to understand how Terraform tracks state. A single typo can lead to resource recreation rather than updates.

  3. Performance Optimization: Adding specific cache behaviors for CSS and JS files significantly improved page load times. It's worth taking the time to optimize these settings.

  4. Security Considerations: Setting up proper bucket policies and CloudFront origin access identity is critical to prevent direct access to your S3 bucket while still allowing CloudFront to serve content.

Enhancements and Mods 🚀

Here are some ways to extend this part of the challenge:

Developer Mod: Static Site Generator

Instead of writing plain HTML/CSS, consider using a static site generator like Hugo or Jekyll:

  1. Install Hugo: brew install hugo (on macOS) or equivalent for your OS
  2. Create a new site: hugo new site resume-site
  3. Choose a theme or create your own
  4. Generate the site: hugo -D
  5. Modify your Terraform to upload the public directory contents to S3

This approach gives you templating capabilities, making it easier to update and maintain your resume.

DevOps Mod: Content Invalidation Lambda

Create a Lambda function that automatically invalidates CloudFront cache when new content is uploaded to S3:

resource "aws_lambda_function" "invalidation" {
  filename      = "lambda_function.zip"
  function_name = "cloudfront-invalidation"
  role          = aws_iam_role.lambda_role.arn
  handler       = "index.handler"
  runtime       = "nodejs18.x" # nodejs14.x is deprecated on Lambda

  environment {
    variables = {
      DISTRIBUTION_ID = aws_cloudfront_distribution.website.id
    }
  }
}

resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket = aws_s3_bucket.website.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.invalidation.arn
    events              = ["s3:ObjectCreated:*", "s3:ObjectRemoved:*"]
  }
}

Security Mod: Implement DNSSEC

To prevent DNS spoofing attacks, implement DNSSEC for your domain:

resource "aws_route53_key_signing_key" "example" {
  hosted_zone_id             = data.aws_route53_zone.selected.id
  key_management_service_arn = aws_kms_key.dnssec.arn
  name                       = "example"
}

resource "aws_route53_hosted_zone_dnssec" "example" {
  hosted_zone_id = aws_route53_key_signing_key.example.hosted_zone_id
}

data "aws_caller_identity" "current" {}

# Note: KMS keys for Route 53 DNSSEC must be created in us-east-1.
resource "aws_kms_key" "dnssec" {
  customer_master_key_spec = "ECC_NIST_P256"
  deletion_window_in_days  = 7
  key_usage                = "SIGN_VERIFY"
  policy = jsonencode({
    Statement = [
      {
        Action = [
          "kms:DescribeKey",
          "kms:GetPublicKey",
          "kms:Sign",
        ],
        Effect = "Allow",
        Principal = {
          Service = "dnssec-route53.amazonaws.com"
        },
        Resource = "*"
      },
      {
        # Key administration scoped to the account root rather than "*"
        Action = "kms:*",
        Effect = "Allow",
        Principal = {
          AWS = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root"
        },
        Resource = "*"
      }
    ]
    Version = "2012-10-17"
  })
}

Next Steps ⏭️

With our static website infrastructure in place, we now have a live resume hosted on AWS with a custom domain and HTTPS. In the next post, we'll build the backend API using API Gateway, Lambda, and DynamoDB to track visitor counts.

Stay tuned to see how we implement the serverless backend and connect it to our frontend!


Up Next: [Cloud Resume Challenge with Terraform: Building the Backend API] 🔗


Cloud Resume Challenge with Terraform: Introduction & Setup 🚀

Introduction 🌍

The Cloud Resume Challenge is a hands-on project designed to build a real-world cloud application while showcasing your skills in AWS, serverless architecture, and automation. Many implementations of this challenge use AWS SAM or manual setup via the AWS console, but in this series, I will demonstrate how to build the entire infrastructure using Terraform. 💡

My Journey to Terraform 🧰

When I first discovered the Cloud Resume Challenge, I was immediately intrigued by the hands-on approach to learning cloud technologies. Having some experience with traditional IT but wanting to transition to a more cloud-focused role, I saw this challenge as the perfect opportunity to showcase my skills.

I chose Terraform over AWS SAM or CloudFormation because:

  1. Multi-cloud flexibility - While this challenge focuses on AWS, Terraform skills transfer to Azure, GCP, and other providers
  2. Declarative approach - I find the HCL syntax more intuitive than YAML for defining infrastructure
  3. Industry adoption - In my research, I found that Terraform was highly sought after in job postings
  4. Strong community - The extensive module registry and community support made learning easier

This series reflects my personal journey through the challenge, including the obstacles I overcame and the lessons I learned along the way.

Why Terraform? 🛠️

Terraform allows for Infrastructure as Code (IaC), which:

  • Automates resource provisioning 🤖
  • Ensures consistency across environments ✅
  • Improves security by managing configurations centrally 🔒
  • Enables version control for infrastructure changes 📝

This series assumes basic knowledge of Terraform and will focus on highlighting key Terraform code snippets rather than full configuration files.

Project Overview 🏗️

Let's visualize the architecture we'll be building throughout this series:

Basic Project Diagram

AWS Services Used ☁️

The project consists of the following AWS components:

  • Frontend: Static website hosted on S3 and delivered via CloudFront.
  • Backend API: API Gateway, Lambda, and DynamoDB to track visitor counts.
  • Security: IAM roles, API Gateway security, and AWS Certificate Manager (ACM) for HTTPS 🔐.
  • Automation: CI/CD with GitHub Actions to deploy infrastructure and update website content ⚡.

Terraform Module Breakdown 🧩

To keep the infrastructure modular and maintainable, we will define Terraform modules for each major component:

  1. S3 Module 📂: Manages the static website hosting.
  2. CloudFront Module 🌍: Ensures fast delivery and HTTPS encryption.
  3. Route 53 Module 📡: Handles DNS configuration.
  4. DynamoDB Module 📊: Stores visitor count data.
  5. Lambda Module 🏗️: Defines the backend API logic.
  6. API Gateway Module 🔗: Exposes the Lambda function via a REST API.
  7. ACM Module 🔒: Provides SSL/TLS certificates for secure communication.

Setting Up Terraform ⚙️

Before deploying any resources, we need to set up Terraform and backend state management to store infrastructure changes securely.

1. Install Terraform & AWS CLI 🖥️

Ensure you have the necessary tools installed:

# Install Terraform
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform

# Install AWS CLI (Linux)
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

2. Configure AWS Credentials Securely 🔑

Terraform interacts with AWS via credentials. Setting these up securely is crucial to avoid exposing sensitive information.

Setting up AWS Account Structure

Following cloud security best practices, I recommend creating a proper AWS account structure:

  1. Create a management AWS account for your organization
  2. Enable Multi-Factor Authentication (MFA) on the root account
  3. Create separate AWS accounts for development and production environments
  4. Set up AWS IAM Identity Center (formerly SSO) for secure access

If you're just getting started, you can begin with a simpler setup:

# Configure AWS CLI with a dedicated IAM user (not root account)
aws configure

# Test your configuration
aws sts get-caller-identity

Set up IAM permissions for Terraform by ensuring your IAM user has the necessary policies for provisioning resources. Start with a least privilege approach and add permissions as needed.
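As a starting point, a least-privilege policy for the Terraform user might cover only state access, widening as each module is added. This is a sketch using the state bucket and lock table names from the backend configuration below:

```hcl
# Hypothetical least-privilege starting policy for the Terraform IAM user:
# state access only. Add service permissions (s3, cloudfront, route53,
# acm, lambda, dynamodb, ...) as each module is introduced.
data "aws_iam_policy_document" "terraform_state_access" {
  statement {
    actions   = ["s3:ListBucket"]
    resources = ["arn:aws:s3:::my-terraform-state-bucket"]
  }
  statement {
    actions   = ["s3:GetObject", "s3:PutObject"]
    resources = ["arn:aws:s3:::my-terraform-state-bucket/cloud-resume/*"]
  }
  statement {
    actions   = ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:DeleteItem"]
    resources = ["arn:aws:dynamodb:us-east-1:*:table/terraform-lock"]
  }
}
```

Expect a few iterations of plan/apply failures as you discover which permissions each new resource needs; that friction is the point of least privilege.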

3. Set Up Remote Backend for Terraform State 🏢

Using a remote backend (such as an S3 bucket) prevents local state loss and enables collaboration.

Project Directory Structure

Here's how I've organized my Terraform project:

cloud-resume-challenge/
├── modules/
│   ├── frontend/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   ├── backend/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   └── networking/
│       ├── main.tf
│       ├── variables.tf
│       └── outputs.tf
├── environments/
│   ├── dev/
│   │   └── main.tf
│   └── prod/
│       └── main.tf
├── terraform.tf (backend config)
├── variables.tf
├── outputs.tf
└── main.tf

Define the backend in terraform.tf:

terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "cloud-resume/state.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-lock"
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }

  required_version = ">= 1.2.0"
}

Create S3 Bucket and DynamoDB Table for Backend

Before you can use an S3 backend, you need to create the bucket and DynamoDB table. I prefer to do this via Terraform as well, using a separate configuration:

# backend-setup/main.tf
provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "terraform_state" {
  bucket = "my-terraform-state-bucket"
}

resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

Run these commands to set up your backend:

cd backend-setup
terraform init
terraform apply
cd ..
terraform init  # Initialize with the S3 backend

A Note on Security 🔒

Throughout this series, I'll be emphasizing security best practices. Some key principles to keep in mind:

  1. Never commit AWS credentials to your repository
  2. Use IAM roles with least privilege for all resources
  3. Enable encryption for sensitive data
  4. Implement proper security groups and network ACLs
  5. Regularly rotate credentials and keys

These principles will be applied to our infrastructure as we build it in the upcoming posts.

Lessons Learned 💡

In my initial attempts at setting up the Terraform environment, I encountered several challenges:

  1. State file management: I initially stored state locally, which caused problems when working from different computers. Switching to S3 backend solved this issue.

  2. Module organization: I tried several directory structures before settling on the current one. Organizing by component type rather than AWS service made the most sense for this project.

  3. Version constraints: Not specifying version constraints for providers led to unexpected behavior when Terraform updated. Always specify your provider versions!

Next Steps ⏭️

In the next post, we'll build the static website infrastructure with S3, CloudFront, Route 53, and ACM. We'll create Terraform modules for each component and deploy them together to host our resume.

Developer Mod: Advanced Terraform Techniques 🚀

If you're familiar with Terraform and want to take this challenge further, consider implementing these enhancements:

  1. Terraform Cloud Integration: Connect your repository to Terraform Cloud for enhanced collaboration and run history.

  2. Terratest: Add infrastructure tests using the Terratest framework to validate your configurations.

  3. Custom Terraform Modules: Create reusable modules and publish them to the Terraform Registry.

  4. Terraform Workspaces: Use workspaces to manage multiple environments (dev, staging, prod) within the same Terraform configuration.


Up Next: [Cloud Resume Challenge with Terraform: Deploying the Static Website] 🔗


📊 Monitoring an IIS-Based Web Farm with Azure Application Insights

In this guide, you'll learn how to:

✅ Set up Application Insights on an IIS-based web farm.
✅ Configure Log Analytics, Data Collection Rules, and Data Collection Endpoints.
✅ Use PowerShell to install the Application Insights agent.
✅ Monitor live metrics, failures, performance, and logs in real-time.

By the end, you'll have a fully monitored IIS-based web farm using Azure! 🎯


🏗️ Step 1: Enabling Application Insights on IIS Servers

To effectively monitor your IIS-based application, you need to configure Azure Application Insights and ensure all required components are installed on your Azure VMs.

🛠️ Prerequisites

Before proceeding, ensure you have:

  • An active Azure Subscription with permissions to create and manage resources.
  • A Log Analytics Workspace (LAW) to store collected telemetry data.
  • Azure Monitor Agent (AMA) installed on your IIS VMs.
  • Necessary permissions to create Data Collection Rules (DCRs) and Data Collection Endpoints (DCEs).

Create a Log Analytics Workspace

  1. Go to Azure Portal → Search for "Log Analytics Workspaces" → Create.
  2. Provide the following details:
     • Subscription: Select your Azure subscription.
     • Resource Group: Choose an existing one or create a new one.
     • Name: Enter a unique name (e.g., log-corpapp-prod-uksouth).
     • Region: Same as your IIS VMs.
  3. Click "Review + Create" and deploy the workspace.

🔗 Microsoft Learn: Log Analytics Workspace

Create a Data Collection Endpoint (DCE)

  1. Navigate to Monitor → Data Collection Endpoints.
  2. Click "+ Create" and provide:
     • Name: e.g., dce-corpapp-prod-uksouth.
     • Subscription & Resource Group: Same as your IIS VMs.
     • Region: Same as the Log Analytics Workspace.
  3. Review & create the endpoint.

🔗 Microsoft Learn: Data Collection Endpoints

Create a Data Collection Rule (DCR)

  1. Go to Monitor → Data Collection Rules → + Create.
  2. Configure:
     • Name: dcr-corpapp-iis-prod-uksouth
     • Subscription & Resource Group: Same as above.
     • Region: Same as the DCE & LAW.
  3. Define data sources:
     • Windows Event Logs: Add System, Application, etc.
     • Log Levels: Select the relevant levels (Error, Warning, Information).
  4. Set the destination:
     • Choose "Log Analytics Workspace" → Select the previously created workspace.
  5. Associate the rule with the IIS VMs (WEB01 - WEB05).
  6. Review & create the rule.

🔗 Microsoft Learn: Data Collection Rules

Install the Azure Monitor Agent (AMA)

  1. Navigate to each IIS VM.
  2. Under "Monitoring", select "Extensions".
  3. Click "+ Add" → AzureMonitorWindowsAgent → Install.
  4. Repeat for all IIS VMs.

🔗 Microsoft Learn: Azure Monitor Agent

Enable Application Insights

  1. Navigate to Azure Portal → Search for "Application Insights".
  2. Click "+ Create" and provide:
     • Subscription & Resource Group: Same as the VMs.
     • Name: insights-corpapp-prod-uksouth-001.
     • Region: Same as your IIS VMs.
     • Application Type: ASP.NET Web Application.
  3. Click "Review + Create" and deploy.

🔗 Microsoft Learn: Enable Application Insights

Install the Application Insights Agent

Use the following PowerShell script, run from an elevated session on each IIS server, to install the agent:

# Install the Application Insights Agent (Az.ApplicationMonitor module)
$instrumentationKey = "YOUR-INSTRUMENTATION-KEY"
Install-PackageProvider -Name NuGet -Force
Install-Module -Name Az.ApplicationMonitor -Force
Enable-ApplicationInsightsMonitoring -InstrumentationKey $instrumentationKey
# Restart IIS so the worker processes pick up the instrumentation
Restart-Service W3SVC

📊 Step 2: Using Application Insights for Monitoring

With everything set up, it's time to monitor and analyze application performance! 🔍

📌 Overview Dashboard

  • Displays high-level health metrics, failed requests, and response times. 📸 Insights Overview

📌 Application Map

  • Shows dependencies and interactions between components. 📸 Application Map

📌 Live Metrics

  • Monitor real-time requests, server performance, and failures. 📸 Live Metrics

📌 Failures & Exceptions

  • Identify and diagnose failed requests & top exceptions. 📸 Failures & Exceptions

📌 Performance Monitoring

  • Analyze response times, dependencies & bottlenecks. 📸 Performance Overview

📌 Logs & Queries

  • Run Kusto Query Language (KQL) queries for deep insights.

Example query to find failed requests:

requests
| where timestamp > ago(24h)
| where success == false
| project timestamp, name, resultCode, url
| order by timestamp desc

📸 Query Results
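
On the exceptions side, a similar query (using the standard `exceptions` table that Application Insights populates) surfaces the most frequent exception types over the last day:

```kql
exceptions
| where timestamp > ago(24h)
| summarize occurrences = count() by type
| top 5 by occurrences
```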


Next Steps

🎯 Continue monitoring logs & alerts for trends.
🎯 Optimize Application Insights sampling to reduce telemetry costs.
🎯 Automate reporting for key performance metrics.

By following this guide, you'll have a robust, real-time monitoring setup for your IIS web farm, ensuring optimal performance and quick issue resolution! 🚀


📢 Uninstalling PaperCut MF Client via Intune – A Step-by-Step Guide 🚀

🔍 Scenario Overview

Managing software across an enterprise can be a headache, especially when it comes to removing outdated applications. Recently, I needed to uninstall the PaperCut MF Client from multiple Windows PCs in my environment. The challenge? Ensuring a clean removal without user intervention and no leftover files.

Rather than relying on manual uninstallation, we used Microsoft Intune to deploy a PowerShell script that handles the removal automatically. This blog post details the full process, from script development to deployment and testing.


🎯 The Goal

✅ Uninstall the PaperCut MF Client silently
✅ Ensure no residual files are left behind
✅ Deploy the solution via Intune as a PowerShell script (NOT as a Win32 app)
✅ Test both locally and remotely before large-scale deployment


🛠 Step 1: Writing the Uninstall Script

We first created a PowerShell script to:

  1. Stop PaperCut-related processes
  2. Run the built-in uninstaller (unins000.exe) if present
  3. Use MSIEXEC to remove the MSI-based install
  4. Forcefully delete any remaining files and registry entries

📝 The Uninstall Script

# Define variables
$UninstallExePath = "C:\Program Files (x86)\PaperCut MF Client\unins000.exe"
$MsiProductCode = "{5B4B80DE-34C4-11E9-9CA9-F53BB8A68831}"  # Replace with actual Product Code
$LogFile = "C:\ProgramData\Custom-Intune-Scripts\Papercut-Uninstall.log"
$InstallPath = "C:\Program Files (x86)\PaperCut MF Client"

# Function to log output
Function Write-Log {
    param ([string]$Message)
    $TimeStamp = Get-Date -Format "yyyy-MM-dd HH:mm:ss"
    "$TimeStamp - $Message" | Out-File -Append -FilePath $LogFile
}

Write-Log "Starting PaperCut MF Client uninstallation process."

# Stop any running PaperCut processes before uninstalling
$Processes = @("pc-client", "pc-client-java", "pc-client-local-cache")  # Common PaperCut processes
foreach ($Process in $Processes) {
    if (Get-Process -Name $Process -ErrorAction SilentlyContinue) {
        Write-Log "Stopping process: $Process"
        Stop-Process -Name $Process -Force -ErrorAction SilentlyContinue
    }
}

# Check if unins000.exe exists
if (Test-Path $UninstallExePath) {
    Write-Log "Found unins000.exe at $UninstallExePath. Initiating uninstallation."
    Start-Process -FilePath $UninstallExePath -ArgumentList "/SILENT" -NoNewWindow -Wait
    Write-Log "Uninstallation process completed using unins000.exe."
} else {
    Write-Log "unins000.exe not found. Attempting MSI uninstallation using Product Code $MsiProductCode."
    Start-Process -FilePath "msiexec.exe" -ArgumentList "/x $MsiProductCode /qn /norestart" -NoNewWindow -Wait
}

# Forcefully delete the remaining installation folder
if (Test-Path $InstallPath) {
    Write-Log "Residual files found at $InstallPath. Attempting to remove forcefully."
    takeown /F "$InstallPath" /R /D Y | Out-Null
    icacls "$InstallPath" /grant Administrators:F /T /C /Q | Out-Null
    Remove-Item -Path $InstallPath -Recurse -Force -ErrorAction SilentlyContinue
    if (-not (Test-Path $InstallPath)) {
        Write-Log "SUCCESS: Residual files successfully removed."
    } else {
        Write-Log "ERROR: Failed to remove residual files. Manual intervention may be required."
    }
} else {
    Write-Log "No residual files found."
}

Write-Log "PaperCut MF Client uninstallation script execution finished."
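
If you don't know the MSI product code for your installed version, a quick registry query (a sketch — run it in an elevated session on a machine that still has the client) will list it; the GUID in `PSChildName` is the product code to paste into the script:

```powershell
# Find PaperCut entries in the 32-bit Uninstall registry hive
Get-ItemProperty "HKLM:\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*" |
    Where-Object { $_.DisplayName -like "*PaperCut*" } |
    Select-Object DisplayName, PSChildName, UninstallString
```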

🧪 Step 2: Testing the Script Locally

Before deploying via Intune, it's best to test locally:

  1. Open PowerShell as Administrator
  2. Run the script manually:
powershell.exe -ExecutionPolicy Bypass -File "C:\Path\To\Script.ps1"
  3. Verify:
     • Check C:\Program Files (x86)\PaperCut MF Client to confirm deletion
     • Check C:\ProgramData\Custom-Intune-Scripts\Papercut-Uninstall.log for success entries
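
Those manual checks can also be scripted. A small sketch, using the same paths as the uninstall script above:

```powershell
$installPath = "C:\Program Files (x86)\PaperCut MF Client"
$logPath     = "C:\ProgramData\Custom-Intune-Scripts\Papercut-Uninstall.log"

if (-not (Test-Path $installPath)) {
    Write-Host "OK: install folder removed."
} else {
    Write-Host "WARN: residual files remain at $installPath"
}

# Show the last few log entries for a quick sanity check
if (Test-Path $logPath) {
    Get-Content $logPath -Tail 5
}
```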

🌍 Step 3: Running the Script on a Remote PC

If you need to test the script remotely before deploying via Intune:

$RemotePC = "COMPUTER-NAME"  # Change this to the target PC name
Invoke-Command -ComputerName $RemotePC -FilePath "C:\Path\To\Script.ps1" -Credential (Get-Credential)

📡 Step 4: Deploying via Intune

Instead of packaging the script as a .intunewin file, we will deploy it as a PowerShell script in Intune.

🎯 Steps to Deploy in Intune

  1. Go to the Microsoft Endpoint Manager admin center (endpoint.microsoft.com)
  2. Navigate to Devices > Scripts
  3. Click Add > Windows 10 and later
  4. Upload the PowerShell script (Papercut-Uninstall.ps1)
  5. Configure settings:
     • Run script using the logged-on credentials? → No (runs as SYSTEM)
     • Enforce script signature check? → No
     • Run script in 64-bit PowerShell Host? → Yes
  6. Assign the script to device groups (not users)
  7. Monitor deployment status in Intune

📌 Final Thoughts

By using Intune and PowerShell, we successfully automated the silent uninstallation of PaperCut MF Client. This approach ensures a zero-touch removal with no residual files, keeping endpoints clean and manageable. 🚀

Got questions or need enhancements? Drop them in the comments! 😊
