Over the course of this blog series, we've successfully completed the Cloud Resume Challenge using Terraform as our infrastructure-as-code tool. Let's recap what we've accomplished:
Set up our development environment with Terraform and AWS credentials
Deployed a static website using S3, CloudFront, Route 53, and ACM
Built a serverless backend API with API Gateway, Lambda, and DynamoDB
Implemented CI/CD pipelines with GitHub Actions for automated deployments
Added security enhancements like OIDC authentication and least-privilege IAM policies
The final architecture ties these pieces together: Route 53 and ACM in front of a CloudFront distribution serving the S3-hosted resume, a visitor-counter API built on API Gateway, Lambda, and DynamoDB, and GitHub Actions pipelines deploying both halves.
The most valuable aspect of this project is that we've built a completely automated, production-quality cloud solution. Every component is defined as code, enabling us to track changes, roll back if needed, and redeploy the entire infrastructure with minimal effort.
Challenge: As the project grew, managing Terraform state became more complex, especially when working across different environments.
Solution: I restructured the project to use workspaces and remote state with careful output references between modules. This improved state organization and made multi-environment deployments more manageable.
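As a sketch of what that restructuring can look like (the module path, bucket, key, and output names here are illustrative, not the exact ones from my repo), a root configuration can key off `terraform.workspace` for the environment and read another stack's outputs via `terraform_remote_state`:

```hcl
# Illustrative only: names and paths are placeholders
data "terraform_remote_state" "backend" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state-bucket"
    key    = "resume/backend/terraform.tfstate"
    region = "us-east-1"
  }
}

locals {
  # terraform.workspace is "dev", "prod", etc., chosen via `terraform workspace select`
  environment = terraform.workspace
}

module "frontend" {
  source       = "./modules/frontend"
  environment  = local.environment
  api_endpoint = data.terraform_remote_state.backend.outputs.api_endpoint
}
```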
The ability to define, version, and automate infrastructure is increasingly essential in modern IT environments. This project showcases expertise with Terraform that can be applied to any cloud provider or on-premises infrastructure.
Setting up CI/CD workflows that automate testing and deployment demonstrates key DevOps skills that organizations need to accelerate their development cycles.
The backend API implementation shows understanding of event-driven, serverless architecture patterns that are becoming standard for new cloud applications.
The security considerations throughout the project - from IAM roles to OIDC authentication - demonstrate the ability to build secure systems from the ground up.
While this solution is relatively inexpensive, it's good practice to set up AWS Budgets and alerts so an unexpected charge surfaces as a notification rather than a surprise on the bill.
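A minimal budget alarm looks something like this; the limit amount and email address are placeholders to adapt:

```hcl
# Sketch of a monthly cost alert (threshold and email are placeholders)
resource "aws_budgets_budget" "monthly" {
  name         = "resume-monthly-budget"
  budget_type  = "COST"
  limit_amount = "10"
  limit_unit   = "USD"
  time_unit    = "MONTHLY"

  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 80
    threshold_type             = "PERCENTAGE"
    notification_type          = "FORECASTED"
    subscriber_email_addresses = ["you@example.com"]
  }
}
```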
Enhance reliability and performance by deploying to multiple AWS regions:
module"frontend_us_east_1"{source="./modules/frontend"providers={aws=aws.us_east_1} # Configuration for US East region}module"frontend_eu_west_1"{source="./modules/frontend"providers={aws=aws.eu_west_1} # Configuration for EU West region}resource"aws_route53_health_check""primary_region"{fqdn=module.frontend_us_east_1.cloudfront_domain_nameport=443type="HTTPS"resource_path="/"failure_threshold=3request_interval=30}resource"aws_route53_record""global"{zone_id=data.aws_route53_zone.selected.zone_idname=var.domain_nametype="CNAME"failover_routing_policy{type="PRIMARY"}health_check_id=aws_route53_health_check.primary_region.idset_identifier="primary"records=[module.frontend_us_east_1.cloudfront_domain_name]ttl=300}
I've moved from basic cloud knowledge to being able to architect and implement complex, multi-service solutions. The hands-on experience with Terraform has been particularly valuable, as it's a highly sought-after skill in the job market.
This project now serves as both my resume and a demonstration of my cloud engineering capabilities. I've included the GitHub repository links on my resume, allowing potential employers to see the code behind the deployment.
Sharing this project through blog posts has connected me with the broader cloud community. The feedback and discussions have been invaluable for refining my approach and learning from others.
The Cloud Resume Challenge has been an invaluable learning experience. By implementing it with Terraform, I've gained practical experience with both AWS services and infrastructure as code - skills that are directly applicable to professional cloud engineering roles.
What makes this challenge particularly powerful is how it combines so many aspects of modern cloud development:
Front-end web development
Back-end serverless APIs
Infrastructure as code
CI/CD automation
Security implementation
DNS configuration
Content delivery networks
If you're following along with this series, I encourage you to customize and extend the project to showcase your unique skills and interests. The foundational architecture we've built provides a flexible platform that can evolve with your career.
For those just starting their cloud journey, this challenge offers a perfect blend of practical skills in a realistic project that demonstrates end-to-end capabilities. It's far more valuable than isolated tutorials or theoretical knowledge alone.
The cloud engineering field continues to evolve rapidly, but the principles we've applied throughout this project - automation, security, scalability, and operational excellence - remain constants regardless of which specific technologies are in favor.
While this concludes our Cloud Resume Challenge series, my cloud learning journey continues. Some areas I'm exploring next include:
Kubernetes and container orchestration
Infrastructure testing frameworks
Cloud cost optimization
Multi-cloud deployments
Infrastructure security scanning
Service mesh implementations
I hope this series has been helpful in your own cloud journey. Feel free to reach out with questions or to share your own implementations of the challenge!
This post concludes our Cloud Resume Challenge with Terraform series. Thanks for following along!
In our previous posts, we built the frontend and backend components of our cloud resume project. Now it's time to take our implementation to the next level by implementing continuous integration and deployment (CI/CD) with GitHub Actions.
When I first started this challenge, I manually ran `terraform apply` every time I made a change. This quickly became tedious and error-prone. As a cloud engineer, I wanted to demonstrate a professional approach to infrastructure management by implementing proper CI/CD pipelines.
Automating deployments offers several key benefits:
Consistency: Every deployment follows the same process
Efficiency: No more manual steps or waiting around
Safety: Automated tests catch issues before they reach production
Auditability: Each change is tracked with a commit and workflow run
This approach mirrors how professional cloud teams work and is a crucial skill for any cloud engineer.
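Before the IAM role, the AWS account needs an OIDC identity provider that trusts GitHub's token issuer. A minimal sketch (verify the thumbprint against GitHub's current documentation before relying on it):

```hcl
# oidc-provider.tf -- sketch; confirm the thumbprint is still current
resource "aws_iam_openid_connect_provider" "github" {
  url             = "https://token.actions.githubusercontent.com"
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = ["6938fd4d98bab03faadb97b34396831e3780aea1"]
}
```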
Then, create an IAM role that GitHub Actions can assume:
```hcl
# oidc-role.tf
resource "aws_iam_role" "github_actions" {
  name = "github-actions-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRoleWithWebIdentity"
      Effect = "Allow"
      Principal = {
        Federated = aws_iam_openid_connect_provider.github.arn
      }
      Condition = {
        StringEquals = {
          "token.actions.githubusercontent.com:aud" = "sts.amazonaws.com"
        }
        StringLike = {
          "token.actions.githubusercontent.com:sub" = "repo:${var.github_org}/${var.github_repo}:*"
        }
      }
    }]
  })
}

# Attach policies to the role
resource "aws_iam_role_policy_attachment" "terraform_permissions" {
  role       = aws_iam_role.github_actions.name
  policy_arn = aws_iam_policy.terraform_permissions.arn
}

resource "aws_iam_policy" "terraform_permissions" {
  name        = "terraform-deployment-policy"
  description = "Policy for Terraform deployments via GitHub Actions"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = [
        "s3:*", "cloudfront:*", "route53:*", "acm:*",
        "lambda:*", "apigateway:*", "dynamodb:*", "logs:*",
        "iam:GetRole", "iam:PassRole", "iam:CreateRole", "iam:DeleteRole",
        "iam:PutRolePolicy", "iam:DeleteRolePolicy",
        "iam:AttachRolePolicy", "iam:DetachRolePolicy"
      ]
      Effect   = "Allow"
      Resource = "*"
    }]
  })
}
```
For a production environment, I would use more fine-grained permissions, but this policy works for our demonstration.
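For illustration, a tighter policy might pin actions to specific resource ARNs instead of `*`; this sketch uses placeholder names (bucket, account ID, table) that would come from your own stack:

```hcl
# Sketch only: scope each action set to the resources the pipeline actually manages
resource "aws_iam_policy" "terraform_permissions_scoped" {
  name = "terraform-deployment-policy-scoped"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = ["s3:GetObject", "s3:PutObject", "s3:ListBucket"]
        Effect = "Allow"
        Resource = [
          "arn:aws:s3:::my-resume-site",   # placeholder bucket
          "arn:aws:s3:::my-resume-site/*"
        ]
      },
      {
        Action   = ["dynamodb:GetItem", "dynamodb:UpdateItem", "dynamodb:DescribeTable"]
        Effect   = "Allow"
        Resource = "arn:aws:dynamodb:us-east-1:111111111111:table/visitor-counter" # placeholder
      }
    ]
  })
}
```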
Let's create a simple Cypress test to verify that our visitor counter is working. First, create a package.json file in the root of your frontend repository:
{"name":"cloud-resume-frontend","version":"1.0.0","description":"Frontend for Cloud Resume Challenge","scripts":{"test":"cypress open","test:ci":"cypress run"},"devDependencies":{"cypress":"^12.0.0"}}
Then create a Cypress test at tests/cypress/integration/counter.spec.js:
```javascript
describe('Resume Website Tests', () => {
  beforeEach(() => {
    // Visit the home page before each test
    cy.visit('/');
  });

  it('should load the resume page', () => {
    // Check that we have a title
    cy.get('h1').should('be.visible');

    // Check that key sections exist
    cy.contains('Experience').should('be.visible');
    cy.contains('Education').should('be.visible');
    cy.contains('Skills').should('be.visible');
  });

  it('should load and display the visitor counter', () => {
    // Check that the counter element exists
    cy.get('#count').should('exist');

    // Wait for the counter to update (should not remain at 0)
    cy.get('#count', { timeout: 10000 })
      .should('not.contain', '0')
      .should('not.contain', 'Loading');

    // Verify the counter shows a number
    cy.get('#count').invoke('text').then(parseFloat).should('be.gt', 0);
  });
});
```
One of the most valuable CI/CD patterns is deploying to multiple environments. Let's modify our backend workflow to support both development and production environments:
```yaml
# Additional job for production deployment after dev is successful
promote-to-prod:
  name: 'Promote to Production'
  runs-on: ubuntu-latest
  needs: test-api
  environment: production
  if: github.event_name == 'workflow_dispatch'
  steps:
    - name: Checkout Repository
      uses: actions/checkout@v3

    - name: Configure AWS Credentials
      uses: aws-actions/configure-aws-credentials@v2
      with:
        role-to-assume: arn:aws:iam::${{ secrets.AWS_ACCOUNT_ID }}:role/github-actions-role
        aws-region: us-east-1

    - name: Setup Terraform
      uses: hashicorp/setup-terraform@v2
      with:
        terraform_version: 1.2.0

    - name: Terraform Init
      working-directory: ./terraform/environments/prod
      run: terraform init -backend-config="bucket=${{ secrets.TF_STATE_BUCKET }}" -backend-config="key=${{ secrets.TF_STATE_KEY_PROD }}" -backend-config="region=us-east-1"

    - name: Terraform Plan
      working-directory: ./terraform/environments/prod
      run: terraform plan -var="environment=prod" -var="domain_name=${{ secrets.DOMAIN_NAME_PROD }}" -out=tfplan

    - name: Terraform Apply
      working-directory: ./terraform/environments/prod
      run: terraform apply -auto-approve tfplan

    - name: Test Production API
      run: |
        API_ENDPOINT=$(aws cloudformation describe-stacks --stack-name resume-backend-prod --query "Stacks[0].Outputs[?OutputKey=='ApiEndpoint'].OutputValue" --output text)
        response=$(curl -s "$API_ENDPOINT/count")
        echo "API Response: $response"
        # Check if the response contains a count field
        echo $response | grep -q '"count":'
        if [ $? -eq 0 ]; then
          echo "Production API test successful"
        else
          echo "Production API test failed"
          exit 1
        fi
```
Create a file at .github/dependabot.yml in both repositories:
```yaml
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
    open-pull-requests-limit: 10

  # For frontend
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
    open-pull-requests-limit: 10

  # For backend
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"
    open-pull-requests-limit: 10
```
This configuration automatically updates dependencies and identifies security vulnerabilities.
Create a file at tests/integration-test.js in the frontend repository:
```javascript
const axios = require('axios');
const assert = require('assert');

// URLs to test - these should be passed as environment variables
const WEBSITE_URL = process.env.WEBSITE_URL || 'https://resume.yourdomain.com';
const API_URL = process.env.API_URL || 'https://api.yourdomain.com/count';

// Test that the API returns a valid response
async function testAPI() {
  try {
    console.log(`Testing API at ${API_URL}`);
    const response = await axios.get(API_URL);

    // Verify the API response contains a count
    assert(response.status === 200, `API returned status ${response.status}`);
    assert(response.data.count !== undefined, 'API response missing count field');
    assert(typeof response.data.count === 'number', 'Count is not a number');

    console.log(`API test successful. Count: ${response.data.count}`);
    return true;
  } catch (error) {
    console.error('API test failed:', error.message);
    return false;
  }
}

// Test that the website loads and contains necessary elements
async function testWebsite() {
  try {
    console.log(`Testing website at ${WEBSITE_URL}`);
    const response = await axios.get(WEBSITE_URL);

    // Verify the website loads
    assert(response.status === 200, `Website returned status ${response.status}`);

    // Check that the page contains some expected content
    assert(response.data.includes('<html'), 'Response is not HTML');
    assert(response.data.includes('id="count"'), 'Counter element not found');

    console.log('Website test successful');
    return true;
  } catch (error) {
    console.error('Website test failed:', error.message);
    return false;
  }
}

// Run all tests
async function runTests() {
  const apiResult = await testAPI();
  const websiteResult = await testWebsite();

  if (apiResult && websiteResult) {
    console.log('All integration tests passed!');
    process.exit(0);
  } else {
    console.error('Some integration tests failed');
    process.exit(1);
  }
}

// Run the tests
runTests();
```
Implementing CI/CD for this project taught me several valuable lessons:
Start Simple, Then Iterate: My first workflow was basic - just syncing files to S3. As I gained confidence, I added testing, multiple environments, and security features.
Security Is Non-Negotiable: Using OIDC for authentication instead of long-lived credentials was a game-changer for security. This approach follows AWS best practices and eliminates credential management headaches.
Test Everything: Automated tests at every level (unit, integration, end-to-end) catch issues early. The time invested in writing tests paid off with more reliable deployments.
Environment Separation: Keeping development and production environments separate allowed me to test changes safely before affecting the live site.
Infrastructure as Code Works: Using Terraform to define all infrastructure components made the CI/CD process much more reliable. Everything is tracked, versioned, and repeatable.
During implementation, I encountered several challenges:
CORS Issues: The API and website needed proper CORS configuration to work together. Adding the correct headers in both Lambda and API Gateway fixed this.
Environment Variables: Managing different configurations for dev and prod was tricky. I solved this by using GitHub environment variables and separate Terraform workspaces.
Cache Invalidation Delays: Changes to the website sometimes weren't visible immediately due to CloudFront caching. Adding proper cache invalidation to the workflow fixed this.
State Locking: When multiple workflow runs executed simultaneously, they occasionally conflicted on Terraform state. Using DynamoDB for state locking resolved this issue (see the backend sketch after this list).
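The fix is a one-line addition to the backend configuration; the bucket and table names here are placeholders:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"  # placeholder
    key            = "resume/backend/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"       # the locking table
    encrypt        = true
  }
}
```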
And add a cleanup job to delete the preview environment when the PR is closed:
```yaml
cleanup_preview:
  name: 'Cleanup Preview Environment'
  runs-on: ubuntu-latest
  if: github.event_name == 'pull_request' && github.event.action == 'closed'
  steps:
    # Similar to create_preview but with terraform destroy
```
To enhance the security of our API, I added API key authentication using AWS Secrets Manager:
```hcl
# Create a secret to store the API key
resource "aws_secretsmanager_secret" "api_key" {
  name        = "resume-api-key-${var.environment}"
  description = "API key for the Resume API"
}

# Generate a random API key
resource "random_password" "api_key" {
  length  = 32
  special = false
}

# Store the API key in Secrets Manager
resource "aws_secretsmanager_secret_version" "api_key" {
  secret_id     = aws_secretsmanager_secret.api_key.id
  secret_string = random_password.api_key.result
}

# Add API key to API Gateway
resource "aws_api_gateway_api_key" "visitor_counter" {
  name = "visitor-counter-key-${var.environment}"
}

resource "aws_api_gateway_usage_plan" "visitor_counter" {
  name = "visitor-counter-usage-plan-${var.environment}"

  api_stages {
    api_id = aws_api_gateway_rest_api.visitor_counter.id
    stage  = aws_api_gateway_deployment.visitor_counter.stage_name
  }

  quota_settings {
    limit  = 1000
    period = "DAY"
  }

  throttle_settings {
    burst_limit = 10
    rate_limit  = 5
  }
}

resource "aws_api_gateway_usage_plan_key" "visitor_counter" {
  key_id        = aws_api_gateway_api_key.visitor_counter.id
  key_type      = "API_KEY"
  usage_plan_id = aws_api_gateway_usage_plan.visitor_counter.id
}

# Update the Lambda function to verify the API key
resource "aws_lambda_function" "visitor_counter" {
  # ... existing configuration ...

  environment {
    variables = {
      DYNAMODB_TABLE = aws_dynamodb_table.visitor_counter.name
      ALLOWED_ORIGIN = var.website_domain
      API_KEY_SECRET = aws_secretsmanager_secret.api_key.name
    }
  }
}
```
Then, modify the Lambda function to retrieve and validate the API key:
```python
import boto3
import json
import os

# Initialize Secrets Manager client
secretsmanager = boto3.client('secretsmanager')


def get_api_key():
    """Retrieve the API key from Secrets Manager"""
    secret_name = os.environ['API_KEY_SECRET']
    response = secretsmanager.get_secret_value(SecretId=secret_name)
    return response['SecretString']


def lambda_handler(event, context):
    # Verify API key
    api_key = event.get('headers', {}).get('x-api-key')
    expected_api_key = get_api_key()

    if api_key != expected_api_key:
        return {
            'statusCode': 403,
            'headers': {'Content-Type': 'application/json'},
            'body': json.dumps({
                'error': 'Forbidden',
                'message': 'Invalid API key'
            })
        }

    # Rest of the function...
```
With our CI/CD pipelines in place, our Cloud Resume Challenge implementation is complete! In the final post, we'll reflect on the project as a whole, discuss lessons learned, and explore potential future enhancements.
Up Next: [Cloud Resume Challenge with Terraform: Final Thoughts & Lessons Learned] 🔗
The Cloud Resume Challenge is a hands-on project designed to build a real-world cloud application while showcasing your skills in AWS, serverless architecture, and automation. Many implementations of this challenge use AWS SAM or manual setup via the AWS console, but in this series, I will demonstrate how to build the entire infrastructure using Terraform. 💡
When I first discovered the Cloud Resume Challenge, I was immediately intrigued by the hands-on approach to learning cloud technologies. Having some experience with traditional IT but wanting to transition to a more cloud-focused role, I saw this challenge as the perfect opportunity to showcase my skills.
I chose Terraform over AWS SAM or CloudFormation because:
Multi-cloud flexibility - While this challenge focuses on AWS, Terraform skills transfer to Azure, GCP, and other providers
Declarative approach - I find the HCL syntax more intuitive than YAML for defining infrastructure
Industry adoption - In my research, I found that Terraform was highly sought after in job postings
Strong community - The extensive module registry and community support made learning easier
This series reflects my personal journey through the challenge, including the obstacles I overcame and the lessons I learned along the way.
Following cloud security best practices, I recommend creating a proper AWS account structure:
Create a management AWS account for your organization
Enable Multi-Factor Authentication (MFA) on the root account
Create separate AWS accounts for development and production environments
Set up AWS IAM Identity Center (formerly SSO) for secure access
If you're just getting started, you can begin with a simpler setup:
```bash
# Configure AWS CLI with a dedicated IAM user (not root account)
aws configure

# Test your configuration
aws sts get-caller-identity
```
Set up IAM permissions for Terraform by ensuring your IAM user has the necessary policies for provisioning resources. Start with a least privilege approach and add permissions as needed.
Before you can use an S3 backend, you need to create the bucket and DynamoDB table. I prefer to do this via Terraform as well, using a separate configuration:
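Here's a minimal sketch of that bootstrap configuration (resource names are placeholders); it has to be applied once with local state before any other stack can use the backend:

```hcl
# bootstrap/main.tf -- apply once with local state
resource "aws_s3_bucket" "tf_state" {
  bucket = "my-terraform-state-bucket"  # placeholder; bucket names are global
}

# Versioning lets you recover earlier state files if something goes wrong
resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_dynamodb_table" "tf_lock" {
  name         = "terraform-state-lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"  # the key Terraform's S3 backend expects

  attribute {
    name = "LockID"
    type = "S"
  }
}
```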
In my initial attempts at setting up the Terraform environment, I encountered several challenges:
State file management: I initially stored state locally, which caused problems when working from different computers. Switching to S3 backend solved this issue.
Module organization: I tried several directory structures before settling on the current one. Organizing by component type rather than AWS service made the most sense for this project.
Version constraints: Not specifying version constraints for providers led to unexpected behavior when Terraform updated. Always specify your provider versions! (A minimal pinning block follows this list.)
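A minimal pinning block, for reference (the versions shown are examples, not a recommendation):

```hcl
terraform {
  required_version = ">= 1.2.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"  # pin a major version to avoid surprise upgrades
    }
  }
}
```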
In the next post, we'll build the static website infrastructure with S3, CloudFront, Route 53, and ACM. We'll create Terraform modules for each component and deploy them together to host our resume.
Using the method detailed in this post, I successfully passed the AZ-400 exam while creating a reusable study system. This approach helped me transform 34+ hours of MSLearn content into structured, searchable revision notes that I could quickly reference during my exam preparation.
Let me walk you through how I developed this system and how you can apply it to your own certification journey.
Studying for Microsoft certification exams like AZ-400 can be overwhelming due to the vast amount of content available. Microsoft Learn alone provides over 34 hours of recommended reading, making it difficult to retain everything effectively.
To tackle this challenge, I developed a structured method using MSLearn, third-party exam questions, and ChatGPT to create a comprehensive revision guide. This method helped me organize knowledge into concise notes, cheat sheets, glossaries, and knowledge checks, ultimately leading to a successful exam pass!
This guide documents my step-by-step process so that you can replicate or adapt it for your own Microsoft exam preparation.
To ensure comprehensive coverage of the exam syllabus, I structured my studies around the official Microsoft Learn learning paths. Each path covers a key topic required for AZ-400 certification, including DevOps principles, CI/CD, infrastructure as code, and security best practices. I systematically worked through these collections, summarizing important concepts, capturing key insights, and using ChatGPT to refine the content into structured notes.
Below are the learning paths I followed, each linking directly to its respective Microsoft Learn module:
These resources formed the foundation of my study plan, ensuring alignment with the official exam objectives. I used these collections as the basis for my revision notes, AI-generated summaries, and knowledge checks.
Before diving into the detailed steps, here's an overview of the complete workflow:
MSLearn Content → Link Collection → ChatGPT Summarization → GitHub Storage → Practice Testing → Final Review
Estimated time investment per module:
Manual link collection: ~15 minutes
AI summarization and refinement: ~30-60 minutes
Review and validation: ~30 minutes
Total per module: ~1.25-1.75 hours (compared to 3-4 hours of traditional study)
These estimates are based on my experience after processing several modules. As you'll see in the learning curve section below, your first few modules might take longer as you refine your workflow.
Let's dive into each step of the process in detail.
Initial Setup: I created a dedicated folder structure on my computer with sub-folders for each learning path, mirroring the eventual GitHub repository structure.
After each lesson: Captured all relevant hyperlinks and stored them in a .txt file within the appropriate folder. This was as simple as copy-pasting links while reading.
At the end of each module: Consolidated all links into the text file and organized them by topic.
Mapped content to official exam objectives: Fed the exam study guide into ChatGPT to check alignment, ensuring I wasn't missing critical areas.
Here's an example of the kind of structured note this process produced:

```markdown
## Branch Policies in Azure Repos

Branch policies help teams protect important branches by:

- Requiring code reviews before merging
- Setting minimum number of reviewers (typically 2+)
- Enforcing build validation to prevent broken code
- Restricting direct pushes to protected branches

### Key Configuration Options:

| Policy | Purpose | Real-world Usage |
|--------|---------|------------------|
| Minimum reviewers | Ensures code quality | Set to 2+ for production code |
| Build validation | Prevents broken builds | Configure with main CI pipeline |
| Comment resolution | Tracks issue fixes | Require resolution before merge |
```
Lesson Learned: Consistent link collection during the learning process is much more efficient than trying to gather everything after completing a module. I developed a habit of copying links as I encountered them, which saved significant time later.
💡 Future Improvement: Automating this process via a script could save time. A PowerShell or Python script could potentially scrape MSLearn modules for relevant links.
To turn raw MSLearn material into usable study notes, I fed collected links into ChatGPT and asked it to scrape and summarize key points.
I used ChatGPT 4 for this process, as it provided better context handling and more accurate summaries than earlier versions.
The summarization workflow consisted of the following steps:
1️⃣ Collected MSLearn Links – Compiled all module-related links into a text file.
2️⃣ Fed the Links into ChatGPT – Asked ChatGPT to analyze and summarize key information.
3️⃣ Refined the Output Iteratively – Adjusted prompts to enhance clarity and completeness.
Well-structured prompts were essential for generating clear and accurate summaries. Below is an example of my initial prompt:
ChatGPT prompt - first iteration
Please create a .md file in the same format as the previous ones and include the following:
Summarize key information within each unit, including diagrams, tables, and exercises and labs.
List steps performed and order of steps/workflow, where applicable.
Use tables primarily for comparing differences between items.
Include:
Key exam points.
Points to remember.
Prerequisite information.
Include any service limits - maximum minutes per month for a particular tier, different services available in varying tiers/SKUs, for example.
Permissions required for activities.
Provide real-world applications, troubleshooting scenarios, and advanced tips.
Highlight common pitfalls or mistakes to avoid.
Review the canvas and add/remove any relevant information.
Use the web to search for supplementary material where necessary, and summarize this information within the notes.
Avoid external links—include all relevant information directly in the notes.
Ensure all "Learning objectives" in Unit 1 are met by the material included in the output .md file(s)
Ensure no content is included that doesn't have a real-world example or exam-related reference.
Review the output you have created at the end, and make any further improvements automatically by manually revising the file or implementing your comments.
Here is a list of the links contained in this module.
Using the parameters outlined above create a comprehensive exam cram resource cheat sheet, that can be used for my AZ-400 exam prep.
The resulting output needs to contain material relevant to the AZ-400 study guide:
https://learn.microsoft.com/en-us/credentials/certifications/resources/study-guides/az-400
Let me know when you are ready for the module links?
While this prompt worked initially, I found it had several limitations:
It was too lengthy and complex for ChatGPT to consistently follow all instructions
Initial Output (before the improved prompt):

```markdown
## Key Exam Points

- Understand the different branch policies in Azure Repos
- Know how to configure pull request approvals
- Understand branch policy permissions
```
Problems:
Too generic with "understand" and "know how" statements
Lacks specific examples and actionable information
No clear formatting structure
Refined Output (After Improved Prompt):
```markdown
## Branch Policies in Azure DevOps

### Key Exam Points

- Branch policies in Azure Repos protect branches by enforcing code review and build validation
- Required reviewers policy must be configured with minimum count (2+ recommended for production)
- Build validation policy links CI pipeline to PR process, ensuring code builds successfully
- Policy bypasses can be granted to specific users or groups (Project Administrators have bypass by default)
- Branch policies are set at repository level under Branches → [...] → Branch Policies

### Common Scenarios

When setting up branch policies for a large team:

1. Configure minimum 2 reviewers for main branch
2. Enable "Comment resolution" to require addressing feedback
3. Link build validation to prevent broken builds
4. Set reset votes when new changes are pushed
```
Using AI to generate structured content wasn't always seamless. Here are some key challenges and how I addressed them:
| Challenge | Solution | Example |
|-----------|----------|---------|
| ChatGPT lost context in long sessions | Processed module-by-module instead of bulk inputs | Split "Azure Pipelines" module into 3 separate prompts |
| Overwrote useful content in iterations | Manually saved outputs before requesting refinements | Created checkpoint files labeled v1, v2, etc. |
| Large data inputs led to incomplete summaries | Used multiple iterations, focusing on key areas of each module | First pass: core concepts; second pass: examples and scenarios |
| Hallucinations on technical details | Cross-validated against official documentation | Corrected service limits and permission details |
| Generic "understand X" statements | Explicitly requested specific actionable information | Replaced "Understand CI/CD" with actual pipeline YAML examples |
Breaking down content into smaller chunks and applying manual validation helped ensure better results.
Learning Curve: My first module took nearly 2 hours to process completely, as I was still figuring out the optimal prompt structure and workflow. By my fifth module, I had reduced this to about 45 minutes through improved prompting and a more streamlined approach.
To improve content accuracy, I introduced an additional review prompt:
ChatGPT prompt - second iteration
Objective:
Create a .md file that acts as a comprehensive AZ-400 exam cram resource cheat sheet.
Instructions:
Act as my Azure DevOps training expert with a focus on preparing me for the AZ-400 exam.
The output must adhere to the structure and content requirements outlined below:
Content Requirements:
Each file should contain no more than 750 words (excluding text that makes up hyperlinks)
Summarize Key Information:
Include summaries for each unit, diagram, table, exercise, and lab where applicable.
Use clear and concise explanations.
List Steps/Workflows:
Summarize steps performed in labs/exercises and the order of steps/workflows where applicable.
Use Tables:
Create tables primarily for comparing differences between items (examples include, but are not limited to: features, tiers, SKUs).
Key Exam Points: Highlight crucial information likely to appear in the exam and provide actual examples.
Do not use generic phrases like "Understand...." and "Know how to....".
I need you to provide the information I need to know for each exam tip.
Points to Remember: Provide concise, high-priority notes for studying.
Prerequisite Information: Mention anything needed to understand or implement concepts.
Service Limits: Include tier limitations (e.g., maximum minutes per month), service availability by SKU, etc.
Permissions Required: Specify roles/permissions necessary for activities.
Practical Applications:
Provide real-world applications, troubleshooting scenarios, and advanced tips.
Highlight common pitfalls or mistakes to avoid.
Relevance:
Ensure the output aligns with the Microsoft AZ-400 study guide
(https://learn.microsoft.com/en-us/credentials/certifications/resources/study-guides/az-400)
Exclude any content that lacks real-world examples or exam-related references.
Final Review:
Evaluate the output to ensure all "Learning Objectives" in Unit 1 are met.
Automatically revise the file manually if needed to enhance clarity and completeness.
Prompt me for a list of URLs or an existing .md file when you have understood the instructions.
Depending on the results, I would often break the prompt up further and reuse just a specific part. For example, once I was happy with a given output, I would re-enter the "Final Review" instruction ("Evaluate the output to ensure all 'Learning Objectives' in Unit 1 are met. Automatically revise the file if needed to enhance clarity and completeness."), once or several times, until I was happy with the finished output.
A typical module would go through 2-3 iterations:
Initial generation - Creates the basic structure and content
Content enhancement - Adds real-world examples and specifics
Final validation - Checks against learning objectives and improves clarity
For complex topics like Azure Pipelines, I might need 4-5 iterations to fully refine the content.
A full module processing cycle typically took about 30-45 minutes, compared to 2-3 hours of traditional study and note-taking. The time investment was front-loaded, but paid dividends during revision.
Based on my experience, these additional resources provided the best value:
Tutorials Dojo Practice Exams - Excellent explanations and documentation links
MeasureUp Official Practice Tests - Most similar to actual exam format
WhizLabs Labs - Hands-on practice for key scenarios
The combination of AI-summarized MSLearn content and targeted practice questions created a comprehensive exam preparation strategy.
Real-World Application Example: During a practice exam, I encountered a question about configuring branch policies with required reviewers. Using my GitHub repository's search function, I quickly found the related notes I had created, which included the exact setting location and recommended configuration values. This allowed me to answer correctly and understand the underlying concept, rather than just memorizing an answer.
A typical lookup went like this:

1. A practice question asks, "Which Azure DevOps deployment strategy minimizes downtime during releases?"
2. One of the answers mentions "Blue/Green" deployment.
3. Search the repository for "Blue/Green".
4. Results show multiple matching files.
5. Quickly identify that "Blue/Green deployment" is the correct answer based on my notes.
6. Verify with the documentation reference that Blue/Green deployments maintain two identical production environments, allowing instant switching between versions.
During practice exams, I could typically locate key information in under 30 seconds using this method, compared to several minutes when using traditional notes or searching documentation directly.
These figures are based on my own experience and tracking of study time. Your results may vary depending on your familiarity with the subject matter and the tools involved. The key insight is that the most significant time savings came from condensing the initial reading phase while maintaining or even improving knowledge retention through structured notes.
🔹 Further break down the ChatGPT input process into smaller steps
🔹 Explore alternative AI tools like Claude or Bard to compare summary quality
🔹 Consider automating link collection from MSLearn using a simple web scraper
🔹 Create a standardized template for each module type from the beginning
🔹 Add more visual elements like diagrams to represent complex relationships
Here's a simplified prompt template to get you started:
I'm studying for the [EXAM CODE] certification. Please help me create concise, exam-focused notes for the following module: [MODULE NAME]
For each key concept, please:
1. Explain it in 1-2 sentences
2. Provide a real-world example or scenario
3. Note any configuration options or limitations
4. Mention if it's likely to appear on the exam
Please format your response in Markdown with clear headings and avoid generic "understand X" statements.
Here are the links to the module content:
[PASTE LINKS HERE]
1️⃣ Set up a GitHub repo for your notes.
2️⃣ Manually collect MSLearn hyperlinks as you study.
3️⃣ Use ChatGPT to summarize module-by-module.
4️⃣ Validate third-party questions with official docs.
5️⃣ Store and search your notes in GitHub for quick reference.
Q: Is this approach considered cheating?
A: No. This method enhances learning by actively engaging with the material rather than replacing study. You're creating custom notes by directing AI to extract and organize information you need to know.
Q: How much technical knowledge do I need to implement this approach?
A: Basic GitHub knowledge and familiarity with markdown formatting are helpful but not essential. The core process can be adapted to use any note-taking system.
Q: Does this work for all Microsoft certification exams?
A: Yes, this approach works well for any exam with structured MSLearn paths.
Q: How do you handle inaccurate information from AI?
A: Always verify key technical details against official documentation. When in doubt, trust Microsoft's documentation over AI-generated content.
Q: How long did it take you to become proficient with this workflow?
A: After completing about 3-4 modules, I had established an efficient workflow. The learning curve is relatively quick if you're already familiar with GitHub and ChatGPT.
This method made my exam prep structured and efficient, though it required significant manual effort. If you're preparing for a Microsoft certification, consider trying this approach!
The combination of AI-powered summarization, structured GitHub storage, and focused practice testing created a powerful study system that both saved time and improved retention.
The most valuable aspect wasn't just passing the exam, but creating a reusable knowledge base that continues to serve as a reference in my professional work. While traditional study methods might help you pass an exam, this approach helps build a lasting resource.
💡 Have you used AI tools for exam prep? Share your thoughts in the comments!