Web App Deployment on AWS using Lift and Shift Approach
This project demonstrates the design and deployment of a scalable, secure, and highly available application architecture on Amazon Web Services (AWS). The solution leverages core AWS services such as Elastic Load Balancing, EC2 Auto Scaling, Amazon S3, Route 53, and private DNS zones to build a production-ready environment. The architecture was designed to follow modern cloud best practices, ensuring performance, scalability, and reliability while maintaining secure access controls.
The architecture consists of three primary layers:
Frontend Layer (User Access & Load Balancing)
End users connect through HTTPS using a custom domain name managed via DNS zones.
An Application Load Balancer (ALB) distributes incoming traffic securely across multiple Tomcat application server instances.
Security Groups are applied to ensure only HTTPS and necessary application traffic (e.g., port 8080) are allowed.
Application Layer (Compute & Auto Scaling)
Tomcat Instances handle application logic and serve dynamic requests.
Instances are deployed within an Auto Scaling Group to ensure elasticity—automatically adding or removing instances based on traffic demand.
This ensures high availability and cost efficiency.
Backend Layer (Data & Messaging)
Includes MySQL Instances for relational database storage.
Memcache Instances improve performance by caching frequently accessed data.
RabbitMQ Instances enable message queuing for asynchronous communication.
Each backend service is deployed inside its own Security Group for controlled access.
Amazon S3: Used for storing static assets (such as images, CSS, and JavaScript files). This offloads workload from the compute layer and delivers faster content to users.
Amazon Route 53: Provides domain registration, global DNS resolution, and health checks for routing traffic to healthy instances.
DNS Private Zones: Private DNS entries (db01, mc01, rmq01) map to backend services, simplifying communication between components without exposing IP addresses publicly.
All traffic is encrypted using HTTPS at the load balancer level.
Security Groups segment layers of the architecture:
ALB allows only HTTPS traffic from users.
Tomcat instances only accept traffic from the ALB.
Backend services (MySQL, Memcache, RabbitMQ) only accept traffic from the application layer.
This layered approach aligns with the principle of least privilege.
The Auto Scaling Group ensures the application can handle sudden increases or decreases in traffic by adjusting the number of EC2 instances.
Using multiple instances across Availability Zones ensures high availability, minimizing downtime if a single instance or zone fails.
Elastic Load Balancer works seamlessly with Auto Scaling to distribute requests evenly among healthy instances.
Scalability – Auto Scaling adapts resources dynamically.
High Availability – Redundancy across multiple instances and zones ensures reliability.
Security – Security Groups and HTTPS protect against unauthorized access.
Performance – Memcache speeds up data retrieval, RabbitMQ optimizes communication, and S3 improves static content delivery.
Flexibility – Route 53 and DNS zones enable easy service discovery and domain management.
Cost Optimization – Pay-as-you-go model with Auto Scaling prevents overprovisioning.
This type of architecture is suitable for:
E-commerce platforms requiring high uptime and quick response times.
Enterprise web applications handling unpredictable workloads.
SaaS products that need elasticity and reliability.
Dynamic web applications that integrate messaging queues and caching layers.
Traditional on-premises or datacenter workloads face multiple challenges:
Complex management of servers and teams.
Scaling limitations—manual provisioning when traffic increases.
High upfront CapEx and recurring OpEx costs.
Time-consuming manual operations with a high risk of human error.
Limited flexibility compared to modern cloud environments.
Migrate and run an existing application workload from on-premises infrastructure to AWS Cloud using a Lift and Shift strategy. The goal is to:
Reduce infrastructure complexity.
Enable elastic scalability.
Minimize cost through pay-as-you-go pricing.
Improve availability, reliability, and security.
Automate repetitive infrastructure tasks.
Instead of maintaining workloads in a datacenter, the application is hosted on AWS using Infrastructure as a Service (IaaS).
Key benefits:
Pay-as-you-go pricing model.
Elastic scaling with Auto Scaling Groups.
Managed security with IAM, Security Groups, ACM, and Route 53.
Automation through user-data scripts, AMIs, and ASGs.
Amazon EC2 – Compute instances for application & backend services (Tomcat, MySQL, Memcached, RabbitMQ).
Elastic Load Balancer (ALB) – Distributes traffic securely across Tomcat instances.
Auto Scaling Group (ASG) – Automatically scales Tomcat instances based on load.
Amazon S3 – Stores application artifacts for deployment.
Amazon Route 53 – Private DNS resolution for backend services.
AWS Certificate Manager (ACM) – Manages SSL/TLS certificates for HTTPS traffic.
IAM – Provides secure access control & instance roles.
Amazon EBS – Persistent storage for EC2 instances.
Architecture Description
Users access the application through a domain name (e.g., myapp.com).
Route 53 resolves the domain to the Application Load Balancer (ALB).
ACM provides SSL certificates so traffic flows via HTTPS.
The ALB accepts requests (port 443) and forwards them to Tomcat instances (port 8080).
Tomcat instances (in an Auto Scaling Group) run the application and connect to backend services.
Backend services (MySQL, RabbitMQ, Memcached) run on dedicated EC2 instances in a separate Security Group.
Application artifacts are built locally, uploaded to S3, and deployed to Tomcat servers.
Step-by-Step Execution Process
This section provides a detailed walkthrough of how I implemented the cloud migration project, from initial setup to a fully functional and scalable architecture. Each step covers the key actions taken and the AWS services used.
Step 1: Login to AWS Management Console
I logged into the AWS Console as an IAM user (secured with MFA), set the target region, and ensured billing alarms were in place. I also confirmed I had permissions to create EC2, ALB, S3, Route 53, ACM, and IAM resources.
Why it matters: Ensures you're operating in the right region with a least-privilege account.
Step 2: Create a Key Pair for Secure Access
Generated an EC2 Key Pair for secure SSH access to the instances (useful for debugging). Downloaded the private key (.pem) and secured it locally (chmod 400 my-key.pem).
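For reference, a minimal AWS CLI sketch of the same step (the key name matches the chmod example above):

    # Create the key pair and save the private key locally
    aws ec2 create-key-pair --key-name my-key --query 'KeyMaterial' --output text > my-key.pem
    # Restrict permissions so SSH accepts the key file
    chmod 400 my-key.pem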
Step 3: Create Security Groups
Configured three Security Groups:
Load Balancer SG – Allows inbound HTTPS (443) from the internet.
Tomcat Application SG – Allows inbound traffic only from Load Balancer SG on port 8080 and SSH (22) from my IP.
Backend Services SG – Allows inbound traffic only from Application SG on MySQL (3306), Memcache (11211), and RabbitMQ (5672).
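The rules above can be expressed with the AWS CLI roughly as follows (the group IDs, VPC context, and my IP are placeholders; the real IDs come from the create-security-group output):

    # Load balancer SG: HTTPS from the internet
    aws ec2 authorize-security-group-ingress --group-id sg-0alb0000 --protocol tcp --port 443 --cidr 0.0.0.0/0
    # Application SG: 8080 only from the ALB SG, SSH only from my IP
    aws ec2 authorize-security-group-ingress --group-id sg-0app0000 --protocol tcp --port 8080 --source-group sg-0alb0000
    aws ec2 authorize-security-group-ingress --group-id sg-0app0000 --protocol tcp --port 22 --cidr 203.0.113.10/32
    # Backend SG: MySQL, Memcache, RabbitMQ only from the application SG
    for port in 3306 11211 5672; do
      aws ec2 authorize-security-group-ingress --group-id sg-0back0000 --protocol tcp --port $port --source-group sg-0app0000
    done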
Step 4: Launch EC2 Instances
Provisioned EC2 instances for:
Tomcat application server
MySQL database
RabbitMQ message broker
Memcache caching service
Each instance was placed in the correct Security Group and initialized with a User Data script that installs and configures its service on first boot, so every server is ready without manual setup; a sample script is sketched below.
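As an illustration, the user-data script for the Memcache instance might look like the following (Amazon Linux 2 package names and config path are assumptions; the MySQL and RabbitMQ scripts follow the same pattern):

    #!/bin/bash
    # Hypothetical user-data for the Memcache instance (Amazon Linux 2 assumed)
    yum install -y memcached
    # Bind to all interfaces; the Security Group still restricts who can connect
    sed -i 's/127.0.0.1/0.0.0.0/g' /etc/sysconfig/memcached
    systemctl enable --now memcached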
Step 5: Configure Route 53 Private Hosted Zone
Created a private hosted zone in Route 53 for internal service discovery. Added records such as:
db01.myprofile.in → MySQL instance
mc01.myprofile.in → Memcache
rmq01.myprofile.in → RabbitMQ
Each A record maps a service name to the corresponding instance's private IP.
Why it matters: Using names instead of IPs simplifies app configuration and future scaling.
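These records can be created from the CLI along these lines (the zone ID, VPC ID, and private IP are placeholders):

    # Create the private hosted zone attached to the VPC
    aws route53 create-hosted-zone --name myprofile.in --caller-reference "$(date +%s)" \
      --vpc VPCRegion=us-east-1,VPCId=vpc-0123456789abcdef0 \
      --hosted-zone-config PrivateZone=true
    # Map db01 to the MySQL instance's private IP (mc01 and rmq01 follow the same pattern)
    aws route53 change-resource-record-sets --hosted-zone-id Z0EXAMPLE \
      --change-batch '{"Changes":[{"Action":"CREATE","ResourceRecordSet":{"Name":"db01.myprofile.in","Type":"A","TTL":300,"ResourceRecords":[{"Value":"172.31.10.11"}]}}]}'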
Step 6: Build Application Artifact with Maven
On my local development machine, I built the Java web application artifact (.war) with Maven, running the test suite as part of the build. This required the JDK and AWS CLI to be installed.
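The build itself is a standard Maven invocation; assuming the project's pom produces the artifact named in Step 7:

    # Compile, run the test suite, and package the WAR
    mvn clean install
    # The artifact lands in target/ (name per the project's pom)
    ls target/vprofile-v2.war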
Step 7: Upload Artifact to S3
Uploaded the .war artifact to an S3 bucket that serves as central storage for deployment artifacts: created the bucket, uploaded vprofile-v2.war to s3://myprofile-las-artifacts16/artifacts/, and created an IAM policy granting read access to that specific path.
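A sketch of the upload and the read-only policy (bucket and path from above; the policy file name is arbitrary):

    # Create the bucket and upload the artifact
    aws s3 mb s3://myprofile-las-artifacts16
    aws s3 cp target/vprofile-v2.war s3://myprofile-las-artifacts16/artifacts/
    # Read-only IAM policy scoped to the artifacts path (attach to the instance role)
    cat > artifact-read-policy.json <<'EOF'
    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::myprofile-las-artifacts16/artifacts/*"
      }]
    }
    EOF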
Step 8: Download and Deploy Artifact to Tomcat Server
Launched the Tomcat EC2 instance in the application Security Group (sg-app) with a user-data script that installs Java and Tomcat, installs the AWS CLI, and downloads the WAR from S3 into /opt/tomcat/webapps/. I also created a simple /health endpoint for load balancer health checks.
Why it matters: Automating deployment reduces manual steps and drift.
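A condensed sketch of that user-data script (Amazon Linux 2 and a Tomcat layout under /opt/tomcat are assumptions):

    #!/bin/bash
    # Install Java and the CLI; Tomcat installation itself omitted here for brevity
    yum install -y java-11-amazon-corretto awscli
    # Fetch the artifact from S3 and deploy it as the default application
    aws s3 cp s3://myprofile-las-artifacts16/artifacts/vprofile-v2.war /tmp/
    cp /tmp/vprofile-v2.war /opt/tomcat/webapps/ROOT.war
    systemctl restart tomcat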
Step 9: Set Up Application Load Balancer (ALB)
Created an Application Load Balancer (ALB) with:
Listener on HTTPS (443) using an ACM certificate.
Target Group forwarding requests to Tomcat instances on port 8080.
Health checks configured for /health endpoint.
Created a Target Group (protocol HTTP, port 8080) with health check path /health. Created an internet-facing ALB in at least two AZs, attached sg-alb, and pointed the listener to the Target Group.
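The equivalent CLI calls, with IDs as placeholders:

    # Target group for Tomcat on 8080 with the /health check
    aws elbv2 create-target-group --name tomcat-tg --protocol HTTP --port 8080 \
      --vpc-id vpc-0123456789abcdef0 --health-check-path /health
    # Internet-facing ALB across two AZs, using the ALB Security Group
    aws elbv2 create-load-balancer --name app-alb \
      --subnets subnet-0aaa0000 subnet-0bbb0000 --security-groups sg-0alb0000

The HTTPS listener itself is added in Step 10, once the ACM certificate exists.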
Step 10: Configure SSL with AWS Certificate Manager (ACM)
Generated an SSL/TLS certificate in ACM and attached it to the ALB, ensuring encrypted HTTPS traffic.
Requested a certificate in ACM and validated it via DNS, then attached the certificate to the ALB HTTPS (443) listener.
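A sketch of those two calls (domain from the Architecture Description; ARNs are placeholders):

    # Request a DNS-validated certificate
    aws acm request-certificate --domain-name myapp.com --validation-method DNS
    # Attach it to the ALB's 443 listener, forwarding to the Tomcat target group
    aws elbv2 create-listener --load-balancer-arn <alb-arn> --protocol HTTPS --port 443 \
      --certificates CertificateArn=<cert-arn> \
      --default-actions Type=forward,TargetGroupArn=<tg-arn>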
Step 11: Map Domain with Route 53 & Namecheap
Configured DNS so that the custom domain (purchased from Namecheap) points to the ALB endpoint via Route 53.
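Assuming Namecheap delegates the domain to the Route 53 name servers, the alias record looks roughly like this (zone IDs and the ALB DNS name are placeholders):

    aws route53 change-resource-record-sets --hosted-zone-id Z0PUBLICZONE \
      --change-batch '{"Changes":[{"Action":"CREATE","ResourceRecordSet":{"Name":"myapp.com","Type":"A","AliasTarget":{"HostedZoneId":"Z0ALBZONE","DNSName":"app-alb-123.us-east-1.elb.amazonaws.com","EvaluateTargetHealth":true}}}]}'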
Step 12: Test HTTPS Access
Verified secure access by opening the domain in a browser and from the CLI, confirming the application loaded successfully over HTTPS.
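For example, from the CLI:

    # Check the TLS handshake and response headers
    curl -I https://myapp.com
    # Hit the health endpoint created in Step 8
    curl https://myapp.com/health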
Step 13: Create AMI of Tomcat Instance
Created an Amazon Machine Image (AMI) of the working Tomcat instance, with Java, Tomcat, and the startup script already in place. The Auto Scaling Group uses this AMI as a template to launch consistent instances.
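CLI equivalent (instance ID is a placeholder):

    # Create an AMI from the working Tomcat instance
    aws ec2 create-image --instance-id i-0123456789abcdef0 \
      --name tomcat-app-v1 --description "Tomcat app server template for the ASG"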
Step 14: Configure Auto Scaling
Set up an Auto Scaling Group (ASG) with the following parameters:
Launch Template using Tomcat AMI.
Minimum: 1 instance, Maximum: 4 instances.
Scaling policies based on CPU utilization.
Created a Launch Template referencing the AMI, instance type, Security Groups, and IAM instance profile, then created the ASG (min: 1, desired: 2, max: 4) attached to the ALB target group. Configured a target-tracking scaling policy based on CPU utilization (e.g., target CPU = 60%) or request count per target.
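A CLI sketch of those calls (names, subnets, and the target group ARN are placeholders):

    # ASG from the launch template, registered with the ALB target group
    aws autoscaling create-auto-scaling-group --auto-scaling-group-name tomcat-asg \
      --launch-template LaunchTemplateName=tomcat-lt,Version='$Latest' \
      --min-size 1 --desired-capacity 2 --max-size 4 \
      --target-group-arns arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/tomcat-tg/0123456789abcdef \
      --vpc-zone-identifier "subnet-0aaa0000,subnet-0bbb0000"
    # Target-tracking policy holding average CPU near 60%
    aws autoscaling put-scaling-policy --auto-scaling-group-name tomcat-asg \
      --policy-name cpu-60 --policy-type TargetTrackingScaling \
      --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":60.0}'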
Step 15: Final Testing and Validation
Conducted end-to-end testing:
Confirmed ALB health checks reported healthy instances and requests were routed correctly.
Generated load (CPU stress test) to trigger scaling and observed the ASG launch new instances.
Verified Tomcat logs and confirmed backend services (MySQL, RabbitMQ, Memcache) were reachable via private DNS, with caching and messaging working as expected.
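The load test can be as simple as the following (the stress package comes from the EPEL repository on Amazon Linux 2):

    # On an app instance: burn CPU for 5 minutes to cross the 60% target
    sudo amazon-linux-extras install epel -y && sudo yum install -y stress
    stress --cpu 4 --timeout 300
    # From the workstation: watch the ASG add instances
    aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names tomcat-asg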
Conclusion
This project successfully migrated a monolithic application from an on-premises data center to AWS, implementing:
Load balancing with SSL
Secure multi-tier architecture
Automated deployments using S3 and User Data
Auto Scaling for elasticity
DNS-based service discovery
By completing this project, I demonstrated my ability to design, implement, and optimize AWS-based production workloads, following cloud best practices in scalability, availability, and security.