Wednesday, July 9, 2025

AWS Project day 2

 Class 63rd AWS Project day 2, July 9th

Using all of the services below:

IAM,          Completed
KMS (encrypt the password),  Completed
VPC,
EC2 (LB, AS, EBS, EFS),
Local to RDS, Completed
DynamoDB,
Lambda,       Completed
API Gateway,
S3,           Completed
CloudFront,   Completed
Route53,
ACM

Last class (62nd) the above services were completed. Images are stored in a private S3 bucket
and should be retrieved from that private bucket through CloudFront.
Step1:
Create a CloudFront distribution URL, selecting the private S3 origin while creating it.

Step2: I have uploaded app_cloudfront.py; take it as reference, rename it to app.py, and execute:
 >python app.py
https://github.com/Vakatisubbu/DigitalLibrary


Step3: In the image column, the CloudFront URL of the image is stored in the database.
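Step3 can be sketched in Python: the stored S3 object URL is rewritten to its CloudFront equivalent before it is saved in the image column. The distribution domain below is a hypothetical placeholder; use the domain shown on your own CloudFront distribution.

```python
# Sketch: keep the object key, replace the S3 host with the CloudFront domain.
from urllib.parse import urlparse

CLOUDFRONT_DOMAIN = "d1234abcd.cloudfront.net"  # hypothetical distribution domain

def to_cloudfront_url(s3_url: str) -> str:
    """Return the same object served through CloudFront instead of S3."""
    key = urlparse(s3_url).path.lstrip("/")
    return f"https://{CLOUDFRONT_DOMAIN}/{key}"

print(to_cloudfront_url("https://ccitpublicbucket.s3.amazonaws.com/uploads/pic.jpg"))
# https://d1234abcd.cloudfront.net/uploads/pic.jpg
```

Because the bucket is private, the object is reachable only through the distribution, not through the raw S3 URL.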


Step4: Using KMS, the password is encrypted.
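A minimal sketch of the KMS step, assuming a boto3 KMS client and a hypothetical key alias alias/ccit-app-key: the password is encrypted with KMS and base64-encoded so the ciphertext can be stored in the MySQL password column.

```python
# Sketch: encrypt/decrypt a password with AWS KMS. The key alias is a
# hypothetical example; real calls need AWS credentials and an existing key.
import base64

def encrypt_password(kms_client, key_id: str, plaintext: str) -> str:
    """Encrypt with KMS, base64-encode the ciphertext for storage in MySQL."""
    resp = kms_client.encrypt(KeyId=key_id, Plaintext=plaintext.encode())
    return base64.b64encode(resp["CiphertextBlob"]).decode()

def decrypt_password(kms_client, ciphertext_b64: str) -> str:
    """Base64-decode, then let KMS decrypt (the key is inferred from the blob)."""
    resp = kms_client.decrypt(CiphertextBlob=base64.b64decode(ciphertext_b64))
    return resp["Plaintext"].decode()

# Real usage (requires AWS credentials and an existing KMS key):
#   import boto3
#   kms = boto3.client("kms", region_name="us-east-1")
#   stored = encrypt_password(kms, "alias/ccit-app-key", "admin123")
#   print(decrypt_password(kms, stored))
```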


Using Lambda in our project

lambda_function.py


import json
import pymysql

db_config = {  
    'host': 'aurorabd.csn64oem2jvs.us-east-1.rds.amazonaws.com',
    'user': 'admin',
    'password': 'admin123',
    'database': 'digital_library'
}

def lambda_handler(event, context):
    try:
        # 🔍 Debug full event structure
        print("FULL EVENT:", json.dumps(event))

        # With a non-proxy integration, the mapped template arrives as the event itself
        body = event if isinstance(event, dict) else {}
        print("Query Params:", json.dumps(body))

        # Extract
        name = body.get('name')
        mobile = body.get('mobile')
        email = body.get('email')
        password = body.get('password')
        gender = body.get('gender')
        location = body.get('location')
        image = body.get('image')

        if not all([name, mobile, email, password, gender, location, image]):
            return {
                "statusCode": 400,
                "body": json.dumps({
                    "status": "fail",
                    "message": "Missing required fields",
                    "received": body  # <-- debug: show what was received
                })
            }

        conn = pymysql.connect(**db_config)
        cursor = conn.cursor()

        cursor.execute("SELECT * FROM users WHERE email = %s", (email,))
        if cursor.fetchone():
            cursor.close()
            conn.close()
            return {
                "statusCode": 409,
                "body": json.dumps({"status": "fail", "message": "Email already exists"})
            }

        cursor.execute("""
            INSERT INTO users (name, mobile, email, password, gender, location, image)
            VALUES (%s, %s, %s, %s, %s, %s, %s)
        """, (name, mobile, email, password, gender, location, image))
        conn.commit()
        cursor.close()
        conn.close()

        return {
            "statusCode": 200,
            "body": json.dumps({"status": "success", "message": "User created successfully"})
        }

    except Exception as e:
        print("Exception:", str(e))
        return {
            "statusCode": 500,
            "body": json.dumps({"status": "error", "message": str(e)})
        }


C:\Users\Administrator\Desktop\Lambda>pip install pymysql -t.
Collecting pymysql
  Using cached PyMySQL-1.1.1-py3-none-any.whl.metadata (4.4 kB)
Using cached PyMySQL-1.1.1-py3-none-any.whl (44 kB)
Installing collected packages: pymysql
Successfully installed pymysql-1.1.1

Select these files and zip them:

pymysql
PyMySQL-1.1.1.dist-info
lambda_function.py

Step5: Upload the zip file 
Click Deploy
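The zip from Step5 can also be built from Python; a sketch assuming the folder layout above (lambda_function.py plus the pymysql package directory). The key detail is that entries must sit at the archive root, not inside a subfolder, or Lambda will not find the handler.

```python
# Sketch: build the Lambda deployment zip programmatically.
import os
import zipfile

def build_lambda_zip(src_dir: str, zip_path: str) -> None:
    """Zip everything under src_dir, keeping entries at the archive root."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(src_dir):
            for name in files:
                full = os.path.join(root, name)
                # arcname relative to src_dir keeps lambda_function.py at the root
                zf.write(full, os.path.relpath(full, src_dir))

# Example (path from the transcript above):
#   build_lambda_zip(r"C:\Users\Administrator\Desktop\Lambda", "function.zip")
```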
Step6: 
API Gateway > REST API > Resources > SignUp > Create Method > GET


Click Edit on the integration request.
Mapping template:
{
   "name": "$input.params('name')",
   "mobile": "$input.params('mobile')",
   "email": "$input.params('email')",
   "password":"$input.params('password')",
   "gender": "$input.params('gender')",
   "location":"$input.params('location')",
   "image": "$input.params('image')"
}
Click Save 
Click Deploy API 
Step7:
Tested with query string parameters.
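The Step7 test can also be scripted from Python: a sketch of building the signup GET request with query string parameters. The invoke URL is a placeholder (use the URL shown after Deploy API for your own stage), and the field values other than the email/password used elsewhere in these notes are made up.

```python
# Sketch: build the signup GET request the mapping template expects.
from urllib.parse import urlencode

INVOKE_URL = "https://abc123.execute-api.us-east-1.amazonaws.com/dev/SignUp"  # placeholder

params = {
    "name": "prabhu",
    "mobile": "9999999999",
    "email": "prabhu@gmail.com",
    "password": "prabhu",
    "gender": "male",
    "location": "Hyderabad",
    "image": "https://d1234abcd.cloudfront.net/uploads/pic.jpg",
}
url = f"{INVOKE_URL}?{urlencode(params)}"
print(url)

# Real call once the API is deployed (requires network):
#   from urllib.request import urlopen
#   print(urlopen(url).read().decode())
```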

Step8: Record saved successfully in the database.

Step9: Create one more Lambda function named

ccit-lambda-function-Sigin.py
While selecting the role, we cannot reuse the same role as the other Lambda function unless the role has administrator permission.
import json
import pymysql
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

db_config = {
    'host': 'aurorabd.csn64oem2jvs.us-east-1.rds.amazonaws.com',
    'user': 'admin',
    'password': 'admin123',
    'database': 'digital_library',
    'connect_timeout': 5
}


def lambda_handler(event, context):
    try:
        # Debug logging
        logger.info(f"Raw event structure: {json.dumps(event)}")

        # Handle both proxy and non-proxy integrations
        if event.get('queryStringParameters'):
            data = event['queryStringParameters']
        elif event.get('body'):
            try:
                body = event['body']
                data = json.loads(body) if isinstance(body, str) else body
            except json.JSONDecodeError:
                data = {}
        else:
            data = event

        logger.info(f"Parsed data: {data}")

        email = data.get('email')
        password = data.get('password')

        if not email or not password:
            return {
                "statusCode": 400,
                "headers": {"Content-Type": "application/json"},
                "body": json.dumps({
                    "status": "fail",
                    "message": "Email and password required",
                    "received": data
                })
            }

        # Database connection
        try:
            conn = pymysql.connect(**db_config)
            with conn.cursor() as cursor:
                cursor.execute(
                    "SELECT id, name, email FROM users WHERE email = %s AND password = %s",
                    (email, password)
                )
                user = cursor.fetchone()

            if user:
                response = {
                    "statusCode": 200,
                    "headers": {"Content-Type": "application/json"},
                    "body": json.dumps({
                        "status": "success",
                        "user": {
                            "id": user[0],
                            "name": user[1],
                            "email": user[2]
                        }
                    })
                }
            else:
                response = {
                    "statusCode": 401,
                    "headers": {"Content-Type": "application/json"},
                    "body": json.dumps({
                        "status": "fail",
                        "message": "Invalid credentials"
                    })
                }

            logger.info(f"Returning response: {response}")
            return response

        except pymysql.MySQLError as e:
            logger.error(f"Database error: {str(e)}")
            return {
                "statusCode": 500,
                "headers": {"Content-Type": "application/json"},
                "body": json.dumps({
                    "status": "error",
                    "message": "Database error"
                })
            }

    except Exception as e:
        logger.error(f"Unexpected error: {str(e)}")
        return {
            "statusCode": 500,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({
                "status": "error",
                "message": "Internal server error"
            })
        }

C:\Users\Administrator\Desktop\Lambda\SignIn>pip install pymysql -t.
Collecting pymysql
  Using cached PyMySQL-1.1.1-py3-none-any.whl.metadata (4.4 kB)
Using cached PyMySQL-1.1.1-py3-none-any.whl (44 kB)
Installing collected packages: pymysql
Successfully installed pymysql-1.1.1

Step7: Create a resource Signin.
Delete the existing OPTIONS method.
Click Create Method.

Step8: Integration
{
  "email": "$input.params('email')",
  "password": "$input.params('password')"
}
Click Save 
Step 10: Create one more Lambda function.

For testing:

{
  "body": "{\"email\":\"prabhu@gmail.com\",\"password\":\"prabhu\"}",
  "queryStringParameters": null
}
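The test event above exercises the body branch of the sign-in handler. Its parsing logic, extracted here unchanged, can be checked locally without a database connection:

```python
# The event-parsing branch of the sign-in handler, extracted for local testing.
import json

def parse_event(event: dict) -> dict:
    """Handle both proxy (body / queryStringParameters) and non-proxy events."""
    if event.get('queryStringParameters'):
        return event['queryStringParameters']
    if event.get('body'):
        body = event['body']
        try:
            return json.loads(body) if isinstance(body, str) else body
        except json.JSONDecodeError:
            return {}
    return event

test_event = {
    "body": "{\"email\":\"prabhu@gmail.com\",\"password\":\"prabhu\"}",
    "queryStringParameters": None
}
print(parse_event(test_event))
# {'email': 'prabhu@gmail.com', 'password': 'prabhu'}
```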

Sign in with the above details using the Lambda function.



Step11:


--Thanks 

Tuesday, July 8, 2025

Class 62nd AWS Trusted Advisor

Class 62nd AWS Trusted Advisor and Project day 1, July 8th
What is Trusted Advisor?
ACM - AWS Certificate Manager
What is AWS ACM?
What are SSL/TLS certificates?
Types of AWS ACM
Key features of AWS ACM
How AWS ACM works
Limitations of AWS ACM

Ticket cases :
Account and Billing related queries 
Service limit increases
Technical Assistance
 
Basic Support - free
Account and billing related queries: yes
Service limit increases: yes
Technical assistance: no

No technical assistance from AWS
AWS developer forums access
Knowledge base articles and AWS docs
Trusted Advisor: only core area checks

Developer Support - from $29 per month
Account and billing related queries: yes
Service limit increases: yes
Technical assistance: yes

12-24 local business hours support
Chat and email support from a cloud support associate
Starts from $29 per month
Trusted Advisor: only core area checks
One user can raise tickets

Business Support - from $100 per month
Account and billing related queries: yes
Service limit increases: yes
Technical assistance: yes

Support available within 1 hour
24*7 support
A cloud engineer provides the help
Email, phone and chat support
Trusted Advisor: full area checks
Multiple users can raise tickets

Enterprise Support - from $15000 per month (company use only)
Support available within 15 minutes
Sr. cloud engineer support
AWS training and an allocated TAM (Technical Account Manager)
Annual operational and architectural reviews
Trusted Advisor: full area checks
Multiple users can raise tickets

AWS Trusted Advisor
AWS Trusted Advisor is a cloud optimization service provided by Amazon Web Services (AWS).
It is an essential tool for AWS users who want to ensure their cloud environment is optimized, secure, and cost-effective.
It helps users optimize their AWS infrastructure by offering real-time guidance in key areas: cost optimization, performance, security, fault tolerance, service limits,
and operational excellence.
AWS Trusted Advisor continuously scans your AWS environment, providing insights through a dashboard that displays checks and recommendations.
Users can then implement these recommendations directly or with the assistance of AWS support.

AWS Trusted Advisor
Benefits
Cost savings: by identifying unused or underutilized resources, Trusted Advisor helps reduce costs.
Improved security: it ensures that your AWS environment adheres to security best practices.
Enhanced performance: it offers suggestions to improve the performance of your AWS workloads.
Increased fault tolerance: recommendations are provided to improve the resilience of your architecture.

Practical:
As we are on the free tier, Trusted Advisor performs only a few of the checks completely at our account level.
Recommendations for cost optimization, performance, security, fault tolerance, service limits and operational excellence are provided under the navigation below:
>Trusted Advisor>Recommendations
Step1:
As seen below, one of the S3 bucket security policies was broken: public access was given.

Step2: The dashboard shows 1 immediate action, 3 recommended for investigation (optional cases), and 0 where you are security compliant.

Some security groups have an issue: unrestricted ports are enabled for all traffic, so these security groups need to be removed.
Step3: After the security group is deleted, Trusted Advisor refreshes automatically.
ACM - AWS Certificate Manager

What is AWS ACM?
What are SSL/TLS certificates?
Types of AWS ACM
Key features of AWS ACM
How AWS ACM works
Limitations of AWS ACM
What is AWS ACM? (free)
AWS Certificate Manager (ACM) is a managed service that allows you to provision, manage, and deploy
SSL/TLS certificates for securing websites and applications running on AWS.
It simplifies the process of obtaining and renewing certificates, making it easier to enable HTTPS (secure communication) for your domains.
What are SSL/TLS certificates?
SSL (Secure Sockets Layer) and TLS (Transport Layer Security) certificates are digital certificates that secure communication between a web server and a user's browser by encrypting data.
What is an SSL certificate?
SSL was the original protocol designed to secure internet communication.
It established an encrypted connection between clients (browsers) and servers. However, SSL is now outdated and insecure.
What is a TLS certificate?
TLS is the modern and more secure replacement for SSL; it provides the same encryption but with stronger security algorithms.
Most people still call them "SSL certificates", but in reality today's certificates use TLS encryption.
Are SSL/TLS certificates free?
ACM certificates are for internal (AWS) use only.
AWS ACM (AWS Certificate Manager) provides free SSL/TLS certificates for use with AWS services like ALB, CloudFront, and API Gateway.
For custom installations (EC2, on-prem servers), you may need to buy a certificate from third-party providers like DigiCert or GlobalSign, or use Let's Encrypt (a free option).

Types of AWS ACM
AWS Certificate Manager (ACM) provides certificates of 2 types:
public and private certificates.
Visibility: public certificates are trusted globally by all browsers; private certificates are trusted only within internal networks.
Use case: public certificates are used for websites, APIs, and external servers; private certificates for internal services like intranets, VPNs, and private APIs.
Issued by: public certificates by public certificate authorities (CAs) like Amazon or DigiCert; private certificates by AWS Private CA (your own CA).
Pricing: public certificates are free in AWS ACM; private certificates require AWS Private CA (paid).
Validation: public certificates require domain validation; private certificates need no public validation.
Examples: https://yourwebsite.com (accessible to the world) vs https://internal.yourcompany.com (used within the company).

Key features of AWS ACM
Free SSL/TLS certificates
 ACM provides free SSL/TLS certificates for domains managed within AWS.
 These certificates are issued by Amazon Trust Services.
Automatic renewal
 ACM automatically renews certificates before they expire, eliminating manual effort.

Domain validation (DV)
To get a certificate, you must verify domain ownership using:
 1. Email validation (sending a verification email to the domain owner)
 2. DNS validation (adding a CNAME record in Route 53)
How AWS ACM works
1. Requesting a certificate
   In the AWS console, go to Certificate Manager (ACM) > Request certificate.
   Enter your domain name (e.g. example.com).
   Choose a validation method (email or DNS).
   If using DNS, ACM provides a CNAME record to add to Route 53.
2. Validation process
   If using DNS validation, ACM checks for the correct CNAME records in Route 53.
   Once validated, the certificate is issued.
3. Deploying the certificate
   Attach the issued certificate to an Application Load Balancer (ALB), CloudFront, or API Gateway.
   Configure an HTTPS listener (port 443) for secure traffic.
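The request flow above can also be driven from boto3 instead of the console; a sketch assuming the domain used later in this class. DNS validation still requires adding the returned CNAME record to Route 53 before the certificate is issued.

```python
# Sketch: request a DNS-validated public certificate via the ACM API.
def request_dns_validated_cert(acm_client, domain: str) -> str:
    """Request a public certificate with DNS validation; returns the cert ARN."""
    resp = acm_client.request_certificate(
        DomainName=domain,
        ValidationMethod="DNS",
    )
    return resp["CertificateArn"]

# Real usage (requires AWS credentials):
#   import boto3
#   # CloudFront only accepts ACM certificates from us-east-1
#   acm = boto3.client("acm", region_name="us-east-1")
#   print(request_dns_validated_cert(acm, "vakatisubbu.xyz"))
```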
Limitations of AWS ACM
 ACM certificates are for internal (AWS) use only.
 You cannot download ACM certificates for use outside AWS (e.g. on a standalone server).
 To use an external SSL certificate, you must import it into ACM.
 Only supports AWS-integrated services
  ACM certificates can only be used with AWS services like ALB, CloudFront, and API Gateway.
  They cannot be installed on EC2 instances directly (for that, use Let's Encrypt or buy a certificate from an external provider).
Regional scope
ACM certificates are region-specific, except for CloudFront, which uses certificates from ACM in us-east-1.
Pricing:
AWS ACM is free for public SSL/TLS certificates as long as they are used with AWS-integrated services like:
Application Load Balancer
CloudFront (CDN)
API Gateway
AWS App Runner
What costs money?
Private certificates (issued via AWS Private CA):
if you need an internal (private) certificate for internal apps, you must use AWS Private Certificate Authority (CA), which is not free.
Route 53 domain registration (optional):
if your domain is registered with Route 53, you pay for domain registration (e.g. $12/year for .com domains).
CloudFront usage:
ACM certificates are free, but if you use them with CloudFront, you pay for CloudFront data transfer.

                                                         

                                           CloudFront S3 Static Page Hosting
Step1: Create one S3 bucket with public access (ccitpublicbucket) and upload the static
index.html page.

Step2: Give a bucket policy like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement2",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::ccitpublicbucket/*"
        }
    ]
}

Step3: We cannot host the static page directly in the hosted zone; using the CloudFront URL
we are able to host the static page.
CloudFront creation:
Need to select the S3 origin.

Click Next, select "Do not enable security protections", click Next, and create the distribution.



Step4: SSL certificate creation
AWS Certificate Manager > Certificates > Request certificate
Request a public certificate.
Give your domain name (for me: vakatisubbu.xyz).
Click Request.

While creating the request, add the CNAME entry to Route 53.

The Route 53 CNAME entry is created.

Step5: In the hosted zone, give your domain name.

Step6: CNAME records are created; for the A record you need to add your CloudFront distribution endpoint.


Step7: After that, the SSL certificate is attached to CloudFront.
Step8: In the CloudFront distribution, create an invalidation with object path /* (optional) and click Create.
Step9: On the GoDaddy website, where you bought the domain, you need custom DNS:
enter the 4 NS nameserver entries that exist in Route 53; only after adding them will the ACM certificate be issued.

Step10: After the SSL certificate is issued, the static page opens.

 
                                                                Project1
Using all of the services below:
IAM,
KMS (encrypt the password),
VPC,
EC2 (LB, AS, EBS, EFS),
Local to RDS, Completed
DynamoDB,
Lambda,
API Gateway,
S3,   Completed
Route53,
ACM
This local MySQL project, Digital Library: https://github.com/Vakatisubbu/DigitalLibrary.git



The above project uses a local DB; we need to change this to RDS.

Step1: Create an Aurora DB (MySQL)
Aurora DB > MySQL
Template > Free Tier
DB instance identifier: Auroradb (any name)
Credential settings > Self managed (master password, confirm password)
Instance configuration > Burstable
Public access > Yes
Click Create database
Using the DB connection details, connect MySQL from your local machine.
CREATE DATABASE IF NOT EXISTS digital_library;
USE digital_library;

CREATE TABLE IF NOT EXISTS users (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    mobile VARCHAR(20) NOT NULL,
    email VARCHAR(100) NOT NULL UNIQUE,
    password VARCHAR(255) NOT NULL,
    gender VARCHAR(10),
    location VARCHAR(100),
    image VARCHAR(255)
);

CREATE TABLE IF NOT EXISTS books (
    id INT AUTO_INCREMENT PRIMARY KEY,
    title VARCHAR(255) NOT NULL,
    author VARCHAR(255),
    available BOOLEAN DEFAULT TRUE
);

CREATE TABLE IF NOT EXISTS history (
    id INT AUTO_INCREMENT PRIMARY KEY,
    user_id INT,
    book_id INT,
    borrow_date DATETIME,
    return_date DATETIME,
    FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE,
    FOREIGN KEY (book_id) REFERENCES books(id) ON DELETE CASCADE
);

INSERT INTO books (title, author, available) VALUES
('Wings of Fire', 'A.P.J. Abdul Kalam', TRUE),
('The Guide', 'R.K. Narayan', TRUE),
('Godan', 'Munshi Premchand', TRUE),
('Train to Pakistan', 'Khushwant Singh', TRUE),
('Ignited Minds', 'A.P.J. Abdul Kalam', TRUE),
('Mahabharata', 'Ved Vyasa', TRUE),
('Python Crash Course', 'Eric Matthes', TRUE),
('Digital Fortress', 'Dan Brown', TRUE),
('You Can Win', 'Shiv Khera', TRUE),
('Zero to One', 'Peter Thiel', TRUE);

In the DB configuration file you need to change the connection details, then launch the application:

C:\Users\Administrator\Desktop\AWS_projects\DigitalLibrary>python app.py
https://github.com/Vakatisubbu/DigitalLibrary.git
Sign up with details; records are inserted into the users table successfully.


Log in with the above details and take some books.
History table records are inserted successfully.


We plan to store the login image in an S3 bucket; our existing ccitpublicbucket needs ACL enabled in the code.
After enabling,
S3 images are stored successfully.
Right-click the image and inspect to find out where the image comes from; here it clearly shows the image coming from the S3 bucket.

--Thanks 

Saturday, June 28, 2025

CloudFormation Project

CloudFormation Project

Class 54th AWS CloudFormation June 28th project

Practical

Step1:

Step2: Give any stack name, e.g. ccit-stack-Prj-nonat. Click Next.
Step3: Select the role, check "I acknowledge", click Next, and then click Submit.



--Thanks 





Cloudformation part2

Cloudformation part2 

Class 54th AWS Cloudformation June 28th

Practical:

Step1: CloudFormation > Stacks > Create stack > Choose template > Upload a template file

Click Next.


Upload the CFT.yaml file:

AWSTemplateFormatVersion: '2010-09-09'

Resources:
  CCITAdminUser:
    Type: AWS::IAM::User
    Properties:
      UserName: CCITAdmin
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AdministratorAccess

  CCITDev1User:
    Type: AWS::IAM::User
    Properties:
      UserName: CCITDev1
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonS3FullAccess

  CCITDev2User:
    Type: AWS::IAM::User
    Properties:
      UserName: CCITDev2
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonS3FullAccess

Step2: Give any stack name.

Change nothing, check "I acknowledge", click Next, and click Submit.
Step3: The script will create the new users: ccitadmin, ccitdev1 and ccitdev2.
Step4: See, the three users are created successfully and policies are attached to the users.
Step5: Now we are planning to create a user group for the users.

# Creating the Dev user group  
  CCITDevGroup:
    Type: AWS::IAM::Group
    Properties:
      GroupName: CCITDevGroup
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonS3FullAccess
        - arn:aws:iam::aws:policy/AWSLambda_FullAccess
       

Step6: Click "Make a direct update" > "Replace existing template".

Step7: Group created successfully. CCITDevGroup has the added S3 and Lambda policies.
Step8: Adding users to the dev group:
     # Adding users to the dev group
  AddUserToGroup:
    Type: AWS::IAM::UserToGroupAddition
    Properties:
      GroupName: !Ref CCITDevGroup
      Users:
        - !Ref CCITDev1User
        - !Ref CCITDev2User  
Step9: Click save, replace the existing template, upload, and click Submit.
Users are successfully added to the dev group.

Step10: Creating our own policies for the IAM service; modify CFT.yml, replace the existing template, upload, click Next and click Submit.
# Create Customer managed policy
  RDSAccessPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      ManagedPolicyName: RDSPermissions
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Action:
              - "rds:*"
            Resource: "*"  

Step11: The RDSPermissions customer managed policy is created successfully.

Step12: We can attach the policy to the group using the line "- !Ref RDSAccessPolicy"; save the
file, upload, and replace.

# Creating the Dev user group
  CCITDevGroup:
    Type: AWS::IAM::Group
    Properties:
      GroupName: CCITDevGroup
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonS3FullAccess
        - arn:aws:iam::aws:policy/AWSLambda_FullAccess
        - !Ref RDSAccessPolicy

Step13: Successfully added the policy to the group.

Step14: Creating a role for the EC2 resource; we give the EC2 instance permission for S3 bucket access.
# Creating the Role for EC2
  EC2InstanceRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: RoleforEC2
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - ec2.amazonaws.com
            Action:
              - "sts:AssumeRole"
      Policies:
        - PolicyName: EC2S3Access
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - "s3:*"
                Resource: "*"    

Click save, replace, upload, click Next and click Submit on the CloudFormation stack. See below: the role is created with EC2 permissions.

Step15: Creating an EC2 instance in eu-west-1: imageid ami-0f4f4482537714bd9, instance type t2.micro,
key pair AMAZON-LNX-KEY.
# Creating EC2 instance
  EC2Instnace:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      ImageId: ami-0f4f4482537714bd9
      KeyName: AMAZON-LNX-KEY

Step16: See below, the instance is created successfully.

Step17: After the instance is created, we need to attach the role to the EC2 instance.
See below: the role is not yet attached to the EC2 instance.

# Creating EC2 instance profile

  EC2InstanceProfile:
      Type: AWS::IAM::InstanceProfile
      Properties:
        InstanceProfileName: EC2InstanceProfile
        Roles:
          - !Ref EC2InstanceRole
Step18:
See the code above: we have hardcoded the imageid. This is not generic; the same code will not work in another region because of the imageid. To overcome this issue, we use a mapping to pick the imageid at run time.

As seen below, if I run the CloudFormation stack in another location it will create the EC2 instance based on the mapping:

Mappings:
  RegionMap:
    eu-west-3:
      AMI: ami-0f8d3c5dcfaceaa4f
      InstType: t2.micro
      KeyName: Ec2VM-First
    eu-west-2:
      AMI: ami-0f4f4482537714bd9
      InstType: t3.micro
      KeyName: AMAZON-LNX-KEY

# Creating EC2 instance
  EC2Instnace:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !FindInMap [RegionMap, !Ref "AWS::Region", InstType]
      ImageId: !FindInMap [RegionMap, !Ref "AWS::Region", AMI]
      IamInstanceProfile: !Ref EC2InstanceProfile
      KeyName: !FindInMap [RegionMap, !Ref "AWS::Region", KeyName]

Step19: Once you delete the stack, all corresponding resources will be deleted automatically.


Step20: See, the instance is terminated automatically; the users, user groups and all related resources are
deleted.

Step21: Now the same template is uploaded in our region, Paris; upload the existing CFT.yaml
file, click Next and click Submit.
It will take time to complete the instance because it will first create the users, user groups
and policies, and finally create the EC2 instance.
Step22: See below, the same script created the EC2 instance in the Paris region successfully.
Step23: Without disturbing the existing ccit-stack-paris, created one more stack, ccit-stack-vpc, for VPC
creation:
AWSTemplateFormatVersion: '2010-09-09'
Description: Creates a VPC with public and private subnets, Internet Gateway, and route tables
Resources:
  # VPC
  MyVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/22
      EnableDnsSupport: true
      EnableDnsHostnames: true
      Tags:
        - Key: Name
          Value: MycustomVPC

  # Internet Gateway
  MyInternetGateway:
    Type: AWS::EC2::InternetGateway
    Properties:
      Tags:
        - Key: Name
          Value: MyInternetGateway

  AttachGateway:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref MyVPC
      InternetGatewayId: !Ref MyInternetGateway

  # Public Subnets
  PublicSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref MyVPC
      CidrBlock: 10.0.0.0/24
      MapPublicIpOnLaunch: true
      AvailabilityZone: !Select [0, !GetAZs '']
      Tags:
        - Key: Name
          Value: PublicSubnet1

  PublicSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref MyVPC
      CidrBlock: 10.0.1.0/24
      MapPublicIpOnLaunch: true
      AvailabilityZone: !Select [1, !GetAZs '']
      Tags:
        - Key: Name
          Value: PublicSubnet2

  PublicSubnet3:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref MyVPC
      CidrBlock: 10.0.2.0/24
      MapPublicIpOnLaunch: true
      AvailabilityZone: !Select [2, !GetAZs '']
      Tags:
        - Key: Name
          Value: PublicSubnet3

  # Private Subnets
  PrivateSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref MyVPC
      CidrBlock: 10.0.3.0/26
      MapPublicIpOnLaunch: false
      AvailabilityZone: !Select [0, !GetAZs '']
      Tags:
        - Key: Name
          Value: PrivateSubnet1

  PrivateSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref MyVPC
      CidrBlock: 10.0.3.64/26
      MapPublicIpOnLaunch: false
      AvailabilityZone: !Select [1, !GetAZs '']
      Tags:
        - Key: Name
          Value: PrivateSubnet2

  PrivateSubnet3:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref MyVPC
      CidrBlock: 10.0.3.128/26
      MapPublicIpOnLaunch: false
      AvailabilityZone: !Select [2, !GetAZs '']
      Tags:
        - Key: Name
          Value: PrivateSubnet3

  # Public Route Table and Route
  PublicRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref MyVPC
      Tags:
        - Key: Name
          Value: PublicRouteTable

  PublicRoute:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref MyInternetGateway

  # Associate Public Subnets with Route Table
  PublicSubnet1RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet1
      RouteTableId: !Ref PublicRouteTable

  PublicSubnet2RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet2
      RouteTableId: !Ref PublicRouteTable

  PublicSubnet3RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet3
      RouteTableId: !Ref PublicRouteTable

  # Private Route Table (no route to internet yet)
  PrivateRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref MyVPC
      Tags:
        - Key: Name
          Value: PrivateRouteTable

  # Associate Private Subnets with Private Route Table
  PrivateSubnet1RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PrivateSubnet1
      RouteTableId: !Ref PrivateRouteTable

  PrivateSubnet2RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PrivateSubnet2
      RouteTableId: !Ref PrivateRouteTable

  PrivateSubnet3RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PrivateSubnet3
      RouteTableId: !Ref PrivateRouteTable

Step24: Completed successfully.

See below: both subnet tiers created successfully, three public and three private subnets across three Availability Zones, with the internet gateway route attached for the public subnets.


Two more features: the Metadata block holds a general description that helps readers understand the template, and the Outputs block reports which resources the script created.
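
As a minimal sketch of the Metadata block (the description text here is illustrative, not from the course material), it sits at the top level of the template alongside Resources and Outputs:

```yaml
Metadata:
  TemplateInfo:
    Description: >-
      Creates IAM users and a group, an EC2 instance with an instance
      profile, and a custom VPC with public and private subnets
      across three Availability Zones.
```

CloudFormation does not act on this content; it is free-form documentation visible in the console's Template view.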

# This is the Outputs block

Outputs:

  AdminUser:
    Description: "IAM Admin User created"
    Value: !Ref CCITAdminUser

  DevUser1:
    Description: "IAM Dev User 1 created"
    Value: !Ref CCITDev1User

  DevUser2:
    Description: "IAM Dev User 2 created"
    Value: !Ref CCITDev2User

  DevGroup:
    Description: "IAM Group for Dev Users"
    Value: !Ref CCITDevGroup

  ManagedPolicy:
    Description: "Customer managed policy for RDS Access"
    Value: !Ref RDSAccessPolicy

  EC2Role:
    Description: "IAM Role attached to EC2 Instance"
    Value: !Ref EC2InstanceRole

  InstanceProfile:
    Description: "IAM Instance Profile attached to EC2"
    Value: !Ref EC2InstanceProfile

  EC2InstanceId:
    Description: "Launched EC2 Instance ID"
    Value: !Ref EC2Instance
    Export:
      Name: EC2InstanceID

  EC2PublicDNS:
    Description: "Public DNS of the EC2 instance"
    Value: !GetAtt EC2Instance.PublicDnsName

  EC2PublicIP:
    Description: "Public IP of the EC2 instance"
    Value: !GetAtt EC2Instance.PublicIp
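
Because EC2InstanceId is exported, another stack in the same account and region can consume it with !ImportValue. A minimal sketch (the consuming alarm resource is illustrative, not part of this project):

```yaml
# Second stack, deployed after the one above
Resources:
  AlarmForSharedInstance:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: CPU alarm on the instance exported by the first stack
      Namespace: AWS/EC2
      MetricName: CPUUtilization
      Dimensions:
        - Name: InstanceId
          Value: !ImportValue EC2InstanceID   # matches the Export Name above
      Statistic: Average
      Period: 300
      EvaluationPeriods: 1
      Threshold: 80
      ComparisonOperator: GreaterThanThreshold
```

Note that a stack cannot be deleted while another stack imports one of its exports.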

Step25: Full script, CFT.yml

AWSTemplateFormatVersion: '2010-09-09'

Mappings:
  RegionMap:
    eu-west-3:
      AMI: ami-0f8d3c5dcfaceaa4f
      InstType: t2.micro
      KeyName: Ec2VM-First
    eu-west-2:
      AMI: ami-0f4f4482537714bd9
      InstType: t3.micro
      KeyName: AMAZON-LNX-KEY

Resources:

  CCITAdminUser:
    Type: AWS::IAM::User
    Properties:
      UserName: CCITAdmin
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AdministratorAccess

  CCITDev1User:
    Type: AWS::IAM::User
    Properties:
      UserName: CCITDev1
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonS3FullAccess

  CCITDev2User:
    Type: AWS::IAM::User
    Properties:
      UserName: CCITDev2
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonS3FullAccess

  CCITDevGroup:
    Type: AWS::IAM::Group
    Properties:
      GroupName: CCITDevGroup
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonS3FullAccess
        - arn:aws:iam::aws:policy/AWSLambda_FullAccess
        - !Ref RDSAccessPolicy

  AddUserToGroup:
    Type: AWS::IAM::UserToGroupAddition
    Properties:
      GroupName: !Ref CCITDevGroup
      Users:
        - !Ref CCITDev1User
        - !Ref CCITDev2User  

  RDSAccessPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      ManagedPolicyName: RDSPermissions
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Action:
              - "rds:*"
            Resource: "*"  

  EC2InstanceRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: RoleforEC2
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - ec2.amazonaws.com
            Action:
              - "sts:AssumeRole"
      Policies:
        - PolicyName: EC2S3Access
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - "s3:*"
                Resource: "*"    

  EC2InstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      InstanceProfileName: EC2InstanceProfile
      Roles:
        - !Ref EC2InstanceRole

  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !FindInMap [RegionMap, !Ref "AWS::Region", InstType]
      ImageId: !FindInMap [RegionMap, !Ref "AWS::Region", AMI]
      IamInstanceProfile: !Ref EC2InstanceProfile
      KeyName: !FindInMap [RegionMap, !Ref "AWS::Region", KeyName]

  MyVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/22
      EnableDnsSupport: true
      EnableDnsHostnames: true
      Tags:
        - Key: Name
          Value: MycustomVPC

  MyInternetGateway:
    Type: AWS::EC2::InternetGateway
    Properties:
      Tags:
        - Key: Name
          Value: MyInternetGateway

  AttachGateway:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref MyVPC
      InternetGatewayId: !Ref MyInternetGateway

  PublicSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref MyVPC
      CidrBlock: 10.0.0.0/24
      MapPublicIpOnLaunch: true
      AvailabilityZone: !Select [0, !GetAZs '']
      Tags:
        - Key: Name
          Value: PublicSubnet1

  PublicSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref MyVPC
      CidrBlock: 10.0.1.0/24
      MapPublicIpOnLaunch: true
      AvailabilityZone: !Select [1, !GetAZs '']
      Tags:
        - Key: Name
          Value: PublicSubnet2

  PublicSubnet3:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref MyVPC
      CidrBlock: 10.0.2.0/24
      MapPublicIpOnLaunch: true
      AvailabilityZone: !Select [2, !GetAZs '']
      Tags:
        - Key: Name
          Value: PublicSubnet3

  PrivateSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref MyVPC
      CidrBlock: 10.0.3.0/26
      MapPublicIpOnLaunch: false
      AvailabilityZone: !Select [0, !GetAZs '']
      Tags:
        - Key: Name
          Value: PrivateSubnet1

  PrivateSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref MyVPC
      CidrBlock: 10.0.3.64/26
      MapPublicIpOnLaunch: false
      AvailabilityZone: !Select [1, !GetAZs '']
      Tags:
        - Key: Name
          Value: PrivateSubnet2

  PrivateSubnet3:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref MyVPC
      CidrBlock: 10.0.3.128/26
      MapPublicIpOnLaunch: false
      AvailabilityZone: !Select [2, !GetAZs '']
      Tags:
        - Key: Name
          Value: PrivateSubnet3

  PublicRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref MyVPC
      Tags:
        - Key: Name
          Value: PublicRouteTable

  PublicRoute:
    Type: AWS::EC2::Route
    DependsOn: AttachGateway   # route to an IGW requires the gateway to be attached first
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref MyInternetGateway

  PublicSubnet1RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet1
      RouteTableId: !Ref PublicRouteTable

  PublicSubnet2RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet2
      RouteTableId: !Ref PublicRouteTable

  PublicSubnet3RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet3
      RouteTableId: !Ref PublicRouteTable

  PrivateRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref MyVPC
      Tags:
        - Key: Name
          Value: PrivateRouteTable

  PrivateSubnet1RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PrivateSubnet1
      RouteTableId: !Ref PrivateRouteTable

  PrivateSubnet2RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PrivateSubnet2
      RouteTableId: !Ref PrivateRouteTable

  PrivateSubnet3RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PrivateSubnet3
      RouteTableId: !Ref PrivateRouteTable

Outputs:

  AdminUser:
    Description: "IAM Admin User created"
    Value: !Ref CCITAdminUser

  DevUser1:
    Description: "IAM Dev User 1 created"
    Value: !Ref CCITDev1User

  DevUser2:
    Description: "IAM Dev User 2 created"
    Value: !Ref CCITDev2User

  DevGroup:
    Description: "IAM Group for Dev Users"
    Value: !Ref CCITDevGroup

  ManagedPolicy:
    Description: "Customer managed policy for RDS Access"
    Value: !Ref RDSAccessPolicy

  EC2Role:
    Description: "IAM Role attached to EC2 Instance"
    Value: !Ref EC2InstanceRole

  InstanceProfile:
    Description: "IAM Instance Profile attached to EC2"
    Value: !Ref EC2InstanceProfile

  EC2InstanceId:
    Description: "Launched EC2 Instance ID"
    Value: !Ref EC2Instance
    Export:
      Name: EC2InstanceID



Step26: Upload the file, click Save, and submit the stack. Completed successfully.

Step27: The screenshot below shows the Outputs block in CloudFormation.

In this exercise, CloudFormation deployed faster than Terraform.
Using CloudFormation with Git:
https://github.com/Vakatisubbu/ec2-cloudform.git

Step1: Create a new public repository on GitHub named ec2-cloudform.
Upload the CFT.yaml file to the repository and commit the changes.

Step2: Create a stack, choose Sync from Git, and click Next.

Step3: Give any name (ccit-git-stack), choose the option, and click Add connection.


Step4: Give any connection name (gitconnectionsubbu) and click Connect to GitHub.
Step5: Click Install a new app.
Step6: Install the app, authorize it, and grant permissions.

Step7: After GitHub authorization, the number populates automatically; click Connect.

Step8: The Git connection is created via Developer Tools. Click Trigger a release; the pipeline builds automatically whenever you commit to the GitHub repository.
Step9: Choose the CodePipeline option below, then click Next.

Step10: Create the source connection (where the code is pulled from) using the options below, then click Next.

Step11: Give the stack name and the name of the YAML file that is in the Git repository. The policy below grants permission to use the CodeStar/CodeConnections connection:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AccessGitRepos",
            "Effect": "Allow",
            "Action": [
                "codestar-connections:UseConnection",
                "codeconnections:UseConnection"
            ],
            "Resource": [
                "arn:aws:codestar-connections:*:*:connection/*",
                "arn:aws:codeconnections:*:*:connection/*"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:ResourceAccount": "${aws:PrincipalAccount}"
                }
            }
        }
    ]
}

Step12: Click Create pipeline from template.
Source codepipelinetemplate created successfully.

Step13: Source connected, but the deployment failed due to a permission issue.

Step14: The error clearly indicates a permission issue on the role:
Resource handler returned message: "User: arn:aws:sts::216989104632:assumed-role/CodePipelineStarterTemplate-Depl-CloudFormationRole-Qgt6LbG4vO0A/AWSCloudFormation is not authorized to perform: iam:ListAccessKeys on resource: user CCITAdmin because no identity-based policy allows the iam:ListAccessKeys action (Service: Iam, Status Code: 403, Request ID: ff160699-0364-49a9-bc97-0c8935571f4d) (SDK Attempt Count: 1)" (RequestToken: bc9dabd3-1b27-2e4d-0b72-0ac04c39232a, HandlerErrorCode: AccessDenied)
Step15: Check this role in IAM.


Step16: Click the role and add the missing permission. You could grant individual permissions (IAM actions, EC2 creation, etc.); for this exercise I granted AdministratorAccess one time.
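
Instead of AdministratorAccess, a narrower inline policy on the pipeline's CloudFormation role could grant just the IAM actions this template needs. A sketch (the policy name and action list are illustrative, not exhaustive):

```yaml
# Fragment to attach under the deployment role's Policies
Policies:
  - PolicyName: CfnIamDeployAccess
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Action:
            - iam:CreateUser
            - iam:DeleteUser
            - iam:ListAccessKeys
            - iam:CreateGroup
            - iam:AddUserToGroup
            - iam:AttachUserPolicy
          Resource: "*"
```

Each deployment failure names the exact missing action (here iam:ListAccessKeys), so the list can be built up incrementally.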

Step17: Delete the existing stack. Sometimes it will not delete smoothly because some of the users already exist; if the delete fails, retry and select force delete.

Step18: Click the Retry option, or commit a change to Git and the pipeline will trigger automatically.

Step19: Click Retry stage, or make a test commit in Git.

Changed EC2InstanceProfile to EC2InstanceProfile1; the deployment triggered automatically. See the creation in progress.


Step20: Stack completed successfully, and the CodePipeline run also completed.

Step21: See the screenshots below: EC2, users, and the VPC with public and private subnets and the internet gateway, all created successfully from a single script.




Step22: After completing, delete the stack (click Delete); once deleted, all resources created by CloudFormation are removed automatically.

--Thanks