Monday, June 9, 2025

Multipart Upload

Class 40th AWS Multipart Upload June 9th

(AWS recommends multipart upload when a file is larger than 100 MB: the file is split into chunks, the chunks are uploaded, and finally they are joined into a single object. This is the best approach for uploading large files to an S3 bucket.)

  • Multipart upload allows users to break a single file into multiple parts and upload them independently.
  • You can upload these object parts independently and in any order. If transmission of any part fails, you can retransmit that part without affecting other parts.
  • After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object.
  • In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single operation.
  • You can split the file into 10,000 parts maximum. Each part must be at least 5 MB in size (except the last part). Incomplete upload parts remain in S3, and AWS charges for them; a lifecycle rule can clean them up automatically, as shown in the sketch after this list.
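
Because incomplete parts keep accruing storage charges, it is worth attaching a lifecycle rule that aborts abandoned uploads. A minimal boto3 sketch, assuming default credentials and the ccitmultipartupload2 bucket used in this practical:

import boto3

s3 = boto3.client('s3', region_name='eu-west-1')
s3.put_bucket_lifecycle_configuration(
    Bucket='ccitmultipartupload2',
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'abort-incomplete-multipart-uploads',
            'Status': 'Enabled',
            'Filter': {},  # apply the rule to every key in the bucket
            # abort any multipart upload still unfinished after 7 days
            'AbortIncompleteMultipartUpload': {'DaysAfterInitiation': 7}
        }]
    }
)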

Practical

Step1: Create one bucket, "ccitmultipartupload2" (the bucket used in the commands below).

First choose the file (284 MB) and split it with the command below. Run the command in Git Bash:

$ split -b 50M video.mp4 video_part_

Administrator@DESKTOP-AV2PARO MINGW64 ~/Desktop/mulitpart

$ ls -lrt

total 1175740

-rw-r--r-- 1 Administrator 197121 298403108 Dec 12 11:06 video.mp4

-rw-r--r-- 1 Administrator 197121 607142013 Dec 15 09:04 video_20241215_085947.mp4

-rw-r--r-- 1 Administrator 197121  52428800 Jun 11 00:48 video_part_aa

-rw-r--r-- 1 Administrator 197121  52428800 Jun 11 00:48 video_part_ab

-rw-r--r-- 1 Administrator 197121  52428800 Jun 11 00:48 video_part_ac

-rw-r--r-- 1 Administrator 197121  52428800 Jun 11 00:48 video_part_ad

-rw-r--r-- 1 Administrator 197121  52428800 Jun 11 00:48 video_part_ae

-rw-r--r-- 1 Administrator 197121  36259108 Jun 11 00:48 video_part_af

Step2: Upload the parts to the newly created bucket using CLI commands.

PS C:\Users\Administrator> aws configure

AWS Access Key ID [****************HWK4]: <your-access-key-id>

AWS Secret Access Key [****************uOLH]: <your-secret-access-key>

Default region name [eu-west-1]: eu-west-1

Default output format [None]: json

What is the use of this command? It initiates the multipart upload and generates an UploadId; each part is uploaded against that ID, and based on it we join the parts into a single file.

PS C:\Users\Administrator\Desktop\mulitpart>aws s3api create-multipart-upload --bucket ccitmultipartupload2 --key video.mp4 --region eu-west-1

{

    "ServerSideEncryption": "AES256",

    "Bucket": "ccitmultipartupload2",

    "Key": "video.mp4",

    "UploadId": "XOdv_ADNJnTuS38S6iXGxorl0TFVpDFCGoEJ2FoO9wx0cpkIhrzMZks4phjX.y_1HUCWDBZKJ9LNO6sVEUOBSud6Y047fk3CcKI3mAhbo2r_QbK_T1DYkTUtj7jbs5Jl"

}


Note that the S3 bucket still shows as empty; until you join all parts of the file, the uploaded object will not be shown.

Step3:
Part1

PS C:\Users\Administrator\Desktop\mulitpart>  aws s3api upload-part --bucket ccitmultipartupload2 --key video.mp4 --part-number 1 --body C:\Users\Administrator\Desktop\mulitpart\Video_part_aa --upload-id XOdv_ADNJnTuS38S6iXGxorl0TFVpDFCGoEJ2FoO9wx0cpkIhrzMZks4phjX.y_1HUCWDBZKJ9LNO6sVEUOBSud6Y047fk3CcKI3mAhbo2r_QbK_T1DYkTUtj7jbs5Jl

{
    "ServerSideEncryption": "AES256",
    "ETag": "\"f6279bc6c5aa4efa009c0d599cd1b206\"",
    ...
}

Part2

PS C:\Users\Administrator\Desktop\mulitpart> aws s3api upload-part --bucket ccitmultipartupload2 --key video.mp4 --part-number 2 --body C:\Users\Administrator\Desktop\mulitpart\Video_part_ab --upload-id XOdv_ADNJnTuS38S6iXGxorl0TFVpDFCGoEJ2FoO9wx0cpkIhrzMZks4phjX.y_1HUCWDBZKJ9LNO6sVEUOBSud6Y047fk3CcKI3mAhbo2r_QbK_T1DYkTUtj7jbs5Jl

{

    "ServerSideEncryption": "AES256",

    "ETag": "\"5224ca5d459b8044bb554cfffdc4c122\"",

    "ChecksumCRC64NVME": "VVdRkDOJyYg="

}

Part3

PS C:\Users\Administrator\Desktop\mulitpart> aws s3api upload-part --bucket ccitmultipartupload2 --key video.mp4 --part-number 3 --body C:\Users\Administrator\Desktop\mulitpart\Video_part_ac --upload-id XOdv_ADNJnTuS38S6iXGxorl0TFVpDFCGoEJ2FoO9wx0cpkIhrzMZks4phjX.y_1HUCWDBZKJ9LNO6sVEUOBSud6Y047fk3CcKI3mAhbo2r_QbK_T1DYkTUtj7jbs5Jl

{

    "ServerSideEncryption": "AES256",

    "ETag": "\"e9942afaa877e2a530da4b06d8eae9fb\"",

    "ChecksumCRC64NVME": "9touXfRy/cc="

}

Part4

PS C:\Users\Administrator\Desktop\mulitpart> aws s3api upload-part --bucket ccitmultipartupload2 --key video.mp4 --part-number 4 --body C:\Users\Administrator\Desktop\mulitpart\Video_part_ad --upload-id XOdv_ADNJnTuS38S6iXGxorl0TFVpDFCGoEJ2FoO9wx0cpkIhrzMZks4phjX.y_1HUCWDBZKJ9LNO6sVEUOBSud6Y047fk3CcKI3mAhbo2r_QbK_T1DYkTUtj7jbs5Jl

{

    "ServerSideEncryption": "AES256",

    "ETag": "\"fcdd80466dd85133c87772e41123f3fa\"",

    "ChecksumCRC64NVME": "tctu0jHQJ1A="

}

Part5

PS C:\Users\Administrator\Desktop\mulitpart>aws s3api upload-part --bucket ccitmultipartupload2 --key video.mp4 --part-number 5 --body C:\Users\Administrator\Desktop\mulitpart\Video_part_ae --upload-id XOdv_ADNJnTuS38S6iXGxorl0TFVpDFCGoEJ2FoO9wx0cpkIhrzMZks4phjX.y_1HUCWDBZKJ9LNO6sVEUOBSud6Y047fk3CcKI3mAhbo2r_QbK_T1DYkTUtj7jbs5Jl
{
    "ServerSideEncryption": "AES256",
    "ETag": "\"9318088f5b6055f1f5af8f69bf982b49\"",
    "ChecksumCRC64NVME": "yjZxdkyCmm8="
}

Part6
PS C:\Users\Administrator\Desktop\mulitpart> aws s3api upload-part --bucket ccitmultipartupload2 --key video.mp4 --part-number 6 --body C:\Users\Administrator\Desktop\mulitpart\Video_part_af --upload-id XOdv_ADNJnTuS38S6iXGxorl0TFVpDFCGoEJ2FoO9wx0cpkIhrzMZks4phjX.y_1HUCWDBZKJ9LNO6sVEUOBSud6Y047fk3CcKI3mAhbo2r_QbK_T1DYkTUtj7jbs5Jl
{
    "ServerSideEncryption": "AES256",
    "ETag": "\"d602321594b8219b5d0a857a4e5d34cd\"",
    "ChecksumCRC64NVME": "WSsI3fzEWg8="
}

Create a complete.json file in the current directory for video.mp4, listing each part's ETag and PartNumber:

{
  "Parts": [
    { "ETag": "f6279bc6c5aa4efa009c0d599cd1b206", "PartNumber": 1 },
    { "ETag": "5224ca5d459b8044bb554cfffdc4c122", "PartNumber": 2 },
    { "ETag": "e9942afaa877e2a530da4b06d8eae9fb", "PartNumber": 3 },
    { "ETag": "fcdd80466dd85133c87772e41123f3fa", "PartNumber": 4 },
    { "ETag": "9318088f5b6055f1f5af8f69bf982b49", "PartNumber": 5 },
    { "ETag": "d602321594b8219b5d0a857a4e5d34cd", "PartNumber": 6 }
  ]
}
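
Rather than copying the ETags by hand, the parts list can be generated with boto3. A hedged sketch, assuming default credentials and the UploadId printed by create-multipart-upload:

import json
import boto3

s3 = boto3.client('s3', region_name='eu-west-1')
upload_id = 'XOdv_ADNJnTuS38S6iXGxorl0TFVpDFCGoEJ2FoO9wx0cpkIhrzMZks4phjX.y_1HUCWDBZKJ9LNO6sVEUOBSud6Y047fk3CcKI3mAhbo2r_QbK_T1DYkTUtj7jbs5Jl'

# ask S3 which parts it has received for this upload
response = s3.list_parts(
    Bucket='ccitmultipartupload2',
    Key='video.mp4',
    UploadId=upload_id
)
parts = [{'ETag': p['ETag'], 'PartNumber': p['PartNumber']}
         for p in response['Parts']]

# write the structure expected by complete-multipart-upload
with open('complete.json', 'w') as f:
    json.dump({'Parts': parts}, f, indent=2)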

Final step (use the above ETags and UploadId)
-----------------------------------------------------------
aws s3api complete-multipart-upload --bucket ccitmultipartupload2 --key video.mp4  --upload-id XOdv_ADNJnTuS38S6iXGxorl0TFVpDFCGoEJ2FoO9wx0cpkIhrzMZks4phjX.y_1HUCWDBZKJ9LNO6sVEUOBSud6Y047fk3CcKI3mAhbo2r_QbK_T1DYkTUtj7jbs5Jl --multipart-upload "file://complete.json"

PS C:\Users\Administrator\Desktop\mulitpart> aws s3api complete-multipart-upload `
>>   --bucket ccitmultipartupload2 `
>>   --key video.mp4 `
>>   --upload-id XOdv_ADNJnTuS38S6iXGxorl0TFVpDFCGoEJ2FoO9wx0cpkIhrzMZks4phjX.y_1HUCWDBZKJ9LNO6sVEUOBSud6Y047fk3CcKI3mAhbo2r_QbK_T1DYkTUtj7jbs5Jl `
>>   --multipart-upload "file://complete.json"
{
    "ServerSideEncryption": "AES256",
    "Location": "https://ccitmultipartupload2.s3.eu-west-1.amazonaws.com/video.mp4",
    "Bucket": "ccitmultipartupload2",
    "Key": "video.mp4",
    "ETag": "\"f5827b4132193b51faf9d0aa9d1768e1-6\"",
    "ChecksumCRC64NVME": "JNuhFkcmQ8c=",
    "ChecksumType": "FULL_OBJECT"
}
Full file uploaded successfully.



Using a Python program, we can also split and upload the file to the S3 bucket with the code below.

Step1: Delete the existing split files in the folder. Note that the file path and credentials are hard-coded here; for flexibility, pass them in dynamically instead.
import boto3
from botocore.exceptions import NoCredentialsError, PartialCredentialsError

# AWS S3 credentials (redacted -- never hard-code real keys in source;
# prefer environment variables, an AWS profile, or an IAM role)
ACCESS_KEY = '<your-access-key-id>'
SECRET_KEY = '<your-secret-access-key>'
BUCKET_NAME = 'ccitmultipartupload2'
REGION = 'eu-west-1'

def multipart_upload(file_path, bucket_name, key_name):
    """
    Perform a multipart upload to S3.

    Args:
        file_path (str): Local path to the file to upload.
        bucket_name (str): Target S3 bucket.
        key_name (str): Target key in the S3 bucket.
    """
    s3_client = boto3.client(
        's3',
        aws_access_key_id=ACCESS_KEY,
        aws_secret_access_key=SECRET_KEY,
        region_name=REGION
    )

    # Create a multipart upload
    response = s3_client.create_multipart_upload(Bucket=bucket_name, Key=key_name)
    upload_id = response['UploadId']
    print(f"Multipart upload initiated with UploadId: {upload_id}")

    try:
        parts = []
        part_number = 1
        chunk_size = 50 * 1024 * 1024  # 50 MB chunks

        # Read the file and upload it in chunks
        with open(file_path, 'rb') as file:
            while True:
                data = file.read(chunk_size)
                if not data:
                    break

                print(f"Uploading part {part_number}...")
                part_response = s3_client.upload_part(
                    Bucket=bucket_name,
                    Key=key_name,
                    PartNumber=part_number,
                    UploadId=upload_id,
                    Body=data
                )
                parts.append({'PartNumber': part_number, 'ETag': part_response['ETag']})
                part_number += 1

        # Complete the multipart upload
        print("Completing multipart upload...")
        s3_client.complete_multipart_upload(
            Bucket=bucket_name,
            Key=key_name,
            UploadId=upload_id,
            MultipartUpload={'Parts': parts}
        )
        print("Multipart upload completed successfully!")

    except Exception as e:
        print(f"Error occurred: {e}")
        print("Aborting multipart upload...")
        s3_client.abort_multipart_upload(Bucket=bucket_name, Key=key_name, UploadId=upload_id)
        print("Multipart upload aborted.")

# Example usage
file_path = 'C:/Users/Administrator/Desktop/mulitpart/video.mp4'
key_name = 'video.mp4'
multipart_upload(file_path, BUCKET_NAME, key_name)

PS C:\Users\Administrator\Desktop\mulitpart> python app.py
Multipart upload initiated with UploadId: 2hLmWOuhdZFPHj0z7bSm0fer3amVbXPnstLAeKx01UT.eG6tHuq7_2Ll32EalvWz8W3WKpo8_vSCuoSvZOQR.Hk9AlLDY_hsbKEpFsoeWDPyC0jMkqVCf5RFxTt4T5b.
Uploading part 1...
Uploading part 2...
Uploading part 3...
Uploading part 4...
Uploading part 5...
Uploading part 6...
Completing multipart upload...
Multipart upload completed successfully!
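
For comparison, boto3's managed transfer can perform the split, the parallel part uploads, and the completion automatically. A minimal sketch; the thresholds are assumptions mirroring the 100 MB / 50 MB values used above:

import boto3
from boto3.s3.transfer import TransferConfig

config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # switch to multipart above 100 MB
    multipart_chunksize=50 * 1024 * 1024,   # 50 MB parts
    max_concurrency=4                       # upload up to 4 parts in parallel
)

s3 = boto3.client('s3', region_name='eu-west-1')
s3.upload_file('video.mp4', 'ccitmultipartupload2', 'video.mp4', Config=config)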




                                               Cross-Origin Resource Sharing (CORS)

Step1: Create one bucket; while creating it, uncheck "Block all public access":
          ccitpublicbucket1

Step2: Using the bucket policy generator, give permission to the bucket ccitpublicbucket1, then click Add Statement > Generate Policy and copy the JSON:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::ccitpublicbucket1/*"
    }
  ]
}
Step3:

Click Save. Now the bucket is public; anyone is able to access the files in the bucket.

Step4: Create one file, source.html, and save it with the code below:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Source Page</title>
</head>
<body>
    <h1>Source Page</h1>
    <div id="data">
        <p>This is some data in the source page.</p>
        <p>More data here.</p>
    </div>
</body>
</html>

Step5: Create one file, Dest.html, and save it with the code below:
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Read Data from Another Page</title>
</head>
<body>
    <h1>Data from Another HTML Page</h1>
    <button id="loadData">Load Data</button>
    <div id="result"></div>

    <script>
        document.getElementById('loadData').addEventListener('click', () => {
            fetch('https://ccitpublicbucket1.s3.eu-west-1.amazonaws.com/Source.html')
                .then(response => response.text())
                .then(htmlString => {
                    // Parse the HTML string into a DOM
                    const parser = new DOMParser();
                    const doc = parser.parseFromString(htmlString, 'text/html');
                    
                    // Extract data from the parsed HTML
                    const data = doc.querySelector('#data').innerHTML;
                    
                    // Display the extracted data
                    document.getElementById('result').innerHTML = data;
                })
                .catch(error => console.error('Error fetching data:', error));
        });
    </script>
</body>
</html>

Step6: Upload the source.html file to ccitpublicbucket1 and copy the endpoint (object URL) of the file:
https://ccitpublicbucket1.s3.eu-west-1.amazonaws.com/Source.html


Step7: In Dest.html, point fetch() at https://ccitpublicbucket1.s3.eu-west-1.amazonaws.com/Source.html.
Change the file, save it, and then upload dest.html to the same public bucket.

Step8: Click the endpoint of Source.html; its source text is shown.
Step9: In the destination HTML, when you click Load Data, the text comes from Source.html.
Step10: Create one more bucket with default settings, only unchecking "Block all public access":
          ccitprivatebucket1
Step11: Add the bucket policy below and click Save:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::ccitprivatebucket1/*"
    }
  ]
}
Step12: Upload only the dest.html file to the bucket ccitprivatebucket1.
Open the object URL and click Load Data; as you can see, we get an error:

Access to fetch at 'https://ccitpublicbucket1.s3.eu-west-1.amazonaws.com/Source.html' from origin 'https://ccitprivatebucket1.s3.eu-west-1.amazonaws.com' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.

The resource's origin is the S3 bucket ccitpublicbucket1, while the request comes from a different origin, ccitprivatebucket1; that is the reason it is not allowed.

Step13: For cross-origin access, the source bucket needs to grant permission to the other origin:

[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "GET",
            "POST",
            "PUT"
        ],
        "AllowedOrigins": [
             "https://ccitprivatebucket1.s3.eu-west-1.amazonaws.com"
        ],
        "ExposeHeaders": [
            "x-amz-server-side-encryption",
            "x-amz-request-id",
            "x-amz-id-2"
        ],
        "MaxAgeSeconds": 3000
    }
]

ccitpublicbucket1 > Permissions > Edit cross-origin resource sharing (CORS): paste the above configuration and click Save.
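
The same CORS rule can also be applied from code rather than the console. A minimal boto3 sketch, assuming default credentials and the two buckets from this practical:

import boto3

s3 = boto3.client('s3', region_name='eu-west-1')
s3.put_bucket_cors(
    Bucket='ccitpublicbucket1',
    CORSConfiguration={
        'CORSRules': [{
            'AllowedHeaders': ['*'],
            'AllowedMethods': ['GET', 'POST', 'PUT'],
            # the origin that is allowed to read from this bucket
            'AllowedOrigins': ['https://ccitprivatebucket1.s3.eu-west-1.amazonaws.com'],
            'ExposeHeaders': ['x-amz-server-side-encryption',
                              'x-amz-request-id', 'x-amz-id-2'],
            'MaxAgeSeconds': 3000
        }]
    }
)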

Now the data comes from one origin to another. The remaining favicon message is only a warning (the page just did not provide a URL icon), not an error.

                                                     CloudFront (Global Service)

Content Delivery Network: we have hosted one static website as below.
While creating the bucket, uncheck Block public access (that means outside/public access is allowed),
add the bucket policy below, and upload the index page.

Step1: Static website: in ccitpublicbucket1 > Permissions > Bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::ccitpublicbucket1/*"
    }
  ]
}

Then enable Static website hosting and save.


Upload the static HTML page to the bucket.


Step2: Click the endpoint of the file. If you are in the same region, it will open quickly.

If you are in another region (Canada, USA, ...), it will take time because the geographic distance is far.

In situations like this, edge caches will help you speed up the website.

Step3: For that, AWS provides a CDN: CloudFront needs to be enabled, and your website will be distributed to the edge locations.
For example, if anyone accesses our static page (hosted in India) from Canada, the content is already distributed to the edge locations,
so they get the cache from a nearby location rather than from the India region; latency is reduced and the process is faster.
Select the ccitpublicbucket1 bucket as the origin.


Step4: For Enable Origin Shield (caching the website at an extra location you choose), select No.
Origin Shield is an additional caching layer that can help reduce the load on your origin and help protect its availability.

Step5: CDN created successfully

 
 --Done 
For example, for understanding: edit the text "IT Sector" in the course.html page, save the file, and upload it to the public bucket ccitpublicbucket1.
 Step1: In course.html, just add the text "IT SECTOR subbu".

Step2: Now you see the difference: at the bucket endpoint the changes are reflected immediately.

 Step3: Via the CDN (AWS CloudFront) URL, the changes are not reflected immediately.

Step4: For that, you need to adjust the cache policy in the distribution:
Cache key and origin requests
  •    Cache policy
   CachingOptimized (the default policy)
    Click View policy


As you can see below, the default TTL is 86400 seconds (24 hours) before the edge location refreshes; if you want it to refresh sooner,
you need to create your own policy.

>Click Create policy, give the policy a name, and set the TTL values to 30 seconds, so the edge location will refresh every 30 seconds; click Create policy.

Attach the policy we created above: choose ccitpolicy under Cache policy and click Save changes.
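
If a one-off refresh is needed instead of a short TTL, CloudFront invalidations remove objects from edge caches on demand (the first 1,000 invalidation paths per month are free). A hedged boto3 sketch; the distribution ID and path are placeholders:

import time
import boto3

cloudfront = boto3.client('cloudfront')
cloudfront.create_invalidation(
    DistributionId='<YOUR_DISTRIBUTION_ID>',
    InvalidationBatch={
        'Paths': {'Quantity': 1, 'Items': ['/course.html']},
        'CallerReference': str(time.time())  # must be unique per request
    }
)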


Private Bucket (without a public bucket policy; the policy is granted to the distribution instead)

Step6: Create one more CDN, this time using ccitprivatebucket1.

You must update the S3 bucket policy;
CloudFront will provide you with the policy statement after creating the distribution.


Here you can access files from the private bucket through the CloudFront endpoint, using that policy.

Enable WAF and click Create distribution.
Step7: CloudFront generates the policy below:


{
        "Version": "2008-10-17",
        "Id": "PolicyForCloudFrontPrivateContent",
        "Statement": [
            {
                "Sid": "AllowCloudFrontServicePrincipal",
                "Effect": "Allow",
                "Principal": {
                    "Service": "cloudfront.amazonaws.com"
                },
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::ccitprivatebucket1/*",
                "Condition": {
                    "StringEquals": {
                      "AWS:SourceArn": "arn:aws:cloudfront::216989104632:distribution/E24UYTF97GKQCF"
                    }
                }
            }
        ]
      }
Step8: The above policy needs to be applied to ccitprivatebucket1; click Save.

ccitprivatebucket1 > Permissions > Bucket policy: copy and paste the above JSON code and click Save.


Step9: The distribution URL is able to access the bucket's images.

Step10: As you can see, the bucket endpoint itself gets Access Denied; we gave the bucket policy only to the distribution, which is the reason the bucket's objects are accessible only through the distribution URL.


--Thanks 







Thursday, June 5, 2025

Cloud Trail

Class 39th AWS Cloud Trail June 5th

CloudTrail tracks the action details of all users in our organization.

Root user 

AWS > CloudTrail

See below how user actions are tracked in detail.


Step1: I created one dummy IAM user, ccitdeveloper, with S3 full access permission,

and created one bucket, "ccitdevelopecreatedjune525"; we need to check whether the action was tracked by CloudTrail or not.

I logged in to the console as ccitdeveloper and created the bucket; each user's actions are tracked separately.


Charges 

CloudTrail: it shows the services accessed by each user.

By default, it stores the last 90 days of activity.

$2.00 per 100,000 management events delivered.

$0.10 per 100,000 network activity events delivered (for VPC).

By default, CloudTrail records all management events.

But if we want specific events (S3, EC2), we need to create a trail.

Practical for Specific event  Trail

Step1: CloudTrail > Create trail; give a name. The trail's log events are stored in an S3 bucket.

Click Next.


Select the events:

Management events and data events

Data event type: S3 only

Click Create trail. The trail is created successfully, and events are captured in the S3 bucket.


As the IAM admin user, I uploaded two files (objects) to the S3 bucket s3://ccitdevelopecreatedjune5251.

CloudTrail tracked the S3 bucket actions; the logs are shown in JSON format.

To understand the JSON more easily, you can convert it online; copy it into
https://jsonformatter.org/json-to-yaml

See the log here: for the bucket below, the two uploaded files 1.png and 2.png had their actions tracked.

   resources:
      - accountId: '216989104632'
        type: 'AWS::S3::Bucket'
        ARN: 'arn:aws:s3:::ccitdevelopecreatedjune5251'
      - type: 'AWS::S3::Object'
        ARN: 'arn:aws:s3:::ccitdevelopecreatedjune5251/1.png'
  resources:
      - accountId: '216989104632'
        type: 'AWS::S3::Bucket'
        ARN: 'arn:aws:s3:::ccitdevelopecreatedjune5251'
      - type: 'AWS::S3::Object'
        ARN: 'arn:aws:s3:::ccitdevelopecreatedjune5251/2.png'
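
The same 90-day event history can also be queried from code instead of the console. A hedged sketch using boto3's CloudTrail client; the CreateBucket filter is an example mirroring the bucket-creation action tracked above:

import boto3

cloudtrail = boto3.client('cloudtrail', region_name='eu-west-1')

# look up recent management events named CreateBucket
response = cloudtrail.lookup_events(
    LookupAttributes=[
        {'AttributeKey': 'EventName', 'AttributeValue': 'CreateBucket'}
    ],
    MaxResults=10
)
for event in response['Events']:
    print(event['EventTime'], event.get('Username'), event['EventName'])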

Sample Questions:
Q1) How do you track user activities in AWS?
       CloudTrail
Q2) One of the users deleted a server in the AWS account; how do you find who did it?
CloudTrail
Q3) By default, how many days are events stored?
 90 days
Q4) Can we filter events separately for a resource?
   Yes
Note: we are not able to delete the default event history.

--Thanks 

                                           Content Delivery Network (CDN)
  It helps with streaming live events in the browser.
  • Used to deliver apps from edge locations
  • It gives fast responses
ex: cricket matches, e-commerce sales, etc.

User -->Edge Location -->App server
Edge location: the application's content also exists at the edge location.
Origin -->The original server, for ex: S3, ELB, API Gateway

The flow chart below works like this:
the first time a user sends a request, it goes through CloudFront, which fetches the image from the S3 bucket;
the response goes from the S3 origin to CloudFront and then back to the user.
The second time, the user is served directly from the edge cache.


  • Users request images through a web or mobile application.
  • The application constructs URLs pointing to CloudFront distributions associated with the S3 buckets.
  • CloudFront serves the cached images from the nearest edge location, reducing latency and improving performance.
  • If the requested image or transformation is not cached, CloudFront fetches it from S3. If the fetch from S3 results in a 404 error (image not found), Lambda@Edge will be triggered to serve a default image. Alternatively you can set up CloudFront with origin failover (fallback to another S3 bucket or a web server) for scenarios that require high availability.

Advantages:
 1. Reduce latency
 2. Cut cost
 3. Customize delivery
 4. Security
Free Tier:
1 TB of data transfer out
10,000,000 (10 million) HTTP or HTTPS requests
2,000,000 (2 million) CloudFront Function invocations
Each month, always free

Load Balancer (to control the traffic between two servers, we use a load balancer)
Create load balancer > Application Load Balancer
Load balancer name: Amazon

(Server1, Server2): usually you need two servers behind a load balancer.

Generally we access a server using its IP, but with a load balancer we access it through DNS.
Once it came to Active (screenshot below):


So far we could access the application using the public IP:
http://54.198.190.185/
Now we are able to access the server using the DNS name as well, via the URL below (shown in the screenshot):
http://amazon-542509.us-east-1.elb.amazonaws.com/



Step1: Create 2 servers and deploy the amazon app --done (as of now one server exists)
Step2: Create the load balancer --Done
Step3: CloudFront --> Origin domain: ELB (select your LB) --> Protocol: HTTP only (origin protocol)
--> Enable Origin Shield (where the cache will be stored): us-east-1b --> Protocol: HTTPS (CloudFront protocol) -->
Select WAF --> IPv6: OFF --> CREATE

In the AWS console, search for CloudFront.
Create a CloudFront distribution

Distribution options

 . Single website or app

 Select the region where your application server exists.

So far the app is HTTP only; once HTTPS is enabled on the CloudFront side, our application will load over HTTPS as well.

Finally, WAF (Web Application Firewall) is enabled; threats are controlled by this.

Click Create distribution; it takes a couple of minutes to be enabled, and Last modified changes from "Deploying" to the date once done.

Now Last modified shows the date and time. Copy the distribution domain name and try to access the website.
  

See now: the application is accessed over HTTPS. For reference, I copied the domain name into the search box.

http://54.198.190.185/  (website url)
http://amazon-542509.us-east-1.elb.amazonaws.com/  (Load balancer )
https://d36y6xvhps2lzh.cloudfront.net/  (CloudFront); compared with the URLs above, the CDN is faster. Our live streaming works the same way.

We are able to restrict the website in the CDN: CloudFront > Geographic restrictions > Edit, select the geographic locations to block (India, Hungary), and click Save.
Hungary
India

Now see: we are not able to access the application.


After completing your tasks, you need to delete the CDN: first disable it and then delete it.


--Thanks 


Tuesday, June 3, 2025

Nginx

Class 38th AWS Nginx June 3rd

To host any application, a minimum of three servers is mandatory:

Web server --> App server --> DB server

If you open any hosted URL, for ex. Swiggy, it first goes to the web server (front-end code: HTML, CSS, JavaScript).

User -->Server (web server) -->App server -->DB server   [AKA = also known as]

Web server: (Apache, Nginx, IIS, WebSphere)

AKA :The presentation layer 

Purpose :to show the app

Who :UI/UX(front-end dev)

What :Web technologies

Ex:html,css,js

NGINX IS A WEB SERVER

  • USED TO SERVE STATIC FILES (FRONT-END CODE)
  • Powers about 35% of websites all over the world
  • It was officially released in Oct 2004
  • It was created to solve the C10k problem (handling 10k concurrent sessions)
  • Free & open source
  • Easy to install and use
  • Port 80

 Nginx overcomes the 10k-session handling problem; like all web servers, it listens on port 80.

Advantages :

  • It uses less memory and resources (~10 MB of software).
  • Nginx makes the website faster.
  • A faster site helps to get a better Google ranking.
  • It handles thousands of connections at the same time.
  • Load balancing
  • Acts as a proxy & reverse proxy server
This next part is just for knowledge purposes.

Website Ranking checking in Google using 

https://sitechecker.pro/rank-checker/ for specific website to check

https://www.similarweb.com/top-websites/ top 10 website to check 

As you can see below, users spend on average 10.22 minutes per day on Google and 20.03 minutes per day on YouTube.


Forward Proxy (just like a toll-free number: the client shows a stand-in IP address)

Advantages:
  •  Hide A client's IP address
  •  Protect data and resources from malicious actors 
  •  Restrict access to specific users/groups
  •  Speed results from cache
Cache: if you open any website, it takes time the first time; if you open it again, it comes fast the second time because it was cached previously. That is called a cache.


Reverse Proxy

 Advantages:
  •  Hide A server IP address 
  •  Protect against DDoS attacks (Distributed Denial of Service): attackers send millions of fake requests to make the server go down
  •  Speed up access for specific users/groups based on location
  •  Speed results from cache
Practical 
Installation
apt install nginx  : to install (how you install any package/tool you need)
systemctl start nginx : to start
systemctl status nginx : to check the status

Important paths
cd /var/www/html  --> path to put the frontend code
tail -f /var/log/nginx/access.log --> access logs
tail -f /var/log/nginx/access.log | awk '{print $1}' : to check client IPs
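
For quick analysis, the same per-IP count the awk one-liner gives can be done in a few lines of Python (a sketch, assuming the default log path):

from collections import Counter

ips = Counter()
with open('/var/log/nginx/access.log') as log:
    for line in log:
        fields = line.split()
        if fields:
            ips[fields[0]] += 1  # client IP is the first field

for ip, hits in ips.most_common(10):  # top 10 clients
    print(ip, hits)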

Step1:
Create one Ubuntu instance.

ubuntu@ip-172-31-42-237:~$ sudo -i
root@ip-172-31-42-237:~#
root@ip-172-31-42-237:~# apt update
root@ip-172-31-42-237:~# apt install nginx
root@ip-172-31-42-237:~# systemctl start nginx
root@ip-172-31-42-237:~# systemctl status nginx

Step2: You can get the source code from GitHub:

https://github.com/RAHAMSHAIK007/amazonapp
root@ip-172-31-42-237:~# git clone https://github.com/RAHAMSHAIK007/amazonapp.git

root@ip-172-31-42-237:~/amazonapp# cat amazon.sh

apt update 

apt install apache2 

cd /var/www/html   # all frontend code lives here

git clone https://github.com/Ironhack-Archive/online-clone-amazon.git

mv online-clone-amazon/* .

root@ip-172-31-42-237:~/amazonapp# sh amazon.sh

Step3: Using the public IP of your instance, the amazon web application opens successfully.


Now remove nginx and try to access the public IP again:
root@ip-172-31-42-237:/var/www/html# apt remove nginx

Website getting error 

This site can’t be reached

root@ip-172-31-42-237:~# apt install nginx
root@ip-172-31-42-237:~# systemctl start nginx
The website opens successfully again.
Step4: Log information about who is using the website URL. In the output below, 200 is the status code; the user is getting a successful response from the website.
root@ip-172-31-42-237:/var/www/html# tail -f /var/log/nginx/access.log
84.225.123.245 - - [09/Jun/2025:10:28:13 +0000] "GET /img/dress.png HTTP/1.1" 200 635437 "http://54.198.138.13/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36"
84.225.123.245 - - [09/Jun/2025:10:28:14 +0000] "GET /img/product_6.jpg HTTP/1.1" 200 211409 "http://54.198.138.13/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36"

Step5: if you want to block the website

root@ip-172-31-42-237:/var/www/html# vim /etc/nginx/nginx.conf

http {

        deny all;

root@ip-172-31-42-237:/var/www/html# systemctl restart nginx.service

The website is now inaccessible, with the error below:

403 Forbidden

root@ip-172-31-42-237:/var/www/html# tail -f /var/log/nginx/access.log

84.225.123.245 - - [09/Jun/2025:11:16:27 +0000] "GET / HTTP/1.1" 403 195 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36"

Step6: Deny access for a single user:

root@ip-172-31-42-237:/var/www/html# vim /etc/nginx/nginx.conf

http {

     deny 84.225.123.245/32;

root@ip-172-31-42-237:/var/www/html# systemctl restart nginx.service

Now my own IP address gets the error:

403 Forbidden

Others are able to access the site successfully, as the log shows:
root@ip-172-31-42-237:/var/www/html# tail -f /var/log/nginx/access.log
136.226.199.34 - - [09/Jun/2025:11:30:25 +0000] "GET /favicon.ico HTTP/1.1" 200 "http://54.198.138.13/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36"

 So far we access via the IP address http://54.198.138.13/; instead of that, we would give it a unique domain name:

 http://54.198.138.13/  --> amazon.com (domain)

--Thanks 

Just for Knowledge Purposes

Instead of ChatGPT, use Visual Studio Code with Copilot set up: type the requirement, and the code is generated automatically.



This is the way to prompt the AI ("you are an experienced engineer; give Terraform code for EC2 and make sure it is fully secure").

This is the way you need to search; see below, it gives the complete configuration: IAM, security group, everything.

// main.tf

provider "aws" {
  region = "us-east-1"
}

data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

resource "aws_security_group" "ec2_sg" {
  name        = "secure-ec2-sg"
  description = "Allow SSH only from my IP"
  vpc_id      = "<YOUR_VPC_ID>"

  ingress {
    description = "SSH from my IP"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["<YOUR_IP>/32"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_iam_role" "ec2_role" {
  name = "ec2-least-privilege-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "ec2.amazonaws.com"
      }
    }]
  })
}

resource "aws_iam_instance_profile" "ec2_profile" {
  name = "ec2-instance-profile"
  role = aws_iam_role.ec2_role.name
}

resource "aws_instance" "secure_ec2" {
  ami                         = data.aws_ami.amazon_linux.id
  instance_type               = "t3.micro"
  subnet_id                   = "<YOUR_PRIVATE_SUBNET_ID>"
  vpc_security_group_ids      = [aws_security_group.ec2_sg.id]
  associate_public_ip_address = false
  iam_instance_profile        = aws_iam_instance_profile.ec2_profile.name

  root_block_device {
    encrypted   = true
    volume_size = 8
    volume_type = "gp3"
  }

  tags = {
    Name = "secure-ec2"
  }
}

Just for Knowledge Purposes

If you want a Linux server free for 1 hour, use this website and click Ubuntu:

https://killercoda.com/playgrounds

--Thanks 



Monday, June 2, 2025

Cloud Watch

Class 37th AWS Cloud Watch June 2nd

How many steps does it take to create one instance in AWS?

7 steps:

1.Instance name:

2.AMI: Image 

3.Instance type : t2.micro

4.Key pair

5.Security group :Network 

6.Configuration of storage

7.Advance details

Monitoring 

What is monitoring?

Monitoring is the process of continuously observing, measuring, and analyzing systems and applications.

Monitoring ensures they operate efficiently, securely, and without disruptions.

Monitoring identifies performance issues, cyber threats, and unauthorized access.

Importance of monitoring

Early Issue Detection: Helps in identifying problems before critical failures.

Business Continuity: Minimizes downtime and improves service availability.

User Experience: Ensures a smooth and reliable experience for end users.

Type of monitoring 
01. IT & INFRASTRUCTURE MONITORING
Server monitoring
Network monitoring
Database monitoring 
Cloud monitoring 
Application Performance monitoring 
02. SECURITY MONITORING 
Log monitoring 
Endpoint monitoring
SIEM monitoring
03. BUSINESS  & MARKETING MONITORING 
Website monitoring
Social media monitoring 
Competitor monitoring 
Customer feedback

Step1:
If you create any instance in AWS, basic CloudWatch monitoring is enabled by default.


Step2:
The script below launches the website:
[ec2-user@ip-10-0-0-222 ~]$ sudo -i
[root@ip-10-0-0-222 ~]# vi amazon.sh
[root@ip-10-0-0-222 ~]#  [New] 9L, 232B written
[root@ip-10-0-0-222 ~]# cat amazon.sh
#! /bin/bash
yum install httpd git -y
systemctl start httpd
systemctl status httpd
cd /var/www/html
git clone https://github.com/Ironhack-Archive/online-clone-amazon.git
mv online-clone-amazon/* .
tail -f /var/log/httpd/access_log
[root@ip-10-0-0-222 ~]# sh amazon.sh
Step3: Using the public IP of the instance, launch the website.

Step4: Create a dashboard.

Click CloudWatch > Dashboards > Create dashboard.

Give any dashboard name: Amazon

Metrics: a metric means consumption of (CPU, RAM, disk). You can choose whatever widget type; take Line for now.

click next 

You need to select which resource you need to monitor; for us now it is EC2.

> Select Per-Instance Metrics, search with your instance ID and select CPU utilization, click Create widget, and then click Save.

Step5: See below: CPU utilization reached 9%.
Step6:
Giving stress to the server manually:

[ec2-user@ip-10-0-1-102 ~]$ sudo -i
[root@ip-10-0-1-102 ~]# amazon-linux-extras install epel -y

[root@ip-10-0-1-102 ~]# yum install stress
 The command below manually stresses the server:

[root@ip-10-0-1-102 ~]#stress

The command below stresses my server for the next 100 seconds:

[root@ip-10-0-1-102 ~]# stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 100s

We can choose the widget type; choose Number.

Now give stress for 1000 seconds:

[root@ip-10-0-1-102 ~]# stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 1000s

The Investigate feature needs AI to be enabled; it is a paid feature.



CREATE ALARM
Step1: SNS (Simple Notification Service)

> CloudWatch > Alarms > In alarm

Click Next and select Stop the instance when the server goes above 50% CPU.


Create a topic; it will send the alert as an email notification. Click Create topic.
After clicking Create topic, you will get a subscription confirmation at your email; click that confirmation.


After confirmation, you will receive the message below.

Select the notification topic which you have created.
Click Next > give any alarm name > Create alarm.
Alarm created successfully.
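
The same alarm can also be created from code. A minimal boto3 sketch; the region, instance ID, account ID, and topic name are placeholders, and the stop action uses AWS's documented EC2 stop action ARN:

import boto3

cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')  # adjust to your region
cloudwatch.put_metric_alarm(
    AlarmName='cpu-above-50',
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'InstanceId', 'Value': '<YOUR_INSTANCE_ID>'}],
    Statistic='Average',
    Period=300,                 # evaluate 5-minute averages
    EvaluationPeriods=1,
    Threshold=50.0,
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=[
        'arn:aws:sns:us-east-1:<ACCOUNT_ID>:<TOPIC_NAME>',  # email via SNS
        'arn:aws:automate:us-east-1:ec2:stop'               # stop the instance
    ]
)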



Step2: Give stress manually again and check:

[root@ip-10-0-1-102 ~]# stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 1000s





Step3: You will get a notification email.

Step4: The instance was stopped.

Step5: If you don't want the instance stopped, you can remove the Stop EC2 action; then you will only get the email.


                                                               Prometheus Monitoring Tool

Prometheus and Grafana are set up in 20 seconds with the script below.

Step1: Execute the script:

https://github.com/RAHAMSHAIK007/all-setups/blob/master/prometheus.sh

[root@ip-10-0-1-102 ~]# sh prometheus.sh

Step2: Launch Prometheus on your public IP:

http://18.201.137.228:9090/


Step3: Grafana



--Thanks