Thursday, June 5, 2025

Cloud Trail

Class 39th AWS Cloud Trail June 5th

In our organization, CloudTrail tracks the action details of all users.

Root user

AWS Console > CloudTrail

See below how the user was created and how its actions are tracked.


Step1: I created one dummy IAM user, ccitdeveloper, with S3FullAccess permission,

and created one bucket, "ccitdevelopecreatedjune525"; we need to check whether the CloudTrail action was tracked or not.

I logged in to the console as ccitdeveloper and created the bucket; the admin user's actions are tracked as well.


Charges

CloudTrail: it shows the services accessed by each user.

By default it stores the last 90 days of activity.

$2.00 per 100,000 management events delivered.

$0.10 per 100,000 network activity events delivered (for VPC).

By default CloudTrail records all events.

But if we want only specific events (S3, EC2), we need to create a trail.

Practical for a Specific-Event Trail

Step1: CloudTrail > Create trail, give a name; the trail's log events are stored in an S3 bucket.

Click Next.


Select the events:

Management events and data events

Data event: S3 only

Click Create trail. The trail was created successfully, and events are captured in the S3 bucket.
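
For reference, the same trail can be created from the AWS CLI. This is a minimal sketch; the trail and log-bucket names are placeholders, and the log bucket must already carry the CloudTrail bucket policy that the console normally creates for you.

# Create the trail and point it at a logging bucket (names are placeholders)
aws cloudtrail create-trail --name ccit-s3-trail --s3-bucket-name ccit-trail-logs-bucket

# Capture S3 data events (object-level calls) in addition to management events
aws cloudtrail put-event-selectors --trail-name ccit-s3-trail \
  --event-selectors '[{"ReadWriteType":"All","IncludeManagementEvents":true,"DataResources":[{"Type":"AWS::S3::Object","Values":["arn:aws:s3"]}]}]'

# Start delivering events to the bucket
aws cloudtrail start-logging --name ccit-s3-trail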


As the IAM admin user, I uploaded two files (objects) to the S3 bucket s3://ccitdevelopecreatedjune5251.

The CloudTrail S3 bucket actions were tracked; the log is shown in JSON format.

To understand the JSON more easily, you can convert it to YAML with an online converter:
https://jsonformatter.org/json-to-yaml

See the log here: the upload of the two files 1.png and 2.png to the bucket below was tracked.

resources:
  - accountId: '216989104632'
    type: 'AWS::S3::Bucket'
    ARN: 'arn:aws:s3:::ccitdevelopecreatedjune5251'
  - type: 'AWS::S3::Object'
    ARN: 'arn:aws:s3:::ccitdevelopecreatedjune5251/1.png'
resources:
  - accountId: '216989104632'
    type: 'AWS::S3::Bucket'
    ARN: 'arn:aws:s3:::ccitdevelopecreatedjune5251'
  - type: 'AWS::S3::Object'
    ARN: 'arn:aws:s3:::ccitdevelopecreatedjune5251/2.png'
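
You can also pull the same events from the CLI instead of the console; a small sketch using the bucket name above:

# Last few CloudTrail events that touched this bucket
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=ResourceName,AttributeValue=ccitdevelopecreatedjune5251 \
  --max-results 5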

Sample Questions:
Q1) How do you track user activities in AWS?
       CloudTrail
Q2) One of the users deleted a server in the AWS account; how do you find out who did it?
       CloudTrail
Q3) By default, how many days are events stored?
       90 days
Q4) Can we filter events separately for a resource?
       Yes
Note: the event history itself cannot be deleted.

--Thanks 

                                           Content Delivery Network (CDN)
  It helps stream live events to the browser.
  • Used to deliver the app from an edge location
  • It gives a fast response
ex: cricket matches, e-commerce sales, etc.

User --> Edge Location --> App server
Edge location: the application content also exists in the edge location.
Origin --> the original server, for ex: S3, ELB, API Gateway

Here is the flow chart below:
The first time a user makes a request, it goes through CloudFront, which fetches the image from the S3 bucket;
we get the response from the S3 bucket origin, which is sent to CloudFront and then back to the user.
The second time, the user is served the cached copy directly from the edge location.


  • Users request images through a web or mobile application.
  • The application constructs URLs pointing to CloudFront distributions associated with the S3 buckets.
  • CloudFront serves the cached images from the nearest edge location, reducing latency and improving performance.
  • If the requested image or transformation is not cached, CloudFront fetches it from S3. If the fetch from S3 results in a 404 error (image not found), Lambda@Edge will be triggered to serve a default image. Alternatively you can set up CloudFront with origin failover (fallback to another S3 bucket or a web server) for scenarios that require high availability.

Advantages:
 1. Reduced latency
 2. Cut costs
 3. Customized delivery
 4. Security
Free Tier:
1 TB of data transfer out
10,000,000 (10 million) HTTP or HTTPS requests
2,000,000 CloudFront Function invocations
Each month, always free

Load Balancer (to control the traffic between two servers, we use a load balancer)
Create load balancer > Application Load Balancer
Load balancer name: Amazon

(Server1, Server2); usually you need at least two servers for a load balancer.

Generally we access a server using its IP, but a load balancer is accessed through its DNS name.
Once it came to Active (below screenshot):


So far we can access the application using the public IP:
http://54.198.190.185/
Now we can also access the server using the DNS name via the URL below, as shown in the screenshot:
http://amazon-542509.us-east-1.elb.amazonaws.com/
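
A quick way to confirm both addresses serve the same application (a sketch using the URLs above):

# Both should answer with HTTP 200 from the same app
curl -I http://54.198.190.185/
curl -I http://amazon-542509.us-east-1.elb.amazonaws.com/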



Step1: Create 2 servers and deploy the amazon app --done (as of now one server exists)
Step2: Create the load balancer --Done
Step3: CloudFront --> Origin domain: ELB (select your LB) --> Protocol: HTTP only (origin protocol)
--> Enable Origin Shield (the cache will be stored there): us-east-1b --> Viewer protocol: HTTPS (CloudFront protocol) -->
Select WAF --> IPv6: OFF --> CREATE

In the AWS console, type > CloudFront
Create a CloudFront distribution

Distribution options

 . Single website or app

Select the region where your application server exists.

So far the application is HTTP-only; once HTTPS is enabled on the distribution, the application will load over HTTPS as well.

Finally, WAF (Web Application Firewall) is enabled; threats are controlled by this.

Click Create distribution. It takes about 2 minutes to be enabled, and "Last modified" changes from Deploying to the deployment date.

Once Last modified shows the date and time, copy the distribution domain name and try to access the website.


See now that the application is accessed over HTTPS; for reference, I copied the domain name into the browser's address bar.

http://54.198.190.185/  (website URL)
http://amazon-542509.us-east-1.elb.amazonaws.com/  (load balancer)
https://d36y6xvhps2lzh.cloudfront.net/  (CloudFront); compared with the URLs above, the CDN is faster, and our live streaming works the same way.
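
You can measure the difference yourself by timing each URL; a rough sketch (the numbers depend on your location and on whether the edge cache is already warm):

for url in http://54.198.190.185/ \
           http://amazon-542509.us-east-1.elb.amazonaws.com/ \
           https://d36y6xvhps2lzh.cloudfront.net/; do
  # time_total = full request time in seconds; run it twice to see the cache warm up
  curl -s -o /dev/null -w "$url -> %{time_total}s\n" "$url"
done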

We can restrict the website by geography in CloudFront: Geographic restrictions > Edit > select the countries to block (India, Hungary) > click Save.
Hungary
India

Now the application can no longer be accessed from those countries.


After completing your tasks, you need to delete the CDN: first disable the distribution, then delete it.


--Thanks 


Tuesday, June 3, 2025

Nginx

Class 38th AWS Nginx June 3rd

To host any application, a minimum of three servers is mandatory:

Webserver --> App server --> DB server

If you open any host URL, for ex: Swiggy --> it goes to the webserver (front-end code: HTML, CSS, JavaScript).

User --> Webserver --> App server --> DB server   [AKA = also known as]

Webserver: (Apache, Nginx, IIS, WebSphere)

AKA: the presentation layer

Purpose: to show the app

Who: UI/UX (front-end dev)

What: web technologies

Ex: HTML, CSS, JS

NGINX IS A WEB SERVER

  • Used to serve static files (front-end code)
  • Runs about 35% of websites all over the world
  • It was officially released in October 2004
  • It was created to solve the C10K problem (handling 10k concurrent connections)
  • Free & open source
  • Easy to install and use
  • Port 80

Nginx overcomes the problem of handling 10k sessions; all web servers listen on port 80 by default.

Advantages:

  • It uses less memory and resources (the software is about 10 MB).
  • Nginx makes the website faster, which helps you get a better Google ranking.
  • Handles thousands of connections at the same time.
  • Load balancing.
  • Acts as a forward & reverse proxy.
This is just for knowledge purposes.

Checking website rankings in Google:

https://sitechecker.pro/rank-checker/ to check a specific website

https://www.similarweb.com/top-websites/ to check the top 10 websites

As you can see below, users spend on average 10.22 minutes per day on Google and 20.03 minutes per day on YouTube.


Forward Proxy (just like a toll-free number; it hides your real IP behind another address)

Advantages:
  •  Hides a client's IP address
  •  Protects data and resources from malicious actors
  •  Restricts access to specific users/groups
  •  Speeds up results from the cache
Cache: if you open any website, it takes time the first time; if you open it again, it loads quickly the second time, because it was cached previously. That is called a cache.


Reverse Proxy

 Advantages:
  •  Hides a server's IP address
  •  Protects against DDoS attacks (Distributed Denial of Service: millions of fake requests are sent to bring the server down)
  •  Speeds up access for specific users/groups based on location
  •  Speeds up results from the cache
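
For context, this is roughly what a reverse proxy looks like in nginx. A minimal sketch, assuming a backend app server at 10.0.0.50:8080 (a placeholder address), written as one shell command so you can paste it directly:

cat <<'EOF' > /etc/nginx/conf.d/reverse-proxy.conf
server {
    listen 80;
    location / {
        # nginx hides the app server's IP and forwards requests to it
        proxy_pass http://10.0.0.50:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
EOF
systemctl restart nginx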
Practical
Installation
apt install nginx   :To install (use apt to install any package/tool you need)
systemctl start nginx   :To start
systemctl status nginx   :To check the status

Important paths
cd /var/www/html  -->path to put the frontend code
tail -f /var/log/nginx/access.log  -->access logs
tail -f /var/log/nginx/access.log | awk '{print $1}'   :to check client IPs

Step1:
Create one Ubuntu instance

ubuntu@ip-172-31-42-237:~$ sudo -i
root@ip-172-31-42-237:~#
root@ip-172-31-42-237:~# apt update
root@ip-172-31-42-237:~# apt install nginx
root@ip-172-31-42-237:~# systemctl start nginx
root@ip-172-31-42-237:~# systemctl status nginx

Step2: You can get the source code from GitHub:

https://github.com/RAHAMSHAIK007/amazonapp
root@ip-172-31-42-237:~# git clone https://github.com/RAHAMSHAIK007/amazonapp.git

root@ip-172-31-42-237:~/amazonapp# cat amazon.sh

apt update
apt install apache2 -y
cd /var/www/html        # all the frontend code goes here
git clone https://github.com/Ironhack-Archive/online-clone-amazon.git
mv online-clone-amazon/* .

root@ip-172-31-42-237:~/amazonapp# sh amazon.sh

Step3: Using the public IP of your instance, the amazon web application opened successfully.


Remove nginx and then try to access the public IP again:
root@ip-172-31-42-237:/var/www/html# apt remove nginx

The website now shows an error:

This site can't be reached

root@ip-172-31-42-237:~# apt install nginx
root@ip-172-31-42-237:~# systemctl start nginx
The website opens successfully again.
Step4: Log information about who is using the website URL. In the lines below, 200 is the status code: the user is getting a successful response from the website.
root@ip-172-31-42-237:/var/www/html# tail -f /var/log/nginx/access.log
84.225.123.245 - - [09/Jun/2025:10:28:13 +0000] "GET /img/dress.png HTTP/1.1" 200 635437 "http://54.198.138.13/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36"
84.225.123.245 - - [09/Jun/2025:10:28:14 +0000] "GET /img/product_6.jpg HTTP/1.1" 200 211409 "http://54.198.138.13/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36"

Step5: If you want to block the website for everyone:

root@ip-172-31-42-237:/var/www/html# vim /etc/nginx/nginx.conf

http {
        deny all;        # add this line inside the http block
        ...
}

root@ip-172-31-42-237:/var/www/html# systemctl restart nginx.service

The website can no longer be accessed; it shows the error below:

403 Forbidden

root@ip-172-31-42-237:/var/www/html# tail -f /var/log/nginx/access.log

84.225.123.245 - - [09/Jun/2025:11:16:27 +0000] "GET / HTTP/1.1" 403 195 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36"

Step6: Deny access for a single user:

root@ip-172-31-42-237:/var/www/html# vim /etc/nginx/nginx.conf

http {
     deny 84.225.123.245/32;        # block only this client IP
     ...
}

root@ip-172-31-42-237:/var/www/html# systemctl restart nginx.service

Now my IP address gets the error:

403 Forbidden

Others can still access the site; the access log shows success:
root@ip-172-31-42-237:/var/www/html# tail -f /var/log/nginx/access.log
136.226.199.34 - - [09/Jun/2025:11:30:25 +0000] "GET /favicon.ico HTTP/1.1" 200 "http://54.198.138.13/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36"

See how we are using the IP address http://54.198.138.13/; instead of that, we will give it a unique domain name:

 http://54.198.138.13/  --> Amazon.com (domain)

--Thanks 

Just for Knowledge Purposes

Instead of ChatGPT, you can set up Copilot in Visual Studio Code: type the requirement as text, and the code is generated automatically.



This is the way to prompt the AI ("you are experienced; give Terraform code for EC2 and make sure it is fully secure").

This is the way you need to search; see below, it gives the complete code for IAM, the security group, everything.

// main.tf

provider "aws" {
  region = "us-east-1"
}

data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

resource "aws_security_group" "ec2_sg" {
  name        = "secure-ec2-sg"
  description = "Allow SSH only from my IP"
  vpc_id      = "<YOUR_VPC_ID>"

  ingress {
    description = "SSH from my IP"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["<YOUR_IP>/32"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_iam_role" "ec2_role" {
  name = "ec2-least-privilege-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "ec2.amazonaws.com"
      }
    }]
  })
}

resource "aws_iam_instance_profile" "ec2_profile" {
  name = "ec2-instance-profile"
  role = aws_iam_role.ec2_role.name
}

resource "aws_instance" "secure_ec2" {
  ami                         = data.aws_ami.amazon_linux.id
  instance_type               = "t3.micro"
  subnet_id                   = "<YOUR_PRIVATE_SUBNET_ID>"
  vpc_security_group_ids      = [aws_security_group.ec2_sg.id]
  associate_public_ip_address = false
  iam_instance_profile        = aws_iam_instance_profile.ec2_profile.name

  root_block_device {
    encrypted   = true
    volume_size = 8
    volume_type = "gp3"
  }

  tags = {
    Name = "secure-ec2"
  }
}

Just for Knowledge Purposes

If you want a Linux server for 1 hour, use this website; click Ubuntu (free):

https://killercoda.com/playgrounds

--Thanks 



Monday, June 2, 2025

Cloud Watch

Class 37th AWS Cloud Watch June2nd

How many steps does it take to create one instance in Amazon?

7 steps:

1.Instance name:

2.AMI: Image 

3.Instance type : t2.micro

4.Key pair

5.Security group :Network 

6.Configuration of storage

7.Advanced details

Monitoring 

What is monitoring?

Monitoring is the process of continuously observing, measuring, and analyzing systems and applications.

Monitoring ensures they operate efficiently, securely, and without disruption.

Monitoring identifies performance issues, cyber threats, and unauthorized access.

Importance of monitoring

Early Issue Detection: helps identify problems before critical failures.

Business Continuity: minimizes downtime and improves service availability.

User Experience: ensures a smooth and reliable experience for end users.

Types of monitoring
01. IT & INFRASTRUCTURE MONITORING
Server monitoring
Network monitoring
Database monitoring
Cloud monitoring
Application performance monitoring
02. SECURITY MONITORING
Log monitoring
Endpoint monitoring
SIEM monitoring
03. BUSINESS & MARKETING MONITORING
Website monitoring
Social media monitoring
Competitor monitoring
Customer feedback

Step1:
If you create any instance in AWS, basic CloudWatch monitoring is enabled by default.


Step2:
The script below launches the website:
[ec2-user@ip-10-0-0-222 ~]$ sudo -i
[root@ip-10-0-0-222 ~]# vi amazon.sh
[root@ip-10-0-0-222 ~]#  [New] 9L, 232B written
[root@ip-10-0-0-222 ~]# cat amazon.sh
#!/bin/bash
yum install httpd git -y            # install Apache and git
systemctl start httpd
systemctl status httpd
cd /var/www/html                    # web root for the frontend code
git clone https://github.com/Ironhack-Archive/online-clone-amazon.git
mv online-clone-amazon/* .
tail -f /var/log/httpd/access_log   # watch the access log
[root@ip-10-0-0-222 ~]# sh amazon.sh
Step3: Using the public IP of the instance, launch the website.

Step4: Create a dashboard

Click > CloudWatch > Dashboards > Create dashboard

Give any dashboard name: Amazon

Metrics: a metric is a measure of consumption (CPU, RAM, disk). You can choose whatever widget type you like; take Line for now.

Click Next.

You need to select which service to monitor; for us now it is EC2.

> Select Per-Instance Metrics, search with your instance ID and select CPUUtilization, click Create widget and then click Save.
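
The same dashboard widget can also be created from the CLI; a sketch, where the instance ID is a placeholder:

aws cloudwatch put-dashboard --dashboard-name Amazon \
  --dashboard-body '{"widgets":[{"type":"metric","x":0,"y":0,"width":12,"height":6,
    "properties":{"metrics":[["AWS/EC2","CPUUtilization","InstanceId","i-0123456789abcdef0"]],
    "period":300,"stat":"Average","region":"us-east-1","title":"CPU utilization"}}]}'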

Step5: See below: utilization reached 9%.
Step6:
Give stress to the server manually:

[ec2-user@ip-10-0-1-102 ~]$ sudo -i
[root@ip-10-0-1-102 ~]# amazon-linux-extras install epel -y

[root@ip-10-0-1-102 ~]# yum install stress
The command below puts manual stress on the server:

[root@ip-10-0-1-102 ~]# stress

The command below stresses my server for the next 100 seconds:

[root@ip-10-0-1-102 ~]# stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 100s

We can choose the widget type; choose Number.

Given 1000 seconds:

[root@ip-10-0-1-102 ~]# stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 1000s

Investigate is AMI need enabled it is paid



CREATE ALARM
Step1: SNS (Simple Notification Service)

> CloudWatch > Alarms > In alarm

Click Next and select "Stop the instance" as the action when the server goes above 50%.


Create a topic; it sends the alert as an email notification. Click Create topic.
After clicking Create topic you will receive a subscription confirmation email; click that confirmation link.


After confirmation, you will receive the below.

Select the notification topic which you created.
Click Next > give any alarm name > Create alarm.
Alarm created successfully.
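
For reference, the whole alarm setup can also be done from the CLI. A minimal sketch; the account ID, email, and instance ID are placeholders:

# SNS topic + email subscription (confirm the email before the alarm fires)
aws sns create-topic --name cpu-alerts
aws sns subscribe --topic-arn arn:aws:sns:us-east-1:111122223333:cpu-alerts \
  --protocol email --notification-endpoint you@example.com

# Alarm: notify and stop the instance when average CPU > 50% for 5 minutes
aws cloudwatch put-metric-alarm --alarm-name cpu-above-50 \
  --namespace AWS/EC2 --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average --period 300 --evaluation-periods 1 \
  --threshold 50 --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:automate:us-east-1:ec2:stop \
                  arn:aws:sns:us-east-1:111122223333:cpu-alerts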



Step2: Give stress manually and check again:

[root@ip-10-0-1-102 ~]# stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 1000s





Step3: You will get a notification email.

Step4: The instance was stopped.

Step5: If you don't want the instance stopped, you can remove the stop-EC2 action; you will then get only the email.


                                                               Prometheus Monitoring Tool

Prometheus and Grafana set up in 20 seconds with the script below.

Step1: Execute the script:

https://github.com/RAHAMSHAIK007/all-setups/blob/master/prometheus.sh

[root@ip-10-0-1-102 ~]# sh prometheus.sh

Step2: Open your public IP on port 9090 to launch Prometheus:

http://18.201.137.228:9090/


Step3: Grafana (default port 3000)



--Thanks

Saturday, May 31, 2025

S3 part2

Class 36th AWS S3 part2 May 31st

  • Versioning in S3
  • Storage classes in S3
  • Lifecycle management in S3
  • S3 Transfer Acceleration (S3TA)
  • Working with S3 using the command line interface, CLI (assignment)

  • Versioning in S3
    • Versioning in Amazon S3 is a means of keeping multiple variants of an object in the same bucket.
    • You can use the S3 versioning feature to preserve, retrieve, and restore every version of every object stored in your buckets.
    • With versioning you can recover more easily from both unintended user actions and application failures.
    • After versioning is enabled for a bucket, if Amazon S3 receives multiple write requests for the same object simultaneously, it stores all of those objects.

    Practical 

    Step1: Create one sample bucket "ccits3bucketexample" with bucket versioning enabled, and click Create bucket.

    Then I uploaded one sample.txt file with the content below:

    Subbu S3 Bucket Versioning 1

    After changing it and uploading a second time, the screenshot below shows two versions of the file:

    Subbu S3 Bucket Versioning 2

    Delete the file; after deletion, a delete marker is added to the file.

    To restore the file, just select the delete marker and delete it; the file is automatically recovered.

    Every operation you perform is tracked in the CloudTrail event history; the track is available for 90 days.

    There is no option to disable S3 versioning; you can only suspend it (click Save changes).
    After suspending, new uploads no longer create versions; existing versions must be selected and deleted manually if you have space constraints. See the CLI sketch below.
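
    The same versioning workflow from the CLI; a sketch using the bucket name above (the version ID placeholder comes from the listing):

    # Turn versioning on
    aws s3api put-bucket-versioning --bucket ccits3bucketexample \
      --versioning-configuration Status=Enabled

    # List all versions and delete markers of sample.txt
    aws s3api list-object-versions --bucket ccits3bucketexample --prefix sample.txt

    # Restoring = deleting the delete marker itself
    aws s3api delete-object --bucket ccits3bucketexample --key sample.txt \
      --version-id <DELETE_MARKER_VERSION_ID>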


    Storage Classes:
    Amazon S3 offers a range of storage classes that you can choose from based on the data access, resiliency, and cost requirements of your workloads.

    Frequently accessed data
    S3 Standard
    S3 Express One Zone
    S3 Reduced Redundancy (no longer used)
    S3 Intelligent-Tiering
    Infrequently accessed data
    S3 Standard-Infrequent Access (S3 Standard-IA)
    S3 One Zone-Infrequent Access (S3 One Zone-IA)
    Archival data
    S3 Glacier Instant Retrieval
    S3 Glacier Flexible Retrieval (formerly S3 Glacier)
    S3 Glacier Deep Archive
    S3 on Outposts

    S3 Standard
    (default storage class; files are easily available across 3 AZs)

    User -> Bucket -> (AZ1, AZ2, AZ3)

    • Used for frequently accessed data
    • Files are stored in multiple AZs
    • 99.999999999% (11 nines) durability
    • Default storage class
    • Only storage charges apply
    • No minimum duration
    • No minimum size
    • Fast access
    Storage is chargeable; no transaction charges.
    S3 Standard-Infrequent Access (S3 Standard-IA)

    • Used for infrequently accessed data
    • Files are stored in multiple AZs
    • 99.999999999% (11 nines) durability
    • Fast access, but cheaper than Standard
    • Storage and retrieval charges apply
    • Storage charged with a minimum 30-day duration
    • Minimum billable object size of 128 KB
    • Best suited for long-lived data (30+ days)
    S3 One Zone-Infrequent Access (S3 One Zone-IA)
    • Suitable for infrequently accessed data
    • Files are stored in one AZ
    • 99.999999999% (11 nines) durability
    • Cheaper than Standard-IA
    • Storage and retrieval charges apply
    • Storage charged with a minimum 30-day duration
    • Minimum billable object size of 128 KB
    • Best suited for a second backup copy
    S3 Glacier Instant Retrieval
    • Used for rarely accessed data that still needs immediate retrieval
    • Files are stored in multiple AZs with immediate retrieval
    • 99.999999999% (11 nines) durability
    • Cheaper than the frequent- and infrequent-access storage types
    • Storage and retrieval charges apply
    • Storage charged with a minimum 90-day duration
    • Minimum billable object size of 128 KB
    • Best suited for long-lived data (90+ days)
    S3 Glacier Flexible Retrieval
    • Used for rarely accessed archive data
    • Files are stored in multiple AZs; retrieval takes minutes to hours
    • 99.999999999% (11 nines) durability
    • Cheaper than Glacier Instant Retrieval
    • Storage and retrieval charges apply
    • Storage charged with a minimum 90-day duration
    • Minimum billable object size of 40 KB
    • Best suited for long-lived data (90+ days)
    S3 Glacier Deep Archive
    • Used for infrequently accessed, long-lived data
    • Files are stored in multiple AZs; retrieval takes hours
    • 99.999999999% (11 nines) durability
    • Cheaper than all the classes above
    • Storage and retrieval charges apply
    • Storage charged with a minimum 180-day duration
    • Minimum billable object size of 40 KB
    • Best suited for long-lived data (180+ days)
    S3 Intelligent-Tiering
    • Used for unknown or changing access patterns
    • Files are stored in multiple AZs
    • 99.999999999% (11 nines) durability
    • Automatically moves files between tiers based on usage (evaluated over 30 days of access)
    • Monitoring and storage charges apply
    S3 Express One Zone
    • Lower cost than Standard
    • Files are stored in a single selected AZ
    • 99.999999999% (11 nines) durability
    • High-performance storage for your most frequently accessed data
    • Up to 10x faster than the standard classes, with 50% lower request cost
    • You can use S3 Express One Zone with services such as Amazon SageMaker Model Training, Amazon Athena, Amazon EMR, and the AWS Glue Data Catalog to accelerate your machine learning and analytics workloads.
    S3 on Outposts (for private cloud use, not general usage)
                                                   S3 Lifecycle Management

    • An Amazon S3 lifecycle rule configures predefined actions to perform on objects during their lifetime.
    • You can create a lifecycle rule to optimize your objects' storage costs throughout their lifetime.
    • You can define the scope of the lifecycle rule to cover all objects in your bucket, or objects with a shared prefix, certain object tags, or a certain object size.

     Set of rules – transition between storage classes
                  – delete the objects
    Transitions work top-down only, in the order below (when you set a rule, not down to up):
    S3 Standard
    S3 Standard-Infrequent Access (S3 Standard-IA)
    S3 Intelligent-Tiering
    S3 One Zone-Infrequent Access (S3 One Zone-IA)
    S3 Glacier Instant Retrieval
    S3 Glacier Flexible Retrieval (formerly S3 Glacier)
    S3 Glacier Deep Archive

    Practical: Lifecycle Management Rule
    Step1: Create one bucket, add one "public" folder, and upload some files to the public folder.


    Step2: Here we create a lifecycle rule for public objects only; the rule applies only to files larger than 5 KB.
    Lifecycle rule actions:
    Transitions of current object versions between storage classes are applied in top-down order based on the configured periods; down-to-up is not possible.


       Review transition and expiration actions
    Current version actions
    Day 0:   Objects uploaded
    Day 30:  Objects move to Standard-IA
    Day 60:  Objects move to Intelligent-Tiering
    Day 90:  Objects move to One Zone-IA
    Day 120: Objects move to Glacier Flexible Retrieval (formerly Glacier)
    Day 210: Objects move to Glacier Deep Archive
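
    The same rule can be written with the CLI; a sketch matching the plan above (the bucket name and "public/" prefix are the ones from this practical, and sizes are in bytes):

    aws s3api put-bucket-lifecycle-configuration --bucket ccits3bucketexample \
      --lifecycle-configuration '{
        "Rules": [{
          "ID": "public-objects-rule",
          "Filter": {"And": {"Prefix": "public/", "ObjectSizeGreaterThan": 5120}},
          "Status": "Enabled",
          "Transitions": [
            {"Days": 30,  "StorageClass": "STANDARD_IA"},
            {"Days": 60,  "StorageClass": "INTELLIGENT_TIERING"},
            {"Days": 90,  "StorageClass": "ONEZONE_IA"},
            {"Days": 120, "StorageClass": "GLACIER"},
            {"Days": 210, "StorageClass": "DEEP_ARCHIVE"}
          ]
        }]
      }'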


    S3 Transfer Acceleration (S3TA)

    Amazon S3 Transfer Acceleration (S3TA) is a bucket-level feature that enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket.

    Transfer Acceleration can speed up content transfers to and from Amazon S3 by as much as 50-500% for long-distance transfers of larger objects.
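
    Enabling S3TA from the CLI and uploading through the accelerated endpoint; a sketch (the bucket must have a DNS-compliant name without dots):

    aws s3api put-bucket-accelerate-configuration \
      --bucket ccits3bucketexample \
      --accelerate-configuration Status=Enabled

    # Route the transfer through the accelerate endpoint
    aws s3 cp bigfile.zip s3://ccits3bucketexample/ \
      --endpoint-url https://s3-accelerate.amazonaws.com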


    Replication

    SRR                                      CRR

    Same-Region Replication                  Cross-Region Replication

    • Replication is a process that enables automatic, asynchronous copying of objects across Amazon S3 buckets.
    • Buckets that are configured for object replication can be owned by the same AWS account or by different accounts.
    • You can replicate objects to a single destination bucket or to multiple destination buckets; a filter option is also available.
    • The destination buckets can be in different AWS Regions or in the same Region as the source bucket.
    • You can replicate existing and new objects to the destination; replication starts immediately and runs asynchronously.
    • An IAM role with the required permissions is attached to the replication rule to grant access to the buckets, whether they are in the same Region, a different Region, or a different account. See the CLI sketch below.
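
    A minimal replication rule from the CLI; a sketch where the role ARN, account ID, and bucket names are placeholders, and versioning must already be enabled on both buckets:

    aws s3api put-bucket-replication --bucket source-bucket \
      --replication-configuration '{
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
        "Rules": [{
          "ID": "replicate-all",
          "Status": "Enabled",
          "Priority": 1,
          "DeleteMarkerReplication": {"Status": "Disabled"},
          "Filter": {"Prefix": ""},
          "Destination": {"Bucket": "arn:aws:s3:::destination-bucket"}
        }]
      }'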



    --Thanks