Q2) A user deleted a server in the AWS account. How do you find out who did it?
CloudTrail
Q3) By default, for how many days are events stored?
90 days
Q4) Can we filter events separately for a resource?
Yes
We cannot delete the CloudTrail event history.
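As a quick sketch, the same event history can be queried from the AWS CLI; the event name, instance ID, and dates below are placeholders for your own values.

    # look up who performed a given action in a time window
    aws cloudtrail lookup-events \
      --lookup-attributes AttributeKey=EventName,AttributeValue=TerminateInstances \
      --start-time 2024-05-01 --end-time 2024-05-07 \
      --max-results 10

    # filter events for a specific resource (e.g. one EC2 instance)
    aws cloudtrail lookup-events \
      --lookup-attributes AttributeKey=ResourceName,AttributeValue=i-0123456789abcdef0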
--Thanks
Content Delivery Network (CDN)
It helps stream live events to the browser.
Used to deliver application content from edge locations.
It gives faster responses.
Ex: cricket matches, e-commerce sales, etc.
User --> Edge Location --> App server
Edge location: a copy of the application content also exists at the edge location.
Origin --> the original server, e.g. S3, ELB, API Gateway
The flow is shown below.
First request: the user's request goes through CloudFront, which fetches the image from the S3 bucket (origin);
the response comes back from the S3 origin to CloudFront and is then sent to the user.
Subsequent requests:
Users request images through a web or mobile application.
The application constructs URLs pointing to CloudFront distributions associated with the S3 buckets.
CloudFront serves the cached images from the nearest edge location, reducing latency and improving performance.
If the requested image or transformation is not cached, CloudFront fetches it from S3. If the fetch from S3 results in a 404 error (image not found), Lambda@Edge will be triggered to serve a default image. Alternatively you can set up CloudFront with origin failover (fallback to another S3 bucket or a web server) for scenarios that require high availability.
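A rough way to observe this cache behaviour from the command line (the distribution domain and object path below are placeholders) is to check the X-Cache response header, which tells you whether CloudFront served the object from an edge cache or fetched it from the origin:

    # first request: typically "X-Cache: Miss from cloudfront" (fetched from the S3 origin)
    curl -sI https://d1234example.cloudfront.net/images/photo.jpg | grep -i x-cache

    # repeat the request: typically "X-Cache: Hit from cloudfront" (served from the edge cache)
    curl -sI https://d1234example.cloudfront.net/images/photo.jpg | grep -i x-cache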
Advantages:
1. Reduce latency
2. Cut costs
3. Customize delivery
4. Security
Free Tier:
1 TB of data transfer out
10,000,000 (10 million) HTTP or HTTPS requests
2,000,000 CloudFront Function invocations
Each month, always free
Load Balancer (used to control and distribute traffic between two servers)
Create load balancer > Application Load Balancer
Load balancer name: Amazon
Click Create load balancer.
It takes some time to create the load balancer.
Load balancer: if 100 users hit the application, the sessions are divided equally between the two servers (Server1, Server2); you generally need at least two servers behind a load balancer.
Generally we access a server using its IP, but a load balancer is accessed through its DNS name.
Once the state becomes Active (see the screenshot below).
So far we could access the application using the public IP:
http://54.198.190.185/
Now we can also access the server using its DNS name, via the URL below (see screenshot):
http://amazon-542509.us-east-1.elb.amazonaws.com/
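The same setup can be sketched with the AWS CLI; the VPC, subnet, security group, instance IDs, and ARNs below are placeholders for your own environment:

    # create a target group and register the two servers
    aws elbv2 create-target-group --name amazon-tg --protocol HTTP --port 80 \
      --vpc-id vpc-0abc1234 --target-type instance
    aws elbv2 register-targets --target-group-arn <target-group-arn> \
      --targets Id=i-0aaa1111 Id=i-0bbb2222

    # create the application load balancer and an HTTP listener forwarding to the target group
    aws elbv2 create-load-balancer --name Amazon --type application \
      --subnets subnet-0aaa1111 subnet-0bbb2222 --security-groups sg-0abc1234
    aws elbv2 create-listener --load-balancer-arn <load-balancer-arn> --protocol HTTP --port 80 \
      --default-actions Type=forward,TargetGroupArn=<target-group-arn>

    # the DNS name to use instead of the public IP
    aws elbv2 describe-load-balancers --names Amazon --query "LoadBalancers[0].DNSName"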
Step 1: Create 2 servers and deploy the Amazon app -- done (as of now only one server exists)
Step 2: Create the load balancer -- done
Step 3: CloudFront --> Origin domain: ELB (select your LB) --> Protocol: HTTP only (origin protocol)
--> Enable Origin Shield (cache will be stored): us-east-1b --> Viewer protocol: HTTPS (CloudFront protocol) -->
Select WAF --> IPv6: Off --> Create
In the AWS console, search for CloudFront.
Create a CloudFront distribution.
Distribution options:
Single website or app
Select the region where your application server exists.
So far the application is served over HTTP only; once Origin Shield and the HTTPS protocol are enabled on CloudFront, the application will also load over HTTPS.
Finally, WAF (Web Application Firewall) is enabled; it controls threats to the application.
Click Create distribution. It takes a couple of minutes to be enabled; the Last modified field shows Deploying until it updates to a date.
Once Last modified shows a date and time, copy the distribution domain name and try to access the website.
Now the application is accessible over HTTPS; for reference, I have pasted the distribution domain name in the address bar (see screenshot).
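For reference, a distribution can also be created from the CLI. The shorthand below is the documented S3-origin form (the bucket name is an example); the console walkthrough above used an ELB origin with WAF and Origin Shield, which would need a full --distribution-config JSON instead.

    # minimal sketch: create a distribution with an S3 origin
    aws cloudfront create-distribution \
      --origin-domain-name ccits3bucketexample.s3.amazonaws.com \
      --default-root-object index.html

    # check deployment status and the distribution domain name
    aws cloudfront list-distributions \
      --query "DistributionList.Items[].{Id:Id,Domain:DomainName,Status:Status}"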
To host any application, a minimum of three servers is typically needed:
Web server --> Application server --> DB server
If you open any host URL, for example Swiggy, the request first goes to the web server (front-end code: HTML, CSS, JavaScript).
User --> Web server --> App server --> DB server [AKA = also known as]
Web server: Apache, Nginx, IIS, WebSphere
AKA: the presentation layer
Purpose: to show the app
Who: UI/UX (front-end devs)
What: web technologies
Ex: HTML, CSS, JS
Nginx is a web server.
Used to serve static files (front-end code).
Powers about 35% of websites worldwide.
It was officially released in October 2004.
It was created to solve the C10K problem (handling 10k concurrent connections).
Free and open source.
Easy to install and use.
Default port: 80.
Nginx overcomes the problem of handling 10k sessions; like other web servers, it listens on port 80.
Advantages:
Uses less memory and fewer resources (the software is about 10 MB).
Nginx makes the website faster, which
helps get a better Google ranking.
Handles thousands of connections at the same time.
Load balancing.
Acts as a proxy and reverse proxy server.
This is just for knowledge purposes.
Website ranking can be checked in Google using:
https://sitechecker.pro/rank-checker/ (to check a specific website)
https://www.similarweb.com/top-websites/ (to check the top websites)
As you can see below, users spend on average about 10.22 minutes per day on Google and 20.03 minutes per day on YouTube.
Forward Proxy (works like a toll-free number: it presents a different IP address)
Advantages:
Hides a client's IP address
Protects data and resources from malicious actors
Restricts access to specific users/groups
Speeds up results from cache
Cache: when you open a website the first time it takes time; if you open it again it loads quickly, because the content was cached the first time. That is called a cache.
Reverse Proxy
Advantages:
Hides the server's IP address
Protects against DDoS attacks (Distributed Denial of Service), where millions of requests from attackers bring the server down
Speeds up access for specific users/groups based on location
Speeds up results from cache
Practical
Installation
apt install nginx : to install (any package/tool you need can be installed this way)
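A minimal sketch of nginx acting as a reverse proxy / load balancer, assuming nginx is installed as above; the backend IPs and the config file name are placeholders:

    # write a simple upstream + reverse-proxy config, then test and reload nginx
    sudo tee /etc/nginx/conf.d/app.conf > /dev/null <<'EOF'
    upstream app_servers {
        server 10.0.1.10:8080;   # Server1
        server 10.0.1.11:8080;   # Server2
    }
    server {
        listen 80;               # nginx default HTTP port
        location / {
            proxy_pass http://app_servers;   # forward requests to the backends (round robin)
        }
    }
    EOF
    sudo nginx -t && sudo systemctl reload nginx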
Working with S3 using the command line interface (CLI) - assignment
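A few basic S3 CLI commands to start the assignment with; the bucket name is an example and must be globally unique:

    aws s3 mb s3://ccits3bucketexample               # make a bucket
    aws s3 cp sample.txt s3://ccits3bucketexample/   # upload a file
    aws s3 ls s3://ccits3bucketexample/              # list objects
    aws s3 rm s3://ccits3bucketexample/sample.txt    # delete an object
    aws s3 rb s3://ccits3bucketexample --force       # remove the bucket and its contents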
S3 Versioning
Versioning in Amazon S3 is a means of keeping multiple variants of an object in the same bucket.
You can use the S3 versioning feature to preserve, retrieve, and restore every version of every object stored in your buckets.
With versioning you can recover more easily from both unintended user actions and application failures.
After versioning is enabled for a bucket, if Amazon S3 receives multiple write requests for the same object simultaneously, it stores all of those objects.
Practical
Step 1: Create one sample bucket with bucket versioning enabled ("ccits3bucketexample") and click Create bucket.
Upload one sample.txt file with the content below:
Subbu S3 Bucket Versioning 1
After changing the file and uploading it a second time, the screenshot below shows two versions of the file:
Subbu S3 Bucket Versioning 2
Delete the file; after deletion, a delete marker is added to the object.
To restore the file, select the delete-marker version and delete it; the file is automatically recovered.
Every operation you perform is tracked in the CloudTrail event history, and that record is available for 90 days.
There is no option to disable S3 versioning; you can only suspend it (click Save changes). Existing versions are still kept.
After suspending, new uploads no longer create versions; existing versions must be selected and deleted manually if there are space constraints.
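The same versioning workflow can be done from the CLI; the bucket name, key, and version ID below are placeholders:

    # enable versioning on the bucket
    aws s3api put-bucket-versioning --bucket ccits3bucketexample \
      --versioning-configuration Status=Enabled

    # upload the same key twice -> two versions are kept
    aws s3 cp sample.txt s3://ccits3bucketexample/sample.txt
    aws s3 cp sample.txt s3://ccits3bucketexample/sample.txt

    # list all versions and delete markers
    aws s3api list-object-versions --bucket ccits3bucketexample --prefix sample.txt

    # a plain delete only adds a delete marker
    aws s3 rm s3://ccits3bucketexample/sample.txt

    # restore: delete the delete marker by its VersionId (taken from list-object-versions)
    aws s3api delete-object --bucket ccits3bucketexample --key sample.txt \
      --version-id <delete-marker-version-id>

    # versioning cannot be disabled, only suspended
    aws s3api put-bucket-versioning --bucket ccits3bucketexample \
      --versioning-configuration Status=Suspended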
Storage Classes:
Amazon S3 offers a range of storage classes that you can choose from based on the data access, resiliency, and cost requirements of your workloads.
S3 Standard (default file storage class; data is readily available across 3 AZs)
User -> Bucket -> (AZ1, AZ2, AZ3)
Used for frequently accessed data
Files are stored in multiple AZs.
99.999999999% (11 nines) durability
Default storage class
Only storage charges apply
No minimum duration
No minimum object size
Fast access
Storage is chargeable; no retrieval/transaction charges
S3 Standard-Infrequent Access (S3 Standard-IA)
Used for infrequently accessed data
Files are stored in multiple AZs.
99.999999999% (11 nines) durability
Fast access, but cheaper than Standard
Storage and retrieval charges apply
Storage is charged with a minimum 30-day duration
Minimum billable object size: 128 KB
Best suited for long-lived data (30+ days)
S3 One Zone-Infrequent Access (S3 One Zone-IA)
Suitable for infrequently accessed data
Files are stored in a single AZ.
99.999999999% (11 nines) durability
Cheaper than Standard-IA
Storage and retrieval charges apply
Storage is charged with a minimum 30-day duration
Minimum billable object size: 128 KB
Best suited for a secondary backup copy
S3 Glacier Instant Retrieval
Used for rarely accessed data that still needs immediate retrieval
Files are stored in multiple AZs, with immediate (millisecond) retrieval.
99.999999999% (11 nines) durability
Cheaper than the frequent- and infrequent-access storage classes
Storage and retrieval charges apply
Storage is charged with a minimum 90-day duration
Minimum billable object size: 128 KB
Best suited for long-lived data (90+ days)
S3 Glacier Flexible Retrieval
Used for rarely accessed archive data
Files are stored in multiple AZs; retrieval takes minutes to hours
99.999999999% (11 nines) durability
Cheaper than Glacier Instant Retrieval
Storage and retrieval charges apply
Storage is charged with a minimum 90-day duration
Minimum billable object size: 40 KB
Best suited for long-lived data (90+ days)
S3 Glacier Deep Archive
Used for infrequently accessed, long-lived data
Files are stored in multiple AZs; retrieval takes hours
99.999999999% (11 nines) durability
Cheaper than the classes above
Storage and retrieval charges apply
Storage is charged with a minimum 180-day duration
Minimum billable object size: 40 KB
Best suited for long-lived data (180+ days)
S3 Intelligent-Tiering
Used for unknown or changing access patterns
Files are stored in multiple AZs
99.999999999% (11 nines) durability
Automatically moves files between tiers based on usage (evaluated over 30-day access windows)
Monitoring and storage charges apply (no retrieval charges)
S3 Express One Zone
Lower cost than Standard
Files are stored in a single, selected AZ.
99.999999999% (11 nines) durability
High-performance storage for your most frequently accessed data
Up to 10x faster than the Standard class and 50% lower cost
You can use S3 Express One Zone with services such as Amazon SageMaker Model Training, Amazon Athena, Amazon EMR, and the AWS Glue Data Catalog to accelerate your machine learning and analytics workloads.
S3 on Outposts (for private/on-premises cloud use, not general usage)
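A small sketch of choosing a storage class at upload time from the CLI; the bucket and file names are examples:

    # upload directly to a non-default storage class
    aws s3 cp backup.zip s3://ccits3bucketexample/backup.zip --storage-class STANDARD_IA
    aws s3 cp archive.zip s3://ccits3bucketexample/archive.zip --storage-class DEEP_ARCHIVE

    # check the storage class of an object (STANDARD objects may omit the field)
    aws s3api head-object --bucket ccits3bucketexample --key backup.zip --query StorageClass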
S3 Lifecycle Management
An Amazon S3 lifecycle rule configures predefined actions to perform on objects during their lifetime.
You can create a lifecycle rule to optimize your objects' storage costs throughout their lifetime.
You can define the scope of the lifecycle rule to all objects in your bucket, or to objects with a shared prefix, certain object tags, or a certain object size.
Set of rules:
Transition objects between storage classes, or delete (expire) the objects.
Transitions only go from top to bottom in the order below (when you set the rule), never from bottom to top:
S3 Standard
S3 Standard-IA
S3 Intelligent-Tiering
S3 One Zone-IA
S3 Glacier Instant Retrieval
S3 Glacier Flexible Retrieval (formerly S3 Glacier)
S3 Glacier Deep Archive
Practical: Lifecycle Management Rule
Step 1: Create one bucket, add one public folder, and upload some files to the public folder.
Step 2: Create a lifecycle rule for public objects only; only files larger than 5 KB are covered by the rule.
Lifecycle rule actions
Transition current versions of objects between storage classes: transitions can only be added in top-to-bottom order based on the time periods; moving from bottom to top is not possible.
Review transition and expiration actions
Current version actions
Day 0: Objects uploaded
Day 30: Objects move to Standard-IA
Day 60: Objects move to Intelligent-Tiering
Day 90: Objects move to One Zone-IA
Day 120: Objects move to Glacier Flexible Retrieval (formerly Glacier)
Day 210: Objects move to Glacier Deep Archive
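The same idea can be sketched as a lifecycle configuration applied from the CLI. The bucket name, rule ID, prefix, and 5 KB size threshold follow the example above; the transitions are a simplified subset of the schedule (Standard-IA, Glacier Flexible Retrieval, Deep Archive):

    cat > lifecycle.json <<'EOF'
    {
      "Rules": [
        {
          "ID": "public-folder-tiering",
          "Status": "Enabled",
          "Filter": {
            "And": {
              "Prefix": "public/",
              "ObjectSizeGreaterThan": 5120
            }
          },
          "Transitions": [
            { "Days": 30,  "StorageClass": "STANDARD_IA" },
            { "Days": 120, "StorageClass": "GLACIER" },
            { "Days": 210, "StorageClass": "DEEP_ARCHIVE" }
          ]
        }
      ]
    }
    EOF
    aws s3api put-bucket-lifecycle-configuration --bucket ccits3bucketexample \
      --lifecycle-configuration file://lifecycle.json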
S3 Transfer Acceleration (S3TA)
Amazon S3 Transfer Acceleration (S3TA) is a bucket-level feature that enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket.
Transfer Acceleration can speed up content transfers to and from Amazon S3 by as much as 50-500% for long-distance transfers of larger objects.
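A minimal sketch of enabling and using Transfer Acceleration from the CLI; the bucket and file names are examples:

    # enable transfer acceleration on the bucket
    aws s3api put-bucket-accelerate-configuration --bucket ccits3bucketexample \
      --accelerate-configuration Status=Enabled

    # upload through the accelerate endpoint instead of the regular S3 endpoint
    aws s3 cp bigfile.zip s3://ccits3bucketexample/ \
      --endpoint-url https://s3-accelerate.amazonaws.com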
Replication
SRR = Same-Region Replication; CRR = Cross-Region Replication
Replication is a process that enables automatic, asynchronous copying of objects across Amazon S3 buckets.
Buckets that are configured for object replication can be owned by the same AWS account or by different accounts.
You can replicate objects to a single destination bucket or to multiple destination buckets; a filter option is also available.
The destination buckets can be in different AWS Regions or within the same Region as the source bucket.
You can replicate existing and new objects to the destination. Replication starts immediately and runs asynchronously.
An IAM role with the required permissions is attached to the replication rule so that S3 can access the buckets, whether they are in the same Region, a different Region, or a different account.
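A rough sketch of configuring replication from the CLI; the account ID, role name, and bucket names are placeholders, and both buckets must already have versioning enabled:

    # replication configuration: copy all new objects to a destination bucket using an existing IAM role
    cat > replication.json <<'EOF'
    {
      "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
      "Rules": [
        {
          "ID": "replicate-all",
          "Status": "Enabled",
          "Priority": 1,
          "DeleteMarkerReplication": { "Status": "Disabled" },
          "Filter": { "Prefix": "" },
          "Destination": { "Bucket": "arn:aws:s3:::ccits3bucketexample-replica" }
        }
      ]
    }
    EOF
    aws s3api put-bucket-replication --bucket ccits3bucketexample \
      --replication-configuration file://replication.json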