Saturday, May 31, 2025

S3 Part 2

Class 36th AWS S3 Part 2 May 31st

  • S3 Versioning
  • Storage Classes in S3
  • Lifecycle Management in S3
  • S3 Transfer Acceleration (S3TA)
  • Working with S3 using the command line interface (CLI) (assignment)

  • S3 Versioning
    • Versioning in Amazon S3 is a means of keeping multiple variants of an object in the same bucket.
    • You can use the S3 versioning feature to preserve, retrieve, and restore every version of every object stored in your buckets.
    • With versioning you can recover more easily from both unintended user actions and application failures.
    • After versioning is enabled for a bucket, if Amazon S3 receives multiple write requests for the same object simultaneously, it stores all of those versions.
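    The same thing can be done from code. A minimal boto3 sketch (bucket name taken from the practical below):

    import boto3

    s3 = boto3.client("s3")

    # Enable versioning on the bucket
    s3.put_bucket_versioning(
        Bucket="ccits3bucketexample",
        VersioningConfiguration={"Status": "Enabled"},
    )

    # Confirm the versioning status ("Enabled" once the call above succeeds)
    print(s3.get_bucket_versioning(Bucket="ccits3bucketexample").get("Status"))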

    Practical 

    Step1: Create one sample bucket with versioning enabled, "ccits3bucketexample", and click Create bucket.

    Then upload one sample.txt file with the content below:

    Subbu S3 Bucket Versioning 1

    After changing the content and uploading a second time, the Show versions view lists two versions of the file:

    Subbu S3 Bucket Versioning 2

    Delete the file. After deletion, a delete marker is added on top of the file.

    To restore the file, just select the delete marker and delete it; the previous version automatically becomes current again.
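    A boto3 sketch of the same restore (bucket and key names from this practical): listing the versions shows the delete marker, and deleting that marker brings the object back.

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "ccits3bucketexample"
    KEY = "sample.txt"

    # List all versions and delete markers for the object
    resp = s3.list_object_versions(Bucket=BUCKET, Prefix=KEY)
    for v in resp.get("Versions", []):
        print("version:", v["VersionId"], "latest:", v["IsLatest"])
    for m in resp.get("DeleteMarkers", []):
        if m["IsLatest"]:
            # Deleting the latest delete marker "undeletes" the object
            s3.delete_object(Bucket=BUCKET, Key=KEY, VersionId=m["VersionId"])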

    Every operation you perform is tracked in the CloudTrail event history; the history is available for 90 days.

    There is no option to disable S3 versioning once it is enabled; you can only suspend it (click Save changes).
    After suspending, new uploads are no longer versioned; existing versions remain, and you must select and delete them manually if storage space is a concern.


    Storage Classes:
    Amazon S3 offers a range of storage classes that you can choose from based on the data access, resiliency, and cost requirements of your workloads.

    Frequently accessed data:
    S3 Standard
    S3 Express One Zone
    S3 Reduced Redundancy (deprecated, not used)
    S3 Intelligent-Tiering
    Infrequently accessed data:
    S3 Standard-Infrequent Access (S3 Standard-IA)
    S3 One Zone-Infrequent Access (S3 One Zone-IA)
    Archival data:
    S3 Glacier Instant Retrieval
    S3 Glacier Flexible Retrieval (formerly S3 Glacier)
    S3 Glacier Deep Archive
    On-premises:
    S3 on Outposts

    S3 Standard
    (Default storage class; data replicated across 3 or more AZs)

    User -> Bucket -> (AZ1, AZ2, AZ3)

    • Used for frequently accessed data
    • Files stored in multiple AZs
    • 99.999999999% (11 nines) durability
    • Default storage class
    • Only storage charges apply
    • No minimum storage duration
    • No minimum object size
    • Fast access
    Storage is chargeable; there are no retrieval charges.
    S3 Standard-Infrequent Access (S3 Standard-IA)

    • Used for infrequently accessed data
    • Files stored in multiple AZs
    • 99.999999999% (11 nines) durability
    • Fast access, but cheaper than Standard
    • Storage and retrieval charges apply
    • Storage charged with a minimum duration of 30 days
    • Minimum billable object size of 128 KB
    • Best suited for long-lived data (30 days or more)
    S3 One Zone-Infrequent Access (S3 One Zone-IA)
    • Suitable for infrequently accessed data
    • Files stored in a single AZ
    • 99.999999999% (11 nines) durability
    • Cheaper than Standard-IA
    • Storage and retrieval charges apply
    • Storage charged with a minimum duration of 30 days
    • Minimum billable object size of 128 KB
    • Best suited as a second backup copy
    S3 Glacier Instant Retrieval
    • Used for rarely accessed data that still needs immediate access
    • Files stored in multiple AZs with immediate (millisecond) retrieval
    • 99.999999999% (11 nines) durability
    • Cheaper than the frequent- and infrequent-access classes
    • Storage and retrieval charges apply
    • Storage charged with a minimum duration of 90 days
    • Minimum billable object size of 128 KB
    • Best suited for long-lived data (90 days or more)
    S3 Glacier Flexible Retrieval
    • Used for rarely accessed archive data
    • Files stored in multiple AZs; retrieval takes minutes to hours
    • 99.999999999% (11 nines) durability
    • Cheaper than Glacier Instant Retrieval
    • Storage and retrieval charges apply
    • Storage charged with a minimum duration of 90 days
    • Per-object overhead of 40 KB is charged
    • Best suited for long-lived data (90 days or more)
    S3 Glacier Deep Archive
    • Used for rarely accessed, long-lived data
    • Files stored in multiple AZs; retrieval in hours only
    • 99.999999999% (11 nines) durability
    • Cheapest of the classes above
    • Storage and retrieval charges apply
    • Storage charged with a minimum duration of 180 days
    • Per-object overhead of 40 KB is charged
    • Best suited for long-lived data (180 days or more)
    S3 Intelligent-Tiering
    • Used for unknown or changing access patterns
    • Files stored in multiple AZs
    • 99.999999999% (11 nines) durability
    • Automatically moves objects between tiers based on usage (after 30 days without access)
    • Storage plus a small per-object monitoring and automation charge apply
    S3 Express One Zone
    • Lower request cost than Standard
    • Files stored in a single, selected AZ
    • 99.999999999% (11 nines) durability
    • High-performance storage for your most frequently accessed data
    • Up to 10x faster access than the standard classes, with request costs about 50% lower
    • You can use S3 Express One Zone with services such as Amazon SageMaker Model Training, Amazon Athena, Amazon EMR, and AWS Glue Data Catalog to accelerate your machine learning and analytics workloads
    S3 on Outposts (for on-premises/private-cloud use, not general usage)
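    The storage class is chosen per object at upload time. A minimal boto3 sketch (bucket and key names are hypothetical):

    import boto3

    s3 = boto3.client("s3")

    # Upload an object directly into Standard-IA instead of the default Standard
    s3.put_object(
        Bucket="ccits3bucketexample",      # hypothetical bucket
        Key="reports/archive-2025.csv",    # hypothetical key
        Body=b"some,report,data",
        StorageClass="STANDARD_IA",        # e.g. ONEZONE_IA, GLACIER, DEEP_ARCHIVE, INTELLIGENT_TIERING
    )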
                                                   S3 Lifecycle Management

    • An Amazon S3 lifecycle rule configures predefined actions to perform on objects during their lifetime.
    • You can create a lifecycle rule to optimize your objects' storage costs throughout their lifetime.
    • You can define the scope of the lifecycle rule to all objects in your bucket, or to objects with a shared prefix, certain object tags, or a certain object size.

     A set of rules defines transitions between storage classes
                         -- and/or deletion (expiration) of objects
    Transitions only move top-to-bottom through the list below (when you set a rule, never bottom-to-top):
    S3 Standard
    S3 Standard-Infrequent Access (S3 Standard-IA)
    S3 Intelligent-Tiering
    S3 One Zone-Infrequent Access (S3 One Zone-IA)
    S3 Glacier Instant Retrieval
    S3 Glacier Flexible Retrieval (formerly S3 Glacier)
    S3 Glacier Deep Archive

    Practical: Lifecycle Management Rule
    Step1: Create a bucket, add a public folder, and upload some files to the public folder.


    Step2: Here we create a lifecycle rule for public objects only; only files larger than 5 KB fall under the rule.
    Lifecycle rule actions:
    Transitions of current object versions between storage classes can only be added in the top-to-bottom order above, based on object age; bottom-to-top is not possible.


       Review transition and expiration actions
    Current version actions:
    Day 0:   objects uploaded
    Day 30:  objects move to Standard-IA
    Day 60:  objects move to Intelligent-Tiering
    Day 90:  objects move to One Zone-IA
    Day 120: objects move to Glacier Flexible Retrieval (formerly Glacier)
    Day 210: objects move to Glacier Deep Archive
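    The same rule can be expressed through the API. A boto3 sketch of the schedule above (bucket name hypothetical; the public/ prefix and 5 KB size filter are from Step 2):

    import boto3

    s3 = boto3.client("s3")

    # Lifecycle rule: public/ objects larger than 5 KB move down the waterfall
    s3.put_bucket_lifecycle_configuration(
        Bucket="ccits3bucketexample",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "public-waterfall",
                    "Status": "Enabled",
                    "Filter": {
                        "And": {
                            "Prefix": "public/",
                            "ObjectSizeGreaterThan": 5 * 1024,  # 5 KB
                        }
                    },
                    "Transitions": [
                        {"Days": 30, "StorageClass": "STANDARD_IA"},
                        {"Days": 60, "StorageClass": "INTELLIGENT_TIERING"},
                        {"Days": 90, "StorageClass": "ONEZONE_IA"},
                        {"Days": 120, "StorageClass": "GLACIER"},       # Flexible Retrieval
                        {"Days": 210, "StorageClass": "DEEP_ARCHIVE"},
                    ],
                }
            ]
        },
    )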


    S3 Transfer Acceleration (S3TA)

    Amazon S3 Transfer Acceleration (S3TA) is a bucket-level feature that enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket.

    Transfer Acceleration can speed up content transfers to and from Amazon S3 by as much as 50-500% for long-distance transfers of larger objects.
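    A boto3 sketch of turning it on and using it (bucket and file names hypothetical); transfers only accelerate when the client uses the accelerate endpoint:

    import boto3
    from botocore.config import Config

    s3 = boto3.client("s3")

    # Enable Transfer Acceleration on the bucket
    s3.put_bucket_accelerate_configuration(
        Bucket="ccits3bucketexample",
        AccelerateConfiguration={"Status": "Enabled"},
    )

    # Route subsequent transfers through the accelerate endpoint
    s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
    s3_accel.upload_file("big-file.bin", "ccits3bucketexample", "big-file.bin")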


    Replication 

    SRR: Same-Region Replication              CRR: Cross-Region Replication

    • Replication is a process that enables automatic, asynchronous copying of objects across Amazon S3 buckets.
    • Buckets that are configured for object replication can be owned by the same AWS account or by different accounts.
    • You can replicate objects to a single destination bucket or to multiple destination buckets; a filter option is also available.
    • The destination buckets can be in different AWS Regions or in the same Region as the source bucket.
    • You can replicate existing and new objects to the destination. Replication starts immediately and runs asynchronously.
    • An IAM role with the required permissions is attached to the replication rule so it can access the buckets, whether they are in the same Region, a different Region, or a different account.
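    A boto3 sketch of a replication rule (bucket names and role ARN are hypothetical placeholders; versioning must already be enabled on both buckets):

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_replication(
        Bucket="source-bucket-name",
        ReplicationConfiguration={
            # IAM role that S3 assumes to read the source and write the destination
            "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
            "Rules": [
                {
                    "ID": "replicate-everything",
                    "Status": "Enabled",
                    "Priority": 1,
                    "Filter": {},  # empty filter = the whole bucket
                    "DeleteMarkerReplication": {"Status": "Disabled"},
                    "Destination": {"Bucket": "arn:aws:s3:::destination-bucket-name"},
                }
            ],
        },
    )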



    --Thanks 

    Wednesday, May 28, 2025

    IAM Part#3

    Class 33rd AWS IAM Roles May28th
    Topics 
    Working with access keys
    IAM Roles
    What is the AWS CLI?
    How to download and install the CLI
    IAM operations with the CLI
    Working with the CLI on a Linux machine
    Working with the CLI in AWS CloudShell

    We can access AWS services in multiple ways; so far we have used only the AWS Console. We can also connect using the CLI, CloudShell, or a programming language (Python/Java, etc.).
     CLI = Command Line Interface
     
       CLI: Working with an access key
    Step1: For the ccitdev1 user, I have disabled console access and will connect with an access key instead.

    > ccitdev1 > Security credentials > Access keys > click Create access key

    Give any name, create the access key.
    Download the key.

    Practical: Using the access key and secret key in a Python program, upload a file to an S3 bucket.

    I already have the S3 bucket ccitnewbucketcreated; on the Permissions tab, set ACLs to enabled and click Save,

    and uncheck Block public access, then click Save changes.


    Python script:
    Step1:
    Here boto3 is the AWS SDK framework for Python; the AWS service kit is available from boto3. We just
    import boto3 (so any AWS service can be accessed) and create a client using your aws_access_key_id and aws_secret_access_key from the access key you downloaded earlier for the ccitdev1 user; copy and paste the key values.

    from flask import Flask, render_template, request, redirect, flash, url_for
    import boto3
    from botocore.exceptions import NoCredentialsError, PartialCredentialsError, ClientError

    app = Flask(__name__)
    app.secret_key = 'your_secret_key'

    # AWS S3 configuration
    S3_BUCKET = "ccitnewbucketcreated"
    S3_REGION = "eu-west-1"

    # Initialize the S3 client with the downloaded access key pair
    s3_client = boto3.client(
        's3',
        aws_access_key_id='your_access_key_id',
        aws_secret_access_key='your_secret_access_key',
        region_name=S3_REGION
    )

    # Check if the bucket exists; if not, create it
    def create_bucket_if_not_exists():
        try:
            # head_bucket succeeds only if the bucket exists and is accessible
            s3_client.head_bucket(Bucket=S3_BUCKET)
        except ClientError as e:
            # A missing bucket surfaces as a 404/NoSuchBucket error
            if e.response['Error']['Code'] in ('404', 'NoSuchBucket'):
                try:
                    s3_client.create_bucket(
                        Bucket=S3_BUCKET,
                        CreateBucketConfiguration={'LocationConstraint': S3_REGION}
                    )
                    flash(f"Bucket '{S3_BUCKET}' created successfully.")
                except Exception as create_error:
                    flash(f"Error creating bucket: {create_error}")
            else:
                flash(f"Error accessing bucket: {e}")

    # Home and file upload route
    @app.route('/', methods=['GET', 'POST'])
    def upload_file():
        # Create the bucket if it doesn't exist
        create_bucket_if_not_exists()

        if request.method == 'POST':
            if 'file' not in request.files:
                flash('No file part in the request')
                return redirect(request.url)

            file = request.files['file']

            if file.filename == '':
                flash('No selected file')
                return redirect(request.url)

            try:
                # Stream the uploaded file straight to S3
                s3_client.upload_fileobj(file, S3_BUCKET, file.filename)
                flash(f"File '{file.filename}' uploaded successfully to S3.")
                return redirect(url_for('upload_file'))
            except NoCredentialsError:
                flash("Credentials not available.")
            except PartialCredentialsError:
                flash("Incomplete credentials provided.")
            except Exception as e:
                flash(f"Error uploading file: {e}")

        # List files in the bucket to display them on the page
        files = []
        try:
            response = s3_client.list_objects_v2(Bucket=S3_BUCKET)
            if 'Contents' in response:
                files = [obj['Key'] for obj in response['Contents']]
        except Exception as e:
            flash(f"Error retrieving files: {e}")

        return render_template('upload.html', files=files)

    # Route to delete a file from S3
    @app.route('/delete/<filename>', methods=['POST'])
    def delete_file(filename):
        try:
            s3_client.delete_object(Bucket=S3_BUCKET, Key=filename)
            flash(f"File '{filename}' deleted successfully from S3.")
        except Exception as e:
            flash(f"Error deleting file: {e}")
        return redirect(url_for('upload_file'))

    if __name__ == '__main__':
        app.run(debug=True)
    Step2: Your local computer/laptop requires Python and PyCharm.
    After making the changes, run the code from the command prompt.
    To install boto3:
    pip install boto3
    python.exe -m pip install --upgrade pip

    Step3:
    cmd>python app.py
     * Serving Flask app 'app'
     * Debug mode: on
    WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
     * Running on http://127.0.0.1:5000
    Press CTRL+C to quit
     * Restarting with stat
     * Debugger is active!
     * Debugger PIN: 629-481-715

    Upload any file; it uploads successfully.


    The object uploaded successfully from the Python program, using the access key.

                                              Working with  CLI
    Step1: Download the installer for Windows and install it. If you are using Amazon Linux, the CLI exists by default;
    on Windows it must be installed manually:
    https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html

    Just to check after installing the CLI: if the usage text below appears, it installed successfully.
    PS C:\Users\Administrator> aws cli

    usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
    To see help text, you can run:
    Step2: Your credentials are stored in the .aws directory under your user profile:

    C:\Users\Administrator\.aws
    User : ccitdev1    
    PS C:\Users\Administrator> aws configure
    AWS Access Key ID [None]: AKIATFBMO7H4N2HZHWK4
    AWS Secret Access Key [None]: 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
    Default region name [None]: eu-west-1
    Default output format [None]:

    Step3: Copy a file to the S3 bucket; it uploads successfully.
    PS C:\Users\Administrator> aws s3 cp "C:\Users\Administrator\Desktop\Pht\5.png" "s3://ccitnewbucketcreated"
    upload: Desktop\Pht\5.png to s3://ccitnewbucketcreated/5.png


    Download a file from the S3 bucket:
    PS C:\Users\Administrator> aws s3 cp  "s3://ccitnewbucketcreated/5.png" "C:\Users\Administrator\Desktop\Pht"
    download: s3://ccitnewbucketcreated/5.png to Desktop\Pht\5.png


                                                                   IAM Role
    AWS Role (simply put: we attach a role to a service, not to an IAM user)

    1.       A role is a set of permissions that grants access to actions and resources in AWS. These permissions are attached to the role, not to an IAM user or a group.

    2.       An IAM user can use a role in the same AWS account or a different account.

    3.       A role is similar to an IAM user: a role is also an AWS identity with permission policies that determine what the identity can and cannot do in AWS.

    4.       A role is not uniquely associated with a single person; it can be used by anyone who needs it.

    5.       You can use roles to delegate access to users, applications, or services that generally do not have access to your AWS resources.

    For example: one service communicating with another service, such as moving S3 data to RDS using Lambda.
    For services to communicate we need permissions, and those permissions come from a role.

    S3-->lambda -->Rds 
    Step1: Create a role. For the trusted entity, choose:
    AWS service (EC2, S3, Lambda, etc.)
    AWS account (we can also give permission to another account)

    Select the EC2 service and click Next.

    Grant S3 access permission to the EC2 service.

    Click Next.
    Give any name for the role, e.g. Ec2-S3-Role, and click Create role. See the JSON trust policy in the sketch below to understand how it gives the EC2 service permission to assume the role.
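    The same role can also be created from code. A boto3 sketch (role name from this step; the trust policy is the standard one allowing EC2 to assume a role, and AmazonS3FullAccess is the AWS managed policy):

    import boto3
    import json

    iam = boto3.client("iam")

    # Trust policy: lets the EC2 service assume this role
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"Service": "ec2.amazonaws.com"},
                "Action": "sts:AssumeRole",
            }
        ],
    }

    iam.create_role(
        RoleName="Ec2-S3-Role",
        AssumeRolePolicyDocument=json.dumps(trust_policy),
    )

    # Grant the role S3 access via the AWS managed policy
    iam.attach_role_policy(
        RoleName="Ec2-S3-Role",
        PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess",
    )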


    Step2: Create a Windows instance and generate the password: Security > Get Windows password; upload the .pem file for the key pair you selected while creating the instance.
    Instance ID: i-0cdeaff2968b40266
    Private IP address: 10.0.2.107
    Username: Administrator
    Password: YDQE3ooRAdDac?hFShM.qZmC5FmerGti

    After launch, connect to Windows remotely using the public IP and mstsc, download the Windows CLI installer on the remote desktop, and install it.


    Step3: Now we need to give role-based authentication to our instance.
    Select the instance > Security > Modify IAM role, assign the role you created, and click Update IAM role.
    Step4:
    From the remote machine, the file uploads successfully using IAM role-based access:
    C:\Users\Administrator\Desktop\images>aws s3 cp "./Image1.png" "s3://ccitnewbucketcreated/"
    upload: .\Image1.png to s3://ccitnewbucketcreated/Image1.png
      
    Step5:

    Step6: In the code above, those two credential lines (aws_access_key_id and aws_secret_access_key) are no longer required; with role-based access it works without an access key and secret key.
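    A minimal sketch of the change, assuming the Ec2-S3-Role is attached to the instance: boto3 picks up the role's temporary credentials automatically.

    import boto3

    # No aws_access_key_id / aws_secret_access_key needed; the instance role supplies them
    s3_client = boto3.client("s3", region_name="eu-west-1")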

    On Amazon Linux the CLI is built in; you just configure the keys (or attach a role).


    Using the CLI to create a user (the existing user first needs the IAMFullAccess permission attached; see the sketch at the end of this section).

    Step1:

    Step2: I executed this in the command prompt:

    PS C:\Users\Administrator> aws iam create-user --user-name ccitvdev2
    {
        "User": {
            "Path": "/",
            "UserName": "ccitvdev2",
            "UserId": "AIDATFBMO7H4DAX5TNEKR",
            "Arn": "arn:aws:iam::216989104632:user/ccitvdev2",
            "CreateDate": "2025-06-03T13:09:36+00:00"
        }
    }
    Step3:
    Step4: Delete the user:
    PS C:\Users\Administrator> aws iam delete-user --user-name ccitvdev2
    PS C:\Users\Administrator>
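    For reference, a boto3 sketch of the prerequisite mentioned above: attaching the AWS managed IAMFullAccess policy to the existing ccitdev1 user so it is allowed to create and delete users.

    import boto3

    iam = boto3.client("iam")

    # Attach the AWS managed IAMFullAccess policy to the existing user
    iam.attach_user_policy(
        UserName="ccitdev1",
        PolicyArn="arn:aws:iam::aws:policy/IAMFullAccess",
    )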


    Working with CLI in AWS Cloud shell.
    AWS CloudShell is a browser-based shell provided by Amazon Web Services (AWS) that allows you to manage your AWS resources directly from the AWS Management Console, without needing to install or configure anything locally.

    Step1:

    CloudShell is a built-in service that works just like a Linux CLI, without needing an EC2 Linux instance.
    It has git, Python, and Java preinstalled, and 1 GB of persistent storage is also available.


    The user needs the appropriate permissions.
    ~ $ git --version
    git version 2.47.1
    ~ $ python --version
    Python 3.9.21
    ~ $ java --version
    openjdk 21.0.7 2025-04-15 LTS
    OpenJDK Runtime Environment Corretto-21.0.7.6.1 (build 21.0.7+6-LTS)
    OpenJDK 64-Bit Server VM Corretto-21.0.7.6.1 (build 21.0.7+6-LTS, mixed mode, sharing)
    ~ $ aws s3 ls
    2025-05-31 03:58:25 ccitbuckets3bucket
    2025-06-01 16:39:14 ccitdev1june
    2025-06-01 14:43:56 ccitnewbucketcreated
    ~ $ aws iam create-user --user-name ccitdev3
    {
        "User": {
            "Path": "/",
            "UserName": "ccitdev3",
            "UserId": "AIDATFBMO7H4NYV4JMAIR",
            "Arn": "arn:aws:iam::216989104632:user/ccitdev3",
            "CreateDate": "2025-06-03T13:30:02+00:00"
        }
    }



    If you want to test any Python code, you can run simple code here.
    CloudShell itself is free tier; resources you create from CloudShell are chargeable as usual.
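    For example, a quick test script (assuming boto3 is available in CloudShell's Python environment, which it normally is; credentials come from your console session):

    import boto3

    # List your buckets, the same as "aws s3 ls"
    for bucket in boto3.client("s3").list_buckets()["Buckets"]:
        print(bucket["Name"])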

    A maximum of 5,000 IAM users can be created in one AWS account.

    --Thanks








    Tuesday, May 27, 2025

    IAM Part#2

    Class 32nd AWS IAM Policies May27th
    Topics
    IAM policies and types
    What is an ARN?
    IAM groups
    Inline policies
    Permission boundaries

    What is an ARN?
    ARN stands for Amazon Resource Name.
    An ARN uniquely identifies an AWS resource.
    Every AWS resource has a unique ARN.
    We use ARNs for different use cases like IAM role creation, policy creation, etc.

    IAM user ARN: ccitdeveloper is the user, 216989104632 is the 12-digit AWS account ID, iam is the service, aws is the partition:
    arn:aws:iam::216989104632:user/ccitdeveloper
    S3 bucket ARN: ccitbuckets3bucket is the bucket name; the run of colons (:::) is because S3 bucket names are global, so the region and account fields are empty:
    arn:aws:s3:::ccitbuckets3bucket
    General format: arn:partition:service:region:account-id:resource-type/resource-id

    Partition: aws (standard AWS Regions), aws-cn (China Regions), aws-us-gov (AWS GovCloud (US) Regions)
    Region: region-specific services (S3, EC2, etc.) use a region such as ap-south-1; global services leave it empty.

    What is a policy in IAM?
    -> In simple words, a policy is nothing but a permission to use a service in AWS.
    -> An AWS policy defines the permissions of an identity (users, groups, and roles) or resource within the AWS account.
    -> An AWS IAM policy regulates access to AWS resources to help ensure that only authorized users have access to specific digital assets.
    -> Most policies are written and stored in AWS as JSON documents. When you attach a policy to an IAM entity, such as a user, group, or role, it grants permissions to that entity.
    Types of Policies
    We have multiple types of policies available in AWS IAM for different use cases:
    --> Identity-based policies
    --> Resource-based policies
    --> Permission boundaries
    --> Organizations SCPs (service control policies)
    --> Access control lists (ACLs)
    --> Session policies

    --> Identity-based policies are JSON permission policy documents that control what actions an identity (users, groups of users, and roles) can perform, on which resources, and under what conditions.
    Identity-based policies are categorized as:
    Managed policies: policies that are created and managed by AWS.
    Customer managed policies: policies you create yourself; they provide more precise control than AWS managed policies.
    Inline policies: policies that you add directly to a single user, group, or role. Inline policies maintain a strict one-to-one relationship between a policy and an identity; they are deleted when you delete the identity.
     

    --> Resource-based policies (attached to resources such as EC2, S3, etc.)

    --> Permission boundaries (we can put a limit on an IAM user): AWS IAM permission boundaries are an advanced feature in AWS Identity and Access Management (IAM) that allows administrators to define the maximum level of permissions an IAM entity (like a user or role) can have. Permission boundaries limit the effective permissions of the entity, even if the entity is granted more extensive permissions via policies.

    --> Organizations SCPs (service control policies)

    We assign policies to different teams based on their responsibilities.
    For example: administrative team, database team, development team, billing team, security team, and so on.


    We can also write our own policy. For example, if you want to give a developer some permissions on an S3 bucket, we create our own policy and assign it to the user; these are called customer managed policies.

    Practical: S3 Service Related Policy
    Step1: Create a policy, give full access for the S3 resource, and then click Next.

    This creates a customer managed policy.

    JSON: in the Statement block, Resource "*" means the allowed actions apply to all resources:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": "s3:*",
                "Resource": "*"
            }
        ]
    }

    Step2: We can assign this policy to one user, ccitdeveloper: click Next, Add permissions.
    Now the user is able to write and read files in the bucket,
    and the ccitdeveloper user is able to create a bucket and upload files.



    First you need to grant the list permission, s3:ListBucket (up to 1,000 files are shown; beyond that the listing is paginated), and then on top of that you can grant the PutObject and GetObject permissions.

    A simple JSON script is below:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": [
                    "s3:PutObject",
                    "s3:GetObject",
                    "s3:ListBucket"
                ],
                "Resource": "*"
            }
        ]
    }

    Step3: As you see below, all buckets are listed. Can you give the user access to a specific bucket resource only?
    All buckets still show in the list; you can copy the bucket name and target the policy at that specific bucket by editing the policy.

    Click Add ARN, click Next, and save the changes.


    Here we cannot hide the other buckets from the listing; we can only deny the object permissions:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": [
                    "s3:ListAllMyBuckets",
                    "s3:ListBucket"
                ],
                "Resource": "*"
            },
            {
                "Sid": "VisualEditor1",
                "Effect": "Deny",
                "Action": [
                    "s3:PutObject",
                    "s3:GetObject",
                    "s3:ListBucket"
                ],
                "Resource": "arn:aws:s3:::ccitnewbucketcreated"
            }
        ]
    }

    User Group: a group of users; give it all the required permissions or policies.
    Create the ccitdev1, ccitdev2, ccitdev3 users, click Create user group,
    then search for the policy and attach it to the group.

    Give any name to the user group and click Save.


    Now you see ccitdev1, ccitdev2, and ccitdev3 all have these two permissions. If you want to remove one of the policies for all users, just edit the user group and remove the policy; the change applies to all the users automatically.
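    The same group setup as a boto3 sketch (group name hypothetical; the user names and the S3 policy are from these notes):

    import boto3

    iam = boto3.client("iam")

    # Create the group and attach the shared policy once
    iam.create_group(GroupName="ccit-dev-group")
    iam.attach_group_policy(
        GroupName="ccit-dev-group",
        PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess",
    )

    # Every member inherits the group's policies
    for user in ("ccitdev1", "ccitdev2", "ccitdev3"):
        iam.add_user_to_group(GroupName="ccit-dev-group", UserName=user)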


    Permission Boundaries: even if a user has permissions through a user group, you can limit that particular user's permissions with a permission boundary you set on the user.
    See below: ccitdev1 has S3 full access, and I have applied a boundary limiting it to read-only access; click Save.
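    A boto3 sketch of setting that boundary (user name from these notes; AmazonS3ReadOnlyAccess is the AWS managed read-only policy):

    import boto3

    iam = boto3.client("iam")

    # Cap ccitdev1's effective permissions at S3 read-only, regardless of group policies
    iam.put_user_permissions_boundary(
        UserName="ccitdev1",
        PermissionsBoundary="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
    )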



    Inline Policy: a special one-to-one policy for a single user; it is separate from the user group.

    Create an inline policy and attach it to the user.

    See below: one more newly created policy, ccitdev1_inline, gives a special EC2 permission
    to that particular user; that is called an inline policy.
    See here: the EC2 special permission for the ccitdev1 user.
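    A boto3 sketch of creating such an inline policy (policy and user names from these notes; the EC2 action shown is an assumed example):

    import boto3
    import json

    iam = boto3.client("iam")

    # Inline policy: stored directly on the user, one-to-one
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": "ec2:*", "Resource": "*"}
        ],
    }
    iam.put_user_policy(
        UserName="ccitdev1",
        PolicyName="ccitdev1_inline",
        PolicyDocument=json.dumps(policy),
    )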


    --Thanks


     






    Monday, May 26, 2025

    AWS Account Creation

    Class 31st AWS Account Creation May26th

    www.aws.amazon.com  or www.aws.com
    >Create Account 
    Root User
    When you first create an AWS account, you begin with one sign-in identity that has complete access to all
    AWS services and resources in the account. This identity is called the AWS account root user and is accessed by signing in with the email address and password that you used to create the account.
    Important: We strongly recommend that you don't use the root user for your everyday tasks. Safeguard
    your root user credentials and use them to perform the tasks that only the root user can perform.
    IAM user
    An IAM user is an identity within your AWS account that has specific permissions for a single person or application.

    Step1:
    AWS Account creation steps 
    1.Email verification
    2.Contact information 
    3.Billing information
    4.Identity confirmation
    5.Support plan selection 

    1.Email verification:
    A verification code is sent to your email; enter the verification code and then click Verify.

    Give the root user password and click Continue.
    2.Contact information: give the details and click Continue.

    3.Billing information: debit card/credit card; the international payment mode should be enabled.
    An initial 2 will be debited from your account.

    4.Identity confirmation: PAN or Aadhaar, etc., and a mobile number.

    5.Support plan: choose Basic Support (free).
    The AWS account is created successfully.

    Step 1: Sign in with the root user and set up MFA (multi-factor authentication).
    AWS has roughly 240 services.

    For daily usage don't use the root user; we need to use an IAM user.

    Create IAM User [Identity and Access Management]
    Step 2: Select the IAM service > click Users (the most important main component is the user)
    > click Create user.
    An IAM user has limited permissions.
    Usually for developers we give these permissions: (RDS, DynamoDB, Lambda, EC2)
                       Admin: (IAM, CloudWatch)

    For Java/Python developers we give the secret key and access key.
    Step3: Click Next and attach the permissions; permissions are nothing but policies.
    Whoever the user is, attach policies to the user based on their requirements and the services they use.

    If you want to give S3-related access, attach the policy to the user; then the user has full access to the S3 service (S3 means Simple Storage Service).

    Log in with the IAM user and password; every account has a 12-digit ID.
    Trying to use the IAM service gives an Access denied error (this user has no IAM permissions).


    The user has S3 full access; since one bucket was created previously, the user is able to see the existing bucket's information.


    AWS Health Dashboard

    Open and recent issues: shows any recent or known issues.
    Scheduled changes: changes the AWS team has scheduled and announced.
    Event log: monitors region-specific health events.

    Service-wise health checks.


    --Thanks