AWS Simple Storage Service (S3), RRS, Glacier

Amazon Web Services (AWS) provides low-cost data storage with high durability and availability. AWS offers storage choices for backup, archiving, and disaster recovery use cases and provides block, file, and object storage.

Block, File, and Object Storage

  • Block Storage (SAN): Block storage systems host databases, support random read/write operations, and hold the system files of running virtual machines.
  • File Storage (NAS): File-level storage is typically deployed in Network Attached Storage (NAS) systems.
  • Object Storage: Object storage works very well for unstructured data sets where data is mostly read rather than written.

S3

S3 (Simple storage service) provides developers and IT teams with highly scalable, durable, secure object storage. Amazon S3 is easy to use, with a simple web service interface to store and retrieve any amount of data from anywhere on the web.

  • Amazon S3 allows you to pay only for the storage you actually use.
  • Amazon S3 is one of the first services introduced by AWS.
  • Amazon S3 offers configurable lifecycle policies
  • Amazon S3 provides a rich set of permissions, access controls, and encryption options.
  • Amazon S3 object contains both data and metadata.
  • Objects reside in containers called buckets, and each object is identified by a unique user-specified key (filename). Buckets are a simple flat folder with no file system hierarchy.
  • You can’t “mount” a bucket, “open” an object, install an operating system on Amazon S3, or run a database on it.

S3 Bucket

  • A bucket is a container (web folder) for objects (files) stored in Amazon S3.
  • Every Amazon S3 object is contained in a bucket.
  • Buckets form the top-level namespace for Amazon S3
  • Bucket names are global. This means that your bucket names must be unique across all AWS accounts, much like Domain Name System (DNS) domain names, not just within your own account.
  • Bucket names can contain up to 63 lowercase letters, numbers, hyphens, and periods.
  • You can create and use multiple buckets.
  • You can have up to 100 buckets per account by default.

AWS Regions

  • An Amazon S3 bucket is created in a specific region that you choose.
  • You can create buckets close to a particular set of end users or customers to minimize latency, in a particular region to satisfy data locality and sovereignty concerns, or far away from your primary facilities to satisfy disaster recovery and compliance needs.

Objects

  • Objects are the entities or files stored in Amazon S3 buckets.
  • An object can store virtually any kind of data in any format.
  • Objects can range in size from 0 bytes up to 5TB.
  • A single bucket can store an unlimited number of objects.
  • Each object consists of data (the file itself) and metadata (data about the file).
  • There are two types of metadata:
    • System metadata: created and used by Amazon S3 itself; it includes things such as:
      • date last modified,
      • object size,
      • MD5 digest,
      • and HTTP Content-Type.
    • User metadata: optional, and it can only be specified at the time an object is created. You can use custom metadata to tag your data with attributes that are meaningful to you.

Keys

  • Every object stored in an S3 bucket is identified by a unique identifier called a key
  • A key can be up to 1024 bytes of Unicode UTF-8 characters, including embedded slashes, backslashes, dots, and dashes.
  • Keys must be unique within a single bucket, but different buckets can contain objects with the same key.

Object URL

  • Every Amazon S3 object can be addressed by a unique URL.
  • Example: http://mybucket.s3.amazonaws.com/jack.doc
    • mybucket is the S3 bucket name, and jack.doc is the key or filename.
  • Example: http://mybucket.s3.amazonaws.com/fee/fi/fo/fum/jack.doc
    • The bucket name is still mybucket, but now the key or filename is the string fee/fi/fo/fum/jack.doc. A key may contain delimiter characters like slashes or backslashes to help you name and logically organize your Amazon S3 objects, but to Amazon S3 it is simply a long key name in a flat namespace. There is no actual file and folder hierarchy.
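
Putting the two examples together, here is a small sketch of how a virtual-hosted-style object URL is composed from the bucket name and key (the helper name `object_url` is my own):

```python
from urllib.parse import quote

def object_url(bucket, key, region=None):
    """Build a virtual-hosted-style URL for an S3 object.

    The key may contain slashes, but to S3 it is still one flat key name.
    """
    host = (f"{bucket}.s3.amazonaws.com" if region is None
            else f"{bucket}.s3-{region}.amazonaws.com")
    # Percent-encode the key but keep slashes, which act only as visual delimiters.
    return f"https://{host}/{quote(key, safe='/')}"

print(object_url("mybucket", "jack.doc"))
# → https://mybucket.s3.amazonaws.com/jack.doc
print(object_url("mybucket", "fee/fi/fo/fum/jack.doc"))
# → https://mybucket.s3.amazonaws.com/fee/fi/fo/fum/jack.doc
```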

S3 Operations

  • Create/delete a bucket
  • Write an object
  • Read an object
  • Delete an object
  • List keys in a bucket

REST Interface

  • The native interface for Amazon S3 is a REST (Representational State Transfer) API.
  • You use standard HTTP or HTTPS requests to create and delete buckets, list keys, and read and write objects.
  • Always use HTTPS for Amazon S3 API requests to ensure that your requests and data are secure.

Durability and Availability

  • Durability: Durability addresses the question, “Will my data still be there in the future?”
  • Availability: Availability addresses the question, “Can I access my data right now?”
  • Amazon S3 standard storage is designed for 99.999999999% durability and 99.99% availability of objects over a given year.

Even though Amazon S3 storage offers very high durability at the infrastructure level, it is still a best practice to protect against user-level accidental deletion or overwriting of data by using additional features such as versioning, cross-region replication, and MFA Delete.

Data Consistency

  • Amazon S3 is eventually consistent for some operations: because your data is automatically replicated across multiple servers and locations within a region, changes may take some time to propagate to all locations.
    • Update (overwrite PUT) condition: if you PUT new data to an existing key, a subsequent GET might return stale data until the change propagates (eventual consistency).
    • Insert (new object PUT) condition: Amazon S3 provides read-after-write consistency, so a newly written object can be read immediately.
    • Delete condition: if you DELETE an object, a subsequent GET for that object might still read the deleted object for a short time (eventual consistency).

Access Control

  • Using an Amazon S3 bucket policy,
    • you can specify who can access the bucket,
    • from where (by Classless Inter-Domain Routing [CIDR] block or IP address),
    • and during what time of day.
  • IAM policies may be associated directly with IAM principals to grant access to an Amazon S3 bucket, just as they can grant access to any other AWS service and resource.

Prefixes and Delimiters in Key name

  • You would use a slash (/) or backslash (\) as a delimiter, and then use key names with embedded delimiters to emulate a file-and-folder hierarchy within the flat object-key namespace of a bucket.

For example, you might want to store a series of server logs by server name (such as server42), but organized by year and month, like so:

  • logs/2016/January/server42.log
  • logs/2016/February/server42.log
  • logs/2016/March/server42.log

Use delimiters and object prefixes to hierarchically organize the objects in your Amazon S3 buckets, but always remember that Amazon S3 is not really a file system.
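
A minimal sketch of how a prefix-and-delimiter listing can be emulated over a flat key namespace (the helper `list_common_prefixes` is my own, mirroring what S3's object-listing API does with its prefix and delimiter parameters):

```python
def list_common_prefixes(keys, prefix="", delimiter="/"):
    """Group flat keys into 'common prefixes' (pseudo-folders) and objects.

    Keys that share the next delimiter-terminated segment after `prefix`
    collapse into one common prefix, like a folder in a listing.
    """
    prefixes, objects = set(), []
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            prefixes.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            objects.append(key)
    return sorted(prefixes), objects

keys = [
    "logs/2016/January/server42.log",
    "logs/2016/February/server42.log",
    "logs/2016/March/server42.log",
]
print(list_common_prefixes(keys, prefix="logs/2016/"))
# → (['logs/2016/February/', 'logs/2016/January/', 'logs/2016/March/'], [])
```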

Encryption

  • There are two types of encryption:
    • In flight: you can use the Amazon S3 Secure Sockets Layer (SSL) API endpoints. This ensures that all data sent to and from Amazon S3 is encrypted while in transit using the HTTPS protocol.
    • At rest:
      • you can use several variations of Server-Side Encryption (SSE).
      • Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it.
      • All Server-Side Encryption (SSE) performed by Amazon S3 and AWS Key Management Service (Amazon KMS) uses the 256-bit Advanced Encryption Standard (AES).
      • Types of Encryption
        • SSE-S3 (AWS-Managed Keys)
        • SSE-KMS (AWS KMS Keys)
        • SSE-C (Customer-Provided Keys)
        • Client-Side Encryption

For maximum simplicity and ease of use, use server-side encryption with AWS-managed keys (SSE-S3 or SSE-KMS).
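
One common way to enforce server-side encryption is a bucket policy that denies unencrypted PUTs. Below is a sketch of such a policy expressed as a Python dict; the bucket name `mybucket` and the statement Sid are placeholders:

```python
import json

# Deny any s3:PutObject request that does not ask for SSE-S3 (AES256).
require_sse_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedUploads",        # placeholder Sid
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::mybucket/*",  # placeholder bucket
        "Condition": {
            "StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}
        },
    }],
}

print(json.dumps(require_sse_policy, indent=2))
```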

S3 Versioning

  • Amazon S3 versioning helps protect your data against accidental or malicious deletion by keeping multiple versions of each object in the bucket, each identified by a unique version ID.
  • If a user makes an accidental change or even maliciously deletes an object in your S3 bucket, you can restore the object to its original state simply by referencing the version ID.
  • Once enabled, versioning cannot be removed from a bucket; it can only be suspended.

S3 MFA Delete

  • MFA Delete adds another layer of data protection on top of bucket versioning.
  • MFA Delete requires an authentication code (a temporary, one-time password) generated by a hardware or virtual Multi-Factor Authentication (MFA) device.

Pre-Signed URLs

All Amazon S3 objects by default are private, meaning that only the owner has access. However, the object owner can optionally share objects with others by creating a pre-signed URL, using their own security credentials to grant time-limited permission to download the objects.

Multipart Upload

  • Multipart upload is a three-step process: initiation, uploading the parts, and completion (or abort).
  • Parts can be uploaded independently in arbitrary order, with retransmission if needed. After all of the parts are uploaded, Amazon S3 assembles the parts in order to create an object.
  • You should use multipart upload for objects larger than 100 MB, and you must use multipart upload for objects larger than 5 GB.
  • When using the low-level APIs, you must break the file to be uploaded into parts and keep track of the parts.
  • When using the high-level APIs and the high-level Amazon S3 commands in the AWS CLI (aws s3 cp, aws s3 mv, and aws s3 sync), multipart upload is automatically performed for large objects.
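
To make the three-step process concrete, here is a sketch of the bookkeeping the low-level approach requires, computing the byte range of each part (`plan_parts` is a hypothetical helper; a real upload would also track the ETag returned for each uploaded part and pass the list to the completion call):

```python
def plan_parts(object_size, part_size=100 * 1024 * 1024):
    """Split an object into (start, end) byte ranges for multipart upload.

    S3 allows 1-10,000 parts, and every part except the last must be at
    least 5 MB; this sketch only computes the ranges.
    """
    parts = []
    start = 0
    while start < object_size:
        end = min(start + part_size, object_size) - 1
        parts.append((start, end))
        start = end + 1
    return parts

# A 250 MB object with 100 MB parts splits into 3 parts.
ranges = plan_parts(250 * 1024 * 1024)
print(len(ranges))  # → 3
```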

Range GETs

  • It is possible to download (GET) only a portion of an object in both Amazon S3 and Amazon Glacier by using something called a Range GET.
  • Using the Range HTTP header in the GET request or equivalent parameters in one of the SDK wrapper libraries, you specify a range of bytes of the object.
  • This can be useful in dealing with large objects when you have poor connectivity or to download only a known portion of a large Amazon Glacier backup.
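
A Range GET is expressed through the standard HTTP Range header; the tiny helper below (my own) builds the header value for a byte range:

```python
def range_header(start, end=None):
    """Build an HTTP Range header value for a partial S3/Glacier GET."""
    return f"bytes={start}-{'' if end is None else end}"

print(range_header(0, 1048575))  # first mebibyte → bytes=0-1048575
print(range_header(1048576))     # everything after it → bytes=1048576-
```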

Logging

  • Logging is off by default, but it can easily be enabled.
  • When you enable logging for a bucket (the source bucket), you must choose where the logs will be stored (the target bucket). You can store access logs in the same bucket or in a different bucket.
  • It is a best practice to specify a prefix, such as logs/ or yourbucketname/logs/.
  • Logs include information such as:
    • Requestor account and IP address
    • Bucket name
    • Request time
    • Action (GET, PUT, LIST, and so forth)
    • Response status or error code

S3 Essentials

  1. S3 is object based, i.e. it allows you to store and upload files on the platform. You cannot install an OS or databases on S3.
  2. Files can be from 0 bytes to 5 TB in size.
  3. There is unlimited storage.
  4. Files are stored in buckets (similar to directories on a Windows or Linux file system).
  5. Bucket names share a single global namespace. For example, if we create a bucket called acloudguru in the EU West region, that name is reserved, so someone else using another Amazon account could not create an acloudguru bucket, e.g.: https://s3-us-west2.amazonaws.com/acloudguru
  6. When you upload a file to S3, you will receive an HTTP 200 code if the file uploaded successfully.
  7. Amazon guarantees 99.99% availability for the S3 platform. S3 buckets are essentially spread across Availability Zones, so if one Availability Zone goes down, the S3 data is still served from the other Availability Zones. Amazon does this automatically; we do not need to configure it.
  8. Amazon also guarantees 99.999999999% durability for S3 information. As an analogy, think of storing a file on a RAID 1 disk set: since RAID 1 is a mirror, your information is stored across two disks, so you can afford the loss of one disk. Amazon structures S3 so that if we store 10,000 files, those 10,000 files will stay there with the above durability percentage.
  9. S3 can have metadata (key-value pairs) on each file.
  10. S3 allows you to do lifecycle management as well as versioning.
  11. S3 also allows you to encrypt your buckets, so your files can be stored encrypted at rest.

S3 Bucket URL Structure

Amazon S3 supports both virtual-hosted–style and path-style URLs to access a bucket.

  • In a virtual-hosted–style URL, the bucket name is part of the domain name in the URL.
    • http://bucket.s3.amazonaws.com
    • http://bucket.s3-aws-region.amazonaws.com
  • In a path-style URL, the bucket name is not part of the domain (unless you use a region-specific endpoint).

S3 Object Structure

S3 is object based. Objects consist of the following:

  1. Key: This is simply the name of the object.
  2. Value: This is simply the data and is made up of a sequence of bytes.
  3. Version ID: Important for versioning.
  4. MetaData: Data about the data you are storing.
  5. Subresources:
    • Access control lists (ACL): bucket- and object-level access control.
    • Torrent: Amazon S3 supports retrieving objects via the BitTorrent protocol.

Storage Types

  1. S3 (Standard): 99.99% availability, 99.999999999% durability, stored redundantly across multiple devices in multiple facilities, and designed to sustain the loss of 2 facilities concurrently.
  2. S3 - IA (Infrequently Accessed): for data that is accessed less frequently but requires rapid access when needed. Lower fee than S3, but you are charged a retrieval fee.
  3. Reduced Redundancy Storage (RRS): an Amazon S3 storage option that enables customers to store noncritical, reproducible data at lower levels of redundancy than Amazon S3’s standard storage. It provides a highly available solution for distributing or sharing content that is durably stored elsewhere, or for storing thumbnails, transcoded media, or other processed data that can be easily reproduced. Designed to provide 99.99% durability and 99.99% availability of objects over a given year.
  4. Glacier:
    • An extremely low-cost storage service for data archival. Amazon Glacier stores data for as little as $0.004 per gigabyte per month, and is optimized for data that is infrequently accessed and for which retrieval times of 3-5 hours are suitable.
    • Glacier is basically data archiving. In traditional organizations you might archive off to tape; in some countries, for example, companies regulated by the Financial Services Authority have to store their data for seven years or more.
    • You would archive off to tapes, put the tapes into a safe location, and then forget about them; after seven years you can destroy the data. Amazon gives us this data archival as a service: we can do away with tapes entirely and archive anything in our S3 buckets directly to Glacier.
    • In Amazon Glacier, data is stored in archives.
    • An archive can contain up to 40TB of data, and you can have an unlimited number of archives.
    • Each archive is assigned a unique archive ID at the time of creation.
    • All archives are automatically encrypted, and archives are immutable—after an archive is created, it cannot be modified.
    • Vaults: Vaults are containers for archives. Each AWS account can have up to 1,000 vaults. You can control access to your vaults and the actions allowed using IAM policies or vault access policies.
    • Vault Locks: You can easily deploy and enforce compliance controls for individual Amazon Glacier vaults with a vault lock policy. Once locked, the policy can no longer be changed.

S3 v/s RRS v/s Glacier

  • Amazon Glacier supports 40TB archives, while Amazon S3 supports 5TB objects.
  • Amazon Glacier archives are identified by system-generated archive IDs, while Amazon S3 lets you use “friendly” key names.
  • Amazon Glacier archives are automatically encrypted, while encryption at rest is optional in Amazon S3.

S3 Charges Depends On

  1. Storage: charged for how much data you store on S3.
  2. Requests: charged for the number of requests you make.
  3. Storage Management: tags added to manage files in S3 are chargeable.
  4. Data Transfer Pricing: incoming data is free, but data moving out of S3, such as replication of files from one region to another, is chargeable.
  5. Transfer Acceleration: caching files at Edge Locations.
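
To make these pricing dimensions concrete, here is a toy monthly-cost calculation. The rates below are placeholders, not current AWS prices; always check the price list for your region:

```python
# Placeholder rates -- NOT current AWS prices; check your region's price list.
STORAGE_PER_GB = 0.023      # $/GB-month (hypothetical Standard rate)
PRICE_PER_1K_PUT = 0.005    # $ per 1,000 PUT requests (hypothetical)
PRICE_PER_10K_GET = 0.004   # $ per 10,000 GET requests (hypothetical)

def monthly_cost(gb_stored, put_requests, get_requests):
    """Sum the storage and request components of a month's S3 bill."""
    return (gb_stored * STORAGE_PER_GB
            + put_requests / 1000 * PRICE_PER_1K_PUT
            + get_requests / 10000 * PRICE_PER_10K_GET)

# 500 GB stored, 100k PUTs, 1M GETs
print(round(monthly_cost(500, 100_000, 1_000_000), 2))  # → 12.4
```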

S3 Charges List


Charges depend on the region; every region has a different price list.

S3 Bucket Creation

  • Rules for Bucket Naming
    • Always choose DNS compliant bucket name.
    • Bucket names must be at least 3 and no more than 63 characters long.
    • Bucket names can contain lowercase letters, numbers, and hyphens. Each label must start and end with a lowercase letter or a number.
    • Bucket names must not be formatted as an IP address (e.g., 192.168.5.4).
    • Amazon recommends that you do not use periods (.) in bucket names.
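
The naming rules above can be sketched as a validator (an approximation of the rules listed here, not the complete official rule set):

```python
import re

# A label starts and ends with a lowercase letter or digit; hyphens allowed inside.
LABEL = r"[a-z0-9](?:[a-z0-9-]*[a-z0-9])?"
BUCKET_RE = re.compile(rf"^{LABEL}(?:\.{LABEL})*$")
IP_RE = re.compile(r"^\d{1,3}(?:\.\d{1,3}){3}$")

def is_valid_bucket_name(name):
    """Check a bucket name against the DNS-compliance rules above."""
    if not 3 <= len(name) <= 63:
        return False
    if IP_RE.match(name):  # must not be formatted as an IP address
        return False
    return bool(BUCKET_RE.match(name))

print(is_valid_bucket_name("my-bucket-01"))  # → True
print(is_valid_bucket_name("192.168.5.4"))   # → False (IP address)
print(is_valid_bucket_name("MyBucket"))      # → False (uppercase)
```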

S3 Versioning

  • Stores all versions of an object (including all writes, and even if you delete an object).
  • Great backup tool; once enabled, versioning cannot be disabled, only suspended.
  • Integrates with lifecycle rules.
  • Versioning’s Multi-Factor Authentication (MFA) Delete capability can be used to provide an additional layer of security.
  • Cross-region replication requires versioning to be enabled on the source bucket.

Cross Region Replication

  • Cross-region replication is a feature of Amazon S3 that allows you to asynchronously replicate all new objects in the source bucket in one AWS region to a target bucket in another region.
  • Any metadata and ACLs associated with the object are also part of the replication.
  • To enable cross-region replication, versioning must be turned on for both source and destination buckets.

If turned on in an existing bucket, cross-region replication will only replicate new objects. Existing objects will not be replicated and must be copied to the new bucket via a separate command.

S3 Lifecycle management

  • can be used in conjunction with versioning
  • can be applied to current versions and previous versions.
  • Following actions are allowed in conjunction with or without versioning:
    • archive to Glacier storage class(30 days after IA, if relevant)
    • permanent delete
    • archive and permanent delete
    • transition to the Standard-Infrequent Access storage class (objects must be at least 128 KB and more than 30 days past the creation date)
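
Assuming the actions above, a lifecycle configuration combining an IA transition, a Glacier transition, and expiration might look like the sketch below; the prefix and day counts are illustrative:

```python
# Sketch: move objects under logs/ to Standard-IA after 30 days, to
# Glacier 30 days later, and permanently delete them after one year.
lifecycle_config = {
    "Rules": [{
        "ID": "archive-logs",            # illustrative rule name
        "Filter": {"Prefix": "logs/"},   # illustrative prefix
        "Status": "Enabled",
        "Transitions": [
            {"Days": 30, "StorageClass": "STANDARD_IA"},
            {"Days": 60, "StorageClass": "GLACIER"},
        ],
        "Expiration": {"Days": 365},
    }]
}
```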

Exam Tips

  • Amazon S3 is the core object storage service on AWS, allowing you to store an unlimited amount of data with very high durability.
  • Common Amazon S3 use cases include backup and archive, web content, big data analytics, static website hosting, mobile and cloud-native application hosting, and disaster recovery.
  • Amazon S3 is integrated with many other AWS cloud services, including AWS IAM, AWS KMS, Amazon EC2, Amazon EBS, Amazon EMR, Amazon DynamoDB, Amazon Redshift, Amazon SQS, AWS Lambda, and Amazon CloudFront.
  • Object storage differs from traditional block and file storage. Block storage manages data at a device level as addressable blocks, while file storage manages data at the operating system level as files and folders. Object storage manages data as objects that contain both data and metadata, manipulated by an API.
  • Amazon S3 buckets are containers for objects stored in Amazon S3. Bucket names must be globally unique. Each bucket is created in a specific region, and data does not leave the region unless explicitly copied by the user.
  • Amazon S3 objects are files stored in buckets. Objects can be up to 5TB and can contain any kind of data. Objects contain both data and metadata and are identified by keys. Each Amazon S3 object can be addressed by a unique URL formed by the web services endpoint, the bucket name, and the object key.
  • Amazon S3 has a minimalistic API—create/delete a bucket, read/write/delete objects, list keys in a bucket—and uses a REST interface based on standard HTTP verbs—GET, PUT, POST, and DELETE. You can also use SDK wrapper libraries, the AWS CLI, and the AWS Management Console to work with Amazon S3.
  • Amazon S3 is highly durable and highly available, designed for 11 nines of durability of objects in a given year and four nines of availability.
  • Amazon S3 is eventually consistent, but offers read-after-write consistency for new object PUTs.
  • Amazon S3 objects are private by default, accessible only to the owner. Objects can be marked public readable to make them accessible on the web. Controlled access may be provided to others using ACLs and AWS IAM and Amazon S3 bucket policies.
  • Static websites can be hosted in an Amazon S3 bucket.
  • Prefixes and delimiters may be used in key names to organize and navigate data hierarchically much like a traditional file system.
  • Amazon S3 offers several storage classes suited to different use cases: Standard is designed for general-purpose data needing high performance and low latency. Standard-IA is for less frequently accessed data. RRS offers lower redundancy at lower cost for easily reproduced data. Amazon Glacier offers low-cost durable storage for archives and long-term backups that are rarely accessed and can accept a three- to five-hour retrieval time.
  • Object lifecycle management policies can be used to automatically move data between storage classes based on time.
  • Amazon S3 data can be encrypted using server-side or client-side encryption, and encryption keys can be managed with Amazon KMS.
  • Versioning and MFA Delete can be used to protect against accidental deletion.
  • Cross-region replication can be used to automatically copy new objects from a source bucket in one region to a target bucket in another region.
  • Pre-signed URLs grant time-limited permission to download objects and can be used to protect media and other web content from unauthorized “web scraping.”
  • Multipart upload can be used to upload large objects, and Range GETs can be used to download portions of an Amazon S3 object or Amazon Glacier archive.
  • Server access logs can be enabled on a bucket to track requestor, object, action, and response.
  • Amazon S3 event notifications can be used to send an Amazon SQS or Amazon SNS message or to trigger an AWS Lambda function when an object is created or deleted.
  • Amazon Glacier can be used as a standalone service or as a storage class in Amazon S3.
  • Amazon Glacier stores data in archives, which are contained in vaults. You can have up to 1,000 vaults, and each vault can store an unlimited number of archives.
  • Amazon Glacier vaults can be locked for compliance purposes.

  • S3

    • Details
      • Max 100 buckets
      • Unique bucket names, DNS name convention
      • Multiple regions. Data stays in region unless explicitly moved.
      • Min size: 0 B
      • Max size: 5 TB
      • > 5 GB requires multipart upload
      • Multipart upload recommended on >= 100 MB
      • Multipart can upload parallel and out of order
      • CompleteMultipartUpload API reassembles the file after multipart upload
      • Can be encrypted before WRITE to disk, decrypted ON download
    • Data Consistency
      • New objects: Read-after-write consistency
        • No stale reads possible
        • Potential higher read latency
        • Potential lower read throughput
      • Updated objects: Eventual consistency
        • Stale reads possible
        • Lowest read latency
        • Highest read throughput
    • Performance
      • Consistent:
        • > 100 put/list/update requests/s
        • > 300 get requests/s
      • Burst:
        • > 300 put/list/update requests/s
        • > 800 get requests/s
      • Request more if needed prior to production
    • Performance Optimizations
      • S3 keeps index of object keynames in each region
      • BAD:
        • ex/2016-12-07/p1.jpg
        • ex/2016-12-07/p2.jpg
        • ex/2016-12-07/p3.jpg
        • ex/2016-12-07/p4.jpg
      • GOOD:
        • ex/e87f-2016-12-07/p1.jpg
        • ex/a023-2016-12-07/p2.jpg
        • ex/b753-2016-12-07/p3.jpg
        • ex/18fa-2016-12-07/p4.jpg
      • Requests don’t all float through same node (indexes built on hashes rather than the same date string)
      • GET heavy workflow => use CloudFront CDN as caching service
        • Limits the requests on S3
    • Hosting static websites
      • HTML/CSS/Javascript in Buckets.
        • Custom error pages
        • Custom index file
        • Custom redirect rules
      • S3 gives default URL
        • <bucket-name>.s3-website-<region>.amazonaws.com
        • Route 53 integration for custom names
      • Bucket name must match domain name (dashsoft.dk -> S3://dashsoft.dk, www.dashsoft.dk -> s3://www.dashsoft.dk)
        • if someone used that bucket name, feature can’t be used
    • S3 IAM and Bucket policies
      • more: https://aws.amazon.com/blogs/security/iam-policies-and-bucket-policies-and-acls-oh-my-controlling-access-to-s3-resources/
      • IAM policy: User level (User policy)
        • Multiple users can be assigned same IAM policy
        • Attached to a user, so can not be used to grant anonymous users permissions
        • JSON based
      • Bucket policy: Resource level (resource-based policy)
        • JSON file attached to resource
          • Who is allowed to access resources?
          • What that user can do with those resources?
          • Can contain conditions (e.g. StringEquals)
        • max 20kb size
        • SHOULD BE USED to manage cross-account permissions for all Amazon S3 permissions
      • ACLs:
        • Legacy.
        • For buckets and objects
        • More restrictive than Bucket Policies
          • Can manage permissions on INDIVIDUAL OBJECTS WITHIN A BUCKET
        • XML based
        • Grant read/write to OTHER AWS ACCOUNTS
        • No conditional permissions
        • Can not explicitly deny permissions
        • Only way to manage access to objects not owned by the bucket owner
      • Bucket owner has full permission as default
        • Bucket owner paying bills can deny access or modify objects regardless of who owns them
      • Explicit DENY always overrides allows in policies.
    • Logging S3 API calls
      • CloudTrail
        • Bucket levels operations
        • Logs all API calls
        • Use with CloudWatch to filter certain calls and notify on occurrences
      • Amazon S3 Server Access Logs
        • Object level operations
        • Logs GET
        • May take up to 1 hour before showing logs
        • on a best effort basis - not for completely accurate logging!!
      • Saved to S3 bucket
    • Object Versioning
      • Three states: Versioning enabled, Versioning suspended, unversioned
        • Once version-enabled a bucket cannot be unversioned.
        • Version-enabled can be version suspended
          • Existing versions will remain.
          • New objects will be given version ID null
      • Setup at bucket level
      • New versions of objects are given a Version ID
      • Upon GET the latest version is returned
      • Upon DELETE all versions remain in bucket, but a delete marker will be inserted on the object. Will appear to be deleted.
      • Permanent deletion of version: Specify ID. Next ID will be current version.
      • Restore a version:
        • Copy from ID (old versions ID) to same bucket
          • works because: GET -> PUT (gets new ID)
        • Delete current versions ID (i.e. next ID will be current)
      • Lifecycle management to handle versions life. Examples:
        • Send noncurrent versions to amazon Glacier?
        • Permanently delete objects that have been non-current for 180 days
    • S3 Encryption
      • Protecting data in-transit
        • SSL or client-side encryption or…
        • AWS KMS-managed (Key Management Service) Customer Master Key (CMK)
          • Unique encryption key for each object
          • Amazon knows master key
          • On upload:
            • Client request key from AWS KMS
            • KMS returns key: plain text + cipher blob
            • plain text used to encrypt object data
            • cipher blob to upload to S3 as object metadata
          • On download:
            • Client downloads encrypted object data + cipher blob in object metadata
            • Client sends cipher blob to KMS and gets plain text back
            • plain text used to decrypt object data
        • Client-side master key:
          • Amazon does NOT know master key
          • on upload:
            • Client provides master key to AWS S3 encryption client (LOCALLY)
            • S3 client generates random data key and encrypts it with master key
            • S3 client encrypts data using data key and uploads material description as part of object metadata (x-amz-meta-x-amz-key)
          • on download:
            • Client downloads encrypted object WITH metadata
            • Metadata tells client which master key to use
            • Client decrypts the data key using that master key
            • Client uses decrypted data key to decrypt object
      • Protecting data at rest
        • Server side encryption (by AWS S3)
          • Must add x-amz-server-side-encryption request header to upload request
          • Uses AES-256
          • Bucket policy can require all objects to use server-side encryption
            • StringNotEquals “x-amz-server-side-encryption” = “AES256”
        • KMS-managed encryption keys
          • Uses customer master keys
          • More flexibility in controlling keys
        • Customer-provided encryption keys
          • Gives the option to generate your own keys outside the environment
          • Amazon does NOT store the encryption key
      • Both can be used simultaneously
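
The “GOOD” key layout in the performance-optimization notes above can be produced by inserting a short hash into each key, so that sequential names (dates, counters) do not all land on the same index partition. This helper is one illustrative way to do it:

```python
import hashlib

def randomize_key(key, length=4):
    """Insert a short hex hash after the first path segment of a key.

    ex/2016-12-07/p1.jpg becomes ex/<hash>-2016-12-07/p1.jpg, spreading
    otherwise-sequential keys across S3's partitioned key index.
    """
    digest = hashlib.md5(key.encode()).hexdigest()[:length]
    top, sep, rest = key.partition("/")
    return f"{top}/{digest}-{rest}" if sep else f"{digest}-{key}"

print(randomize_key("ex/2016-12-07/p1.jpg"))
```

The hash is derived from the key itself, so the mapping is deterministic and can be recomputed when reading the object back.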

S3 Encryption

Storage Gateway

S3 Security

S3 Host Static Website

S3 Transfer Acceleration

Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and your Amazon S3 bucket. Transfer Acceleration leverages Amazon CloudFront’s globally distributed AWS Edge Locations.

  • First, enable Transfer Acceleration on an S3 bucket using the Amazon S3 console.
  • After Transfer Acceleration is enabled, you can point your Amazon S3 PUT and GET requests to the s3-accelerate endpoint domain name.
  • Your data transfer application must use one of the following two types of endpoints to access the bucket for faster data transfer:
    • <bucket-name>.s3-accelerate.amazonaws.com
    • <bucket-name>.s3-accelerate.dualstack.amazonaws.com

S3 FAQ

The latest FAQs can be found at this link.