AWS General Questions

AWS Certified Developer

  • At least two questions about specific request and response parameters/fields for SQS and SNS
  • Optimizing S3 key names
  • HTTP status codes
  • Fixing session state problems using something other than ELB sticky sessions
  • Elastic Beanstalk supported platforms
  • SDK languages and default region
  • Efficient usage of DynamoDB throughput
  • DynamoDB key questions, but always using the old terms hash/range instead of the new terms partition/sort.
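
The "Optimizing S3 key names" topic refers to the historical guidance of randomizing key prefixes: S3 once partitioned buckets by lexicographic key ranges, so sequential prefixes (dates, incrementing IDs) concentrated load on a single partition. A minimal sketch of hash-prefixing keys (the function name and 4-character prefix length are illustrative choices, not AWS requirements):

```python
import hashlib

def prefixed_key(natural_key: str, prefix_len: int = 4) -> str:
    """Prepend a short hash of the key so writes spread across
    S3's lexicographic key-range partitions."""
    digest = hashlib.md5(natural_key.encode()).hexdigest()
    return f"{digest[:prefix_len]}/{natural_key}"
```

The same natural key always yields the same prefixed key, so lookups stay deterministic while sequential writes fan out across the keyspace.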

AWS Config

  • One of the challenges in managing AWS resources is keeping track of changes in resource configuration over time. Which one of the following statements provides the best solution?

    • Use strict syntax tagging on the resources
    • Create a custom application to automate the configuration management process
    • Use AWS Config for supported services and use an automated process via APIs for unsupported services
    • Use resource groups and tagging along with CloudTrail so that you can audit changes using the logs
  • Fill in the blanks: ____ helps us track AWS API calls and transitions, ____ helps us understand what resources we have now, and ____ allows auditing of credentials and logins.

    • AWS Config, CloudTrail, IAM Credential Reports
    • CloudTrail, IAM Credential Reports, AWS Config
    • CloudTrail, AWS Config, IAM Credential Reports
    • AWS Config, IAM Credential Reports, CloudTrail

Amazon WorkSpaces

  • A company needs to deploy virtual desktops to its customers in a virtual private cloud, leveraging existing security controls. Which set of AWS services and features will meet the company’s requirements?
    • Virtual Private Network connection, AWS Directory Services, and ClassicLink (ClassicLink allows you to link an EC2-Classic instance to a VPC in your account, within the same region)
    • Virtual Private Network connection, AWS Directory Services, and Amazon WorkSpaces (WorkSpaces for virtual desktops, and AWS Directory Services to authenticate to an existing on-premises AD through VPN)
    • AWS Directory Service, Amazon WorkSpaces, and AWS Identity and Access Management (AWS Directory Service needs a VPN connection to interact with an on-premises AD directory)
    • Amazon Elastic Compute Cloud, and AWS Identity and Access Management (need WorkSpaces for virtual desktops)

AWS CloudHSM

  • With which AWS services can CloudHSM be used? (Select 2)
    • S3
    • DynamoDB
    • RDS (CloudHSM integrates with RDS for Oracle TDE key storage)
    • ElastiCache
    • Amazon Redshift (CloudHSM integrates with Redshift for cluster encryption keys)

AWS CloudSearch

  • A newspaper organization has an on-premises application which allows the public to search its back catalogue and retrieve individual newspaper pages via a website written in Java. They have scanned the old newspapers into JPEGs (approx. 17 TB) and used Optical Character Recognition (OCR) to populate a commercial search product. The hosting platform and software are now end of life and the organization wants to migrate its archive to AWS, producing a cost-efficient architecture that is still designed for availability and durability. Which is the most appropriate?
    • Use S3 with reduced redundancy to store and serve the scanned files, install the commercial search application on EC2 Instances and configure with auto-scaling and an Elastic Load Balancer. (Reusing Commercial search application which is nearing end of life not a good option for cost)
    • Model the environment using CloudFormation. Use an EC2 instance running an Apache webserver and an open source search application, stripe multiple standard EBS volumes together to store the JPEGs and search index. (storing JPEGs on EBS volumes is not cost-effective, and the answer does not address availability of the open source solution)
    • Use S3 with standard redundancy to store and serve the scanned files, use CloudSearch for query processing, and use Elastic Beanstalk to host the website across multiple availability zones. (Cost effective S3 storage, CloudSearch for Search and Highly available and durable web application)
    • Use a single-AZ RDS MySQL instance to store the search index and the JPEG images, use an EC2 instance to serve the website and translate user queries into SQL. (MySQL is not an ideal solution to store the index and JPEG images for cost and performance)
    • Use a CloudFront download distribution to serve the JPEGs to the end users and install the current commercial search product, along with a Java Container for the website on EC2 instances and use Route53 with DNS round-robin. (web application not scalable, and what is the source for the JPEG files served through CloudFront?)

Simple Email Service

  • What does Amazon SES stand for?
    • Simple Elastic Server
    • Simple Email Service
    • Software Email Solution
    • Software Enabled Server

AWS Import/Export

  • You are working with a customer who has 10 TB of archival data that they want to migrate to Amazon Glacier. The customer has a 1-Mbps connection to the Internet. Which service or feature provides the fastest method of getting the data into Amazon Glacier?
    • Amazon Glacier multipart upload
    • AWS Storage Gateway
    • VM Import/Export
    • AWS Import/Export (a normal upload would take roughly 900 days because the 1-Mbps link caps throughput, so shipping the data on physical media is far faster)
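
The "~900 days" estimate in the explanation is easy to verify with back-of-the-envelope arithmetic (assuming 10 TB means 10 × 10^12 bytes and a fully saturated 1-Mbps link):

```python
def transfer_days(data_bytes: float, line_mbps: float) -> float:
    """Days needed to push data_bytes over a link of line_mbps megabits/second."""
    bits = data_bytes * 8
    seconds = bits / (line_mbps * 1_000_000)
    return seconds / 86_400  # seconds per day

# 10 TB (decimal) over a saturated 1-Mbps connection
days = transfer_days(10 * 10**12, 1.0)
print(round(days))  # 926 days, matching the ~900-day estimate
```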

AWS Elasticsearch

  • You need to perform ad-hoc analysis on log data, including searching quickly for specific error codes and reference numbers. Which should you evaluate first?

    • Amazon Elasticsearch Service (Elasticsearch Service (ES) is a managed service that makes it easy to deploy, operate, and scale Elasticsearch clusters in the AWS cloud. Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and clickstream analytics)
    • AWS RedShift
    • AWS EMR
    • AWS DynamoDB
  • You are hired as the new head of operations for a SaaS company. Your CTO has asked you to make debugging any part of your entire operation simpler and as fast as possible. She complains that she has no idea what is going on in the complex, service-oriented architecture, because the developers just log to disk, and it’s very hard to find errors in logs on so many services. How can you best meet this requirement and satisfy your CTO?

    • Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Lambda. Use the Lambda to analyze logs as soon as they come in and flag issues.
    • Begin using CloudWatch Logs on every service. Stream all Log Groups into S3 objects. Use AWS EMR cluster jobs to perform ad-hoc MapReduce analysis and write new queries when needed.
    • Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Kinesis. Use Apache Spark on AWS EMR to perform at-scale stream processing queries on the log chunks and flag issues.
    • Begin using CloudWatch Logs on every service. Stream all Log Groups into an AWS Elasticsearch Service Domain running Kibana 4 and perform log analysis on a search cluster. (AWS Elasticsearch with Kibana stack is designed specifically for real-time, ad-hoc log analysis and aggregation)

Amazon Glacier

  • What is Amazon Glacier?
    • You mean Amazon “Iceberg”: it’s a low-cost storage service.
    • A security tool that allows you to “freeze” an EBS volume and perform computer forensics on it.
    • A low-cost storage service that provides secure and durable storage for data archiving and backup
    • A security tool that allows you to “freeze” an EC2 instance and perform computer forensics on it.
  • Amazon Glacier is designed for: (Choose 2 answers)
    • Active database storage
    • Infrequently accessed data
    • Data archives
    • Frequently accessed data
    • Cached session data
  • An organization is generating digital policy files which are required by the admins for verification. Once the files are verified they may not be required in the future unless there is some compliance issue. If the organization wants to save them in a cost-effective way, which is the best possible solution?
    • AWS RRS
    • AWS S3
    • AWS RDS
    • AWS Glacier (most cost-effective for archives that are rarely, if ever, accessed)
  • A user has moved an object to Glacier using lifecycle rules. The user requests to restore the archive after 6 months. When the restore request is completed, the user accesses that archive. Which of the below mentioned statements is not true in this condition?
    • The archive will be available as an object for the duration specified by the user during the restoration request
    • The restored object’s storage class will be RRS (After the object is restored the storage class still remains GLACIER.)
    • The user can modify the restoration period only by issuing a new restore request with the updated period
    • The user needs to pay storage for both RRS (restored) and Glacier (Archive) Rates
  • A user is uploading archives to Glacier. The user is trying to understand key Glacier resources. Which of the below mentioned options is not a Glacier resource?
    • Notification configuration
    • Archive ID
    • Job
    • Archive
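
Related to Glacier uploads: the Glacier API requires an SHA-256 tree hash of each payload (the `x-amz-sha256-tree-hash` header). The payload is split into 1 MiB chunks, each chunk is hashed, and the digests are combined pairwise until a single root digest remains. A sketch of that computation (simplified from the documented algorithm):

```python
import hashlib

CHUNK = 1024 * 1024  # Glacier tree hashes are computed over 1 MiB chunks

def tree_hash(data: bytes) -> str:
    """SHA-256 tree hash: hash each 1 MiB chunk, then fold digests pairwise."""
    if not data:
        return hashlib.sha256(b"").hexdigest()
    hashes = [hashlib.sha256(data[i:i + CHUNK]).digest()
              for i in range(0, len(data), CHUNK)]
    while len(hashes) > 1:
        paired = []
        for i in range(0, len(hashes), 2):
            if i + 1 < len(hashes):
                paired.append(hashlib.sha256(hashes[i] + hashes[i + 1]).digest())
            else:
                paired.append(hashes[i])  # odd digest carries up unchanged
        hashes = paired
    return hashes[0].hex()
```

For payloads under 1 MiB the tree hash is just the plain SHA-256 of the data, which makes the function easy to sanity-check.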

AWS Data Pipeline

  • An international company has deployed a multi-tier web application that relies on DynamoDB in a single region. For regulatory reasons they need disaster recovery capability in a separate region with a Recovery Time Objective of 2 hours and a Recovery Point Objective of 24 hours. They should synchronize their data on a regular basis and be able to provision the web application rapidly using CloudFormation. The objective is to minimize changes to the existing web application, control the throughput of DynamoDB used for the synchronization of data and synchronize only the modified elements. Which design would you choose to meet these requirements?
    • Use AWS Data Pipeline to schedule a DynamoDB cross-region copy once a day. Create a ‘Lastupdated’ attribute in your DynamoDB table that would represent the timestamp of the last update and use it as a filter.
    • Use EMR and write a custom script to retrieve data from DynamoDB in the current region using a SCAN operation and push it to DynamoDB in the second region. (No Schedule and throughput control)
    • Use AWS Data Pipeline to schedule an export of the DynamoDB table to S3 in the current region once a day, then schedule another task immediately after it that will import data from S3 to DynamoDB in the other region. (with AWS Data Pipeline the data can be copied directly to the other DynamoDB table, making the S3 round trip unnecessary)
    • Send each item into an SQS queue in the second region; use an auto-scaling group behind the SQS queue to replay the write in the second region. (not automated and requires changes to the existing application)

Elastic Transcoder

  • Your website is serving on-demand training videos to your workforce. Videos are uploaded monthly in high-resolution MP4 format. Your workforce is distributed globally, often on the move, and uses company-provided tablets that require the HTTP Live Streaming (HLS) protocol to watch a video. Your company has no video transcoding expertise and, if it were required, you might need to pay for a consultant. How do you implement the most cost-efficient architecture without compromising high availability and quality of video delivery?
    • Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. S3 to host videos with Lifecycle Management to archive original files to Glacier after a few days. CloudFront to serve HLS transcoded videos from S3.
    • A video transcoding pipeline running on EC2, using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. S3 to host videos with Lifecycle Management to archive all files to Glacier after a few days. CloudFront to serve HLS transcoded videos from Glacier.
    • Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. EBS volumes to host videos and EBS snapshots to incrementally back up original files after a few days. CloudFront to serve HLS transcoded videos from EC2.
    • A video transcoding pipeline running on EC2, using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. EBS volumes to host videos and EBS snapshots to incrementally back up original files after a few days. CloudFront to serve HLS transcoded videos from EC2.

AWS CloudTrail

  • You currently operate a web application in the AWS US-East region. The application runs on an auto-scaled layer of EC2 instances and an RDS Multi-AZ database. Your IT security compliance officer has tasked you to develop a reliable and durable logging solution to track changes made to your EC2, IAM and RDS resources. The solution must ensure the integrity and confidentiality of your log data. Which of these solutions would you recommend?

    • Create a new CloudTrail trail with one new S3 bucket to store the logs and with the global services option selected. Use IAM roles, S3 bucket policies and Multi Factor Authentication (MFA) Delete on the S3 bucket that stores your logs. (Single New bucket with global services option for IAM and MFA delete for confidentiality)
    • Create a new CloudTrail trail with one new S3 bucket to store the logs. Configure SNS to send log file delivery notifications to your management system. Use IAM roles and S3 bucket policies on the S3 bucket that stores your logs. (missing the global services option for IAM)
    • Create a new CloudTrail trail with an existing S3 bucket to store the logs and with the global services option selected. Use S3 ACLs and Multi Factor Authentication (MFA) Delete on the S3 bucket that stores your logs. (existing bucket prevents confidentiality)
    • Create three new CloudTrail trails with three new S3 buckets to store the logs: one for the AWS Management Console, one for AWS SDKs and one for command line tools. Use IAM roles and S3 bucket policies on the S3 buckets that store your logs. (three buckets not needed; missing the global services option)
  • Which of the following are true regarding AWS CloudTrail? Choose 3 answers

    • CloudTrail is enabled globally (it can be enabled for all regions and also per region basis)
    • CloudTrail is enabled by default (historically it was not enabled by default, though AWS now enables it by default)
    • CloudTrail is enabled on a per-region basis (it can be enabled for all regions and also per region basis)
    • CloudTrail is enabled on a per-service basis (once enabled it is applicable for all the supported services, service can’t be selected)
    • Logs can be delivered to a single Amazon S3 bucket for aggregation
    • CloudTrail is enabled for all available services within a region. (is enabled only for CloudTrail supported services)
    • Logs can only be processed and delivered to the region in which they are generated. (can be logged to bucket in any region)
  • An organization has configured custom metric upload with CloudWatch. The organization has given permission to its employees to upload data using the CLI as well as the SDK. How can the user track the calls made to CloudWatch?

    • The user can enable logging with CloudWatch which logs all the activities
    • Use CloudTrail to monitor the API calls
    • Create an IAM user and allow each user to log the data using the S3 bucket
    • Enable detailed monitoring with CloudWatch
  • A user is trying to understand the CloudWatch metrics for the AWS services. It is required that the user should first understand the namespace for the AWS services. Which of the below mentioned is not a valid namespace for the AWS services?

    • AWS/StorageGateway
    • AWS/CloudTrail (not one of the CloudWatch-supported namespaces)
    • AWS/ElastiCache
    • AWS/SWF
  • Your CTO thinks your AWS account was hacked. What is the only way to know for certain if there was unauthorized access and what they did, assuming your hackers are very sophisticated AWS engineers and doing everything they can to cover their tracks?

    • Use CloudTrail Log File Integrity Validation.
    • Use AWS Config SNS Subscriptions and process events in real time.
    • Use CloudTrail backed up to AWS S3 and Glacier.
    • Use AWS Config Timeline forensics.
  • Your CTO has asked you to make sure that you know what all users of your AWS account are doing to change resources at all times. She wants a report of who is doing what over time, reported to her once per week, for as broad a resource type group as possible. How should you do this?

    • Create a global AWS CloudTrail Trail. Configure a script to aggregate the log data delivered to S3 once per week and deliver this to the CTO.
    • Use CloudWatch Events Rules with an SNS topic subscribed to all AWS API calls. Subscribe the CTO to an email type delivery on this SNS Topic.
    • Use AWS IAM credential reports to deliver a CSV of all uses of IAM User Tokens over time to the CTO.
    • Use AWS Config with an SNS subscription on a Lambda, and insert these changes over time into a DynamoDB table. Generate reports based on the contents of this table.
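
CloudTrail Log File Integrity Validation (the answer to the "account hacked" question) works by delivering digest files that record the SHA-256 hashes of the log files and chain back to the previous digest, with each digest signed by AWS. Stripped of the signing, the chaining idea can be illustrated like this (a simplified sketch, not CloudTrail's actual digest format):

```python
import hashlib
import json

def digest_chain(log_batches, seed=""):
    """Each digest commits to its batch *and* the previous digest, so
    tampering with any earlier batch changes every later digest."""
    digests, prev = [], seed
    for batch in log_batches:
        payload = json.dumps(batch, sort_keys=True) + prev
        prev = hashlib.sha256(payload.encode()).hexdigest()
        digests.append(prev)
    return digests
```

A verifier recomputes the chain from the raw batches; any tampering with an earlier batch changes every subsequent digest, which is why a sophisticated attacker cannot quietly rewrite history.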

AWS Trusted Advisor

  • The Trusted Advisor service provides insight regarding which four categories of an AWS account?
    • Security, fault tolerance, high availability, and connectivity
    • Security, access control, high availability, and performance
    • Performance, cost optimization, security, and fault tolerance (the four original Trusted Advisor categories)
    • Performance, cost optimization, access control, and connectivity

AWS Elastic Beanstalk vs OpsWorks vs CloudFormation

  • Your team is excited about the use of AWS because now they have access to programmable infrastructure. You have been asked to manage your AWS infrastructure in a manner similar to the way you might manage application code. You want to be able to deploy exact copies of different versions of your infrastructure, stage changes into different environments, revert back to previous versions, and identify what versions are running at any particular time (development, test, QA, production). Which approach addresses this requirement?
    • Use cost allocation reports and AWS Opsworks to deploy and manage your infrastructure.
    • Use AWS CloudWatch metrics and alerts along with resource tagging to deploy and manage your infrastructure.
    • Use AWS Elastic Beanstalk and a version control system like GIT to deploy and manage your infrastructure.
    • Use AWS CloudFormation and a version control system like GIT to deploy and manage your infrastructure.
  • An organization is planning to use AWS for their production roll out. The organization wants to implement automation for deployment such that it will automatically create a LAMP stack, download the latest PHP installable from S3 and set up the ELB. Which of the below mentioned AWS services meets the requirement for making an orderly deployment of the software?
    • AWS Elastic Beanstalk
    • AWS CloudFront
    • AWS CloudFormation
    • AWS DevOps
  • You are working with a customer who is using Chef configuration management in their data center. Which service is designed to let the customer leverage existing Chef recipes in AWS?
    • Amazon Simple Workflow Service
    • AWS Elastic Beanstalk
    • AWS CloudFormation
    • AWS OpsWorks
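
For the infrastructure-as-code questions above, the key point is that a CloudFormation template is an ordinary text artifact, so it can be committed to GIT, diffed, and rolled back like application code. A minimal illustrative YAML sketch (the AMI ID and resource names are placeholders, not working values):

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal web-tier sketch (illustrative only)
Parameters:
  InstanceType:
    Type: String
    Default: t2.micro
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceType
      ImageId: ami-12345678   # placeholder AMI ID
```

Tagging a commit of this file per environment gives exactly the "identify what version is running" property the question asks about.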

Amazon ElastiCache

  • What does Amazon ElastiCache provide?
    • A service by this name doesn’t exist. Perhaps you mean Amazon CloudCache.
    • A virtual server with a huge amount of memory.
    • A managed In-memory cache service
    • An Amazon EC2 instance with the Memcached software already pre-installed.
  • You are developing a highly available web application using stateless web servers. Which services are suitable for storing session state data? Choose 3 answers.
    • Elastic Load Balancing
    • Amazon Relational Database Service (RDS)
    • Amazon CloudWatch
    • Amazon ElastiCache
    • Amazon DynamoDB
    • AWS Storage Gateway
  • Which statement best describes ElastiCache?
    • Reduces the latency by splitting the workload across multiple AZs
    • A simple web services interface to create and store multiple data sets, query your data easily, and return the results
    • Offload the read traffic from your database in order to reduce latency caused by read-heavy workload
    • Managed service that makes it easy to set up, operate and scale a relational database in the cloud
  • Our company is getting ready to do a major public announcement of a social media site on AWS. The website is running on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL Extra Large DB Instance. The site performs a high number of small reads and writes per second and relies on an eventual consistency model. After comprehensive tests you discover that there is read contention on RDS MySQL. Which are the best approaches to meet these requirements? (Choose 2 answers)
    • Deploy ElastiCache in-memory cache running in each availability zone
    • Implement sharding to distribute load to multiple RDS MySQL instances
    • Increase the RDS MySQL Instance size and Implement provisioned IOPS
    • Add an RDS MySQL read replica in each availability zone
  • You are using ElastiCache Memcached to store session state and cache database queries in your infrastructure. You notice in CloudWatch that Evictions and Get Misses are both very high. What two actions could you take to rectify this? Choose 2 answers
    • Increase the number of nodes in your cluster
    • Tweak the max_item_size parameter
    • Shrink the number of nodes in your cluster
    • Increase the size of the nodes in the cluster
  • You have been tasked with moving an ecommerce web application from a customer’s datacenter into a VPC. The application must be fault tolerant as well as highly scalable. Moreover, the customer is adamant that service interruptions not affect the user experience. As you near launch, you discover that the application currently uses multicast to share session state between web servers. In order to handle session state within the VPC, you choose to:
    • Store session state in Amazon ElastiCache for Redis (scalable and makes the web applications stateless)
    • Create a mesh VPN between instances and allow multicast on it
    • Store session state in Amazon Relational Database Service (RDS solution not highly scalable)
    • Enable session stickiness via Elastic Load Balancing (affects user experience if the instance goes down)
  • When you are designing to support a 24-hour flash sale, which one of the following methods best describes a strategy to lower the latency while keeping up with unusually heavy traffic?
    • Launch enhanced networking instances in a placement group to support the heavy traffic (only improves internal communication)
    • Apply Service Oriented Architecture (SOA) principles instead of a 3-tier architecture (just simplifies architecture)
    • Use Elastic Beanstalk to enable blue-green deployment (only minimizes download for applications and ease of rollback)
    • Use ElastiCache as in-memory storage on top of DynamoDB to store user sessions (scalable, faster read/writes and in memory storage)
  • You are configuring your company’s application to use Auto Scaling and need to move user state information. Which of the following AWS services provides a shared data store with durability and low latency?
    • AWS ElastiCache Memcached (does not provide durability; if a node is lost, its data is lost)
    • Amazon Simple Storage Service
    • Amazon EC2 instance storage
    • Amazon DynamoDB
  • Your application is using an ELB in front of an Auto Scaling group of web/application servers deployed across two AZs and a Multi-AZ RDS Instance for data persistence. The database CPU is often above 80% usage and 90% of I/O operations on the database are reads. To improve performance you recently added a single-node Memcached ElastiCache Cluster to cache frequent DB query results. In the next weeks the overall workload is expected to grow by 30%. Do you need to change anything in the architecture to maintain the high availability for the application with the anticipated additional load and Why?
    • You should deploy two Memcached ElastiCache Clusters in different AZs because the RDS Instance will not be able to handle the load if the cache node fails.
    • If the cache node fails the automated ElastiCache node recovery feature will prevent any availability impact. (does not provide high availability, as data is lost if the node is lost)
    • Yes, you should deploy the Memcached ElastiCache Cluster with two nodes in the same AZ as the RDS DB master instance to handle the load if one cache node fails. (a single AZ affects availability; the DB is Multi-AZ and the cache would be lost if the AZ goes down)
    • No if the cache node fails you can always get the same data from the DB without having any availability impact. (Will overload the database affecting availability)
  • A read-only news reporting site with a combined web and application tier and a database tier that receives large and unpredictable traffic demands must be able to respond to these traffic fluctuations automatically. What AWS services should be used to meet these requirements?
    • Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an autoscaling group monitored with CloudWatch and RDS with read replicas.
    • Stateful instances for the web and application tier in an autoscaling group monitored with CloudWatch and RDS with read replicas (Stateful instances will not allow for scaling)
    • Stateful instances for the web and application tier in an autoscaling group monitored with CloudWatch and multi-AZ RDS (stateful instances will not allow for scaling & multi-AZ is for high availability and not scaling)
    • Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an autoscaling group monitored with CloudWatch and multi-AZ RDS (multi-AZ is for high availability and not scaling)
  • You have written an application that uses the Elastic Load Balancing service to spread traffic to several web servers. Your users complain that they are sometimes forced to login again in the middle of using your application, after they have already logged in. This is not behavior you have designed. What is a possible solution to prevent this happening?
    • Use instance memory to save session state.
    • Use instance storage to save session state.
    • Use EBS to save session state.
    • Use ElastiCache to save session state.
    • Use Glacier to save session state.
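
Several of the Memcached questions above (evictions, node counts, cache node failure) hinge on how client libraries map keys to nodes. Memcached clients commonly use consistent hashing so that adding or removing a node remaps only a fraction of the keys instead of nearly all of them. A minimal ring sketch (the MD5 hash and 100-vnode replica count are illustrative choices):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hash ring of the kind memcached client libraries
    use: each node occupies many pseudo-random points on a ring, and a key
    maps to the first node clockwise from the key's own hash point."""

    def __init__(self, nodes, replicas=100):
        self.replicas = replicas
        self.ring = []  # sorted list of (point, node)
        for node in nodes:
            self.add(node)

    def _point(self, label):
        return int(hashlib.md5(label.encode()).hexdigest(), 16)

    def add(self, node):
        # place `replicas` virtual points for this node on the ring
        for i in range(self.replicas):
            bisect.insort(self.ring, (self._point(f"{node}#{i}"), node))

    def node_for(self, key):
        idx = bisect.bisect(self.ring, (self._point(key),))
        return self.ring[idx % len(self.ring)][1]  # wrap around the ring
```

Adding a node moves only the keys that now fall on the new node's arcs; every other key keeps hitting its old node, which limits the cache-miss storm when the cluster is resized.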

Amazon Kinesis

  • You are deploying an application to track GPS coordinates of delivery trucks in the United States. Coordinates are transmitted from each delivery truck once every three seconds. You need to design an architecture that will enable real-time processing of these coordinates from multiple consumers. Which service should you use to implement data ingestion?

    • Amazon Kinesis
    • AWS Data Pipeline
    • Amazon AppStream
    • Amazon Simple Queue Service
  • You are deploying an application to collect votes for a very popular television show. Millions of users will submit votes using mobile devices. The votes must be collected into a durable, scalable, and highly available data store for real-time public tabulation. Which service should you use?

    • Amazon DynamoDB
    • Amazon Redshift
    • Amazon Kinesis
    • Amazon Simple Queue Service
  • Your company is in the process of developing a next-generation pet collar that collects biometric information to assist families with promoting healthy lifestyles for their pets. Each collar will push 30 KB of biometric data in JSON format every 2 seconds to a collection platform that will process and analyze the data, providing health trending information back to the pet owners and veterinarians via a web portal. Management has tasked you to architect the collection platform, ensuring the following requirements are met: provide the ability for real-time analytics of the inbound biometric data; ensure processing of the biometric data is highly durable, elastic and parallel; the results of the analytic processing should be persisted for data mining. Which architecture outlined below will meet the initial requirements for the collection platform?

    • Utilize S3 to collect the inbound sensor data, analyze the data from S3 with a daily scheduled Data Pipeline and save the results to a Redshift Cluster.
    • Utilize Amazon Kinesis to collect the inbound sensor data, analyze the data with Kinesis clients and save the results to a Redshift cluster using EMR.
    • Utilize SQS to collect the inbound sensor data, analyze the data from SQS with Amazon Kinesis and save the results to a Microsoft SQL Server RDS instance.
    • Utilize EMR to collect the inbound sensor data, analyze the data from EMR with Amazon Kinesis and save the results to DynamoDB.
  • Your customer wants to consolidate their log streams (access logs, application logs, security logs, etc.) in one single system. Once consolidated, the customer wants to analyze these logs in real time based on heuristics. From time to time, the customer needs to validate heuristics, which requires going back to data samples extracted from the last 12 hours. What is the best approach to meet your customer’s requirements?

    • Send all the log events to Amazon SQS. Set up an Auto Scaling group of EC2 servers to consume the logs and apply the heuristics.
    • Send all the log events to Amazon Kinesis and develop a client process to apply heuristics on the logs (can perform real-time analysis and stores data for 24 hours, which can be extended to 7 days)
    • Configure Amazon CloudTrail to receive custom logs and use EMR to apply heuristics to the logs (CloudTrail is only for auditing)
    • Set up an Auto Scaling group of EC2 syslogd servers, store the logs on S3 and use EMR to apply heuristics on the logs (EMR is for batch analysis)
  • You require the ability to analyze a customer’s clickstream data on a website so they can do behavioral analysis. Your customer needs to know what sequence of pages and ads their customer clicked on. This data will be used in real time to modify the page layouts as customers click through the site to increase stickiness and advertising click-through. Which option meets the requirements for capturing and analyzing this data?

    • Log clicks in weblogs by URL, store to Amazon S3, and then analyze with Elastic MapReduce
    • Push web clicks by session to Amazon Kinesis and analyze behavior using Kinesis workers
    • Write click events directly to Amazon Redshift and then analyze with SQL
    • Publish web clicks by session to an Amazon SQS queue, then periodically drain these events to Amazon RDS and analyze with SQL
  • Your social media monitoring application uses a Python app running on AWS Elastic Beanstalk to ingest tweets, Facebook updates and RSS feeds into an Amazon Kinesis stream. A second AWS Elastic Beanstalk app generates key performance indicators into an Amazon DynamoDB table and powers a dashboard application. What is the most efficient option to prevent any data loss for this application?

    • Use AWS Data Pipeline to replicate your DynamoDB tables into another region.
    • Use the second AWS Elastic Beanstalk app to store a backup of Kinesis data onto Amazon Elastic Block Store (EBS), and then create snapshots from your Amazon EBS volumes.
    • Add a second Amazon Kinesis stream in another Availability Zone and use AWS Data Pipeline to replicate data across Kinesis streams.
    • Add a third AWS Elastic Beanstalk app that uses the Amazon Kinesis S3 connector to archive data from Amazon Kinesis into Amazon S3.
  • You need to replicate API calls across two systems in real time. What tool should you use as a buffer and transport mechanism for API call events?

    • AWS SQS
    • AWS Lambda
    • AWS Kinesis (AWS Kinesis is an event stream service. Streams can act as buffers and transport across systems for in-order programmatic events, making it ideal for replicating API calls across systems)
    • AWS SNS
  • You need to perform ad-hoc business analytics queries on well-structured data. Data comes in constantly at a high velocity. Your business intelligence team can understand SQL. What AWS service(s) should you look to first?

    • Kinesis Firehose + RDS
    • Kinesis Firehose + RedShift (Kinesis Firehose provides a managed service for aggregating streaming data and inserting it into RedShift. RedShift also supports ad-hoc queries over well-structured data using a SQL-compliant wire protocol, so the business team should be able to adopt this system easily)
    • EMR using Hive
    • EMR running Apache Spark

AWS CloudFormation


  • How to configure and troubleshoot a VPC inside and out, including basic IP subnetting. VPC is arguably one of the more complex components of AWS, and you cannot pass this exam without a thorough understanding of it.

  • The difference in use cases between Simple Workflow Service (SWF), Simple Queue Service (SQS), and Simple Notification Service (SNS).

  • How an Elastic Load Balancer (ELB) interacts with Auto Scaling groups in a high-availability deployment.

  • How to properly secure an S3 bucket in different usage scenarios.

  • When it would be appropriate to use either EBS-backed or ephemeral instances.

  • A basic understanding of CloudFormation.

  • How to properly use various EBS volume configurations and snapshots to optimize I/O performance and data durability.

  • Does S3 provide read-after-write consistency?

    • No, not for any region
    • Yes, but only for certain regions
    • Yes, but only for certain regions and for new objects
    • Yes, for all regions

Answer: C.

  • What is the maximum size of a single S3 object?

    • There is no such limit
    • 5 TB
    • 5 GB
    • 100 GB

Answer: B.
  • Is data stored in S3 always encrypted?

    • Yes, S3 always encrypts data for security
    • No, there is no such feature
    • Yes, but only when right APIs are called
    • Yes, but only in Gov Cloud datacenters

Answer: C. S3 does not encrypt stored data by default. With the Server-Side Encryption feature, if the proper headers are passed (in the REST request), S3 first encrypts the data and then stores the encrypted data.
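As a sketch of the header mechanics this answer describes: the documented SSE-S3 request header is `x-amz-server-side-encryption: AES256`. The helper below is a hypothetical illustration — it only builds the headers for a REST PUT and performs no real S3 call:

```python
# Minimal illustration of the SSE request header discussed above.
# The header name and value are the documented SSE-S3 ones; the helper
# function itself is hypothetical and makes no real S3 request.

def put_object_headers(sse: bool = True) -> dict:
    """Build headers for an S3 REST PUT, optionally requesting SSE-S3."""
    headers = {"Content-Type": "application/octet-stream"}
    if sse:
        # Without this header, classic S3 stored the object unencrypted.
        headers["x-amz-server-side-encryption"] = "AES256"
    return headers

print(put_object_headers())
```

(Note that S3 has since changed: new objects are now encrypted with SSE-S3 by default, so the quiz answer reflects the service’s earlier behavior.)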

  • What is true for S3 buckets (select multiple if more than one is true)?
    • Bucket namespace is shared and is global among all AWS users.
    • Bucket names can contain alpha numeric characters
    • Buckets are associated with a region, and all data in a bucket resides in that region
    • Buckets can be transferred from one account to another through API

Answers: A, B, C. Bucket namespace is global, which means that if someone creates a bucket called “pics” nobody else can create a bucket of the same name. Bucket names can contain alphanumeric characters which means we can have a bucket named “cloudthat1234”. And when we create a bucket, we have to select a region for the bucket, and all data stored in that bucket is physically stored in that region only (although the data can be accessed from anywhere in the world with proper credentials).
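The naming rules in this answer can be sketched as a small validator. This checks only an illustrative subset (3–63 characters; lowercase letters, digits, and hyphens; starting and ending alphanumeric) — the full S3 rules have additional caveats around dots and IP-address-like names:

```python
import re

# Simplified subset of the S3 bucket-naming rules described above.
# Not the authoritative validator: real S3 also allows dots, and
# forbids names that look like IP addresses, among other rules.
_BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    """True if name fits the simplified DNS-compliant pattern."""
    return bool(_BUCKET_RE.match(name))

print(is_valid_bucket_name("cloudthat1234"))  # True
print(is_valid_bucket_name("Pics"))           # False (uppercase)
print(is_valid_bucket_name("ab"))             # False (too short)
```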

  • Can EBS always tolerate an Availability Zone failure?
    • No, an EBS volume is stored in a single Availability Zone
    • Yes, EBS volume has multiple copies so it should be fine
    • Depends on how it is setup
    • Depends on the Region where EBS volume is initiated

Answer: A. One of the known limitations of EBS is that all the data of a single volume lives in a single Availability Zone, so it cannot withstand an Availability Zone failure.

  • Which of the following Auto scaling CANNOT do (select multiple if more than one is true)?
    • Start up EC2 instances when CPU utilization is above threshold
    • Release EC2 instances when CPU utilization is below threshold
    • Increase the instance size when utilization is above threshold
    • Add more Relational Database Service (RDS) read replicas when utilization is above threshold

Answers: C, D. Auto Scaling can launch and terminate EC2 instances based on utilization thresholds, but it cannot resize a running instance in place, and it does not manage RDS read replicas.

  • Which of the following benefits does adding Multi-AZ deployment in RDS provide (choose multiple if more than one is true)?
    • MultiAZ deployed database can tolerate an Availability Zone failure
    • Decrease latencies if app servers accessing database are in multiple Availability zones
    • Make database access times faster for all app servers
    • Make database more available during maintenance tasks

Answers: A, D. A Multi-AZ deployment makes the database tolerant of a single Availability Zone failure: with automatic failover, the standby becomes primary and the application keeps accessing the database. It also increases availability during maintenance, since maintenance is performed on the standby first; the standby is then promoted, and the old primary is updated.

  • What happens to data when an EC2 instance terminates (select multiple if more than one is true)?

    • For an EBS-backed AMI, the EBS volume with the operating system on it is preserved
    • For an EBS-backed AMI, any volume attached other than the OS volume is preserved
    • All the snapshots of the EBS volume with the operating system are preserved
    • For an S3-backed AMI, all the data in the local (ephemeral) hard drive is deleted

Answer: C.
  • For an EC2 instance launched in a private subnet in a VPC, which of the following are the options for it to be able to connect to the internet (assume security groups have the proper ports open)?

    • Simply attach an Elastic IP
    • If there is also a public subnet in the same VPC, an ENI can be attached to the instance with an IP address from the public subnet’s range
    • If there is a public subnet in the same VPC with a NAT instance attached to an internet gateway, then a route can be configured from the instance to the NAT
    • There is no way for an instance in a private subnet to talk to the internet

Answer: C.
  • When an ELB is set up, what is the best way to route a website’s traffic to it?

    • Resolve the ELB name to an IP address and point the website to that IP address
    • There is no direct way to do so; Route53 has to be used
    • Generate a CNAME record for the website pointing to the DNS name of the ELB

Answer: C.
  • Which of the following is a method for bidding on unused EC2 capacity based on the current spot price?

    • a) On-Demand Instance
    • b) Reserved Instances
    • c) Spot Instance
    • d) All of the mentioned

Answer: C. This feature offers a significantly lower price, but it varies over time or may not be available when there is no excess capacity.

  • Point out the wrong statement:
    • a) The standard instances are not suitable for standard server applications
    • b) High memory instances are useful for large data throughput applications such as SQL Server databases and data caching and retrieval
    • c) FPS is exposed as an API that sorts transactions into packages called Quick Starts that makes it easy to implement
    • d) None of the mentioned

Answer: A. Explanation: The standard instances are deemed to be suitable for standard server applications.

  • Which of the following instances has an hourly rate with no long-term commitment?
    • a) On-Demand Instance
    • b) Reserved Instances
    • c) Spot Instance
    • d) All of the mentioned

Answer: A. Explanation: Pricing varies by zone, instance, and pricing model.

  • Which of the following is a batch processing application?
    • a) IBM sMash
    • b) IBM WebSphere Application Server
    • c) Condor
    • d) Windows Media Server

Answer: C. Explanation: Condor is a powerful, distributed batch-processing system that lets you use otherwise idle CPU cycles in a cluster of workstations.

  • Point out the correct statement:
    • a) Security can be set through passwords, Kerberos tickets, or certificates
    • b) Secure access to your EC2 AMIs is controlled by passwords, Kerberos, and X.509 certificates
    • c) Most of the system image templates that Amazon AWS offers are based on Red Hat Linux
    • d) All of the mentioned

Answer: D. Explanation: Hundreds of free and paid AMIs can be found on AWS.

  • How many EC2 service zones or regions exist?
    • a) 1
    • b) 2
    • c) 3
    • d) 4

Answer: D. Explanation: There are four different EC2 service zones or regions.

  • The Amazon ______ cloud-based storage system allows you to store data objects ranging in size from 1 byte up to 5 GB.
    • a) S1
    • b) S2
    • c) S3
    • d) S4

Answer: C. Explanation: In S3, storage containers are referred to as buckets.

  • Which of the following can be done with S3 buckets through the SOAP and REST APIs?
    • a) Upload new objects to a bucket and download them
    • b) Create, edit, or delete existing buckets
    • c) Specify where a bucket should be stored
    • d) All of the mentioned

Answer: D. Explanation: The REST API is preferred to the SOAP API, because it is easier to work with large binary objects with REST.

  • Which of the following operations retrieves the newest version of the object?
    • a) PUT
    • b) GET
    • c) POST
    • d) COPY

Answer: B. Explanation: Versioning also can be used for preserving data and for archiving purposes.

  • Which of the following statements is wrong about Amazon S3?
    • a) Amazon S3 is highly reliable
    • b) Amazon S3 provides large quantities of reliable storage that is highly protected
    • c) Amazon S3 is highly available
    • d) None of the mentioned

Answer: C. Explanation: S3 excels in applications where storage is archival in nature.

  • A company wants to use Amazon S3 and Amazon Glacier as part of their backup and archive infrastructure, accessed through third-party software. Which of the following approaches will they use to limit access to a particular bucket, “company-backup”, in Amazon S3?

    • A. a custom bucket policy limited to the S3 API for the Glacier archive “company-backup”
    • B. a custom bucket policy limited to the S3 API in “company-backup”
    • C. a custom IAM user policy limited to the S3 API for the Glacier archive “company-backup”
    • D. a custom IAM user policy limited to the S3 API in “company-backup”

Answer: D.
  • Amazon S3 is which type of storage service?

    • Object
    • Block
    • Simple
    • Secure

CORRECT ANSWER - Object. Object storage is more scalable than traditional file system storage, which is typically what users think about when comparing storage to databases for data persistence.

  • Which AWS storage service assists S3 with transferring data?
    • CloudFront
    • AWS Import/Export
    • DynamoDB
    • ElastiCache

CORRECT ANSWER - AWS Import/Export. AWS Import/Export accelerates moving large amounts of data into and out of AWS using portable storage devices. AWS transfers your data directly onto and off of storage devices by using Amazon’s internal network and avoiding the Internet.

  • Object storage systems store files in a flat organization of containers called what?

    • Baskets
    • Brackets
    • Clusters
    • Buckets

CORRECT ANSWER - Buckets. Instead of organizing files in a directory hierarchy, object storage systems store files in a flat organization of containers known as buckets in Amazon S3.
  • Amazon S3 offers encryption services for which types of data? (a) data in flight (b) data at relax (c) data at rest (d) data in motion

    • a and c
    • b and d

CORRECT ANSWER - a and c. Amazon offers encryption services for data in flight and data at rest.
  • Amazon S3 has how many pricing components?

    • 4
    • 5
    • 3
    • 2

CORRECT ANSWER - 3. Amazon S3 offers three pricing components: storage (per GB per month), data transfer in or out (per GB per month), and requests (per x thousand requests per month).
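Those three components combine into a simple monthly bill. The sketch below uses made-up placeholder prices — not AWS’s actual rates — purely to show the arithmetic:

```python
# Toy S3 bill from the three pricing components named above.
# All prices here are hypothetical placeholders, not AWS rates.

def monthly_s3_cost(storage_gb, transfer_out_gb, requests,
                    storage_price=0.023,    # $/GB-month (assumed)
                    transfer_price=0.09,    # $/GB transferred out (assumed)
                    request_price=0.005):   # $ per 1,000 requests (assumed)
    """Sum the storage, transfer, and request components of the bill."""
    return (storage_gb * storage_price
            + transfer_out_gb * transfer_price
            + (requests / 1000) * request_price)

# 100 GB stored, 10 GB out, 20,000 requests:
print(round(monthly_s3_cost(100, 10, 20_000), 2))  # 3.3
```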

  • What does RRS stand for when referring to the storage option in Amazon S3 that offers a lower level of durability at a lower storage cost?
    • Reduced Reaction Storage
    • Redundant Research Storage
    • Regulatory Resources Storage
    • Reduced Redundancy Storage

CORRECT ANSWER - Reduced Redundancy Storage. Non-critical data, such as transcoded media or image thumbnails, can be easily reproduced using the Reduced Redundancy Storage option. Objects stored using the RRS option have less redundancy than objects stored using standard Amazon S3 storage.

  • Object storage systems require less _____ than file systems to store and access files.
    • Big data
    • Metadata
    • Master data
    • Exif data

CORRECT ANSWER - Metadata. Object storage systems are typically more efficient because they reduce the overhead of managing file metadata by storing the metadata with the object. This means object storage can be scaled out almost endlessly by adding nodes.

  • True or False. S3 objects are only accessible from the region they were created in.
    • True
    • False

CORRECT ANSWER - False. While S3 objects are created in a specific region, they can be accessed from anywhere.

  • Amazon S3 offers developers which combination?
    • High scalability and low latency data storage infrastructure at low costs.
    • Low scalability and high latency data storage infrastructure at high costs.
    • High scalability and low latency data storage infrastructure at high costs.
    • Low scalability and high latency data storage infrastructure at low costs.

CORRECT ANSWER - High scalability and low latency data storage infrastructure at low costs. Amazon S3 offers software developers a reliable, highly scalable and low-latency data storage infrastructure at very low costs. S3 provides an interface that can be used to store and retrieve any amount of data from anywhere on the Web.

  • Why is a bucket policy necessary?
    • To allow bucket access to multiple users.
    • To grant or deny accounts to read and upload files in your bucket.
    • To approve or deny users the option to add or remove buckets.
    • All of the above

CORRECT ANSWER - To grant or deny accounts the ability to read and upload files in your bucket. A bucket policy is needed to grant or deny accounts the ability to read and upload files in your bucket.

Q.1 What is the difference between stateful and stateless services?

Definition in terms of AWS services - Let’s say that you have a web server on your EC2 instance listening on port 80. If you allow inbound connections to that port in your security group, you can successfully access it even though you haven’t explicitly allowed the outbound communication in the other direction. That’s thanks to the stateful nature of the security group. Now imagine that you open up port 80 inbound in the Network ACL instead. Since the Network ACL is stateless, you will be able to reach the server, but the response from the server will not get back to you unless explicitly allowed in the outbound section of the Network ACL configuration.
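The behavior described above can be modeled with a toy connection-tracking sketch. This is not the AWS API — just an illustration of “stateful remembers connections, stateless evaluates each direction independently” (real return traffic also involves ephemeral ports, which this toy ignores):

```python
# Toy model contrasting a stateful security group with a stateless NACL.

class StatefulFilter:
    """Security-group-like: an allowed inbound connection implies the
    response is allowed back out, via a connection-tracking table."""
    def __init__(self, inbound_ports):
        self.inbound_ports = set(inbound_ports)
        self.tracked = set()            # remembered connections

    def allow_in(self, port):
        ok = port in self.inbound_ports
        if ok:
            self.tracked.add(port)      # remember the connection
        return ok

    def allow_out(self, port):
        return port in self.tracked     # replies to tracked traffic pass


class StatelessFilter:
    """NACL-like: inbound and outbound rule lists are independent."""
    def __init__(self, inbound_ports, outbound_ports):
        self.inbound_ports = set(inbound_ports)
        self.outbound_ports = set(outbound_ports)

    def allow_in(self, port):
        return port in self.inbound_ports

    def allow_out(self, port):
        return port in self.outbound_ports


sg = StatefulFilter(inbound_ports=[80])
sg.allow_in(80)
print(sg.allow_out(80))    # True: response allowed with no outbound rule

nacl = StatelessFilter(inbound_ports=[80], outbound_ports=[])
nacl.allow_in(80)
print(nacl.allow_out(80))  # False: reply dropped until outbound is opened
```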

Definition in terms of a web service:

  • A stateful web service will keep track of the “state” of a client’s connection and data over several requests. So for example, the client might log in, select a user’s account data, update their address, attach a photo, and change the status flag, then disconnect.

In a stateless web service, the server doesn’t keep any information from one request to the next. The client needs to do its work in a series of simple transactions, and the client has to keep track of what happens between requests. So in the above example, the client needs to do each operation separately: connect and update the address, then disconnect; connect and attach the photo, then disconnect; connect and change the status flag, then disconnect.

  • Amazon SWF is designed to help users
    • a. Manage user identification and authorization
    • b. Coordinate synchronous and asynchronous tasks
    • c. Secure their VPCs
    • d. Help users store file-based objects

Answer: B.

In RDS, what is the maximum value I can set for my backup retention period?

a. 15 days b. 30 days c. 35 days d. 45 days c

True or False. Automated backups are enabled by default for new DB Instance? True

Amazon RDS does not currently support increasing storage on a ___ DB instance.

a. MySQL b. Aurora c. Oracle d. MSSQL d

In what circumstances would I choose provisioned IOPS in RDS over standard storage?

a. If you use production online transaction processing b. If you have workloads that are not sensitive to latency/lag c. If your business was trying to save money d. If this was a test DB a

Amazon S3 is

a. Object Based Storage b. Block Based Storage c. A Data Warehouse Solution d. Suitable for data archival, not frequently used files. a

In S3 with RRS the availability is

a. 99.999999999% b. 100% c. 99% d. 99.99% d

Amazon’s EBS volumes are

a. Object based storage b. Block based storage c. Encrypted by default d. Not suitable for databases b

a. RDS b. S3 c. Glacier d. EBS d

In S3 the durability of my files is

a. 99.99% b. 99.999999999% c. 99% d. 100% b
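The eleven-nines durability figure can be unpacked with quick arithmetic; it reproduces AWS’s oft-quoted framing that if you store 10,000,000 objects you can, on average, expect to lose a single object once every 10,000 years:

```python
# Back-of-the-envelope reading of S3's 99.999999999% durability figure.

DURABILITY = 0.99999999999
annual_loss_prob = 1 - DURABILITY        # ~1e-11 per object per year

objects = 10_000_000
expected_losses_per_year = objects * annual_loss_prob
years_per_single_loss = 1 / expected_losses_per_year

print(round(years_per_single_loss))      # 10000
```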

Can you access Amazon EBS Snapshots?

a. Yes, through the AWS APIs/CLI & AWS Console b. No c. Depends on the region d. EBS does not have snapshot functionality a

A _____ is a document that provides a formal statement of one or more permissions.

a. Policy b. User c. Group d. Role a

In a default VPC, all Amazon EC2 instances are assigned 2 IP addresses at launch, what are these?

a. Private IP and Public IP b. Public IP and Secret IP c. Elastic IP and Public IP d. IPv6 and Elastic IP a

If an Amazon EBS volume is the root device of an instance, can I detach it without stopping the instance?

a. Yes b. No b

If you want your application to check whether a request generated an error, then you look for an ____ node in the response from the Amazon RDS API

a. Incorrect b. Error c. False d. True b

True or False. AWS recommends providing EC2 instances with credentials so they can access other resources (such as S3 buckets) instead of assigning roles.

False

Can I move a reserved instance from one region to another?

a. Yes b. Only in the US c. No d. Depends on the region c

In S3 RRS, the durability of my files is

a. 99.99% b. 99.99999999% c. 99% d. 100% a

In RDS, changes to the backup window take effect

a. After 30 mins b. The next day c. Immediately d. you cannot back up in RDS c

In RDS, what is the maximum size for a Microsoft SQL Server DB Instance with SQL server Express edition?

a. 10GB/db b. 100GB/db c. 1TB/db d. 2TB/db a

In S3, what does RRS stand for?

a. Relational Reduced Storage b. Reactive Replicating Storage c. Reduced Replication Storage d. Reduced Redundancy Storage d

Can I “force” a failover for any RDS instance that has Multi-AZ configured?

a. Yes b. No c. Only for Oracle RDS instances a

What does EBS stand for?

a. Energetic Block Storage b. Elastic Based Storage c. Equal Block Storage d. Elastic Block Storage d

True or False. You can conduct your own vulnerability scans within your own VPC without alerting AWS first.

False

True or False. Reserved instances are available for multi-AZ deployments. True

True or False. Amazon’s Glacier service is a Content Distribution Network which integrates with S3.

False

MySQL installations default to port number

a. 1433 b. 3306 c. 3389 d. 80 b

If an Amazon EBS volume is an additional partition (ie. not the root volume), can I detach it without stopping the instance?

a. Yes, but it may take some time b. No, you still need to stop the instance a

Every user you create in the IAM system starts with ____

a. Full permissions b. Partial permissions c. No permissions c

True or False. You can RDP or SSH into an RDS instance to see what is going on with the operating system.

False

True or False. When creating a new security group, all inbound traffic is allowed by default.

False

True or False. Amazon recommends that you leave all security groups in web facing subnets open on port 22 to 0.0.0.0/0 CIDR, that way you can connect wherever you are in the world.

False

What are the 4 level of AWS premium support?

a. It’s an IAAS platform, there is no support b. Free, Bronze, Silver, Gold c. Basic, Startup, Business, Enterprise d. Basic, Developer, Business, Enterprise d

True or False. As the AWS is PCI DSS 1.00 compliant, I can immediately deploy a website to it that takes credit card details. I do not need any kind of delta accreditation from a QSA.

False

To help manage your Amazon EC2 instances, you can assign your own metadata in the form of

a. Wildcards b. Certificates c. Tags d. Notes c

Which statement best describes Availability Zones

a. Content distribution network which is used to distribute content to users b. A restricted area designed specifically for creating VPCs c. Two zones containing compute resources that are designed to maintain synchronized copies of data within each other d. Distinct locations from within an AWS region that are engineered to be isolated from failures d

True or False. The service to allow Big Data Processing on the AWS platform is known as AWS “Elastic Big Data”.

False

Individual instances are provisioned in

a. Regions only, you cannot choose anything below this b. Availability Zones c. Global b

True or False. When using a custom VPC and placing an EC2 instance into a public subnet, it will automatically be internet accessible (ie. you don’t need to apply an elastic IP or ELB to the instance).

False

What is the underlying Hypervisor for EC2?

a. Hyper-V b. ESX c. Xen d. OVM c

True or False. The AWS platform is certified PCI DSS 1.0 compliant.

True

The AWS platform consists of how many regions currently?

a. 5 b. 10 c. 11 d. 12 c

How many copies of my data does RDS - Aurora store by default?

a. 3 b. 6 c. 2 d. 1 b

What is the difference between Elastic Beanstalk and CloudFormation?

a. Elastic Beanstalk is a monitoring tool to view performance of your AWS resources. CloudFormation is an automated provisioning engine to deploy entire cloud environments via JSON. b. Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring based on the code you upload to it. CloudFormation is an automated provisioning engine to deploy entire cloud environments via JSON. c. There is no difference. d. Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring based on the code you upload to it. CloudFormation is a security service designed to harden your cloud against an attack, like a DDOS. b

True or False. In RDS, you are responsible for maintaining OS & application security patching, antivirus, etc.

False

What is the maximum response time for a Business Level Premium support case?

a. 1 day b. 12 hrs c. 15 mins d. 1 hr d

True or False. When I create a new security group, all outbound traffic is allowed by default.

True

What types of RDS databases are currently available

a. Aurora, MySQL, MSSQL, Cassandra b. PostGres, Cassandra, MongoDB, Aurora c. Oracle, MSSQL, MySQL, Cassandra d. Oracle, MSSQL, MySQL, Postgres d

I can enable multi-factor authentication by using

a. RDS b. IAM c. DynamoDB d. Account Settings b

AWS DNS service is known as

a. CloudDNS b. CloudFront c. CloudTrail d. Route53 d

Auditing user access/API calls, etc., across the entire AWS estate can be achieved using

a. CloudFront b. CloudWatch c. CloudFlare d. CloudTrail d

EC2 instances are launched from Amazon Machine Images (AMI). An AMI can

a. Be used to launch EC2 instances in any AWS region b. Only launch EC2 instances in the same Country as the AMI is stored c. Only launch EC2 instances in the same AWS region as the AMI is stored d. Only launch EC2 instances in the same AWS AZ as the AMI is stored c

What action is required to establish an Amazon Virtual Private Cloud (VPC) VPN?

a. Assign a static internet-routable IP address to an Amazon VPC customer gateway b. Use a dedicated network address translation instance in the public subnet c. Modify the main route table to allow traffic to a network address translation instance a

You are working with a customer who has 10 TB of archival data that they want to migrate to Glacier. The customer has a 1-Mbps connection to the internet. Which service or feature provides the fastest method of getting data into Amazon Glacier?

a. Glacier multipart upload b. AWS Storage Gateway c. VM Import/Export d. AWS Import/Export d

An auto-scaling group spans 3 AZs and has 4 running EC2 instances. When auto-scaling needs to terminate an instance by default, autoscaling will (select 2):

a. Allow >= 5mins for Windows/Linux shutdown scripts to complete before terminating b. Terminate the instance with the least active network connections c. Send an SNS notification if configured to do so d. Terminate an instance in the AZ which currently has 2 running instances e. Randomly select one of the 3 AZs and terminate an instance c d

You have a load balancer configured for VPC, and all back-end EC2 instances are in service. Your web browser is timing out when connecting to the load balancers’ DNS name. Which options are probable causes of this behavior? Choose 2

a. Load balancer was not configured to use a public subnet with an internet gateway configured b. EC2 instances do not have a dynamically allocated private IP address c. Security groups or network ACLs are not properly configured for web traffic d. Load balancer is not configured in a private subnet with a NAT instance e. VPC does not have a VGW configured a c

Instance 1 and 2 are running in two different subnets (A and B) of a VPC. Instance 1 is not able to ping instance 2. What are 2 possible reasons?

a. The routing table of subnet A has no target route to subnet B b. The security group attached to instance 2 does not allow inbound ICMP traffic c. The policy linked to the IAM role on instance 1 is not configured correctly d. The NACL on subnet B doesn’t allow outbound ICMP traffic b d

A company has an AWS account that contains 3 VPCs (dev, tst, prd) in the same region. Tst is peered to both prd and dev. All VPCs have non-overlapping CIDR blocks. The company wants to push minor releases from dev to prd to speed up time to market. Which of the following options helps accomplish this?

a. Create a new peering connection between prd and dev along with appropriate routes b. Create a new entry to prd in the dev route table using the peering connection as the target c. Attach a second gateway to dev. Add a new entry in the prd route table identifying the gateway as the target d. The VPCs have non-overlapping CIDR blocks in the same account. The route tables contain local routes for all VPCs a

You have a VPC with 1 private subnet and 1 public subnet with a NAT server. You are creating a group of EC2 instances that configure themselves at startup via downloading a bootstrapping script from S3 that deploys an application via GIT. Which setup provides the highest level of security?

a. EC2 instances in private subnet, no EIPs, route outgoing traffic via the NAT b. EC2 instances in public subnet, no EIPs, route outgoing traffic via the Internet Gateway (IGW) c. EC2 instances in private subnet, assign EIPs, route outgoing traffic via the Internet Gateway (IGW) d. EC2 instances in public subnet, assign EIPs, route outgoing traffic via the NAT a

Out of the striping options available for EBS volumes, which one has the following disadvantage: ‘Doubles the amount of I/O required from the instance to EBS compared to RAID 0, because you’re mirroring all writes to a pair of volumes, limiting how much you can stripe.’?

a. Raid 0 b. RAID 1+0 (RAID 10) c. Raid 1 d. Raid 2 b
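The disadvantage quoted in the question is plain write-amplification accounting: mirroring (RAID 1/10) issues two physical writes for every logical write, while striping alone (RAID 0) issues one. A toy sketch, not a storage benchmark:

```python
# Illustration of the RAID 10 disadvantage quoted above: mirrored
# levels double the physical writes per logical write.

def physical_writes(raid_level: str, logical_writes: int) -> int:
    """Physical writes issued to EBS for a number of logical writes."""
    mirrored = raid_level in {"RAID1", "RAID10"}
    return logical_writes * (2 if mirrored else 1)

print(physical_writes("RAID0", 100))   # 100
print(physical_writes("RAID10", 100))  # 200: every write hits both mirrors
```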

An application requires OS privileges on a database host. Which is the best choice for a highly available DB?

a. Amazon EC2 instances in a replication configuration utilizing a single AZ b. A standalone Amazon EC2 instance c. Amazon EC2 instances in a replication configuration utilizing two different AZs d. Amazon RDS in a Multi-AZ configuration c

An EBS-backed (EBS root) EC2 instance is stopped; what happens to the data on any ephemeral store volumes?

a. Data is automatically saved in an EBS volume. b. Data is unavailable until the instance is restarted. c. Data will be deleted and will no longer be accessible. d. Data is automatically saved as an EBS snapshot. c

An organization is planning to use AWS for their production roll out. The organization wants to implement automation for deployment such that it will automatically create a LAMP stack, download the latest PHP installable from S3 and setup the ELB. Which of the below mentioned AWS services meets the requirement for making an orderly deployment of the software?

a. AWS Elastic Beanstalk b. AWS Cloudfront c. AWS Cloudformation d. AWS DevOps a

From which services can I block incoming/outgoing IPs?

a. Security Groups b. DNS c. ELB? d. VPC subnet? e. IGW? f. NACL f

Which 2 services provide Native encryption?

a. Glacier b. EC2 c. IAM d. Storage Gateway a d

You are putting together a WordPress site for a local charity, using a combination of Route53, Elastic Load Balancers, EC2 & RDS. You launch your EC2 instance, download WordPress and set up the configuration file’s connection string so that it can communicate with RDS. When you browse to your URL, however, nothing happens. Which of the following could NOT be the cause of this?

a. You have forgotten to open port 80/443 on the security group in which the EC2 instance is placed. b. Your elastic load balancer has a health check which is checking a webpage that does not exist; therefore your EC2 instance is not in service. c. You have not configured an ALIAS for your A record to point to your elastic load balancer d. You have locked port 22 down to your specific IP address, therefore users cannot access your site using HTTP/HTTPS d

Which feature helps optimize performance for a compute cluster that requires low inter-node latency?

a. Multiple Availability Zones b. AWS Direct Connect c. EC2 Dedicated Instances d. Placement Groups e. VPC private subnets d

You need to design a VPC for a web application consisting of an ELB, a fleet of web application servers, and an RDS DB. The entire infrastructure must be distributed over 2 AZs. Which VPC configuration works while assuring the DB is not available from the internet?

a. One Public Subnet for ELB one Public Subnet for the web-servers, and one private subnet for the DB b. One Public Subnet for ELB two Private Subnets for the web-servers, and two private subnets for the RDS c. Two Public Subnets for ELB two private Subnet for the web-servers, and two private subnet for the RDS d. Two Public Subnets for ELB two Public Subnet for the web-servers, and two public subnets for the RDS b

An organization has established an Internet-based VPN connection between their on-premises data center and AWS. They are considering migrating from VPN to AWS DirectConnect. Which operational concern should drive an organization to consider switching from an Internet-based VPN connection to AWS DirectConnect?

a. AWS DirectConnect provides greater redundancy than an Internet-based VPN connection. b. AWS DirectConnect provides greater resiliency than an Internet-based VPN connection. c. AWS DirectConnect provides greater bandwidth than an Internet-based VPN connection. d. AWS DirectConnect provides greater control of network provider selection than an Internet-based VPN connection. c

Your web application is behind an ELB with Auto Scaling. After a scale-out event, the new instances receive very little traffic while the previously running instances remain heavily loaded. What is the most likely cause?

a. The ELB DNS record’s TTL is set too high. b. The new instances are not being added to the ELB during the Auto Scaling cooldown period. c. The website uses the dynamic content feature of Amazon CloudFront which is keeping connections alive to the ELB. d. The ELB is continuing to send requests with previously established sessions to the same backend instances rather than spreading them out to the new instances. d

As an application has increased in popularity, reports of performance issues have grown. The current configuration initiates scaling actions based on Avg CPU utilization; however during reports of slowness, CloudWatch graphs have shown that Avg CPU remains steady at 40 percent. This is well below the alarm threshold of 60 percent. Your developers have discovered that, due to the unique design of the application, performance degradation occurs on an instance when it is processing more than 200 threads. What is the best way to ensure that your application scales to match demand?

a. Launch two to six additional instances outside of the AutoScaling group to handle the additional load. b. Populate a custom CloudWatch metric for concurrent sessions and initiate scaling actions based on that metric instead of on CPU use. c. Empirically determine the expected CPU use for 200 concurrent sessions and adjust the CloudWatch alarm threshold to be that CPU use. d. Add a script to each instance to detect the number of concurrent sessions. If the number of sessions remains over 200 for five minutes, have the instance increase the desired capacity of the AutoScaling group by one. b
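
The winning approach (option b) hinges on publishing a metric CloudWatch doesn't collect by default. A minimal sketch of the request shape follows; the namespace, metric, and dimension names are hypothetical, and with boto3 the dict would be passed to `boto3.client("cloudwatch").put_metric_data(**params)` (the call itself is omitted so the sketch stays self-contained).

```python
# Sketch: publish a custom "concurrent sessions" metric so a scaling alarm
# can fire on sessions instead of CPU. Names below are hypothetical.

def build_session_metric(instance_id: str, session_count: int) -> dict:
    """Build a PutMetricData request body for the current session count."""
    return {
        "Namespace": "MyApp",  # hypothetical custom namespace
        "MetricData": [
            {
                "MetricName": "ConcurrentSessions",
                "Dimensions": [
                    {"Name": "InstanceId", "Value": instance_id},
                ],
                "Value": float(session_count),
                "Unit": "Count",
            }
        ],
    }

params = build_session_metric("i-0123456789abcdef0", 187)
```

A CloudWatch alarm on `ConcurrentSessions > 200` would then drive the Auto Scaling policy directly off the resource that actually degrades.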

Your customer wants to consolidate their log streams (access logs, application logs, security logs, etc.) into one single system. Once consolidated, the customer wants to analyze these logs in real time based on heuristics. From time to time, the customer needs to validate the heuristics, which requires going back to data samples extracted from the last 12 hours. What is the best approach to meet your customer's requirements?

a. Send all the log events to Amazon SQS. Set up an Auto Scaling group of EC2 servers to consume the logs and apply the heuristics. b. Send all the log events to Amazon Kinesis; develop a client process to apply heuristics on the logs. c. Configure Amazon CloudTrail to receive custom logs; use EMR to apply heuristics to the logs. d. Set up an Auto Scaling group of EC2 syslogd servers, store the logs on S3, and use EMR to apply heuristics to the logs. b
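
Option b works because Kinesis retains records for replay (24 hours by default, extendable), which covers the 12-hour look-back, while SQS deletes messages once consumed. A minimal producer-side sketch, with a hypothetical stream name; with boto3 the dict would go to `boto3.client("kinesis").put_record(**record)`:

```python
import json

def build_log_record(source: str, event: dict) -> dict:
    """Build a PutRecord request; events from the same source share a
    partition key and therefore land on the same shard in order."""
    return {
        "StreamName": "consolidated-logs",  # hypothetical stream name
        "PartitionKey": source,
        "Data": json.dumps(event).encode("utf-8"),
    }

record = build_log_record("web-01", {"level": "INFO", "msg": "GET /index 200"})
```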

Which of the following is part of the failover process for a Multi-Availability Zone Amazon Relational Database Service (RDS) instance?

a. The failed RDS DB instance reboots. b. The IP of the primary DB instance is switched to the standby DB instance. c. The DNS record for the RDS endpoint is changed from primary to standby. d. A new DB instance is created in the standby availability zone. c

To be prepared for a security assessment, an organization should implement which two configuration management practices? Choose 2 answers

a. Determine whether remote administrative access is performed securely. b. Verify that all Amazon Simple Storage Service (S3) bucket policies and ACLs correctly implement your security policies. c. Determine whether unnecessary users and services have been identified on all Amazon-published AMIs. d. Verify that AWS Trusted Advisor has identified and disabled all unnecessary users and services on your Amazon Elastic Compute Cloud (EC2) instances. a b

You are tasked with moving a legacy application from a virtual machine running inside your datacenter to an Amazon VPC. Unfortunately, this app requires access to a number of on-premises services, and no one who configured the app still works for your company. Even worse, there is no documentation for it. What will allow the application running inside the VPC to reach back and access its internal dependencies without being reconfigured? Choose 3 answers

a. An AWS Direct Connect link between the VPC and the network housing the internal services. b. An Internet gateway to allow a VPN connection. c. An Elastic IP address on the VPC instance. d. An IP address space that does not conflict with the one on-premises. e. Entries in Amazon Route 53 that allow the instance to resolve its dependencies' IP addresses. f. A VM Import of the current virtual machine. a d f

A company is deploying a new two-tier web application in AWS. The company has limited staff and requires high availability, and the application requires complex queries and table joins. Which configuration provides the solution for the company’s requirements?

a. MySQL Installed on two Amazon EC2 Instances in a single Availability Zone b. Amazon RDS for MySQL with Multi-AZ c. Amazon ElastiCache d. Amazon DynamoDB b

You have a video transcoding application running on Amazon EC2. Each instance polls a queue to find out which video should be transcoded, and then runs a transcoding process. If this process is interrupted, the video will be transcoded by another instance based on the queuing system. You have a large backlog of videos which need to be transcoded and would like to reduce this backlog by adding more instances. You will need these instances only until the backlog is reduced. Which type of Amazon EC2 instances should you use to reduce the backlog in the most cost efficient way?

a. Reserved instances b. Spot instances c. Dedicated instances d. On-demand instances b

When an EC2 instance that is backed by an S3-based AMI is terminated, what happens to the data on the root volume?

a. Data is automatically saved as an EBS snapshot. b. Data is automatically saved as an EBS volume. c. Data is unavailable until the instance is restarted. d. Data is automatically deleted. d

Which procedure for backing up a relational database on EC2 that is using a set of RAIDed EBS volumes for storage minimizes the time during which the database cannot be written to and results in a consistent backup?

a. 1. Detach EBS volumes, 2. Start EBS snapshot of volumes, 3. Re-attach EBS volumes b. 1. Stop the EC2 Instance. 2. Snapshot the EBS volumes c. 1. Suspend disk I/O, 2. Create an image of the EC2 Instance, 3. Resume disk I/O d. 1. Suspend disk I/O, 2. Start EBS snapshot of volumes, 3. Resume disk I/O e. 1. Suspend disk I/O, 2. Start EBS snapshot of volumes, 3. Wait for snapshots to complete, 4. Resume disk I/O d

How can the domain’s zone apex, for example, “myzoneapexdomain.com”, be pointed towards an Elastic Load Balancer?

a. By using an Amazon Route 53 Alias record b. By using an AAAA record c. By using an Amazon Route 53 CNAME record d. By using an A record a
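
The reason option a wins: DNS forbids a CNAME at the zone apex, but a Route 53 alias A record is resolved server-side and is allowed there. A sketch of the change batch follows; the hosted zone IDs and ELB DNS name are hypothetical placeholders, and with boto3 the dict would be passed to `boto3.client("route53").change_resource_record_sets(**change)`:

```python
def build_apex_alias(zone_id: str, elb_dns: str, elb_zone_id: str) -> dict:
    """Build a ChangeResourceRecordSets request aliasing the apex to an ELB."""
    return {
        "HostedZoneId": zone_id,
        "ChangeBatch": {
            "Changes": [
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "myzoneapexdomain.com.",
                        "Type": "A",  # an alias record is typed A, not CNAME
                        "AliasTarget": {
                            # the load balancer's own canonical hosted zone ID
                            "HostedZoneId": elb_zone_id,
                            "DNSName": elb_dns,
                            "EvaluateTargetHealth": False,
                        },
                    },
                }
            ]
        },
    }

change = build_apex_alias(
    "Z111HYPOTHETICAL",                               # hypothetical zone ID
    "my-elb-1234.us-east-1.elb.amazonaws.com",        # hypothetical ELB DNS
    "ZELBHYPOTHETICAL",                               # hypothetical ELB zone ID
)
```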

Your company is getting ready to do a major public announcement of a social media site on AWS. The website is running on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL Extra Large DB Instance. The site performs a high number of small reads and writes per second and relies on an eventual consistency model. After comprehensive tests you discover that there is read contention on RDS MySQL. Which are the best approaches to meet these requirements? (Choose 2 answers)

a. Deploy an ElastiCache in-memory cache running in each availability zone b. Implement sharding to distribute load to multiple RDS MySQL instances c. Increase the RDS MySQL instance size and implement provisioned IOPS d. Add an RDS MySQL read replica in each availability zone a d

A customer has a single 3-TB volume on-premises that is used to hold a large repository of images and print layout files. This repository is growing at 500 GB a year and must be presented as a single logical volume. The customer is becoming increasingly constrained with their local storage capacity and wants an off-site backup of this data, while maintaining low-latency access to their frequently accessed data. Which AWS Storage Gateway configuration meets the customer requirements?

a. Gateway-Cached volumes with snapshots scheduled to Amazon S3 b. Gateway-Stored volumes with snapshots scheduled to Amazon S3 c. Gateway-Virtual Tape Library with snapshots to Amazon S3 d. Gateway-Virtual Tape Library with snapshots to Amazon Glacier a

Which of the following approaches provides the lowest cost for Amazon Elastic Block Store snapshots while giving you the ability to fully restore data?

a. Maintain two snapshots: the original snapshot and the latest incremental snapshot. b. Maintain a volume snapshot; subsequent snapshots will overwrite one another. c. Maintain a single snapshot; the latest snapshot is both incremental and complete. d. Maintain the most current snapshot, and archive the original and incrementals to Amazon Glacier. c

a. Amazon DynamoDB b. Amazon Redshift c. Amazon Kinesis d. Amazon Simple Queue Service a

Company "ABC" needs to deploy services to an AWS region which they have not previously used. The company currently has an AWS Identity and Access Management (IAM) role for the Amazon EC2 instances, which permits the instances to have access to Amazon DynamoDB. The company wants their EC2 instances in the new region to have the same privileges. How should the company achieve this?

a. Create a new IAM role and associated policies within the new region b. Assign the existing IAM role to the Amazon EC2 instances in the new region c. Copy the IAM role and associated policies to the new region and attach it to the instances d. Create an Amazon Machine Image (AMI) of the instance and copy it to the desired region using the AMI Copy feature b

A company is building software on AWS that requires access to various AWS services. Which configuration should be used to ensure that AWS credentials (i.e., Access Key ID/Secret Access Key combination) are not compromised?

a. Enable Multi-Factor Authentication for your AWS root account. b. Assign an IAM role to the Amazon EC2 instance. c. Store the AWS Access Key ID/Secret Access Key combination in software comments. d. Assign an IAM user to the Amazon EC2 Instance. b
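
Option b works because an instance with an IAM role receives short-lived credentials from the instance metadata service, so no key ever has to be stored in code or config. The endpoint below is the documented instance metadata path; the role name is a hypothetical example:

```python
# Sketch: where the SDK credential chain fetches an instance role's
# temporary keys. No access key appears anywhere in application code.

IMDS_BASE = "http://169.254.169.254/latest/meta-data"

def credentials_url(role_name: str) -> str:
    """URL polled (from on the instance itself) for the role's temporary
    AccessKeyId/SecretAccessKey/Token, which rotate automatically."""
    return f"{IMDS_BASE}/iam/security-credentials/{role_name}"

url = credentials_url("my-app-role")  # hypothetical role name
```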

A _____ is the concept of allowing (or disallowing) an entity such as a user, group, or role some type of access to one or more resources.

a. user b. AWS Account c. resource d. Permission d

For EBS volumes, which RAID configuration has the following disadvantage: "Doubles the amount of I/O required from the instance to EBS compared to RAID 0, because you're mirroring all writes to a pair of volumes, limiting how much you can stripe"?

a. RAID 0 b. RAID 1+0 [RAID 10] c. RAID 1 d. RAID 5 c

Which DNS name can only be resolved within Amazon EC2?

a. Internal DNS Name b. External DNS Name c. Global DNS Name d. Private DNS Name a

  1. Your customer wishes to deploy an enterprise application to AWS which will consist of several web servers, several application servers, and a small (50 GB) Oracle database. Information is stored both in the database and the file systems of the various servers. The backup system must support database recovery, whole server and whole disk restores, and individual file restores with a recovery time of no more than two hours. They have chosen to use RDS Oracle as the database. Which backup architecture will meet these requirements? A. Back up RDS using automated daily DB backups. Back up the EC2 instances using AMIs, and supplement with file-level backup to S3 using traditional enterprise backup software to provide file-level restore. B. Back up RDS using a Multi-AZ deployment. Back up the EC2 instances using AMIs, and supplement by copying file system data to S3 to provide file-level restore. C. Back up RDS using automated daily DB backups. Back up the EC2 instances using EBS snapshots, and supplement with file-level backups to Amazon Glacier using traditional enterprise backup software to provide file-level restore. D. Back up the RDS database to S3 using Oracle RMAN. Back up the EC2 instances using AMIs, and supplement with EBS snapshots for individual volume restore.
  2. Your company has HQ in Tokyo and branch offices all over the world and is using logistics software with a multi-regional deployment on AWS in Japan, Europe, and the US. The logistics software has a 3-tier architecture and currently uses MySQL 5.6 for data persistence. Each region has deployed its own database. In the HQ region you run an hourly batch process reading data from every region to compute cross-regional reports that are sent by email to all offices. This batch process must be completed as fast as possible to quickly optimize logistics. How do you build the database architecture in order to meet the requirements? A. For each regional deployment, use RDS MySQL with a master in the region and a read replica in the HQ region. B. For each regional deployment, use MySQL on EC2 with a master in the region and send hourly EBS snapshots to the HQ region. C. For each regional deployment, use RDS MySQL with a master in the region and send hourly RDS snapshots to the HQ region. D. For each regional deployment, use MySQL on EC2 with a master in the region and use S3 to copy data files hourly to the HQ region. E. Use Direct Connect to connect all regional MySQL deployments to the HQ region and reduce network latency for the batch process.
  3. A customer has a 10 GB AWS Direct Connect connection to an AWS region where they have a web application hosted on Amazon Elastic Compute Cloud (EC2). The application has dependencies on an on-premises mainframe database that uses a BASE (Basically Available, Soft state, Eventual consistency) rather than an ACID (Atomicity, Consistency, Isolation, Durability) consistency model. The application is exhibiting undesirable behavior because the database is not able to handle the volume of writes. How can you reduce the load on your on-premises database resources in the most cost-effective way? A. Use Amazon Elastic MapReduce (EMR) S3DistCp as a synchronization mechanism between the on-premises database and a Hadoop cluster on AWS. B. Modify the application to write to an Amazon SQS queue and develop a worker process to flush the queue to the on-premises database. C. Modify the application to use DynamoDB to feed an EMR cluster which uses a map function to write to the on-premises database. D. Provision an RDS read replica database on AWS to handle the writes and synchronize the two databases using Data Pipeline.
  4. Company B is launching a new game app for mobile devices. Users will log into the game using their existing social media account to streamline data capture. Company B would like to directly save player data and scoring information from the mobile app to a DynamoDB table named Score Data. When a user saves their game, the progress data will be stored to the Game State S3 bucket. What is the best approach for storing data to DynamoDB and S3? A. Use an EC2 instance that is launched with an EC2 role providing access to the Score Data DynamoDB table and the Game State S3 bucket, and that communicates with the mobile app via web services. B. Use temporary security credentials that assume a role providing access to the Score Data DynamoDB table and the Game State S3 bucket using web identity federation. C. Use Login with Amazon, allowing users to sign in with an Amazon account providing the mobile app with access to the Score Data DynamoDB table and the Game State S3 bucket. D. Use an IAM user with access credentials, assigned a role providing access to the Score Data DynamoDB table and the Game State S3 bucket, for distribution with the mobile app.
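
The web identity federation pattern in option B of question 4 can be sketched as a single STS call: the app trades the social provider's token for temporary AWS credentials scoped by a role. The role ARN and token below are hypothetical placeholders; with boto3 the dict would be passed to `boto3.client("sts").assume_role_with_web_identity(**params)`:

```python
def build_federation_request(role_arn: str, provider_token: str) -> dict:
    """Build an AssumeRoleWithWebIdentity request for a mobile client."""
    return {
        "RoleArn": role_arn,
        "RoleSessionName": "mobile-game-session",
        "WebIdentityToken": provider_token,  # token from the social login provider
        "DurationSeconds": 3600,             # credentials expire after an hour
    }

params = build_federation_request(
    "arn:aws:iam::123456789012:role/GameClientRole",  # hypothetical role
    "eyJhbGciOi...",                                   # placeholder token
)
```

The role's policy, not the app, confines access to the Score Data table and Game State bucket, so no long-lived key ships with the app.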
  5. Your company plans to host a large donation website on Amazon Web Services (AWS). You anticipate a large and undetermined amount of traffic that will create many database writes. To be certain that you do not drop any writes to a database hosted on AWS, which service should you use? A. Amazon RDS with provisioned IOPS up to the anticipated peak write throughput. B. Amazon Simple Queue Service (SQS) for capturing the writes and draining the queue to write to the database. C. Amazon ElastiCache to store the writes until the writes are committed to the database. D. Amazon DynamoDB with provisioned write throughput up to the anticipated peak write throughput.
  6. You have launched an EC2 instance with four (4) 500 GB EBS Provisioned IOPS volumes attached. The EC2 instance is EBS-optimized and supports 500 Mbps throughput between EC2 and EBS. The four EBS volumes are configured as a single RAID 0 device, and each Provisioned IOPS volume is provisioned with 4,000 IOPS (4,000 16 KB reads or writes), for a total of 16,000 random IOPS on the instance. The EC2 instance initially delivers the expected 16,000 IOPS random read and write performance. Sometime later, in order to increase the total random I/O performance of the instance, you add an additional two 500 GB EBS Provisioned IOPS volumes to the RAID. Each volume is provisioned to 4,000 IOPS like the original four, for a total of 24,000 IOPS on the EC2 instance. Monitoring shows that the EC2 instance CPU utilization increased from 50% to 70%, but the total random IOPS measured at the instance level does not increase at all. What is the problem and a valid solution? A. Larger storage volumes support higher Provisioned IOPS rates: increase the provisioned volume storage of each of the 6 EBS volumes to 1 TB. B. The EBS-optimized throughput limits the total IOPS that can be utilized: use an EBS-optimized instance that provides larger throughput. C. Small block sizes cause performance degradation, limiting the I/O throughput: configure the instance device driver and file system to use 64 KB blocks to increase throughput. D. RAID 0 only scales linearly to about 4 devices: use RAID 0 with 4 EBS Provisioned IOPS volumes, but increase each Provisioned IOPS EBS volume to 6,000 IOPS. E. The standard EBS instance root volume limits the total IOPS rate: change the instance root volume to also be a 500 GB 4,000 Provisioned IOPS volume.
  7. You have recently joined a startup company building sensors to measure street noise and air quality in urban areas. The company has been running a pilot deployment of around 100 sensors for 3 months. Each sensor uploads 1 KB of sensor data every minute to a backend hosted on AWS. During the pilot, you measured a peak of 10 IOPS on the database, and you stored an average of 3 GB of sensor data per month in the database. The current deployment consists of a load-balanced, auto-scaled ingestion layer using EC2 instances and a PostgreSQL RDS database with 500 GB standard storage. The pilot is considered a success and your CEO has managed to get the attention of some potential investors. The business plan requires a deployment of at least 100K sensors, which needs to be supported by the backend. You also need to store sensor data for at least two years to be able to compare year-over-year improvements. To secure funding, you have to make sure that the platform meets these requirements and leaves room for further scaling. Which setup will meet the requirements? A. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance. B. Ingest data into a DynamoDB table and move old data to a Redshift cluster. C. Replace the RDS instance with a 6-node Redshift cluster with 96 TB of storage. D. Keep the current architecture, but upgrade RDS storage to 3 TB and 10K provisioned IOPS.
  8. You need persistent and durable storage to trace call activity of an IVR (Interactive Voice Response) system. Call duration is mostly in the 2-3 minute timeframe. Each traced call can be either active or terminated. An external application needs to know, each minute, the list of currently active calls, which are usually a few calls/second. But once per month there is a periodic peak of up to 1,000 calls/second for a few hours. The system is open 24/7 and any downtime should be avoided. Historical data is periodically archived to files. Cost saving is a priority for this project. What database implementation would better fit this scenario, keeping costs as low as possible? A. Use RDS Multi-AZ with two tables, one for "Active calls" and one for "Terminated calls". In this way the "Active calls" table is always small and effective to access. B. Use DynamoDB with a "Calls" table and a Global Secondary Index on an "IsActive" attribute that is present for active calls only. In this way the Global Secondary Index is sparse and more effective. C. Use DynamoDB with a "Calls" table and a Global Secondary Index on a "State" attribute that can be equal to "active" or "terminated". In this way the Global Secondary Index can be used for all items in the table. D. Use RDS Multi-AZ with a "CALLS" table and an indexed "STATE" field that can be equal to "ACTIVE" or "TERMINATED". In this way the SQL query is optimized by the use of the index.
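
The sparse-GSI idea in option B of question 8 can be sketched as a table definition: items that lack the "IsActive" attribute simply never appear in the index, so querying the index returns only active calls. Table and attribute names mirror the question; throughput figures are illustrative, and with boto3 the dict would go to `boto3.client("dynamodb").create_table(**table)`:

```python
table = {
    "TableName": "Calls",
    "AttributeDefinitions": [
        {"AttributeName": "CallId", "AttributeType": "S"},
        {"AttributeName": "IsActive", "AttributeType": "S"},
    ],
    "KeySchema": [{"AttributeName": "CallId", "KeyType": "HASH"}],
    "GlobalSecondaryIndexes": [
        {
            "IndexName": "ActiveCallsIndex",
            # Only items carrying an "IsActive" attribute are indexed --
            # terminated calls drop the attribute and fall out of the index,
            # which is what makes it sparse and cheap to query.
            "KeySchema": [{"AttributeName": "IsActive", "KeyType": "HASH"}],
            "Projection": {"ProjectionType": "KEYS_ONLY"},
            "ProvisionedThroughput": {
                "ReadCapacityUnits": 5,
                "WriteCapacityUnits": 5,
            },
        }
    ],
    "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
}
```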
  9. Your company is in the process of developing a next-generation pet collar that collects biometric information to assist families with promoting healthy lifestyles for their pets. Each collar will push 30 KB of biometric data in JSON format every 2 seconds to a collection platform that will process and analyze the data, providing health trending information back to the pet owners and veterinarians via a web portal. Management has tasked you to architect the collection platform, ensuring the following requirements are met: provide the ability for real-time analytics of the inbound biometric data; ensure processing of the biometric data is highly durable, elastic, and parallel; the results of the analytic processing should be persisted for data mining. Which architecture outlined below will meet the initial requirements for the collection platform? A. Utilize S3 to collect the inbound sensor data, analyze the data from S3 with a daily scheduled Data Pipeline, and save the results to a Redshift cluster. B. Utilize Amazon Kinesis to collect the inbound sensor data, analyze the data with Kinesis clients, and save the results to a Redshift cluster using EMR. C. Utilize SQS to collect the inbound sensor data, analyze the data from SQS with Amazon Kinesis, and save the results to a Microsoft SQL Server RDS instance. D. Utilize EMR to collect the inbound sensor data, analyze the data from EMR with Amazon Kinesis, and save the results to DynamoDB.
  10. A web design company currently runs several FTP servers that their 250 customers use to upload and download large graphic files. They wish to move this system to AWS to make it more scalable, but they wish to maintain customer privacy and keep costs to a minimum. What AWS architecture would you recommend? A. Ask their customers to use an S3 client instead of an FTP client. Create a single S3 bucket. Create an IAM user for each customer. Put the IAM users in a group that has an IAM policy that permits access to subdirectories within the bucket via use of the 'username' policy variable. B. Create a single S3 bucket with Reduced Redundancy Storage turned on and ask their customers to use an S3 client instead of an FTP client. Create a bucket for each customer with a bucket policy that permits access only to that one customer. C. Create an auto-scaling group of FTP servers with a scaling policy to automatically scale in when minimum network traffic on the auto-scaling group is below a given threshold. Load a central list of FTP users from S3 as part of the user data startup script on each instance. D. Create a single S3 bucket with Requester Pays turned on and ask their customers to use an S3 client instead of an FTP client. Create a bucket for each customer with a bucket policy that permits access only to that one customer.
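
Option A of question 10 leans on the `${aws:username}` IAM policy variable, which lets one group policy confine each of the 250 users to their own key prefix in a single bucket. A sketch follows; the bucket name is hypothetical, and the policy is expressed as a Python dict to keep it self-contained:

```python
import json

# Sketch of the group policy: each authenticated IAM user can only read/write
# objects under a prefix matching their own username. Bucket name is hypothetical.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            # ${aws:username} is substituted per-request with the caller's name.
            "Resource": "arn:aws:s3:::customer-files/${aws:username}/*",
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::customer-files",
            # Listing is likewise limited to the caller's own prefix.
            "Condition": {"StringLike": {"s3:prefix": "${aws:username}/*"}},
        },
    ],
}

policy_json = json.dumps(policy)  # the document attached to the IAM group
```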
  11. You have been asked to design the storage layer for an application. The application requires disk performance of at least 100,000 IOPS. In addition, the storage layer must be able to survive the loss of an individual disk, EC2 instance, or Availability Zone without any data loss. The volume you provide must have a capacity of at least 3 TB. Which of the following designs will meet these objectives? A. Instantiate an i2.8xlarge instance in us-east-1a. Create a RAID 0 volume using the four 800 GB SSD ephemeral disks provided with the instance. Provision 3 x 1 TB EBS volumes, attach them to the instance, and configure them as a second RAID 0 volume. Configure synchronous, block-level replication from the ephemeral-backed volume to the EBS-backed volume. B. Instantiate an i2.8xlarge instance in us-east-1a. Create a RAID 0 volume using the four 800 GB SSD ephemeral disks provided with the instance. Configure synchronous, block-level replication to an identically configured instance in us-east-1b. C. Instantiate a c3.8xlarge instance in us-east-1. Provision an AWS Storage Gateway and configure it for 3 TB of storage and 100,000 IOPS. Attach the volume to the instance. D. Instantiate a c3.8xlarge instance in us-east-1. Provision 4 x 1 TB EBS volumes, attach them to the instance, and configure them as a single RAID 5 volume. Ensure that EBS snapshots are performed every 15 minutes. E. Instantiate a c3.8xlarge instance in us-east-1. Provision 3 x 1 TB EBS volumes, attach them to the instance, and configure them as a single RAID 0 volume. Ensure that EBS snapshots are performed every 15 minutes.
  12. You would like to create a mirror image of your production environment in another region for disaster recovery purposes. Which of the following AWS resources do not need to be recreated in the second region? (Choose 2 answers) A. Route 53 Record Sets B. IAM Roles C. Elastic IP Addresses (EIP) D. EC2 Key Pairs E. Launch configurations F. Security Groups
  13. Your company runs a customer-facing event registration site. This site is built with a 3-tier architecture with web and application tier servers and a MySQL database. The application requires 6 web tier servers and 6 application tier servers for normal operation, but can run on a minimum of 65% server capacity and a single MySQL database. When deploying this application in a region with three Availability Zones (AZs), which architecture provides high availability? A. A web tier deployed across 2 AZs with 3 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling group behind an ELB (Elastic Load Balancer), an application tier deployed across 2 AZs with 3 EC2 instances in each AZ inside an Auto Scaling group behind an ELB, and one RDS (Relational Database Service) instance deployed with read replicas in the other AZ. B. A web tier deployed across 3 AZs with 2 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling group behind an ELB (Elastic Load Balancer), an application tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling group behind an ELB, and one RDS (Relational Database Service) instance deployed with read replicas in the two other AZs. C. A web tier deployed across 2 AZs with 3 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling group behind an ELB (Elastic Load Balancer), an application tier deployed across 2 AZs with 3 EC2 instances in each AZ inside an Auto Scaling group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment. D. A web tier deployed across 3 AZs with 2 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling group behind an ELB (Elastic Load Balancer), an application tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment.
  14. Your application is using an ELB in front of an Auto Scaling group of web/application servers deployed across two AZs and a Multi-AZ RDS instance for data persistence. The database CPU is often above 80% usage and 90% of I/O operations on the database are reads. To improve performance you recently added a single-node Memcached ElastiCache cluster to cache frequent DB query results. In the next weeks the overall workload is expected to grow by 30%. Do you need to change anything in the architecture to maintain the high availability of the application with the anticipated additional load? Why? A. Yes, you should deploy two Memcached ElastiCache clusters in different AZs, because the RDS instance will not be able to handle the load if the cache node fails. B. No, if the cache node fails the automated ElastiCache node recovery feature will prevent any availability impact. C. Yes, you should deploy the Memcached ElastiCache cluster with two nodes in the same AZ as the RDS DB master instance to handle the load if one cache node fails. D. No, if the cache node fails you can always get the same data from the DB without having any availability impact.
  15. You are responsible for a legacy web application whose server environment is approaching end of life. You would like to migrate this application to AWS as quickly as possible, since the application environment currently has the following limitations: the VM's single 10 GB VMDK is almost full; the virtual network interface still uses the 10 Mbps driver, which leaves your 100 Mbps WAN connection completely underutilized; it is currently running on a highly customized Windows VM within a VMware environment; and you do not have the installation media. This is a mission-critical application with an RTO (Recovery Time Objective) of 8 hours and an RPO (Recovery Point Objective) of 1 hour. How could you best migrate this application to AWS while meeting your business continuity requirements? A. Use the EC2 VM Import Connector for vCenter to import the VM into EC2. B. Use Import/Export to import the VM as an EBS snapshot and attach to EC2. C. Use S3 to create a backup of the VM and restore the data into EC2. D. Use the ec2-bundle-instance API to import an image of the VM into EC2.
  16. An international company has deployed a multi-tier web application that relies on DynamoDB in a single region. For regulatory reasons they need disaster recovery capability in a separate region with a Recovery Time Objective of 2 hours and a Recovery Point Objective of 24 hours. They should synchronize their data on a regular basis and be able to provision the web application rapidly using CloudFormation. The objective is to minimize changes to the existing web application, control the throughput of DynamoDB used for the synchronization of data, and synchronize only the modified elements. Which design would you choose to meet these requirements? A. Use AWS Data Pipeline to schedule a DynamoDB cross-region copy once a day; create a 'LastUpdated' attribute in your DynamoDB table that would represent the timestamp of the last update and use it as a filter. B. Use EMR and write a custom script to retrieve data from DynamoDB in the current region using a SCAN operation and push it to DynamoDB in the second region. C. Use AWS Data Pipeline to schedule an export of the DynamoDB table to S3 in the current region once a day, then schedule another task immediately after it that will import data from S3 to DynamoDB in the other region. D. Also send each write to an SQS queue in the second region; use an Auto Scaling group behind the SQS queue to replay the writes in the second region.
  17. Refer to the architecture diagram above of a batch processing solution using Simple Queue Service (SQS) to set up a message queue between EC2 instances which are used as batch processors. CloudWatch monitors the number of job requests (queued messages) and an Auto Scaling group adds or deletes batch servers automatically based on parameters set in CloudWatch alarms. You can use this architecture to implement which of the following features in a cost-effective and efficient manner? A. Reduce the overall time for executing jobs through parallel processing by allowing a busy EC2 instance that receives a message to pass it to the next instance in a daisy-chain setup. B. Implement fault tolerance against EC2 instance failure, since messages would remain in SQS and work can continue with recovery of EC2 instances; implement fault tolerance against SQS failure by backing up messages to S3. C. Implement message passing between EC2 instances within a batch by exchanging messages through SQS. D. Coordinate the number of EC2 instances with the number of job requests automatically, thus improving cost effectiveness. E. Handle high-priority jobs before lower-priority jobs by assigning a priority metadata field to SQS messages.
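
The worker side of the architecture in question 17 can be sketched as a long-poll request: each batch processor pulls up to a batch of jobs and hides them while it works, and CloudWatch scales the fleet on queue depth. The queue URL is a hypothetical placeholder; with boto3 the dict would be passed to `boto3.client("sqs").receive_message(**params)`:

```python
def build_poll_request(queue_url: str) -> dict:
    """Build a ReceiveMessage request for a batch-processing worker."""
    return {
        "QueueUrl": queue_url,
        "MaxNumberOfMessages": 10,  # pull up to 10 jobs per poll
        "WaitTimeSeconds": 20,      # long polling avoids empty-response churn
        "VisibilityTimeout": 300,   # hide a job while this worker processes it;
                                    # if the instance dies, the job reappears
    }

params = build_poll_request(
    "https://sqs.us-east-1.amazonaws.com/123456789012/batch-jobs"  # hypothetical
)
```

The visibility timeout is what gives answer B its instance-level fault tolerance: an unacknowledged message simply becomes visible again for another worker.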
  18. Your company currently has a 2-tier web application running in an on-premises data center. You have experienced several infrastructure failures in the past two months resulting in significant financial losses. Your CIO is strongly in favor of moving the application to AWS. While working on achieving buy-in from the other company executives, he asks you to develop a disaster recovery plan to help improve business continuity in the short term. He specifies a target Recovery Time Objective (RTO) of 4 hours and a Recovery Point Objective (RPO) of 1 hour or less. He also asks you to implement the solution within 2 weeks. Your database is 200 GB in size and you have a 20 Mbps Internet connection. How would you do this while minimizing costs? A. Create an EBS-backed private AMI which includes a fresh install of your application. Set up a script in your data center to back up the local database every hour and to encrypt and copy the resulting file to an S3 bucket using multi-part upload. B. Install your application on a compute-optimized EC2 instance capable of supporting the application's average load. Synchronously replicate transactions from your on-premises database to a database instance in AWS across a secure Direct Connect connection. C. Deploy your application on EC2 instances within an Auto Scaling group across multiple Availability Zones. Asynchronously replicate transactions from your on-premises database to a database instance in AWS across a secure VPN connection. D. Create an EBS-backed private AMI that includes a fresh install of your application. Develop a CloudFormation template which includes your AMI and the required EC2, Auto Scaling, and ELB resources to support deploying the application across multiple Availability Zones. Asynchronously replicate transactions from your on-premises database to a database instance in AWS across a secure VPN connection.
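
A quick feasibility check behind question 18's options: the initial 200 GB seed over the 20 Mbps link takes close to a day, so whatever plan is chosen must rely on incremental replication or hourly log shipping after the seed to hold the 1-hour RPO. A small arithmetic sketch (decimal GB, link overhead ignored):

```python
def transfer_hours(size_gb: float, link_mbps: float) -> float:
    """Hours to push size_gb over a link of link_mbps, ignoring overhead."""
    bits = size_gb * 1000 ** 3 * 8        # decimal gigabytes -> bits
    return bits / (link_mbps * 10 ** 6) / 3600

# Roughly 22 hours for the first full 200 GB copy at 20 Mbps -- far too slow
# to re-send the whole database every hour, hence incremental replication.
seed_time = transfer_hours(200, 20)
```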
  19. An ERP application is deployed across multiple AZs in a single region. In the event of failure, the Recovery Time Objective (RTO) must be less than 3 hours, and the Recovery Point Objective (RPO) must be 15 minutes. The customer realizes that data corruption occurred roughly 1.5 hours ago. What DR strategy could be used to achieve this RTO and RPO in the event of this kind of failure?
      - A. Take hourly DB backups to S3, with transaction logs stored in S3 every 5 minutes.
      - B. Use synchronous database master-slave replication between two availability zones.
      - C. Take hourly DB backups to EC2 instance store volumes, with transaction logs stored in S3 every 5 minutes.
      - D. Take 15-minute DB backups stored in Glacier, with transaction logs stored in S3 every 5 minutes.
  20. Your startup wants to implement an order fulfillment process for selling a personalized gadget that needs an average of 3-4 days to produce, with some orders taking up to 6 months. You expect 10 orders per day on your first day, 1,000 orders per day after 6 months, and 10,000 orders after 12 months. Orders coming in are checked for consistency, then dispatched to your manufacturing plant for production, quality control, packaging, shipment, and payment processing. If the product does not meet the quality standards at any stage of the process, employees may force the process to repeat a step. Customers are notified via email about order status and any critical issues with their orders, such as payment failure. Your base architecture includes AWS Elastic Beanstalk for your website with an RDS MySQL instance for customer data and orders. How can you implement the order fulfillment process while making sure that the emails are delivered reliably?
      - A. Add a business process management application to your Elastic Beanstalk app servers and re-use the RDS database for tracking order status. Use one of the Elastic Beanstalk instances to send emails to customers.
      - B. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use the decider instance to send emails to customers.
      - C. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use SES to send emails to customers.
      - D. Use an SQS queue to manage all process tasks. Use an Auto Scaling group of EC2 instances that poll the tasks and execute them. Use SES to send emails to customers.
  21. You have deployed a web application targeting a global audience across multiple AWS Regions under the domain name example.com. You decide to use Route53 Latency-Based Routing to serve web requests to users from the region closest to the user. To provide business continuity in the event of server downtime, you configure weighted record sets associated with two web servers in separate Availability Zones per region. During a DR test you notice that when you disable all web servers in one of the regions, Route53 does not automatically direct all users to the other region. What could be happening? (Choose 2 answers)
      - A. Latency resource record sets cannot be used in combination with weighted resource record sets.
      - B. You did not set up an HTTP health check for one or more of the weighted resource record sets associated with the disabled web servers.
      - C. The value of the weight associated with the latency alias resource record set in the region with the disabled servers is higher than the weight for the other region.
      - D. One of the two working web servers in the other region did not pass its HTTP health check.
      - E. You did not set "Evaluate Target Health" to "Yes" on the latency alias resource record set associated with example.com in the region where you disabled the servers.
  22. Your system recently experienced downtime. During the troubleshooting process you found that a new administrator mistakenly terminated several production EC2 instances. Which of the following strategies will help prevent a similar situation in the future? The administrator still must be able to launch, start, stop, and terminate development resources, and to launch and start production instances.
      - A. Create an IAM user, which is not allowed to terminate instances, by leveraging production EC2 termination protection.
      - B. Leverage resource-based tagging along with an IAM user, which can prevent specific users from terminating production EC2 resources.
      - C. Leverage EC2 termination protection and multi-factor authentication, which together require users to authenticate before terminating EC2 instances.
      - D. Create an IAM user and apply an IAM role which prevents users from terminating production EC2 instances.
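The tag-based control in option B can be expressed as an IAM policy that allows day-to-day EC2 actions but denies termination of anything tagged as production. A minimal sketch; the tag key `environment` and value `production` are assumptions for illustration:

```python
import json

# Illustrative IAM policy: broad EC2 access, but an explicit Deny on
# terminating instances carrying the production tag. Deny always wins
# over Allow in IAM evaluation, so dev instances remain terminable.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "ec2:*", "Resource": "*"},
        {
            "Effect": "Deny",
            "Action": "ec2:TerminateInstances",
            "Resource": "*",
            "Condition": {
                "StringEquals": {"ec2:ResourceTag/environment": "production"}
            },
        },
    ],
}
print(json.dumps(policy, indent=2))
```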
  23. A customer has established an AWS Direct Connect connection to AWS. The link is up and routes are being advertised from the customer's end; however, the customer is unable to connect from EC2 instances inside its VPC to servers residing in its datacenter. Which of the following options provide a viable solution to remedy this situation? (Choose 2 answers)
      - A. Add a route to the route table with an IPsec VPN connection as the target.
      - B. Enable route propagation to the virtual private gateway (VGW).
      - C. Enable route propagation to the customer gateway (CGW).
      - D. Modify the route table of all instances using the 'route' command.
      - E. Modify the instances' VPC subnet route table by adding a route back to the customer's on-premises environment.
  24. Your company previously configured a heavily used, dynamically routed VPN connection between your on-premises data center and AWS. You recently provisioned a Direct Connect connection and would like to start using the new connection. After configuring Direct Connect settings in the AWS Console, which of the following options will provide the most seamless transition for your users?
      - A. Delete your existing VPN connection to avoid routing loops, configure your Direct Connect router with the appropriate settings, and verify network traffic is leveraging Direct Connect.
      - B. Configure your Direct Connect router with a higher BGP priority than your VPN router, verify network traffic is leveraging Direct Connect, and then delete your existing VPN connection.
      - C. Update your VPC route tables to point to the Direct Connect connection, configure your Direct Connect router with the appropriate settings, verify network traffic is leveraging Direct Connect, and then delete the VPN connection.
      - D. Configure your Direct Connect router, update your VPC route tables to point to the Direct Connect connection, configure your VPN connection with a higher BGP priority, and verify network traffic is leveraging the Direct Connect connection.
  25. A web company is looking to implement an external payment service into their highly available application deployed in a VPC. Their application EC2 instances are behind a public-facing ELB. Auto Scaling is used to add additional instances as traffic increases; under normal load the application runs 2 instances in the Auto Scaling group, but at peak it can scale 3x in size. The application instances need to communicate with the payment service over the Internet, which requires whitelisting of all public IP addresses used to communicate with it. A maximum of 4 whitelisted IP addresses are allowed at a time and can be added through an API. How should they architect their solution?
      - A. Route payment requests through two NAT instances set up for high availability and whitelist the Elastic IP addresses attached to the NAT instances.
      - B. Whitelist the VPC Internet Gateway public IP and route payment requests through the Internet Gateway.
      - C. Whitelist the ELB IP addresses and route payment requests from the application servers through the ELB.
      - D. Automatically assign public IP addresses to the application instances in the Auto Scaling group and run a script on boot that adds each instance's public IP address to the payment validation whitelist API.
  26. You are designing the network infrastructure for an application server in Amazon VPC. Users will access all the application instances from the Internet as well as from an on-premises network. The on-premises network is connected to your VPC over an AWS Direct Connect link. How would you design routing to meet the above requirements?
      - A. Configure a single routing table with a default route via the Internet gateway. Propagate a default route via BGP on the AWS Direct Connect customer router. Associate the routing table with all VPC subnets.
      - B. Configure a single routing table with a default route via the Internet gateway. Propagate specific routes for the on-premises networks via BGP on the AWS Direct Connect customer router. Associate the routing table with all VPC subnets.
      - C. Configure a single routing table with two default routes: one to the Internet via an Internet gateway, the other to the on-premises network via the VPN gateway. Use this routing table across all subnets in your VPC.
      - D. Configure two routing tables: one that has a default route via the Internet gateway and another that has a default route via the VPN gateway. Associate both routing tables with each VPC subnet.
  27. You are implementing AWS Direct Connect. You intend to use AWS public service endpoints, such as Amazon S3, across the AWS Direct Connect link. You want other Internet traffic to use your existing link to an Internet Service Provider. What is the correct way to configure AWS Direct Connect for access to services such as Amazon S3?
      - A. Configure a public interface on your AWS Direct Connect link. Configure a static route via your AWS Direct Connect link that points to Amazon S3. Advertise a default route to AWS using BGP.
      - B. Create a private interface on your AWS Direct Connect link. Configure a static route via your AWS Direct Connect link that points to Amazon S3. Configure specific routes to your network in your VPC.
      - C. Create a public interface on your AWS Direct Connect link. Redistribute BGP routes into your existing routing infrastructure; advertise specific routes for your network to AWS.
      - D. Create a private interface on your AWS Direct Connect link. Redistribute BGP routes into your existing routing infrastructure and advertise a default route to AWS.
  28. You have deployed a three-tier web application in a VPC with a CIDR block of 10.0.0.0/28. You initially deploy two web servers, two application servers, two database servers, and one NAT instance for a total of seven EC2 instances. The web, application, and database servers are deployed across two availability zones (AZs). You also deploy an ELB in front of the two web servers, and use Route53 for DNS. Web traffic gradually increases in the first few days following the deployment, so you attempt to double the number of instances in each tier of the application to handle the new load; unfortunately some of these new instances fail to launch. Which of the following could be the root cause? (Choose 2 answers)
      - A. The Internet Gateway (IGW) of your VPC has scaled up, adding more instances to handle the traffic spike, reducing the number of available private IP addresses for new instance launches.
      - B. AWS reserves one IP address in each subnet's CIDR block for Route53, so you do not have enough addresses left to launch all of the new EC2 instances.
      - C. AWS reserves the first and the last private IP address in each subnet's CIDR block, so you do not have enough addresses left to launch all of the new EC2 instances.
      - D. The ELB has scaled up, adding more instances to handle the traffic, reducing the number of available private IP addresses for new instance launches.
      - E. AWS reserves the first four and the last IP address in each subnet's CIDR block, so you do not have enough addresses left to launch all of the new EC2 instances.
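The arithmetic behind this question is easy to check: a /28 contains 16 addresses, and AWS reserves the first four and the last address in every subnet, leaving 11 usable, so doubling seven instances to fourteen cannot fit. A quick sketch using Python's standard `ipaddress` module:

```python
import ipaddress

subnet = ipaddress.ip_network("10.0.0.0/28")
total = subnet.num_addresses   # 16 addresses in a /28
usable = total - 5             # AWS reserves the first 4 and the last address
print(usable)                  # 11 — fewer than the 14 instances attempted
```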
  29. You are tasked with moving a legacy application from a virtual machine running inside your datacenter to an Amazon VPC. Unfortunately this app requires access to a number of on-premises services, and no one who configured the app still works for your company. Even worse, there's no documentation for it. What will allow the application running inside the VPC to reach back and access its internal dependencies without being reconfigured? (Choose 3 answers)
      - A. An AWS Direct Connect link between the VPC and the network housing the internal services.
      - B. An Internet Gateway to allow a VPN connection.
      - C. An Elastic IP address on the VPC instance.
      - D. An IP address space that does not conflict with the one on-premises.
      - E. Entries in Amazon Route 53 that allow the instance to resolve its dependencies' IP addresses.
      - F. A VM Import of the current virtual machine.
  30. You are migrating a legacy client-server application to AWS. The application responds to a specific DNS domain (e.g. www.example.com) and has a 2-tier architecture, with multiple application servers and a database server. Remote clients use TCP to connect to the application servers. The application servers need to know the IP address of the clients in order to function properly and are currently taking that information from the TCP socket. A Multi-AZ RDS MySQL instance will be used for the database. During the migration you can change the application code, but you have to file a change request. How would you implement the architecture on AWS in order to maximize scalability and high availability?
      - A. File a change request to implement Proxy Protocol support in the application. Use an ELB with a TCP listener and Proxy Protocol enabled to distribute load on two application servers in different AZs.
      - B. File a change request to implement Cross-Zone support in the application. Use an ELB with a TCP listener and Cross-Zone Load Balancing enabled, with two application servers in different AZs.
      - C. File a change request to implement Latency Based Routing support in the application. Use Route 53 with Latency Based Routing enabled to distribute load on two application servers in different AZs.
      - D. File a change request to implement Alias Resource support in the application. Use a Route 53 Alias Resource Record to distribute load on two application servers in different AZs.
  31. A newspaper organization has an on-premises application which allows the public to search its back catalogue and retrieve individual newspaper pages via a website written in Java. They have scanned the old newspapers into JPEGs (approx. 17TB) and used Optical Character Recognition (OCR) to populate a commercial search product. The hosting platform and software are now end of life, and the organization wants to migrate its archive to AWS and produce a cost-efficient architecture that is still designed for availability and durability. Which is the most appropriate?
      - A. Use S3 with reduced redundancy to store and serve the scanned files; install the commercial search application on EC2 instances and configure with auto-scaling and an Elastic Load Balancer.
      - B. Model the environment using CloudFormation; use an EC2 instance running an Apache webserver and an open source search application; stripe multiple standard EBS volumes together to store the JPEGs and search index.
      - C. Use S3 with standard redundancy to store and serve the scanned files, use CloudSearch for query processing, and use Elastic Beanstalk to host the website across multiple availability zones.
      - D. Use a single-AZ RDS MySQL instance to store the search index and the JPEG images; use an EC2 instance to serve the website and translate user queries into SQL.
      - E. Use a CloudFront download distribution to serve the JPEGs to the end users, install the current commercial search product along with a Java container for the website on EC2 instances, and use Route53 with DNS round-robin.
  32. A corporate web application is deployed within an Amazon Virtual Private Cloud (VPC) and is connected to the corporate data center via an IPsec VPN. The application must authenticate against the on-premises LDAP server. After authentication, each logged-in user can only access an Amazon Simple Storage Service (S3) keyspace specific to that user. Which two approaches can satisfy these objectives? (Choose 2 answers)
      - A. Develop an identity broker that authenticates against the IAM Security Token Service to assume an IAM role in order to get temporary AWS security credentials. The application calls the identity broker to get AWS temporary security credentials with access to the appropriate S3 bucket.
      - B. The application authenticates against LDAP and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM role. The application can use the temporary credentials to access the appropriate S3 bucket.
      - C. Develop an identity broker that authenticates against LDAP and then calls the IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate S3 bucket.
      - D. The application authenticates against LDAP. The application then calls the AWS Identity and Access Management (IAM) Security service to log in to IAM using the LDAP credentials. The application can use the IAM temporary credentials to access the appropriate S3 bucket.
      - E. The application authenticates against the IAM Security Token Service using the LDAP credentials. The application uses those temporary AWS security credentials to access the appropriate S3 bucket.
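For the identity-broker approach in option C, the broker would typically request federated credentials scoped to the authenticated user's keyspace. A minimal sketch of such a scoping policy; the bucket name and action list are assumptions, and in practice the broker would pass `json.dumps(...)` of this document as the `Policy` parameter of an STS `GetFederationToken` call:

```python
def s3_keyspace_policy(bucket, username):
    # Session policy restricting a federated user to s3://<bucket>/<username>/*
    # (bucket and prefix layout are illustrative).
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::{bucket}/{username}/*",
        }],
    }

print(s3_keyspace_policy("corp-data", "alice")["Statement"][0]["Resource"])
```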
  33. You are designing a multi-platform web application for AWS. The application will run on EC2 instances and will be accessed from PCs, tablets, and smartphones. Supported accessing platforms are Windows, Mac OS, iOS, and Android. Separate sticky session and SSL certificate setups are required for different platform types. Which of the following describes the most cost-effective and performance-efficient architecture setup?
      - A. Set up a hybrid architecture to handle session state and SSL certificates on-prem, and separate EC2 instance groups running web applications for different platform types running in a VPC.
      - B. Set up one ELB for all platforms to distribute load among multiple instances under it. Each EC2 instance implements all functionality for a particular platform.
      - C. Set up two ELBs. The first ELB handles SSL certificates for all platforms and the second ELB handles session stickiness for all platforms. For each ELB, run separate EC2 instance groups to handle the web application for each platform.
      - D. Assign multiple ELBs to an EC2 instance or group of EC2 instances running the common components of the web application, one ELB for each platform type. Session stickiness and SSL termination are done at the ELBs.
  34. Your company has an on-premises multi-tier PHP web application, which recently experienced downtime due to a large burst in web traffic caused by a company announcement. Over the coming days, you are expecting similar announcements to drive similar unpredictable bursts, and are looking to find ways to quickly improve your infrastructure's ability to handle unexpected increases in traffic. The application currently consists of 2 tiers: a web tier, which consists of a load balancer and several Linux Apache web servers, and a database tier, which hosts a Linux server hosting a MySQL database. Which scenario below will provide full site functionality, while helping to improve the ability of your application in the short timeframe required?
      - A. Offload traffic from the on-premises environment: set up a CloudFront distribution and configure CloudFront to cache objects from a custom origin. Choose to customize your object cache behavior, and select a TTL that objects should exist in cache.
      - B. Migrate to AWS: use VM Import/Export to quickly convert an on-premises web server to an AMI. Create an Auto Scaling group, which uses the imported AMI to scale the web tier based on incoming traffic. Create an RDS read replica and set up replication between the RDS instance and the on-premises MySQL server to migrate the database.
      - C. Failover environment: create an S3 bucket and configure it for website hosting. Migrate your DNS to Route53 using zone file import, and leverage Route53 DNS failover to fail over to the S3-hosted website.
      - D. Hybrid environment: create an AMI which can be used to launch web servers in EC2. Create an Auto Scaling group which uses the AMI to scale the web tier based on incoming traffic. Leverage Elastic Load Balancing to balance traffic between on-premises web servers and those hosted in AWS.
  35. Your company produces customer-commissioned one-of-a-kind skiing helmets combining high fashion with custom technical enhancements. Customers can show off their individuality on the ski slopes and have access to head-up displays, GPS, rear-view cams, and any other technical innovation they wish to embed in the helmet. The current manufacturing process is data rich and complex, including assessments to ensure that the custom electronics and materials used to assemble the helmets are to the highest standards. Assessments are a mixture of human and automated assessments. You need to add a new set of assessments to model the failure modes of the custom electronics using GPUs with CAD, across a cluster of servers with low latency networking. What architecture would allow you to automate the existing process using a hybrid approach and ensure that the architecture can support the evolution of processes over time?
      - A. Use AWS Data Pipeline to manage movement of data & meta-data and assessments. Use an auto-scaling group of G2 instances in a placement group.
      - B. Use Amazon Simple Workflow (SWF) to manage assessments and movement of data & meta-data. Use an auto-scaling group of G2 instances in a placement group.
      - C. Use Amazon Simple Workflow (SWF) to manage assessments and movement of data & meta-data. Use an auto-scaling group of C3 instances with SR-IOV (Single Root I/O Virtualization).
      - D. Use AWS Data Pipeline to manage movement of data & meta-data and assessments. Use an auto-scaling group of C3 instances with SR-IOV (Single Root I/O Virtualization).
  36. Your department creates regular analytics reports from your company's log files. All log data is collected in Amazon S3 and processed by daily Amazon Elastic MapReduce (EMR) jobs that generate daily PDF reports and aggregated tables in CSV format for an Amazon Redshift data warehouse. Your CFO requests that you optimize the cost structure for this system. Which of the following alternatives will lower costs without compromising average performance of the system or data integrity for the raw data?
      - A. Use reduced redundancy storage (RRS) for PDF and CSV data in Amazon S3. Add Spot instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift.
      - B. Use reduced redundancy storage (RRS) for all data in S3. Use a combination of Spot instances and Reserved Instances for Amazon EMR jobs. Use Reserved Instances for Amazon Redshift.
      - C. Use reduced redundancy storage (RRS) for all data in Amazon S3. Add Spot instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift.
      - D. Use reduced redundancy storage (RRS) for PDF and CSV data in S3. Add Spot instances to EMR jobs. Use Spot instances for Amazon Redshift.
  37. You’re running an application on-premises due to its dependency on non-x86 hardware and want to use AWS for data backup. Your backup application is only able to write to POSIX-compatible block-based storage. You have 140TB of data and would like to mount it as a single folder on your file server. Users must be able to access portions of this data while the backups are taking place. What backup solution would be most appropriate for this use case?
      - A. Use Storage Gateway and configure it to use Gateway Cached volumes.
      - B. Configure your backup software to use S3 as the target for your data backups.
      - C. Configure your backup software to use Glacier as the target for your data backups.
      - D. Use Storage Gateway and configure it to use Gateway Stored volumes.
  38. Your website is serving on-demand training videos to your workforce. Videos are uploaded monthly in high-resolution MP4 format. Your workforce is distributed globally, often on the move, and using company-provided tablets that require the HTTP Live Streaming (HLS) protocol to watch a video. Your company has no video transcoding expertise and, if required, you may need to pay for a consultant. How do you implement the most cost-efficient architecture without compromising high availability and quality of video delivery?
      - A. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. S3 to host videos, with Lifecycle Management to archive original files to Glacier after a few days. CloudFront to serve HLS transcoded videos from S3.
      - B. A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. S3 to host videos, with Lifecycle Management to archive all files to Glacier after a few days. CloudFront to serve HLS transcoded videos from Glacier.
      - C. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. EBS volumes to host videos and EBS snapshots to incrementally back up original files after a few days. CloudFront to serve HLS transcoded videos from EC2.
      - D. A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. EBS volumes to host videos and EBS snapshots to incrementally back up original files after a few days. CloudFront to serve HLS transcoded videos from EC2.
  39. You are the new IT architect in a company that operates a mobile sleep tracking application. When activated at night, the mobile app sends collected data points of 1 kilobyte every 5 minutes to your backend. The backend takes care of authenticating the user and writing the data points into an Amazon DynamoDB table. Every morning, you scan the table to extract and aggregate last night's data on a per-user basis, and store the results in Amazon S3. Users are notified via Amazon SNS mobile push notifications that new data is available, which is parsed and visualized by the mobile app. Currently you have around 100k users who are mostly based out of North America. You have been tasked to optimize the architecture of the backend system to lower cost. What would you recommend? (Choose 2 answers)
      - A. Create a new Amazon DynamoDB table each day and drop the one for the previous day after its data is on Amazon S3.
      - B. Have the mobile app access Amazon DynamoDB directly instead of JSON files stored on Amazon S3.
      - C. Introduce an Amazon SQS queue to buffer writes to the Amazon DynamoDB table and reduce provisioned write throughput.
      - D. Introduce Amazon ElastiCache to cache reads from the Amazon DynamoDB table and reduce provisioned read throughput.
      - E. Write data directly into an Amazon Redshift cluster, replacing both Amazon DynamoDB and Amazon S3.
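Option C works because a consumer draining the SQS queue can write to DynamoDB in steady, batched chunks instead of spiky individual puts, which lets you provision for average rather than peak write throughput. DynamoDB's `BatchWriteItem` accepts at most 25 items per request, so a buffering worker would chunk its backlog accordingly; a minimal sketch of just that chunking step (the worker loop and table wiring are omitted):

```python
def batch_chunks(items, batch_size=25):
    # DynamoDB BatchWriteItem accepts at most 25 put/delete requests
    # per call, so split the drained SQS backlog into groups of 25.
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

# 60 buffered data points become three BatchWriteItem calls: 25 + 25 + 10
print([len(c) for c in batch_chunks(list(range(60)))])  # → [25, 25, 10]
```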
  40. You’ve been hired to enhance the overall security posture for a very large e-commerce site. They have a well-architected multi-tier application running in a VPC that uses ELBs in front of both the web and the app tier, with static assets served directly from S3. They are using a combination of RDS and DynamoDB for their dynamic data and then archiving nightly into S3 for further processing with EMR. They are concerned because they found questionable log entries and suspect someone is attempting to gain unauthorized access. Which approach provides a cost-effective, scalable mitigation to this kind of attack?
      - A. Recommend that they lease space at a Direct Connect partner location and establish a 1G Direct Connect connection to their VPC. They would then establish Internet connectivity into their space, filter the traffic in a hardware Web Application Firewall (WAF), and then pass the traffic through the Direct Connect connection into their application running in their VPC.
      - B. Add previously identified hostile source IPs as an explicit INBOUND DENY NACL to the web tier subnet.
      - C. Add a WAF tier by creating a new ELB and an Auto Scaling group of EC2 instances running a host-based WAF. They would redirect Route 53 to resolve to the new WAF tier ELB. The WAF tier would then pass the traffic to the current web tier. The web tier Security Groups would be updated to only allow traffic from the WAF tier Security Group.
      - D. Remove all but TLS 1.2 from the web tier ELB and enable Advanced Protocol Filtering. This will enable the ELB itself to perform WAF functionality.
  41. You currently operate a web application in the AWS US-East region. The application runs on an auto-scaled layer of EC2 instances and an RDS Multi-AZ database. Your IT security compliance officer has tasked you to develop a reliable and durable logging solution to track changes made to your EC2, IAM, and RDS resources. The solution must ensure the integrity and confidentiality of your log data. Which of these solutions would you recommend?
      - A. Create a new CloudTrail trail with one new S3 bucket to store the logs and with the global services option selected. Use IAM roles, S3 bucket policies, and Multi-Factor Authentication (MFA) Delete on the S3 bucket that stores your logs.
      - B. Create a new CloudTrail trail with one new S3 bucket to store the logs. Configure SNS to send log file delivery notifications to your management system. Use IAM roles and S3 bucket policies on the S3 bucket that stores your logs.
      - C. Create a new CloudTrail trail with an existing S3 bucket to store the logs and with the global services option selected. Use S3 ACLs and Multi-Factor Authentication (MFA) Delete on the S3 bucket that stores your logs.
      - D. Create three new CloudTrail trails with three new S3 buckets to store the logs: one for the AWS Management Console, one for AWS SDKs, and one for command line tools. Use IAM roles and S3 bucket policies on the S3 buckets that store your logs.
  42. An enterprise wants to use a third-party SaaS application. The SaaS application needs to have access to issue several API commands to discover Amazon EC2 resources running within the enterprise's account. The enterprise has internal security policies that require any outside access to their environment to conform to the principles of least privilege, and there must be controls in place to ensure that the credentials used by the SaaS vendor cannot be used by any other third party. Which of the following would meet all of these conditions?
      - A. From the AWS Management Console, navigate to the Security Credentials page and retrieve the access and secret key for your account.
      - B. Create an IAM user within the enterprise account, assign a user policy to the IAM user that allows only the actions required by the SaaS application, create a new access and secret key for the user, and provide these credentials to the SaaS provider.
      - C. Create an IAM role for cross-account access, allow the SaaS provider's account to assume the role, and assign it a policy that allows only the actions required by the SaaS application.
      - D. Create an IAM role for EC2 instances, assign it a policy that allows only the actions required for the SaaS application to work, and provide the role ARN to the SaaS provider to use when launching their application instances.
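Option C is the standard cross-account pattern: the role's trust policy names the SaaS provider's account and, to stop the credentials being reused by any other third party (the confused-deputy problem), conditions the `sts:AssumeRole` call on an External ID. A sketch of such a trust policy; the account ID and external ID below are placeholders, not real values:

```python
SAAS_ACCOUNT_ID = "111122223333"      # hypothetical SaaS vendor account
EXTERNAL_ID = "example-external-id"   # shared secret supplied by the vendor

# Trust policy for the cross-account role: only the vendor's account may
# assume it, and only when presenting the agreed External ID.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{SAAS_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
    }],
}
```

The role's permissions policy would then grant only the `ec2:Describe*` actions the SaaS application actually needs.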
  43. You are designing a data leak prevention solution for your VPC environment. You want your VPC instances to be able to access software depots and distributions on the Internet for product updates. The depots and distributions are accessible via third-party CDNs by their URLs. You want to explicitly deny any other outbound connections from your VPC instances to hosts on the Internet. Which of the following options would you consider?
      - A. Configure a web proxy server in your VPC and enforce URL-based rules for outbound access. Remove default routes.
      - B. Implement security groups and configure outbound rules to only permit traffic to software depots.
      - C. Move all your instances into private VPC subnets, remove default routes from all routing tables, and add specific routes to the software depots and distributions only.
      - D. Implement network access control lists that allow specific destinations, with an implicit deny as a rule.
  44. An AWS customer is deploying an application that is composed of an Auto Scaling group of EC2 instances. The customer's security policy requires that every outbound connection from these instances to any other service within the customer's Virtual Private Cloud must be authenticated using a unique X.509 certificate that contains the specific instance ID. Also, an X.509 certificate must be signed by the customer's key management service in order to be trusted for authentication. Which of the following configurations will support these requirements?
      - A. Configure an IAM role that grants access to an Amazon S3 object containing a signed certificate, and configure the Auto Scaling group to launch instances with this role. Have the instances bootstrap get the certificate from Amazon S3 upon first boot.
      - B. Embed a certificate into the Amazon Machine Image that is used by the Auto Scaling group. Have the launched instances generate a certificate signature request with the instance's assigned instance ID to the key management service for signature.
      - C. Configure the Auto Scaling group to send an SNS notification of the launch of a new instance to the trusted key management service. Have the key management service generate a signed certificate and send it directly to the newly launched instance.
      - D. Configure the launched instances to generate a new certificate upon first boot. Have the key management service poll the Auto Scaling group for associated instances and send new instances a certificate signature (that contains the specific instance ID).
  45. An administrator is using Amazon CloudFormation to deploy a three-tier web application that consists of a web tier and an application tier that will utilize Amazon DynamoDB for storage. When creating the CloudFormation template, which of the following would allow the application instances access to the DynamoDB tables without exposing API credentials? A. Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table, and associate the Role to the application instances by referencing an instance profile. B. Use the Parameters section in the CloudFormation template to have the user input the access and secret keys from an already-created IAM user that has the permissions required to read and write from the required DynamoDB table. C. Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table, and reference the Role in the instance profile property of the application instances. D. Create an Identity and Access Management user in the CloudFormation template that has permissions to read and write from the required DynamoDB table, use the GetAtt function to retrieve the access and secret keys, and pass them to the application instances through user data.
  46. Your company has recently extended its datacenter into a VPC on AWS to add burst computing capacity as needed. Members of your Network Operations Center need to be able to go to the AWS Management Console and administer Amazon EC2 instances as necessary. You don't want to create new IAM users for each NOC member and make those users sign in again to the AWS Management Console. Which option below will meet the needs of your NOC members? A. Use OAuth 2.0 to retrieve temporary AWS security credentials to enable your NOC members to sign in to the AWS Management Console. B. Use web identity federation to retrieve temporary AWS security credentials to enable your NOC members to sign in to the AWS Management Console. C. Use your on-premises SAML 2.0-compliant identity provider (IdP) to grant the NOC members federated access to the AWS Management Console via the AWS single sign-on (SSO) endpoint. D. Use your on-premises SAML 2.0-compliant identity provider (IdP) to retrieve temporary security credentials to enable NOC members to sign in to the AWS Management Console.
  47. You are designing a photo-sharing mobile app. The application will store all pictures in a single Amazon S3 bucket. Users will upload pictures from their mobile devices directly to Amazon S3 and will be able to view and download their own pictures directly from Amazon S3. You want to configure security to handle potentially millions of users in the most secure manner possible. What should your server-side application do when a new user registers on the photo-sharing mobile application? A. Record the user's information in Amazon RDS and create a role in IAM with appropriate permissions. When the user uses their mobile app, create temporary credentials using the AWS Security Token Service 'AssumeRole' function. Store these credentials in the mobile app's memory and use them to access Amazon S3. Generate new credentials the next time the user runs the mobile app. B. Record the user's information in Amazon DynamoDB. When the user uses their mobile app, create temporary credentials using the AWS Security Token Service with appropriate permissions. Store these credentials in the mobile app's memory and use them to access Amazon S3. Generate new credentials the next time the user runs the mobile app. C. Create an IAM user. Assign appropriate permissions to the IAM user. Generate an access key and secret key for the IAM user, store them in the mobile app, and use these credentials to access Amazon S3. D. Create an IAM user. Update the bucket policy with appropriate permissions for the IAM user. Generate an access key and secret key for the IAM user, store them in the mobile app, and use these credentials to access Amazon S3.
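The per-user temporary-credentials pattern in this question rests on scoping each STS grant down to that user's own key prefix. A minimal sketch of building such a scoped-down policy document (the bucket layout, with one prefix per user ID, is an assumption for illustration; the JSON would typically be passed as the `Policy` parameter of an STS call):

```python
import json

def build_user_policy(bucket, user_id):
    """Return a JSON IAM policy limiting access to one user's S3 prefix.
    Hypothetical layout: each user's photos live under <user_id>/ in the bucket."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {   # read/write only under the user's own prefix
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": f"arn:aws:s3:::{bucket}/{user_id}/*",
            },
            {   # listing is allowed, but only for keys under that prefix
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{bucket}",
                "Condition": {"StringLike": {"s3:prefix": f"{user_id}/*"}},
            },
        ],
    }
    return json.dumps(policy)
```

Because the policy is generated per user at token-issue time, the temporary credentials stored in the app's memory can only ever touch that user's objects, even though all photos share one bucket.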
  48. You are designing an SSL/TLS solution that requires HTTPS clients to be authenticated by the web server using client certificate authentication. The solution must be resilient. Which of the following options would you consider for configuring the web server infrastructure? (Choose 2 answers) A. Configure ELB with TCP listeners on TCP/443, and place the web servers behind it. B. Configure your web servers with EIPs, place the web servers in a Route 53 record set, and configure health checks against all web servers. C. Configure ELB with HTTPS listeners, and place the web servers behind it. D. Configure your web servers as the origins for a CloudFront distribution. Use custom SSL certificates on your CloudFront distribution.
  49. You have a periodic image analysis application that takes files as input, analyzes them, and for each file writes some output data to a text file. The number of input files per day is high and concentrated in a few hours of the day. Currently you have a server on EC2 with a large EBS volume that hosts the input data and the results, and it takes almost 20 hours per day to complete the process. What services could be used to reduce the processing time and improve the availability of the solution? A. S3 to store I/O files; SQS to distribute processing commands to a group of hosts working in parallel; Auto Scaling to dynamically size the group of hosts depending on the length of the SQS queue. B. EBS with Provisioned IOPS (PIOPS) to store I/O files; SNS to distribute processing commands to a group of hosts working in parallel; Auto Scaling to dynamically size the group of hosts depending on the number of SNS notifications. C. S3 to store I/O files; SNS to distribute processing commands to a group of hosts working in parallel; Auto Scaling to dynamically size the group of hosts depending on the number of SNS notifications. D. EBS with Provisioned IOPS (PIOPS) to store I/O files; SQS to distribute processing commands to a group of hosts working in parallel; Auto Scaling to dynamically size the group of hosts depending on the length of the SQS queue.
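Scaling a worker fleet on SQS queue length, as option A describes, boils down to a backlog-drain calculation: given the queue depth (e.g. from the `ApproximateNumberOfMessagesVisible` CloudWatch metric), choose enough hosts to drain it within a target time. A sketch under assumed numbers (the per-host throughput and clamps are hypothetical tuning values):

```python
import math

def desired_hosts(queue_length, msgs_per_host_per_min=50,
                  target_minutes=10, min_hosts=1, max_hosts=20):
    """Size the worker fleet so the SQS backlog drains within target_minutes.
    Clamped to [min_hosts, max_hosts] as an Auto Scaling group would be."""
    if queue_length <= 0:
        return min_hosts
    capacity_per_host = msgs_per_host_per_min * target_minutes
    needed = math.ceil(queue_length / capacity_per_host)
    return max(min_hosts, min(max_hosts, needed))
```

In practice this logic lives in CloudWatch alarms (or target tracking) driving the Auto Scaling group, rather than in application code, but the arithmetic is the same.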
  50. You are designing an intrusion detection/prevention (IDS/IPS) solution for a customer web application in a single VPC. You are considering the options for implementing IDS/IPS protection for traffic coming from the Internet. Which of the following options would you consider? (Choose 2 answers) A. Implement IDS/IPS agents on each instance running in the VPC. B. Configure an instance in each subnet to switch its network interface card to promiscuous mode and analyze network traffic. C. Implement Elastic Load Balancing with SSL listeners in front of the web applications. D. Implement a reverse proxy layer in front of the web servers and configure IDS/IPS agents on each reverse proxy server.
  51. Your Fortune 500 company has undertaken a TCO analysis evaluating the use of Amazon S3 versus acquiring more hardware. The outcome was that all employees would be granted access to use Amazon S3 for storage of their personal documents. Which of the following will you need to consider so you can set up a solution that incorporates single sign-on from your corporate AD or LDAP directory and restricts access for each user to a designated user folder in a bucket? (Choose 3 answers) A. Setting up a federation proxy or identity provider B. Using AWS Security Token Service to generate temporary tokens C. Tagging each folder in the bucket D. Configuring an IAM role E. Setting up a matching IAM user for every user in your corporate directory that needs access to a folder in the bucket
  52. You have an application running on an EC2 instance that will allow users to download files from a private S3 bucket using a pre-signed URL. Before generating the URL, the application should verify the existence of the file in S3. How should the application use AWS credentials to access the S3 bucket securely? A. Use the AWS account access keys; the application retrieves the credentials from the source code of the application. B. Create an IAM user for the application with permissions that allow list access to the S3 bucket; launch the instance as the IAM user and retrieve the IAM user's credentials from the EC2 instance user data. C. Create an IAM role for EC2 that allows list access to objects in the S3 bucket. Launch the instance with the role, and retrieve the role's credentials from the EC2 instance metadata. D. Create an IAM user for the application with permissions that allow list access to the S3 bucket. The application retrieves the IAM user credentials from a temporary directory with permissions that allow read access only to the application user.
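The pre-signed URL mechanism at the heart of this question is an expiry plus a signature over the request, so the bucket can stay private while a link works for a limited time. A deliberately simplified stdlib sketch of the idea (this is NOT AWS Signature Version 4; a real application would call boto3's `generate_presigned_url`, signing with the role credentials the SDK fetches automatically from instance metadata):

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

def make_signed_url(bucket, key, secret, expires_in=300, now=None):
    """Illustrative signed URL: an expiry timestamp plus an HMAC over the
    method, resource path, and expiry. The server would recompute the HMAC
    on each request and reject expired or tampered URLs."""
    expires = (now if now is not None else int(time.time())) + expires_in
    to_sign = f"GET\n/{bucket}/{key}\n{expires}".encode()
    signature = hmac.new(secret, to_sign, hashlib.sha256).hexdigest()
    query = urlencode({"Expires": expires, "Signature": signature})
    return f"https://{bucket}.s3.amazonaws.com/{key}?{query}"
```

The key design point the question tests is unchanged by the simplification: the signing secret never ships in source code or user data, it is obtained as short-lived role credentials from the instance metadata service.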
  53. You are designing a social media site and are considering how to mitigate distributed denial-of-service (DDoS) attacks. Which of the below are viable mitigation techniques? (Choose 3 answers) A. Add multiple elastic network interfaces (ENIs) to each EC2 instance to increase the network bandwidth. B. Use dedicated instances to ensure that each instance has the maximum performance possible. C. Use an Amazon CloudFront distribution for both static and dynamic content. D. Use an Elastic Load Balancer with Auto Scaling groups at the web, app, and Amazon Relational Database Service (RDS) tiers. E. Add Amazon CloudWatch alarms to look for high Network In and CPU utilization. F. Create processes and capabilities to quickly add and remove rules to the instance OS firewall.
  54. You require the ability to analyze a customer's clickstream data on a website so they can do behavioral analysis. Your customer needs to know what sequence of pages and ads their customers clicked on. This data will be used in real time to modify the page layouts as customers click through the site, to increase stickiness and advertising click-through. Which option meets the requirements for capturing and analyzing this data? A. Log clicks in weblogs by URL, store to Amazon S3, and then analyze with Elastic MapReduce. B. Push web clicks by session to Amazon Kinesis and analyze behavior using Kinesis workers. C. Write click events directly to Amazon Redshift and then analyze with SQL. D. Publish web clicks by session to an Amazon SQS queue, then periodically drain these events to Amazon RDS and analyze with SQL.
  55. An AWS customer runs a public blogging website. The site's users upload two million blog entries a month. The average blog entry size is 200 KB. The access rate to blog entries drops to negligible 6 months after publication, and users rarely access a blog entry 1 year after publication. Additionally, blog entries have a high update rate during the first 3 months following publication; this drops to no updates after 6 months. The customer wants to use CloudFront to improve their users' load times. Which of the following recommendations would you make to the customer? A. Duplicate entries into two different buckets and create two separate CloudFront distributions where S3 access is restricted only to the CloudFront identity. B. Create a CloudFront distribution with the 'US/Europe' price class for US/Europe users and a different CloudFront distribution with 'All Edge Locations' for the remaining users. C. Create a CloudFront distribution with S3 access restricted only to the CloudFront identity, and partition the blog entries' location in S3 according to the month they were uploaded, to be used with CloudFront behaviors. D. Create a CloudFront distribution with Restrict Viewer Access, Forward Query String set to true, and a minimum TTL of 0.
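The month-based partitioning in option C works because CloudFront cache behaviors match on path patterns, so keys prefixed by publication month can get different TTLs (short TTLs for recent, frequently-updated months; long TTLs for old, static ones). A sketch of such a key scheme (the exact layout is a hypothetical example, not from the question):

```python
from datetime import date

def blog_key(entry_id, published):
    """Prefix each blog entry's S3 key with its publication month, so a
    CloudFront path pattern like /2016/03/* can carry its own cache TTL."""
    return f"{published.year}/{published.month:02d}/{entry_id}.html"
```

With keys laid out this way, adding a new behavior each month (and aging out old ones) is a routing-configuration change, not a data migration.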
  56. A company is running a batch analysis every hour on their main transactional DB, running on an RDS MySQL instance, to populate their central data warehouse running on Redshift. During the execution of the batch, their transactional applications are very slow. When the batch completes, they need to update the top management dashboard with the new data. The dashboard is produced by another system running on-premises that is currently started when a manually-sent email notifies that an update is required. The on-premises system cannot be modified because it is managed by another team. How would you optimize this scenario to solve the performance issues and automate the process as much as possible? A. Replace RDS with Redshift for the batch analysis and use SNS to notify the on-premises system to update the dashboard. B. Replace RDS with Redshift for the batch analysis and use SQS to send a message to the on-premises system to update the dashboard. C. Create an RDS Read Replica for the batch analysis and use SNS to notify the on-premises system to update the dashboard. D. Create an RDS Read Replica for the batch analysis and use SQS to send a message to the on-premises system to update the dashboard.
  57. You are implementing a URL whitelisting system for a company that wants to restrict outbound HTTPS connections to specific domains from their EC2-hosted applications. You deploy a single EC2 instance running proxy software and configure it to accept traffic from all subnets and EC2 instances in the VPC. You configure the proxy to only pass through traffic to domains that you define in its whitelist configuration. You have a nightly maintenance window of 10 minutes where all instances fetch new software updates. Each update is about 200 MB in size, and there are 500 instances in the VPC that routinely fetch updates. After a few days you notice that some machines are failing to successfully download some, but not all, of their updates within the maintenance window. The download URLs used for these updates are correctly listed in the proxy's whitelist configuration, and you are able to access them manually using a web browser on the instances. What might be happening? (Choose 2 answers) A. You are running the proxy on an undersized EC2 instance type, so network throughput is not sufficient for all instances to download their updates in time. B. You have not allocated enough storage to the EC2 instance running the proxy, so the network buffer is filling up, causing some requests to fail. C. You are running the proxy in a public subnet but have not allocated enough EIPs to support the needed network throughput through the Internet Gateway (IGW). D. You are running the proxy on a sufficiently-sized EC2 instance in a private subnet, and its network throughput is being throttled by a NAT running on an undersized EC2 instance. E. The route table for the subnets containing the affected EC2 instances is not configured to direct network traffic for the software update locations to the proxy.
  58. To serve web traffic for a popular product, your chief financial officer and IT director have purchased 10 m1.large Heavy Utilization Reserved Instances (RIs) evenly spread across two Availability Zones; Route 53 is used to deliver the traffic to an Elastic Load Balancer (ELB). After several months, the product grows even more popular and you need additional capacity. As a result, your company purchases two c3.2xlarge Medium Utilization RIs. You register the two c3.2xlarge instances with your ELB and quickly find that the m1.large instances are at 100% of capacity and the c3.2xlarge instances have significant unused capacity. Which option is the most cost-effective and uses EC2 capacity most effectively? A. Use a separate ELB for each instance type and distribute load to the ELBs with Route 53 weighted round robin. B. Configure an Auto Scaling group and Launch Configuration with the ELB to add up to 10 more on-demand m1.large instances when triggered by CloudWatch; shut off the c3.2xlarge instances. C. Route traffic to the EC2 m1.large and c3.2xlarge instances directly using Route 53 latency-based routing and health checks; shut off the ELB. D. Configure the ELB with the two c3.2xlarge instances and use an on-demand Auto Scaling group for up to two additional c3.2xlarge instances; shut off the m1.large instances.
  59. A read-only news reporting site with a combined web and application tier and a database tier that receives large and unpredictable traffic demands must be able to respond to these traffic fluctuations automatically. What AWS services should be used to meet these requirements? A. Stateless instances for the web and application tier synchronized using ElastiCache Memcached, in an Auto Scaling group monitored with CloudWatch, and RDS with read replicas. B. Stateful instances for the web and application tier in an Auto Scaling group monitored with CloudWatch, and RDS with read replicas. C. Stateful instances for the web and application tier in an Auto Scaling group monitored with CloudWatch, and multi-AZ RDS. D. Stateless instances for the web and application tier synchronized using ElastiCache Memcached, in an Auto Scaling group monitored with CloudWatch, and multi-AZ RDS.
  60. You are running a news website in the eu-west-1 region that updates every 15 minutes. The website has a worldwide audience. It uses an Auto Scaling group behind an Elastic Load Balancer and an Amazon RDS database. Static content resides on Amazon S3 and is distributed through Amazon CloudFront. Your Auto Scaling group is set to trigger a scale-up event at 60% CPU utilization. You use an Amazon RDS extra large DB instance with 10,000 Provisioned IOPS; its CPU utilization is around 80%, while freeable memory is in the 2 GB range. Web analytics reports show that the average load time of your web pages is around 1.5 to 2 seconds, but your SEO consultant wants to bring the average load time down to under 0.5 seconds. How would you improve page load times for your users? (Choose 3 answers) A. Lower the scale-up trigger of your Auto Scaling group to 30% so it scales more aggressively. B. Add an Amazon ElastiCache caching layer to your application for storing sessions and frequent DB queries. C. Configure Amazon CloudFront dynamic content support to enable caching of re-usable content from your site. D. Switch the Amazon RDS database to the high-memory extra large instance type. E. Set up a second installation in another region, and use the Amazon Route 53 latency-based routing feature to select the right region.
  61. A large real-estate brokerage is exploring the option of adding a cost-effective location-based alert to their existing mobile application. The application backend infrastructure currently runs on AWS. Users who opt in to this service will receive alerts on their mobile device regarding real-estate offers in proximity to their location. For the alerts to be relevant, delivery time needs to be in the low minute count; the existing mobile app has 5 million users across the US. Which one of the following architectural suggestions would you make to the customer? A. The mobile application will submit its location to a web service endpoint utilizing Elastic Load Balancing and EC2 instances; DynamoDB will be used to store and retrieve relevant offers; EC2 instances will communicate with mobile carriers/device providers to push alerts back to the mobile application. B. Use AWS Direct Connect or VPN to establish connectivity with mobile carriers; EC2 instances will receive the mobile applications' location through the carrier connection; RDS will be used to store and retrieve relevant offers; EC2 instances will communicate with mobile carriers to push alerts back to the mobile application. C. The mobile application will send device location using SQS; EC2 instances will retrieve the relevant offers from DynamoDB; AWS Mobile Push will be used to send offers to the mobile application. D. The mobile application will send device location using AWS Mobile Push; EC2 instances will retrieve the relevant offers from DynamoDB; EC2 instances will communicate with mobile carriers/device providers to push alerts back to the mobile application.
  62. A company is building a voting system for a popular TV show; viewers will watch the performances, then visit the show's website to vote for their favorite performer. It is expected that in a short period of time after the show has finished, the site will receive millions of visitors. The visitors will first log in to the site using their Amazon.com credentials and then submit their vote. After the voting is completed, the page will display the vote totals. The company needs to build the site such that it can handle the rapid influx of traffic while maintaining good performance, but also wants to keep costs to a minimum. Which of the design patterns below should they use? A. Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login with Amazon service to authenticate the user, then process the user's vote and store the result into a multi-AZ Relational Database Service instance. B. Use CloudFront and the static website hosting feature of S3 with the JavaScript SDK to call the Login with Amazon service to authenticate the user; use IAM Roles to gain permissions to a DynamoDB table to store the user's vote. C. Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login with Amazon service to authenticate the user; the web servers will process the user's vote and store the result into a DynamoDB table, using IAM Roles for EC2 instances to gain permissions to the DynamoDB table. D. Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login with Amazon service to authenticate the user; the web servers will process the user's vote and store the result into an SQS queue, using IAM Roles for EC2 instances to gain permissions to the SQS queue. A set of application servers will then retrieve the items from the queue and store the result into a DynamoDB table.
  63. You are developing a new mobile application and are considering storing user preferences in AWS. This would provide a more uniform cross-device experience to users using multiple mobile devices to access the application. The preference data for each user is estimated to be 50 KB in size. Additionally, 5 million customers are expected to use the application on a regular basis. The solution needs to be cost-effective, highly available, scalable, and secure. How would you design a solution to meet the above requirements? A. Set up an RDS MySQL instance in 2 Availability Zones to store the user preference data. Deploy a public-facing application on a server in front of the database to manage security and access credentials. B. Set up a DynamoDB table with an item for each user having the necessary attributes to hold the user preferences. The mobile application will query the user preferences directly from the DynamoDB table. Utilize STS, Web Identity Federation, and DynamoDB Fine-Grained Access Control to authenticate and authorize access. C. Set up an RDS MySQL instance with multiple read replicas in 2 Availability Zones to store the user preference data. The mobile application will query the user preferences from the read replicas. Leverage the MySQL user management and access privilege system to manage security and access credentials. D. Store the user preference data in S3. Set up a DynamoDB table with an item for each user and an item attribute pointing to the user's S3 object. The mobile application will retrieve the S3 URL from DynamoDB and then access the S3 object directly. Utilize STS, Web Identity Federation, and S3 ACLs to authenticate and authorize access.
  64. Your team has a Tomcat-based Java application you need to deploy into development, test, and production environments. After some research, you opt to use Elastic Beanstalk due to its tight integration with your developer tools, and RDS due to its ease of management. Your QA team lead points out that you need to roll a sanitized set of production data into your environment on a nightly basis. Similarly, other software teams in your org want access to that same restored data via their EC2 instances in your VPC. The optimal setup for persistence and security that meets the above requirements would be the following: A. Create your RDS instance as part of your Elastic Beanstalk definition and alter its security group to allow access to it from hosts in your application subnets. B. Create your RDS instance separately and add its IP address to your application's DB connection strings in your code. Alter its security group to allow access to it from hosts within your VPC's IP address block. C. Create your RDS instance separately and pass its DNS name to your app's DB connection string as an environment variable. Create a security group for client machines and add it as a valid source for DB traffic to the security group of the RDS instance itself. D. Create your RDS instance separately and pass its DNS name to your app's DB connection string as an environment variable. Alter its security group to allow access to it from hosts in your application subnets.
  65. You deployed your company website using Elastic Beanstalk and enabled log file rotation to S3. An Elastic MapReduce job periodically analyzes the logs on S3 to build a usage dashboard that you share with your CIO. You recently improved overall performance of the website using CloudFront for dynamic content delivery, with your website as the origin. After this architectural change, the usage dashboard shows that the traffic on your website dropped by an order of magnitude. How do you fix your usage dashboard? A. Enable CloudFront to deliver access logs to S3 and use them as input to the Elastic MapReduce job. B. Turn on CloudTrail and use the trail log files on S3 as input to the Elastic MapReduce job. C. Change your log collection process to use CloudWatch ELB metrics as input to the Elastic MapReduce job. D. Use the Elastic Beanstalk 'Rebuild Environment' option to update log delivery to the Elastic MapReduce job. E. Use the Elastic Beanstalk 'Restart App Server(s)' option to update log delivery to the Elastic MapReduce job.
  66. You are running a successful multitier web application on AWS, and your marketing department has asked you to add a reporting tier to the application. The reporting tier will aggregate and publish status reports every 30 minutes from user-generated information that is being stored in your web application's database. You are currently running a Multi-AZ RDS MySQL instance for the database tier. You have also implemented ElastiCache as a database caching layer between the application tier and the database tier. Please select the answer that will allow you to successfully implement the reporting tier with as little impact as possible to your database. A. Continually send transaction logs from your master database to an S3 bucket and generate the reports off the S3 bucket using S3 byte-range requests. B. Generate the reports by querying the synchronously replicated standby RDS MySQL instance maintained through Multi-AZ. C. Launch an RDS Read Replica connected to your Multi-AZ master database and generate reports by querying the Read Replica. D. Generate the reports by querying the ElastiCache database caching tier.
  1. You have a business-to-business web application running in a VPC consisting of an Elastic Load Balancer (ELB), web servers, application servers and a database. Your web application should only accept traffic from predefined customer IP addresses. Which two options meet this security requirement? Choose 2 answers
    1. Configure web server VPC security groups to allow traffic from your customers’ IPs (Web server is behind the ELB and customer IPs will never reach web servers)
    2. Configure your web servers to filter traffic based on the ELB’s “X-Forwarded-For” header (read the original customer IPs from the header and apply a custom filter to restrict access)
    3. Configure ELB security groups to allow traffic from your customers’ IPs and deny all outbound traffic (the ELB sees the customer IPs, so it can restrict access; denying all outbound simply means having no outbound rules, which is the implicit default, and since security groups are stateful, return traffic still flows)
    4. Configure a VPC NACL to allow web traffic from your customers’ IPs and deny all outbound traffic (NACL is stateless, deny all will not work)
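The X-Forwarded-For filtering in option 2 can be sketched concretely: the ELB appends the original client IP to the header, so the web server checks the left-most entry against an allow list (the CIDRs below are hypothetical documentation ranges):

```python
import ipaddress

# Hypothetical customer allow list
ALLOWED = [ipaddress.ip_network(n) for n in ("203.0.113.0/24", "198.51.100.7/32")]

def client_allowed(x_forwarded_for):
    """The ELB appends the client IP to X-Forwarded-For; the left-most
    entry is the original client, later entries are intermediate proxies."""
    client = ipaddress.ip_address(x_forwarded_for.split(",")[0].strip())
    return any(client in net for net in ALLOWED)
```

Note that trusting the left-most entry is only safe when the header is set by an ELB you control; clients reaching the server directly could forge it, which is why the ELB security group option is the stronger control.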
  2. A user has created a VPC with public and private subnets using the VPC Wizard. The VPC has CIDR 20.0.0.0/16. The private subnet uses CIDR 20.0.0.0/24. Which of the below mentioned entries are required in the main route table to allow the instances in VPC to communicate with each other?
    1. Destination : 20.0.0.0/24 and Target : VPC
    2. Destination : 20.0.0.0/16 and Target : ALL
    3. Destination : 20.0.0.0/0 and Target : ALL
    4. Destination : 20.0.0.0/16 and Target : Local
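Route-table questions like this one reduce to longest-prefix matching: the implicit "Local" route for the whole VPC CIDR (option 4) is what lets instances reach each other, and more specific routes win over less specific ones. A minimal sketch of the lookup logic:

```python
import ipaddress

def route_target(dest_ip, routes):
    """Return the target of the most specific (longest-prefix) route
    whose destination CIDR contains dest_ip. `routes` maps CIDR -> target."""
    ip = ipaddress.ip_address(dest_ip)
    matches = [(ipaddress.ip_network(cidr), target)
               for cidr, target in routes.items()
               if ip in ipaddress.ip_network(cidr)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]
```

For example, with the question's VPC routes, intra-VPC traffic matches the local route while everything else falls through to a gateway route.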
  3. A user has created a VPC with two subnets: one public and one private. The user is planning to run the patch update for the instances in the private subnet. How can the instances in the private subnet connect to the internet?
    1. Use the internet gateway with a private IP
    2. Allow outbound traffic in the security group for port 80 to allow internet updates
    3. The private subnet can never connect to the internet
    4. Use NAT with an elastic IP
  4. A user has launched an EC2 instance and installed a website with the Apache webserver. The webserver is running but the user is not able to access the website from the Internet. What can be the possible reason for this failure?
    1. The security group of the instance is not configured properly.
    2. The instance is not configured with the proper key-pairs.
    3. The Apache website cannot be accessed from the Internet.
    4. Instance is not configured with an elastic IP.
  5. A user has created a VPC with public and private subnets using the VPC wizard. Which of the below mentioned statements is true in this scenario?
    1. AWS VPC will automatically create a NAT instance with the micro size
    2. VPC bounds the main route table with a private subnet and a custom route table with a public subnet
    3. User has to manually create a NAT instance
    4. VPC bounds the main route table with a public subnet and a custom route table with a private subnet
  6. A user has created a VPC with public and private subnets. The VPC has CIDR 20.0.0.0/16. The private subnet uses CIDR 20.0.1.0/24 and the public subnet uses CIDR 20.0.0.0/24. The user is planning to host a web server in the public subnet (port 80) and a DB server in the private subnet (port 3306). The user is configuring a security group of the NAT instance. Which of the below mentioned entries is not required for the NAT security group?
    1. For Inbound allow Source: 20.0.1.0/24 on port 80
    2. For Outbound allow Destination: 0.0.0.0/0 on port 80
    3. For Inbound allow Source: 20.0.0.0/24 on port 80
    4. For Outbound allow Destination: 0.0.0.0/0 on port 443
  7. A user has created a VPC with CIDR 20.0.0.0/24. The user has used all the IPs of CIDR and wants to increase the size of the VPC. The user has two subnets: public (20.0.0.0/25) and private (20.0.0.128/25). How can the user change the size of the VPC?
    1. The user can delete all the instances of the subnet. Change the size of the subnets to 20.0.0.0/32 and 20.0.1.0/32, respectively. Then the user can increase the size of the VPC using CLI
    2. It is not possible to change the size of the VPC once it has been created
    3. User can add a subnet with a higher range so that it will automatically increase the size of the VPC
    4. User can delete the subnets first and then modify the size of the VPC
  8. A user has created a VPC with the public and private subnets using the VPC wizard. The VPC has CIDR 20.0.0.0/16. The public subnet uses CIDR 20.0.1.0/24. The user is planning to host a web server in the public subnet (port 80) and a DB server in the private subnet (port 3306). The user is configuring a security group for the public subnet (WebSecGrp) and the private subnet (DBSecGrp). Which of the below mentioned entries is required in the web server security group (WebSecGrp)?
    1. Configure Destination as DB Security group ID (DbSecGrp) for port 3306 Outbound
    2. Configure port 80 for Destination 0.0.0.0/0 Outbound
    3. Configure port 3306 for source 20.0.0.0/24 InBound
    4. Configure port 80 InBound for source 20.0.0.0/16
  9. A user has created a VPC with CIDR 20.0.0.0/16. The user has created one subnet with CIDR 20.0.0.0/16 by mistake. The user is trying to create another subnet of CIDR 20.0.1.0/24. How can the user create the second subnet?
    1. There is no need to update the subnet as VPC automatically adjusts the CIDR of the first subnet based on the second subnet’s CIDR
    2. The user can modify the first subnet CIDR from the console
    3. It is not possible to create a second subnet as one subnet with the same CIDR as the VPC has been created
    4. The user can modify the first subnet CIDR with AWS CLI
  10. A user has set up a VPC with CIDR 20.0.0.0/16. The VPC has a private subnet (20.0.1.0/24) and a public subnet (20.0.0.0/24). The user’s data centre has CIDRs of 20.0.54.0/24 and 20.1.0.0/24. If the private subnet wants to communicate with the data centre, what will happen?
    1. It will allow traffic communication on both the CIDRs of the data centre
    2. It will not allow traffic with data centre on CIDR 20.1.0.0/24 but allows traffic communication on 20.0.54.0/24
    3. It will not allow traffic communication on any of the data centre CIDRs
    4. It will allow traffic with data centre on CIDR 20.1.0.0/24 but does not allow on 20.0.54.0/24 (as the CIDR block would be overlapping)
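The overlap check behind this question can be sketched with Python's standard `ipaddress` module (an illustration, not part of the exam question):

```python
import ipaddress

vpc = ipaddress.ip_network("20.0.0.0/16")
dc_1 = ipaddress.ip_network("20.0.54.0/24")  # falls inside the VPC block
dc_2 = ipaddress.ip_network("20.1.0.0/24")   # outside the VPC block

# Traffic to a data-centre prefix that overlaps the VPC CIDR is dropped,
# because the VPC's local route always wins for its own block.
overlaps_1 = vpc.overlaps(dc_1)  # True: 20.0.54.0/24 traffic is dropped
overlaps_2 = vpc.overlaps(dc_2)  # False: 20.1.0.0/24 is routable over the VPN
print(overlaps_1, overlaps_2)
```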
  11. A user has created a VPC with public and private subnets using the VPC wizard. The VPC has CIDR 20.0.0.0/16. The private subnet uses CIDR 20.0.0.0/24. The NAT instance ID is i-a12345. Which of the below mentioned entries are required in the main route table attached with the private subnet to allow instances to connect with the internet?
    1. Destination: 0.0.0.0/0 and Target: i-a12345
    2. Destination: 20.0.0.0/0 and Target: 80
    3. Destination: 20.0.0.0/0 and Target: i-a12345
    4. Destination: 20.0.0.0/24 and Target: i-a12345
  12. A user has created a VPC with CIDR 20.0.0.0/16 using the wizard. The user has created a public subnet CIDR (20.0.0.0/24) and VPN only subnets CIDR (20.0.1.0/24) along with the VPN gateway (vgw-12345) to connect to the user’s data centre. The user’s data centre has CIDR 172.28.0.0/12. The user has also set up a NAT instance (i-123456) to allow traffic to the internet from the VPN subnet. Which of the below mentioned options is not a valid entry for the main route table in this scenario?
    1. Destination: 20.0.1.0/24 and Target: i-123456
    2. Destination: 0.0.0.0/0 and Target: i-123456
    3. Destination: 172.28.0.0/12 and Target: vgw-12345
    4. Destination: 20.0.0.0/16 and Target: local
  13. A user has created a VPC with CIDR 20.0.0.0/16. The user has created one subnet with CIDR 20.0.0.0/16 in this VPC. The user is trying to create another subnet in the same VPC for CIDR 20.0.1.0/24. What will happen in this scenario?
    1. The VPC will modify the first subnet CIDR automatically to allow the second subnet IP range
    2. It is not possible to create a subnet with the same CIDR as VPC
    3. The second subnet will be created
    4. It will throw a CIDR overlaps error
  14. A user has created a VPC with the public and VPN-Only subnets using the VPC wizard, along with hardware VPN access to connect to the user’s data centre. The user has not yet launched any instance, nor modified or deleted any of the setup. The user wants to delete this VPC from the console. Will the console allow the user to delete the VPC?
    1. Yes, the console will delete all the setups and also delete the virtual private gateway
    2. No, the console will ask the user to manually detach the virtual private gateway first and then allow deleting the VPC
    3. Yes, the console will delete all the setups and detach the virtual private gateway
    4. No, since the NAT instance is running
  15. A user has created a VPC with the public and private subnets using the VPC wizard. The VPC has CIDR 20.0.0.0/16. The public subnet uses CIDR 20.0.1.0/24. The user is planning to host a web server in the public subnet (port 80) and a DB server in the private subnet (port 3306). The user is configuring a security group for the public subnet (WebSecGrp) and the private subnet (DBSecGrp). Which of the below mentioned entries is required in the private subnet database security group (DBSecGrp)?
    1. Allow Inbound on port 3306 for Source Web Server Security Group (WebSecGrp)
    2. Allow Inbound on port 3306 from source 20.0.0.0/16
    3. Allow Outbound on port 3306 for Destination Web Server Security Group (WebSecGrp)
    4. Allow Outbound on port 80 for Destination NAT Instance IP
  16. A user has created a VPC with a subnet and a security group. The user has launched an instance in that subnet and attached a public IP. The user is still unable to connect to the instance. The internet gateway has also been created. What can be the reason for the error?
    1. The internet gateway is not configured with the route table
    2. The private IP is not present
    3. The outbound traffic on the security group is disabled
    4. The internet gateway is not configured with the security group
  17. A user has created a subnet in VPC and launched an EC2 instance within it. The user has not selected the option to assign a public IP address while launching the instance. Which of the below mentioned statements is true with respect to the instance requiring access to the Internet?
    1. The instance will always have a public DNS attached to the instance by default
    2. The user can directly attach an elastic IP to the instance
    3. The instance will never launch if the public IP is not assigned
    4. The user would need to create an internet gateway and then attach an elastic IP to the instance to connect from the Internet
  18. A user has created a VPC with public and private subnets using the VPC wizard. Which of the below mentioned statements is not true in this scenario?
    1. VPC will create a routing instance and attach it with a public subnet
    2. VPC will create two subnets
    3. VPC will create one internet gateway and attach it to VPC
    4. VPC will launch one NAT instance with an elastic IP
  19. A user has created a VPC with the public subnet. The user has created a security group for that VPC. Which of the below mentioned statements is true when a security group is created?
    1. It can connect to the AWS services, such as S3 and RDS by default
    2. It will have all the inbound traffic by default
    3. It will have all the outbound traffic by default
    4. It will by default allow traffic to the internet gateway
  20. A user has created a VPC with CIDR 20.0.0.0/16 using the VPC Wizard. The user has created a public CIDR (20.0.0.0/24) and a VPN only subnet CIDR (20.0.1.0/24) along with the hardware VPN access to connect to the user’s data centre. Which of the below mentioned components is not present when the VPC is set up with the wizard?
    1. Main route table attached with a VPN only subnet
    2. A NAT instance configured to allow the VPN subnet instances to connect with the internet
    3. Custom route table attached with a public subnet
    4. An internet gateway for a public subnet
  21. A user has created a VPC with public and private subnets using the VPC wizard. The user has not launched any instance manually and is trying to delete the VPC. What will happen in this scenario?
    1. It will not allow the user to delete the VPC as it has subnets with route tables
    2. It will not allow the user to delete the VPC since it has a running route instance
    3. It will terminate the VPC along with all the instances launched by the wizard
    4. It will not allow the user to delete the VPC since it has a running NAT instance
  22. A user has created a public subnet with VPC and launched an EC2 instance within it. The user is trying to delete the subnet. What will happen in this scenario?
    1. It will delete the subnet and make the EC2 instance as a part of the default subnet
    2. It will not allow the user to delete the subnet until the instances are terminated
    3. It will delete the subnet as well as terminate the instances
    4. Subnet can never be deleted independently, but the user has to delete the VPC first
  23. A user has created a VPC with CIDR 20.0.0.0/24. The user has created a public subnet with CIDR 20.0.0.0/25 and a private subnet with CIDR 20.0.0.128/25. The user has launched one instance each in the private and public subnets. Which of the below mentioned options cannot be the correct IP address (private IP) assigned to an instance in the public or private subnet?
    1. 20.0.0.255
    2. 20.0.0.132
    3. 20.0.0.122
    4. 20.0.0.55
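AWS reserves the first four addresses and the last (broadcast) address of every subnet, which is why 20.0.0.255, the last address of 20.0.0.128/25, can never be assigned. A minimal sketch with Python's `ipaddress` module (the `assignable` helper name is illustrative):

```python
import ipaddress

def assignable(cidr: str) -> list:
    """Addresses AWS can actually assign in a subnet: all addresses
    except the first four and the last (reserved by AWS)."""
    addrs = list(ipaddress.ip_network(cidr))
    return addrs[4:-1]

public_hosts = assignable("20.0.0.0/25")     # 20.0.0.4 .. 20.0.0.126
private_hosts = assignable("20.0.0.128/25")  # 20.0.0.132 .. 20.0.0.254

# 20.0.0.255 is the last address of the private subnet, so it is reserved:
candidate = ipaddress.ip_address("20.0.0.255")
print(candidate in public_hosts or candidate in private_hosts)  # False
```

The other three options all fall inside the assignable ranges of one of the two subnets.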
  24. A user has created a VPC with CIDR 20.0.0.0/16. The user has created public and VPN only subnets along with hardware VPN access to connect to the user’s datacenter. The user wants to make sure that all traffic coming to the public subnet follows the organization’s proxy policy. How can the user make this happen?
    1. Setting up a NAT with the proxy protocol and configure that the public subnet receives traffic from NAT
    2. Setting up a proxy policy in the internet gateway connected with the public subnet
    3. It is not possible to set up the proxy policy for a public subnet
    4. Setting the route table and security group of the public subnet which receives traffic from a virtual private gateway
  25. A user has created a VPC with CIDR 20.0.0.0/16 using the wizard. The user has created a public subnet CIDR (20.0.0.0/24) and VPN only subnets CIDR (20.0.1.0/24) along with the VPN gateway (vgw-12345) to connect to the user’s data centre. Which of the below mentioned options is a valid entry for the main route table in this scenario?
    1. Destination: 20.0.0.0/24 and Target: vgw-12345
    2. Destination: 20.0.0.0/16 and Target: ALL
    3. Destination: 20.0.1.0/16 and Target: vgw-12345
    4. Destination: 0.0.0.0/0 and Target: vgw-12345
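VPC route tables select routes by longest-prefix match. Below is a small model of the main route table in this question, assuming the automatic local route plus the valid VPN entry (option 4); the lookup helper is illustrative, not an AWS API:

```python
import ipaddress

# Hypothetical main route table: the automatic 'local' route for the VPC
# CIDR, plus the valid catch-all entry pointing at the VPN gateway.
routes = [
    (ipaddress.ip_network("20.0.0.0/16"), "local"),
    (ipaddress.ip_network("0.0.0.0/0"), "vgw-12345"),
]

def lookup(dest: str) -> str:
    """Pick the target of the most specific route covering dest."""
    addr = ipaddress.ip_address(dest)
    candidates = [(net, target) for net, target in routes if addr in net]
    return max(candidates, key=lambda r: r[0].prefixlen)[1]

print(lookup("20.0.1.50"))   # local: inside the VPC CIDR
print(lookup("172.28.5.9"))  # vgw-12345: everything else goes to the VPN
```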
  26. When attached to an Amazon VPC, which two components provide connectivity with external networks? Choose 2 answers
    1. Elastic IPs (EIP) (Does not provide connectivity, public IP address will do as well)
    2. NAT Gateway (NAT) (Not Attached to VPC and still needs IGW)
    3. Internet Gateway (IGW)
    4. Virtual Private Gateway (VGW)
  27. You are attempting to connect to an instance in Amazon VPC without success. You have already verified that the VPC has an Internet Gateway (IGW), the instance has an associated Elastic IP (EIP), and correct security group rules are in place. Which VPC component should you evaluate next?
    1. The configuration of a NAT instance
    2. The configuration of the Routing Table
    3. The configuration of the internet Gateway (IGW)
    4. The configuration of SRC/DST checking
  28. If you want to launch Amazon Elastic Compute Cloud (EC2) instances and assign each instance a predetermined private IP address, you should:
    1. Assign a group or sequential Elastic IP address to the instances
    2. Launch the instances in a Placement Group
    3. Launch the instances in the Amazon virtual Private Cloud (VPC)
    4. Use standard EC2 instances since each instance gets a private Domain Name Service (DNS) already
    5. Launch the Instance from a private Amazon Machine image (AMI)
  29. A user has recently started using EC2. The user launched one EC2 instance in the default subnet in EC2-VPC. Which of the below mentioned options is not attached or available with the EC2 instance when it is launched?
    1. Public IP address
    2. Internet gateway
    3. Elastic IP
    4. Private IP address
  30. A user has created a VPC with CIDR 20.0.0.0/24. The user has created a public subnet with CIDR 20.0.0.0/25. The user is trying to create the private subnet with CIDR 20.0.0.128/25. Which of the below mentioned statements is true in this scenario?
    1. It will not allow the user to create the private subnet due to a CIDR overlap
    2. It will allow the user to create a private subnet with CIDR as 20.0.0.128/25
    3. This statement is wrong as AWS does not allow CIDR 20.0.0.0/25
    4. It will not allow the user to create a private subnet due to a wrong CIDR range
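The two /25 blocks in this question split the /24 VPC exactly in half with no overlap, which is easy to verify with Python's `ipaddress` module:

```python
import ipaddress

vpc = ipaddress.ip_network("20.0.0.0/24")
halves = list(vpc.subnets(new_prefix=25))
print(halves)  # [IPv4Network('20.0.0.0/25'), IPv4Network('20.0.0.128/25')]

# 20.0.0.128/25 is the second half of the /24, so creating it alongside
# 20.0.0.0/25 causes no CIDR overlap:
print(halves[0].overlaps(halves[1]))  # False
```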
  31. A user has created a VPC with CIDR 20.0.0.0/16 with only a private subnet and VPN connection using the VPC wizard. The user wants to connect to the instance in a private subnet over SSH. How should the user define the security rule for SSH?
    1. Allow Inbound traffic on port 22 from the user’s network
    2. The user has to create an instance in EC2 Classic with an elastic IP and configure the security group of a private subnet to allow SSH from that elastic IP
    3. The user can connect to an instance in a private subnet using the NAT instance
    4. Allow Inbound traffic on port 80 and 22 to allow the user to connect to a private subnet over the Internet
  32. A company wants to implement their website in a virtual private cloud (VPC). The web tier will use an Auto Scaling group across multiple Availability Zones (AZs). The database will use Multi-AZ RDS MySQL and should not be publicly accessible. What is the minimum number of subnets that need to be configured in the VPC?
    1. 1
    2. 2
    3. 3
    4. 4
  33. Which of the following are characteristics of Amazon VPC subnets? Choose 2 answers
    1. Each subnet maps to a single Availability Zone
    2. A CIDR block mask of /25 is the smallest range supported
    3. Instances in a private subnet can communicate with the Internet only if they have an Elastic IP.
    4. By default, all subnets can route between each other, whether they are private or public
    5. Each subnet spans at least 2 Availability zones to provide a high-availability environment
  34. You need to design a VPC for a web-application consisting of an Elastic Load Balancer (ELB), a fleet of web/application servers, and an RDS database. The entire infrastructure must be distributed over 2 Availability Zones. Which VPC configuration works while assuring the database is not available from the Internet?
    1. One public subnet for ELB, one public subnet for the web-servers, and one private subnet for the database
    2. One public subnet for ELB, two private subnets for the web-servers, two private subnets for RDS
    3. Two public subnets for ELB, two private subnets for the web-servers and two private subnets for RDS
    4. Two public subnets for ELB, two public subnets for the web-servers, and two public subnets for RDS
  35. You have deployed a three-tier web application in a VPC with a CIDR block of 10.0.0.0/28. You initially deploy two web servers, two application servers, two database servers, and one NAT instance for a total of seven EC2 instances. The web, application, and database servers are deployed across two Availability Zones (AZs). You also deploy an ELB in front of the two web servers, and use Route53 for DNS. Web traffic gradually increases in the first few days following the deployment, so you attempt to double the number of instances in each tier of the application to handle the new load. Unfortunately some of these new instances fail to launch. Which of the following could be the root cause? (Choose 2 answers) [PROFESSIONAL]
    1. The Internet Gateway (IGW) of your VPC has scaled-up adding more instances to handle the traffic spike, reducing the number of available private IP addresses for new instance launches.
    2. AWS reserves one IP address in each subnet’s CIDR block for Route53 so you do not have enough addresses left to launch all of the new EC2 instances.
    3. AWS reserves the first and the last private IP address in each subnet’s CIDR block so you do not have enough addresses left to launch all of the new EC2 instances.
    4. The ELB has scaled-up. Adding more instances to handle the traffic reducing the number of available private IP addresses for new instance launches
    5. AWS reserves the first four and the last IP address in each subnet’s CIDR block so you do not have enough addresses left to launch all of the new EC2 instances.
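The arithmetic behind the correct options: a /28 block contains only 16 addresses, and AWS reserves five per subnet (the first four plus the last), so the VPC cannot hold the doubled fleet. A quick check (assuming, for simplicity, a single subnet covering the whole /28):

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/28")
total = vpc.num_addresses      # 16 addresses in a /28
reserved = 5                   # first four + last address, reserved by AWS

usable = total - reserved
print(usable)                  # 11 assignable addresses at most

# Doubling the original seven instances would need 14 private addresses,
# so some of the new launches must fail:
needed = 7 * 2
print(needed > usable)         # True
```

With more than one subnet the shortfall only gets worse, since five addresses are reserved in each subnet.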
  36. A user wants to access RDS from an EC2 instance using IP addresses. Both RDS and EC2 are in the same region, but different AZs. Which of the below mentioned options help configure that the instance is accessed faster?
    1. Configure the Private IP of the Instance in RDS security group (Recommended as the data is transferred within the Amazon network and not through the internet – Refer link)
    2. Security group of EC2 allowed in the RDS security group
    3. Configuring the elastic IP of the instance in RDS security group
    4. Configure the Public IP of the instance in RDS security group
  37. In regards to VPC, select the correct statement:
    1. You can associate multiple subnets with the same Route Table.
    2. You can associate multiple subnets with the same Route Table, but you can’t associate a subnet with only one Route Table.
    3. You can’t associate multiple subnets with the same Route Table.
    4. None of these.
  38. You need to design a VPC for a web-application consisting of an ELB a fleet of web application servers, and an RDS DB. The entire infrastructure must be distributed over 2 AZ. Which VPC configuration works while assuring the DB is not available from the Internet?
    1. One Public Subnet for ELB, one Public Subnet for the web-servers, and one private subnet for the DB
    2. One Public Subnet for ELB, two Private Subnets for the web-servers, and two private subnets for the RDS
    3. Two Public Subnets for ELB, two Private Subnets for the web-servers, and two private subnets for the RDS
    4. Two Public Subnets for ELB, two Public Subnets for the web-servers, and two public subnets for the RDS
  39. You have an Amazon VPC with one private subnet and one public subnet with a Network Address Translator (NAT) server. You are creating a group of Amazon Elastic Cloud Compute (EC2) instances that configure themselves at startup via downloading a bootstrapping script from Amazon Simple Storage Service (S3) that deploys an application via GIT. Which setup provides the highest level of security?
    1. Amazon EC2 instances in private subnet, no EIPs, route outgoing traffic via the NAT
    2. Amazon EC2 instances in public subnet, no EIPs, route outgoing traffic via the Internet Gateway (IGW)
    3. Amazon EC2 instances in private subnet, assign EIPs, route outgoing traffic via the Internet Gateway (IGW)
    4. Amazon EC2 instances in public subnet, assign EIPs, route outgoing traffic via the NAT
  40. You have launched an Amazon Elastic Compute Cloud (EC2) instance into a public subnet with a primary private IP address assigned, an internet gateway is attached to the VPC, and the public route table is configured to send all Internet-based traffic to the Internet gateway. The instance security group is set to allow all outbound traffic, but the instance cannot access the Internet. Why is the Internet unreachable from this instance?
    1. The instance does not have a public IP address
    2. The Internet gateway security group must allow all outbound traffic.
    3. The instance security group must allow all inbound traffic.
    4. The instance “Source/Destination check” property must be enabled.
  41. You have an environment that consists of a public subnet using Amazon VPC and 3 instances that are running in this subnet. These three instances can successfully communicate with other hosts on the Internet. You launch a fourth instance in the same subnet, using the same AMI and security group configuration you used for the others, but find that this instance cannot be accessed from the internet. What should you do to enable Internet access?
    1. Deploy a NAT instance into the public subnet.
    2. Assign an Elastic IP address to the fourth instance
    3. Configure a publicly routable IP address in the host OS of the fourth instance.
    4. Modify the routing table for the public subnet.
  42. You have a load balancer configured for VPC, and all back-end Amazon EC2 instances are in service. However, your web browser times out when connecting to the load balancer’s DNS name. Which options are probable causes of this behavior? Choose 2 answers
    1. The load balancer was not configured to use a public subnet with an Internet gateway configured
    2. The Amazon EC2 instances do not have a dynamically allocated private IP address
    3. The security groups or network ACLs are not properly configured for web traffic.
    4. The load balancer is not configured in a private subnet with a NAT instance.
    5. The VPC does not have a VGW configured.
  43. When will you incur costs with an Elastic IP address (EIP)?
    1. When an EIP is allocated.
    2. When it is allocated and associated with a running instance.
    3. When it is allocated and associated with a stopped instance.
    4. Costs are incurred regardless of whether the EIP is associated with a running instance.

Question 1 : QuickTechie.com has three different datacenters, in Mumbai, Geneva and Nevada, and is planning to extend its datacenters by connecting the DCs with the AWS VPC using the VPN gateway. QuickTechie.com is setting up a dynamically routed VPN connection. Select the information which is not required to set up this configuration?

  1. The type of customer gateway, such as Cisco ASA, Juniper J-Series, Juniper SSG, Yamaha.

  2. Internet-routable IP address (static) of the customer gateway’s external interface.

  3. Elastic IP ranges that the organization wants to advertise over the VPN connection to the VPC.

  4. Border Gateway Protocol (BGP) Autonomous System Number (ASN) of the customer gateway.

  5. None of the above

Correct Answer : 3 Exp : When you create a VPN connection, you must specify the type of routing that you plan to use. The type of routing that you select can depend on the make and model of your VPN devices. If your VPN device supports Border Gateway Protocol (BGP), specify dynamic routing when you configure your VPN connection. If your device does not support BGP, specify static routing. For a list of static and dynamic routing devices that have been tested with Amazon VPC, see the Amazon Virtual Private Cloud FAQs.

When you use a BGP device, you don’t need to specify static routes to the VPN connection because the device uses BGP to advertise its routes to the virtual private gateway. If you use a device that doesn’t support BGP, you must select static routing and enter the routes (IP prefixes) for your network that should be communicated to the virtual private gateway. Only IP prefixes that are known to the virtual private gateway, whether through BGP advertisement or static route entry, can receive traffic from your VPC.

We recommend that you use BGP-capable devices, when available, because the BGP protocol offers robust liveness detection checks that can assist failover to the second VPN tunnel if the first tunnel goes down. Devices that don’t support BGP may also perform health checks to assist failover to the second tunnel when needed.

To use Amazon VPC with a VPN connection, you or your network administrator must designate a physical appliance as your customer gateway and configure it. We provide you with the required configuration information, including the VPN preshared key and other parameters related to setting up the VPN connection. Your network administrator typically performs this configuration. For information about the customer gateway requirements and configuration, see the Amazon VPC Network Administrator Guide. The following table lists the information that you need to have so that we can establish your VPN connection.

- The type of customer gateway (for example, Cisco ASA, Juniper J-Series, Juniper SSG, Yamaha). Specifies how to format the returned information that you use to configure the customer gateway. For information about the specific devices that have been tested, see What customer gateway devices are known to work with Amazon VPC? in the Amazon VPC FAQ.

- Internet-routable IP address (static) of the customer gateway’s external interface. Used to create and configure your customer gateway (referred to as YOUR_UPLINK_ADDRESS). The value must be static and can’t be behind a device performing network address translation (NAT).

- (Optional) Border Gateway Protocol (BGP) Autonomous System Number (ASN) of the customer gateway, if you are creating a dynamically routed VPN connection. Used to create and configure your customer gateway (referred to as YOUR_BGP_ASN). If you use the wizard in the console to set up your VPC, 65000 is automatically used as the ASN. You can use an existing ASN assigned to your network; if you don’t have one, you can use a private ASN (in the 64512-65534 range). For more information about ASNs, see the Wikipedia article. Amazon VPC supports 2-byte ASN numbers.

- Internal network IP ranges that you want advertised over the VPN connection to the VPC. Used to specify static routes.
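The ASN rules mentioned above (2-byte ASNs only, private range 64512-65534) can be captured in a tiny validator; the helper name here is illustrative:

```python
def valid_private_asn(asn: int) -> bool:
    """True if asn is a 2-byte private ASN usable for the customer gateway."""
    return 64512 <= asn <= 65534

print(valid_private_asn(65000))       # True: the console wizard's default ASN
print(valid_private_asn(64000))       # False: public 2-byte ASN space
print(valid_private_asn(4200000000))  # False: 4-byte ASN, not supported here
```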

Question 2 : QuickTechie.com Inc. has its own datacenter in Geneva; now they wish to use AWS services for a better, more robust infrastructure as well as a secure network. They have created 50 new instances in the AWS VPC and are planning to start distributing server load (from the Geneva datacenter) onto these 50 new instances.

Which of the following needs to be done to start communication between the VPC and the Geneva datacenter?

  1. Attach a virtual private gateway to the VPC

  2. Create a custom route table

  3. Update your security group rules

  4. 1 and 3

  5. All of 1, 2 and 3

Correct Answer : 5 Exp : By default, instances that you launch into a virtual private cloud (VPC) can’t communicate with your own network. You can enable access to your network from your VPC by attaching a virtual private gateway to the VPC, creating a custom route table, and updating your security group rules.

You can complete this process manually, or let the VPC creation wizard take care of many of these steps for you. Although the term VPN connection is a general term, in the Amazon VPC documentation a VPN connection refers to the connection between your VPC and your own network.

Components of Your VPN

A VPN connection consists of the following components.

Virtual Private Gateway

A virtual private gateway is the VPN concentrator on the Amazon side of the VPN connection.

For information about how many virtual private gateways you can have per region, as well as the limits for other components within your VPC, see Amazon VPC Limits.

Customer Gateway

A customer gateway is a physical device or software application on your side of the VPN connection.

Question 3 : Sometimes your customer gateway may become unavailable. To protect against a loss of connectivity between the QuickTechie.com Inc. Geneva datacenter and the AWS VPC, you will set up a second VPN connection to your VPC by using a second customer gateway. By using redundant VPN connections and customer gateways, you can perform maintenance on one of your customer gateways while traffic continues to flow over the second customer gateway’s VPN connection. To establish redundant VPN connections and customer gateways on your network, you need to set up a second VPN connection. The customer gateway IP address

  1. for second VPN connection must be publicly accessible

  2. can be the same public IP address that you are using for the first VPN connection.

  3. for second VPN connection must be privately accessible

  4. 1 and 2

  5. 2 and 3

Correct Answer : 1 Exp : To protect against a loss of connectivity in case your customer gateway becomes unavailable, you can set up a second VPN connection to your VPC by using a second customer gateway. By using redundant VPN connections and customer gateways, you can perform maintenance on one of your customer gateways while traffic continues to flow over the second customer gateway’s VPN connection. To establish redundant VPN connections and customer gateways on your network, you need to set up a second VPN connection. The customer gateway IP address for the second VPN connection must be publicly accessible and can’t be the same public IP address that you are using for the first VPN connection.

Dynamically routed VPN connections use the Border Gateway Protocol (BGP) to exchange routing information between your customer gateways and the virtual private gateways. Statically routed VPN connections require you to enter static routes for the network on your side of the customer gateway. BGP-advertised and statically entered route information allow gateways on both sides to determine which tunnels are available and reroute traffic if a failure occurs. We recommend that you configure your network to use the routing information provided by BGP (if available) to select an available path. The exact configuration depends on the architecture of your network.

Question : 4 QuickTechie.com has multiple branch offices and existing Internet connections, as well as multiple VPN connections with AWS, and wishes to establish secure communication between sites. Select the correct statement.

  1. you can provide secure communication between sites using the AWS VPN CloudHub

  2. To use the AWS VPN CloudHub, you must create a virtual private gateway with multiple customer gateways

  3. To use the AWS VPN CloudHub, you must create a virtual private gateway with a single customer gateway

  4. 1 and 2

  5. 1 and 3

Correct Answer : 4 Exp : If you have multiple VPN connections, you can provide secure communication between sites using the AWS VPN CloudHub. The VPN CloudHub operates on a simple hub-and-spoke model that you can use with or without a VPC. This design is suitable for customers with multiple branch offices and existing Internet connections who’d like to implement a convenient, potentially low-cost hub-and-spoke model for primary or backup connectivity between these remote offices.

To use the AWS VPN CloudHub, you must create a virtual private gateway with multiple customer gateways, each with unique Border Gateway Protocol (BGP) Autonomous System Numbers (ASNs). Customer gateways advertise the appropriate routes (BGP prefixes) over their VPN connections. These routing advertisements are received and re-advertised to each BGP peer, enabling each site to send data to and receive data from the other sites. The routes for each spoke must have unique ASNs and the sites must not have overlapping IP ranges. Each site can also send and receive data from the VPC as if they were using a standard VPN connection.

Sites that use AWS Direct Connect connections to the virtual private gateway can also be part of the AWS VPN CloudHub. For example, your corporate headquarters in New York can have an AWS Direct Connect connection to the VPC and your branch offices can use VPN connections to the VPC. The branch offices in Los Angeles and Miami can send and receive data with each other and with your corporate headquarters, all using the AWS VPN CloudHub.

To configure the AWS VPN CloudHub, you use the AWS Management Console to create multiple customer gateways, each with the unique public IP address of the gateway and a unique ASN. Next, you create a VPN connection from each customer gateway to a common virtual private gateway. Each VPN connection must advertise its specific BGP routes. This is done using the network statements in the VPN configuration files for the VPN connection. The network statements differ slightly depending on the type of router you use.

When using an AWS VPN CloudHub, you pay typical Amazon VPC VPN connection rates. You are billed the connection rate for each hour that each VPN is connected to the virtual private gateway. When you send data from one site to another using the AWS VPN CloudHub, there is no cost to send data from your site to the virtual private gateway. You only pay standard AWS data transfer rates for data that is relayed from the virtual private gateway to your endpoint. For example, if you have a site in Los Angeles and a second site in New York and both sites have a VPN connection to the virtual private gateway, you pay $.05 per hour for each VPN connection (for a total of $.10 per hour). You also pay the standard AWS data transfer rates for all data that you send from Los Angeles to New York (and vice versa) that traverses each VPN connection; network traffic sent over the VPN connection to the virtual private gateway is free, but network traffic sent over the VPN connection from the virtual private gateway to the endpoint is billed at the standard AWS data transfer rate.

Question : 5 A large investment bank wishes to use cloud infrastructure. However, it has a huge portfolio of customers, and their data needs to remain confidential. It has 100 app servers and an in-house Oracle database setup. How can it leverage the AWS cloud infrastructure?

  1. The organization should place all 100 app servers in the public subnet and Oracle RDS in a private subnet so it will not be in the public cloud.

  2. The organization should place the app servers in the public subnet, keep the Oracle database in the organization’s data center, and connect them with a VPN gateway.

  3. The organization should place the app servers in the public subnet and use RDS in a private subnet for secure data operation.

  4. The organization should use the public subnet for the app servers and use RDS with a storage gateway to access and sync the data securely from the local data center.

Correct Answer : 2 Exp : A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS cloud. You can launch your AWS resources, such as Amazon EC2 instances, into your VPC. When you create a VPC, you specify the set of IP addresses for the VPC in the form of a Classless Inter-Domain Routing (CIDR) block (for example, 10.0.0.0/16).

You can optionally set up a connection between your VPC and your corporate or home network. If you have an IP address prefix in your VPC that overlaps with one of your networks’ prefixes, any traffic to the network’s prefix is dropped. For example, let’s say that you have the following:

A VPC with CIDR block 10.0.0.0/16

A subnet in that VPC with CIDR block 10.0.1.0/24

Instances running in that subnet with IP addresses 10.0.1.4 and 10.0.1.5

On-premises host networks using CIDR blocks 10.0.37.0/24 and 10.1.38.0/24

When those instances in the VPC try to talk to hosts in the 10.0.37.0/24 address space, the traffic is dropped because 10.0.37.0/24 is part of the larger prefix assigned to the VPC (10.0.0.0/16). The instances can talk to hosts in the 10.1.38.0/24 space because that block isn’t part of 10.0.0.0/16.

You can also create a VPC peering connection between your VPCs, or with a VPC in another AWS account. A VPC peering connection enables you to route traffic between the VPCs using private IP addresses; however, you cannot create a VPC peering connection between VPCs that have overlapping CIDR blocks. For more information, see VPC Peering.

We therefore recommend that you create a VPC with a CIDR range large enough for expected future growth, but not one that overlaps with current or expected future subnets anywhere in your corporate or home network, or that overlaps with current or future VPCs.
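The overlap rule illustrated above can be checked mechanically. A minimal sketch using Python's standard `ipaddress` module, with the CIDR blocks from the example:

```python
import ipaddress

# CIDR blocks from the example above
vpc = ipaddress.ip_network("10.0.0.0/16")
onprem_a = ipaddress.ip_network("10.0.37.0/24")  # falls inside the VPC prefix
onprem_b = ipaddress.ip_network("10.1.38.0/24")  # outside the VPC prefix

print(vpc.overlaps(onprem_a))  # True  -> traffic to 10.0.37.0/24 is dropped
print(vpc.overlaps(onprem_b))  # False -> hosts in 10.1.38.0/24 are reachable
```

Running this kind of check against all current and planned corporate prefixes before choosing a VPC CIDR range is an easy way to avoid the dropped-traffic situation described above.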

  1. You have five CloudFormation templates; each template is for a different application architecture. This architecture varies between your blog apps and your gaming apps. What determines the cost of using the CloudFormation templates?
    • The time it takes to build the architecture with CloudFormation.
    • CloudFormation does not have any additional cost, but you are charged for the underlying resources it builds.
    • 0.10$ per template per month
    • 0.1$ per template per month

Explanation: Answer: (B) There are no additional charges for AWS CloudFormation templates. You only pay for the AWS resources it builds.

  1. Which of the following correctly applies to changing the DB subnet group of your DB instance?
    • An existing DB Subnet group can be updated to add more subnets for existing Availability Zones.
    • An existing DB group cannot be updated to add more subnets for new Availability Zones.
    • Removing subnets from an existing DB subnet group can cause unavailability.
    • Updating an existing DB subnet group of a deployed DB instance is not currently allowed.
    • Explicitly changing the DB Subnet group of a deployed DB instance is not currently allowed.

Explanation: Answer: (A), (C), (D), and (E) An existing DB subnet group can be updated to add more subnets, either for existing Availability Zones or for new Availability Zones added since the creation of the DB instance. Removing subnets from an existing DB subnet group can cause unavailability for instances.

  1. If you want to use an SSL protocol but do not want to terminate the connection on your load balancer, you can use a __________ protocol for connection from the client to your load balancer.
    • HTTP
    • TLS
    • HTTPS
    • TCP

Explanation: Answer: (D) If you want to use an SSL protocol but do not want to terminate the connection on your load balancer, you can use the TCP protocol for the connection from the client to your load balancer. Use the SSL protocol for the connection from the load balancer to your back-end application, and install certificates on all the back-end instances handling requests.

  2. If you want to build your own payments application, then you should take advantage of the richness and flexibility of _____________.
    • PayPal Payment Service
    • eBay Payment Service
    • Amazon DevPay
    • Amazon FPS

Explanation: Answer: (C) and (D) Amazon DevPay and Amazon FPS both leverage the Amazon Payments infrastructure to process payments from customers.

  3. You are building an automated transcription service in which Amazon EC2 worker instances process an uploaded audio file and generate a text file. You must store both of these files in the same durable storage until the text file is retrieved, but you do not know what the storage capacity requirements are. Which storage option is both cost-efficient and scalable?
    • Multiple Amazon EBS volumes with snapshots
    • A single Amazon Glacier vault
    • A single Amazon S3 bucket
    • Multiple instance stores

Explanation: Answer: (C) Amazon S3 provides a cost-effective, durable, and scalable storage option. It gives developers the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of websites.

  4. Which of the following should be referenced if you want to map an Amazon Elastic Block Store volume to an Amazon EC2 instance in an AWS CloudFormation template?
    • The logical IDs of the instance
    • Reference the logical IDs of both the block stores and the instance
    • Reference the physical IDs of the instance
    • Reference the physical IDs of both the block stores and the instance

Explanation: Answer: (B) In a CloudFormation template, resources refer to one another by their logical IDs, so when building the JSON you reference the logical IDs of both the block store volumes and the instance.

  5. In the event of a planned or an unplanned outage of your primary DB instance, Amazon RDS automatically switches to a standby replica in another Availability Zone if you have enabled _________.
    • More than one read replica
    • More than one write replica
    • Multiple Availability Zones
    • Multi-Region deployment

Explanation: Answer: (C) In the event of a planned or unplanned outage of your primary DB instance, Amazon RDS automatically switches to a standby replica in another Availability Zone if you have enabled Multi-AZ deployment.

  6. Which of the following approaches provides the lowest cost for Amazon Elastic Block Store snapshots while giving you the ability to fully restore data?
    • Maintain two snapshots: the original snapshot and the latest incremental snapshot.
    • Maintain a volume snapshot; subsequent snapshots will overwrite one another.
    • Maintain a single snapshot; the latest snapshot is both incremental and complete.
    • Maintain the most current snapshot; archive the original and increments to Amazon Glacier.

Explanation: Answer: (C) After writing data to an EBS volume, you can periodically create a snapshot of the volume to use as a baseline for new volumes or for data backup. If you make periodic snapshots of a volume, the snapshots are incremental, which means only the blocks on the device that have changed after your last snapshot are saved in the new snapshot. Even though snapshots are saved incrementally, the snapshot deletion process is designed so that you need to retain only the most recent snapshot in order to fully restore the volume.

  7. You try to connect via SSH to a newly created Amazon EC2 instance and get one of the following error messages: “Network error: connection timed out” or “Error connecting to [instance], reason: Connection timed out: connect”. You have confirmed that the network and security group rules are configured correctly and the instance is passing status checks. What steps should you take to identify the source of the behavior? (Select all that apply)
    • Verify that the private key file corresponds to the Amazon EC2 key pair assigned at launch.
    • Verify that your IAM user policy has permission to launch Amazon EC2 instances.
    • Verify that you are connecting with the appropriate user name for your AMI.
    • Verify that the Amazon EC2 instance was launched with the proper IAM role.
    • Verify that your federation trust to AWS has been established.

Explanation: Answer: (A) and (C) For any EC2 instance, you need the correct key pair and user account to log in to the instance. Without these, even the AWS support team cannot access the instance.

  8. In a VPC network, access control lists (ACLs) act as a firewall for associated subnets, controlling both inbound and outbound traffic at the __________ level.
    • Full VPC
    • Customer gateway
    • EC2 instance
    • Subnet

Explanation: Answer: (D) Amazon VPC provides two features that you can use to increase security for your VPC: security groups and ACLs. Security groups act as a firewall for associated Amazon EC2 instances, controlling traffic at the instance level; network ACLs act as a firewall for associated subnets, controlling traffic at the subnet level.

  9. Which of the following is NOT true about the local secondary index?
    • The key of a local secondary index consists of a hash key and a range key.
    • For each hash key, the total size of all indexed items must be 10 GB or less.
    • The local secondary index allows you to query over the entire table, across all partitions.
    • When you query a local secondary index, you can choose either eventual consistency or strong consistency.

Explanation: Answer: (C) A local secondary index lets you query over a single partition, as specified by the hash key value in the query. It is a global secondary index that lets you query over the entire table, across all partitions.

  10. A user has created multiple data points for CloudWatch metrics with the dimensions Box=UAT, App=Document and Box=UAT, App=Notes. If the user queries CloudWatch with the dimensions parameter as Box=UAT only, what data will he get?
    • The last value of the email and sms metric
    • It will not return any data, as the dimension Box=UAT on its own does not exist
    • All values specified for the dimension Box=UAT, App=Document
    • All values specified for the dimension Box=UAT, App=Notes

Explanation: Answer: (B) A dimension is a key-value pair used to uniquely identify a metric; the user cannot get CloudWatch metric statistics without specifying the right combination of dimensions. In this case the dimension combination is either Box=UAT, App=Document or Box=UAT, App=Notes.

  11. For DynamoDB, which of the following statements are correct? (Select all that apply)
    • By using a proxy, it is not possible for a developer to achieve item-level access control.
    • By using FGAC, it is possible for a developer to achieve item-level access control.
    • By using a per-client embedded token, it is possible for a developer to achieve item-level access control.
    • By using a secret key, it is possible for a developer to achieve item-level access control.

Explanation: Answer: (A), (B), and (C) Fine-Grained Access Control (FGAC) gives a DynamoDB table owner a high degree of control over data in the table. Specifically, the table owner can indicate who (caller) can access which items or attributes of the table and perform what actions.

  12. You try to enable lifecycle policies on one of the S3 buckets you created, but you are not able to do so on that particular bucket. What could be the reason?
    • The bucket is corrupted.
    • Versioning is not enabled on that bucket.
    • The bucket type is not correct.
    • Versioning is enabled on the bucket.

Explanation: Answer: (B) You can manage an object’s lifecycle by enabling lifecycle policies, which define how Amazon S3 manages objects during their lifetime. You need to enable bucket versioning to manage S3 lifecycle policies.

  13. Each EC2 instance has a default network interface that is assigned a primary private IP address on your Amazon VPC network. What is the name given to the additional network interfaces that can be created and attached to any Amazon EC2 instance in your VPC?
    • Elastic IP
    • Elastic Network Interface
    • AWS Elastic Interface
    • AWS Network ACL

Explanation: Answer: (B) An Elastic Network Interface (ENI) is a virtual network interface that you can attach to an instance in a VPC. An ENI can include a primary private IP address.

  14. Which IAM policy condition key should be used if you want to check whether the request was sent using SSL?
    • aws:SecureTransport
    • aws:SecureIP
    • aws:SourceIp
    • aws:UserAgent

Explanation: Answer: (A) AWS provides predefined condition keys for all AWS services that support IAM for access control, including aws:CurrentTime and aws:SecureTransport.
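As context for the answer above, the condition key appears in real policies as `aws:SecureTransport`. A hedged sketch of a bucket policy that denies any request not sent over SSL (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyNonSSLRequests",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::example-bucket/*",
    "Condition": { "Bool": { "aws:SecureTransport": "false" } }
  }]
}
```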

  15. What does the following policy for Amazon EC2 do? { "Statement": [{ "Effect": "Allow", "Action": "ec2:Describe*", "Resource": "*" }] }
    • Allow users to use all actions on an EC2 instance.
    • Allow users to use actions that start with "Describe" across all the EC2 resources.
    • Allow users to use actions that do not have the keyword "Describe" across all the EC2 resources.
    • Allow a group to be able to Describe with run, stop, start, and terminate instances.

Explanation: Answer: (B) If you want to assign permissions to a user, group, role, or resource, you can create a policy, which explicitly lists the permissions. In the policy above, the Action attribute grants permission for every ec2:Describe* call on all EC2 resources.

  16. For what purpose is the CreateImage API action used?
    • To create an Amazon EBS-backed AMI from an Amazon EBS-backed instance that is either running or stopped
    • To initiate the copy of an AMI from the specified source region to the current region
    • To deregister the specified AMI; after you deregister an AMI, it can’t be used to launch new instances
    • To describe one or more of the images (AMIs, AKIs, and ARIs) available to you

Explanation: Answer: (A) CreateImage creates an Amazon EBS-backed AMI from an Amazon EBS-backed instance that is either running or stopped. If you customize your instance with instance store volumes or EBS volumes in addition to the root device volume, the new AMI contains block device mapping information for those volumes.

  17. If you launch an instance into a VPC that has an instance tenancy of __________, your instance is automatically a Dedicated Instance, regardless of the tenancy you specify when you launch it.
    • secured instance
    • dedicated instance
    • default instance
    • new instance

Explanation: Answer: (B) Each VPC has a related instance tenancy attribute. You can’t change the instance tenancy of a VPC after you create it. Any instance that you launch in a VPC with dedicated tenancy inherits the tenancy of the VPC.

  18. In DynamoDB you can issue a scan request. By default, the scan operation processes data sequentially. DynamoDB returns data to the application in _________ increments, and an application performs additional scan operations to retrieve the next ___________ of data.
    • 0, 1 MB
    • 1, 10 MB
    • 1, 1 MB
    • 5, 5 MB

Explanation: Answer: (C) By default, the scan operation processes data sequentially. DynamoDB returns data to the application in 1 MB increments, and an application performs additional scan operations to retrieve the next 1 MB of data.
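The 1 MB pagination described above is a standard continuation-token loop: the client keeps calling Scan with the returned LastEvaluatedKey until no key is returned. A minimal sketch of the pattern, with `fake_scan` standing in for the real DynamoDB Scan API (which caps pages at 1 MB rather than at a fixed item count):

```python
# Simulated DynamoDB-style scan: each "page" is capped (1 MB in the real
# API), and a LastEvaluatedKey signals that another scan call is needed.

TABLE = [{"id": i} for i in range(10)]  # stand-in for the table's items

def fake_scan(exclusive_start_key=None, page_size=4):
    """Stand-in for DynamoDB Scan: returns one page plus a continuation key."""
    start = exclusive_start_key or 0
    result = {"Items": TABLE[start:start + page_size]}
    if start + page_size < len(TABLE):
        result["LastEvaluatedKey"] = start + page_size
    return result

def scan_all():
    """Loop until no LastEvaluatedKey is returned, as a real client must."""
    items, key = [], None
    while True:
        resp = fake_scan(exclusive_start_key=key)
        items.extend(resp["Items"])
        key = resp.get("LastEvaluatedKey")
        if key is None:
            return items

print(len(scan_all()))  # prints 10 -- all items gathered across three pages
```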

  19. AWS requires ____________ when you need to specify a resource uniquely across all of AWS, such as in IAM policies, Amazon Relational Database Service (Amazon RDS) tags, and API calls.
    • IAM User Id
    • Account Id
    • IAM policy
    • Amazon Resource Names

Explanation: Answer: (D) Amazon Resource Names (ARNs) uniquely identify AWS resources. An ARN is required when you need to specify a resource unambiguously across all of AWS, such as in IAM policies, Amazon Relational Database Service (Amazon RDS) tags, and API calls.

  20. ___________ is a task coordinator and state management service for cloud applications.
    • Amazon SWF
    • Amazon SNS
    • Amazon SQS
    • Amazon SES

Explanation: Answer: (A) Amazon Simple Workflow (Amazon SWF) is a task coordinator and state management service for cloud applications. With Amazon SWF, you can stop writing complex glue code and state machinery and invest instead in the business logic that makes your applications unique.

  21. Which of the following IP address mechanisms are supported by ELB?
    • IPv4
    • IPv5
    • IPv6
    • IPv3

Explanation: Answer: (A) and (C) ELB supports both IPv4 and IPv6. IPv4 is the most widely used form of address, but with the boom of the Internet and connected devices IPv4 is running out of addresses; IPv6 is slowly replacing it as it has a much larger address space.

  22. A ___________ is a physical device or software application on your side of the VPN connection.
    • Customer gateway
    • Gateway level
    • Gateway table
    • Virtual private gateway

Explanation: Answer: (A) When you create a VPN connection, the VPN tunnel comes up when traffic is generated from your side of the VPN connection. The virtual private gateway is not the initiator; your customer gateway initiates the tunnels.

  23. You are currently hosting multiple applications in a VPC and have logged numerous port scans coming in from a specific IP address block. Your security team has requested that all access from the offending IP address block be denied for the next 24 hours. Which of the following is the best method to quickly and temporarily deny access from the specified IP address block?
    • Create an AD policy to modify Windows Firewall settings on all hosts in the VPC to deny access from the IP address block.
    • Modify the network ACLs associated with all public subnets in the VPC to deny access from the IP address block.
    • Modify the Windows Firewall settings on all Amazon Machine Images (AMIs) which your organization uses in that VPC to deny access from the IP address block.

Explanation: Answer: (B) AWS has implemented security layers at every level. At the network level, you can restrict access using network ACL rules applied to the VPC’s subnets, and NACL rules can both allow and deny traffic. If, after the network layer, you still want to restrict access at the instance or resource level, you use security groups. In this scenario, the restriction is needed at the network level for a specific period and then rolled back, which you achieve by altering allow/deny NACL rules.
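As a sketch of answer (B), a deny rule can be added to a network ACL with the AWS CLI. The ACL ID, rule number, and CIDR block below are placeholders, not values from the question:

```shell
# Deny all inbound traffic from the offending IP address block.
# NACL rules are evaluated in ascending rule-number order, so choose a
# number lower than the existing allow rules; delete the entry again
# after the 24-hour window.
aws ec2 create-network-acl-entry \
    --network-acl-id acl-0123456789abcdef0 \
    --ingress \
    --rule-number 50 \
    --protocol -1 \
    --rule-action deny \
    --cidr-block 198.51.100.0/24
```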

  24. Which ELB component is responsible for monitoring the load balancers?
    • Controller service
    • Load balancer
    • Auto Scaling
    • Load manager

Explanation: Answer: (A) Elastic Load Balancing (ELB) consists of two components: the load balancers and the controller service. The load balancers monitor the traffic and handle requests that come in through the Internet. The controller service monitors the load balancers, adding and removing load balancers as needed and verifying that the load balancers are functioning properly.

  25. Which disaster recovery method involves running your site in AWS and on your existing on-site infrastructure in an active-active configuration?
    • Multi-site solution
    • Active-passive solution
    • Pilot light
    • Warm standby solution

Explanation: Answer: (A) In a multi-site solution, both the AWS infrastructure and your external data center are always active, and you can use either one in case of a disaster.

  26. An application hosted on EC2 instances receives HTTP requests through an ELB. Each request has an X-Forwarded-For request header containing three IP addresses. Which of the following IP addresses will be a part of this header?
    • IP address of ELB
    • IP address of Forward Request
    • IP address of client
    • IP address of CloudWatch

Explanation: Answer: (C) The X-Forwarded-For request header helps you identify the IP address of a client when you use an HTTP/HTTPS load balancer. Because load balancers intercept traffic between clients and servers, your server access logs contain only the IP address of the load balancer. Elastic Load Balancing stores the IP address of the client in the X-Forwarded-For request header and passes the header along to your server.

  27. You have launched an instance in EC2-Classic and you want to make a change to the security group rules. How will these changes become effective?
    • Security group rules cannot be changed.
    • Changes are automatically applied to all instances that are associated with the security group.
    • Changes will be effective after rebooting the instances in that security group.
    • Changes will be effective after 24 hours.

Explanation: Answer: (B) If you’re using EC2-Classic, you must use security groups created specifically for EC2-Classic. When you launch an instance in EC2-Classic, you must specify a security group in the same region as the instance. If you make any changes, they are automatically applied to all instances that are associated with the security group.

  28. You have an application running on Amazon Web Services. The application has 4 EC2 instances in Availability Zone us-east-1c. You’re using Elastic Load Balancer to load balance traffic across your four instances. What changes would you make to create a fault-tolerant architecture?
    • Create EBS backups to ensure data is not lost.
    • Move all four instances to a different Availability Zone.
    • Move two instances to another Availability Zone.
    • Use CloudWatch to distribute the load evenly.

Explanation: Answer: (C) Elastic Load Balancer automatically distributes incoming application traffic across multiple Amazon Elastic Compute Cloud (Amazon EC2) instances. You can set up an elastic load balancer to load balance incoming application traffic across Amazon EC2 instances in a single Availability Zone or multiple Availability Zones. Elastic Load Balancing enables you to achieve greater fault tolerance in your applications, and it seamlessly provides the amount of load balancing capacity needed in response to incoming application traffic.

  29. The load balancer does not distribute traffic across ________.
    • One Availability Zone
    • Domains
    • Availability Zones within a region
    • Regions

Explanation: Answer: (D) You can set up Elastic Load Balancing to distribute incoming requests across EC2 instances in a single Availability Zone or multiple Availability Zones within a region. Your load balancer does not distribute traffic across regions.

  30. In the context of CloudFormation, which of the following information do you get from the AWS CloudFormation list-stacks command?
    • A list of any of the stacks you have created.
    • A list of any of the stacks you have created or have deleted up to 90 days ago.
    • A list of any of the stacks that have been created or deleted up to 60 days ago.
    • A 90-day history list of all your activity on stacks.

Explanation: Answer: (B) The AWS CloudFormation list-stacks command enables you to get a list of any of the stacks you have created (even those which have been deleted up to 90 days ago). You can use an option to filter results by stack status, such as CREATE_COMPLETE and DELETE_COMPLETE. The command returns summary information about any of the running or deleted stacks, including the name, stack identifier, template, and status.

  31. When you use the wizard in the console to create a VPC with a gateway, the wizard automatically __________ to use the gateway.
    • updates the route tables
    • updates the IP tables
    • updates the protocol tables
    • updates the IP tables and the protocol tables

Explanation: Answer: (A) When you use the wizard in the console to create a VPC with a gateway, the wizard automatically updates the route tables to use the gateway. If you’re using the command line tools or the API to set up your VPC, then you have to update the route tables yourself.

  32. You’ve created a production architecture on AWS. It consists of one load balancer, one Route 53 domain, two Amazon S3 buckets, an Auto Scaling policy, and Amazon CloudFront for content delivery. Your manager asks you to duplicate this architecture by using a JSON-based template. Which of the following AWS services would you use to achieve this?
    • Amazon DynamoDB
    • Amazon SimpleDB
    • AWS CloudFormation
    • Amazon Bootstrap

Explanation: Answer: (C) AWS CloudFormation gives developers and system administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.

  33. You have configured a website www.abc.com hosted on WebLogic Server, and you are using ELB with EC2 instances for load balancing. Which of the following would you configure to ensure that the EC2 instances accept requests only from the ELB?
    • Configure the security group of EC2, which allows access to the ELB source security group.
    • Configure the EC2 instance so that it only listens on the ELB port.
    • Configure the security group of EC2, which allows access only to the ELB listener.
    • Open the port for an ELB static IP in the EC2 security group.

Explanation: Answer: (A) A security group acts as a firewall that controls the traffic allowed into a group of instances. When you launch an Amazon EC2 instance, you can assign it to one or more security groups. For each security group, you can add rules that govern the allowed inbound traffic to instances in the group. By allowing inbound access only from the ELB’s source security group, you ensure that the EC2 instances accept requests only from the ELB.

  34. You have written a CloudFormation template that creates one Elastic Load Balancer fronting two EC2 instances. Which section of the template should you edit so that the DNS of the load balancer is returned upon creation of the stack?
    • Outputs
    • Resources
    • Parameters
    • Mappings

Explanation: Answer: (A) The Outputs section defines custom values that are returned by the AWS CloudFormation describe-stacks command and in the AWS CloudFormation console Outputs tab after the stack is created. You can use output values to return information from the resources in the stack, such as the URL for a website that was created in the template or the DNS name of the load balancer.
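A sketch of the Outputs section for the scenario above; the logical ID `MyLoadBalancer` is a placeholder for whatever the template names its ELB resource:

```json
"Outputs": {
  "LoadBalancerDNSName": {
    "Description": "DNS name of the Elastic Load Balancer created by this stack",
    "Value": { "Fn::GetAtt": ["MyLoadBalancer", "DNSName"] }
  }
}
```

After stack creation, this value appears in the console Outputs tab and in the describe-stacks output.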

  35. What does a “Domain” refer to in Amazon SWF?
    • Set of predefined fixed IP addresses
    • A security group in which internal tasks can communicate with each other
    • A collection of related workflows
    • A collection of related topics

Explanation: Answer: (C) Domains provide a way of scoping Amazon SWF resources within your AWS account. All the components of a workflow, such as the workflow type and activity types, must be specified in a domain. It is possible to have more than one workflow in a domain; however, workflows in different domains cannot interact with each other.

  36. A customer has a website which is accessible over the Internet, and he wants to secure the communication, so he decides to implement HTTPS instead of HTTP. He has configured an EC2 instance behind an ELB. Where should you configure the SSL certificate?
    • Not possible in AWS
    • SSL certificate will be installed on the ELB, and the listener port should be changed from 80 to 443 to allow the traffic to reach EC2
    • SSL certificate will be installed on EC2, and the listener port should be changed from 80 to 443
    • SSL certificate will be installed on EC2, and the listener port can remain at 443

Explanation: Answer: (B) To secure the communication, you configure SSL certificates to allow HTTPS communication. You can configure and install the SSL certificate on the ELB in order to enable HTTPS traffic.

  37. Once you’ve successfully created a Microsoft Windows stack on AWS CloudFormation, you can log in to your instance with _______ to configure it manually.
    • AWS Command Line Interface
    • Remote Desktop
    • PowerShell
    • Windows Command Prompt

Explanation: Answer: (B) Once you’ve successfully created a Microsoft Windows stack on AWS CloudFormation, you can log in to your instance with Remote Desktop to configure it manually.

  38. You have created a custom-configured Amazon instance using Linux, containing all your software and applications. If you want to use the same setup again, what is the best way to do it?
    • Create a backup copy of the EBS service
    • Create a backup of the EC2 instances only
    • Create a snapshot of the AMI only
    • Create an EBS Image (AMI)

Explanation: Answer: (D) You can create an Amazon Machine Image (AMI) from your configured EBS-backed instance. The AMI captures the root volume and any attached EBS volumes, including your installed software and applications, so you can launch identical instances from it later.

  39. With regards to VPC, what is the default maximum number of virtual private gateways allowed per region?
    • 10
    • 15
    • 5
    • 1

Explanation: Answer: (C) The default number of virtual private gateways per region is 5, and this limit can be increased upon request. Similarly, the default number of VPCs per region is 5 (can be increased upon request), the default number of subnets per VPC is 200 (can be increased upon request), and the default number of Internet gateways per region is 5, matching your VPCs-per-region limit. Only one Internet gateway can be attached to a VPC at a time.

  40. Elasticity is a fundamental property of the cloud. Which of the following best describes elasticity?
    • The power to scale computing resources up and down easily with minimal friction
    • Ability to create services without having to administer resources
    • Process by which scripts notify you of resource problems so you can fix them manually
    • Power to scale computing resources up easily but not scale down

Explanation: Answer: (A) Elasticity can best be described as the power to scale computing resources up and down easily with minimal friction. Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers.

  41. With regards to RDS, the standby should be in the same ______________ as the primary instance.
    • Availability Zone
    • Region
    • VPC
    • Subnet

Explanation: Answer: (B) Your standby is automatically provisioned in a different Availability Zone of the same Region as your primary DB instance.

  42. AWS Identity and Access Management is available through which of the following interfaces?
    • AWS Management Console
    • Command line interface (CLI)
    • IAM Query API
    • Elastic Load Balancer
    • CloudFormation

Explanation: Answer: (A), (B), and (C) You can configure IAM using the AWS Management Console, the command line interface (AWS CLI), the AWS SDKs, and IAM Query API calls.

  43. Scalability is a fundamental property of a good AWS system. Which of the following best describes scalability on AWS?
    • Scalability is the concept of planning ahead for what maximum resources will be required and building your infrastructure based on that capacity plan.
    • The law of diminishing returns will apply to resources as they are increased with workload.
    • Increasing resources results in a proportional increase in performance.
    • Scalability is not a fundamental property of the cloud.

Explanation: Answer: (C) Auto Scaling allows you to scale your Amazon EC2 capacity up or down automatically according to conditions you define. With Auto Scaling, you can ensure that the number of Amazon EC2 instances you’re using increases seamlessly during demand spikes to maintain performance.

  44. Which technique can be used to integrate AWS IAM (Identity and Access Management) with an on-premises LDAP (Lightweight Directory Access Protocol) directory service?
    • Use an IAM policy that references the LDAP account identifiers and the AWS credentials.
    • Use SAML (Security Assertion Markup Language) to enable single sign-on between AWS and LDAP.
    • Use AWS Security Token Service from an identity broker to issue short-lived AWS credentials.
    • Use IAM roles to automatically rotate the IAM credentials when LDAP credentials are updated.
    • Use the LDAP credentials to restrict a group of users from launching specific EC2 instance types.

Explanation: Answer: (B) You can use SAML identity providers to integrate IAM between AWS and an on-premises LDAP or federated SSO implementation. For example, suppose you want to provide a way for users in your organization to copy data from their computers to a backup folder. You build an application that users can run on their computers. On the back end, the application reads and writes objects in an S3 bucket, but the users don’t have direct access to AWS. Instead, the application communicates with an identity provider (IdP) to authenticate the user. The IdP gets the user information from your LDAP directory, which is the organization’s identity store, and then generates a SAML assertion that includes authentication and authorization information about that user. The application then uses that assertion to call the AssumeRoleWithSAML API and obtain temporary security credentials. The app can then use those credentials to access a folder in the S3 bucket that’s specific to the user.

  45. If you are using a non-transactional engine such as MyISAM, which of the following steps need to be performed to successfully set up your Read Replica so it has a consistent copy of your data?

      - Stop all DML and DDL operations on non-transactional tables and wait for them to complete
      - Flush and lock those tables
      - Create the Read Replica using the CreateDBInstanceReadReplica API
      - Check the progress of the replica creation using the DescribeDBInstances API
      - Set AWS IAM and KMS

      Explanation: Answer (A), (B), (C), and (D). If you are using a non-transactional engine such as MyISAM, you need to perform the following steps to ensure that the Read Replica has a consistent copy of your data: 1. Stop all DML and DDL operations on non-transactional tables and wait for them to complete. 2. Flush and lock those tables. 3. Create the Read Replica using the CreateDBInstanceReadReplica API. 4. Check the progress of the replica creation using the DescribeDBInstances API.
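The steps above can be sketched as the SQL you would run on the source to quiesce MyISAM tables plus the parameters you would pass to the RDS CreateDBInstanceReadReplica and DescribeDBInstances calls. The instance identifiers are hypothetical.

```python
# Step 2: flush and lock the non-transactional tables on the source,
# after DML/DDL has stopped (step 1).
freeze_sql = "FLUSH TABLES WITH READ LOCK;"

# Step 3: parameters for the CreateDBInstanceReadReplica API call.
replica_params = {
    "DBInstanceIdentifier": "mydb-replica-1",    # hypothetical replica name
    "SourceDBInstanceIdentifier": "mydb-primary",  # hypothetical source instance
}

# Step 4: parameters for polling with DescribeDBInstances until the
# replica's status becomes "available".
status_params = {"DBInstanceIdentifier": "mydb-replica-1"}
```

Once DescribeDBInstances reports the replica as available, the lock on the source tables can be released.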

  46. In CloudFront, if you add a CNAME for www.abc.com to your distribution, you also need to create (or update) a CNAME record with your DNS service to route queries for ___________.

      - www.abc.com to d111111abcdef8.cloudfront.com
      - d111111abcdef8.cloudfront.com to www.abc.com
      - www.abc.com to d111111abcdef8.cloudfront.net
      - d111111abcdef8.cloudfront.net to www.abc.com

      Explanation: Answer (C). You can specify one or more domain names that you want to use for URLs for your objects instead of the domain name that CloudFront assigns when you create your distribution.
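The DNS record from the correct answer can be sketched as a Route 53 ChangeResourceRecordSets change batch (any DNS provider works; the record itself is the same). The TTL is an arbitrary illustrative value.

```python
# CNAME routing the alternate domain name to the distribution's
# cloudfront.net domain, as a Route 53 change batch.
change_batch = {
    "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.abc.com",
            "Type": "CNAME",
            "TTL": 300,
            "ResourceRecords": [{"Value": "d111111abcdef8.cloudfront.net"}],
        },
    }]
}
```

Note the direction: the friendly name points at the CloudFront domain, never the other way around, and the target ends in `.cloudfront.net`, not `.cloudfront.com`.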

  47. Your manager has asked you to build a MongoDB replica set in the cloud. Amazon Web Services does not provide a MongoDB service. How would you go about setting up the MongoDB replica set?

      - You have to build it in another data center.
      - Request AWS to add a MongoDB service.
      - Build the replica set using EC2 instances and manage the MongoDB instances yourself.
      - It is not possible to do it.

      Explanation: Answer (C). MongoDB runs well on Amazon EC2. To deploy MongoDB on EC2 you can either set up a new instance manually or deploy a pre-configured AMI from the AWS Marketplace.

  48. Your company has an application that requires access to a NoSQL database. Your IT department has no desire to manage the NoSQL servers. Which Amazon service provides a fully managed and highly available NoSQL service?

      - Elastic MapReduce
      - Amazon RDS
      - SimpleDB
      - DynamoDB

      Explanation: Answer (D). DynamoDB is a fast, fully managed NoSQL database service that makes it simple and cost-effective to store and retrieve any amount of data and serve any level of request traffic. Its guaranteed throughput and single-digit millisecond latency make it a great fit for gaming, advertising technology, mobile, and many other applications.

  49. How many requests per second can Amazon CloudFront handle?

      - 10,000
      - 100
      - 1000
      - 500

      Explanation: Answer (C). Amazon CloudFront can handle a data transfer rate of 1,000 Mbps and 1,000 requests per second.

  50. When you want to use CloudFront to distribute your content, you need to create a distribution and specify configuration settings. Which of the following configuration settings would you specify?

      - You can configure the environment variables.
      - You can specify the number of files that you can serve per distribution.
      - You can specify whether you want the files to be available to everyone or you want to restrict access to selected users.
      - You can specify your origin Amazon S3 bucket or HTTP server.

      Explanation: Answer (D). When you want to use CloudFront to distribute your content, you create a distribution and specify configuration settings such as your origin, which is the Amazon S3 bucket or HTTP server from which CloudFront gets the files that it distributes. You can specify any combination of up to 10 Amazon S3 buckets and/or HTTP servers as your origins.
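The origin setting the answer refers to can be sketched as a minimal distribution configuration; the field names follow the shape of CloudFront's DistributionConfig, and the bucket name and caller reference are hypothetical.

```python
# Minimal sketch of a CloudFront distribution configuration: the key
# setting is the origin CloudFront pulls files from.
distribution_config = {
    "CallerReference": "photo-site-2016",  # hypothetical unique reference
    "Origins": {
        "Quantity": 1,
        "Items": [{
            "Id": "s3-origin",
            # The S3 bucket (or HTTP server) that holds the original files.
            "DomainName": "my-content-bucket.s3.amazonaws.com",
        }],
    },
    "Enabled": True,
}
```

Up to 10 origins (any mix of S3 buckets and HTTP servers) could appear in `Items`, with `Quantity` matching the count.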

  51. You currently operate a web application in the AWS US-East region. The application runs on an auto-scaled layer of EC2 instances and an RDS Multi-AZ database. Your IT security compliance officer has tasked you to develop a reliable and durable logging solution to track changes made to your EC2, IAM, and RDS resources. The solution must ensure the integrity and confidentiality of your log data. Which of these solutions would you recommend?

      - Create a new CloudTrail trail with one new S3 bucket to store the logs and with the global services option selected. Use IAM roles, S3 bucket policies, and Multi-Factor Authentication (MFA) Delete on the S3 bucket that stores your logs.
      - Create a new CloudTrail trail with one new S3 bucket to store the logs. Configure SNS to send log file delivery notifications to your management system. Use IAM roles and S3 bucket policies on the S3 bucket that stores your logs.
      - Create a new CloudTrail trail with an existing S3 bucket to store the logs and with the global services option selected. Use S3 ACLs and Multi-Factor Authentication (MFA) Delete on the S3 bucket that stores your logs.
      - Create three new CloudTrail trails with three new S3 buckets to store the logs: one for the AWS Management Console, one for AWS SDKs, and one for command line tools. Use IAM roles and S3 bucket policies on the S3 buckets that store your logs.

      Explanation: Answer (A). CloudTrail allows you to track changes made to your EC2, IAM, and RDS resources. Because the trail is stored in S3, you should use IAM roles, bucket policies, and MFA Delete to protect the logs from deletion. If you decide to use an existing bucket, when you turn on CloudTrail for a new region, you might receive an error that there is a problem with the bucket policy.
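The log-bucket protections from answer (A) can be sketched as two pieces: a bucket policy granting CloudTrail write access, and the versioning setting that enables MFA Delete. The bucket name and account id are hypothetical placeholders.

```python
# Bucket policy allowing the CloudTrail service to deliver log files.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AWSCloudTrailWrite",
        "Effect": "Allow",
        "Principal": {"Service": "cloudtrail.amazonaws.com"},
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::my-trail-logs/AWSLogs/123456789012/*",
        # CloudTrail must grant the bucket owner full control of each log file.
        "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}},
    }],
}

# Versioning configuration enabling MFA Delete: versioned deletes then
# require the root account's MFA device, protecting log integrity.
versioning_config = {"Status": "Enabled", "MFADelete": "Enabled"}
```

With these in place, overwriting or deleting delivered log files requires both the right IAM permissions and a valid MFA code.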

  52. Which of the following metrics can have a CloudWatch alarm?

      - RRS lost object
      - EC2 instance Status Check Failed
      - EC2 CPU utilization
      - Auto Scaling group CPU utilization

      Explanation: Answer (B), (C), and (D). Amazon CloudWatch provides monitoring for AWS cloud resources and the applications customers run on AWS. Developers and system administrators can use it to collect and track metrics, gain insight, and react immediately to keep their applications and businesses running smoothly. Amazon CloudWatch monitors AWS resources such as Amazon EC2 instances and Amazon RDS DB instances, and can also monitor custom metrics generated by a customer’s applications and services.
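One of the valid alarms above can be sketched as the parameters passed to CloudWatch's PutMetricAlarm call, here an alarm on the EC2 StatusCheckFailed metric. The instance id and SNS topic ARN are hypothetical.

```python
# Parameters for a CloudWatch alarm that fires when an EC2 instance
# fails its status checks for two consecutive one-minute periods.
alarm_params = {
    "AlarmName": "ec2-status-check-failed",
    "Namespace": "AWS/EC2",
    "MetricName": "StatusCheckFailed",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical
    "Statistic": "Maximum",
    "Period": 60,                 # seconds per evaluation period
    "EvaluationPeriods": 2,       # two consecutive breaching periods
    "Threshold": 1.0,             # metric is 1 when a status check fails
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # hypothetical topic
}
```

The same shape works for the CPU-utilization alarms in the other correct options by swapping `MetricName` and, for Auto Scaling, the dimension to the group name.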

  53. Which of the following payment options are associated with Reserved Instances?

      - Partial Upfront
      - No Upfront
      - Annual Upfront
      - All Upfront

      Explanation: Answer (A), (B), and (D). Amazon EC2 Reserved Instances allow you to reserve Amazon EC2 computing capacity for 1 or 3 years in exchange for a significant discount (up to 75%) compared to On-Demand instance pricing. You can choose between three payment options: All Upfront, Partial Upfront, and No Upfront. If you choose the Partial or No Upfront payment option, the remaining balance will be due in monthly increments over the term.

  54. You have a website www.abc.com which is used quite frequently. Therefore, you decide to use 50 EC2 instances spread across two availability zones in two regions, 25 instances per region. However, while starting the servers, you are able to start only 20, and then the requests start failing. Why?

      - There is a limit of 20 EC2 instances in each region; you can request to increase the limit.
      - There is a limit of 20 EC2 instances in each availability zone; you can request to increase the limit.
      - You might have exhausted the free space available and need to select a paid version of storage.
      - You cannot have more than one availability zone in a region.

      Explanation: Answer (A). Unless otherwise noted, limits are per region: you are limited to running up to 20 On-Demand instances, purchasing 20 Reserved Instances, and requesting 5 Spot Instances per region. New AWS accounts may start with limits that are lower than those described here, and certain instance types are further limited per region.

  55. The www.picsee.com website has millions of photos and also a thumbnail for each photo. Thumbnails can easily be reproduced from the actual photo, and a thumbnail takes less space than the actual photo. Which of the following is the best solution to store the thumbnails?

      - S3 Reduced Redundancy Storage
      - DynamoDB
      - ElastiCache
      - Amazon Glacier

      Explanation: Answer (A). Reduced Redundancy Storage (RRS) is an Amazon S3 storage option that enables customers to reduce their costs by storing noncritical, reproducible data at lower levels of redundancy than Amazon S3’s standard storage. It provides a cost-effective, highly available solution for distributing or sharing content that is durably stored elsewhere, or for storing thumbnails, transcoded media, or other processed data that can be easily reproduced. The RRS option stores objects on multiple devices across multiple facilities, providing 400 times the durability of a typical disk drive, but does not replicate objects as many times as standard Amazon S3 storage.
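Choosing RRS is a per-object decision made at upload time, which can be sketched as the parameters for an S3 PutObject call with the storage class overriding the STANDARD default. The bucket and key names are hypothetical.

```python
def thumbnail_put_params(bucket, key, body):
    """Build PutObject parameters for a reproducible thumbnail stored
    under Reduced Redundancy instead of the STANDARD default class."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "StorageClass": "REDUCED_REDUNDANCY",  # cheaper, lower durability than STANDARD
    }

params = thumbnail_put_params("picsee-thumbs", "2016/01/cat_thumb.jpg", b"<jpeg bytes>")
```

The original photos would stay in STANDARD storage; only the reproducible thumbnails take the reduced-durability discount.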

  56. You want your Hadoop job to be triggered based on the event notification of a file upload action. Which of the following components can help you implement this in AWS?

      - S3
      - SQS
      - SNS
      - EC2
      - IAM

      Explanation: Answer (A), (B), and (C). Amazon S3 can send event notifications when objects are uploaded to Amazon S3. These notifications can be delivered using Amazon SQS or Amazon SNS, or sent directly to AWS Lambda, enabling you to trigger workflows, alerts, or other processing.
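The wiring described above can be sketched as an S3 bucket notification configuration that publishes object-created events to an SQS queue the Hadoop job can poll. The queue ARN is a hypothetical placeholder.

```python
# S3 notification configuration: every upload to the bucket enqueues an
# event message on the SQS queue that triggers the Hadoop job.
notification_config = {
    "QueueConfigurations": [{
        "Id": "hadoop-trigger",
        "QueueArn": "arn:aws:sqs:us-east-1:123456789012:upload-events",  # hypothetical
        "Events": ["s3:ObjectCreated:*"],  # fire on any kind of upload
    }]
}
```

The same configuration shape takes `TopicConfigurations` for SNS delivery instead, which fits the (A), (B), (C) answer.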

  57. www.dropbag.com is a website that provides file sharing and storage services like Google Drive and Dropbox. During a sync from your desktop you accidentally deleted an important file. Which Simple Storage Service feature will help you retrieve the deleted file?

      - Versioning in S3
      - Secured signed URLs for S3 data access
      - Don’t allow deleting objects from S3 (only soft delete is permitted)
      - S3 Reduced Redundancy Storage

      Explanation: Answer (A). Amazon S3 provides further protection with its versioning capability. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. This allows you to easily recover from both unintended user actions and application failures. By default, requests retrieve the most recently written version.
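In a versioned bucket, the accidental delete does not remove data; it places a "delete marker" on top of the key's version stack, and removing that marker (a versioned delete) restores the previous version. A small pure-Python model of the version listing for one key, with made-up version ids:

```python
def find_delete_marker(versions):
    """Return the version id of the latest delete marker for a key,
    or None if the key has not been deleted."""
    for v in versions:
        if v["IsLatest"] and v.get("IsDeleteMarker"):
            return v["VersionId"]
    return None

# Shape loosely follows an S3 ListObjectVersions response for one key.
versions = [
    {"VersionId": "v3", "IsLatest": True, "IsDeleteMarker": True},  # the accidental delete
    {"VersionId": "v2", "IsLatest": False},                         # the file to restore
]

marker = find_delete_marker(versions)
# Deleting that specific version id (e.g. delete_object with
# VersionId=marker) removes the marker and makes v2 current again.
```

A plain GET after recovery returns `v2`, the most recently written surviving version.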

  58. www.picnic.com is a photo and video hosting website with millions of users. Which of the following is a good solution for storing big data objects while reducing costs, scaling to meet demand, and increasing the speed of innovation?

      - AWS S3
      - AWS RDS
      - AWS Glacier
      - AWS Redshift

      Explanation: Answer (A). Whether you’re storing multimedia files such as photos and videos, pharmaceutical files, or financial data, Amazon S3 can be used as your big data object store. Amazon Web Services offers a comprehensive portfolio of services to help you manage big data by reducing costs, scaling to meet demand, and increasing the speed of innovation.