Proper study guides for the latest Amazon AWS Certified DevOps Engineer Professional certification begin with Amazon AWS-Certified-DevOps-Engineer-Professional preparation products, which are designed to deliver vivid AWS-Certified-DevOps-Engineer-Professional questions and help you pass the AWS-Certified-DevOps-Engineer-Professional test on your first attempt. Try the free AWS-Certified-DevOps-Engineer-Professional demo right now.
Q1. You are building a mobile app for consumers to post cat pictures online. You will be storing the images in AWS S3. You want to run the system very cheaply and simply. Which one of these options allows you to build a photo sharing application without needing to worry about scaling expensive upload processes, authentication/authorization, and so forth?
A. Build the application out using AWS Cognito and web identity federation to allow users to log in using Facebook or Google Accounts. Once they are logged in, the secret token passed to that user is used to directly access resources on AWS, like AWS S3.
B. Use JWT or SAML compliant systems to build authorization policies. Users log in with a username and password, and are given a token they can use indefinitely to make calls against the photo infrastructure.
C. Use AWS API Gateway with a constantly rotating API Key to allow access from the client-side. Construct a custom build of the SDK and include S3 access in it.
D. Create an AWS OAuth Service Domain and grant public signup and access to the domain. During setup, add at least one major social media site as a trusted Identity Provider for users.
Answer: A
Explanation:
The short answer is that Amazon Cognito is a superset of the functionality provided by web identity federation. It supports the same providers, and you configure your app and authenticate with those providers in the same way. But Amazon Cognito includes a variety of additional features. For example, it enables your users to start using the app as a guest user and later sign in using one of the supported identity providers.
Reference:
https://blogs.aws.amazon.com/security/post/Tx3SYCORF5EKRCO/How-Does-Amazon-Cognito-Relate-to-Existing-Web-Identity-Federation
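As a rough illustration of the pattern in this explanation, the sketch below (Python with boto3; the identity pool ID, Facebook token, and bucket name are placeholders, not values from this question) exchanges a social identity token for temporary AWS credentials through Cognito and then uploads an image directly to S3:

```python
import boto3

# Hypothetical identifiers -- replace with your own identity pool, token, and bucket.
IDENTITY_POOL_ID = "us-east-1:00000000-0000-0000-0000-000000000000"
FACEBOOK_TOKEN = "<token returned by the Facebook login flow>"
BUCKET = "cat-pictures-example"

cognito = boto3.client("cognito-identity", region_name="us-east-1")

# 1. Resolve a Cognito identity for the federated (Facebook) user.
identity = cognito.get_id(
    IdentityPoolId=IDENTITY_POOL_ID,
    Logins={"graph.facebook.com": FACEBOOK_TOKEN},
)

# 2. Exchange that identity for short-lived AWS credentials.
creds = cognito.get_credentials_for_identity(
    IdentityId=identity["IdentityId"],
    Logins={"graph.facebook.com": FACEBOOK_TOKEN},
)["Credentials"]

# 3. Call S3 directly with the temporary credentials -- no upload servers needed.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretKey"],
    aws_session_token=creds["SessionToken"],
)
s3.put_object(Bucket=BUCKET, Key="cats/cat1.jpg", Body=open("cat1.jpg", "rb"))
```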
Q2. There are a number of ways to purchase compute capacity on AWS. Which orders the price per compute or memory unit from LOW to HIGH (cheapest to most expensive), on average?
A. On-Demand
B. Spot
C. Reserved
A. A, B, C
B. C, B, A
C. B, C, A
D. A, C, B
Answer: C
Explanation:
Spot instances are usually many, many times cheaper than on-demand prices. Reserved instances, depending on their term and utilization, can yield approximately 33% to 66% cost savings. On-Demand prices are the baseline price and are the most expensive way to purchase EC2 compute time.
Reference: https://d0.awsstatic.com/whitepapers/Cost_Optimization_with_AWS.pdf
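To see the price gap for yourself, here is a minimal sketch (Python with boto3; the instance type and the hard-coded On-Demand figure are illustrative assumptions) that pulls recent Spot prices and compares them against an On-Demand rate:

```python
import boto3

# Assumed for illustration: an On-Demand rate for this instance type.
ON_DEMAND_PRICE = 0.096  # USD per hour, m5.large (check the pricing page for real numbers)

ec2 = boto3.client("ec2", region_name="us-east-1")
history = ec2.describe_spot_price_history(
    InstanceTypes=["m5.large"],
    ProductDescriptions=["Linux/UNIX"],
    MaxResults=5,
)

for entry in history["SpotPriceHistory"]:
    spot = float(entry["SpotPrice"])
    print(f'{entry["AvailabilityZone"]}: spot ${spot:.4f}/hr '
          f'({spot / ON_DEMAND_PRICE:.0%} of On-Demand)')
```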
Q3. For AWS Auto Scaling, what is the first transition state an instance enters after leaving steady state when scaling in due to health check failure or decreased load?
A. Terminating
B. Detaching
C. Terminating:Wait
D. EnteringStandby
Answer: A
Explanation:
When Auto Scaling responds to a scale in event, it terminates one or more instances. These instances are detached from the Auto Scaling group and enter the Terminating state.
Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingGroupLifecycle.html
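If you want to observe these lifecycle transitions, a small sketch (Python with boto3; the group name is a placeholder) polls the instances in an Auto Scaling group and prints their current LifecycleState, which shows values such as InService, Terminating, or Terminating:Wait during a scale-in event:

```python
import boto3

GROUP_NAME = "my-asg"  # placeholder Auto Scaling group name

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

groups = autoscaling.describe_auto_scaling_groups(AutoScalingGroupNames=[GROUP_NAME])
for group in groups["AutoScalingGroups"]:
    for instance in group["Instances"]:
        # LifecycleState is Pending, InService, Terminating, Terminating:Wait, etc.
        print(instance["InstanceId"], instance["LifecycleState"], instance["HealthStatus"])
```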
Q4. You run a clustered NoSQL database on AWS EC2 using AWS EBS. You need to reduce latency for database response times. Performance is the most important concern, not availability. You did not perform the initial setup, someone without much AWS knowledge did, so you are not sure if they configured everything optimally. Which of the following is NOT likely to be an issue contributing to increased latency?
A. The EC2 instances are not EBS Optimized.
B. The database and requesting system are both in the wrong Availability Zone.
C. The EBS Volumes are not using PIOPS.
D. The database is not running in a placement group.
Answer: B
Explanation:
For the highest possible performance, all instances in a clustered database like this one should be in a single Availability Zone in a placement group, using EBS optimized instances, and using PIOPS SSD EBS Volumes. The particular Availability Zone the system is running in should not be important, as long as it is the same as the requesting resources.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
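A hedged sketch of the "optimal" setup the explanation describes (Python with boto3; the AMI ID, instance type, and volume sizes are placeholders): launch EBS-optimized instances into a cluster placement group with a Provisioned IOPS (io1) data volume.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Cluster placement group keeps the database nodes on low-latency networking.
ec2.create_placement_group(GroupName="nosql-cluster", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="r5.2xlarge",
    MinCount=3,
    MaxCount=3,
    EbsOptimized=True,                          # dedicated throughput between EC2 and EBS
    Placement={"GroupName": "nosql-cluster"},   # all nodes in one AZ, one group
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvdb",
        "Ebs": {"VolumeSize": 200, "VolumeType": "io1", "Iops": 6000},  # PIOPS SSD
    }],
)
```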
Q5. From a compliance and security perspective, which of these statements is true?
A. You do not ever need to rotate access keys for AWS IAM Users.
B. You do not ever need to rotate access keys for AWS IAM Roles, nor AWS IAM Users.
C. None of the other statements are true.
D. You do not ever need to rotate access keys for AWS IAM Roles.
Answer: D
Explanation:
IAM Role Access Keys are auto-rotated by AWS on your behalf; you do not need to rotate them.
The application is granted the permissions for the actions and resources that you've defined for the role through the security credentials associated with the role. These security credentials are temporary and we
rotate them automatically. We make new credentials available at least five minutes prior to the expiration of the old credentials.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
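You can see the auto-rotated role credentials directly from the instance metadata service. The sketch below (Python with requests; it assumes it runs on an EC2 instance that has an IAM role attached and IMDSv2 available) fetches the temporary credentials and prints their expiration:

```python
import requests

IMDS = "http://169.254.169.254"

# IMDSv2: get a session token first.
token = requests.put(
    f"{IMDS}/latest/api/token",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
).text
headers = {"X-aws-ec2-metadata-token": token}

# Discover the attached role, then pull its temporary credentials.
role = requests.get(f"{IMDS}/latest/meta-data/iam/security-credentials/", headers=headers).text
creds = requests.get(
    f"{IMDS}/latest/meta-data/iam/security-credentials/{role}", headers=headers
).json()

# AWS rotates these automatically; note the Expiration timestamp.
print(creds["AccessKeyId"], creds["Expiration"])
```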
Q6. What is the order of most-to-least rapidly-scaling (fastest to scale first)?
A. EC2 + ELB + Auto Scaling
B. Lambda
C. RDS
A. B, A, C
B. C, B, A
C. C, A, B
D. A, C, B
Answer: A
Explanation:
Lambda is designed to scale instantly. EC2 + ELB + Auto Scaling require single-digit minutes to scale out. RDS will take at least 15 minutes, and will apply OS patches or any other pending updates when it does.
Reference: https://aws.amazon.com/lambda/faqs/
Q7. You need to create an audit log of all changes to customer banking data. You use DynamoDB to store this customer banking data. It's important not to lose any information due to server failures. What is an elegant way to accomplish this?
A. Use a DynamoDB StreamSpecification and stream all changes to AWS Lambda. Log the changes to AWS CloudWatch Logs, removing sensitive information before logging.
B. Before writing to DynamoDB, do a pre-write acknowledgment to disk on the application server, removing sensitive information before logging. Periodically rotate these log files into S3.
C. Use a DynamoDB StreamSpecification and periodically flush to an EC2 instance store, removing sensitive information before putting the objects. Periodically flush these batches to S3.
D. Before writing to DynamoDB, do a pre-write acknowledgment to disk on the application server, removing sensitive information before logging. Periodically pipe these files into CloudWatch Logs.
Answer: A
Explanation:
All suggested periodic options are sensitive to server failure during or between periodic flushes. Streaming to Lambda and then logging to CloudWatch Logs will make the system resilient to instance and Availability Zone failures.
Reference: http://docs.aws.amazon.com/lambda/latest/dg/with-ddb.html
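A minimal sketch of the Lambda handler option A describes (Python; the list of sensitive attribute names is a placeholder): it receives DynamoDB stream records, strips the sensitive attributes, and prints the rest, which Lambda delivers to CloudWatch Logs automatically.

```python
import json

SENSITIVE_ATTRIBUTES = {"account_number", "ssn"}  # placeholder list of fields to redact

def handler(event, context):
    for record in event["Records"]:
        image = record.get("dynamodb", {}).get("NewImage", {})
        # Remove sensitive information before logging.
        redacted = {k: v for k, v in image.items() if k not in SENSITIVE_ATTRIBUTES}
        # Anything printed by a Lambda function ends up in CloudWatch Logs.
        print(json.dumps({
            "eventName": record["eventName"],
            "keys": record["dynamodb"].get("Keys"),
            "newImage": redacted,
        }))
```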
Q8. Your API requires the ability to stay online during AWS regional failures. Your API does not store any state; it only aggregates data from other sources - you do not have a database. What is a simple but effective way to achieve this uptime goal?
A. Use a CloudFront distribution to serve up your API. Even if the region your API is in goes down, the edge locations CloudFront uses will be fine.
B. Use an ELB and a cross-zone ELB deployment to create redundancy across datacenters. Even if a region fails, the other AZ will stay online.
C. Create a Route53 Weighted Round Robin record, and if one region goes down, have that region redirect to the other region.
D. Create a Route53 Latency Based Routing Record with Failover and point it to two identical deployments of your stateless API in two different regions. Make sure both regions use Auto Scaling Groups behind ELBs.
Answer: D
Explanation:
Latency Based Records allow request distribution when all is well with both regions, and the Failover component enables fallbacks between regions. By adding in the ELB and ASG, your system in the surviving region can expand to meet 100% of demand instead of the original fraction, whenever failover occurs.
Reference: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html
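As a rough sketch of answer D (Python with boto3; the hosted zone, domain, and ELB values are placeholders), each region gets a latency-based alias record pointing at its ELB, with EvaluateTargetHealth enabled so Route53 shifts traffic to the other region when one deployment becomes unhealthy:

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0000000000000000000"  # placeholder hosted zone
REGIONS = {
    # region: (ELB hosted zone ID, ELB DNS name) -- placeholders
    "us-east-1": ("Z35SXDOTRQ7X7K", "api-use1-123.us-east-1.elb.amazonaws.com"),
    "eu-west-1": ("Z32O12XQLNTSW2", "api-euw1-456.eu-west-1.elb.amazonaws.com"),
}

changes = []
for region, (elb_zone_id, elb_dns) in REGIONS.items():
    changes.append({
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "api.example.com",
            "Type": "A",
            "SetIdentifier": region,        # one latency record per region
            "Region": region,               # latency-based routing
            "AliasTarget": {
                "HostedZoneId": elb_zone_id,
                "DNSName": elb_dns,
                "EvaluateTargetHealth": True,  # unhealthy region is skipped
            },
        },
    })

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Changes": changes},
)
```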
Q9. You are designing an enterprise data storage system. Your data management software system requires mountable disks and a real filesystem, so you cannot use S3 for storage. You need persistence, so you will be using AWS EBS Volumes for your system. The system needs as low-cost storage as possible, and access is not frequent or high throughput, and is mostly sequential reads. Which is the most appropriate EBS Volume Type for this scenario?
A. gp1
B. io1
C. standard
D. gp2
Answer: C
Explanation:
standard volumes, or Magnetic volumes, are best for: Cold workloads where data is infrequently accessed, or scenarios where the lowest storage cost is important.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
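For completeness, a small sketch (Python with boto3; size and Availability Zone are placeholders) of creating the Magnetic volume type the answer refers to:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 'standard' is the Magnetic volume type: lowest storage cost, suited to
# infrequently accessed, mostly sequential workloads.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,                 # GiB, placeholder
    VolumeType="standard",
)
print(volume["VolumeId"], volume["VolumeType"])
```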