Want to know Actualtests AWS-Certified-DevOps-Engineer-Professional Exam practice test features? Want to learn more about the Amazon AWS Certified DevOps Engineer Professional certification experience? Study up-to-date Amazon AWS-Certified-DevOps-Engineer-Professional answers and questions at Actualtests. Get success with an absolute guarantee to pass the Amazon AWS-Certified-DevOps-Engineer-Professional (AWS Certified DevOps Engineer Professional) test on your first attempt.
Q17. You need to deploy an AWS stack in a repeatable manner across multiple environments. You have selected CloudFormation as the right tool to accomplish this, but have found that there is a resource type you need to create and model that is unsupported by CloudFormation. How should you overcome this challenge?
A. Use a CloudFormation Custom Resource Template by selecting an API call to proxy for create, update, and delete actions. CloudFormation will use the AWS SDK, CLI, or API method of your choosing as the state transition function for the resource type you are modeling.
B. Submit a ticket to the AWS Forums. AWS extends CloudFormation Resource Types by releasing tooling to the AWS Labs organization on GitHub. Their response time is usually 1 day, and they complete requests within a week or two.
C. Instead of depending on CloudFormation, use Chef, Puppet, or Ansible to author Heat templates, which are declarative stack resource definitions that operate over the OpenStack hypervisor and cloud environment.
D. Create a CloudFormation Custom Resource Type by implementing create, update, and delete functionality, either by subscribing a Custom Resource Provider to an SNS topic, or by implementing the logic in AWS Lambda.
Answer: D
Explanation:
Custom resources provide a way for you to write custom provisioning logic in an AWS CloudFormation template and have AWS CloudFormation run it during a stack operation, such as when you create, update, or delete a stack. For more information, see Custom Resources.
Reference:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html
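As a concrete illustration of option D, the following is a minimal sketch of a Lambda-backed custom resource handler in Python. The provisioning logic, property names, and physical ID are illustrative assumptions; cfnresponse is the helper AWS bundles for inline (ZipFile) Lambda code, and for packaged functions you would POST the result to event['ResponseURL'] yourself.

import cfnresponse

def handler(event, context):
    request_type = event['RequestType']          # 'Create', 'Update', or 'Delete'
    props = event.get('ResourceProperties', {})  # properties passed from the template
    try:
        if request_type == 'Create':
            # ... call whatever SDK/API provisions the unsupported resource ...
            physical_id = 'my-custom-resource-id'   # assumption: your own identifier
        elif request_type == 'Update':
            physical_id = event['PhysicalResourceId']
            # ... apply changes based on props ...
        else:  # Delete
            physical_id = event['PhysicalResourceId']
            # ... tear the resource down ...
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {}, physical_id)
    except Exception:
        cfnresponse.send(event, context, cfnresponse.FAILED, {})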
You run a 2000-engineer organization. You are about to begin using AWS at a large scale for the first time. You want to integrate with your existing identity management system running on Microsoft Active Directory, because your organization is a power user of Active Directory. How should you manage your AWS identities in the simplest manner?
A. Use a large AWS Directory Service Simple AD.
B. Use a large AWS Directory Service AD Connector.
C. Use a Sync Domain running on AWS Directory Service.
D. Use an AWS Directory Sync Domain running on AWS Lambda
Answer: B
Explanation:
You must use AD Connector as a power user of Microsoft Active Directory. Simple AD only supports a subset of AD functionality, and Sync Domains do not exist; they are made-up answers.
AD Connector is a directory gateway that allows you to proxy directory requests to your on-premises Microsoft Active Directory without caching any information in the cloud. AD Connector comes in two sizes: small and large. A small AD Connector is designed for smaller organizations of up to 500 users. A large AD Connector is designed for larger organizations of up to 5,000 users.
Reference: https://aws.amazon.com/directoryservice/details/
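For illustration, a large AD Connector can be created with the AWS SDK. This is a sketch only; the directory name, credentials, VPC, subnets, DNS IPs, and service account below are placeholder assumptions.

import boto3

ds = boto3.client('ds')
response = ds.connect_directory(
    Name='corp.example.com',             # on-premises AD DNS name (assumption)
    Password='service-account-password',
    Size='Large',                        # 'Large' supports up to 5,000 users
    ConnectSettings={
        'VpcId': 'vpc-0123456789abcdef0',
        'SubnetIds': ['subnet-aaaa1111', 'subnet-bbbb2222'],
        'CustomerDnsIps': ['10.0.0.10', '10.0.0.11'],   # on-premises DNS servers
        'CustomerUserName': 'ad-connector-svc',
    },
)
print(response['DirectoryId'])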
Q18. Which EBS volume type is best for high-performance NoSQL cluster deployments?
A. io1
B. gp1
C. standard
D. gp2
Answer: A
Explanation:
io1 volumes, or Provisioned IOPS (PIOPS) SSDs, are best for: Critical business applications that require sustained IOPS performance, or more than 10,000 IOPS or 160 MiB/s of throughput per volume, like large database workloads, such as MongoDB.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
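For illustration, provisioning an io1 volume with boto3 might look like the sketch below; the Availability Zone, size, and IOPS values are placeholder assumptions.

import boto3

ec2 = boto3.client('ec2')
volume = ec2.create_volume(
    AvailabilityZone='us-east-1a',   # assumption: same AZ as the NoSQL instance
    Size=500,                        # GiB
    VolumeType='io1',                # Provisioned IOPS SSD
    Iops=10000,                      # sustained IOPS for the database workload
)
print(volume['VolumeId'])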
Q19. You are building a Ruby on Rails application for internal, non-production use which uses MySQL as a database. You want developers without very much AWS experience to be able to deploy new code with a single command line push. You also want to set this up as simply as possible. Which tool is ideal for this setup?
A. AWS CIoudFormation
B. AWS OpsWorks
C. AWS ELB + EC2 with CLI Push
D. AWS Elastic Beanstalk
Answer: D
Explanation:
Elastic Beanstalk's primary mode of operation supports this use case exactly, out of the box, and it is simpler than all the other options in this question.
With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS cloud without worrying about the infrastructure that runs those applications. AWS Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring.
Reference: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Ruby_rails.html
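For context, the developers would normally run the EB CLI's single deploy command; roughly speaking, that push does something like the following under the hood. The application, environment, bucket, and key names are placeholder assumptions.

import boto3

eb = boto3.client('elasticbeanstalk')

# Register a new application version from a zipped source bundle already in S3.
eb.create_application_version(
    ApplicationName='rails-internal-app',     # assumption
    VersionLabel='v42',
    SourceBundle={'S3Bucket': 'my-deploy-bucket', 'S3Key': 'rails-app-v42.zip'},
)

# Point the running environment at the new version; Beanstalk handles the rollout.
eb.update_environment(EnvironmentName='rails-internal-env', VersionLabel='v42')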
QUESTION NO: 65
What is the scope of AWS IAM?
A. Global
B. Availability Zone
C. Region
D. Placement Group
Answer: A
Explanation:
IAM resources are all global; there is no regional constraint.
Reference: https://aws.amazon.com/iam/faqs/
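A quick way to see this with the SDK (a sketch, assuming credentials that allow iam:ListUsers): clients created against different regions return the same account-wide identities.

import boto3

# IAM is a global service: the region used to create the client does not change
# the set of users, roles, or policies that it sees.
users_east = boto3.client('iam', region_name='us-east-1').list_users()['Users']
users_west = boto3.client('iam', region_name='eu-west-1').list_users()['Users']
assert {u['UserName'] for u in users_east} == {u['UserName'] for u in users_west}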
Q20. You need to migrate 10 million records in one hour into DynamoDB. All records are 1.5KB in size. The data is evenly distributed across the partition key. How many write capacity units should you provision during this batch load?
A. 6667
B. 4166
C. 5556
D. 2778
Answer: C
Explanation:
You need 2 write capacity units for each 1.5 KB item, since write capacity is consumed in 1 KB increments and rounds up. That is 20 million write units in total, and you have 3,600 seconds to perform the load. Divide and round up to get 5,556.
Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ProvisionedThroughput.html
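The same arithmetic, spelled out as a short calculation (the 1 KB rounding rule is the standard DynamoDB write-capacity behavior):

import math

item_size_kb = 1.5
records = 10_000_000
window_seconds = 60 * 60                       # one hour

wcu_per_item = math.ceil(item_size_kb / 1.0)   # writes are billed per 1 KB, rounded up -> 2
total_write_units = records * wcu_per_item     # 20,000,000
provisioned_wcu = math.ceil(total_write_units / window_seconds)
print(provisioned_wcu)                         # 5556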
Q21. Which of these techniques enables the fastest possible rollback times in the event of a failed deployment?
A. Rolling; Immutable
B. Rolling; Mutable
C. Canary or A/B
D. Blue-Green
Answer: D
Explanation:
AWS specifically recommends blue-green for fast, zero-downtime deployments, and therefore fast rollbacks as well, since a rollback is simply redeploying the old version.
You use various strategies to migrate the traffic from your current application stack (blue) to a new version of the application (green). This is a popular technique for deploying applications with zero downtime.
Reference: https://d0.awsstatic.com/whitepapers/overview-of-deployment-options-on-aws.pdf
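One common way to implement the blue-green cutover, and therefore the rollback, is a DNS swap in Route 53. The sketch below assumes an alias record in front of two ELB stacks; the zone ID, record name, and ELB values are placeholders.

import boto3

route53 = boto3.client('route53')

def point_api_at(elb_dns_name, elb_hosted_zone_id):
    # Swap the production alias record to the given ELB (blue or green stack).
    route53.change_resource_record_sets(
        HostedZoneId='Z3EXAMPLE',                     # assumption: your hosted zone
        ChangeBatch={'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': 'api.example.com.',
                'Type': 'A',
                'AliasTarget': {
                    'HostedZoneId': elb_hosted_zone_id,
                    'DNSName': elb_dns_name,
                    'EvaluateTargetHealth': False,
                },
            },
        }]},
    )

# Cut over to green; rolling back is the same call pointed at the blue ELB.
point_api_at('green-elb-1234567890.us-east-1.elb.amazonaws.com', 'ZEXAMPLEELB')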
Q22. You are getting a lot of empty receive requests when using Amazon SQS. This is making a lot of unnecessary network load on your instances. What can you do to reduce this load?
A. Subscribe your queue to an SNS topic instead.
B. Use as long of a poll as possible, instead of short polls.
C. Alter your visibility timeout to be shorter.
D. Use sqsd on your EC2 instances.
Answer: B
Explanation:
One benefit of long polling with Amazon SQS is reducing the number of empty responses (returned when there are no messages available) to ReceiveMessage requests sent to an Amazon SQS queue. Long polling allows the Amazon SQS service to wait until a message is available in the queue before sending a response.
Reference:
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-long-polling.html
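For illustration, long polling with boto3 is just a matter of setting WaitTimeSeconds on the receive call; the queue URL below is a placeholder.

import boto3

sqs = boto3.client('sqs')
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue'  # placeholder

# WaitTimeSeconds=20 is the maximum long-poll wait; the call returns as soon as a
# message arrives, so empty responses (and wasted network calls) drop sharply.
messages = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,
)
for message in messages.get('Messages', []):
    print(message['Body'])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message['ReceiptHandle'])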
Q23. You meet once per month with your operations team to review the past month's data. During the meeting, you realize that 3 weeks ago, your monitoring system, which pings over HTTP from outside AWS, recorded a large spike in latency on your 3-tier web service API.
You use DynamoDB for the database layer, ELB, EBS, and EC2 for the business logic tier, and SQS, ELB, and EC2 for the presentation layer.
Which of the following techniques will NOT help you figure out what happened?
A. Check your CloudTrail log history around the spike's time for any API calls that caused slowness.
B. Review CloudWatch Metrics graphs to determine which component(s) slowed the system down.
C. Review your ELB access logs in S3 to see if any ELBs in your system saw the latency.
D. Analyze your logs to detect bursts in traffic at that time.
Answer: B
Explanation:
Metrics data are available for 2 weeks. If you want to store metrics data beyond that duration, you can retrieve them using the GetMetricStatistics API, as well as a number of applications and tools offered by AWS partners.
Reference: https://aws.amazon.com/cloudwatch/faqs/
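For reference, the GetMetricStatistics call mentioned above looks roughly like the sketch below; it only returns datapoints that are still inside the retention window, which is exactly why option B does not help three weeks later. The load balancer name is a placeholder assumption.

import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client('cloudwatch')

# Pull ELB latency for a window around the spike; this only works while the
# datapoints are still within CloudWatch's retention period.
stats = cloudwatch.get_metric_statistics(
    Namespace='AWS/ELB',
    MetricName='Latency',
    Dimensions=[{'Name': 'LoadBalancerName', 'Value': 'my-api-elb'}],  # placeholder
    StartTime=datetime.utcnow() - timedelta(days=21),
    EndTime=datetime.utcnow() - timedelta(days=20),
    Period=300,
    Statistics=['Average', 'Maximum'],
)
for point in sorted(stats['Datapoints'], key=lambda p: p['Timestamp']):
    print(point['Timestamp'], point['Average'], point['Maximum'])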
Q24. You are experiencing performance issues writing to a DynamoDB table. Your system tracks high scores for video games on a marketplace. Your most popular game experiences all of the performance issues. What is the most likely problem?
A. DynamoDB's vector clock is out of sync because of the rapid growth in requests for the most popular game.
B. You selected the Game ID or equivalent identifier as the primary partition key for the table.
C. Users of the most popular video game each perform more read and write requests than average.
D. You did not provision enough read or write throughput to the table.
Answer: B
Explanation:
The primary key selection dramatically affects performance consistency when reading or writing to DynamoDB. By selecting a key that is tied to the identity of the game, you forced DynamoDB to create a hotspot in the table partitions and to over-request against the primary key partition for the popular game. When it stores data, DynamoDB divides a table's items into multiple partitions, and distributes the data primarily based upon the partition key value. The provisioned throughput associated with a table is also divided evenly among the partitions, with no sharing of provisioned throughput across partitions.
Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GuidelinesForTables.html#GuidelinesForTables.UniformWorkload
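A common mitigation, not asked for in the question but worth noting, is to write-shard the hot partition key by appending a random suffix so the popular game's writes spread across many partitions. The table name, attribute names, and shard count below are assumptions.

import random
import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('GameScores')   # assumed table with partition key 'GameIdShard'

NUM_SHARDS = 10   # assumption: enough shards to spread the hottest game's traffic

def put_score(game_id, user_id, score):
    # Append a random shard suffix so writes for one game hit many partitions.
    table.put_item(Item={
        'GameIdShard': f'{game_id}#{random.randrange(NUM_SHARDS)}',
        'UserId': user_id,
        'Score': score,
    })

put_score('popular-game', 'user-123', 98765)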