Sunday, July 3, 2016

AWS Adds On-Premises and Multi-Cloud Support to EC2 Run Command Tool



Amazon Web Services (AWS), the largest public cloud available today, announced that it is adding on-premises and multi-cloud support to its EC2 Run Command feature, which lets users run scripts and other administrative tasks across multiple machines. Until now, the service only let users manage multiple EC2 instances, the virtual slices of physical servers in Amazon's data centers.

"Many customers also have AWS servers in the site or in another cloud, and were looking for a single, unified way to manage their scale hybrid environment," chief AWS evangelist Jeff Barr wrote in a blog.

The important thing here is that AWS is moving beyond providing tools that only work in its own public cloud. The idea of hybrid cloud is to rely on multiple infrastructures - perhaps AWS and an on-premises private cloud, or perhaps AWS, Microsoft Azure and a private cloud, for example - and AWS is now taking another step to embrace that scenario. It has done so a few times before. For example, in April 2015 it added on-premises support to AWS CodeDeploy, its continuous deployment service. The AWS Snowball appliance, which ships hard drives to and from AWS, is likewise an acknowledgment that companies still keep equipment in their own data centers, even if the product's premise is to migrate that data into AWS.

Support for infrastructure in public clouds other than AWS - whether Azure, Google Cloud Platform or IBM SoftLayer - has less precedent and is more surprising.

Last week, during an onstage discussion at the AWS Public Sector Summit in Washington, D.C., AWS CEO Andy Jassy spoke about the disadvantages of using more than one public cloud:

"If you want to multi-cloud across the board, you must standardize all areas with the lowest common denominator ... Most companies and organizations do not want to give up all the management capacity of several piles - are .. it is a pain in the butt to do so, it is difficult, it is resource-intensive, it is expensive to try to have to delve into several (cloud) is really .. difficult and very expensive. you give a little of your lever purchase "

However, Microsoft, AWS's closest competitor, has pursued a hybrid cloud strategy, exposing management tools for both public and private infrastructure and offering the same tools on-premises that are available in Azure. In essence, Microsoft's cloud is more hybrid than AWS's. AWS is now making a bit more of a move in that direction, putting it ahead of Google's cloud, which has done even less to embrace on-premises computing. Microsoft's and Google's cloud-based management tools (Operations Management Suite and Stackdriver, respectively) can let customers work with some of Amazon's AWS infrastructure, but Amazon, as you would expect from the market leader, has not extended the same courtesy in the past. EC2 Run Command is not the same type of service, but it is ultimately a choice to design a product to work with infrastructure that is not Amazon's own.

All customers have to do is install the AWS SSM Agent on their servers. It works with both Windows Server (2003-2012) and Linux (Red Hat Enterprise Linux and CentOS). The agent can also be installed on guest operating systems running under VMware ESXi, Microsoft Hyper-V or KVM hypervisors.
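
For a sense of what that looks like in practice, here is a minimal sketch of invoking Run Command from boto3, AWS's Python SDK; the managed-instance ID and the command are hypothetical, and error handling is omitted. On-premises machines registered through the SSM Agent are addressed like EC2 instances, but with IDs prefixed "mi-".

import time

import boto3

# SSM treats registered on-premises servers as "managed instances"
# whose IDs start with "mi-" instead of EC2's "i-".
ssm = boto3.client("ssm", region_name="us-east-1")

response = ssm.send_command(
    InstanceIds=["mi-0123456789abcdef0"],   # hypothetical managed-instance ID
    DocumentName="AWS-RunShellScript",      # built-in document for Linux shell commands
    Parameters={"commands": ["uptime"]},    # hypothetical command to run
)
command_id = response["Command"]["CommandId"]

time.sleep(2)  # crude wait; real code would poll until the invocation finishes

result = ssm.get_command_invocation(
    CommandId=command_id,
    InstanceId="mi-0123456789abcdef0",
)
print(result["Status"], result["StandardOutputContent"])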

The extended Run Command feature is available in the following AWS regions: US East (Northern Virginia), US West (Northern California and Oregon), EU (Frankfurt and Ireland), Asia Pacific (Singapore, Sydney and Tokyo) and South America (Sao Paulo). AWS first announced Run Command in October.

Sunday, June 26, 2016

U.S. OKs AWS Cloud For Sensitive Data

The Amazon Web Services cloud has expanded deeper into government infrastructure with an announcement this week that it has been certified to handle a greater share of federal workloads.

The public cloud giant, which provides secure cloud computing services for the CIA and defense agencies, said on Thursday (June 23) that the GovCloud (US) region of its platform had been designated as a safe environment for running "highly sensitive workloads." The provisional authorization came after a review by a joint government board under the Federal Risk and Authorization Management Program, or FedRAMP.

The U.S. government's authorization to manage sensitive data in the cloud encompasses over 400 security controls, enabling AWS to offer cloud services for workloads that include personal information about federal employees, sensitive patient records, budget and other financial data, law enforcement files and data designated as "controlled unclassified information."

The approval follows last year's massive network breach at the Office of Personnel Management, which is believed to have compromised the personal data of tens of millions of federal employees, including some holding security clearances.

In a statement, the company said the FedRAMP "high baseline" authorization would ease the process of moving the government's sensitive workloads onto the AWS GovCloud platform. The company estimates that more than 2,300 government customers currently use its cloud services. Workloads range from analyzing social media data and disseminating genomic data to collecting images of Mars from NASA planetary probes.

The FedRAMP program aims to provide a standardized approach to security assessment, authorization and monitoring of cloud services and products as federal agencies make a slow transition to the cloud. The "high baseline" category covers data whose theft by hackers could severely affect agency operations and personnel. AWS called the category "the most rigorous FedRAMP level to date" for standardizing security controls in the cloud.

AWS has already won a huge contract to provide secure cloud computing services for the CIA, and perhaps other U.S. intelligence agencies. The new authorization would open the GovCloud platform to civilian agencies, the Department of Defense, the Department of Veterans Affairs and other agencies handling sensitive personal and other data.

Launched in 2011, AWS GovCloud (US) is an isolated U.S. region designed to house sensitive workloads in the cloud. Along with FedRAMP, the platform complies with the U.S. International Traffic in Arms Regulations covering weapons exports, Department of Justice security requirements, and DoD requirements for systems classified at impact levels 2 and 4.

The GovCloud (US) region offers Amazon Elastic Compute Cloud, Amazon Virtual Private Cloud, Amazon Simple Storage Service and Amazon Elastic Block Store, along with identity and access management services, according to the cloud provider.

Sunday, June 19, 2016

How REA Group Weathered The AWS Cloud Outage

Real estate giant REA Group came through the recent Amazon Web Services Sydney availability zone outage relatively unscathed, thanks to a multi-region, multi-availability-zone cloud architecture.

Earlier this month, one of AWS's Sydney availability zones went down after bad weather triggered a failure in the company's uninterruptible power supply (UPS) configuration.

The outage sent some of Australia's largest web properties scrambling when EC2 instances and EBS volumes in the affected AZ became inaccessible, and other services, including the elastic APIs and internal DNS lookups, also experienced problems.


REA Group, a big user of AWS services, was among those affected, but got away with only slightly slower ad serving, one web application offline, a wobbly Android application and slower response times for some services.

"... If we are not totally insensitive well, overall it was a good result," said Jeremy Burton greater technical chief.

Being prepared, and lucky

While the outage has led many to reconsider their cloud architectures, REA Group said designing for failure - along with a little "luck" - helped it weather the storm.

REA's production systems are deployed across multiple availability zones by default. Its most critical systems - except those like Redshift that don't offer a multi-AZ option - are also designed to run across multiple regions, specifically Frankfurt and Sydney.

The IT team runs independent copies of its systems in each region, which interact with REA's master data store on an eventually consistent basis, Burton said.

"The only thing that is common is the source of the data," he wrote.

"That way, if a region has problems, the other is affected at all."

API clients can talk cross-region if local copies are unavailable, Burton said, using a combination of AWS Route53 latency-based routing and Route53 health checks.
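
As an illustration, a failover setup along those lines might be wired up with boto3 as sketched below; the hosted zone ID, domain names and endpoints are hypothetical, not REA's actual configuration.

import boto3

route53 = boto3.client("route53")

# Health check against the (hypothetical) Sydney API endpoint.
health = route53.create_health_check(
    CallerReference="api-syd-check-1",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "api-syd.example.com",
        "Port": 443,
        "ResourcePath": "/health",
    },
)

# Two latency-based records for the same name: Route 53 normally answers
# with the lowest-latency endpoint, but steers clients to Frankfurt
# whenever the Sydney health check is failing.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000",   # hypothetical hosted zone ID
    ChangeBatch={"Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "api.example.com", "Type": "CNAME", "TTL": 60,
            "SetIdentifier": "sydney", "Region": "ap-southeast-2",
            "HealthCheckId": health["HealthCheck"]["Id"],
            "ResourceRecords": [{"Value": "api-syd.example.com"}],
        }},
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "api.example.com", "Type": "CNAME", "TTL": 60,
            "SetIdentifier": "frankfurt", "Region": "eu-central-1",
            "ResourceRecords": [{"Value": "api-fra.example.com"}],
        }},
    ]},
)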

This approach kicked in during the recent Sydney AZ outage - "one of our services automatically switched over to our European region when some of its instances experienced problems," Burton said.

Moreover, continuing to host some of its core systems in its own data center, and deploying static assets directly to S3, helped REA avoid severe downtime.

"S3 by its nature is more durable than an EC2 instance, and more likely to survive a failure AZ" Burton said.

"It's multi-AZ default, and as events have shown weekend just be mutli-AZ is not necessarily enough to be resistant to failure AZ, the S3 service has."

Deep pockets necessary

However, those adopting a multi-region approach should be prepared to see their infrastructure costs double, Burton said.

"You need good architecture systems running on eventual consistency, and to disengage in a way that provides redundancy in the relevant parts of the infrastructure," he said.

"Making your unchanging infrastructure has a cost of automation.

"And in some cases, simply not worth it. That SLA does not mean a need for multi region, or the system is not enough to justify the costs of critical engineering or infrastructure."

Sunday, June 12, 2016

AWS blames 'latent bug' for prolonging Sydney EC2 Outage

Amazon Web Services says the extended outage its Sydney services suffered last weekend was down to a combination of power supply problems and a "latent bug in our instance management software."

Sydney recorded more than 150 mm of rain over the weekend. On Sunday June 5 alone, the city copped 93 mm, plus wind gusts of up to 96 km/h.

Amazon said the bad weather meant that "at 22:25 PDT on June 4 [Sunday afternoon in Sydney - Ed], our utility provider suffered a loss of power at a regional substation as a result of severe weather in the area. This failure resulted in a total loss of utility power to multiple AWS facilities."

AWS has two backup power systems, but in some cases, both backups failed on the night in question.

The cloud giant's explanation says its backups employ a "diesel rotary UPS (DRUPS), which integrates a diesel generator and a mechanical flywheel."

"In normal operation, the DRUPS using AC power to spin a wheel that stores energy. If the power supply is interrupted, the DRUPS uses this stored energy to continue to provide power to the data center while the integrated generator active to continue to provide power until power is restored. "

Last weekend, however, "a set of breakers responsible for isolating the DRUPS from utility power failed to open quickly enough." That was bad because these breakers are supposed to "assure that the DRUPS reserve power is used to support the data center load during the transition to generator power."

"Instead, the power reserve system DRUPS evacuated quickly in the gradient of the network."

This failure meant the diesel generators could not send juice to the data center, which quickly went dark.

AWS technicians had things running again by 23:46 PDT, and by 1:00 pm PDT on June 5 "more than 80% of affected customer instances and volumes were back online and operational." Some workloads were slower to recover, thanks to what AWS called internal DNS resolution errors as the internal DNS hosts for the availability zone came back online and handled the recovery load.

However, some instances did not come back. AWS now says this was due to "a latent bug in our instance management software" that meant some instances had to be restored manually. AWS has not explained the nature of the bug.

Other instances were affected by dead drives, meaning data was not immediately available and manual work was required to restore it.

As is always the case after such incidents, AWS has promised to toughen up the components that failed.

"Even if we had an excellent operating performance power configuration used in this facility," says mea culpa "it is clear that we need to improve this particular design to avoid falling a similar affect our infrastructure power distribution power" .

More breakers are on the agenda "to assure that we more quickly break connections to degraded utility power, allowing our generators to activate before the UPS systems are depleted."

Improvements are also promised in software, including "changes to assure our APIs are even more resilient to failure," so that customers who use multiple AWS regions can rely on switching between bit barns.
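
As a rough illustration of what relying on that looks like from the client side, here is a minimal sketch assuming a hypothetical application that can read from either of two regions; the region list and the API call are illustrative only.

import boto3
from botocore.exceptions import BotoCoreError, ClientError

# Try Sydney first, then fall back to Singapore if its API misbehaves.
REGIONS = ["ap-southeast-2", "ap-southeast-1"]

def describe_instances_with_failover():
    last_error = None
    for region in REGIONS:
        try:
            ec2 = boto3.client("ec2", region_name=region)
            return ec2.describe_instances()   # illustrative read-only API call
        except (BotoCoreError, ClientError) as error:
            last_error = error   # regional API is down or erroring; try the next one
    raise last_error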

These changes are expected to land in the Sydney region in July.

AWS is far from the only cloud operator to have suffered physical or software problems. Salesforce has also had trouble with circuit breakers, while Google broke its own cloud with a bug and once lost data after a lightning strike.

Thursday, June 9, 2016

AWS-Certified-Solutions-Architect-Professional Exam Question No 25

Question No 25:

To serve Web traffic for a popular product, your chief financial officer and IT director have purchased 10 m1.large heavy utilization Reserved Instances (RIs), evenly spread across two availability zones; Route 53 is used to deliver the traffic to an Elastic Load Balancer (ELB). After several months, the product grows even more popular and you need additional capacity. As a result, your company purchases two c3.2xlarge medium utilization RIs. You register the two c3.2xlarge instances with your ELB and quickly find that the m1.large instances are at 100% of capacity and the c3.2xlarge instances have significant unused capacity. Which option is the most cost effective and uses EC2 capacity most effectively?

A. Configure an Auto Scaling group and launch configuration with the ELB to add up to 10 more on-demand m1.large instances when triggered by CloudWatch. Shut off the c3.2xlarge instances.
B. Configure the ELB with the two c3.2xlarge instances and use an on-demand Auto Scaling group for up to two additional c3.2xlarge instances. Shut off the m1.large instances.
C. Route traffic to the EC2 m1.large and c3.2xlarge instances directly using Route 53 latency-based routing and health checks. Shut off the ELB.
D. Use a separate ELB for each instance type and distribute load to the ELBs with Route 53 weighted round robin.

Answer: D (An ELB spreads traffic evenly per instance, so mixing instance sizes behind one ELB starves the larger boxes, and heavy utilization RIs are billed whether or not they run, so shutting off the m1.large fleet wastes money. Splitting the pools behind separate ELBs and weighting Route 53 round robin by pool capacity uses both sets of already-paid-for RIs effectively.)
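
For illustration, option D's weighted routing might be set up with boto3 roughly as follows; the hosted zone ID, domain and ELB DNS names are hypothetical, and the weights are stand-ins for the measured capacity of each pool.

import boto3

route53 = boto3.client("route53")

# Weighted round robin: Route 53 splits traffic between the two ELBs in
# proportion to the weights, so each RI pool receives load matched to
# its capacity instead of the ELB's even per-instance spread.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000",   # hypothetical hosted zone ID
    ChangeBatch={"Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "shop.example.com", "Type": "CNAME", "TTL": 60,
            "SetIdentifier": "m1-large-pool", "Weight": 55,
            "ResourceRecords": [{"Value": "m1-elb-123.us-east-1.elb.amazonaws.com"}],
        }},
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "shop.example.com", "Type": "CNAME", "TTL": 60,
            "SetIdentifier": "c3-2xlarge-pool", "Weight": 45,
            "ResourceRecords": [{"Value": "c3-elb-456.us-east-1.elb.amazonaws.com"}],
        }},
    ]},
)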

Sunday, June 5, 2016

AWS Endures Extended Outage in Australia

Heavy clouds take out clouds

System administrators in Sydney had a horrible Sunday as their CEOs bent their ears wondering why their Foxtel rugby programming wasn't working.

The "power event" in Amazon Web Services' ap-southeast-2 region was almost certainly caused by a massive storm system that ran from Brisbane down to the south coast of New South Wales, bringing a weekend of flooding, blocked roads and coastal erosion. Plenty of normally dry areas between Byron Bay and Sydney became watercourses.

Affected services included EC2, Elastic Load Balancing, ElastiCache, Redshift, Relational Database Service, Route 53 Private DNS, CloudFormation, CloudHSM, Database Migration Service, Elastic Beanstalk and storage systems.

There was a bit of "what's going on?" discussion on the Australian Network Operators Group (AusNOG) mailing list, which thundered warnings at media not to report the list's gossip (The Register assumes that was in response to a previous Vulture South story that drew on AusNOG messages).

So we won't be telling you that some network operators spent their Sunday working on the problem.

AWS had not yet fully recovered as of 19:00 Sydney time on Sunday night, if the tweets were anything to go by.

Thursday, June 2, 2016

AWS-Certified-Solutions-Architect-Professional Exam Question No 24

Question No 24:

An ERP application is deployed across multiple AZs in a single region. In the event of failure, the Recovery Time Objective (RTO) must be less than 3 hours, and the Recovery Point Objective (RPO) must be 15 minutes. The customer realizes that data corruption occurred roughly 1.5 hours ago. What DR strategy could be used to achieve this RTO and RPO in the event of this kind of failure?
A. Take 15 minute DB backups stored in Glacier with transaction logs stored in S3 every 5 minutes.
B. Use synchronous database master-slave replication between two availability zones.
C. Take hourly DB backups to EC2 instance store volumes with transaction logs stored in S3 every 5 minutes.
D. Take hourly DB backups to S3, with transaction logs stored in S3 every 5 minutes.

Answer: D (Instance store volumes are ephemeral and lost when an instance fails, so option C cannot guarantee recovery; Glacier retrieval is too slow for the 3-hour RTO, and synchronous replication would faithfully replicate the corruption. Hourly backups to S3 plus 5-minute transaction logs allow a point-in-time restore to just before the corruption, within the 15-minute RPO.)
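
A minimal sketch of the strategy in option D, assuming hypothetical file paths and bucket name; in practice a scheduler would run the first function hourly and the second every five minutes.

import datetime

import boto3

s3 = boto3.client("s3")
BUCKET = "erp-dr-backups-example"   # hypothetical bucket

def timestamp():
    return datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%dT%H%M%SZ")

def ship_hourly_backup(dump_path="/var/backups/db.dump"):
    # Run hourly: full database backup, keyed by timestamp.
    s3.upload_file(dump_path, BUCKET, f"backups/{timestamp()}.dump")

def ship_transaction_log(log_path="/var/lib/db/tx.log"):
    # Run every 5 minutes: transaction logs make a point-in-time restore
    # possible, so you can replay to just before the corruption occurred.
    s3.upload_file(log_path, BUCKET, f"txlogs/{timestamp()}.log")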