Amazon Web Services Debuts Cloud Solutions at re:Invent

Amazon Web Services is pumping out announcement after announcement this week at the AWS re:Invent conference in Las Vegas, Nevada. The sold-out symposium focuses on cloud computing solutions and features workshops for developers and database architects, as well as other technical decision-makers.

So far, AWS has made four key announcements, starting with new programs and initiatives to support its growing partner network for cloud computing services. Amazon also announced Aurora, a MySQL-compatible database engine for the Amazon Relational Database Service; new services for enterprise security and governance; and new application lifecycle management services.

On the services front, AWS rolled out expanded AWS Partner Network (APN) benefits, new Managed Service and SaaS Partner Programs, and APN partner-specific training. In addition, Amazon introduced a slate of 2015 Premier Consulting Partners, comprising 28 APN partners that offer strong customer service and expertise.

Terry Wise, Director of Worldwide Partner Ecosystem at AWS, said the company plans to more than double its investments in its APN partners in 2015, with significant updates, enhancements, and new benefits.

Any Game Changers?

The Amazon Aurora database engine could be the biggest news at re:Invent, with its promise to combine the speed and availability of high-end commercial databases with the ease of use and cost-effectiveness of open source databases. The promised result: up to five times better performance than typical MySQL databases and availability that’s as good as or better than commercial databases or high-end SANs — without sacrificing scalability and security. AWS claims the cost is one-tenth the price of high-end commercial database offerings. Customers pay an hourly charge for each Aurora database instance they use.

Raju Gulabani, Vice President of Database Services at AWS, said he consistently hears from customers that want an easier way to get the performance of commercial databases at the price of open source engines. “This is why we built Amazon Aurora,” he said. “We’ve spent the last three years working on a MySQL-compatible database that innovates on the engine and storage layers to deliver five times the performance of MySQL at one-tenth the price of commercial database solutions.”

Quickbooks-maker Intuit’s results seem to prove Gulabani’s point. Intuit invests significantly to own and operate the high-end commercial databases underpinning its accounting-software business. Troy Otillio, Director of Intuit’s Public Cloud, said that until now there wasn’t a real alternative for obtaining the reliability and performance its customers need.

“Amazon Aurora is a game-changer for us: providing the performance and availability features that rival expensive on-premises databases and SANs at a significantly lower price point,” Otillio said. “The RDS management capabilities on top of Amazon Aurora will allow us to focus our resources and energy on what matters most — building great applications and delighting our customers.”

Customers can sign up for a preview at the AWS site.

Security, Governance and Compliance

Amazon is also using the re:Invent conference to promote three new services that aim to make it easier for enterprises to maintain security, governance, and compliance of their resources in the AWS Cloud. These services are available now from AWS.

First, AWS Key Management Service is a fully managed service that lets customers create and control the encryption keys used to encrypt their data on the AWS Cloud.

Next, AWS Config is a fully managed service that gives customers visibility into their AWS resources and associated relationships, notifies them of resource configuration changes, and lets them audit resource configuration history.

Finally, the AWS Service Catalog lets enterprise administrators choose which AWS resources their employees can deploy, in which configurations, and who has access to each of these options, then makes them discoverable to employees through a personalized portal.

“More enterprises are moving data to the cloud and they expect the same degree of security as if data were on premises,” said Ojas Rege, Vice President of Strategy at MobileIron, an enterprise data integration software maker. “AWS Key Management Service provides protection for and management of encryption keys which allows [AWS] to develop a cloud services architecture that assures corporate data remains safeguarded as securely as in an on-premises, TPM-protected environment.”

An Automated Deployment System

Last but not least are AWS’s new Application Lifecycle Management services: CodeDeploy, CodeCommit and CodePipeline. CodeDeploy is a fully managed, high-scale deployment service that lets developers automate the process of deploying and updating applications on Amazon EC2. That release is available now from AWS.
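CodeDeploy drives each deployment from an AppSpec file (appspec.yml) bundled with the application revision. As a rough illustration of the automation described above, a minimal AppSpec for an EC2 deployment might look like the sketch below; the file paths and script names are hypothetical:

```yaml
version: 0.0
os: linux
files:
  - source: /app                      # files from the revision bundle...
    destination: /var/www/myapp       # ...copied to this path on each instance
hooks:
  AfterInstall:
    - location: scripts/configure.sh  # hypothetical setup script
      timeout: 120
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 60
```

CodeDeploy runs the hook scripts on each instance during the deployment lifecycle, which is what makes it straightforward to reuse existing setup scripts.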

In early 2015, AWS will introduce AWS CodePipeline, an automated continuous delivery service to aid smooth deployments, and AWS CodeCommit, a fully managed source control service in the cloud. With these services, AWS said, developers don’t have to worry about hosting or maintaining their own source code infrastructure. They can model and automate their software release and deployment processes on the AWS Cloud.

Jamie Begin, CEO of RightBrain Networks, a company that specializes in designing and supporting scalable AWS-powered applications, explained how RightBrain designs and manages SaaS applications running on AWS infrastructure with complex multi-tier architectures and critical uptime requirements. Begin said that in the past, creating and managing the custom solutions to deploy each of these applications took RightBrain engineers’ focus away from working with customers to build great apps.

Now, Begin said, AWS CodeDeploy gives the team an automated deployment system that can be used to roll out software updates across all their applications. “It was easy to reuse our existing setup scripts with AWS CodeDeploy, and the console gave us a central dashboard to track deployments and spot any issues.”

As a result, Begin said, the team now spends less time managing deployments and more time working with customers to solve complex architectural problems for their specific needs.


The digital revolution: benefits of cloud computing

The cloud has created a paradigm shift that’s every bit as important as the industrial revolution for businesses and consumers, says Gary Turner, managing director, Xero UK. For firms that haven’t already made the most of it, the opportunity to re-imagine services and create innovative new ways to add real value for clients is beckoning.

The year 2014 has heralded a real turning point in the maturity of digital. Almost three billion people – 40% of the world’s population – are using the Internet according to the latest figures from the United Nations. Facebook has 1.35 billion active monthly users, which is incredible when you consider the site has only been around for ten years.

Where can digital take you?

With the web becoming a way of life around the world, commercially the cloud as a mechanism is no longer the primary focus. Rather, what’s important is the benefit that digital can bring. In this respect, it’s not about the engine, it’s about all the new places that engine can take you.

The emergence of cloud-hosted digital innovation is a shift that’s every bit as important as the industrial revolution two centuries ago – it’s honestly that mould-breaking. History shows that the invention of steam power radically rewrote the entire fabric of production, economics and employment. In the longer term, it even ended up rewriting the fabric of Western society itself.

With the introduction of the steam locomotive, people were no longer held back by the restrictions and limited scope of horse-drawn transport. Towns and cities that were once distant and out of reach were now accessible to all. That meant new ways to trade, transport goods and a geographical and social mobility that had been unheard of until that moment in history.

So how does this link to the modern day – and what are the implications of cloud society-wide?

The additional benefits of the cloud

What was important about industrialisation wasn’t that it was a steam engine rather than a horse supplying the transportation. What made a steam engine better than a horse were the extra, brand-new benefits that a steam train offered.

They ran on tracks on a growing rail infrastructure. They were quicker, more powerful and had far more capacity than a horse and carriage. And they created new opportunities that simply couldn’t have existed without the invention of steam power. This new-found power was only important because it brought these additional benefits and changed the existing status quo.

At this crucial point in the 21st century, the digital revolution is not about apps and software being hosted in the cloud. That’s the mechanism, but it’s not the revolutionary aspect. What creates this new shift in models is the additional benefit that the cloud engine offers. You can be quicker, more powerful, have more capacity and be more efficient – in exactly the same way that steam did. And, here’s the revolutionary part: cloud creates benefits and opportunities that simply weren’t possible before now. And in doing so, it changes everything.

For example, if you think you’re ‘doing cloud accounting’ by sharing some Excel spreadsheets with your accountant via Dropbox then you’re really missing the point. You’ve not completed a shift in your methodology. All you’ve done is swapped the engine that drives the process. You’ve created your own steam-powered horse carriage. And you can imagine how much use that would be.

Revolutionising how you do business

Thinking of cloud as the ability to use browser-based software, but then not changing your processes, products and service offerings is short-sighted to say the least. You recognise the role that tech has to play in supporting your customers but application of tech isn’t enough. You need to consider carefully how your business processes can change to better support your customers. And that means revisiting your entire business strategy.

What’s needed is the ability to step outside your business and think about the new opportunities that are available to you as a cloud-enabled business. For a start, you can be completely mobile and work from anywhere. That ability in itself raises many new ways of working with your customers, flexing your work/life balance and creating teams of people who aren’t tied to one specific location.

As an example, there’s nothing to stop you working with a customer who’s based in Scotland, even if you’re based in London. And the team that services that client doesn’t have to be in Scotland either. Or in London. You’ll all have access to the same data and you can all access critical software applications on a mobile device anywhere in the world. That means the barriers and restrictions of geography are completely removed. How this happens (i.e. through the cloud) is almost irrelevant. What’s important is that this way of working brings with it completely new capabilities.

Now’s the time to innovate

The new capabilities you have – plus the benefits they can bring for your clients – are limited only by your strategic imagination, your ability to innovate and your capability to fund new growth and resources.

So, let’s start really being innovative and make more of the vast potential the cloud offers up. Let’s throw away the old rule book and start writing the next chapter in the digital revolution. And let’s see what brand new ideas we can come up with that really revolutionise the services on offer to clients. The world is your oyster.

You can get more information on the benefits of cloud accounting with the Xero Small Business Guides.

Content on this page is paid for and produced to a brief agreed with Xero, sponsor of the business essentials hub.


Cloud Computing Adoption Continues Accelerating In The Enterprise

A recent study by IDG found that 69% of enterprises have either applications or infrastructure running in the cloud today, up 12 percentage points from 2012.  The IDG Enterprise Cloud Computing Study 2014 found that cloud investments have increased by 19% in large-scale enterprises (1,000+ employees), which spend on average $3.3M a year.  In 2015, 24% of IT budgets will be allocated to cloud solutions, with the highest percentage going to SaaS models.

These and other findings are from the IDG Enterprise Cloud Computing Study 2014 published earlier this month. You can download the study and methodology here (PDF, no opt in).

Additional key take-aways from the study include the following:

  • 69% of enterprises have at least one application or a portion of their computing infrastructure in the cloud, up from 57% in 2012. A further 18% plan to adopt cloud-based applications and/or computing infrastructure in the next 12 months, and 13% within one to three years.  The graphic below compares three years of survey data:

[Figure: cloud adoption as a business staple]


  • Enterprise investment in cloud computing has increased 19% since 2012, with the average investment of large-scale enterprises (1,000+ employees) reaching $3.33M in 2014. Mid-size and smaller enterprises with fewer than 1,000 employees spent $400K this year on cloud solutions and technologies.  The following graphic shows the spending breakouts by company size:

[Figure: cloud spending by company size]

  • The ability to get up and running quickly with cloud-based applications (39%) and the lower cost of ownership they provide (39%) are the two most popular reasons enterprises are transitioning to the cloud today.  Replacing on-premises legacy technology (35%) is the third most common reason.  This is especially true in manufacturing industries, where legacy ERP systems that can’t scale to current and future business models are gradually being replaced.

[Figure: speed and cost of ownership as top cloud adoption drivers]


  • Business/data analytics (19%), data storage/data management (18%), and a three-way tie (16%) among collaboration/conferencing solutions, content management systems, and IT infrastructure management lead the application areas enterprises plan to migrate to the cloud in the next 12 months.  The following graphic provides an overview of application adoption in the cloud:

[Figure: distribution of applications migrating to the cloud]


  • From an IT senior management perspective, the three biggest concerns holding cloud-based infrastructure and applications back from greater adoption are security (61%), integration challenges (46%) and information governance (35%).  IT leaders perceive that line-of-business (LOB) leaders are most concerned about security (52%), difficulty measuring return on investment (ROI) and determining the accurate economic value of cloud solutions (37%), and a tie (32%) between information governance and cloud-based applications’ ability to meet enterprise and/or industry standards.  The following graphic compares IT leaders’ perceptions of cloud computing adoption challenges:

[Figure: perception differences between IT and LOB leaders]

  • 24% of enterprises’ IT budgets next year are already allocated to cloud solutions, with the largest share going to SaaS-based applications (49%), followed by IaaS (28%), PaaS (18%) and other (5%). The ability to get a new application up and running quickly, lower total cost of ownership, and the need to move off legacy applications and systems that no longer scale to current business needs are three of several factors driving this distribution of budget funds in 2015.

[Figure: 2015 cloud budget distribution]

  • Marketing (45%), Sales (43%) and Human Resources (40%) are the three most common departments funding cloud initiatives outside of IT.
  • 56% of enterprises are still identifying IT operations that are candidates for cloud hosting and 38% have identified all IT operations they want hosted in the cloud, given the current state of cloud technologies and services.
  • Security continues to dominate the concerns of both IT and line-of-business (LOB) leaders across the spectrum of enterprises surveyed. 19% of all respondents are not very/not at all confident in cloud security, with 74% somewhat confident. Despite the many advances in cloud security technologies, concerns over cloud security continue to limit broader adoption.
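The budget allocation reported above (24% of IT budgets going to cloud, split across SaaS, IaaS, PaaS and other) can be sketched as simple arithmetic. The $10M total IT budget below is a hypothetical figure for illustration, not from the study:

```python
# Sketch of the 2015 cloud budget split from the IDG study above:
# 24% of the IT budget goes to cloud; of that, SaaS 49%, IaaS 28%,
# PaaS 18%, other 5%. The total budget figure is hypothetical.
def cloud_budget_breakdown(total_it_budget, cloud_share=0.24, mix=None):
    """Return the dollars allocated to each cloud delivery model."""
    if mix is None:
        mix = {"SaaS": 0.49, "IaaS": 0.28, "PaaS": 0.18, "other": 0.05}
    cloud_budget = total_it_budget * cloud_share
    return {model: cloud_budget * share for model, share in mix.items()}

breakdown = cloud_budget_breakdown(10_000_000)
print(breakdown)
```

On a hypothetical $10M budget, that puts roughly $2.4M into cloud overall, with about $1.18M of it going to SaaS.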



NIST Releases Cloud Computing Roadmap

By Christine Kern


The National Institute of Standards and Technology (NIST) has published the final version of the US Government Cloud Computing Technology Roadmap, Volumes I and II. Reflecting the input of more than 200 comments on the initial draft, this report leverages the available strengths and resources and highlights the strategic and tactical objectives necessary to support accelerated cloud computing adoption by federal agencies, according to a press release.

“Decision makers contemplating cloud computing adoption face a number of challenges relating to policy, technology, guidance, security, and standards,” the report states. “Strategically, there is a need to augment standards and to establish additional security, interoperability, and portability standards to support the long-term advancement of the cloud computing technology and its implementation.”

“Cloud computing is still in an early deployment stage, and standards are crucial to increased adoption,” according to the report. “The urgency is driven by rapid deployment of cloud computing in response to financial incentives.”

Volume I of the report, “High Priority Requirements to Further USG Agency Cloud Computing Adoption,” outlines the purpose and scope of the roadmap, focusing on five priorities: security, interoperability, portability, performance, and accessibility. The roadmap also identifies 10 requirements seen by NIST as necessary to maintain innovative federal cloud adoption, including a need for international standards, security solutions, and identification of clear and consistent cloud services categories.

The roadmap also provides accompanying “priority action plans” for each requirement, including target completion dates.

Volume II of the report, “Useful Information for Cloud Adopters,” is designed as a technical reference work for strategic and tactical cloud computing initiatives. According to the release, it introduces a conceptual model, the NIST Cloud Computing Reference Architecture and Taxonomy, provides some sample cases, and also identifies relevant existing cloud interoperability, portability, and security standards and highlights areas of concern that need to be addressed via new standards, guidance, and technology. In addition, Volume II addresses cloud security issues.

As Business Solutions Magazine reported, NIST recently announced three public working groups to address cloud services, federated community cloud, and interoperability and portability. The working groups will bring together industry, government, and academic experts to address requirements laid out in the Cloud Computing Standards and Technology Roadmap.

“Recognizing the significance and breadth of the emerging cloud computing trend, NIST designed its program to support accelerated US government adoption, as well as leverage the strength and resources of government, industry, academia, and standards organization stakeholders to support cloud computing technology innovation,” the report explained.

Included in the 10 requirements in the report are a demand for focus on technical specifications that enable development of high-quality service level agreements (SLAs) in provisioning cloud services; a need for improved frameworks to support federal clouds; a need to improve cloud service metrics, including standardized units of measurement for cloud resources; and the role of parallel technologies including big data and cybersecurity in cloud services.

“Big Data subject matter experts commonly refer to cloud computing as being indistinguishable from Big Data,” according to the NIST document. “Just as cloud computing struggled with definition early in its adoption, and similarly was represented as an “old” or “new” capability depending on the perspective of those defining it, Big Data as a concept is the focus of definition and framing discussions.”

Cybersecurity also has a complicated interdependency with cloud, which “presents certain unique security challenges resulting from the cloud’s very high degree of outsourcing, dependence on networks, sharing (multi-tenancy) and scale,” the roadmap explains.


The Expert’s take on Cloud Computing Trend in 2015

As another year draws to a close, we are back to the time when every expert discusses the IT and business trends for the coming year, 2015. One topic that will be high on the agenda is the cloud.

As James Bourne states, “everyone who’s got an opinion will be telling the world and his dog about their predictions for cloud computing in 2015.”

In this blog, we bring you the viewpoints of David H. Deans, a leading expert and writer on cloud computing.

In his opinion, CIOs and IT managers are keen to embrace business technologies that automate tedious, routine system administration tasks so they can redirect their focus to higher-priority activities.

As a result, cloud computing offerings will gain more traction in the marketplace.

Read on for exclusive excerpts from his write-up:

International Data Corporation (IDC)

  • 2014: Public cloud computing services spending reached $56.6 billion.
  • 2018: Public cloud computing services spending will grow to more than $127 billion.
  • 2018: Public IT cloud services will account for more than half of worldwide software spending growth.
  • Public cloud computing services spending will represent a five-year compound annual growth rate (CAGR) of 22.8 percent, about six times the rate of growth of the overall global IT market.
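As a rough sanity check on these figures (a sketch, not from IDC): $56.6 billion compounding at a 22.8% CAGR for four years lands in the neighborhood of the projected $127+ billion for 2018.

```python
# Rough consistency check of the IDC projections quoted above.
def project_spend(base, cagr, years):
    """Compound `base` spending (in $billions) at `cagr` for `years` years."""
    return base * (1 + cagr) ** years

# $56.6B in 2014, growing 22.8% per year through 2018:
projected_2018 = project_spend(56.6, 0.228, 4)
print(round(projected_2018, 1))  # roughly in line with "more than $127 billion"
```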

Frank Gens, senior vice president and chief analyst at IDC, states:

“Over the next four to five years, IDC expects the community of developers to triple and to create a ten-fold increase in the number of new cloud-based solutions.”

Explaining further, IDC states:

The drive among IT leaders to implement new digital solutions is what’s fueling the growth of public cloud services. Cloud-based software companies have entered a revolutionary phase, modifying their offerings to match the current and future demands of the corporate world. They are producing an explosion of new digital solutions that puts these companies at the forefront of power and growth.

For instance:

Cloud-based software companies are offering new-age enterprise applications. These applications are built on vertically focused platforms that help reshape how companies operate their increasingly essential business functions.

In stark contrast to earlier on-premises apps, this new generation of enterprise applications is built on pervasive cloud computing infrastructure. They combine mobile and cloud technologies into a strong, independent workspace that supports today’s on-the-go business professionals.

CIO Research Center

Businesses across the globe are spending on New-Age Cloud Applications for Business Management.

According to a survey of 150 business professionals, these are the reasons they cited for doing so:

  • Operational Efficiency
  • Employee Productivity
  • Customer Relation
  • Sales
  • Decision Making
  • Employee Satisfaction
  • Competitive Advantage
  • Costs of Doing Business

David H. Deans expresses his viewpoint:

“The future outlook for many companies is likely to include embracing the Mobile Cloud scenario, where the two most apparent enterprise technology trends morph together into a cohesive whole. The combination of capable mobile devices and cloud computing services will provide an adaptive and flexible business technology foundation.”

Paraphrasing his comments further:

Already, mobile cloud applications have radically transformed how information is accessed, used and shared in the enterprise. Their anytime-anywhere accessibility has empowered savvy line-of-business leaders at progressive companies, enabling employees to sync files and handle other tasks on the move in ways that were not possible before.

It’s interesting that while mobile cloud applications are in the limelight now, the concept was forecast earlier by some perceptive corporate IT leaders. Sensing the coming traction of mobile enterprise strategy, they immediately got involved with proactive plans to build and support corporate mobile cloud apps.

Considering the many benefits that mobile cloud applications have brought, and will bring, to the business realm, organizations would be well advised to implement a mobile cloud strategy to remain competitive in this era.


Take the private cloud out of hybrid cloud computing

Private cloud hesitation often hampers hybrid cloud adoption. A DevOps mindset lets enterprises go hybrid without the private cloud commitment.

Nearly every company can find benefits in public cloud services, but most cannot justify a private cloud. Many enterprises have virtualized data centers, but see no need to go further. These enterprises run mission-critical applications on dedicated servers, where they’re likely to stay. Hybrid cloud computing seems like a possibility, but the private cloud portion remains an obstacle. So, what if companies could create a hybrid cloud without that private cloud commitment? It’s possible — and easier than you’d think.

One barrier to hybrid cloud adoption has been private cloud use. A traditional hybrid cloud is a combination of public cloud services and a private cloud with orchestration and automation between the two. However, some companies are using cloud-friendly integration to combine public cloud application components with components that remain in the data center. And DevOps has made this possible.

Installing any application, particularly hybrid cloud apps, traditionally breaks into two phases: deployment and integration. In the deployment phase, copies of the application’s components are installed where they will run. The integration phase connects components to one another and to application users. And while the deployment phase changes when you adopt the cloud — public or private — integration remains the same. You can reduce the turmoil of hybrid cloud adoption by leaving your on-premises data center resources in house.

DevOps saves the day

IT departments typically turn to flexible DevOps tools for app deployment — in and out of the cloud. Tools like Chef and Puppet accommodate a variety of cloud management systems, which allows administrators to select the best public cloud services. But it’s often overlooked that these tools both deploy and integrate application components; deployment doesn’t have to be in the cloud. With DevOps, it’s less important for the private cloud portion of a hybrid cloud application to actually reside in the private cloud.

DevOps enables businesses to take advantage of all the public cloud’s benefits without changing internal IT practices to adopt a private cloud. There are three benefits to doing so:

  1. Many mission-critical applications are difficult to migrate to the cloud (even private cloud) for two reasons. First, the software was designed for dedicated server use. Second, the application’s performance and stability requirements are best met with dedicated server and storage resources. Hybridizing these applications without moving core components to a private cloud offers cloud benefits to a whole new set of apps.
  2. Applications that run on older software platforms may be difficult to migrate to a private cloud. The business case for hybrid cloud is improved when these apps can simply run as they are.
  3. To run a private cloud, an organization needs tools and skills that most don’t possess. Additionally, your company may not have enough IT staff or skills to create an efficient private cloud resource pool. Hybridizing public cloud services with internal non-cloud components can eliminate this.

Hybrid cloud computing minus private: How-to

To take advantage of a hybrid cloud without a private cloud commitment, look at the application workflow to determine which of its components can move to the public cloud. Generally, those components will be the front-end applications. Therefore, you’ll need to develop cloud deployment practices for these apps.

The second step is to separate the application component deployment process from the component and user integration processes using workflows. The goal of deployment, from a hybridization perspective, is to create a set of directory entries that allow components to be located. In the same terms, the goal of integration is to thread connections through these directory entries to move information. It’s crucial not to mix the two or you’ll have to redo all your application lifecycle management (ALM) practices and tool decisions each time you make a change with your cloud provider or internal IT platform.
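A minimal sketch of this separation, with entirely hypothetical component names and hosts: deployment records a directory entry for each placed component, and integration threads connections through those entries only, so the same integration logic works for cloud and on-premises placements alike.

```python
# Illustrative sketch of separating deployment from integration, as described
# above. Component names, hosts and the registry structure are hypothetical.

directory = {}  # directory entries created by the deployment phase

def deploy(component, host):
    """Deployment phase: place a component and record where it can be located."""
    directory[component] = host

def integrate(connections):
    """Integration phase: thread connections through directory entries only.
    It never cares whether a component landed in the cloud or the data center."""
    return [(directory[a], directory[b]) for a, b in connections]

deploy("web-frontend", "public-cloud:us-east-1")  # public cloud component
deploy("order-db", "datacenter:rack-07")          # stays on dedicated servers
links = integrate([("web-frontend", "order-db")])

# Cloudbursting the front end only changes its directory entry; the same
# integrate() call still produces valid links, with no ALM rework.
deploy("web-frontend", "public-cloud:eu-west-1")
links_after_burst = integrate([("web-frontend", "order-db")])
```

The point of the sketch is the invariant: as long as both deployment paths leave the same kind of directory entry behind, integration tooling never has to change when a component moves.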

The biggest problem for hybridizing internal non-cloud components with public cloud services is elastically moving components across the cloud boundary on-demand or in failover scenarios. Don’t plan to cloud-host something that you’re not allowed to run in the public cloud for internal security or compliance reasons. Deal with the reliability, availability and scalability of the app components that need to run internally in other ways.

When you can cloudburst an on-premises application component, remember the principle of separating integration from deployment in DevOps and ALM practices. Component deployment differs depending on where it’s hosted, but integration is the same if you’ve ensured both cloud and non-cloud deployments leave a trail of directory entries that represent each deployed component. Integration only works on these; as long as both cloud and non-cloud deployments result in the same integration directory entries, you can use the same practices and tools to perform the integration.

About the author:
Tom Nolle is president of CIMI Corp., a strategic consulting firm specializing in telecommunications and data communications since 1982.

The top cloud computing threats and vulnerabilities in an enterprise environment


Analysis: I’ve seen companies with operational models 90% based on cloud services, with the remaining 10% made up of in-house servers. When asked about security issues related to cloud services, the typical response was that the cloud service provider would take care of them, so there was nothing to worry about.

This isn’t necessarily the case with every cloud service provider: some CSPs have a good security model in place, while others clearly do not. There are many advantages to cloud services, which is why the cloud service model is used so extensively, but those are beyond the scope of this article.

Before continuing, let’s quickly define the terms threat and vulnerability as we’ll use them throughout this article:

Vulnerability: a weakness that an attacker can exploit for personal gain. A weakness can be present in software, environments, systems, networks, and so on.

Threat: an actor who wants to attack assets in the cloud at a particular time with a particular goal in mind, usually the attacker’s own financial gain and, consequently, a financial loss for the customer.

Cloud computing vulnerabilities

When deciding to migrate to the cloud, we have to consider the following cloud vulnerabilities:

Session Riding: Session riding happens when an attacker steals a user’s cookie and uses the application in the user’s name. An attacker might also use cross-site request forgery (CSRF) attacks to trick the user into sending authenticated requests to arbitrary web sites, performing actions on the user’s behalf.
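The standard defense against the CSRF attacks mentioned above is a per-session token that a forged cross-site request cannot read. A minimal sketch using only the Python standard library follows; real frameworks (Django, Flask-WTF) provide this out of the box, and the function names and session IDs here are illustrative assumptions.

```python
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)  # per-application secret, kept server-side


def csrf_token(session_id: str) -> str:
    # Bind the token to the user's session. A forged request from another
    # site cannot read this value, so it cannot supply a valid token.
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()


def verify_csrf(session_id: str, token: str) -> bool:
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(csrf_token(session_id), token)


token = csrf_token("session-abc123")
assert verify_csrf("session-abc123", token)       # legitimate request
assert not verify_csrf("session-other", token)    # token bound to wrong session
```

The server embeds the token in each form it serves and rejects state-changing requests that arrive without a token matching the requester’s session.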

Virtual Machine Escape: In virtualized environments, physical servers run multiple virtual machines on top of hypervisors. An attacker can exploit a hypervisor remotely by using a vulnerability in the hypervisor itself; such vulnerabilities are rare, but they do exist. A virtual machine can also escape its virtualized sandbox and gain access to the hypervisor and, consequently, to all the virtual machines running on it.

Reliability and Availability of Service: We expect cloud services and applications to be available whenever we need them, which is one of the reasons for moving to the cloud in the first place. But this isn’t always the case: power outages happen, for example during severe storms. CSPs run uninterruptible power supplies, but even those can fail, so we can’t rely on cloud services being up 100% of the time. We have to factor in some downtime, though the same is true when running our own private cloud.

Insecure Cryptography: Cryptographic algorithms usually require random number generators that draw on unpredictable sources of information to build up a large entropy pool. If a random number generator provides only a small entropy pool, its output can be brute forced. On client computers, the primary sources of randomness are mouse movements and key presses, but servers mostly run without user interaction, which means fewer sources of randomness. Virtual machines must therefore rely on whatever sources they have available, which can result in easily guessable numbers that provide little entropy to cryptographic algorithms.
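The entropy problem above is why cryptographic code should use an OS-backed CSPRNG rather than a seedable pseudo-random generator. In Python, for instance, `secrets` draws from the operating system’s entropy pool, while `random` is a Mersenne Twister whose entire output stream is reproducible once its seed or internal state is known:

```python
import random
import secrets

# Predictable: anyone who knows (or brute-forces) the seed can reproduce
# every "random" value, e.g. a session token generated this way.
random.seed(42)
weak_token = random.getrandbits(128)

random.seed(42)
assert random.getrandbits(128) == weak_token  # fully reproducible

# Unpredictable: suitable for keys, tokens, and nonces.
strong_token = secrets.token_hex(16)  # 128 bits from the OS CSPRNG
assert len(strong_token) == 32
```

The same distinction exists in most languages (`/dev/urandom`-backed APIs versus general-purpose PRNGs); a VM short on entropy sources makes the weak choice even more dangerous.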

Data Protection and Portability: When switching to a cheaper cloud service provider, we have to address the problem of data movement and deletion. The old CSP has to delete all the data we stored in its data center so that no copies are left lying around.

Alternatively, a CSP that goes out of business needs to hand the data back to its customers so they can move to another CSP, after which the data must be deleted. But what if the CSP goes out of business without providing the data? In such cases, it’s safer to use a widely adopted CSP that has been around for a while, and in any case to keep our own data backups.

CSP Lock-in: We have to choose a cloud provider that will let us move to another provider when needed. We don’t want a CSP that forces us into its own services, because sometimes we’d like to use one CSP for one workload and a different CSP for another.

Internet Dependency: By using the cloud services, we’re dependent upon the Internet connection, so if the Internet temporarily fails due to a lightning strike or ISP maintenance, the clients won’t be able to connect to the cloud services. Therefore, the business will slowly lose money, because the users won’t be able to use the service that’s required for the business operation. Not to mention the services that need to be available 24/7, like applications in a hospital, where human lives are at stake.

Cloud computing threats

Before deciding to migrate to the cloud, we have to weigh the cloud security vulnerabilities and threats to determine whether the many advantages the cloud provides are worth the risk. The following are the top security threats in a cloud environment:

Ease of Use: Cloud services can easily be abused by malicious attackers, since registration is simple: all that’s needed is a valid credit card. In some cases we can even pay for the cloud service with PayPal, Western Union, Payza, Bitcoin, or Litecoin and stay completely anonymous. The cloud can be used maliciously for purposes like spamming, malware distribution, botnet C&C servers, DDoS, and password and hash cracking.

Secure Data Transmission: When transferring data from clients to the cloud, the data needs to travel over an encrypted, secure communication channel such as SSL/TLS. This prevents man-in-the-middle (MITM) attacks, where an attacker intercepting our communication could steal the data.
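On the client side, the TLS point above mostly comes down to never disabling certificate validation, since an unverified connection is exactly what a MITM attacker needs. A minimal standard-library sketch; the URL is a placeholder, not a real endpoint:

```python
import ssl
import urllib.request

# create_default_context() enables certificate validation and hostname
# checking by default; never turn these off in production, or a MITM
# can impersonate the cloud endpoint with any self-signed certificate.
context = ssl.create_default_context()
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True


def fetch(url: str) -> bytes:
    # The handshake fails (raising ssl.SSLError) if the server's
    # certificate chain or hostname doesn't check out.
    with urllib.request.urlopen(url, context=context) as resp:
        return resp.read()


# fetch("https://cloud.example.com/api/data")  # placeholder endpoint
```

The common anti-pattern is passing an unverified context (or `verify=False` in higher-level libraries) to silence certificate errors, which trades away exactly the protection this section describes.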

Insecure APIs: Many cloud services on the Internet are exposed through application programming interfaces. Since these APIs are accessible from anywhere on the Internet, malicious attackers can use them to compromise the confidentiality and integrity of enterprise customers’ data. An attacker who obtains the token a customer uses to access the service through its API can use that same token to manipulate the customer’s data. It’s therefore imperative that cloud services provide secure APIs, rendering such attacks ineffective.

Malicious Insiders: Employees working at a cloud service provider could have complete access to company resources, so cloud service providers must have proper security measures in place to track employee actions such as viewing a customer’s data. Since cloud service providers often don’t follow best security practices or implement a security policy, employees can gather confidential information from arbitrary customers without being detected.

Shared Technology Issues: SaaS/PaaS/IaaS providers use scalable infrastructure to support multiple tenants that share the underlying infrastructure. Directly on the hardware layer, hypervisors run multiple virtual machines, which themselves run multiple applications.

At the highest layer, there are attacks on SaaS where an attacker gains access to the data of another application running in the same virtual machine. The same is true at the lowest layers, where hypervisors can be exploited from within virtual machines to gain access to all VMs on the same server (Red/Blue Pill is one example of such an attack). Every layer of shared technology, including the CPU, RAM, hypervisors, and applications, can be attacked to gain unauthorized access to data.

Data Loss: Data stored in the cloud can be lost: a hard drive can fail, a CSP can accidentally delete the data, an attacker can modify it, and so on. Data loss can have catastrophic consequences for a business, up to and including bankruptcy, which is why keeping proper, regular backups is always the best protection.

Data Breach: A data breach occurs when a virtual machine is able to access data from another virtual machine on the same physical host; the problem is far more serious when the two virtual machines belong to different customers. Side-channel attacks are valid attack vectors that need to be addressed: a side-channel attack occurs when one virtual machine uses a shared component, such as the processor’s cache, to access the data of another virtual machine running on the same physical host.

Account/Service Hijacking: Often only a password is required to access our account in the cloud and manipulate the data, which is why two-factor authentication is preferred. An attacker who gains access to our account can manipulate and change the data, making it untrustworthy. An attacker with access to the cloud virtual machine hosting our business website can inject malicious code into the web page to attack its visitors, an approach known as a watering hole attack. An attacker can also disrupt the service by shutting down the web server, rendering our website inaccessible.

Unknown Risk Profile: We have to take all security implications into account when moving to the cloud: constant software security updates, monitoring networks with IDS/IPS systems, log monitoring, integrating SIEM into the network, and so on. There may also be attacks that haven’t even been discovered yet but could prove highly threatening in the years to come.

Denial of Service: An attacker can launch a denial-of-service attack against the cloud service to render it inaccessible, disrupting the business. In a virtualized cloud environment, an attacker can disrupt a service in a number of ways: by exhausting its CPU, RAM, disk space, or network bandwidth.

Lack of Understanding: Enterprises are adopting cloud services in everyday operations, but they often don’t really understand what they’re getting into. Moving to the cloud raises questions that need answers: how the CSP operates, how the application works, how to debug the application when something goes wrong, whether data backups are in place in case a hard drive dies, and so on. If the CSP doesn’t provide backups but the customer expects them, who is responsible when a hard drive fails? The customer will blame the CSP, but in reality it’s the customer’s fault for not familiarizing themselves with how the cloud service operates, and the result will be lost data.

User Awareness: Users of cloud services should be educated about different attacks, because the weakest link is often the user. There are many social engineering vectors an attacker might use to lure a victim to a malicious web site and gain access to the user’s computer. From there, the attacker can observe the user’s actions and view the same data the user is viewing, not to mention steal the credentials used to authenticate to the cloud service itself. Security awareness is an often overlooked concern.


When an enterprise wants to move its operations to the cloud, it should be aware of these cloud threats for the move to be successful. We shouldn’t rely on the cloud service provider to take care of security for us; instead, we should understand the security threats ourselves, ask our CSP how it addresses them, and proceed from there.

We should also create remote backups of our data regardless of whether the CSP already provides a backup service; it’s better to have multiple backups than to discover the data was never backed up at all when the need for restoration arises.
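A backup is only useful if a restore can be trusted, so it pays to record a checksum alongside each backup object and verify it after any restore. A minimal standard-library sketch; the object name, payload, and the implied upload/download steps are placeholders for illustration:

```python
import hashlib


def sha256_of(data: bytes) -> str:
    # Content fingerprint recorded at backup time and re-checked at restore.
    return hashlib.sha256(data).hexdigest()


# Before upload, record a checksum alongside each backup object...
payload = b"customer-records-2014-11"
manifest = {"backups/records.tar.gz": sha256_of(payload)}

# ...and after a restore, confirm the bytes are intact before trusting them.
restored = payload  # stand-in for downloading from the backup provider
assert sha256_of(restored) == manifest["backups/records.tar.gz"]
```

Keeping the manifest with a second provider (or offline) means a corrupted or tampered backup is detected before it silently replaces good data.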


Gartner’s cloud showdown: Amazon Web Services vs. Microsoft Azure

Gartner IaaS Research Director Kyle Hilgendorf says one of the most common questions he gets from enterprise customers looking to go to the cloud is: AWS or Azure?

Amazon Web Services has been anointed the public IaaS cloud leader by Gartner and many others, but over the past year or so Satya Nadella’s Microsoft has made significant advancements to its public cloud platform. AWS now has competition.

AWS clearly has a lead, and a pretty sizeable one, Hilgendorf says. But it’s a marathon, not a sprint: “The race has just begun, and it’s a very long race,” Hilgendorf said during a presentation at AWS re:Invent comparing the two providers.


Hilgendorf conducted in-depth research on both IaaS public clouds, evaluating each against a 205-point criteria assessment across eight categories: compute, storage, networking, security/access, service offerings, support levels, management, and price/billing. Gartner organizes the criteria into groups of required, preferred, and optional features for enterprise customers.

Azure vs. AWS

Of the features that Gartner believes are required for enterprise use cases, AWS has 92% of them covered; Azure has 75%. AWS has more features than Microsoft in seven of the eight categories. When Hilgendorf did the assessment this summer, AWS had 18 “required” features that Azure did not. It’s a big reason why AWS has remained the clear leader in Gartner’s IaaS Magic Quadrant report.

AWS just has a better cloud in a number of ways. The company not only has a massive public cloud (Gartner estimates AWS has five times the capacity of its next 14 cloud competitors combined), but Hilgendorf says AWS makes it easy to scale those on-demand resources up and down through a management dashboard or APIs.

AWS has a strong encryption platform, it integrates with many third-party network vendors to provide direct links into its cloud, it offers best-in-market high availability offerings through its Region and Availability Zone architecture, and it has competitive offerings in the database, analytics and data warehousing markets, all hosted in its cloud. AWS excels at giving customers building blocks to make a wide variety of service offerings atop its cloud, Hilgendorf says.

Even though AWS has a lead, Hilgendorf says there are reasons users may choose Azure over AWS. The biggest one: Microsoft’s relationships with its customers. Microsoft has been selling into the enterprise market for decades, so it has great rapport with many large customers. Its cloud sales teams leverage that advantage by offering sometimes double-digit discounts in long-term Enterprise Agreements. In a recent survey by Gartner, 64% of users said their biggest reason for using Azure was their relationship with Microsoft.

Many of Azure’s other areas of strength stem from the company’s experience in offering enterprise-grade services and integrating its platform with its existing products, which are widely used in the enterprise. An IT group that is heavily invested in Office 365 or Microsoft’s Hyper-V platform will find Azure to be a seamless extension of its existing operations, allowing customers to create a hybrid cloud. AWS relies heavily on partners and network connectivity tools to enable hybrid clouds.

There are a handful of other smaller items where Azure beats AWS. Microsoft makes its Azure disaster recovery plan available for customers to view (under the condition of a legally-binding non-disclosure agreement), for example. And Microsoft guarantees that any change to its service-level agreement (SLA) will be announced 90 days in advance. Its service health dashboard has a 60-day review; AWS’s is about half that.

These may seem like minor details, but for enterprise customers Hilgendorf says they can be important. If the SLA on a critical service changes, a business wants to know about that as soon as possible to mitigate for it. Hilgendorf has found a number of customers using Azure as a backup to AWS too. Some users may not want to go all-in with a single provider, and Azure is a “good-enough” offering in the market to start using.

For rapid self-service provisioning and the ability to scale to massive levels, AWS has Azure beat. AWS has an innovative set of tools, such as its DynamoDB NoSQL database and its newly announced Lambda event-driven computing platform, and it continues to lead the market with innovative new offerings, Hilgendorf says. Azure has been playing catch-up to reach feature parity with AWS and has now started to attract enterprise attention to its cloud platform.

The dynamics between these two heavyweights of the IaaS market are constantly changing, though. A few weeks ago one could have said AWS had a market-leading network of third-party apps in its Marketplace, but Azure recently announced a renewed effort to beef up its own marketplace. This could be an area where Microsoft exploits its broad partner network to take on AWS. With Satya Nadella, who formerly led Microsoft’s cloud computing division, now heading the entire company, Hilgendorf says Microsoft is in this for the long haul.

AWS may have more to worry about than just Azure: Google Cloud Platform is turning into an enticing offering for cloud users as well. Hilgendorf plans to put GCP through the same rigorous assessment he used for AWS and Azure to see where it stacks up. Customers can also consider other providers, such as VMware, HP, Verizon, CenturyLink, and Rackspace.

Hilgendorf says each customer should evaluate these cloud providers based on the criteria that are most important for their use case.

5 Cloud Computing Funding Stories You Might Have Missed, Nov. 21

Each week Talkin’ Cloud compiles a list of cloud computing financing stories for readers who might have missed the news earlier in the week. This week’s column features funding news from SysCloud, CipherCloud, Prezi, Mainframe2 and Bigcommerce.

These stories have been gathered from Talkin’ Cloud’s article database and other media sources. If we missed something, feel free to leave a comment below. We might just add it into the mix.

Here’s this week’s list of 5 Cloud Computing Funding Stories You Might Have Missed, November 21.

SysCloud Secures $2.5M in Funding to Accelerate Growth of Data Protection Suite for Google Apps. SysCloud has raised $2.5 million in funding for its data backup and security suite. The company expects to leverage financing to expand sales and marketing efforts, scale current operations and accelerate development for continued product innovation. Inventus Capital Partners led the Series A financing round, with participation from existing investor, KAE Capital. The additional funds bring the company’s total funding to date to $3.5 million.

CipherCloud Raises $50M in Funding for Cloud-Based Security Platform. The San Jose, California-based cloud security solutions company has raised $50 million in funding to assist with accelerating go-to-market activities, supporting international growth, driving the adoption of its cloud-based security platform in the enterprise, and advancing product innovation efforts. CipherCloud’s product portfolio aims to protect popular cloud applications such as Salesforce, Box, Microsoft Office 365, and ServiceNow by focusing its efforts on cloud application discovery and risk assessment, data protection, and user activity.

Cloud presentation startup Prezi raises $57M, hits 50M users. The San Francisco-based presentation startup has raised $57 million in new financing. Prezi, which now has 50 million users, said it plans to use the additional funds to accelerate its growth globally. The company competes with Microsoft (MSFT)’s PowerPoint.

Mainframe2 Raises $2.2 Million to Deliver Any Windows Software From the Cloud. The Menlo Park, California-based company, which provides a platform that enables developers and businesses to move Windows applications to the cloud and deliver them to any device, this week announced it has closed $2.2 million in seed funding. The company said it plans to use the financing to accelerate development and expand its customer base.

Bigcommerce Raises $50M Series D From SoftBank, Telstra, And American Express. The e-commerce platform has raised $50 million in financing. This brings the company’s total funding to date to $125 million. SoftBank Capital led the Series D financing, with participation from Telstra Ventures, American Express, and returning investors General Catalyst and Revolution Growth.

Follow CJ Arlotta on Twitter @cjarlotta and Google+ for further updates on the story above — or if you just want to say hello.
