Enrique Castro-Leon

Biography

Enrique Castro-Leon is an enterprise architect and technology strategist with Intel Corporation, working on technology integration ranging from highly efficient virtualized cloud data centers to emerging usage models for cloud computing.

He is the lead author of two books, The Business Value of Virtual Service Grids: Strategic Insights for Enterprise Decision Makers and Creating the Infrastructure for Cloud Computing: An Essential Handbook for IT Professionals.

He holds a BSEE degree from the University of Costa Rica, and M.S. degrees in Electrical Engineering and Computer Science, and a Ph.D. in Electrical Engineering from Purdue University.

Raghu Yeluri

Biography

Raghu Yeluri is a Principal Engineer in the Intel Architecture Group at Intel, with a focus on virtualization, security, and cloud architectures. He is responsible for understanding enterprise and data center needs and for developing reference architectures and implementations aligned with Intel virtualization, security, and cloud-related platforms and technologies. Prior to this role, he worked in various architecture and engineering management positions in systems development, focusing on service-oriented architectures in engineering analytics and information technology. He has multiple patents and publications, and has co-authored an Intel Press book on cloud computing, Building the Infrastructure for Cloud Computing: An Essential Handbook for IT Professionals.

The Trusted Cloud: Addressing Security and Compliance

Published: February 27, 2014 • Service Technology Magazine, Issue LXXXI

Abstract: This article addresses one of the biggest barriers impeding broader adoption of cloud computing: security—the real and perceived risks of provisioning, accessing, and controlling services in multitenant cloud environments. IT managers would like to see higher levels of assurance before they can declare that their cloud-based services and data are adequately protected. Organizations require compute platforms to be secure and compliant with relevant rules, regulations, and laws. These requirements must be met regardless of whether a deployment uses a dedicated service available via a private cloud or a service shared with other subscribers via a public cloud. There is no margin for error when it comes to security breaches.

According to a research study conducted by the Ponemon Institute and Symantec, the average organizational cost of a data breach in 2010 increased to $7.2 million, and the cost of lost business was about $4.5 million. It is the high cost of breaches and the inadequate security monitoring capabilities offered as part of cloud services that pose a barrier to the wider adoption of cloud computing and create resistance within organizations to public cloud services. From an IT manager's perspective, cloud computing architectures bypass or work against traditional security tools and frameworks. The ease with which services are migrated and deployed in a cloud environment brings significant benefits, but it is a bane from a compliance and security perspective.

Security Considerations for Cloud

Cloud computing relies on the pooling of an on-demand, self-managed virtual infrastructure, consumed as a service. This approach abstracts applications from the complexity of the underlying infrastructure, allowing IT to focus on enabling business value and innovation instead of getting bogged down in technology deployment details. Organizations welcome the presumed cost savings and business flexibility associated with cloud deployments. However, IT practitioners unanimously cite security, control, and IT compliance as primary issues that slow the adoption of cloud computing. These considerations often denote general concerns about privacy, trust, change management, configuration management, access controls, auditing, and logging. Many customers also have specific security requirements mandating control over data location, isolation, and integrity. These requirements have traditionally been met through fixed hardware infrastructure.

Under the current state of cloud computing, the means to verify a service's compliance are labor-intensive, inconsistent, non-scalable, or simply impractical to implement. The necessary data, APIs, and tools are not available from the provider. Process mismatches occur when service providers and consumers work under different operating models. For these reasons, many corporations deploy only less critical applications in the public cloud and restrict sensitive applications to dedicated hardware and traditional IT architecture running in corporate-owned vertical infrastructure. For business-critical applications, processes, and sensitive data, third-party attestations of security controls usually aren't enough. In such cases, it is absolutely critical for organizations to be able to ascertain that the underlying cloud infrastructure is secure enough for the intended use. This requirement drives the next frontier of cloud security and compliance: implementing a level of transparency at the bottom-most layers of the cloud through the development of standards, instrumentation, tools, and linkages to monitor and prove that the Infrastructure as a Service (IaaS) cloud's physical and virtual servers are actually performing as they should and meet defined security criteria.

The expectation is that the security of a cloud service should match or exceed equivalent in-house capabilities before it can be considered an appropriate replacement. Today, security mechanisms in the lower stack layers (for example, hardware, firmware, and hypervisors) are almost absent. The bar is higher for externally sourced services. In particular, the requirements for transparency are higher: while certain monitoring and logging capabilities might not have been deemed necessary for an in-house component, they become absolute necessities when sourced from third parties, in order to support operations, verify SLA compliance, and provide audit trails should litigation or forensics become necessary. On the positive side, the use of cloud services will likely drive the re-architecting of aging applications with much higher levels of transparency and scalability, with hopefully moderate cost impact due to the efficiency the cloud brings.

Cloud providers and the IT community are working earnestly to address these requirements, allowing cloud services to be deployed and managed with predictable outcomes, and with controls and policies in place to monitor the trust and compliance of these services in cloud infrastructures. Specifically, Intel Corporation and other technology companies have come together to enable a highly secure cloud infrastructure based on a hardware root of trust, providing tamper-proof measurements of key physical and virtual components in the computing stack, including hypervisors. These organizations collaborate to develop a framework that integrates the secure hardware measurements provided by the hardware root of trust into adjoining virtualization and cloud management software. The intent is to improve visibility, control, and compliance for cloud services. For example, having visibility into the trust and integrity of cloud servers allows cloud orchestrators to place more sensitive workloads onto more secure hardware, with better control over the migration of workloads and a greater ability to deliver on security policies.

We will introduce the concept of trusted clouds and present a set of usage models to achieve the vision of a trusted cloud infrastructure, one of the foundational pillars for trusted clouds.

Cloud Security, Trust, and Assurance

There is a significant amount of focus and activity across various standards organizations and forums to define the challenges, the issues, and a solution framework to address cloud security. The Cloud Security Alliance, NIST, and the Open Cloud Computing Interface (OCCI) are examples of organizations promoting cloud security standards. The Open Data Center Alliance (ODCA), an alliance of customers, recognizes that security is the biggest challenge organizations face as they plan for migration to cloud services. The ODCA is developing usage models that provide standardized definitions for security in cloud services and detailed procedures for service providers to demonstrate compliance, and it seeks to give organizations an ability to validate adherence to security standards within cloud services. Here are some important considerations behind the current work on cloud security:

  • Visibility, compliance, and monitoring. Providing seamless access to security controls, conditions, and operating states within a cloud's virtualization and hardware layers, so that the bottom-most infrastructure layers of the cloud become auditable. The measured evidence enables organizations to comply with security policies and with regulated data standards and controls such as FISMA and DPA (NIST 2005).
  • Data discovery and protection. Cloud computing places data in new and different places: not just user data, but also application and VM data, including source code. Key issues include data location and segregation, data footprints, backup, and recovery.
  • Architecture. Standardized infrastructure and applications provide opportunities to exploit a single vulnerability many times over. This is the BORE (Break Once, Run Everywhere) principle at work. Considerations for the architecture include:
    • Protection. Protecting against attacks on standardized infrastructure, where the same vulnerability can exist in many places due to the standardization.
    • Support for multitenant environments, ensuring that systems and applications from different tenants are appropriately isolated from each other.
    • Security policies, making sure that security policies are accurately and fully implemented across cloud architectures.
  • Identity management. Identity management (IdM) is described as "the management of individual identities, their authentication, authorization, roles, and privileges/permissions within or across system and enterprise boundaries with the goal of increasing security and productivity while decreasing cost, downtime, and repetitive tasks." From a cloud security perspective, questions such as "how do you control passwords and access tokens in the cloud?" and "how do you federate identity in the cloud?" are very real and thorny ones for cloud providers and subscribers to address.
  • Automation and policy orchestration. The efficiency, scale, flexibility, and cost-effectiveness that cloud computing brings are due to automation: the ability to rapidly deploy resources and to scale up and down, with processes, applications, and services provisioned securely on demand. A high degree of automation, policy evaluation, and orchestration is required so that security controls and protections are applied correctly, with minimal scope for error and minimal manual intervention; a minimal sketch of such policy evaluation follows this list.
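
To make the role of automated policy evaluation concrete, the sketch below shows how an orchestrator could check each candidate host's reported security attributes against a tenant policy before placing a workload. The attribute and policy names are hypothetical and not drawn from any particular product.

# A minimal sketch (hypothetical names) of automated policy evaluation before
# a workload is provisioned: the orchestrator checks each candidate host's
# reported security attributes against the tenant's policy and places the
# workload only on hosts that satisfy every required control.

from dataclasses import dataclass

@dataclass
class HostAttributes:
    hostname: str
    trusted_boot: bool          # platform reported a verified launch
    encryption_at_rest: bool    # local storage is encrypted
    geo_location: str           # reported data-center location

def satisfies_policy(host: HostAttributes, policy: dict) -> bool:
    """Return True only if the host meets every control required by the policy."""
    if policy.get("require_trusted_boot") and not host.trusted_boot:
        return False
    if policy.get("require_encryption_at_rest") and not host.encryption_at_rest:
        return False
    allowed = policy.get("allowed_locations")
    if allowed and host.geo_location not in allowed:
        return False
    return True

tenant_policy = {
    "require_trusted_boot": True,
    "require_encryption_at_rest": True,
    "allowed_locations": {"us-east", "eu-west"},
}

candidates = [
    HostAttributes("node-01", True, True, "us-east"),
    HostAttributes("node-02", False, True, "us-east"),
]

eligible = [h.hostname for h in candidates if satisfies_policy(h, tenant_policy)]
print(eligible)  # ['node-01'] -- only the compliant host is considered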

Trends Affecting Data Center Security

There are three overriding security considerations in the data center, namely:

  • New types of attacks
  • Changes in IT systems architecture
  • Increased compliance requirements

The nature and types of attacks on information systems are changing dramatically. The threat landscape is shifting from hackers working on their own, merely looking for personal fame, to organized, sophisticated attackers targeting specific types of data and seeking to gain and retain control of assets. These attacks are concerted, stealthy, and organized. Attacks have predominantly targeted operating systems and application environments, but new attacks are no longer constrained to software and the operating system. Increasingly, they are moving lower in the solution stack, down to the platform, affecting entities such as the BIOS, the various firmware sites in the platform, and the hypervisor running on the bare metal, where it is easy to hide, controls are still minimal, and the leverage is significant. Imagine, in a multi-tenant cloud environment, the impact malware can have if it gains control of a hypervisor.

Similarly, we see the evolving IT architecture creating new security challenges. Risks exist anywhere there are connected systems, and it does not help that servers, whether in a traditional data center or in a cloud implementation, were designed to be connected systems. Today, there is an undeniable trend towards virtualization, outsourcing, and cross-business and cross-supply-chain collaboration that blurs the concept of data being "inside" an organization and of organizational data boundaries. Drawing perimeters around these abstract and dynamic models is quite a challenge and may not even be practical. The traditional perimeter-oriented models aren't as effective as in the past; perhaps they never were, but the cloud raises the profile of these issues to the point that they can't be ignored anymore. The power of cloud and virtualization lies in the abstraction, which allows workloads to migrate for efficiency, reliability, and optimization.

This fungibility of infrastructure blurs the perimeter, compounding the security and compliance problems. A vertically owned infrastructure at least provided the possibility of a unified view of the infrastructure, which facilitated running critical applications with high security and compliance requirements. This view becomes unfeasible in a multitenant environment. With this loss of visibility, how is it possible to verify the integrity of the infrastructure on which your workloads will be instantiated and run?

Adding to the burden of securing more data in these abstract models is a growing legal or regulatory compliance burden to secure personally identifiable data, intellectual property or financial data. The risks (and costs) of non-compliance continue to grow.

These trends have a significant bearing on the security and compliance challenges organizations need to address as they commit to migrating their workloads to clouds.

Corporate-owned infrastructure can presumably provide two security advantages by virtue of being inside the enterprise perimeter. The first is security by obscurity: resources inside the enterprise perimeter, especially inside a physical perimeter, are difficult for outside intruders to reach. The second is genetic diversity: given that IT processes vary from company to company, an exploit that worked to breach one company may not work for another. However, these presumed advantages are unintended, and therefore difficult to quantify, and in practice they offer little comfort or practical utility.

Security and Compliance Challenges

The four basic security and compliance challenges to organizations are:

  • Governance. Cloud computing typically increases an organization's reliance on the cloud provider's logs, reports, and attestations in proving compliance. When companies outsource parts of their IT infrastructure to cloud providers, they effectively give up some control over their information infrastructure and processes, even as they are required to bear greater responsibility for data confidentiality and compliance. While enterprises still get to define how information is handled, who gets access to that information, and under what conditions in their private or hybrid clouds, they must largely take cloud providers at their word, or trust the SLA, that security policies and conditions are indeed being met. Even then, service customers may be forced to settle for whatever capability the provider can deliver. The organization's ability to monitor actual activities and verify security conditions within the cloud is usually very limited, and there are no standards or commercial tools to validate conformance to policies and SLAs.
  • Co-Tenancy and Noisy or Adversarial Neighbors. Cloud computing introduces new risks resulting from multi-tenancy, an environment where different users within a cloud share the same physical infrastructure to run their virtual machines. Creating secure partitions between co-resident virtual machines has proven challenging for many cloud providers. The risks range from the unintentional "noisy-neighbor" syndrome, where a workload that consumes more than its fair share of compute, storage, or I/O resources starves the other virtual tenants on that host, to the deliberately malicious, such as when malware is injected into the virtualization layer, enabling hostile parties to monitor and control any of the virtual machines residing on the system. Researchers at UCSD and MIT were able to pinpoint the physical server used by programs running on the EC2 cloud and then extract small amounts of data from these programs, by placing their own software there and launching a side-channel attack.
  • Architecture and Applications. Cloud services are typically virtualized, which adds a hypervisor layer to a traditional IT application stack. This new layer in the application stack introduces opportunities for improving security and compliance, but also creates new attack surfaces and potential exposure to risks. Organizations must evaluate the new monitoring opportunities and the risks presented by the hypervisor layer and account for them in policy definition and compliance reporting.
  • Data. Cloud services raise access and protection issues for user data and applications, including source code. Who has access, and what is left behind when you scale down a service? How is corporate confidential data protected from the virtual infrastructure administrators and cloud co-tenants? Encryption of data at rest, in transit, and eventually in use becomes a basic requirement, but it comes with a performance cost. If we truly want to encrypt everywhere, how is it done in a cost-effective and efficient manner? Finally, data destruction at end of life is a subject not often discussed. There are clear regulations on how long data has to be retained, and the assumption is that this data gets destroyed or disposed of once the retention period expires. Examples of these regulations include the Sarbanes-Oxley Act (SOX), Section 802: 7 years (US Securities and Exchange Commission 2003); HIPAA, 45 C.F.R. § 164.530(j): 6 years; and the FACTA Disposal Rule (Federal Trade Commission 2005). A minimal sketch of data-at-rest encryption follows this list.
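
As an illustration of the data-at-rest requirement, the sketch below encrypts a data blob with an authenticated cipher before it reaches provider-managed storage. It assumes the third-party Python cryptography package; the record contents and key handling are hypothetical and deliberately simplified.

# A minimal sketch (hypothetical data, simplified key handling) of protecting
# a tenant's data at rest with authenticated encryption before it is written
# to shared storage. Requires the third-party "cryptography" package.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_blob(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt and authenticate a blob; the nonce is prepended to the ciphertext."""
    nonce = os.urandom(12)                          # unique per encryption
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_blob(blob: bytes, key: bytes) -> bytes:
    """Split off the nonce and decrypt; raises if the data was tampered with."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)           # in practice: from a key manager
record = b"patient-id=4711; diagnosis=..."          # hypothetical sensitive record
stored = encrypt_blob(record, key)                  # this is what the provider stores
assert decrypt_blob(stored, key) == record
print(f"stored {len(stored)} encrypted bytes")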

With many organizations today using cloud services only for applications that are not mission critical or that have low confidentiality requirements, the security and compliance challenges seem manageable, but this is a policy of avoidance. These services don't deal with data and applications governed by strict information security policies such as health regulations, FISMA regulations, and the Data Protection Act in Europe. The security and compliance challenges mentioned above will become central to cloud providers and subscribers once these higher-value business functions and data begin migrating to private and hybrid clouds, creating very strong requirements for cloud security and for proving compliance. Industry pundits believe that the cloud value proposition will increasingly drive the migration of these higher-value applications, information, and business processes to cloud infrastructures. And as more and more sensitive data and business-critical processes move to cloud environments, the implications for security officers in organizations will be wide-ranging: they will need to provide a transparent and deep compliance and monitoring framework for information security.

So how do we address these challenges and requirements? With the concept of trusted clouds. Trusted clouds address many of these challenges and give organizations the ability to migrate both regular and mission-critical applications in order to leverage the benefits of cloud computing. Trusted clouds are the subject of the next section.

Trusted Clouds

There are many definitions and industry descriptions for trusted clouds. At the core of all these definitions are four foundational pillars:

  1. The trusted computing infrastructure
  2. Trusted cloud identity and access management
  3. Trusted software and applications
  4. Operations and risk management

Each one of these pillars is broad and deep, with a rich cohort of technologies, patterns of development, and, of course, security considerations. We will focus on the first pillar in the list, the trusted computing infrastructure. Before we delve into it, let's review some key security concepts to bring clarity to the discussion. These terms lay the foundation of what visibility, compliance, and monitoring entail. Let us start with baseline definitions for the terms trust and assurance:

  • Trust. Revolves around the assurance and confidence that people, data, entities, information, and processes will function or behave in expected ways. Trust may be human to human, machine to machine (for example, handshakes negotiated within certain protocols), human to machine (for example, when a consumer reviews a digital signature advisory notice on a website), or machine to human. At a deeper level, trust might be regarded as a consequence of progress towards security or privacy objectives.
  • Assurance. Provides the evidence or grounds for confidence that the security controls implemented within an information system are effective in their application. Assurance can be obtained by:
    • Actions taken by developers, implementers, and operators in the specification, design, development, implementation, operation, and maintenance of security controls.
    • Actions taken by security control assessors to determine the extent to which the controls are implemented correctly, operating as intended, and producing the desired outcome with respect to meeting the security requirements for the system.

With these definitions sorted out, let us now take a look at trusted computing infrastructure, where the computing infrastructure spans three domains, namely compute, storage, and network.

Trusted Computing Infrastructure

A trusted computing infrastructure consistently behaves in expected ways, with hardware and software working together to enforce these behaviors. The behaviors are consistent across the compute (server), storage, and network elements in the data center.

In traditional infrastructure, hardware is a bystander to security measures, as most of the anti-malware prevention, detection, and remediation is handled by software in the operating system, applications, or services layers. This approach is no longer adequate, as the software layers have become more easily circumvented or corrupted. In order to deliver on the promise of trusted clouds, a better approach is to create a root of trust at the most foundational layer of a system, that is, in hardware, and then to extend that root of trust upwards into and through the operating system, applications, and services layers. This new security approach is known as hardware-based or hardware-assisted security, and it becomes the basis for enabling trusted clouds.

Trusted computing relies on cryptographic and measurement techniques to help enforce selected behaviors by authenticating the launch of components and by authorizing processes. This authentication allows someone to verify that only authorized code runs on a system. It typically covers the initial boot and may also cover applications and scripts.

Establishing trust in a particular component also makes it possible to establish trust in other components relative to that trusted component. This transitive trust path is known as the chain of trust, with the first component defined as the root of trust.

A geometry is built on a set of postulates assumed to be true. Likewise, a trusted computing infrastructure starts with a root of trust: an elemental set of functions assumed to be immune from physical and other attacks. Since an important requirement for trust is to be tamper-proof, cryptography or some immutable, unique signature that identifies a component is used. A hardware platform is usually a good proxy for a root of trust; for most attackers, the risk, cost, and difficulty of tampering directly with hardware usually exceed the potential benefits.

With hardware as the initial root of trust, one can then measure software such as a hypervisor or operating system (that is, take a hash, such as MD5 or SHA-1, of the image of the component or components) to determine whether unauthorized modifications have been made to it. In this way, a chain of trust relative to the hardware can be established. Trust techniques include hardware encryption, signing, machine authentication, secure key storage, and attestation. Encryption and signing are well-known techniques, but they are hardened by the placement of keys in protected hardware storage. Machine authentication provides a user a higher level of assurance, as the machine is indicated as known and authenticated. Attestation provides a means for a third party (also called a trusted third party) to affirm that loaded firmware and software are correct, true, or genuine. This is particularly important to cloud architectures based on virtualization.
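
To make the measurement and chain-of-trust ideas concrete, here is a minimal Python sketch. Each component image is measured by hashing it, and each measurement is folded into a running chain value, in the spirit of a hardware root of trust extending measurements upward. The component names and images are hypothetical, SHA-256 stands in for whatever hash the platform uses, and a real platform would rely on a TPM and signed whitelists rather than plain software.

# A minimal sketch of building a chain of trust from measurements.
# Each component (firmware, boot loader, hypervisor) is "measured" by hashing
# its image, and each measurement is folded into a running value, mimicking
# the extend operation of a hardware root of trust. Names and images are
# hypothetical placeholders.

import hashlib

def measure(image_bytes: bytes) -> str:
    """Measure a component by hashing its image (SHA-256 used here)."""
    return hashlib.sha256(image_bytes).hexdigest()

def extend(chain_value: str, measurement: str) -> str:
    """Fold a new measurement into the running chain-of-trust value."""
    return hashlib.sha256(bytes.fromhex(chain_value) + bytes.fromhex(measurement)).hexdigest()

# Hypothetical boot components, measured in launch order.
components = {
    "firmware.bin": b"...firmware image bytes...",
    "bootloader.bin": b"...boot loader image bytes...",
    "hypervisor.bin": b"...hypervisor image bytes...",
}

chain = "00" * 32  # the root of trust starts the chain
for name, image in components.items():
    m = measure(image)
    chain = extend(chain, m)
    print(f"{name}: measurement={m[:16]}... chain={chain[:16]}...")

# An attestation service would compare the final chain value against a
# known-good value recorded for an approved configuration.
known_good_chain = chain  # placeholder: in practice this comes from a whitelist
print("platform trusted:", chain == known_good_chain)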

Tiered Cloud Applications and Services

While the cloud provides organizations with a more efficient, flexible, convenient, and cost-effective alternative to owning and operating their own servers, storage, networks, and software, it also erases many of the physical boundaries and controls that traditionally defined and protected an organization's data assets. Physical servers are replaced by virtual machines. Perimeters are established not by firewalls alone, but also by highly mobile virtual machines. As virtualization proliferates throughout the data center, the IT manager can no longer point to a specific physical node as being the home to any one critical process or data set, because virtual machines (VMs) move around to satisfy policies for high availability or resource allocation. Public cloud resources usually host multiple tenants concurrently, increasing the need for an isolated and trusted compute infrastructure as a compensating control. For this reason, the vast majority of data and applications handled by clouds today aren't business critical and have lower security requirements and expectations, tacitly imposing a limit on the value delivered. These are often referred to as Tier 2 and Tier 3 applications, and generally they are not directly related to the core business of the organization.

Higher-value business data and processes, that is, Tier 1 or line-of-business (LOB) applications, have, however, been slower to move into the cloud. These business-critical functions (for example, the cash management system for a bank, patient records management within a hospital, finance, ecommerce, 911 response systems, stock and commodity trading systems, and airline reservation systems) are usually run instead on in-house IT systems to ensure maximum control over the confidentiality, integrity, and availability of those processes and data. Although some organizations are using cloud computing for higher-value information and business processes, they're still reluctant to outsource the underlying IT systems because of concerns about their ability to enforce security strategies and to use familiar security controls in proving compliance.

Definition of Tier-1 Applications

While there are many specific definitions of what a Tier-1 or line-of-business application is, a generally accepted definition is:

An application that is critical to running the business and holds a special level of importance in the corporate enterprise, because its failure (measured in terms of reduced service quality or complete outages) would have a devastating effect on the business.

This definition, while generally descriptive, doesn't get down to the level of specificity needed to understand how one application compares to another in importance, scope, or complexity. In order to better understand the set of applications being delivered and managed, we need to get more precise and granular. Application criticality is a good metric to distinguish applications from each other in terms of importance to the business as well as their relative scope of influence on the business. Table 1 shows four levels of criticality and their definitions. The descriptions of these levels are given in terms of the impact on the business if these applications become unavailable.

While there may be other ways to define these classes of criticality, it is best to define them in the language that the business owner understands and cares about.

Tier    Criticality Level       Failures of applications in this class can result in

Tier 1  Mission Critical
  • Widespread business stoppage with significant revenue impact
  • Risk to human health/environment
  • Public, widespread damage to organization's reputation
  • Significant compliance violations

Tier 1  Business Essential
  • Direct revenue impact
  • Direct negative customer satisfaction
  • Compliance violation
  • Non-public damage to organization's reputation

Tier 2  Business Core
  • Indirect revenue impact
  • Indirect negative customer satisfaction
  • Significant employee productivity degradation

Tier 3  Business Supporting
  • Moderate employee productivity degradation

Table 1

Most of these applications were built to run in the context of an enterprise data center, so the way they store data and the way they transmit it to other systems is assumed to be trusted or secure. Applications running in the cloud must be assumed to be running in a somewhat more hostile environment than those running on premises. All the components previously considered trusted and assumed to be running in a safe environment are now running in an untrusted environment. As discussed above, the top cloud application security concern is the lack of control over the computing infrastructure. An enterprise moving a legacy application to a cloud computing environment gives up control over the underlying infrastructure, including servers, the network, access to logs, incident response, and patch management; this control is now deferred to a third party. While moving to the cloud can bring extraordinary cost savings and lighten the administrative burden, it also moves the level of control up the stack.

The threat model is different now. For example, the lack of physical control over the networking infrastructure might mandate the use of encryption for communication between the servers of an application that processes sensitive data, to ensure its confidentiality, as in the sketch below.
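
The sketch below shows one way an application could enforce this requirement using Python's standard ssl module: a server-to-server connection is established only over mutually authenticated TLS. The host name, port, and certificate paths are hypothetical placeholders.

# A minimal sketch of enforcing encryption for traffic between application
# servers when the underlying network can no longer be assumed trustworthy.
# The host name and certificate paths are hypothetical; the point is that the
# client refuses any connection that is not mutually authenticated TLS.

import socket
import ssl

APP_SERVER = "db.internal.example.com"   # hypothetical peer service
CA_BUNDLE = "/etc/pki/tenant-ca.pem"     # hypothetical tenant-controlled CA

context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=CA_BUNDLE)
context.minimum_version = ssl.TLSVersion.TLSv1_2   # reject legacy protocols
context.verify_mode = ssl.CERT_REQUIRED            # peer must present a valid cert
context.check_hostname = True                      # and it must match the host name

# Client certificate so the peer can authenticate this server as well.
context.load_cert_chain(certfile="/etc/pki/app.pem", keyfile="/etc/pki/app.key")

with socket.create_connection((APP_SERVER, 5432)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=APP_SERVER) as tls_sock:
        print("negotiated:", tls_sock.version(), tls_sock.cipher())
        tls_sock.sendall(b"...sensitive query...")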

Trusted Cloud Usage Models

In this abstracted and fungible cloud environment, providing an infrastructure that can be trusted, and thereby enabling the broad migration of Tier-3, Tier-2, and Tier-1 applications, requires specific and deliberate focus on security across the three infrastructure domains. Mitigating risk becomes more complex, as the cloud introduces ever-expanding, transient chains of custody for sensitive data and applications. Only when security is addressed in a transparent and auditable way will enterprises and developers have:

  • Confidence that their applications and workloads are equally safe in multi-tenant clouds
  • Visibility into, and control of, the operational state of the infrastructure, to compensate for what is lost in this abstracted environment
  • A capability for continuous monitoring for compliance

These translate to a set of needs for trusted clouds that can be summarized as:

  • More protections to address new threats
  • More visibility to compensate for lost physical control
  • The ability for the infrastructure to attest to its integrity to cloud tenants (a sketch of such an attestation check follows this list)
  • More control over the resources that host critical workloads
  • More capabilities to ensure compliance with security policies
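
As an illustration of the attestation need above, the sketch below shows a tenant-side check that an attestation report came from a trusted source and matches a known-good measurement. All names and values are hypothetical, and an HMAC stands in for the hardware-backed signature a real attestation scheme would use.

# A minimal sketch of a tenant-side check that an attestation report really
# came from a trusted agent and matches a known-good measurement. Real
# attestation uses hardware-protected asymmetric keys and signed quotes; an
# HMAC over the report stands in for that signature here, and all names and
# values are hypothetical.

import hashlib
import hmac
import json

KNOWN_GOOD_MEASUREMENTS = {
    "hypervisor": "9f2b...e7",   # hypothetical whitelist entry
}
ATTESTATION_KEY = b"shared-verification-key"  # stand-in for the quote-signing key

def verify_report(report_json: bytes, signature: bytes) -> bool:
    """Check the report's signature, then compare measurements to the whitelist."""
    expected = hmac.new(ATTESTATION_KEY, report_json, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False  # report was not produced by the trusted agent
    report = json.loads(report_json)
    return all(
        report.get("measurements", {}).get(name) == value
        for name, value in KNOWN_GOOD_MEASUREMENTS.items()
    )

report = json.dumps({"host": "node-01", "measurements": {"hypervisor": "9f2b...e7"}}).encode()
sig = hmac.new(ATTESTATION_KEY, report, hashlib.sha256).digest()
print("host trusted:", verify_report(report, sig))  # True only if both checks pass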

A cloud consumer may not articulate the needs in this fashion. From their perspective, the key questions are:

  • How can I trust the cloud enough to use it?
  • How can I protect my applications and workloads in the cloud, and from the cloud?
  • How can I broker between devices and cloud services to ensure trust and security?

A cloud provider has to address these in a meaningful way for its tenants. These needs translate into a set of foundational usage models for trusted clouds that apply across the three infrastructure domains:

  • Boot integrity and protection
  • Data governance and protection, at rest, in motion, and during execution
  • Runtime integrity and protection

The scope and semantics of these usage models change across the three infrastructure domains, but the purpose and intent are the same. How they manifest and get implemented in each of the domains can differ.

For example, data protection in the context of the compute domain entails protection (both confidentiality and integrity) of the virtual machines at rest, in motion, and during execution: their configuration, state, secrets, keys, certificates, and other entities stored within.

The same data protection usage has a different focus in the network domain: it would include protection of the network flows, network isolation, confidentiality on the wire, and tenant-specific IPS, IDS, firewalls, deep packet inspection, and so on.

In the storage domain, data protection is about strong isolation and segregation, confidentiality, sovereignty, and integrity. Data confidentiality, which is a key part of data protection across the three domains, would use the same technological components and solutions, that is, encryption.

It is not sufficient for the service provider to enable encryption for only some key aspects. More and more organizations demand that service providers encrypt everything, which can be operationally expensive and can benefit from hardware support. Beyond that, organizations would like to control the encryption algorithms and the keys used for decryption. It is therefore in the interest of the service provider to offer a tenant-controllable encryption and decryption process, not necessarily aligned with provider-controlled security and encryption policies; a minimal sketch of such a tenant-controlled scheme follows.
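
The sketch below illustrates one hypothetical form of tenant-controlled encryption: an envelope scheme in which the provider stores only data encrypted under a per-object data key, and that data key is itself wrapped with a key-encryption key the tenant controls. It assumes the third-party Python cryptography package; the names are illustrative, not a specific provider's API.

# A minimal sketch (hypothetical names) of tenant-controlled envelope encryption:
# the provider never sees the tenant's key-encryption key (KEK), only data
# encrypted with a per-object data key and that data key wrapped by the KEK.
# Requires the third-party "cryptography" package.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def seal(plaintext: bytes, tenant_kek: bytes) -> dict:
    """Encrypt data with a fresh data key, then wrap the data key with the tenant's KEK."""
    data_key = AESGCM.generate_key(bit_length=256)
    data_nonce, wrap_nonce = os.urandom(12), os.urandom(12)
    return {
        "ciphertext": data_nonce + AESGCM(data_key).encrypt(data_nonce, plaintext, None),
        "wrapped_key": wrap_nonce + AESGCM(tenant_kek).encrypt(wrap_nonce, data_key, None),
    }

def unseal(sealed: dict, tenant_kek: bytes) -> bytes:
    """Unwrap the data key with the tenant's KEK, then decrypt the data."""
    wk = sealed["wrapped_key"]
    data_key = AESGCM(tenant_kek).decrypt(wk[:12], wk[12:], None)
    ct = sealed["ciphertext"]
    return AESGCM(data_key).decrypt(ct[:12], ct[12:], None)

tenant_kek = AESGCM.generate_key(bit_length=256)   # held by the tenant, not the provider
sealed = seal(b"quarterly financials ...", tenant_kek)
assert unseal(sealed, tenant_kek) == b"quarterly financials ..."
print("provider stores:", list(sealed.keys()))      # only ciphertext and a wrapped key

Because the key-encryption key never leaves the tenant's custody in this sketch, the tenant, rather than the provider, ultimately decides who can decrypt the data.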

For a solution provider, the methodical development and instantiation of these usage models across all of the domains provides the necessary assurance for organizations migrating their critical applications to a cloud infrastructure, and it enables the foundational pillar of trusted clouds.

Conclusion

In this article, we covered the cloud security and compliance challenges and introduced the concept of trusted clouds. We covered the motivations for trusted clouds and introduced the key usage models that enable bringing up a trusted computing infrastructure, a foundational pillar for trusted clouds. These models provide a foundation for enhanced security that can evolve with new technologies from Intel and others in the hardware and software ecosystem.

As the reader is painfully aware, there are no silver bullets for security; no single technology solves all problems, and security is too multifaceted for such a simplistic approach. But it is very clear that a new set of security capabilities is needed, and it is best to start with the most foundational elements. Trusted platforms provide such a foundation. Such platforms provide:

  • Increased visibility into the operational state of the critical controlling software of the cloud environment, through attestation capabilities
  • A new control point, capable of identifying and enforcing local known-good configurations of the host operating environment and of reporting the resulting launch trust status to cloud and security management software for subsequent use

Disclaimer

This article is based on material found in the book Cloud Security and Infrastructure by Raghu Yeluri and Enrique Castro-Leon.

This article is published with the permission of the authors and Publisher. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without the prior written permission of the Publisher.