Manish Dave Biography

Manish Dave is a senior network engineer in Intel IT and has been with Intel for over 10 years. He is always looking to improve our network security and is very passionate about technical leadership. He can be reached at manish.dave@intel.com.

Toby Kohlenberg Biography

Toby Kohlenberg is a senior information security specialist in Intel's Information Risk and Security Group. Toby loves to analyze systems, networks, and applications to identify weak or vulnerable points. He has been with Intel for over 15 years and is a regular public speaker on security topics. Toby can be reached at toby.kohlenberg@intel.com.


Stacy Purcell Biography

Stacy Purcell is a senior security architect who has been with Intel for over two decades, focused on networking and security. He is very passionate about Security Business Intelligence and can be reached at stacy.p.purcell@intel.com.


Alan Ross Biography

Alan Ross is a senior principal engineer who leads up Intel IT’s security architecture and technology development team. Alan is passionate about new security models and ways to attack old problems. He has been with Intel for over a decade. Alan can be reached at alan.d.ross@intel.com.


Jeff Sedayao Biography

Jeff Sedayao is a researcher in Intel IT’s Labs group, focused on cloud security and manageability. Jeff has been with Intel for over 20 years and has played a variety of roles in network and security areas. He often speaks publicly and continues to drive innovative thought in research and technology development. He can be reached at jeff.sedayao@intel.com.


Some Key Cloud Security Considerations

Published: September 30, 2013 • Service Technology Magazine Issue LXXVI

Abstract: As enterprises embark on their cloud computing journey, tactical as well as strategic elements that complement existing security capabilities must be considered. The enterprise must demonstrate legal and regulatory compliance while supporting application and data access via cloud service providers. The first consideration is federation across cloud service providers and disparate environments and frameworks. It is important to share identity and authorization information with partner entities in a reliable, secure, and scalable fashion. A second consideration is the protection of information, not just of an enterprise's intellectual property, but the private information of individuals as well. The shared nature of cloud computing environments demands that steps must be taken to protect privacy. A third consideration is the need to segregate computing environments into trust zones and to deliver the appropriate level of detective, preventative, and corrective controls to match the criticality of the information and applications in a given service level. The final consideration is security business intelligence to support operations, investigations, and forensics. Enterprises must ensure that the right information and predictive analytics are available to support the business. This article will walk through these four elements in order to provide tactical and strategic guidance on some of the extended elements of cloud computing security.

Introduction

Cloud security has been a hot topic for the past several years, garnering the attention of enterprises as well as researchers and providers, who seek ways to ensure appropriate risk mitigation and information protection within hosted environments.

Intel IT is no different from other large enterprises in its desire to have flexible and scalable alternatives for computing. We have made excellent strides in our internal cloud computing environment and now look for the most appropriate ways to scale our enterprise environment externally. We are looking at platform, software, and infrastructure as a service models to find the most appropriate usages.

Security remains top of mind within the company and, as some of the people responsible for forward thinking in the security domain, we have spent some time working out the key capabilities needed to make a big move to external cloud service providers.

The purpose of this article is to share some of our current thinking and plans to help make cloud services more palatable from a security perspective. We have a broader strategy and architecture that includes data protection, identity management, and application security, but we believe the four topics presented here offer a glimpse into areas where we have focused that others may not have. Our purpose is not to give prescriptive guidance but to open up new areas of thought and exploration for others.

Let's look at four key areas of cloud security: federation, anonymization, segmented environments, and security business intelligence.

Federation Across Service Providers and Partners

As compute, storage, and network resources are used across autonomous domain boundaries, policy management frameworks need to be extended beyond single autonomous domains. The manual and cumbersome methods used today to create, negotiate, and manage policies across federated domains do not scale as cross-domain resource usage becomes ubiquitous and the de facto standard for computing. Business trends are driving increased demand for collaboration from anywhere, at any time, not only within the enterprise network but externally as well. Outsourcing of specific business processes is becoming common as moving business flows to external service providers and partners becomes more feasible from a cost and maturity perspective over time. A large enterprise like Intel federates with hundreds of partners, customers, and suppliers, and the number of new autonomous domains with which to federate increases every year.

The biggest roadblocks for enterprises seeking seamless collaboration with partners and adoption of maturing cloud-based service offerings are related to security and privacy. The enterprise security policy, driven by business policies, intellectual property protection, regulatory compliance, and so on, is either not met by partners or requires a manual, cumbersome process for each instance of a partnership. To fill this gap, we see a need for an automated, simplified way for partners to federate on security and privacy policies for establishment, negotiation, implementation, and audit. Examples of such policy domains and relationships include partners, service providers (including cloud), offshore design centers, suppliers, or even organizations within the company (as may apply to mergers and acquisitions over time). In this section, we explore how policy negotiation, federation, agreement, and audit can occur for such a relationship between these policy domains.

Our Framework Design

Our high-level framework comprises the following elements:

  • Distributed PBMS across multiple domains. As the resources used for an application or transaction span multiple domains whose policy-based management systems (PBMS) are outside the control of a single policy management authority (PMA), we need to be able to deal with distributed PBMS. Note that even within an autonomous domain, it is not uncommon to have multiple PBMS due to organization boundaries, security requirements, and zones or structures of the infrastructure used.
  • Interfaces that securely expose PBMS functionality. As we are dealing with cross-domain policy management, the management APIs for these distributed PBMS must be securely exposable across autonomous domains. While this may serve the purpose of some simple forms of federation (commonly used today by cloud providers), by itself, this does not resolve the key issues and gaps for PBMS in a true federated environment.
  • Trusted federation models. We need to base the cross-domain PBMS trust models on existing and realistic inter-domain relationships. This may require a preexisting trust model or a dynamic setup and teardown of the trust relationship.
  • A services model. We propose the use of policy services to perform various common functions required to make the multi-domain policy enforcement, verification, and audit possible. Services have been successfully used as the abstracted components of a system that can interact with the underlying systems and also provide the functionality needed across individual systems. By creating a common layer using these policy services, we can allow integration and interface with the existing distributed PBMS for management by multiple autonomous policy management authorities.

Some examples of such services are:

  • Policy agreement/negotiation services. This category of services can be used by multiple trusted PMAs to work together on a policy agreement, negotiation, and other pre-enforcement aspects of creating the cross-domain policies. It requires that domains have a way of securely and programmatically exposing their policy so that applications looking for resources can determine what resources can be used.
  • Policy translation and normalization services. As the PBMS may not follow a standard interface or policy language, this category of services will perform the function of translating and normalizing the "agreed upon" policies (between two or more PMAs).
  • Policy conflict resolution services. This category of services can be used to resolve any pre-verification policy conflicts as a validation, as well as work in conjunction with other services such as the policy agreement and translation/normalization service.
  • Policy interpretation services. This category of services interprets the translated and normalized policies, after pre-verification steps such as conflict resolution, so that the policy agreements (between PMAs) can be implemented across the distributed PBMS.
  • Policy audit and verification services. This category of services provides governance, auditability, and post-verification for the implemented policies on the distributed PBMS.

There are existing federation implementations such as PlanetLab's implementation of GENI's slice-based facility architecture that offer APIs that allow for determining a domain's resource policies. Knowing a policy is only the first step, however, as our design proposes services for not just learning about policies but finding conflicts between domain policies and allowing for negotiations that could resolve those conflicts.
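
To make the framework concrete, the following is a minimal sketch, in Python, of how a domain's policy could be exposed programmatically and how a conflict resolution service might flag incompatible rules before negotiation. The rule representation, attribute names, and conflict logic are our own illustrative assumptions, not part of any existing PBMS product or standard.

    # Minimal sketch: hypothetical rule representation and a naive conflict check
    # that a policy conflict resolution service might run before negotiation.
    from dataclasses import dataclass

    @dataclass
    class PolicyRule:
        attribute: str   # e.g. "encryption_at_rest" or "log_retention_days"
        operator: str    # "==", ">=", "<="
        value: object

    def conflicts(a: PolicyRule, b: PolicyRule) -> bool:
        """Same attribute with incompatible requirements counts as a conflict."""
        if a.attribute != b.attribute:
            return False
        if a.operator == b.operator == "==":
            return a.value != b.value
        if {a.operator, b.operator} == {">=", "<="}:
            floor = a.value if a.operator == ">=" else b.value
            ceiling = b.value if a.operator == ">=" else a.value
            return floor > ceiling   # a floor above the ceiling is unsatisfiable
        return False

    def find_conflicts(domain_a, domain_b):
        """Enumerate rule pairs that a negotiation service would have to resolve."""
        return [(a, b) for a in domain_a for b in domain_b if conflicts(a, b)]

    # Two autonomous domains expose their policies programmatically.
    enterprise_policy = [PolicyRule("encryption_at_rest", "==", True),
                         PolicyRule("log_retention_days", ">=", 365)]
    provider_policy = [PolicyRule("encryption_at_rest", "==", True),
                       PolicyRule("log_retention_days", "<=", 90)]

    print(find_conflicts(enterprise_policy, provider_policy))
    # -> the retention rules conflict and would be handed to the negotiation service

In practice, the same rule representation would also feed the translation/normalization and interpretation services, so that an agreed-upon policy could be pushed to each domain's PBMS in its native format.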

Implementation Considerations

Policy services must be implemented with the required level of assurance and integrity to make trusted interaction possible on top of untrusted layers (analogous to an IPsec implementation on top of the unprotected IP Internet) and to provide a sufficient level of security and privacy.

The policy services can be hosted by third-party organizations or in a "closed user group" relationship. These services must be available for any ad hoc, dynamic relationship to be set up and can potentially be used even for complex multi-party relationships. The economic feasibility of such services can provide the incentive for implementing them on the commercial Internet, similar to the security services offered today, such as network/cloud-based firewall, intrusion detection, and antivirus services hosted in the Internet cloud.

In addition, an organization or enterprise may choose to implement some or all of these policy services within its domain and provide the extensibility for its private federation partners. The key benefit of this proposed service-oriented architecture is that services can be selected as required by the level of trust and the relationship used to federate the resources. Not all services must be used in a given relationship, and some, such as the post-verification and audit service, are optional.

Some examples of security and privacy policies that can be implemented using this model are authentication, authorization, encryption, rights management, and access control. However, the framework can easily be extended to any cross-domain policy, such as resource management, service levels, or even pay-as-you-go policies. For example, a policy requirement for a consumer of infrastructure as a service might be not to exceed certain levels of resource utilization, so as to cap the usage charges incurred per hour or day.
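
Such a pay-as-you-go policy could be expressed as simple data and checked automatically. The following sketch illustrates the idea; the limit names and values are hypothetical, not those of any specific provider.

    # Illustrative only: a consumer-side resource-utilization policy and a check
    # that reports which limits have been exceeded.
    resource_policy = {
        "max_vcpu_hours_per_day": 200,
        "max_storage_gb": 500,
        "max_egress_gb_per_day": 50,
    }

    def violations(observed_usage: dict, policy: dict) -> list:
        """Return the policy limits exceeded by the observed usage."""
        return [key for key, limit in policy.items()
                if observed_usage.get(key, 0) > limit]

    usage_today = {"max_vcpu_hours_per_day": 250, "max_storage_gb": 120}
    print(violations(usage_today, resource_policy))  # ['max_vcpu_hours_per_day']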

Applying the Federation Framework

Now that we have designed an intercompany policy federation framework, we need to find places where it can be used. After talking to a number of security architects and other security personnel, we came up with a list of possible scenarios where policy federation can work. One example: Intel has a supplier, Supplier A, who must periodically log into systems inside Intel to perform maintenance or to debug problems. Our current approach is to create special limited-access accounts every time a vendor needs to access equipment. An implementation using our framework would allow our suppliers to use their own identities, saving Intel the time and cost of creating accounts for each access. An audit service and controls at the Intel federation gateway provide detective controls in case the vendor's personnel attempt something inappropriate.

A similar use case might be in a co-development situation, where Intel is developing hardware and software with a partner. The partner needs access to some set of machines at Intel containing newly developed unannounced hardware in order to develop drivers on them. Rather than having to manage the identities and attributes of all those needing access, Intel could use the identities provided by the development partner.
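
As a rough illustration of what the federation gateway would do in these use cases, the sketch below authorizes access based on a partner-asserted identity and writes an audit record as a detective control. The issuer names, resource names, and attributes are invented for the example and do not reflect our actual gateway implementation.

    # Hypothetical federation gateway check: trust identity assertions from
    # approved partner issuers, authorize on asserted attributes, and always
    # record an audit trail.
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    audit_log = logging.getLogger("federation_gateway.audit")

    TRUSTED_ISSUERS = {"supplier-a.example.com", "co-dev-partner.example.com"}
    ACCESS_POLICY = {"lab-server-42": "driver_developer",
                     "hvac-controller-7": "maintenance_tech"}

    def authorize(assertion: dict, resource: str) -> bool:
        """Decide access from a partner-asserted identity and log the decision."""
        issuer_ok = assertion.get("issuer") in TRUSTED_ISSUERS
        role_ok = ACCESS_POLICY.get(resource) in assertion.get("roles", [])
        decision = issuer_ok and role_ok
        audit_log.info("decision=%s user=%s issuer=%s resource=%s time=%s",
                       decision, assertion.get("subject"), assertion.get("issuer"),
                       resource, datetime.now(timezone.utc).isoformat())
        return decision

    assertion = {"subject": "jane@supplier-a.example.com",
                 "issuer": "supplier-a.example.com",
                 "roles": ["maintenance_tech"]}
    print(authorize(assertion, "hvac-controller-7"))  # True, with an audit record written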

We are currently talking to different architects whose business lines require intercompany connectivity for further access. Our most likely avenue to adoption would be in new intercompany connections, as the cost of transitioning to a new approach that is not yet fully engineered would be very high for existing connections.

Future Work

There is much work to be done with our federation policy framework. As mentioned above, a key step is getting our approach accepted by those implementing intercompany connections. Our approach needs to be broadly understood and accepted in order to justify the investment needed to get it into production.

The next set of work involves improving the current implementation. Our POC implementation is rudimentary and needs to work with a host of different IDAM products, from Active Directory to LDAP servers. There needs to be an easier, graphical interface for configuring policies. We used SOAP for communicating security policy between federation gateways, but did not implement security features like message signing. In addition, other companies might want to implement other security policy protocols such as SAML or XACML.
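
For instance, message signing between federation gateways could be added with relatively little code. The sketch below signs and verifies a policy message using an HMAC over the message body; it is a stand-in for WS-Security/XML signature mechanisms, uses a hypothetical pre-shared federation key, and is not our POC code.

    # Sketch only: signing a policy message exchanged between federation
    # gateways with an HMAC shared between the two domains.
    import hashlib
    import hmac
    import json

    SHARED_KEY = b"negotiated-out-of-band"   # hypothetical pre-shared key

    def sign_policy(policy: dict) -> dict:
        body = json.dumps(policy, sort_keys=True).encode()
        signature = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
        return {"body": policy, "signature": signature}

    def verify_policy(message: dict) -> bool:
        body = json.dumps(message["body"], sort_keys=True).encode()
        expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, message["signature"])

    msg = sign_policy({"attribute": "encryption_at_rest", "operator": "==", "value": True})
    print(verify_policy(msg))   # True; tampering with the body breaks verification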

We have created a framework for policy federation that can enable Intel to be more agile in establishing intercompany collaboration and do so at lower setup and management cost. Our next steps are to find the best fit for our approach and to improve its implementation to enable a more agile and efficient Intel.

Anonymization

"Anonymization: The act or process of making anonymous, of hiding or disguising identity."

Enterprises need to keep data about people and other enterprises but need to maintain confidentiality about particular parts. Not only are enterprises concerned with protecting intellectual property and other proprietary information, but there are also regulatory and legal considerations, particularly concerning personally identifiable information. Enterprises like Intel would like to use public cloud computing, but they must make sure that their data is secure.

Anonymization is one potential answer to privacy concerns and to cloud computing security. By obscuring key pieces of customer data and other confidential data, privacy could be maintained while still having usable information for processing. Anonymized data could be put in the cloud (or elsewhere) and processed without having to worry about whether other parties captured that data. Later, the results could be collected and matched back to the private data in a secure area.

Figure 1 - Using anonymization to do safe computing in the cloud (Source: Intel, June 2012)

We show how such a process could work in Figure 1. We want to calculate total revenue but don't want to expose company names associated with that revenue. We want to make sure that the customer/revenue relationship is kept private even if the data in the cloud is completely compromised. Total revenue can be calculated in the cloud and corrected internally by subtracting off fictitious company amounts. Our approach even blocks some kinds of data mining attacks, as by adding fictitious data, we make it impossible to calculate things like the number of customers, companies with the most revenue, and companies with the least revenue.
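
The following sketch walks through that Figure 1 flow with made-up numbers: company names are replaced with opaque tokens and fictitious records are mixed in before the data leaves the enterprise, the cloud computes only over the anonymized set, and the total is corrected internally afterwards. The data and token scheme are illustrative assumptions, not our production process.

    # Anonymize-then-compute sketch for the Figure 1 workflow (made-up data).
    import secrets

    real_revenue = {"Acme Corp": 1200000, "Globex": 800000}          # stays internal
    fictitious_revenue = {"Fake Co 1": 430000, "Fake Co 2": 275000}  # decoy rows

    # Internal, secure side: tokenize names and merge in fictitious rows.
    token_map = {name: secrets.token_hex(8)
                 for name in list(real_revenue) + list(fictitious_revenue)}
    anonymized = {token_map[name]: amount
                  for name, amount in {**real_revenue, **fictitious_revenue}.items()}

    # Cloud side: sees only opaque tokens and amounts.
    cloud_total = sum(anonymized.values())

    # Back inside the enterprise: subtract the fictitious amounts.
    true_total = cloud_total - sum(fictitious_revenue.values())
    assert true_total == sum(real_revenue.values())
    print(true_total)  # 2000000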

Furthermore, as users traverse the compute continuum, as devices increasingly implement GPS capabilities, and as location-oriented social media (such as Foursquare) becomes more common, location tracking becomes more of an issue. Location information can be very useful for providing customized and localized service, but the storage and mining of location data raises privacy and possibly regulatory issues. Anonymization will become increasingly important in that space. Anonymization can be a tricky process, and it can have severe consequences if not done properly.

The following sections go over commonly used anonymization techniques and our planned next steps.

Anonymization Techniques

In our discussion of anonymization concepts, we talked about things like "obscuring data" for quasi-identifying attributes. Exactly how can we "obscure" data? This section discusses these obscuring techniques. We start by describing some example log data; that log data is then modified to illustrate the different anonymization techniques. For each technique, in addition to showing an example, we describe its most appropriate uses. The section concludes with an anonymization example and a list of the different techniques used to create it.

With hiding, a value is replaced with a constant value (typically 0). This technique is sometimes called "black marker."

Hiding is useful for suppressing sensitive attributes that may not be needed for processing. If we needed to publish data for a phone directory, information like salary would not be needed.

A hash function maps each incoming value to a new (not necessarily unique) value. It is often used to map a large, variable amount of data into a number of a certain length.

Permutation maps each original value to a unique new value. This technique allows us to map the new value back to the original value, provided we store the mappings. This property of permutation lets us do processing in the cloud using permutation-changed values, and then map the results to our original values in a secure place.

Shift adds a fixed offset to the numerical values.

Shift conceals data while letting us do computations in areas like the cloud. We can subtract out the effects of shifts in a secure area.

Enumeration maps each original value to a new value so that ordering is preserved. This allows us to perform studies on the data involving ordering.

Updated is not really an anonymization technique, but it is closely related: updated data has its checksums recalculated to reflect the changes made to the log file.

With truncation, a field is shortened, losing data at the end. This hides data while keeping the information that the data is part of a group.
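
To tie the techniques together, here is a small sketch that applies several of them to a made-up log record. The field names, values, and parameter choices are hypothetical and chosen purely for illustration.

    # Applying several anonymization techniques to a fictitious log record.
    import hashlib

    record = {"user": "jsmith", "salary": 98000, "login_hour": 14,
              "src_ip": "10.23.45.67", "badge_id": 55321}

    anonymized = {
        # Hiding ("black marker"): suppress the attribute with a constant.
        "salary": 0,
        # Hash: map the value to a fixed-length (not necessarily unique) value.
        "user": hashlib.sha256(record["user"].encode()).hexdigest()[:12],
        # Shift: add a fixed offset that can be subtracted in a secure area.
        "login_hour": record["login_hour"] + 7,
        # Truncation: keep only the network prefix, preserving group membership.
        "src_ip": ".".join(record["src_ip"].split(".")[:2]) + ".0.0",
        # Enumeration: an order-preserving remapping of the original value.
        "badge_id": record["badge_id"] * 10 + 3,
    }
    print(anonymized)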

Next Steps

It is clear that more work needs to be done with anonymization. The potential pitfalls of anonymization do not seem to be well known, so education about anonymization uses and tradeoffs (particularly with use in clouds) needs to be an ongoing process. There are problems with the available tools for doing anonymization: available open source tools seem capable but are not well documented, and some have been abandoned by their creators. Our POC used AES encryption to do anonymization. AES-NI instructions could be used to speed up the anonymization process, as a potential use case for AES-NI is secure use of public clouds.
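
We cannot share the POC code itself, but the idea of using AES as a keyed, reversible mapping for identifiers can be sketched as follows. This sketch assumes the third-party Python "cryptography" package, handles only identifiers that fit in a single 16-byte block, and keeps the key inside the enterprise; the block-cipher operations are exactly the kind of work that AES-NI accelerates.

    # Sketch of AES-based pseudonymization (not the POC implementation).
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)   # held only in the secure internal area

    def pseudonymize(identifier: str) -> str:
        block = identifier.encode().ljust(16, b"\x00")[:16]   # pad to one AES block
        encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        return (encryptor.update(block) + encryptor.finalize()).hex()

    def recover(token_hex: str) -> str:
        decryptor = Cipher(algorithms.AES(key), modes.ECB()).decryptor()
        block = decryptor.update(bytes.fromhex(token_hex)) + decryptor.finalize()
        return block.rstrip(b"\x00").decode()

    token = pseudonymize("Acme Corp")
    print(token, recover(token))   # opaque hex token, then "Acme Corp" again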

Environment Segregation

In theory, the goal of general cloud environments is to provide one large, seamless, transparent computing space in which system owners pay no attention to the infrastructure or underlying environment. While this works in situations where all the customers require a consistent level of security, most enterprises require varying levels of security depending on the sensitivity and importance of the data and computation being performed. To enable enterprises to take advantage of public cloud environments, there needs to be some implementation of segregated environments with distinct controls and security requirements that match the needs of the enterprise. This is also necessary to fully enable the concept of policy domains that we previously discussed.

To support the varying security requirements that an enterprise needs, and that may be defined by different policies in a federated cloud, we have implemented the concept of multiple trust zones. Each trust zone is an environment designed to meet the security requirements for a specific class of data and computing. A common cloud environment can be thought of as low trust: it has default controls to protect the infrastructure but no restrictions on the applications or virtual machines that run within the cloud. For the purpose of this discussion, we will focus on our work implementing a High Trust Zone (HTZ).

Key Risks

In developing the concept of the HTZ, we identified specific risks that must be addressed:

  • Security of the infrastructure management systems. While the hypervisor frequently has the most focus from an attack perspective, we have found that the management systems are often more vulnerable and more useful for attackers to compromise.
  • Security of the hypervisor. The hypervisor itself must be understood to be nothing more than another operating system. As such it must be secured and monitored for indications of compromise.
  • Security of the guest VMs. Because no hypervisor is perfect, any vulnerability in a guest VM must be considered additive to the overall vulnerability of the environment. Compromise of one VM makes it easier to compromise the hypervisor and the rest of the cloud.
  • Security of the applications running in the guest VMs. In the same vein as the security of the guest VMs, the applications running in the cloud must be considered as part of the attack surface for the overall environment.

Key Controls

To reduce the consequences of these identified risks, the HTZ uses 24 administrative controls. We are implementing these controls in three phases, two of which are complete. The first phase used controls to isolate the virtualization management infrastructure from the servers being virtualized and to protect the accounts used to manage virtualization. The second phase established controls for extensive security monitoring, taking a holistic approach that included developing deep logging capabilities, and solutions for monitoring the management agents. For the third and final phase, we are adding complex network monitoring that includes a diverse mix of host and network intrusion detection capabilities. We have begun deploying our HTZ architecture and virtualization process in multiple data centers. Our HTZ solution:

  • Takes advantage of virtualization and cloud computing to improve the agility and efficiency of our highly security-sensitive applications
  • Provides controls we can apply to our Internet-facing environments to further reduce their risk, as well as improve their agility and operating efficiency
  • Prepares us to take advantage of public cloud services for internal and external facing applications in the future

Extending Trust Zones to Cloud Service Providers

As we move toward external cloud computing environments it will be important for us to communicate our key risks and controls to the cloud service providers to help ensure that they are meeting our basic requirements for hosting applications, services, and data in different trust levels. This is critical in terms of extending our enterprise cloud computing capabilities outside of Intel and should help us with overall security for cloud service providers. Our goal is that we can ultimately ensure that applications and information are protected independent of where they are being hosted.

Federated Security Business Intelligence

Security Business Intelligence (SBI) is Intel IT's collection of services that is designed to capitalize on our detective controls: logging, alerts, threat intelligence, and so on. We have been working on SBI for the past decade and continue to grow our capabilities internally while looking to what will be needed externally in terms of information gathered from other sources (that is, cloud service providers, external threat intelligence services, and so on).

SBI Architecture

Our SBI architecture is based around four key business objectives: keep Intel legal, keep information available, keep information protected, and keep controls cost effective. These are drivers that the business has articulated from a threat management, investigations, and forensics perspective.

We also realized early on that it is a combination of people, processes, tools, and applications that would define an enterprise SBI service offering. With that in mind, we determined the need to create business process applications, analytics applications, and business intelligence capabilities, along with an information infrastructure.

Foundationally, we deliver the SBI architecture in layers. The first layer is a common logging service, which facilitates the aggregation and normalization of multiple event sources and streams. Normalizing data is important because different event sources provide different event/log formats, structures, and time stamps. Aggregation is the ability to put all of these normalized event sources in one repository so they can be accessed and analyzed.
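
As a simple illustration of normalization, the sketch below maps two invented event formats onto one common schema before they land in the shared repository. The field names and formats are hypothetical, not those of our actual logging service.

    # Normalizing two different event formats into a common schema.
    from datetime import datetime, timezone

    def normalize_firewall(raw: dict) -> dict:
        return {"timestamp": datetime.fromtimestamp(raw["epoch"], tz=timezone.utc),
                "source": "firewall",
                "user": None,
                "src_ip": raw["src"],
                "action": raw["verdict"]}

    def normalize_vpn(raw: dict) -> dict:
        ts = datetime.strptime(raw["time"], "%Y-%m-%d %H:%M:%S")
        return {"timestamp": ts.replace(tzinfo=timezone.utc),
                "source": "vpn",
                "user": raw["username"],
                "src_ip": raw["client_ip"],
                "action": raw["event"]}

    common_log = [
        normalize_firewall({"epoch": 1375349400, "src": "192.0.2.7", "verdict": "deny"}),
        normalize_vpn({"time": "2013-08-01 10:12:03", "username": "jsmith",
                       "client_ip": "198.51.100.9", "event": "login_failed"}),
    ]
    print(common_log)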

The second layer of the architecture is the correlation engine, which is used to write rules that allow us to correlate events across different sources in the common logging service. Rules are written that will allow us to detect new threats, correlate evidence of attacks or attempted intrusions, and help facilitate investigations. The correlation engine is the heart of the SBI architecture because without it we would end up looking through billions of events on a daily basis and trying to determine which ones were real, relevant, and meaningful. The correlation engine provides the capability to funnel down the raw event streams into actionable chunks.
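
A toy correlation rule over such normalized events might look like the following; it flags a source IP that accumulates several failures across sources within a short window. The threshold, window, and schema (shared with the normalization sketch above) are illustrative assumptions, not rules from our production engine.

    # Toy correlation rule: repeated failures from one source IP across sources.
    from collections import defaultdict
    from datetime import timedelta

    def correlate_failures(events, threshold=3, window=timedelta(minutes=10)):
        """Return source IPs with at least `threshold` failures inside `window`."""
        failures = defaultdict(list)
        for event in events:
            if event["action"] in ("login_failed", "deny"):
                failures[event["src_ip"]].append(event["timestamp"])

        flagged = []
        for src_ip, times in failures.items():
            times.sort()
            for i in range(len(times) - threshold + 1):
                if times[i + threshold - 1] - times[i] <= window:
                    flagged.append(src_ip)
                    break
        return flagged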

The third layer of the architecture is the predictive analytics piece. This is where specific risk models are written and data is mined to seek evidence of these risk models in the environment. Predictive analytics are very useful for us to look for specific events or the evidence of activities against high value asset repositories where critical data is stored in the enterprise.

The instantiation of these architectural layers has been a multiyear process and continues to evolve with time. Each layer requires specific expertise on the part of analysts and engineers who work with security information. This foundation is very scalable within the enterprise and has helped us articulate and build a number of underlying services.

SBI Services

The underlying services that comprise the SBI solution each play a role in ensuring the timely availability of current and older information that has been collected through our detective control sources.

Transaction data services are provided via an extract, transform, and load (ETL) process from one repository to the common logging service. This includes, but is not limited to, things like access logs from applications, firewalls, VPN services, and servers.

Context data services are another example of an ETL process; they provide relevant information related to employee data, IP-address-to-location mapping, and time information, as well as hardware asset information and the software applications in use within the environment.

Stream data services are a third example of an ETL process, but they handle streams of log data that arrive in near real time (that is, network intrusion sensor alerts) and are put into the common logging service, where custom business logic is used to filter, normalize, and aggregate these events.

Analysis and mining services are process cubes that include measures, dimensions, and key process indicators. These services are used to run model queries, to execute common data mining tasks, to fill reporting databases, and to capture metrics from all other service components.

Reporting services are the primary interface for customers and stakeholders to consume SBI information. These consist primarily of web reports; ad hoc queries through SQL, cube, and drilldown views; and web service endpoints. Reporting services also include dashboards and decision support tools, along with data mining viewers.

Key Use Cases

We continue to explore new use cases internally and have monthly meetings where stakeholders can propose new use cases, but our SBI program was built on some primary use cases.

First, we monitor for at-risk behaviors: calculating the components of the "who, what, when" risk equation and triggering the follow-up processes for patterns that meet a certain threshold. The risk models associated with this use case are constantly evolving as we seek new ways to identify threats and intrusion attempts.
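
A highly simplified version of that scoring idea is sketched below; the weights, categories, and threshold are invented for illustration and bear no relation to our actual risk models.

    # Toy "who, what, when" risk score with a follow-up threshold.
    RISK_WEIGHTS = {
        "who":  {"employee": 1, "contractor": 2, "privileged_admin": 3},
        "what": {"read_public": 0, "config_change": 2, "bulk_download": 3},
        "when": {"business_hours": 0, "off_hours": 2},
    }
    FOLLOW_UP_THRESHOLD = 6

    def risk_score(who: str, what: str, when: str) -> int:
        return (RISK_WEIGHTS["who"][who] + RISK_WEIGHTS["what"][what]
                + RISK_WEIGHTS["when"][when])

    score = risk_score("contractor", "bulk_download", "off_hours")
    if score >= FOLLOW_UP_THRESHOLD:
        print(f"score {score}: trigger follow-up process")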

A second use case is the collection of incident data. Once we know that an incident has occurred, we need to be able to pull all of the relevant data based on an identity, asset, or resource. This is typically accomplished using our analysis and reporting services to examine events within our common logging framework.

Once we have collected any available incident-related data, the analysis process kicks off to assess the impact of an incident and to recommend any corrective or preventative actions that should be taken based on the type of incident.

We also have a use case for managing security configuration within the environment that is supported by our SBI services. This is the process of keeping hardware and software configuration in line with our security requirements and reporting when there have been excursions from policy.

Another key use case is the ability to use SBI systems to quantify the risk of a threat or scenario, analyze the risk and its potential impact, rank the risks seen in the environment, and either mitigate them or make recommendations on mitigation steps and on when we should accept risk in the environment. Without an SBI system in place it is difficult to facilitate the risk management process.

Federation of SBI

As we move to external cloud-based services it becomes critical that we are able to take information learned from external sources and include it with our internal security business intelligence repositories. We are just embarking on this journey, but we know the types of information we will be seeking, along with some key elements that will make it possible to externalize applications and services.

The first step will be to have near real-time event streams of any log information coming from a service provider back to our common logging service. This is necessary for us to identify threats and intrusion attempts against our resources whether they are inside Intel, at a partner's site, or at a cloud service provider.

A second objective is to be able to aggregate and correlate real-time threat information with our SBI system so that we can execute the risk management process. We are seeking ways to facilitate the communication of threat management information to Intel from external providers.

A third objective in the future will be to tag information based on where it is coming from (outside Intel or inside) and be able to associate a confidence level in the information over time. This will allow us to make better risk management decisions and more proactively deal with incidents and investigations.

The combination of internal and external data will be a powerful tool for Intel going forward and through the maturity of anonymization techniques we see a time when we can share information across companies and entities to raise the overall bar of information security around the world.

Conclusions and Next Steps

The path to a new compute model can be long and complicated—this has been true with our journey to cloud computing. Intel IT has made great strides with our internal cloud computing environment and is now embarking on the process of extending to external service providers.

As we move down the external path there are several security considerations that need to be incorporated and we believe that tackling the four topics in this article will go a long way in enabling our computing needs for the future.

Copyright

Copyright © 2013 Intel Corporation. All rights reserved.
Intel, the Intel logo, and Intel Atom are trademarks of Intel Corporation in the U.S. and other countries.
*Other names and brands may be claimed as the property of others.