This section of the Private Cloud Security Considerations Guide covers a number of security design considerations that you will need to think about, together with options for making the best decisions when securing your private cloud deployment.
Authors:
Yuri Diogenes – Microsoft
Tom Shinder – Microsoft
Anthony Stevens – Content Master
Reviewers (for the updated version):
Clint Rousseau – Microsoft
Avery Spates – Microsoft
Jeremy Girven – Microsoft
Fernando Cima – Microsoft
Frank Koch – Microsoft Corporation
Scott Culp – Microsoft Corporation
Allen Brokken – Microsoft Corporation
The Private Cloud Security v-team, Microsoft Corporation
This article is part of the Private Cloud Security Considerations Guide. The intent behind this article is to provide a web-based resource for the same information that is contained in the Private Cloud Security Considerations Guide downloadable Word document. The following sections from the downloadable Word document are available for reading online:
- Private Cloud Security Considerations Guide – Introduction and Overview
- Private Cloud Security Considerations Guide – Security Design Considerations
- Private Cloud Security Considerations Guide – Private Cloud Security Challenges
The private cloud security model presented in the figure below uses the same design as the private cloud reference model but replaces the capabilities with mechanisms for implementing security. You can use this model to understand the private cloud areas in which you need to include security considerations throughout your design process. In the figure below you can see how these components tie in to the different layers of the private cloud reference model:
By leveraging the Private Cloud Security Model, you have the opportunity to re-examine the provision of security within your datacenter. You can take a holistic view of the central importance of security and ensure that you achieve this goal within your private cloud design. The following sections cover in more detail the security considerations for the components presented in the above figure.
Security Architect’s Alert:
In 2013, the Cloud Security Alliance (CSA) released an updated version of its Reference Architecture, which can also be used as a reference for private cloud security considerations. You can also use the CSA's Scenario Application to compare business scenarios against the reference architecture.
The private cloud must implement security in all steps of the design process; it must be the foundation for the entire design. Every transaction must then pass through this security wrapper on any data transition within the cloud, for example:
- Client to the service delivery layer
- Service delivery layer to the software layer
- Software to platform layers
- Platform to infrastructure layers
- Provider to the management stack
- Management stack to the software, platform, or infrastructure layers
In addition, security applies to all intra-layer communications, to data being processed, and to data at rest. The exact security mechanism applied will depend on the data type, the source and destination layers, or the environment in which that data is being transmitted, processed or stored.
For example, software developed for cloud implementations should follow the security development lifecycle (SDL) guidelines.
For more information on these guidelines, see Microsoft Security Development Lifecycle, at http://www.microsoft.com/security/sdl/default.aspx.
4.1.1 Identity and Access Management
Identity and access management (IdAM) covers the overarching issue of establishing identity and then using that identity to control access to resources. Identity and access management is fundamental to private cloud design, as you must be able to establish the identity of a cloud consumer and then manage their access to resources within the cloud environment. However, it is important to remember that IdAM also applies to administrators and to services that may access your private cloud.
IdAM can include the following security topics:
- Directory service
- Security policies
- Credential management
The first functionality that your IdAM framework must provide is that of establishing identity through the process of authentication, for example by requiring a user to enter a user name and a password. These credentials are then checked against a directory service, metadirectory, or other authentication mechanism, typically by using some form of hashing algorithm so that the user's password is not transmitted across the network.
If the authenticating mechanism validates the user credentials, then the operating system or federation environment generates a token. This token may contain information about the user and the groups of which that user is a member. Alternatively, in a federated environment, the token may contain one or more claims about the user. Note that this token does not contain any permissions.
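The authentication flow described above can be sketched in a few lines. This is an illustrative sketch only: the directory entry, salt handling, and token shape are assumptions, and a real deployment would delegate this to the directory service rather than implement it by hand. Note that the token records identity and group membership, never permissions.

```python
# Minimal sketch: credentials are verified against a salted hash (the
# plaintext password is never stored), and a successful check yields a
# token listing the user's groups -- but no permissions.
import hashlib
import os
import secrets

def hash_password(password, salt):
    # PBKDF2 is a standard password-hashing construction in the stdlib.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Hypothetical directory entry for one user.
salt = os.urandom(16)
directory = {
    "alice": {"salt": salt,
              "pw_hash": hash_password("S3cret!", salt),
              "groups": ["consumers", "tenant-a"]},
}

def authenticate(username, password):
    entry = directory.get(username)
    if entry is None:
        return None
    candidate = hash_password(password, entry["salt"])
    if not secrets.compare_digest(candidate, entry["pw_hash"]):
        return None
    # The token carries identity and group membership, not permissions.
    return {"user": username, "groups": entry["groups"]}

token = authenticate("alice", "S3cret!")
```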
The second part of IdAM is the access management part, which includes the process of authorization. Authorization and authentication have to work together both to identify users and control the resources that they access.
In server-based computing, authorization typically involves setting permissions on objects, such as files, folders, shares, and processes. In virtualized environments, you also need to set permissions on virtual machines and virtual networks. In private cloud implementations, you should also control compute resources, storage groups, and service endpoints.
Access to a resource is determined by comparing a user's access token with the permissions set on the resource. Typically, these resource permissions are cumulative, so a user with read permission from their own account and read and write permission resulting from group membership has read and write access to the resource.
Deny permissions trump allow permissions, so a user with read and write permission from their personal account but who is a member of a group that is denied access to the resource will not have access to the resource.
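The cumulative-allow, deny-overrides evaluation described above can be expressed as a short function. The ACE format here is illustrative, not any particular operating system's on-disk format.

```python
# Sketch of the evaluation rule: permissions from the user's account and
# groups are cumulative, but any matching deny entry overrides all allows.
def effective_access(token, acl):
    """token: {'user': ..., 'groups': [...]}; acl: list of ACEs like
    {'principal': name, 'type': 'allow'|'deny', 'rights': {'read', ...}}."""
    principals = {token["user"], *token["groups"]}
    allowed, denied = set(), set()
    for ace in acl:
        if ace["principal"] in principals:
            (denied if ace["type"] == "deny" else allowed).update(ace["rights"])
    return allowed - denied  # deny trumps allow

token = {"user": "alice", "groups": ["finance"]}
acl = [
    {"principal": "alice",   "type": "allow", "rights": {"read"}},
    {"principal": "finance", "type": "allow", "rights": {"read", "write"}},
]
assert effective_access(token, acl) == {"read", "write"}  # cumulative

# A deny on any of the user's principals removes the right entirely.
acl.append({"principal": "finance", "type": "deny", "rights": {"write"}})
assert effective_access(token, acl) == {"read"}
```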
In private and hybrid cloud environments, identity must be able to flow dynamically between resources that may not have any common mechanism for exchanging identity information. You can accomplish this task by using federation technologies, which implement claims-based authentication against a centralized identity store, such as a directory service. Services, applications, and other resources can then use these claims to check the user's identity, based on a federation trust model between two or more federation providers.
4.1.4 Role-Based Access Control
Role-Based Access Control (RBAC) is at the heart of user access and control to private cloud resources. Private clouds should abstract away hardware, networks, storage devices and capacity into logical groups of resources that may run on disparate systems.
RBAC should be used to grant access to, and control capacity for, logical resources. For example, a resource pool with 100 CPUs, 100 GB of RAM, and 10 TB of storage on GUESTNET1 and GUESTNET2 should use domain-based access to assign users to this grouping of resources.
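The role-to-pool mapping described above can be sketched as follows. Pool properties, role names, and actions are all hypothetical; the point is that access is granted per role against a logical pool, not per physical machine.

```python
# Illustrative RBAC over a logical resource pool: the pool abstracts
# capacity (CPU, RAM, storage, networks) and access is checked by role.
pool = {
    "name": "Pool-A",
    "cpus": 100, "ram_gb": 100, "storage_tb": 10,
    "networks": ["GUESTNET1", "GUESTNET2"],
}

# Hypothetical role assignments for this grouping of resources.
role_assignments = {
    "tenant-a-operators": {"pools": {"Pool-A"},
                           "actions": {"deploy", "start", "stop"}},
    "auditors":           {"pools": {"Pool-A"}, "actions": {"view"}},
}

def is_authorized(roles, pool_name, action):
    # A user is authorized if any of their roles grants the action
    # on the named pool.
    return any(
        pool_name in role_assignments[r]["pools"]
        and action in role_assignments[r]["actions"]
        for r in roles if r in role_assignments
    )

assert is_authorized(["tenant-a-operators"], "Pool-A", "deploy")
assert not is_authorized(["auditors"], "Pool-A", "deploy")
```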
4.1.5 Anonymous Permissions
A key element with authorization in private cloud environments is the control of anonymous permissions. Anonymous permissions enable unauthenticated users or services to access resources. Many operating systems allow anonymous logons, which are typically used for public access to web sites. With most private cloud implementations, users would typically be known to the organization and therefore authenticated. However, there are scenarios where anonymous access might be required, for example to enable members of the public to interact with an online communication session as a guest. In all cases, there should be strict partitioning between any resource that allows anonymous access and ones that require authenticated access.
4.1.6 Federation Claims
Federation is a mechanism for authenticating users from one security domain so that they can access resources in another domain, without the requirement for an intrinsic trust relationship between the two organizations. The organizations themselves may be running different operating systems, directory services, certification authorities, and security protocols. Hence, this approach is particularly useful in hybrid cloud implementations and is carried out using claims-based authentication.
Key Security Definition:
A claim is a collection of assertions about a user, such as their user name, email address, or groups of which they are a member. Claims are generated by a security token service (STS) in one organization where that user is able to authenticate against that organization’s directory service. Claims are electronically signed to prevent tampering in transit and the communication channel over which claims are exchanged may also be encrypted.
To exchange authentication information, the two organizations establish a public key infrastructure (PKI) trust between the STS in one organization and the STS in the other organization. When a user wants to authenticate to a resource controlled by the other organization, their logon request is redirected to their home realm and authenticated by the IdAM system in that realm. After authentication, the home STS generates the cryptographically signed security token containing the claims about the authenticated user. This token is then submitted to the requested service at the other organization.
Because the home realm has authenticated that user and the cryptographic signing guarantees that the token has not been altered in transit, then the target service accepts the token and, depending on the claims in that token, authorizes that user to access the service.
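The issue-and-verify pattern above can be sketched briefly. A real federation deployment signs tokens asymmetrically (for example SAML or JWT over a PKI trust); the HMAC shared secret below stands in for that trust purely for illustration, and all names are assumptions.

```python
# Sketch of a signed claims token: the home STS signs the claims so the
# relying party can detect tampering in transit.
import hashlib
import hmac
import json

SHARED_TRUST_KEY = b"established-out-of-band"  # hypothetical trust secret

def issue_token(claims):
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SHARED_TRUST_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def verify_token(token):
    expected = hmac.new(SHARED_TRUST_KEY, token["payload"],
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["signature"]):
        return None  # token was altered in transit
    return json.loads(token["payload"])

token = issue_token({"name": "alice", "email": "alice@example.com",
                     "groups": ["hr"]})
claims = verify_token(token)
assert claims["email"] == "alice@example.com"

# Any change to the payload invalidates the signature.
token["payload"] = token["payload"].replace(b"hr", b"it")
assert verify_token(token) is None
```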
Federation is particularly important in both private and hybrid cloud environments, as services may run in completely different security contexts. In the case of private cloud implementations, the home realm for every user may be the organization’s directory service and applications or services can be configured to establish federated trust relationships with that STS. The services at the service delivery layer, the applications in the software layer, the virtual machines within the platform layer, the operating systems integral to the infrastructure layer, and the management consoles and services forming the management stack can then all use this federated environment for authentication, authorization, and RBAC.
4.1.7 Auditing
Together with authentication and authorization, auditing is an essential component of your private cloud IdAM environment, particularly with regard to establishing compliance and implementing effective governance. With public cloud implementations, you are also likely to want to achieve Statement on Auditing Standards (SAS) 70 Type I or Type II accreditation, or possibly ISO 27001 compliance, to demonstrate to your customers that you take security seriously.
With private cloud implementations, showing compliance with external auditing standards may not be so important. What you will want are the answers to the following types of questions:
- Which user accounts have been locked out over the last day, week, or month?
- Who has attempted to access resources to which they do not have permission?
- Have any administrators changed access permissions that would enable them to view consumer data?
These are, in effect, relatively straightforward questions to answer, and a central auditing system should help you identify when these events occur. However, there are questions that might require more sophisticated analysis:
- Are any users or administrators behaving in a suspicious manner?
- Are access requests for one resource being redirected to another resource?
- Are attempts being made to communicate between virtual machines or between virtual machines and host computers?
As mentioned earlier in this paper, a significant change in private cloud implementations is that you can no longer assume that attackers are unauthenticated. Hence your auditing implementation must be able to identify unexpected or suspicious activity and be able to filter out that activity from the thousands of regular operations without imposing an unacceptable performance burden or generating excessive numbers of false positives.
Typically, the starting point for auditing is the directory service and most commercial directory services implement effective monitoring capabilities. However, you are also going to need to monitor at other levels within your private cloud infrastructure. These levels include monitoring the following resources:
- Service endpoint
- Virtual machine monitoring through the operating system
- Host computer operating system
You can use an audit collection and collation service to forward these events to a centralized database. To ensure that events do not swamp the database, you would implement event filtering so that only events of particular interest are collated. Analysis tools can then interrogate this database to identify suspicious activities.
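The filter-then-collate pattern just described can be sketched with an in-memory database. The event shape and the list of "interesting" event types are illustrative assumptions; a production system would use a dedicated audit collection service.

```python
# Sketch: events are filtered before collation so that only events of
# particular interest reach the central audit database.
import sqlite3

INTERESTING = {"logon_failure", "permission_change", "account_lockout"}

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE audit (source TEXT, event TEXT, detail TEXT)")

def collect(source, events):
    # Forward only events of interest to the central store.
    rows = [(source, e["event"], e["detail"]) for e in events
            if e["event"] in INTERESTING]
    db.executemany("INSERT INTO audit VALUES (?, ?, ?)", rows)
    db.commit()

collect("vm-0042", [
    {"event": "logon_success", "detail": "alice"},          # filtered out
    {"event": "logon_failure", "detail": "alice, bad pw"},  # collated
    {"event": "permission_change", "detail": "acl on /data"},
])

# Analysis tools can then interrogate the database.
count = db.execute("SELECT COUNT(*) FROM audit").fetchone()[0]
```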
Auditing must be of sufficient precision and granularity to track the actions of a single individual all the way through the entire private cloud environment. This end-to-end auditing of individuals is vital for verifying the actions of your administrators.
The auditing database itself also requires auditing and management. Unless properly managed, the number of events can cause the database to grow excessively.
Finally, the auditing output must directly support any compliance requirements that apply to your organization. Ideally, this information would be displayed as a dashboard, with easily-assimilated indicators showing current and historic levels of compliance.
4.1.8 Data Protection at Rest
The main aim of security in cloud environments is to protect data both at rest and in transit. Hence, data protection is a major factor that needs to be incorporated into the very fabric of your private cloud service blueprint. Data protection at rest includes consideration of the following factors:
- Software encryption or hardware encryption
- File and folder encryption or full disk encryption
- ACLs and access control entries (ACEs)
- Storage policies
Protection of data at rest requires that information to be encrypted. This encryption can be applied in a number of ways, depending on factors such as cost, performance, and ease of configuration.
4.1.9 Hardware Disk Encryption
There are two forms of hardware-based disk encryption. One uses a specialized microprocessor that is part of the disk hardware; the other uses either the main processor or a host bus adaptor (HBA). In both cases, hardware encryption enables the entire disk to be encrypted, which gives rise to the term full or whole disk encryption. Full disk encryption protects the master boot record (MBR), the files and folders, the folder structure and the partition table.
Performance for a disk with dedicated hardware-based encryption is similar to that for a non-encrypted device. Hardware-based encryption with external processing may perform less well if the main processor is busy.
As the whole disk is encrypted, this arrangement protects the disks if they are physically removed from the private cloud environment. As private cloud architectures stress resiliency over redundancy and tend to use arrays of hard disks, this approach prevents data from failed drives being read and also ensures that an attacker cannot read a hard disk that they have physically removed from the environment.
Administrators can instantly and irretrievably wipe a hard disk by using the cryptographic disk erasure process. This process generates a new key for the hard disk, thus making all the old data inaccessible in milliseconds, compared to several minutes for a repeated disk wipe.
Protect Against Brute Force:
The key for hardware disk encryption is typically 32 bytes (256 bits) in length, which gives 2^256, or about 1.16×10^77, combinations and makes a successful brute force attack highly unlikely. However, after the disk is mounted, the operating system has full access to all parts of the disk as the encryption is now provided transparently by the hardware. Hence, the limitation with hardware-based encryption in a private cloud environment is that it uses only one key to encrypt the whole disk, and you cannot use the hardware key to partition data on the disk between different tenants.
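The key-space arithmetic in the note above can be checked directly:

```python
# A 32-byte key gives 2^256 possible values, roughly 1.16 x 10^77
# combinations -- the figure quoted in the note.
key_bits = 32 * 8
combinations = 2 ** key_bits

assert key_bits == 256
assert 1.15e77 < combinations < 1.17e77
```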
4.1.10 Software Disk Encryption
Software-based encryption can work in two ways; it can encrypt either the full disk or just a set of specified files and folders. Unlike hardware-based full disk encryption, software-based full disk encryption of the boot disk does not encrypt the MBR. Performance is also reduced compared to dedicated hardware-based encryption, as the operating system needs to decrypt data on the partition.
File and folder-based software encryption does not encrypt an entire volume but enables you to encrypt individual files and folders. The advantage of file and folder-based software encryption is that you can encrypt different folders with differing encryption keys, thus enabling data partitioning between users or business units that is not possible with full disk encryption.
When you create your design, you will need to consider whether you need disk encryption and then how to apply that disk encryption. Note that you can combine dedicated hardware-based full disk encryption with software-based file and folder encryption to reap the benefits of both systems.
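The per-tenant partitioning that file and folder encryption makes possible can be sketched as key derivation: each tenant's folder key is derived from a master key, so no tenant key can decrypt another tenant's folders. The HKDF-style construction below is built from the standard library for illustration; names are assumptions, and a production design would use a vetted key-management service.

```python
# Sketch: derive a distinct folder-encryption key per tenant from one
# master key, enabling data partitioning between tenants.
import hashlib
import hmac

def derive_folder_key(master_key, tenant_id):
    # HKDF-style extract-and-expand using HMAC-SHA256.
    prk = hmac.new(b"folder-key-salt", master_key, hashlib.sha256).digest()
    return hmac.new(prk, tenant_id.encode() + b"\x01",
                    hashlib.sha256).digest()

master = b"\x00" * 32  # placeholder master key
key_a = derive_folder_key(master, "tenant-a")
key_b = derive_folder_key(master, "tenant-b")

assert key_a != key_b                                   # partitioned
assert key_a == derive_folder_key(master, "tenant-a")   # deterministic
```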
4.1.11 Data Protection in Transit
Data protection in transit is a different proposition to data protection at rest and requires you to consider a range of approaches and technologies to provide effective security. While very few organizations would not consider implementing data encryption for data transiting the Internet, many are still not implementing equivalent levels of encryption and data protection within their organizations. Private cloud environments should also seek to improve security by implementing encryption for every transaction, not just those from the client to the service endpoint. Hence, your design must consider encryption of the following data transit paths:
- Service endpoint to software layer
- Software layer internal communications
- Software layer to platform layer
- Platform layer internal communications
- Platform layer to infrastructure layer
- Infrastructure layer internal communications
- Management layer to service delivery, software, platform and infrastructure layers
- Private cloud to public cloud environment (for hybrid implementations)
- Physical transportation of the data from one datacenter to another
- Public cloud storage to on-premises storage
To use encryption to protect your data in transit, you may consider the following technologies:
- Secure Sockets Layer (SSL) or Transport Layer Security (TLS)
- IP security (IPsec)
- Virtual private networking (VPN)
All of these encryption approaches use symmetric key bulk encryption combined with asymmetric public/private key pair encryption to exchange the bulk symmetric key between the sending and receiving parties. This approach ensures that encryption does not place too great a processing load on the hosts at either end of the encrypted link. If a private cloud implementation requires the service delivery layer to accept large numbers of simultaneous encrypted connections, specialized SSL offload processors are available to offload the initial handshake process that then sets up the symmetric bulk encryption session.
Although SSL 3.0 has been widely accepted as the basis for securing web browsing sessions, this version of the protocol is now seen as less secure than later implementations. The introduction of TLS 1.0 (SSL 3.1) in 1999 further improved security, and the latest version of this protocol is now TLS 1.2 (SSL 3.3), which was implemented as RFC 5246 and published in 2008. TLS operates above the transport layer of the Internet protocol suite; it typically runs over the Transmission Control Protocol (TCP), although the transport layer also includes the User Datagram Protocol (UDP) and the Stream Control Transmission Protocol (SCTP).
In your private or hybrid cloud design, you will most probably use TLS to secure the client or provider to service delivery layer traffic and the service delivery layer to software layer connection.
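One way to enforce the TLS guidance above in application code is to configure the TLS context explicitly, refusing anything older than TLS 1.2 and keeping certificate verification on. This sketch uses the Python standard library's `ssl` module; `ssl.TLSVersion` requires Python 3.7 or later.

```python
# Client-side TLS context that rejects SSL 3.0, TLS 1.0, and TLS 1.1,
# and requires certificate validation and host-name checking.
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# create_default_context() already enables strict verification:
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True
```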
IPsec is a protocol suite that provides encryption, mutual authentication, and cryptographic key exchange during a communication session between two hosts. IPsec operates at the Internet layer of the Internet protocol suite.
VPNs are most commonly used to create secure tunnels through public networks. The advantage of VPN technologies is that after the VPN is created, it acts like a physical network, directing outgoing traffic to the VPN default gateway regardless of the intervening network appliances. VPNs can use a range of technologies, including the Point-to-Point Tunneling Protocol (PPTP) and a combination of IPsec and the Layer 2 Tunneling Protocol (L2TP).
Applications that cannot connect over Internet protocols can typically connect by using VPNs. Hence, you can use VPN connections as part of your private cloud implementation to connect consoles in the management stack to managed services in the software, platform, or infrastructure layers.
Key length has a direct effect both on the speed of encryption and on the security of the resulting data exchange. For bulk encryption of SSL/TLS traffic, a 40-bit symmetric key length is woefully inadequate, and even the 56-bit Data Encryption Standard (DES) key length is regarded as obsolete. 128-bit encryption is still judged strong enough for private cloud use, but many implementations now use the 192-bit or 256-bit symmetric key lengths specified in the more secure implementations of the Advanced Encryption Standard (AES).
There are still restrictions in relation to the export of cryptographic technology. You should check if these restrictions apply to your location, as you may not be able to use longer key lengths.
To exchange the symmetric key, you need to use an asymmetric key pair with equivalent computational security. Because public-private key pairs can be broken by integer factorization as well as brute force, asymmetric keys must be considerably longer than the symmetric key that they are protecting. The following table shows the key lengths that provide equivalent protection for symmetric and asymmetric key types using the Rivest-Shamir-Adleman (RSA) algorithm:
| Symmetric Key Length | RSA Asymmetric Key Length |
| --- | --- |
| 80 bits | 1024 bits |
| 112 bits | 2048 bits |
| 128 bits | 3072 bits |
| 192 bits | 7680 bits |
| 256 bits | 15360 bits |
1024-bit asymmetric keys are now regarded as insecure.
For private cloud implementations, the best balance between security and key length is a combination of 128-bit symmetric keys protected by 2048 or 3072-bit asymmetric keys.
For the highest levels of transport security, you may need to consider data tokenization. This approach is typically used in Payment Card Industry (PCI) environments that must conform to the PCI Data Security Standard. Tokenization replaces the confidential data with values that are not confidential. The confidential data is not transmitted but the token can be used to reference that information from the tokenization data store. This approach is also widely used for medical records, bank transactions, vehicle registration details, and credit card payment information.
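The tokenization approach just described can be sketched in a few lines. The vault here is an in-memory dictionary purely for illustration; a real tokenization data store would be a hardened, access-controlled service.

```python
# Sketch of tokenization: the confidential value never leaves the trusted
# store; only an opaque token is transmitted, and authorized systems
# resolve it back via the vault.
import secrets

class TokenVault:
    def __init__(self):
        self._store = {}

    def tokenize(self, confidential_value):
        # The token is random -- it has no mathematical relation to the
        # data, unlike encryption.
        token = "tok_" + secrets.token_hex(16)
        self._store[token] = confidential_value
        return token

    def detokenize(self, token):
        return self._store.get(token)

vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")  # example card number

assert token != "4111-1111-1111-1111"
assert vault.detokenize(token) == "4111-1111-1111-1111"
assert vault.detokenize("tok_unknown") is None
```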
In a scenario where you have multiple datacenters and you need to physically transport data from one location to another, ensure that the data is also encrypted while in transit. Another scenario in which data moves from one datacenter to another is when you are moving data located in a service provider datacenter (for example, a public cloud storage service) to private cloud storage located on-premises.
Security Operations Punch List:
If you are using Microsoft Azure, you can use the Import/Export capability to move data from Azure storage to on-premises storage. For more information about this capability, see http://blogs.msdn.com/b/windowsazurestorage/archive/2013/11/01/announcing-windows-azure-import-export-service-preview.aspx
4.1.12 Certificate Services
In a private cloud implementation, you may need to use digital certificates for TLS/SSL encryption, for client authentication, server authentication, and for a range of other security-related purposes. In common with non-cloud implementations, you will most probably use X.509 v3 certificates for these activities. However, a significant difference with a private cloud environment will be the requirement to create large numbers of certificates as part of the provisioning process.
Learn by Example:
For example, if you want to provision a secure web server within a virtual machine, then that web server will require a host name and corresponding IP address. To implement TLS/SSL encryption, the provisioning process must also create an X.509 certificate with a common name that matches the host name of the default web site on the virtual machine.
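The name check implied by this example can be sketched as follows. This is a simplified single-label wildcard match for illustration, not a full RFC 6125 implementation, and in practice you would rely on the TLS library's built-in host-name verification.

```python
# Sketch: a certificate is only valid for a host if its common name (or a
# subject alternative name entry) matches the host name.
def name_matches(cert_name, host_name):
    cert_name, host_name = cert_name.lower(), host_name.lower()
    if cert_name == host_name:
        return True
    if cert_name.startswith("*."):
        # The wildcard covers exactly one left-most label.
        suffix = cert_name[1:]  # e.g. ".contoso.com"
        return (host_name.endswith(suffix)
                and "." not in host_name[: -len(suffix)])
    return False

assert name_matches("www.contoso.com", "www.contoso.com")
assert name_matches("*.contoso.com", "app1.contoso.com")
assert not name_matches("*.contoso.com", "a.b.contoso.com")  # one label only
assert not name_matches("*.contoso.com", "contoso.com")
```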
In consequence, you will need to create certificates and bind them to web sites or applications as part of your provisioning process. This requirement is likely to mean that you need to implement an internal certification authority (CA) to generate these certificates or connect to a public CA and request a new certificate for each provisioning action that requires encryption support.
In public cloud implementations, the certificates that you provision would have to show a certificate chain relationship to a root certificate issued by a trusted CA. The client computer would have this root certificate installed in its trusted root certificate store.
With private cloud implementations, you have the option of using a private CA within your cloud infrastructure to respond to these provisioning requests. However, you would then have to ensure that your client computers trusted the issued certificates. This trust is automatically established with domain membership in Microsoft Windows if the root CA certificate is published in Active Directory, but with non-domain-joined computers and mobile devices you would have to arrange for the root certificate to be installed in each client’s trusted root store.
4.1.13 Security Monitoring and Response
Just as security needs to be pervasive in private cloud environments, so does your security monitoring. This security monitoring also needs to be tightly integrated with your overall monitoring environment.
Effective security monitoring must not only be integral to your private cloud operation but it must be able to cope with the rapid expansion and self-service elements of private cloud operation. For example, because consumers can provision and deprovision resources rapidly, you may not know exactly how many virtual machines are currently in operation or how many applications you are hosting. And if you don’t know what resources are online at any one time, you can’t secure those resources.
When a virtual machine is started up, you need to ensure that either agent-based or agentless monitoring starts as soon as the virtual machine is operational and that the results of that security monitoring are run through the intrusion detection system and collated in the auditing database. You also need to be able to confirm that the virtual machine conforms to your security policies and is not acting suspiciously.
Auditing and security monitoring by themselves are not enough; you also need to respond to incidents in a timely manner and take decisive action to contain a security breach. In cloud-based environments, this requirement may mean implementing tools that take automatic action rather than wait for human intervention. For example, if a user appears to be acting suspiciously, your security management software should be able to terminate that user’s session and disable his or her account before alerting an administrator.
4.1.14 Security Management
Security management is the overall capability that provides the ability to manage and control all security aspects of your private cloud implementation. You will need to consider the following factors when creating your private cloud service blueprint:
- Proactive and reactive management: In private cloud environments, security management needs to be both proactive and reactive. You must implement proactive security through risk assessments, threat modeling, data classification, security policies, preconfigured virtual machine templates, access rights, update procedures, reduced attack surfaces and so on. You will also need to implement reactive security management by using security monitoring and automated security responses.
- Risk assessments. By identifying the types of risk inherent to your operations, you can identify the major security threats to your private cloud infrastructure.
- Threat modeling. From your risk assessment, you can model the threats and classify them according to severity. As has been pointed out already, private cloud environments change the nature of the threat as well as provide new and different potential attack vectors.
- Data classification. Not all data requires classification at the same security level. For example, your organization’s published policy statement on sustainable IT does not require the same protection as a spreadsheet with financial projections for a new product launch. Your security management must allow for these differences and afford the correct level of protection for different data types.
- Security policies. Security policies need to be defined, created, applied, monitored, adjusted, and maintained. These policies should apply enough protection in a loosely-coupled environment to secure the data without hindering employees from carrying out their work.
- Attack Surface Reduction. Attack surface reduction is important at every level within the private cloud architecture. This principle also applies to virtual machines.
- Update procedures. Your security management environment must implement processes for updating host operating systems, virtual machines, development environments and applications rapidly and reliably. With private cloud environments, you have additional options which may involve provisioning or cloning a virtual machine, applying security updates to the clone, and then reverting back to the original if the updates cause malfunctioning.
Note that in certain situations, compliance or application requirements may dictate that the provider does not have access to the virtual machines before provisioning. Hence, the provisioning, updating, and mounting process must be fully automated.
Your security management environment must control these processes where possible and verify that any changes that take place do not affect the operational integrity of your environment. In addition, your security management environment must not introduce additional security vulnerabilities, so principles such as encrypted communications with IPsec, RBAC, two-factor authentication and attack profiling must apply to the management environment as well.
Now that you have examined the factors within the security wrapper, this paper presents the security issues that apply at the Infrastructure layer.
4.2 Infrastructure Security
Infrastructure security is the most comprehensive aspect of private cloud security, as it starts with physical security and goes up as far as security of the hypervisor element of the virtualization environment. The layers of infrastructure security consist of:
- Physical security
- Supply security
- Facility security
- Hardware security
- Network security
- Compute security
- Storage security
- Host operating system security
- Hypervisor security
The figure below shows how these security elements apply at the corresponding levels of the infrastructure layer.
4.2.1 Physical Security
The physical security requirements of private cloud implementations tend not to differ substantially from those of an internal datacenter. However, as all security starts with good physical security, it is worthwhile highlighting two factors that are of importance.
- Physical access control. Effective security starts with restricting the people who have access to the physical hardware. Newer technologies such as smartcards, fingerprint scanners, and retina scanners are now in common use but all are equally ineffective if you haven’t put bars over the windows to the room that houses your private cloud hardware. Many organizations pay for extensive electronic penetration testing yet fail to implement any form of physical security assessment. A plate glass window rapidly transforms into a door after the application of the heavy end of a fire extinguisher. In consequence, doors to the data center need to be able to withstand physical attack using any item likely to be readily available to an intruder. Employees must participate in a security awareness program to avoid common mistakes such as allowing tailgating (following people through doors) and shoulder surfing.
- Physical data leakage. Because private cloud implementations tend to make greater use of simple arrays of hard disks and to swap out failed disks on pre-determined maintenance schedules rather than each time a disk fails, multiple hard disks may be replaced at the same time. Unless effective security controls ensure that these hard disks are wiped before they are removed from the premises, there is the potential for data leakage.
4.2.2 Energy Supply Security
A private cloud implementation requires electrical power to operate. It may also require a reliable Internet connection. Unless the provision and security of these services is assessed and the risks to each supply evaluated, your overall security cannot be accurately estimated.
A backup electricity generator and/or a dual power source (dual utility feeds) are approaches to improving the security of the electricity supply. Although significantly more expensive, mirrored or standby data centers in another location can provide equivalent security combined with resilience in the case of significant disruption at one site.
4.2.3 Facility Security
Facility security addresses issues with providing essential services to the infrastructure, such as cooling facilities, power supplies, cabling, and physical networking. In these areas, private cloud implementations are more closely aligned to dynamic data centers, where rapid provisioning and deprovisioning may result in large fluctuations in cooling and power requirements from powering up and down the physical servers. Additionally, in private cloud environments, the physical networking topology may differ to account for the change in trust level of the internal network.
4.2.4 Network Security
Many network architectures include a tiered design with three or more tiers such as core, distribution, and access. Designs are driven by the port bandwidth and quantity required at the edge, in addition to the ability of the distribution and core tiers to provide higher speed uplinks to aggregate traffic. A dedicated management network is a frequent feature of advanced data center virtualization solutions.
Most virtualization vendors recommend that hosts be managed via a dedicated network so that there is no competition with tenant traffic and to provide a degree of separation for security and ease of management purposes. This historically implied dedicating a network adapter per host and port per network device to the management network.
Private cloud networking must deal with multi-tenancy issues, but access controls to perimeter networks should follow established best practices, such as deploying edge servers and devices with strict traffic flow shaping. To support multi-tenancy, the private cloud needs to be able to support re-use of IP addresses through port-based/private VLANs and remote tunnels through virtual security gateways. Managing the network environment in a private cloud can present challenges that must be addressed. Ideally, network settings and policies are defined centrally and applied universally by the management solution.
For VLAN-based network segmentation, several components, including the host servers, host clusters, Virtual Machine Manager, and the network switches, must be configured correctly to enable both rapid provisioning and network segmentation. On the hypervisor hosts and host clusters, virtual switches should be defined on all nodes so that a virtual machine can fail over to any node and maintain its connection to the network. At large scale, this can be accomplished via automation.
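The automated consistency check described above can be sketched as follows. This is a hypothetical illustration, not a product API: the node and switch names are invented, and a real deployment would query the hypervisor management tooling instead of a static dictionary.

```python
# Hypothetical sketch: verify that every node in a host cluster defines the
# virtual switches a VM needs in order to fail over to any node.
REQUIRED_SWITCHES = {"Tenant-vSwitch", "Management-vSwitch"}  # illustrative names

def missing_switches(cluster_nodes):
    """Return a dict of node -> sorted list of required switches it lacks."""
    gaps = {}
    for node, switches in cluster_nodes.items():
        absent = REQUIRED_SWITCHES - set(switches)
        if absent:
            gaps[node] = sorted(absent)
    return gaps

# Example inventory: host02 is misconfigured and would break failover.
cluster = {
    "host01": ["Tenant-vSwitch", "Management-vSwitch"],
    "host02": ["Tenant-vSwitch"],
}
```

A provisioning pipeline could run such a check before admitting a node to the cluster, flagging any node whose virtual switch configuration drifts from the template.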
Private clouds inherently retain the concept of internal trusted networks and external untrusted networks. The following is an example of how these networks can be distributed:
Trusted networks should be:
- Cluster communication (non-routable)
- Management network (non-routable)
- Virtual machine migration network (non-routable)
- Storage network (non-routable)

External untrusted networks should be:
- Guest networks – VLAN tagged or not
- Perimeter networks – controlled by firewall rules
- Secure networks – highly controlled networks that require IPsec or other authorization before communication is initiated
All of these networks are presented to the private cloud hosts, and the virtual switch is used to manage access and traffic flows.
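The trusted/untrusted split above can be captured as data so that a management tool can audit it automatically. This is a minimal sketch, assuming a simple attribute model of the author's example networks, not any product's configuration format.

```python
# Illustrative classification of the private cloud networks listed above.
NETWORKS = {
    "Cluster communication": {"trusted": True,  "routable": False},
    "Management":            {"trusted": True,  "routable": False},
    "VM migration":          {"trusted": True,  "routable": False},
    "Storage":               {"trusted": True,  "routable": False},
    "Guest":                 {"trusted": False, "routable": True},
    "Perimeter":             {"trusted": False, "routable": True},
    "Secure":                {"trusted": False, "routable": True},
}

def policy_violations(networks):
    """Flag any network marked trusted that is nonetheless routable,
    since the trusted networks in this model must be non-routable."""
    return [name for name, attrs in networks.items()
            if attrs["trusted"] and attrs["routable"]]
```

Running `policy_violations(NETWORKS)` against the table above returns an empty list; a misconfigured entry would be reported by name.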
However, this change does not mean that the same firewall rules must apply to cloud-to-internal-network connections and cloud-to-Internet connections. Instead of an all-encompassing network labeled “External”, you would create a new network called “Internal Network” and assign an IP address range to that network. For communications to take place, you then specify which ports are open and which protocols are allowed through the perimeter network into the cloud network. You can then configure additional firewall rules that govern communications between the internal network and the cloud network.
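The rule model just described can be sketched as a default-deny evaluation over per-network rules. The address ranges, ports, and protocols below are hypothetical examples, not a recommendation; a real deployment would express these rules in its firewall's own policy language.

```python
# Minimal sketch: per-source-network rules into the cloud network,
# with default deny for anything not explicitly allowed.
import ipaddress

RULES = [
    # (source network, destination, protocol, port, action) - illustrative values
    ("10.0.0.0/8", "cloud", "tcp", 443,  "allow"),   # internal network: HTTPS
    ("10.0.0.0/8", "cloud", "tcp", 3389, "allow"),   # internal network: RDP
    ("0.0.0.0/0",  "cloud", "tcp", 443,  "allow"),   # Internet: HTTPS only
]

def is_allowed(src_ip, dest, proto, port):
    """Return True if the first matching rule permits this connection."""
    addr = ipaddress.ip_address(src_ip)
    for net, rule_dest, rule_proto, rule_port, action in RULES:
        if (addr in ipaddress.ip_network(net) and dest == rule_dest
                and proto == rule_proto and port == rule_port):
            return action == "allow"
    return False  # default deny
```

Note how the internal network gets a broader rule set than the Internet, while both remain subject to explicit allow rules rather than blanket trust.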
You will be monitoring your firewalls as part of your overall security monitoring and using this information to ensure that client access attempts follow prescribed paths to specific services. Any attempts to access a service or virtual machine from a session that should not be connecting to that resource should result in automatic termination of the session, locking out the account, and alerting security to carry out a forensic follow-up examination. As with all IT infrastructures, the routers, firewalls, and switches must be kept updated, otherwise these components can provide an open door for intruders.
A private cloud implementation will also make heavy use of network partitioning through virtual local area networks (VLANs). These VLANs can be implemented both through the physical network switches and as part of the virtualization environment. VLANs help ensure that packets can only travel along the network segments that they should be traversing.
The dynamic nature of virtualized environments may cause the location of a server and its corresponding storage to change, based on shared resource usage and dynamic virtual machine placement. These dynamic movements may increase network latency and reduce routing effectiveness. In addition, network topologies such as spanning tree layouts may not work in private cloud environments.
Shared network services such as DNS also need to be included in the security assessment of your private cloud implementation. As with most network designs, any service should provide redundancy and should not implement a single point of failure. Any known vulnerabilities in these services must be addressed and best security practice applied.
Private clouds can also take advantage of converged networks, where different types of network traffic share the same Ethernet network infrastructure. Figure 8 shows an example of a converged network:
Security Infrastructure Note:
For an example of how to configure the converged network shown in Figure 8 using Windows Server 2012, see the article Network Recommendations for a Hyper-V Cluster in Windows Server 2012.
Network virtualization is another capability that should be leveraged in a Private Cloud. Network Virtualization decouples the customer’s virtual networks (tenant) from the physical network infrastructure of the hoster (private cloud owner), providing freedom for workload placements inside the datacenters. Virtual machine workload placement is no longer limited by the IP address assignment or VLAN isolation requirements of the physical network because it is enforced within the hypervisor hosts based on software-defined, multitenant virtualization policies. Some other advantages of using Network Virtualization are:
- Enables easier management of decoupled server and network administration: Server workload placement is simplified because migration and placement of workloads are independent of the underlying physical network configurations. Server administrators can focus on managing services and servers, and network administrators can focus on overall network infrastructure and traffic management. This enables datacenter server administrators to deploy and migrate virtual machines without changing the IP addresses of the virtual machines.
- Simplifies the network and improves server/network resource utilization: The rigidity of VLANs and the dependency of virtual machine placement on a physical network infrastructure results in overprovisioning and underutilization. By breaking the dependency, the increased flexibility of virtual machine workload placement can simplify the network management and improve server and network resource utilization.
Security Infrastructure Alert:
For more information about Network Virtualization in Hyper-V, see the article Hyper-V Network Virtualization Gateway Architectural Guide.
4.2.5 Hardware Security
Hardware security in private cloud environments must take account of the differing features of private cloud implementations. Private cloud implementations typically have very high levels of commoditization, so any hardware security devices will need to be implemented on large numbers of host computers, hard disks, or network cards.
These hardware security devices can include hardware security modules (HSM) to protect cryptographic keys or offload cryptographic processing, most commonly for asymmetric key calculations. Note that symmetric key cryptographic calculations tend to be executed in software on the main processor, as the time to transfer the data to an external device and back again reduces the computational advantage of the dedicated processor in the HSM.
Hardware security devices can also include Trusted Platform Modules (TPM) for disk encryption. However, private cloud implementations are more likely to use a combination of hardware full disk encryption alongside software file and folder encryption.
Host security can include setting the basic input/output system (BIOS) to require a password at boot time and to access the BIOS settings. Although there are known mechanisms for defeating BIOS security, this form of hardware security on the host computers still has a place in private cloud environments.
You should include firmware such as system BIOS updates as part of your maintenance cycles. Dynamic migration of virtual machines simplifies this process: virtual machines can be moved off a host computer, which is then updated and brought back online, after which the virtual machines are returned to that host.
In the BIOS, ensure that booting from unauthorized sources is disabled. Turn off all unused USB ports and disable CD/DVD-ROM drives.
4.2.6 Compute Security
Private cloud environments consist of significant numbers of compute resources, implemented either as small form factor hardware compute units (for example, blade servers) or as virtual machines running on host computers (which may also use a blade format). It is essential that there is effective security on the processes that run within the associated memory of these compute resources.
Process isolation enables tight control of processes running within the operating system and constrains operations only to designated objects or targets. Authorization rules govern which initiators can access which targets. In private cloud implementations, it is important that processes that run on compute resources are tightly locked into ownership of those processes.
Memory segments should also be routinely wiped or set to zero when allocated to another process. In addition, areas containing data such as the page file should be wiped when the computer powers down.
Using industry-standard capabilities such as Secure Boot to help ensure that the servers that are part of the compute node boot using only software that is trusted by the manufacturer is also important. When the server starts up, the firmware checks the signature of each piece of boot software, including firmware drivers (Option ROMs) and the operating system. If the signatures are good, the server boots, and the firmware gives control to the operating system.
Secure Boot requires a PC that meets the UEFI Specifications Version 2.3.1, Errata C or higher. Secure Boot is supported for UEFI Class 2 and Class 3 PCs. For UEFI Class 2 PCs, when Secure Boot is enabled, the compatibility support module (CSM) must be disabled so that the PC can only boot authorized, UEFI-based operating systems. Secure Boot does not require a Trusted Platform Module (TPM).
For more information about Secure Boot in Windows Operating Systems read Secured Boot and Measured Boot: Hardening Early Boot Components Against Malware.
Finally, memory dump files can contain sensitive application information, including user names and passwords. You should ensure that any memory dump files are secured against unauthorized access.
4.2.7 Storage Security
The encryption factors in storage security have already been discussed. However, there are other considerations that arise from the use of pooled storage in a multi-tenant environment.
As with compute resources, storage allocation should ensure effective partitioning between tenants and strictly enforce ownership of the storage space. When an IaaS compute resource is provisioned, it is allocated the dedicated compute and storage resources and access to those resources is restricted to the commissioning tenant. In operation, that storage space is kept isolated from other tenants. Encryption and ACLs prevent other users from accessing the data stored in those locations.
Ensure that you use storage data classifications as part of your security zones (for example, PCI, FISMA, and general). Private cloud security zones are created through storage partitioning, which creates separate volumes for each data classification. For example, suppose a private cloud is designed to support both government and financial industry data.
The cloud should then have separate security zones to manage that data. In storage, create at least three data classification zones: one for government data (FISMA), one for financial data (such as PCI), and one for general purposes. The provisioning process controls what data is allowed to be provisioned in each zone. For example, a virtual machine for government use should only be provisioned in the government storage zone.
Finally, when a tenant or administrator deprovisions a storage resource, it is important that all the data on that volume is wiped. If the associated compute resource has local storage, any local and transient data on that compute resource should also be destroyed. Any resources that a tenant has used must return to the respective resource pools in a completely sanitized state and bear no recoverable imprint of the data that the tenant was using.
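The sanitization step at deprovisioning time can be sketched as follows. This is a single-pass zero-overwrite for illustration only; a production wipe would follow your organization's media-sanitization standard (for example, NIST SP 800-88) and would operate on the real volume, not an ordinary file.

```python
# Hedged sketch: overwrite a deprovisioned volume (represented here as a
# file) with zeros before returning the resource to the pool.
import os

def wipe_volume(path, chunk=1024 * 1024):
    """Overwrite the file at `path` in place with zeros, then flush to disk."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        remaining = size
        while remaining > 0:
            n = min(chunk, remaining)
            f.write(b"\x00" * n)   # zero-fill this chunk
            remaining -= n
        f.flush()
        os.fsync(f.fileno())       # ensure the zeros reach stable storage
```

The same routine would need to be applied to any local or transient storage on the associated compute resource, and to any caching layers, before the resource is considered sanitized.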
When reviewing storage security, ensure that you also consider data held in caching controllers. This information must be wiped as part of the deprovisioning process.
4.2.8 Operating System Security
Operating system security in the infrastructure layer generally involves configuring the host operating systems that support the virtualization environment. As with all operating system configurations, a key approach is reducing the attack surface to an acceptable level. The level to which you need to reduce the attack surface will depend on your overall risk management strategy and threat model.
Virtualization environments generally do not require graphical user interface (GUI) support from the operating systems on which they run, hence you should consider using a version of the operating system that does not include this component. Any services that are not absolutely essential to the virtualization environment should be disabled.
The provisioning process for host operating systems should include application of operating system security policies. These policies should set appropriate levels of operational security and include IPsec policies to control the servers to which each computer can connect. Provisioning at the infrastructure level also needs to include creation of certificates to match the host name of the provisioned host. The use of bare-metal OS provisioning can provide assurance that the operating system is deployed consistently.
Before the host computer is connected to the network, it must have all relevant security updates applied. Only then is the new compute resource brought online and available to the requesting tenant. The deployment of the appropriate management tools for operations management, configuration management and capacity management should be part of the deployment.
4.2.9 Virtualization Security
Although virtualization is not a pre-requisite for private cloud implementations, the operational flexibility that this technology brings means that it is almost a requirement for meeting the need for rapid elasticity. Typically, virtualization is implemented as a part of a dedicated version of a server operating system without the GUI.
Virtualization requires hardware support from the chipset, and most modern server designs provide the full range of virtualization support. From the security perspective, however, it is important to understand that if your private cloud environment is not using virtualization, then you should reduce the potential attack surface by disabling the hardware virtualization support within the system BIOS.
The virtualization environment should be updated as part of the operating system and this updating needs to be applied before any guest virtual machines are run on it.
In a private cloud environment, additional services or applications should not run on the host computer, with the possible exception of anti-virus applications. This anti-virus scanning should then be included in the comprehensive security monitoring of the whole environment.
Security Infrastructure Alert:
For more information about Hyper-V Security read Hyper-V Security Guide.
4.2.10 Update Security
The final component in infrastructure security covers applying security updates to the entire infrastructure layer. This process should also include updates to the switches, firewalls, and firmware.
As with most aspects of private cloud provision, the key attribute is high levels of automation. Updates need to be delivered in a timely manner and targeted correctly at each running host computer. Newly provisioned computers require updates to be applied before being brought online and the management interface must keep track of the update status of all running host computers and virtual machines.
Private cloud environments do significantly facilitate the process of applying security updates, as you can use the pooled resources feature to your advantage. Because no virtual machine or compute resource is tied to any one physical host computer, you have the flexibility to move resources around while you update operating systems or carry out other maintenance tasks.
Learn by Example:
For example, if you need to update the hypervisor on your host computers, you can simply start with one host computer, live migrate the running virtual machines onto other host computers, apply updates to that host, reboot, test functionality, and then live migrate the running virtual machines from another host computer onto that updated computer. You continue this action until you have updated all host computers.
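The rolling-update procedure in the example above can be sketched as a simple orchestration loop. The cluster model and the update callback are stand-ins for the management tooling's real live-migration and patching APIs; this illustrates only the ordering of the operations.

```python
# Sketch of the rolling-update loop: for each host in turn, live migrate
# its VMs onto the remaining hosts, then update and reboot that host.
def rolling_update(cluster, apply_updates):
    """cluster: dict mapping host name -> list of VM names running on it.
    Returns the hosts in the order they were updated."""
    updated = []
    for host in list(cluster):
        targets = [h for h in cluster if h != host]
        # Drain: live migrate this host's VMs round-robin onto other hosts.
        for i, vm in enumerate(cluster[host]):
            cluster[targets[i % len(targets)]].append(vm)
        cluster[host] = []
        apply_updates(host)   # patch, reboot, test functionality
        updated.append(host)
    return updated
```

Because every VM keeps running on some host throughout, the hypervisor fleet can be patched without tenant-visible downtime, which is the benefit the example describes.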
Update testing is also simplified, as you have the ability to provision hardware and software to carry out that testing. By taking snapshots of running virtual machines before updating, you have an immediate fallback position should the update fail. As long as your datasets for each application are independent of the application, then failure of the virtual machine should not corrupt the data, and when the previous image of the virtual machine is restored, the application should function as normal. Hence, private cloud environments give significant benefits when testing, deploying, and rolling back security updates.
In a hosted or hybrid environment, the cloud service provider might not have permission to apply updates to virtual machines, particularly with IaaS provision. In this case, the provider must make the update tools available to the consumer and ensure that they are used properly to keep the consumer’s environment up-to-date.
Having applied effective security to your infrastructure, you can start to examine security at the platform level. Good platform security is essential for high levels of application security in the software layer. Unless you have addressed potential attack points at the platform level, you are potentially compromising security of all your applications. You also need to consider security between the platform and the infrastructure layers. The figure below summarizes these security considerations.
The first part of platform security is at the virtualization level; here you must protect the virtual machines from each other and from the host computers. The host computers must also be protected from the virtual machines.
To achieve high levels of security, you must consider each virtual machine as having its own defensive perimeter. These defenses will consist of a guest firewall, anti-virus, system policies, and IPsec-secured communications. In addition, you may also want to apply intrusion detection and security monitoring on the guest virtual machine, using either an agent-based or agentless mechanism. In effect, you are applying server security best practice to the virtual machines.
To support the rapid elasticity attribute of private cloud environments, virtual machines are typically provisioned from templates. As a virtual machine is provisioned, it must have the latest security updates applied, the anti-virus definitions updated, any policy changes implemented, and monitoring agents brought up to the latest release. A machine certificate needs to be installed and IPsec policies applied before bringing the virtual machine online in the production environment. Virtual networking simplifies this process, as the provisioning system can switch the virtual machine into a limited access security update virtual network to carry out this updating before switching it across to the production environment network.
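The ordering constraint in that provisioning workflow can be made explicit as data. The step names are taken from the paragraph above; the runner function is a placeholder for the management system's real actions, so treat this as an illustrative checklist rather than an implementation.

```python
# Illustrative provisioning pipeline for a VM created from a template.
# Every hardening step runs on the isolated update network; switching to
# the production network is deliberately the final step.
PROVISIONING_STEPS = [
    "apply latest security updates",
    "update anti-virus definitions",
    "apply policy changes",
    "upgrade monitoring agents",
    "install machine certificate",
    "apply IPsec policies",
    "switch to production network",
]

def provision(template, run_step):
    """Run each step in order; bring the VM online only when all succeed."""
    for step in PROVISIONING_STEPS:
        run_step(template, step)
    return f"{template}: online"
```

Encoding the pipeline this way makes it easy for the provisioning system to audit that no virtual machine reached the production network before its updates, certificate, and IPsec policies were in place.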
If you are using System Center 2012 to manage your private cloud, cloud services are upgraded by selecting a new version of the service template. For more information about how to perform this task, read How to Upgrade a Service Deployed to a Private Cloud.
As mentioned in the infrastructure section, if the provider does not have access to the virtual machines in a PaaS environment, then there needs to be a mechanism for applying security updates to these virtual machines.
Security updates to the virtual machines should also address other platform components, such as application frameworks, user experience (UX) services, integration services, queuing services, and so on. If you are providing PaaS for your consumers, then at the end of this provisioning process, they should be able to connect to the virtual machines and start developing applications. If you are providing SaaS, you can start installing your applications and running services.
4.3.1 Data Security
The platform layer also includes access to data services, so you should consider security aspects of this storage as well. Because of generally increased threat levels (not just to private cloud implementations), it is important that you take the view that all data is accessible, wherever it is stored. The principle of security through obscurity is well and truly discredited, as attackers with administrator rights can gain access to all levels of a private cloud environment. If a data bit is stored, you must assume an attacker can access it. Only the combination of encryption, ACLs, monitoring, and auditing can provide effective levels of security.
Other considerations with data security require you to consider the lifecycle of a data bit. Within private cloud environments, data bits are not written just to one location on a single hard drive. The requirement for resilience results in that information being replicated to multiple locations. In addition, this data may appear on caching disk controllers, in temporary files, or in other stores through application-level or operating system replication.
Finally, data at rest is always more vulnerable than data in transit. There are technologies that enable attackers to intercept data in transit between two hosts, but it may not be possible or practicable to reconstruct that data. In any event, data intercepted in transit can only compromise that individual transmission, whereas accessing data at rest can provide the entire data set.
Data security is a key factor that requires extensive investigation. Although the user perception of cloud services is that their data is “somewhere out there”, as an operator you cannot afford to take such a lax view. You must implement strict data security and review where your data resides from the moment of writing it to disk to the point at which it is scrubbed or encrypted beyond recovery.
4.3.2 Application Framework Security
Your choice of application framework will depend on the type of applications and the development environment that your cloud environment will support. Hence, you will need to ensure that you apply strict standards in terms of what application framework types and versions are available, how those frameworks can be used, and how you update them.
4.3.3 Development Environment Security
Your provision of development environment may result from your customer requirements or may be something that you impose as an organizational standard. However, the larger the number of development environments that you support, the greater the challenge of providing adequate security.
Whatever development environment you provide, it is important that your consumers implement best security practices into the applications that they create following the principles of SDL. Factors such as using appropriate class design to reduce attack surface area, developing robust exception management, avoiding threading vulnerabilities and so on apply even more in a private cloud environment. Providing consumers with a sandboxed environment can significantly reduce the threat from poorly secured code that your customers create.
When tenants deploy their applications, strict application partitioning is essential. Each tenant’s application must be completely bounded within its environment and not able to access other tenant applications or data. Any attempts to do so must be detected and that application instance suspended until you can complete your forensic analysis.
4.3.4 Update Security
Update security in the platform layer shares similar factors as the infrastructure layer. Updates need to be tested and deployed rapidly while minimizing downtime. Virtualization and virtual machine snapshots can assist in this process by creating fallback positions so that platform components can be updated. Private cloud environments simplify this process in that updates to development environments can be carried out when the development environment is not in use by the consumer. Again, you must consider the circumstances in which you might not have access to the virtual machines to make these updates.
As the highest level of the private cloud service provision layers, software security brings its own specific security challenges that are unique to an environment that hosts live applications. The figure below shows these areas.
4.4.1 Application Security
Application security in private cloud implementations has many commonalities with data center application hosting. All the usual best practices about making applications secure by design and secure by default apply equally in the private cloud. However, there are the following issues that are specific to the cloud.
- Application partitioning. The requirement for multi-tenant support in private cloud environments requires strict application partitioning, where provisioned applications only service requests from users within the provisioning consumer’s organizational unit or virtual team. Supporting this multi-tenant model requires full integration between each running application and the authentication and authorization mechanisms. Typically, authentication would be carried out through federated identities, using an industry-standard federation model such as Security Assertion Markup Language (SAML) token exchange.
- Client trust levels. With private cloud implementations, you may not have the same level of control over client types, operating systems, browser types, update levels and anti-virus security as with a more tightly-controlled network, particularly if you are making use of the universal connectivity aspect of cloud provision. In consequence, applications that you create should validate and constrain all client input by checking it for type, range, length, and format.
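Validating client input for type, range, length, and format, as the second point recommends, can be sketched as a small rule table. The field names and limits below are illustrative examples, not a prescribed schema.

```python
# Minimal sketch: validate untrusted client input against per-field rules
# for type, range, length, and format before the application uses it.
import re

FIELD_RULES = {
    "username": {"type": str, "max_len": 64, "pattern": r"^[A-Za-z0-9_.-]+$"},
    "quantity": {"type": int, "min": 1, "max": 1000},
}

def validate(field, value):
    """Return True only if `value` satisfies every rule for `field`."""
    rule = FIELD_RULES[field]
    if not isinstance(value, rule["type"]):
        return False                      # wrong type
    if rule["type"] is str:
        if len(value) > rule["max_len"]:
            return False                  # too long
        return re.fullmatch(rule["pattern"], value) is not None  # bad format
    return rule["min"] <= value <= rule["max"]  # out of range
```

Constraining input this way at the application boundary limits what a compromised or poorly maintained client can feed into the service, which matters most when client trust levels cannot be guaranteed.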
4.4.2 Update Security
Update security in the software layer involves similar considerations to updates in the platform layer. Again, the deployment flexibility and virtualization features assist with installing application security updates and rolling back a complete application if an update fails.
The purpose of the service delivery layer is to make the services in the private cloud environment available to the consumer. The service delivery layer also provides the interface through which consumers can connect. Capabilities that the service delivery layer provides are:
- Service end-points – provides the connection points to the SaaS, PaaS or IaaS hosted services.
- Self-service portal – enables consumers to request cloud resources and to return those resources to the general pool when no longer required.
- Service catalog – lists the services to which consumers can connect.
- Service provisioning – enables consumers to provision virtual machines, development environments, or applications.
- Billing – converts service usage into cost values and dispatches bills automatically.
- Service contracts – publishes and maintains a register of SLAs and operating level agreements (OLAs) for each tenant.
- Metering – accounts for consumers’ usage of cloud resources and sends this information to the billing capability.
- Service reporting – reports on the service levels actually provided and compares these levels to SLAs.
As this layer provides the service connection to the consumer, security is a critical issue with all these capabilities. Figure 11 shows the elements of the service delivery layer that require this security.
As with the software, platform, and infrastructure layers, the security capabilities of IdAM, data protection, security monitoring, security management, authentication, authorization, RBAC and auditing all apply to the service delivery layer.
4.5.1 Connection Security
Connection security is a key element in securing the delivery of services, as consumers will always be accessing these services using a remote network connection. With private cloud environments, this paper has highlighted why you should consider the internal network as an untrusted network alongside the Internet. Hence, all client connections should be treated with the same level of minimal trust.
Establishing a secure connection to a client helps to ensure integrity of the data and makes it more difficult for an attacker to compromise the data stream. Hence, techniques such as TLS/SSL encryption using a minimum of 2048-bit public/private key pairs and 128-bit bulk encryption keys are essential.
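These transport requirements can be enforced server-side with Python's standard library, shown here as a sketch. Note that the public key length (for example, a 2048-bit RSA key) is a property of the certificate you deploy, not of this context; the context enforces the protocol version and cipher suites.

```python
# Sketch: a hardened TLS server context that refuses legacy protocol
# versions and restricts negotiation to AEAD suites with forward secrecy.
import ssl

def hardened_server_context(certfile=None, keyfile=None):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # no SSLv3/TLS 1.0/1.1
    ctx.set_ciphers("ECDHE+AESGCM")                # ECDHE key exchange, AES-GCM bulk encryption
    if certfile:
        ctx.load_cert_chain(certfile, keyfile)     # deploy a >= 2048-bit key here
    return ctx
```

The same minimum-version and cipher discipline should apply on connections between the service delivery layer and the lower layers, as noted later in this section.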
Authentication is also a key requirement, as your private cloud environment should typically not be accepting unauthenticated requests. If you have a public web site that accepts anonymous requests, then this site should be hosted separately by a commercial hosting provider.
Certificates used for TLS/SSL traffic can be third-party or generated automatically by your internal CA. Regardless of the process that you use, clients should have the root certificate stored in their trusted root certification store to prevent error messages on connection. Users should be trained to be immediately suspicious if they receive a certificate error when connecting to a private cloud resource.
If you have implemented an SSL inspection mechanism, then this mechanism can assist by also providing other validations, such as CA authenticity, certificate revocation list (CRL) checking, chaining, and other security tests on the certificate.
Connections from the service delivery layer to the software, platform, or infrastructure layer also need encryption and mutual authentication, typically by use of TLS/SSL or IPsec encryption.
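The TLS guidance above can be sketched in code. The following is a minimal illustration, not a prescribed implementation, of building a client-side TLS context that enforces certificate chain and host name verification against the trusted root store and refuses legacy protocol versions; the minimum version chosen here is an assumption for the sketch:

```python
import ssl

def make_client_context(min_version=ssl.TLSVersion.TLSv1_2):
    """Build a TLS client context that verifies the server certificate
    chain and host name against the client's trusted root store."""
    context = ssl.create_default_context()
    context.minimum_version = min_version  # refuse legacy protocol versions
    # create_default_context already enables CERT_REQUIRED and host name
    # checking, matching the guidance that certificate errors on connection
    # must never be silently ignored.
    assert context.verify_mode == ssl.CERT_REQUIRED
    assert context.check_hostname
    return context

# Usage sketch (host name is a placeholder):
#   with socket.create_connection((host, 443)) as sock:
#       with make_client_context().wrap_socket(sock, server_hostname=host) as tls:
#           ...  # tls.version(), tls.getpeercert(), etc.
```

If the server presents a certificate whose root is not in the client's trusted store, `wrap_socket` raises an `ssl.SSLCertVerificationError` rather than proceeding, which is the behavior the guidance above requires.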
4.5.2 Service End-Point Security
Even though the role of the perimeter network (DMZ) has diminished in private cloud implementations, the point at which the consumer connects to the service delivery layer is still a significant security boundary. Hence, your security defenses should aim to prevent the most common forms of attack from succeeding.
The security techniques of port and protocol restrictions combined with packet inspection, traffic analysis, intrusion detection systems, and honey traps are not unique to private cloud environments; what changes is the degree of automation that is necessary to respond to attacks. Defenses of any kind are useless if they are not actively maintained, and the increasing threat profile from more sophisticated attacks means that passive defense is no longer effective in protecting your environment. Hence, your perimeter defenses need to be closely monitored, with immediate follow-up action on any intrusion.
Authentication, authorization, and audit controls must apply at the point of contact. Links to federated identity providers must be secure from tampering or interception.
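The degree of automation described above can be illustrated with a simple triage routine. This is a sketch under assumed thresholds and actions, not a production intrusion-response system: a high-severity intrusion-detection alert triggers an automatic block and pages an operator for the immediate follow-up the guidance calls for, while lower-severity alerts are queued for analyst review.

```python
def respond(alert: dict, blocklist: set) -> list:
    """Map an IDS alert to automated perimeter actions.
    Severity threshold and action names are illustrative assumptions."""
    actions = []
    if alert["severity"] >= 7:                   # high severity: act immediately
        blocklist.add(alert["source_ip"])        # block the offending source
        actions.append(f"block {alert['source_ip']}")
        actions.append("page on-call engineer")  # immediate human follow-up
    else:
        actions.append("log for analyst review")
    return actions

blocklist = set()
print(respond({"severity": 9, "source_ip": "203.0.113.7"}, blocklist))
# The source is now in the blocklist and an operator has been paged.
```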
The management stack contains a range of linked capabilities that provide the ability to manage the service delivery layers. Typically, these are capabilities to which the provider connects rather than the consumer. However, some of the reporting output from the management stack can appear in the service delivery layer and form the basis for information that the consumer can access.
The provider’s contact with the management layer goes through the same levels of authentication, authorization, and auditing as the consumer’s approach to the service delivery layer. Although you might expect to be able to trust your administrators more, their greater levels of control mean that you have to be more aware of what your administrators are doing and, in consequence, can afford to trust them less.
4.6.1 Management Tools
The exact management tools that you use in a private cloud environment will depend on your organizational policy, operating system and virtualization platforms, training, and personal preference. Tools with specific security functionality cover the following capabilities:
- Deployment and Provisioning Management
- Capacity Management
- Change and Configuration Management
- Release and Deployment Management
- Network Management
- Fabric Management
- Incident and Problem Management
4.6.2 Authentication, Authorization, Auditing and Role-Based Access Control
The management stack must fully integrate with the highest levels of authentication available within your private cloud environment. Typically, you would implement two-factor authentication alongside federation to identity-enable individual management applications within the cloud.
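A role-based access control check for a management application can be sketched as follows. The roles, permissions, and audit format here are illustrative assumptions, not part of any particular product; the point is that every management-stack operation is checked against the caller's roles and every decision is audited, granted or not:

```python
# Illustrative RBAC sketch: role and permission names are assumptions.
ROLE_PERMISSIONS = {
    "fabric-admin": {"vm.create", "vm.delete", "network.configure"},
    "tenant-admin": {"vm.create", "vm.restart"},
    "auditor":      {"logs.read"},
}

def is_authorized(roles: set, permission: str) -> bool:
    """Grant the operation only if one of the caller's roles carries it."""
    return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in roles)

def audit(user: str, permission: str, granted: bool) -> str:
    """Every decision is recorded, whether the operation was granted or not."""
    return f"user={user} action={permission} granted={granted}"

roles = {"tenant-admin"}
allowed = is_authorized(roles, "vm.delete")
print(audit("alice", "vm.delete", allowed))  # a tenant admin cannot delete VMs
```

In a real deployment the role assignments would come from the directory service after two-factor or federated authentication, as described above, rather than from an in-memory table.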
4.6.3 Management Isolation from User Data
In a fully service-oriented private or hybrid cloud implementation, you treat your organization’s business units as separate tenants. In consequence, your administrators are a separate tenant and access rights to other tenants’ data should be restricted.
In consequence, auditing for administrators must look for unexpected behaviors, such as changing permissions to give access to tenant resources. The response to such incidents (whether they concern administrator accounts or not) should be graduated: an attempt to view a general document in a particular business unit does not necessarily need to be treated in the same way as an attempt to access a spreadsheet of company salaries and bonuses owned by the Finance department. As with any business asset, there should be a sanity check to establish whether the administrator had valid reasons to change permissions on a particular file.
Although automation and data processing provide advanced capabilities for analyzing the large data sets that auditing generates, a common-sense, human-centric approach needs to apply to investigative follow-up. Any investigation needs to follow the contractual terms of the employee’s engagement and comply with local employment laws.
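The graduated response described above can be sketched as a triage function over audit events. The classification scheme (resource prefixes, action names, response levels) is an assumption made for illustration:

```python
# Assumed classification: resources under these prefixes are sensitive.
SENSITIVE_PREFIXES = ("finance/", "hr/")

def triage(event: dict) -> str:
    """Map an administrator audit event to a graduated response level."""
    if event["action"] != "permission-change":
        return "log"                  # routine activity: record only
    if event["resource"].startswith(SENSITIVE_PREFIXES):
        return "investigate"          # e.g. a salary-and-bonus spreadsheet
    return "review"                   # sanity-check the change with the admin

print(triage({"action": "permission-change",
              "resource": "finance/bonuses.xlsx"}))
```

A permission change on a Finance-owned spreadsheet escalates to investigation, while the same change on a general document is merely flagged for review, mirroring the graduated policy described in the text.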
With private cloud environments, you have three options for client security:
- Secure trusted client. A secure trusted client is one that exists on the internal network and has a security trust relationship with the cloud domain. You would provide appropriate levels of protection to these client computers by using anti-virus protection, two-factor authentication, hardware computer security, and integral data protection. Connections to the private cloud network must be made over a protected communications channel with a quarantine process to ensure that the client computer has the latest security updates, anti-virus definitions, personal firewall enabled, and so on before being allowed to access the service endpoints.
- Insecure untrusted client. Here, you do not trust the client computers at all and assume that every input that you receive from the client is suspect. You then check the input for type, range, length, and format. Your application design ensures that no sensitive data is stored locally on the client, which can be a desktop, tablet, mobile device, or even a browser on a public kiosk computer.
- Secure untrusted client. With this option, you provide as much local security as possible to the client as with the secure trusted client example. However, you do not set up any form of inherent trust relationship between the client and the cloud environment, as would be explicit with domain membership. Authentication would be through federation and your cloud-based applications would treat all client input as suspect and thoroughly check this information before accepting it.
The option for insecure trusted client is not considered further for obvious reasons.
An example of a secure untrusted client would be a laptop with integrated disk encryption, either hardware or software-based. It would require two-factor authentication to log on, using either a smart card or fingerprint recognition, and would not be domain-joined. Authentication to the cloud service would also be two-factor, and the device might include a geographic locating device to assist with recovery in case of theft. An endpoint protection scanning capability can grant access to the private cloud but limit that access if the client does not meet certain constraints, such as up-to-date anti-virus definitions, a specific operating system version, and other health checks.
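An endpoint health (quarantine) check of the kind described above can be sketched as follows. The required OS versions, anti-virus definition age, and firewall requirement are assumptions chosen for the sketch:

```python
# Assumed health policy for admission to the service endpoints.
REQUIRED_OS_VERSIONS = {"10.0", "11.0"}
MAX_AV_AGE_DAYS = 3

def admit(client: dict) -> bool:
    """Grant access only if the client reports a supported OS version,
    recent anti-virus definitions, and an enabled personal firewall."""
    return (
        client.get("os_version") in REQUIRED_OS_VERSIONS
        and client.get("av_definition_age_days", 999) <= MAX_AV_AGE_DAYS
        and client.get("firewall_enabled", False)
    )

print(admit({"os_version": "11.0",
             "av_definition_age_days": 1,
             "firewall_enabled": True}))
```

A client that fails any check would be placed in quarantine (for example, a remediation network segment) rather than being admitted to the service endpoints.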
One area that may change with private and hybrid cloud implementations is domain membership, which is no longer a prerequisite if the client uses federated authentication to identify itself to cloud-based applications, using the cloud directory service as its home realm. It should be noted that implementing federated authentication on a standalone computer rather than adding that computer to the domain changes the profile of network services available to the client.
In reality, most organizations running private cloud environments will probably attempt to secure their client computers as much as possible. However, as previously discussed, if an attacker can gain physical possession of a hardware device, your attempts to protect the data on it must be extremely effective and render the stored data functionally inaccessible.
There must certainly be no inherent degradation of the security of your private cloud environment if a client computer is stolen and compromised, and the effects of any such compromise on your environment must be carefully assessed.
Unfortunately, the only person who is likely to know the difference between a stolen laptop and a stolen and compromised one will be the attacker. In consequence, you must either be absolutely sure that a stolen client laptop is as close to an inert lump of plastic and metal from the attacker’s perspective or that you can rapidly make any changes to your own environment that may be required (for example, reissuing trusted root certificates and revoking ones on the stolen equipment) resulting from the possible compromise.
One area where IT decision-makers have considerable concerns with private and hybrid cloud implementations is that of legality, data protection, personally identifiable information (PII), and compliance. These requirements are particularly important in hybrid implementations, where you or business units within your organization may be in the position of the customer to a public cloud supplier.
Organizations looking at implementing a private cloud infrastructure are likely to need to ensure effective governance of the new environment. The management stack of the private cloud architecture should enable management to view the security aspects of the environment and show the current threat levels to the organization. Typically, governance oversight is provided through a web-based dashboard that translates the technical aspects of security issues into understandable business language.
Organizations in certain industry verticals such as health, financial operations, and the provision of public services fall under the auspices of a range of compliance requirements and regulations, such as the Health Insurance Portability and Accountability Act (HIPAA). With international organizations or hybrid implementations, it is possible that moving to a private cloud environment may result in users in one country with one set of regulations accessing data in another country with a different or even conflicting set of requirements.
Learn by Example:
The requirement for access to company data by law enforcement agencies is another area that must be examined carefully. For example, an organization may be presented with a subpoena requiring it to make its e-mail records available. If this occurs, what is the effect on client confidentiality for data owned by a business unit from a different continent? Business units must be aware that these risks exist and that they may be exposed to the legal requirements of a different jurisdiction.
Ultimately, your organization needs to be aware of the compliance requirements of all the countries in which it operates. One conclusion may be that data from one country cannot be hosted in another, as can be the case with public cloud implementations.
4.8.3 Integrated Governance, Risk Management, and Compliance
The most effective approach to mitigating legal issues is to implement a fully integrated governance, risk management, and compliance framework. This framework would need to be defined at the highest level and then designed into the private cloud implementation.
4.8.4 Personally Identifiable Information
Personally identifiable information (PII) is data that enables a living person to be identified. The US Office of Management and Budget identifies the following information as PII:
- Full name (unless a very common name)
- National identification number
- Vehicle registration plate number
- Driver's license number
- Date of birth
Protection of PII can be a significant issue with organizations that operate in multiple jurisdictions. For example, legislation such as the Data Protection Directive of the European Union (Directive 95/46/EC) governs the protection of PII in Europe. Among other requirements, this legislation requires data holders to give notice to users that their data is being stored and grants them access to correct inaccurate data. This data must also be protected from potential abuses. Hence, storing personal data can be a significant complication.
This complication arises not from the fact that the data might be insecure, as cloud environments can be made as secure as more traditional data centers. In this case, the issue is about granting access to the owner to amend the data. If your organization needs to store PII and you have a legal requirement to enable the owner of that data to change it, then you should consider how that information can be presented to the owner and amended if required.
Your organization must create a statement that covers its collection, collation, storage, management, transfer, and deletion of PII. This statement must address the process for releasing the information to the original owner and to any third parties, such as a hosted cloud provider.
The USA PATRIOT Act also introduces complications for multi-national organizations that are wholly owned by US companies but operate in other parts of the world. If this situation applies to your organization, you should review the requirements of this act when planning data storage and PII.
4.8.5 Legal Agreements
The basis of the private cloud legal relationships between the IT department and the business units of the organization that subscribe to those services will be contained within a number of documents. These documents should align with the IT Infrastructure Library (ITIL) Security Management process and include:
- Service Level Agreement (SLA). The SLA is the key definition of the arrangement between the service provider and the consumer of the private cloud services. This document should clearly identify the security levels that the service provider applies and identify the risks so that the consumer can make an informed decision on the service offerings.
- Operating Level Agreement (OLA). This document defines the relationships between the groups within the organization that support the SLA. The OLA makes these support relationships clearly visible and helps the consumer identify responsibility for support functions. The OLA must clearly spell out who is responsible for security support, the boundaries of that support, and the contact details and follow-up information if there is a security issue.
- Terms of Usage (ToU). ToU agreements make the consumer aware of what is or is not deemed acceptable usage of the cloud-based service, particularly in relation to security. For example, running port scans or using other people’s identities to log on are areas which might be specifically prohibited by the ToU.
- User License Agreements (ULAs). ULAs specify the terms that the consumer must accept before accessing private cloud applications, platforms, or operating systems. Some of the ULAs may come from commercial off-the-shelf software hosted in the cloud environment or may be specifically created by the organization’s legal department for its in-house applications.
All of these documents must set out clearly the security considerations of using the private cloud service, what activities are prohibited, and any penalties for contravention of these prohibitions. They should highlight that security responses may be automated and that manual intervention may be required to undo those responses. The legal documentation must also set out the process for establishing the identity of the consumer in the case of activities such as password resets or account provisioning and deprovisioning.
In this section we discussed detailed options and issues that need to be considered when designing security for a private cloud solution. In the next section, we’ll discuss some significant challenges that are unique to private cloud deployments and how you can most effectively respond to these challenges.