Need assignment due on 01/06/2018 / Network Web Services – Data Security

  

Using the scenario from the Week One assignment, write a 2- to 3-page business memo to the CIO of your organization detailing security recommendations. Include a Visio® diagram illustrating your security model and any areas of concern. Be sure to follow APA standards for a business memo.

Referencing the specific infrastructure choices from your prior weeks' work, use the “Understanding Cloud Security” guidance in Ch. 12 of the Cloud Computing Bible to develop one or two additional recommendations you would make to improve security.

Incorporate into your memo the inherent security concerns for each area:

  • Service: What service to use (SaaS, PaaS, etc.)? This is based on your Week One decision.
  • Design principles: To what standards do you need to adhere? (Use Ch. 14, Amazon Web Services for Dummies, as a reference.) This is based on your Week Two decision.
  • Security concerns: What virtualization security concerns do you have? This is based on your Week Three decision.

If you don’t have access to information about your organization’s network, try meeting with the network manager to come up with some ideas. If this is not possible, use the following scenario:

You are the IT Manager of a mid-size wholesale distribution business of 500 employees. The following are a few systems that are used within your business:

  • Internal Exchange 2003 server – this is a physical server
  • Internal CRM system – this is a virtual machine
  • Internal ERP system using SAP ERP 6.0 – this is a physical server
  • Internal file server using 1.2 TB of data – this is a virtual machine
  • (2) Internal SQL Servers used for business intelligence – (1) server is virtual and (1) server is physical

The network has the following characteristics:

  • A WAN with (4) connected sites – (3) distribution centers and a corporate office.
  • Each site is connected via a 100 Mbps MPLS WAN and has a single T-1 for a failover connection.
  • The datacenter is centralized at one of the distribution centers.
  • There are (2) internet connections: a 100 Mbps primary connection and a 10 Mbps backup.
  • The network has redundant firewalls that also provide VPN access for any remote access that is needed.
  • Each site has a LAN that is 1 Gbps Ethernet.


Chapter 12
Understanding Cloud Security

IN THIS CHAPTER

Reviewing cloud security concerns

Understanding how cloud data can be secured

Planning for security in your system

Learning how identity is used to allow secure cloud access

Cloud computing has lots of unique properties that make it very valuable. Unfortunately, many of those properties make security a singular concern. Many of the tools and techniques that you would use to protect your data, comply with regulations, and maintain the integrity of your systems are complicated by the fact that you are sharing your systems with others and many times outsourcing their operations as well. Cloud computing service providers are well aware of these concerns and have developed new technologies to address them.

Different types of cloud computing service models provide different levels of security services. You get the least amount of built-in security with an Infrastructure as a Service provider, and the most with a Software as a Service provider. This chapter presents the concept of a security boundary separating the client’s and vendor’s responsibilities.

Adapting your on-premises systems to a cloud model requires that you determine which security mechanisms are required and map them to the controls that exist in your chosen cloud service provider. When you identify missing security elements in the cloud, you can use that mapping to work to close the gap.

Storing data in the cloud is of particular concern. Data should be transferred and stored in an encrypted format. You can use proxy and brokerage services to separate clients from direct access to shared cloud storage.

Logging, auditing, and regulatory compliance are all features that require planning in cloud computing systems. They are among the services that need to be negotiated in Service Level Agreements.

Also in this chapter, you learn about identity and related protocols from a security standpoint. The concept of presence as it relates to identity is also introduced.

Securing the Cloud

The Internet was designed primarily to be resilient; it was not designed to be secure. Any distributed application has a much greater attack surface than an application that is closely held on a Local Area Network. Cloud computing has all the vulnerabilities associated with Internet applications, and additional vulnerabilities arise from pooled, virtualized, and outsourced resources.

In the report “Assessing the Security Risks of Cloud Computing,” Jay Heiser and Mark Nicolett of the Gartner Group (http://www.gartner.com/DisplayDocument?id=685308) highlighted the following areas of cloud computing that they felt were uniquely troublesome:

· Auditing

· Data integrity

· e-Discovery for legal compliance

· Privacy

· Recovery

· Regulatory compliance

Your risks in any cloud deployment are dependent upon the particular cloud service model chosen and the type of cloud on which you deploy your applications. In order to evaluate your risks, you need to perform the following analysis:

1. Determine which resources (data, services, or applications) you are planning to move to the cloud.

2. Determine the sensitivity of the resource to risk.

Risks that need to be evaluated are loss of privacy, unauthorized access by others, loss of data, and interruptions in availability.

3. Determine the risk associated with the particular cloud type for a resource.

Cloud types include public, private (both external and internal), hybrid, and shared community types. With each type, you need to consider where data and functionality will be maintained.

4. Take into account the particular cloud service model that you will be using.

Different models such as IaaS, SaaS, and PaaS require their customers to be responsible for security at different levels of the service stack.

5. If you have selected a particular cloud service provider, you need to evaluate its system to understand how data is transferred, where it is stored, and how to move data both in and out of the cloud.

You may want to consider building a flowchart that shows the overall mechanism of the system you are intending to use or are currently using.

One technique for maintaining security is to have “golden” system image references that you can return to when needed. The ability to take a system image off-line and analyze the image for vulnerabilities or compromise is invaluable. The compromised image is a primary forensics tool. Many cloud providers offer a snapshot feature that can create a copy of the client’s entire environment; this includes not only machine images, but applications and data, network interfaces, firewalls, and switch access. If you feel that a system has been compromised, you can replace that image with a known good version and contain the problem.
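The snapshot idea above can be automated. The following is a minimal sketch, assuming Python with the boto3 SDK and a hypothetical instance ID, of capturing a “golden” image of an EC2 instance and later launching a replacement from it; it is an illustration of the technique, not any provider’s prescribed procedure.

```python
# Sketch: capture a "golden" AMI from a known-good EC2 instance and launch a
# replacement from it if a running system is suspected of compromise.
# Assumes boto3 is configured with credentials; the instance ID and instance
# type are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Capture the golden image while the instance is in a known-good state.
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",      # hypothetical known-good instance
    Name="golden-web-tier-2018-01",
    Description="Known-good web tier image for incident recovery",
    NoReboot=True,                          # avoid downtime; accept crash-consistency
)
golden_ami_id = image["ImageId"]

# 2. Later, if compromise is suspected, stop the suspect instance (keeping it
#    for forensic analysis) and launch a replacement from the golden AMI.
ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"])
ec2.run_instances(
    ImageId=golden_ami_id,
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
```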

Many vendors maintain a security page where they list their various resources, certifications, and credentials. One of the more developed offerings is the AWS Security Center, shown in Figure 12.1, where you can download backgrounders, white papers, and case studies related to Amazon Web Services’ security controls and mechanisms.

FIGURE 12.1 The AWS Security Center (http://aws.amazon.com/security/) is a good place to start learning about how Amazon Web Services protects users of its IaaS service.

The security boundary

In order to discuss security in cloud computing concisely, you need to define the particular model of cloud computing that applies. This nomenclature provides a framework for understanding what security is already built into the system, who has responsibility for a particular security mechanism, and where the boundary lies between the responsibility of the service provider and that of the customer.

All of Chapter 1 was concerned with defining what cloud computing is and defining the lexicon of cloud computing. There are many definitions and acronyms in the area of cloud computing that will probably not survive long. The most commonly used model, based on the U.S. National Institute of Standards and Technology (NIST; http://www.csrc.nist.gov/groups/SNS/cloud-computing/index.html), separates deployment models from service models and assigns those models a set of service attributes. Deployment models are cloud types: community, hybrid, private, and public clouds. Service models follow the SPI Model for three forms of service delivery: Software, Platform, and Infrastructure as a Service. In the NIST model, as you may recall, it was not required that a cloud use virtualization to pool resources, nor did that model require that a cloud support multi-tenancy. It is just these factors that make security such a complicated proposition in cloud computing.

Chapter 1 also presented the Cloud Security Alliance (CSA; http://www.cloudsecurityalliance.org/) cloud computing stack model, which shows how different functional units in a network stack relate to one another. As you may recall from Chapter 1, this model can be used to separate the different service models from one another. CSA is an industry working group that studies security issues in cloud computing and offers recommendations to its members. The work of the group is open and available, and you can download its guidance from its home page, shown in Figure 12.2.

FIGURE 12.2 The Cloud Security Alliance (CSA) home page at http://www.cloudsecurityalliance.org/ offers a number of resources to anyone concerned with securing his cloud deployment.

The CSA partitions its guidance into a set of operational domains:

· Governance and enterprise risk management

· Legal and electronic discovery

· Compliance and audit

· Information lifecycle management

· Portability and interoperability

· Traditional security, business continuity, and disaster recovery

· Datacenter operations

· Incident response, notification, and remediation

· Application security

· Encryption and key management

· Identity and access management

· Virtualization

You can download the group’s current work in these areas from the different sections of its Web site.

One key difference between the NIST model and the CSA is that the CSA considers multi-tenancy to be an essential element in cloud computing. Multi-tenancy adds a number of additional security concerns to cloud computing that need to be accounted for. In multi-tenancy, different customers must be isolated, their data segmented, and their service accounted for. To provide these features, the cloud service provider must provide a policy-based environment that is capable of supporting different levels and quality of service, usually using different pricing models. Multi-tenancy expresses itself in different ways in the different cloud deployment models and imposes security concerns in different places.

Security service boundary

The CSA functional cloud computing hardware/software stack is the Cloud Reference Model. This model, which was discussed in Chapter 1, is reproduced in Figure 12.3. IaaS is the lowest-level service, with PaaS and SaaS the next two services above it. As you move upward in the stack, each service model inherits the capabilities of the model beneath it, as well as all the inherent security concerns and risk factors. IaaS supplies the infrastructure; PaaS adds application development frameworks, transactions, and control structures; and SaaS is an operating environment with applications, management, and the user interface. IaaS has the least integrated functionality and the lowest level of integrated security, and SaaS has the most.

The most important lesson from this discussion of architecture is that each different type of cloud service delivery model creates a security boundary at which the cloud service provider’s responsibilities end and the customer’s responsibilities begin. Any security mechanism below the security boundary must be built into the system, and any security mechanism above must be maintained by the customer. As you move up the stack, it becomes more important to make sure that the type and level of security is part of your Service Level Agreement.

FIGURE 12.3 The CSA Cloud Reference Model with security boundaries shown

In the SaaS model, the vendor provides security as part of the Service Level Agreement, with the compliance, governance, and liability levels stipulated under the contract for the entire stack. For the PaaS model, the security boundary may be defined for the vendor to include the software framework and middleware layer. In the PaaS model, the customer would be responsible for the security of the application and UI at the top of the stack. The model with the least built-in security is IaaS, where everything that involves software of any kind is the customer’s problem. Numerous definitions of services tend to muddy this picture by adding or removing elements of the various functions from any particular offering, thus blurring which party has responsibility for which features, but the overall analysis is still useful.

In thinking about the Cloud Security Reference Model in relationship to security needs, a fundamental distinction may be made between the nature of how services are provided versus where those services are located. A private cloud may be internal or external to an organization, and although a public cloud is most often external, there is no requirement that this mapping be made so. Cloud computing has a tendency to blur the location of the defined security perimeter in such a way that the previous notions of network firewalls and edge defenses often no longer apply.

This makes the location of trust boundaries in cloud computing rather ill defined, dynamic, and subject to change depending upon a number of factors. Establishing trust boundaries and creating a new perimeter defense that is consistent with your cloud computing network is an important consideration. The key to understanding where to place security mechanisms is to understand where physically in the cloud resources are deployed and consumed, what those resources are, who manages the resources, and what mechanisms are used to control them. Those factors help you gauge where systems are located and what areas of compliance you need to build into your system.

Table 12.1 lists some of the different service models and the parties responsible for security in each instance.

Table 12.1 Security Responsibilities by Service Model

Model Type        | Infrastructure Security Management | Infrastructure Owner     | Infrastructure Location   | Trust Condition
Hybrid            | Both vendor and customer           | Both vendor and customer | Both on- and off-premises | Both trusted and untrusted
Private/Community | Customer                           | Customer                 | On- or off-premises       | Trusted
Private/Community | Customer                           | Vendor                   | Off- or on-premises       | Trusted
Private/Community | Vendor                             | Customer                 | On- or off-premises       | Trusted
Private/Community | Vendor                             | Vendor                   | Off- or on-premises       | Trusted
Public            | Vendor                             | Vendor                   | Off-premises              | Untrusted

Security mapping

The cloud service model you choose determines where in the proposed deployment the variety of security features, compliance auditing, and other requirements must be placed. To determine the particular security mechanisms you need, you must perform a mapping of the particular cloud service model to the particular application you are deploying. These mechanisms must be supported by the various controls that are provided by your service provider, your organization, or a third party. It’s unlikely that you will be able to duplicate security routines that are possible on-premises, but this analysis allows you to determine what coverage you need.

A security control model includes the security that you normally use for your applications, data, management, network, and physical hardware. You may also need to account for any compliance standards that are required for your industry. A compliance standard can be any government or industry regulatory framework, such as the Payment Card Industry Data Security Standard (PCI-DSS), the Health Insurance Portability and Accountability Act (HIPAA), the Gramm–Leach–Bliley Act (GLBA), or the Sarbanes–Oxley Act (SOX), that requires you to operate in a certain way and keep records.

Essentially, you are looking to identify the missing features that would be required for an on-premises deployment and seek to find their replacements in the cloud computing model. As you assign accountability for different aspects of security and contract away the operational responsibility to others, you want to make sure they remain accountable for the security you need.

Securing Data

Securing data sent to, received from, and stored in the cloud is the single largest security concern that most organizations should have with cloud computing. As with any WAN traffic, you must assume that any data can be intercepted and modified. That’s why, as a matter of course, traffic to a cloud service provider, and data stored off-premises, is encrypted. This is as true for general data as it is for any passwords or account IDs.

These are the key mechanisms for protecting data:

· Access control

· Auditing

· Authentication

· Authorization

Whatever service model you choose should have mechanisms operating in all four areas that meet your security requirements, whether they are operating through the cloud service provider or your own local infrastructure.

Brokered cloud storage access

The problem with the data you store in the cloud is that it can be located anywhere in the cloud service provider’s system: in another datacenter, another state or province, and in many cases even in another country. With other types of system architectures, such as client/server, you could count on a firewall to serve as your network’s security perimeter; cloud computing has no physical system that serves this purpose. Therefore, to protect your cloud storage assets, you want to find a way to isolate data from direct client access.

One approach to isolating storage in the cloud from direct client access is to create layered access to the data. In one scheme, two services are created: a broker with full access to storage but no access to the client, and a proxy with no access to storage but access to both the client and broker. The location of the proxy and the broker is not important (they can be local or in the cloud); what is important is that these two services are in the direct data path between the client and data stored in the cloud.

Under this system, when a client makes a request for data, here’s what happens:

1. The request goes to the external service interface (or endpoint) of the proxy, which has only a partial trust.

2. The proxy, using its internal interface, forwards the request to the broker.

3. The broker requests the data from the cloud storage system.

4. The storage system returns the results to the broker.

5. The broker returns the results to the proxy.

6. The proxy completes the response by sending the data requested to the client.

Figure 12.4 shows this storage “proxy” system graphically.
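The layered access just described can also be sketched in code. The example below is a minimal illustration of the pattern, not a production implementation: a proxy that authenticates and screens client requests, and a broker that alone holds access to storage. All class and function names here are hypothetical.

```python
# Minimal sketch of brokered cloud storage access (proxy/broker pattern).
# The proxy never holds storage credentials; the broker never talks to clients.
# All names are hypothetical; authenticate_client() and the storage client
# stand in for whatever identity and storage services you actually use.

class Broker:
    """Holds the trusted storage access; accepts requests only from the proxy."""

    def __init__(self, storage_client, allowed_ops=("READ", "QUERY")):
        self._storage = storage_client          # e.g. an object-storage client
        self._allowed_ops = set(allowed_ops)    # no APPEND/DELETE by default

    def fetch(self, operation, key):
        if operation not in self._allowed_ops:
            raise PermissionError(f"operation {operation} not permitted")
        return self._storage.get(key)           # steps 3-4: broker talks to storage


class Proxy:
    """Client-facing endpoint with partial trust and no storage credentials."""

    def __init__(self, broker, authenticate_client):
        self._broker = broker
        self._authenticate = authenticate_client

    def handle_request(self, client_token, key):
        identity = self._authenticate(client_token)   # step 1: validate the caller
        if identity is None:
            raise PermissionError("unknown client")
        # step 2: forward only well-formed, identity-scoped requests to the broker
        data = self._broker.fetch("READ", f"{identity}/{key}")
        return data                                   # steps 5-6: relay result to client
```

The important property of the design is that storage credentials live only inside the broker, so compromising the public-facing proxy yields no direct path to the stored data.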

Note

This discussion is based on a white paper called “Security Best Practices For Developing Windows Azure Applications,” by Andrew Marshall, Michael Howard, Grant Bugher, and Brian Harden, which you can find at http://download.microsoft.com/download/7/3/E/73E4EE93-559F-4D0F-A6FC-7FEC5F1542D1/SecurityBestPracticesWindowsAzureApps x. In their presentation, the proxy service is called the Gatekeeper and assigned a Windows Server Web Role, and the broker is called the KeyMaster and assigned a Worker Role.

This design relies on the proxy service to impose some rules that allow it to safely request data that is appropriate to that particular client based on the client’s identity and relay that request to the broker. The broker does not need full access to the cloud storage, but it may be configured to grant READ and QUERY operations, while not allowing APPEND or DELETE. The proxy has a limited trust role, while the broker can run with higher privileges or even as native code.

The use of multiple encryption keys can further separate the proxy service from the storage account. If you use two separate keys to create two different data zones—one for the untrusted communication between the proxy and broker services, and another a trusted zone between the broker and the cloud storage—you create a situation where there is further separation between the different service roles.

FIGURE 12.4 In this design, direct access to cloud storage is eliminated in favor of a proxy/broker service.

Even if the proxy service is compromised, that service does not have access to the trusted key necessary to access the cloud storage account. In the multi-key solution, shown in Figure 12.5, you have not only eliminated all internal service endpoints, but you also have eliminated the need to have the proxy service run at a reduced trust level.

FIGURE 12.5 The creation of storage zones with associated encryption keys can further protect cloud storage from unauthorized access.

Storage location and tenancy

Some cloud service providers will, as part of their Service Level Agreements, contractually commit to storing and processing data in predetermined locations. Not all do. If you can get a commitment for specific data site storage, then you also should make sure the cloud vendor is under contract to conform to local privacy laws.

Because data stored in the cloud is usually stored alongside data from multiple tenants, each vendor has its own unique method for segregating one customer’s data from another. It’s important to have some understanding of how your specific service provider maintains data segregation.

Another question to ask a cloud storage provider is who is given privileged access to storage. The more you know about how the vendor hires its IT staff and the security mechanisms put in place to protect storage, the better.

Most cloud service providers store data in an encrypted form. While encryption is important and effective, it does present its own set of problems. When there is a problem with encrypted data, the result is that the data may not be recoverable. It is worth considering what type of encryption the cloud provider uses and to check that the system has been planned and tested by security experts.

Regardless of where your data is located, you should know what impact a disaster or interruption will have on your service and your data. Any cloud provider that doesn’t offer the ability to replicate data and application infrastructure across multiple sites cannot recover your information in a timely manner. You should know how disaster recovery affects your data and how long it takes to do a complete restoration.

Encryption

Strong encryption technology is a core technology for protecting data in transit to and from the cloud as well as data stored in the cloud. It is or will be required by law. The goal of encrypted cloud storage is to create a virtual private storage system that maintains confidentiality and data integrity while maintaining the benefits of cloud storage: ubiquitous, reliable, shared data storage. Encryption should separate stored data (data at rest) from data in transit.

Depending upon the particular cloud provider, you can create multiple accounts with different keys as you saw in the example with Windows Azure Platform in the previous section. Microsoft allows up to five security accounts per client, and you can use these different accounts to create different zones. On Amazon Web Service, you can create multiple keys and rotate those keys during different sessions.

Although encryption protects your data from unauthorized access, it does nothing to prevent data loss. Indeed, a common means of losing encrypted data is to lose the keys that provide access to the data. Therefore, you need to approach key management seriously. Keys should have a defined lifecycle. Among the schemes used to protect keys are the creation of secure key stores that have restricted role-based access, automated key store backup, and recovery techniques. It’s a good idea to separate key management from the cloud provider that hosts your data.
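As a concrete illustration of giving keys a lifecycle and keeping them apart from the data, the sketch below uses Python’s cryptography package (my own choice; the chapter does not prescribe a library) to rotate encryption keys without losing access to older ciphertext.

```python
# Sketch of a key lifecycle: encrypt with a current key, then rotate to a new
# key while still being able to read data encrypted under the old one.
# Assumes the `cryptography` package; in practice the keys would live in a
# separate, access-controlled key store, not alongside the data.
from cryptography.fernet import Fernet, MultiFernet

old_key = Fernet(Fernet.generate_key())
ciphertext = old_key.encrypt(b"customer record 42")

# Introduce a new key; MultiFernet encrypts with the first key listed but can
# decrypt with any of them, which is what makes rotation non-disruptive.
new_key = Fernet(Fernet.generate_key())
keyring = MultiFernet([new_key, old_key])

rotated = keyring.rotate(ciphertext)       # re-encrypt under the new key
assert keyring.decrypt(rotated) == b"customer record 42"
```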

One standard for interoperable cloud-based key management is the OASIS Key Management Interoperability Protocol (KMIP; http://www.oasis-open.org/committees/kmip/). IEEE 1619.3 (https://siswg.net/index.php?option=com_docman) also covers both storage encryption and key management for shared storage.

Auditing and compliance

Logging is the recording of events into a repository; auditing is the ability to monitor those events to understand performance. Logging and auditing are important functions because they are needed not only to evaluate performance, but also to investigate security incidents and illegal activity. Logs should record system, application, and security events, at the very minimum.

Logging and auditing are, unfortunately, among the weaker aspects of early cloud computing service offerings.

Cloud service providers often have proprietary log formats that you need to be aware of. Whatever monitoring and analysis tools you use need to be aware of these logs and able to work with them. Often, providers offer monitoring tools of their own, many in the form of a dashboard with the potential to customize the information you see through either the interface or programmatically using the vendor’s API. You want to make full use of those built-in services.

Because cloud services are both multitenant and multisite operations, the logging activity and data for different clients may not only be co-located, they may also be moving across a landscape of different hosts and sites. You can’t simply expect that an investigation will be provided with the necessary information at the time of discovery unless it is part of your Service Level Agreement. Even an SLA with the appropriate obligations contained in it may not be enough to guarantee you will get the information you need when the time comes. It is wise to determine whether the cloud service provider has been able to successfully support investigations in the past.

As it stands now, nearly all regulations were written without keeping cloud computing in mind. A regulator or auditor isn’t likely to be familiar with the nature of running applications and storing data in the cloud. Even so, laws are written to ensure compliance, and the client is held responsible for compliance under the laws of the governing bodies that apply to the location where the processing or storage takes place.

Therefore, you must understand the following:

· Which regulations apply to your use of a particular cloud computing service

· Which regulations apply to the cloud service provider and where the demarcation line falls for responsibilities

· How your cloud service provider will support your need for information associated with regulation

· How to work with the regulator to provide the information necessary regardless of who had the responsibility to collect the data

Traditional service providers are much more likely to be the subject of security certifications and external audits of their facilities and procedures than cloud service providers. That makes a cloud service provider’s willingness to subject its service to regulatory compliance scrutiny an important factor in choosing that provider over another. If a cloud service provider is reluctant to allow, or limits, scrutiny of its operations, it is probably wise to use the service in ways that limit your exposure to risk. For example, although encrypting stored data is always a good policy, you also might want to consider not storing any sensitive information on that provider’s system.

As it stands now, clients must guarantee their own regulatory compliance, even when their data is in the care of the service provider. You must ensure that your data is secure and that its integrity has not been compromised. When multiple regulatory entities are involved, as there surely are between site locations and different countries, then that burden to satisfy the laws of those governments is also your responsibility.

For any company with clients in multiple countries, the burden of regulatory compliance is onerous. While organizations such as the EEC (European Economic Community) or Common Market provide some relief for European regulation, countries such as the United States, Japan, China, and others each have their own sets of requirements. This makes regulatory compliance one of the most actively developing and important areas of cloud computing technology.

This situation is likely to change. On March 1, 2010, Massachusetts passed a law that requires companies that provide sensitive personal information on Massachusetts residents to encrypt data transmitted and stored on their systems. Businesses are required to limit the amount of personal data collected, monitor data usage, keep a data inventory, and be able to present a security plan on how they will keep the data safe. The steps require that companies verify that any third-party services they use conform to these requirements and that there be language in all SLAs that enforce these protections. The law takes full effect in March 2012.

Going forward, you want to ensure the following:

· You have contracts reviewed by your legal staff.

· You have a right-to-audit clause in your SLA.

· You review any third parties who are service providers and assess their impact on security and regulatory compliance.

· You understand the scope of the regulations that apply to your cloud computing applications and services.

· You consider what steps you must take to comply with the demands of regulations that apply.

· You consider adjusting your procedures to comply with regulations.

· You collect and maintain the evidence of your compliance with regulations.

· You determine whether your cloud service provider can provide an audit statement that is SAS 70 Type II-compliant.

The ISO/IEC 27001/27002 standard for information security management systems has a roadmap for mission-critical services that you may want to discuss with your cloud service provider. Amazon Web Services supports SAS70 Type II Audits.

Becoming a cloud service provider requires a large investment, but as we all know, even large companies can fail. When a cloud service provider fails, it may close or more likely be acquired by another company. You likely wouldn’t use a service provider that you suspected of being in difficulty, but problems develop over years and cloud computing has a certain degree of vendor lock-in to it. That is, when you have created a cloud-based service, it can be difficult or often impossible to move it to another service provider. You should be aware of what happens to your data if the cloud service provider fails. At the very least, you would want to make sure your data could be obtained in a format that could be accessed by on-premise applications.

The various attributes of cloud computing make it difficult to respond to incidents, but that doesn’t mean you shouldn’t draw up security incident response policies. Although cloud computing creates shared responsibilities, it is often up to the client to initiate the inquiry that gets the ball rolling. You should be prepared to provide clear information to your cloud service provider about what you consider to be an incident or a breach in security and what are simply suspicious events.

Establishing Identity and Presence

Chapter 4 introduced the concept of identities, some of the protocols that support them, and some of the services that can work with them. Identities also are tied to the concept of accounts and can be used for contacts or “ID cards.” Identities also are important from a security standpoint because they can be used to authenticate client requests for services in a distributed network system such as the Internet or, in this case, for cloud computing services.

Identity management is a primary mechanism for controlling access to data in the cloud, preventing unauthorized uses, maintaining user roles, and complying with regulations. The sections that follow describe some of the different security aspects of identity and the related concept of “presence.” For this conversation, you can consider presence to be the mapping of an authenticated identity to a known location. Presence is important in cloud computing because it adds context that can modify services and service delivery.

Cloud computing requires the following:

· That you establish an identity

· That the identity be authenticated

· That the authentication be portable

· That authentication provide access to cloud resources

When applied to a number of users in a cloud computing system, these requirements describe systems that must provision identities, provide mechanisms that manage credentials and authentication, allow identities to be federated, and support a variety of user profiles and access policies. Automating these processes can be a major management task, just as they are for on-premises operations.

Identity protocol standards

The protocols that provide identity services have been and are under active development, and several form the basis for efforts to create interoperability among services.

OpenID 2.0 is the standard associated with creating an identity and having a third-party service authenticate the use of that digital identity. It is the key to creating Single Sign-On (SSO) systems. Some cloud service providers have adopted OpenID as a service, and its use is growing.

In Chapter 4, you learned how OpenID is associated with contact cards such as vCards and InfoCards. In that chapter, I briefly discussed how OpenID provides access to important Web sites and how some Web sites allow you to use your logins based on OpenID from another site to gain access to their site.

OpenID doesn’t specify the means for authentication of an identity, and it is up to the particular system how the authentication process is executed. Authentication can be by a challenge-and-response protocol such as CHAP (the Challenge-Handshake Authentication Protocol), through a physical smart card, or through a biometric measurement such as a fingerprint or iris scan. In OpenID, the authentication procedure has the following steps:

1. The end-user uses a program like a browser that is called a user agent to enter an OpenID identifier, which is in the form of a URL or XRI.

An OpenID might take the form of name.openid.provider.org.

2. The OpenID is presented to a service that provides access to the resource that is desired.

3. An entity called a relying party queries the OpenID identity provider to authenticate the veracity of the OpenID credentials.

4. The authentication result is sent back to the relying party from the identity provider, and access is either granted or denied.

According to a report by one of OpenID’s directors, “OpenID 2009 Year in Review” by Brian Kissel (http://openid.net/2009/12/16/openid-2009-year-in-review/), there were over 1 billion OpenID accounts accepted by 9 million sites on the Internet.

The second protocol used to present identity-based claims in cloud computing is a set of authorization markup languages that create files in the form of XACML and SAML. These protocols were described in Chapter 4 in detail, so I only mention them in passing here. SAML (Security Assertion Markup Language; http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=security) is gaining growing acceptance among cloud service providers. It is a standard of OASIS and an XML standard for passing authentication and authorization data between an identity provider and the service provider. SAML is a complementary mechanism to OpenID and is used to create SSO systems.

Taken as a unit, OpenID and SAML are being positioned to be the standard authentication mechanism for clients accessing cloud services. It is particularly important for services such as mashups that draw information from two or more data services.

An open standard called OAuth (http://oauth.net/) provides a token service that can be used to present validated access to resources. OAuth is similar to OpenID, but provides a different mechanism for shared access. The use of OAuth tokens allows clients to present credentials that contain no account information (user ID or password) to a cloud service. The token comes with a defined period after which it can no longer be used. Several important cloud service providers have begun to make OAuth APIs available based on the OAuth 2.0 standard, most notably Facebook’s Graph API and the Google Data API.
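As a minimal illustration of what token-based access looks like from the client side, the sketch below obtains an OAuth 2.0 access token and presents it as a bearer credential. The endpoint URLs and client identifiers are hypothetical placeholders, not any real provider’s API.

```python
# Sketch of OAuth 2.0 token-based access from a client's point of view.
# The token, not a user ID or password, is what reaches the cloud service,
# and it expires after the period the provider grants.
# All URLs and credentials below are hypothetical placeholders.
import requests

TOKEN_URL = "https://auth.example.com/oauth2/token"
API_URL = "https://api.example.com/v1/orders"

# 1. Exchange client credentials for a short-lived access token.
token_response = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "client_credentials",
        "client_id": "my-app-id",
        "client_secret": "my-app-secret",
    },
)
token = token_response.json()["access_token"]

# 2. Present the token as a bearer credential; no account secrets are sent.
orders = requests.get(API_URL, headers={"Authorization": f"Bearer {token}"})
print(orders.status_code, orders.json())
```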

The DataPortability Project (http://dataportability.org/) is an industry working group that promotes data interoperability between applications, and the group’s work touches on a number of the emerging standards mentioned in this section. The group’s Web site is shown in Figure 12.6.

FIGURE 12.6 The home page of the DataPortability Project, an industry working group that promotes open identity standards

A number of vendors have created server products, such as Identity and Access Managers (IAMs), to support these various standards.

Windows Azure identity standards

The Windows Azure Platform uses a claims-based identity based on open authentication and access protocols and is a good example of a service implementing the standards described in the previous section. These standards may be used without modification on a system that is running in the cloud or on-premises, in keeping with Microsoft’s S+S (software plus services) approach to cloud computing.

Windows Azure security draws on the following three services:

· Active Directory Federation Services 2.0

· Windows Azure AppFabric Access Control Service

· Windows Identity Foundation (WIF)

The Windows Identity Foundation offers .NET developers Visual Studio integration of WS-Federation and WS-Trust open standards. ASP.NET Web applications created with WIF integrate the Windows Communication Foundation SOAP service (WCF-SOAP) into a unified object model. This allows WIF to have full access to the features of WS-Security and to work with tokens in the SAML format.

WIF relies on third-party authentication and accepts authentication requests from these services in the form of a set of claims. Claims are independent of where a user account or application is located, thus allowing claims to be used in single sign-on systems (SSO). Claims support both simple resource access and the Role Based Access Control (RBAC) policies that can be enforced by Windows group policies.

Active Directory Federation Services 2.0 (AD FS) is a Security Token Service (STS) that allows users to authenticate their access to applications both locally and in the cloud with a claims-based identity. Anyone who has an account in the local Windows directory can access an application; AD FS creates and retains trust relationships with federated systems. AD FS uses WS-Federation, WS-Trust, and SAML, which allows users to access a system based on IBM, Novell, SAP, and many other vendors.

The final piece of the Windows Azure Platform claims-based identity system is built directly into the AppFabric Access Control (AC) service. You may recall from Chapter 10 that AppFabric is a service bus for Azure components that supports REST Web services. Included in AppFabric are authentication and claims-based authorization access. These can be simple logons or more complex schemes supported by AD FS. AC allows authorization to be located anywhere and allows developers to separate identity from their application.

The claims-based identity in AC is based on the OAuth Web Resource Authorization Protocol (OAuth WRAP), which works with various REST APIs. The OAuth 2.0 protocol seems to be gaining acceptance in the cloud computing industry, because SAML tokens can be accepted by many vendors.

Presence

Presence is a fundamental concept in computer science. It is used on networks to indicate the status of available parties and their location. Commands like the WHO command in Linux that list users logged into the network go all the way back to the first network operating systems.

Presence provides not only identity, but status and, as part of status, location. The status is referred to as the presence state, the identity is the presentity, and the service that manages presence is called the presence service. Many presence services rely on agents called watchers, which are small programs that relay a client’s ability to connect. Among the cloud computing services that rely on presence information are telephony systems such as VoIP, instant messaging services (IM), and geo-location-based systems such as GPS. Presence is playing an important role in cell phones, particularly smart phones.

When you access an application such as AroundMe on the Apple iPhone, which lists businesses, services, and restaurants in your vicinity, you are using an example of a presence service. Figure 12.7 shows the AroundMe app with some sample results. The presence service is provided by the GPS locator inside the phone, which provides a location through AT&T (the service provider) to the application. Presence is an essential and growing component of cloud-based services, and it adds a tremendous amount of value to the ubiquity that a cloud network offers.

FIGURE 12.7 The AroundMe iPhone app is an example of an application that makes use of a presence service.

As cloud computing becomes more pervasive and vendors attempt to create federated systems, emerging presence services will become more important. Microsoft’s Windows Identity Foundation (described in the previous section) created under the Geneva Framework project is one example of an attempt to create a claims-based presence system. WIF allows different systems to interoperate using a variety of authentication methods, including LDAP and Active Directory, OpenID, LiveID, Microsoft CardSpace, and Novell Digital Me.

The Internet Engineering Task Force (IETF) has developed a standard called the Extensible Messaging and Presence Protocol (XMPP) that can be used with a federation system called the Jabber Extensible Communications Platform (Jabber XCP) to provide presence information. Among the services that use Jabber XCP are the Defense Information Systems Agency (DISA), Google Talk, Earthlink, Facebook, the National Weather Service, Twitter, the U.S. Marine Corps, and the U.S. Joint Forces Command (USJFCOM).

Jabber XCP is popular because it is an extensible development platform that is platform-independent and supports several communications protocols, such as the Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE) and Instant Messaging and Presence Service (IMPS).

The problem with applying presence services over standard Service Oriented Architecture (SOA) protocols such as SOAP/REST/HTTP is that these protocols support only unidirectional data exchange: you request a service or data, and a response is supplied. SOA architectures don’t scale well and can’t supply the high-speed data transfers required by the collaboration services that are based on presence service technologies. SOA services also have trouble penetrating firewalls. It was these barriers that Jabber and XMPP were created to solve, and you will find that protocol incorporated into a number of cloud computing SaaS services. AOL, Apple, Google, IBM, and others are using this technology in some of their applications today.

Summary

In this chapter, you learned about many of the issues concerning security and cloud computing. This is a rapidly changing area of great importance to anyone considering deploying systems or storing data in the cloud. Security may, in fact, be the single most important area of cloud computing that you need to plan for. Depending upon the type of service you use, you may find many security services already built into your system.

An issue you need to consider is how to protect data both in transit to and stored in the cloud. Data encryption, access restrictions, and data protection services were described. Multi-tenancy, system virtualization, and other factors make monitoring, regulatory compliance, and incident response more challenging than they are for on-premises systems. A key concept for controlling access to cloud resources is identity management.

In Chapter 13, the standards used to create cloud systems are described. These standards create what is called the Service Oriented Architecture. You also learn how cloud systems can be built from many different interoperable modular parts.

Chapter 14
Ten Design Principles for Cloud Applications

Those in the know will tell you that you have to use the right tool for the job. For the new generation of webscale applications like Pinterest, AWS is the right tool. Overlooked in that truism is the undeniable fact that using a tool effectively requires having the right skills. With respect to AWS, the right skills involve aligning your application design with AWS’s operational characteristics. It’s critical to get the application design right — so here are ten design principles to help you get your alignment straight.

Everything Fails All the Time

The truism “Everything fails all the time” is adapted from Werner Vogels, the chief technology officer of Amazon. IT departments have traditionally attempted to render both infrastructure and applications impervious to failure: A hardware resource or an application component that “fell down on the job” increased the urgency of the search for perfection in order to banish failure. Unfortunately, that search was never successful — the failure of resources and applications has been part of the IT world from the beginning.

Amazon starts from a different perspective, borne of its experience as the world’s largest online retailer and as one of the largest webscale companies worldwide. When you run data centers containing thousands of servers and tens of thousands of disk drives, resource failure is a daily occurrence. And when a hardware resource fails, the software or data residing on that resource suddenly stops working or becomes unavailable.

Neither can you count on the smooth and continuous functioning of software components or external services — they fail, too. An element of a software package configuration or an unforeseen program execution path or an excessive load on an external service means that, even if hardware continues operating properly, portions of an application can fail.

Therefore, the single most important cloud application design principle is to acknowledge that the perfect system doesn’t exist — that failure is a constant companion. Rather than become frustrated by this state of affairs, you should recognize this principle and embrace it. Having recognized that failure is inevitable, be sure to adopt cloud application measures to mitigate circumstances and insulate yourself from failure. The rest of this chapter is all about insulating yourself from failure — so read on!

Redundancy Protects Against Resource Failure

If you can’t count on individual resources to always work properly, what can you do? The best insurance against resource failure is to use redundant resources, managed in such a way that if a single resource fails, the remaining resource (or resources — you can have more than one additional resource in a redundant design) can seamlessly pick up the workload and continue operating with no interruption.

Amazon has adopted this principle in its AWS offering. Many of its services use redundant resources. For example, every S3 object has three copies, each stored on a different machine. Likewise, the AWS Queue service spreads user queues across multiple machines, using redundancy to maintain availability.

Design your applications to operate with two (or more!) instances at each tier in the application. Every tier should be cross-connected to all instances in any adjacent tier. In this way, if a resource (either hardware or software) becomes unavailable, the remaining resources can accept all of the application traffic and maintain application availability.

Of course, if resource failure brings your application to a state in which only a single resource is still operating at a given tier, redundancy is no longer protecting you — launch a new resource to ensure that redundancy is retained. I address this state of affairs later in this chapter.

Geographic Distribution Protects Against Infrastructure Failure

Okay, you recognize the need to protect yourself against resource failure, whether it’s hardware or software, and you resolve to use multiple instances to avoid application failure in the event of a server crash, disk breakdown, or even software or service unresponsiveness. But that still doesn’t help if a problem occurs at a higher level, such as the entire data center in which your application runs going dark from a power outage or natural disaster.

Well, just as you use redundancy at the individual-component level, you use redundancy at the data-center level to avoid this problem. Rather than run your application on multiple instances within a single data center, you run those instances in different data centers. Fortunately, Amazon makes it easy with its Regional Availability Zone architecture. Every region has at least two availability zones, which are essentially separate data centers, to provide higher-level redundancy for applications.

Availability zones are located far enough apart to be resistant to natural disasters, so even if one is knocked off the air by a storm or an earthquake, another one remains operating so that you can continue to run your application. And availability zones are connected by high-speed network connections to ensure that your application’s performance doesn’t suffer if it spans multiple availability zones.
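A minimal sketch of this idea with the boto3 SDK (my choice of tooling, not the book’s) launches one instance in each of two availability zones in the same region; the AMI ID and instance type are placeholders.

```python
# Sketch: spread redundant instances across two availability zones so a
# data-center-level failure takes out at most one copy of each tier.
# The AMI ID and instance type are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

for zone in ("us-east-1a", "us-east-1b"):
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
        Placement={"AvailabilityZone": zone},   # pin each copy to its own AZ
    )
```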

Monitoring Prevents Problems

Redundancy is good, and it’s important to avoid a situation in which your application, once neat and tidy with redundant resources, becomes nonredundant through the failure of a redundant resource. The question then is: how do you know when the formerly neat-and-tidy redundant application is no longer so because of a failure?

The answer is that you keep an eye on the application to determine when resource failure occurs. Now, one way to do this is to station someone at the AWS Management Console to click a mouse button continually in order to refresh the display. Of course, this method has two drawbacks:

· The button clicker will become incredibly bored.

· It’s a huge waste of money because you’ll pay an experienced (and expensive) operations person to mindlessly click mouse buttons.

A much more efficient mode of operation is to have the system itself tell you when something fails — a process known as monitoring. You set up an automated resource to take the place of a human, and whenever something important happens, it notifies you (alerts you, in other words). Automated monitoring has two virtues:

· Computers don’t get bored, so watching over systems endlessly doesn’t faze them in the least.

· It’s cheap! You don’t pay salaries to computers.

Fortunately, AWS offers two excellent services to support automated monitoring:

· CloudWatch: You can set it up to monitor many AWS resources, including EC2 instances, EBS volumes, SQS queues, and more. CloudWatch is free for certain capabilities, and it’s inexpensive for additional capabilities. (For more on CloudWatch, see Chapter 10.)

· Simple Notification Service (SNS): It can deliver alerts to you via e-mail, SMS, and even HTTP so that you can publish alerts to a web page. You can easily wire CloudWatch into SNS so that alerts from CloudWatch are automatically and immediately delivered to you, thereby enabling you to take quick action to resolve system deficiencies, including resource failure resulting in a lack of redundancy. (For more on SNS, see Chapter 8.)

Monitoring is a critical companion to redundant application design, and I encourage you to integrate it into your application from the get-go.
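As a sketch of how little code that wiring takes with the boto3 SDK (an assumption on my part; the book works through the console and its own chapters), the example below creates an SNS topic, subscribes an e-mail address, and attaches a CloudWatch alarm to an EC2 instance’s CPU metric. The instance ID and e-mail address are placeholders.

```python
# Sketch: wire CloudWatch into SNS so a human is alerted automatically.
# The instance ID and e-mail address are hypothetical placeholders.
import boto3

sns = boto3.client("sns", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# 1. Create a topic and subscribe an operator's e-mail address to it.
topic_arn = sns.create_topic(Name="ops-alerts")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops@example.com")

# 2. Alarm when average CPU on an instance stays above 80% for 10 minutes.
cloudwatch.put_metric_alarm(
    AlarmName="web-1-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                 # seconds per datapoint
    EvaluationPeriods=2,        # two consecutive periods = 10 minutes
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[topic_arn],   # publish to SNS when the alarm fires
)
```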

Utilization Review Prevents Waste

It’s an unfortunate fact that many, many AWS users fail to keep track of the resources they use, which can lead to underused, or even unused, resources running in AWS.

This problem is significant because AWS resources continue to run up charges, even if the resources aren’t performing useful work. An entire chapter of this book (Chapter 11) is devoted to discussing this problem and providing guidance about how to avoid it. Here’s the short version of my advice:

· Use the AWS Trusted Advisor service or a commercial utilization and cost tracking service like Cloudyn (which kindly provided the fascinating utilization information discussed in Chapter 11).

· Design your application so that it can have individual resources added or subtracted so that resource utilization rates stay high and resources don’t sit around idle or lightly utilized.

· Use AWS EC2 reserved instances to reduce the cost of the computing side of your application.

· Regularly review your AWS bills to see if there are resources or applications being used that you don’t know about — and then go find out about them!

Again, for a more in-depth look at how to effectively manage your resources, check out Chapter 11.
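A small script can handle the “go find out about them” step. The sketch below, using the boto3 SDK (again my own choice of tooling), lists running EC2 instances whose average CPU over the past day is low enough to suggest they are idle; the 5% threshold is an arbitrary illustration.

```python
# Sketch: flag running EC2 instances that look idle (average CPU below a
# threshold over the last 24 hours) so someone can decide whether to stop them.
# The 5% threshold is an arbitrary illustration, not a recommendation.
from datetime import datetime, timedelta
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

now = datetime.utcnow()
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=now - timedelta(days=1),
            EndTime=now,
            Period=3600,            # one datapoint per hour
            Statistics=["Average"],
        )["Datapoints"]
        if stats:
            avg_cpu = sum(p["Average"] for p in stats) / len(stats)
            if avg_cpu < 5.0:
                print(f"{instance_id}: average CPU {avg_cpu:.1f}% -- possibly idle")
```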

Application Management Automates Administration

In the earlier section “Monitoring Prevents Problems,” I point out that, rather than dedicate a person’s efforts to monitoring an application 24/7, monitoring and alerts allow the system to track an application’s behavior and then notify a human that intervention is required.

The drawback to this setup is that you still need a human to implement the intervention.

Wouldn’t it be cool if no human was required in order to take action, based on the specific situation? The good news is that AWS management systems have this capability. Amazon offers three: CloudWatch, Auto Scaling, and Elastic Beanstalk, all discussed in Chapter 10. And commercial offerings have management capability that extends beyond the type that Amazon itself offers.

Common to all these management systems is a set of monitoring capabilities, along with the ability to execute AWS instructions to perform tasks such as restarting resources when a failure occurs or adding resources to an application when user load grows enough to require more computing capacity.
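
The following sketch shows one way such automated action can be expressed with boto3: an Auto Scaling group that keeps a minimum number of instances running and a policy that adds capacity when an attached CloudWatch alarm fires. The group name, launch configuration name, and Availability Zones are hypothetical placeholders, and the returned policy ARN would be attached to a CloudWatch alarm as its action.

import boto3

autoscaling = boto3.client('autoscaling')

# Keep at least two web-tier instances running, up to a maximum of eight
# (names and Availability Zones are placeholders for illustration).
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName='web-tier',
    LaunchConfigurationName='web-tier-lc',
    MinSize=2,
    MaxSize=8,
    DesiredCapacity=2,
    AvailabilityZones=['us-east-1a', 'us-east-1b'],
)

# Add one instance whenever the CloudWatch alarm that references this
# policy's ARN goes into the ALARM state.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName='web-tier',
    PolicyName='scale-out-on-high-cpu',
    AdjustmentType='ChangeInCapacity',
    ScalingAdjustment=1,
    Cooldown=300,
)
print(policy['PolicyARN'])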

As applications become more complex, sophisticated management systems are practically a prerequisite. Trying to track and respond to application issues thrown up by a six- or seven-tiered application that uses a number of AWS services is quite challenging. It makes sense to seek out tools to help reduce the burden.

Security Design Prevents Breaches and Data Loss

The number-one concern expressed about cloud computing in general, and AWS in particular, is security. This area of concern focuses on AWS itself, with common questions raised about how well Amazon manages its data center security measures or to what extent Amazon can prevent its personnel from improperly accessing user systems. (Answers: very well, probably better than most IT organizations can do themselves, and nothing can prevent someone from improperly using her administrative permissions, although Amazon has measures in place to monitor improper access.)

Unfortunately, the focus on Amazon’s security is misplaced. First, as just noted, Amazon does a good job of securing its offering, at least as well as the best in the industry and certainly better than most. Second (and this point is crucial), users retain significant responsibility for their application’s security when using AWS, and user security shortcomings account for by far the largest percentage of security issues within the AWS environment.

You must recognize your security responsibility and take measures to implement and support it. Your application design can help prevent security breaches and potential access to critical data. Though Chapter 7 covers these issues quite thoroughly (and I encourage you to read it at your earliest opportunity), here are some guidelines, boiled down to the basics:

· Use multiple security groups to partition your application. Doing so ensures that malicious actors cannot gain direct access to application logic and data. Methods to implement partitioned security groups are discussed in Chapter 7, so look there for details.

· Use Amazon Virtual Private Cloud (VPC) to shield EC2 instances that don’t require external access from the Internet. VPC is an outstanding way to increase application security and will become the default operating environment for AWS, so learn how to use it.

· Implement the specific application security measures outlined in Chapter 7. Patch software packages quickly, implement intrusion-prevention software, and manage security keys carefully.
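
As a minimal sketch of the first guideline, the boto3 calls below create separate security groups for a web tier and an application tier, opening the app tier only to traffic that originates from the web tier’s group. The VPC ID, group names, and port numbers are placeholders chosen for illustration.

import boto3

ec2 = boto3.client('ec2')

vpc_id = 'vpc-0123456789abcdef0'   # hypothetical VPC ID

web_sg = ec2.create_security_group(
    GroupName='web-tier-sg',
    Description='Web tier: accepts HTTPS from the Internet',
    VpcId=vpc_id,
)['GroupId']

app_sg = ec2.create_security_group(
    GroupName='app-tier-sg',
    Description='App tier: accepts traffic only from the web tier',
    VpcId=vpc_id,
)['GroupId']

# Web tier is reachable on port 443 from anywhere.
ec2.authorize_security_group_ingress(
    GroupId=web_sg,
    IpPermissions=[{
        'IpProtocol': 'tcp', 'FromPort': 443, 'ToPort': 443,
        'IpRanges': [{'CidrIp': '0.0.0.0/0'}],
    }],
)

# App tier is reachable on port 8080 only from members of the web tier group.
ec2.authorize_security_group_ingress(
    GroupId=app_sg,
    IpPermissions=[{
        'IpProtocol': 'tcp', 'FromPort': 8080, 'ToPort': 8080,
        'UserIdGroupPairs': [{'GroupId': web_sg}],
    }],
)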

Encryption Ensures Privacy

One concern that potential users of AWS often raise centers on what can be done to prevent inappropriate data access by AWS personnel. The answer is “nothing.” The best-designed security systems in the world have too often fallen victim to malicious insiders. Amazon screens its employees and methodically tracks all employee access to AWS infrastructure, but the simple truth is that it would always be possible — at least theoretically — for an Amazon employee to access your data, whether on disk or during transit across the AWS network.

Does this information imply that someone with a clear need to avoid even a theoretically possible access breach is out of luck when it comes to using AWS? No, not at all.

Rather than attempt to prevent access to the resources on which your important data is stored or transmitted, follow this approach: Recognize the potential for such access, and make it useless if it occurs. The way to do this is to make the data itself useless via encryption. With data privately encrypted by the user and available only to those with the private key associated with the data encryption, it doesn’t matter whether Amazon personnel attempt to access the data — it looks like meaningless gobbledygook from the perspective of the intruder.

This approach to security — data encryption — can be applied in two ways to protect data security:

· Encrypt network traffic. Network traffic, often referred to as data in transit, can easily be encrypted using the Secure Sockets Layer (SSL). SSL ensures that no one can gain useful information by intercepting network traffic, and the same approach protects traffic that crosses the Internet from outside intruders.

· Encrypt data residing on storage. Data residing on storage is commonly called data at rest; encrypting it means that data is written to and read from disk only in encrypted form. The private keys needed to decrypt the data can be held securely on your own premises, preventing access to your data by any Amazon personnel.

With these two measures, along with the security-design measures mentioned in the previous section, you can make your application as secure as possible, and certainly as secure as it would be running in your own data center.
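
Here is a hedged sketch of the data-at-rest approach using boto3 together with the third-party cryptography package (an assumption on my part; any client-side encryption library would do). The object is encrypted locally with a key that never leaves your premises, so only ciphertext reaches Amazon; the bucket and object names are placeholders.

import boto3
from cryptography.fernet import Fernet

# Key generated and kept on your own premises; it is never sent to AWS.
key = Fernet.generate_key()
fernet = Fernet(key)

plaintext = b'contents of a sensitive report...'
ciphertext = fernet.encrypt(plaintext)

# Only the ciphertext reaches Amazon; without the key it is unreadable.
s3 = boto3.client('s3')
s3.put_object(Bucket='example-bucket', Key='reports/q1.enc', Body=ciphertext)

# Later, download the object and decrypt it locally with the same key.
obj = s3.get_object(Bucket='example-bucket', Key='reports/q1.enc')
recovered = fernet.decrypt(obj['Body'].read())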

Tier-Based Design Increases Efficiency

I mention multi-tier application design several times throughout this book, noting, for example, that a tiered design makes it possible to improve security by partitioning security groups.

It may not be as obvious that a tier-based application design, particularly one that uses redundant, scalable tiers (tiers that can grow and shrink through the addition or removal of instances), can also improve the efficiency of your application.

The reason is that tiered, scalable applications can adjust the number of computing resources assigned to an application, growing and shrinking dynamically in response to user load. This approach ensures that all running resources are actively supporting user traffic rather than sitting idle just in case the load grows enough to require them.

Moreover, partitioning your application into tiers allows you to work on improving one portion of it while leaving the rest undisturbed. You can improve the efficiency of the entire application while methodically moving through the tiers, improving performance and reducing resource consumption one tier at a time.

Even if your application begins life as a single instance, with all software packages contained in a single, integrated code base, you should design it so that portions can later be removed and moved to other tiers. This approach supports incremental, gradual improvement, reducing resource consumption over time.

Good Application Architecture Prevents Technical Debt

Technical debt is a concept from computer science: software code implemented earlier in a project’s lifespan ends up poorly written and inefficient. Technical debt, like its financial counterpart, imposes a cost and hampers efficiency.

The obvious way to reduce technical debt is to periodically revisit and rethink application design and implementation, with an eye toward updating the design and reimplementing important portions of the code.

The most effective method for completing this task is to have all portions of the application designed with an input-and-output interface that defines how an application portion (or package) is called by others and how it calls on other application portions to fulfill their responsibilities. When you use this design approach, different components or portions of an application can be updated or replaced without disturbing the other portions of the application or the overall application itself — as long as the interface “contracts” are adhered to (in other words, the interface operates as advertised).
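
To illustrate the interface “contract” idea, here is a small Python sketch; the class and method names are hypothetical. Callers depend only on the abstract interface, so the backing implementation can later be replaced, say with a DynamoDB-backed version, without touching any other part of the application.

from abc import ABC, abstractmethod

class OrderStore(ABC):
    """The interface 'contract': callers rely only on these two methods."""

    @abstractmethod
    def save(self, order_id: str, payload: dict) -> None: ...

    @abstractmethod
    def load(self, order_id: str) -> dict: ...

class InMemoryOrderStore(OrderStore):
    """A simple starting implementation, suitable for early development."""

    def __init__(self):
        self._data = {}

    def save(self, order_id, payload):
        self._data[order_id] = payload

    def load(self, order_id):
        return self._data[order_id]

# Later, a class backed by a database or an AWS service can implement the
# same OrderStore contract and be swapped in without changing its callers.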

Updating an application’s functionality as needed is easier when the section containing that functionality can be modified without disturbing other portions of the application. Without this approach, an application consisting of a large, commingled code base is nearly impossible to modify, if for no other reason than that no single software engineer is likely to understand every portion of the application’s design or code.

So when you move forward with your AWS applications (inspired and guided by this book, I hope), concentrate on partitioning the application into tiers and ensure that each partition has good input and output interfaces defined to enable you to avoid the dreaded technical debt.
