Data Loss Prevention (DLP) is a solution for detecting and preventing the exposure of sensitive data. Organizations use DLP to safeguard and protect data as well as to adhere to legislation. Businesses transmit sensitive data across their networks to partners, clients, remote workers, and other authorized users, but occasionally an unauthorized user may be able to intercept it.
Organizations need to protect sensitive data due to multiple industry and government regulations such as HIPAA and PCI-DSS.
Why your organization needs data loss prevention
Today’s digital transformation, which started with mobile devices and continued through embedded systems, social media applications, hypervisors, and the proliferation of connected devices, has produced a “borderless” network perimeter with numerous attack vectors.
Organizations need to make sure that their most sensitive data and assets are secured in order to adapt to this technological transformation. When implemented correctly, DLP offers visibility, granular control, and data security coverage to defend against human error-related data loss and external threats. The creation of a thorough data loss prevention strategy shouldn’t be put off; it may assist your business in safeguarding its “crown jewels,” ensuring compliance with the changing regulatory environment, and preventing the publication of the next data breach story.
You don’t know where the private information of your business is kept, where it is sent, or who is accessing it.
DLP technology gives IT and security staff a complete picture of where data is located, how it moves through the organization, and how it is being used. It lets you protect and maintain control over sensitive data, such as customer information, personally identifiable information (PII), financial information, and intellectual property, by comparing network activity against your organization’s security policies. With a complete grasp of this data, your firm can decide which assets need to be protected, and at what cost, and then develop the right rules to safeguard them.
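The pattern-matching idea behind DLP content inspection can be sketched in a few lines of Python. The regexes and labels below are illustrative only; production DLP engines add validators, dictionaries, and machine-learning classifiers on top of simple patterns.

```python
import re

# Illustrative detection patterns (not a real product's rule set).
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan(text):
    """Return the list of sensitive-data types found in the text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

findings = scan("Contact jane@example.com, SSN 123-45-6789")
print(findings)  # ['ssn', 'email']
```

A real deployment would run a scanner like this against files, email bodies, and network flows, and feed matches into the policy engine that decides whether to block, quarantine, or alert.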
Although your business has a plan in place to guard against external intrusion, it does not cover employee theft or the unintentional disclosure of sensitive data by partners and employees.
Data loss does not always result from outside, hostile attacks. Internal employees accidentally disclosing or improperly handling confidential information is a major factor: according to Verizon’s 2018 Data Breach Investigations Report, insiders were involved in 28 percent of attacks. Insider threats can be particularly challenging to protect against because it is difficult to tell when someone is abusing their legitimate access to data. DLP can identify files containing confidential information and stop them from leaving the network, enforce policies that protect data on an as-needed basis, and block transfers of sensitive data to USB devices and other removable media.
For instance, access to a particular endpoint may be blocked immediately when a security event is detected. Policies may also quarantine or encrypt data in response to incidents.
The responsibility, adverse exposure, penalties, and lost revenue linked to data breaches worry you.
Data breaches have been in the news alarmingly often. Through fines, negative publicity, the loss of important clients, and legal action, they can wreak financial havoc on an organization. According to the Ponemon Institute’s 2017 Cost of Data Breach Study, the mean time to identify (MTTI) a breach has reached an average of 191 days, nearly six months of dwell time for attackers. Long dwell times enable lateral movement, which greatly boosts attackers’ chances of success.
You’re worried about your next audit and want to stay compliant with intricate regulations.
Regulations like the GDPR and the New York Cybersecurity Requirements are ushering in a new era of accountability: every regulated firm that collects, stores, and uses sensitive customer data must raise the bar to meet the new standards. Failure to comply may result in fines of up to 4% of annual global turnover and orders to stop processing. In some cases, technological controls are becoming essential to achieving compliance. DLP offers these controls, together with policy templates and mappings that cover specific requirements, streamline compliance, and enable the gathering and reporting of metrics.
Data must be safeguarded from security risks brought on by BYOD and IoT.
DLP assists in preventing the unintentional disclosure of sensitive data across all devices when used in conjunction with complementing safeguards. DLP can monitor data and dramatically lower the risk of data loss wherever it resides, whether it is in use, at rest in storage, or in transit over the network.
Types of DLP Solutions
An organization can lose data in a number of ways, and a DLP solution should be able to recognize the many paths by which sensitive data may leave the organization. The main DLP solution types are:
Endpoint DLP: Monitors data on the network’s devices. This solution is installed to monitor and safeguard the data stored on endpoints such as laptops, servers, smartphones, and printers. Endpoint DLP protects the data on an endpoint even when it is outside the corporate network or connected to a public network, and it can also block transfers of sensitive data to USB drives.
Network DLP: Deployed on the network to track data transfers. It can monitor, safeguard, and block incoming and outgoing data for any device linked to the network, and DLP policies can be applied to all network-connected devices. It cannot protect data on offline devices; it only secures data on devices that are connected to the network.
Email DLP: Monitors emails and filters them based on particular keywords, reducing email-based data leaks.
Cloud DLP: Monitors and safeguards data kept in the cloud. Emails, documents, and other types of files can all be protected and monitored by the service.
Techniques needed for your data loss prevention program
Determine the primary data protection objective in order to determine the appropriate DLP solution for the organization.
Implement a centralized DLP program and collaborate with departments and business units to define standard DLP rules that govern data for the organization. This will increase data visibility throughout the organization.
Evaluate the different forms of data and their importance to the company. Determine the type of each data set, whether it is sensitive, and where it is stored, and consider the data exit points. Then assess the risk to the organization if each type of data were compromised.
Create a method for classifying data that covers both structured and unstructured information. Categories may include internal, private, public, personally identifiable information (PII), and intellectual property.
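A classification scheme like this can be modelled as a simple mapping from detected content labels to tiers. The tiers and trigger labels below are hypothetical examples for illustration, not a standard taxonomy.

```python
# Hypothetical tiers, strictest first; a real scheme comes from the
# organization's data governance policy.
RULES = [
    ("restricted",   {"ssn", "credit_card"}),   # PII / payment data
    ("confidential", {"salary", "contract"}),   # business-sensitive
    ("internal",     {"roadmap", "memo"}),
]

def classify(labels):
    """Map a set of content labels to the strictest matching tier."""
    for tier, triggers in RULES:
        if labels & triggers:
            return tier
    return "public"

print(classify({"ssn", "memo"}))  # restricted (strictest rule wins)
print(classify({"roadmap"}))      # internal
print(classify(set()))            # public
```

Keeping the rules ordered strictest-first ensures a document containing mixed content inherits the highest applicable sensitivity.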
Create policies for handling and remediating each class of data. DLP software ships with pre-configured rules based on regulations such as GDPR and HIPAA, and these rules can be tailored to the company’s requirements. Build granular, fine-tuned controls to reduce the specific risks to each kind of data.
Employee education lowers the chance of insiders accidentally leaking data. A good data loss prevention program depends heavily on employees understanding security standards. Awareness campaigns and training, such as posters, emails, online courses, and seminars, can improve employee understanding of, and adherence to, data security policies and best practices.
Use indicators such as the number of incidents, the mean time to incident response, and the proportion of false positives to gauge how effective your DLP program is.
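As a sketch, these indicators are straightforward to compute from incident records; the record format below is invented purely for illustration.

```python
def dlp_metrics(incidents):
    """Compute simple effectiveness indicators from incident records.

    Each record is a tuple: (response_minutes, was_false_positive).
    """
    n = len(incidents)
    mttr = sum(minutes for minutes, _ in incidents) / n   # mean time to respond
    fp_rate = sum(1 for _, fp in incidents if fp) / n     # false-positive share
    return {"events": n, "mttr_minutes": mttr, "false_positive_rate": fp_rate}

stats = dlp_metrics([(30, False), (90, True), (60, False), (120, True)])
print(stats)  # {'events': 4, 'mttr_minutes': 75.0, 'false_positive_rate': 0.5}
```

Tracking these numbers over time shows whether rule tuning is reducing noise without slowing response.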
A company’s security depends heavily on having the right cyber security platforms and solutions in place. Any firm can use DLP to stay ahead of threat actors, whether they are internal or external. Every business, especially banks and healthcare companies, must prioritize protecting sensitive consumer and corporate data. At Encryption Consulting, we place the utmost importance on cyber security. We work with organizations to create the most secure environment possible using methods such as DLP, Public Key Infrastructure (PKI), and encryption assessments. We provide assessment, implementation, and development services for PKI, encryption, and Hardware Security Modules (HSMs). If you have any questions, visit our website at www.encryptionconsulting.com.
Data encryption, especially on the Cloud, is an extremely important part of any cybersecurity plan in today’s world. More and more companies are migrating their data to the Cloud for its ease of use, reduced cost, and better security. The most prominent Cloud Service Providers (CSPs), like Google, Azure, and Amazon, all have different data encryption methods, but they are all secure and user-friendly.
How Data Encryption on the Cloud works
Cloud data exists in two states: in transit and at rest.
Data-in-transit encryption refers to using SSL or TLS to create a security “wrapper” around the data being moved. This ensures that it is more challenging to steal data-in-transit, but even if it were successfully stolen, it would be a confusing block of characters that would not make sense to the attacker. Most data-in-transit encryption is done through web browsers or FTP clients, so it does not need to be as managed as data-at-rest. Data-at-rest encryption is done when data is on a disk or another storage method. Similar to data-in-transit encryption, the data is jumbled into a random series of characters to stop attackers from stealing the plaintext.
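In Python, for example, the standard `ssl` module applies sensible in-transit defaults. The snippet below shows a client-side context with certificate verification enabled; the explicit TLS 1.2 floor is our own hardening choice and may already be the default in newer Python versions.

```python
import ssl

# Library defaults: certificate verification and hostname checking on,
# known-insecure protocol versions disabled.
ctx = ssl.create_default_context()

# Explicitly refuse anything older than TLS 1.2.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```

An application would then pass `ctx` to its HTTP or socket client so every connection carries this "wrapper" automatically.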
CSPs provide data encryption to the user in many different ways. Data can be encrypted by default if the user enables that option, and each part of a Cloud platform handles encryption differently. Some services encrypt all the data they store and let the CSP manage the keys involved; others give the user full control over how data is encrypted. Most Cloud services offer a middle ground, letting the user choose whether the CSP manages everything, the user controls it all, or something in between. Many users build their own methods of automatically encrypting data, since platforms like Google Cloud Platform (GCP) provide so many tools for creating encryption workflows.
GCP Provided Tools for Data Encryption
GCP uses AES-256 encryption by default when data is at rest in Google Cloud Storage, and data in transit is encrypted with TLS by default. When encrypting data on the Cloud, GCP utilizes DEKs and KEKs, which are used and stored with Google’s Key Management Service (KMS) API. A DEK is a data encryption key, used to encrypt the data itself. A KEK, or key-encryption key, is then used to encrypt the data encryption key, adding an extra layer of security. The KMS API works closely with other Google Cloud services, such as Cloud security services and Google Cloud Functions, to store keys used for encryption and decryption on the Cloud. When other APIs attempt to access DEKs and KEKs, the user must first have the necessary permissions to access the keys. Services like IAM provide the roles that allow users to access KMS.
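The DEK/KEK envelope pattern can be illustrated with a short sketch. Note that the toy XOR-keystream cipher below stands in for AES-256 purely to show the wrapping flow; it is not secure, and a real system would call the key service (e.g. Cloud KMS) to wrap the DEK.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy cipher: XOR data with a SHA-256-derived keystream.
    Stands in for AES-256 only to illustrate envelope encryption;
    NOT cryptographically secure. XOR twice recovers the input."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

# 1. Generate a fresh DEK and encrypt the data with it.
dek = secrets.token_bytes(32)
ciphertext = keystream_xor(dek, b"customer record")

# 2. Wrap the DEK with the KEK; only the wrapped DEK is stored
#    next to the ciphertext. (In GCP the KEK lives in Cloud KMS.)
kek = secrets.token_bytes(32)
wrapped_dek = keystream_xor(kek, dek)

# 3. To decrypt: unwrap the DEK with the KEK, then decrypt the data.
recovered_dek = keystream_xor(kek, wrapped_dek)
plaintext = keystream_xor(recovered_dek, ciphertext)
print(plaintext)  # b'customer record'
```

The point of the pattern is that rotating or revoking the KEK protects every DEK wrapped under it without re-encrypting the bulk data.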
IAM, or Identity and Access Management, creates the roles that services need in order to work with different APIs within GCP, adding another layer on top of KMS for protecting encrypted data. Administrators can create their own roles for services and users, giving them finer control over which users or services can access what. IAM can also connect other GSuite applications, such as Gmail or Google Drive, to applications and services within a user’s Google Cloud account, further authenticating users.
Another example of a GCP API that assists in encrypting data is the Data Loss Prevention (DLP) API. This API can be used within or outside of Google Cloud and helps the user identify potentially sensitive data, such as Personally Identifiable Information, and mask that data from attackers. Google Cloud Platform users can integrate the KMS and DLP APIs to apply techniques like Format Preserving Encryption, which encrypts data so that it is unreadable while keeping the same format as the plaintext, allowing the PII data to be used with false values.
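A lightweight cousin of this idea is format-preserving masking, which keeps the shape of a value while randomizing its content. Unlike true format-preserving encryption, the sketch below is not reversible; it only illustrates what "same format, false values" means.

```python
import re
import secrets

def mask_digits(value: str) -> str:
    """Replace every digit with a random digit, preserving length and
    separators. Masking with format preservation; NOT reversible FPE."""
    return re.sub(r"\d", lambda _: str(secrets.randbelow(10)), value)

masked = mask_digits("123-45-6789")
print(masked)  # e.g. '830-17-2594': same shape as an SSN, false values
print(len(masked) == 11 and masked[3] == "-" and masked[6] == "-")  # True
```

Because the masked value keeps its format, downstream systems that validate field shapes continue to work on the de-identified data.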
These methods and more allow users the freedom to manage their data encryption methods on the Google Cloud Platform. KMS, IAM, and DLP can also be integrated with Google Cloud Functions to encrypt data when uploaded to Google Cloud Storage automatically. Google Cloud Dataflow can use DLP and KMS to encrypt data automatically from several different storage locations. This shows how users can create their own, potentially more robust data encryption methods to assist in the storage of sensitive data on the Cloud.
In this article, we take a closer look at Google’s Cloud Key Management Service. When users store data in Google Cloud, the data is automatically encrypted at rest. Cloud KMS gives users better control over managing that encrypted data-at-rest and its encryption keys.
Source and Control of cryptographic keys
Cloud KMS lets users manage cryptographic keys in a central cloud service, for direct use or for use by other resources and applications. The keys must come from one of these sources:
Cloud KMS software-backed keys give users the ability to encrypt data with either a symmetric or an asymmetric key that they control.
Figure: Cloud EKM providing bridge between KMS and External Key Manager
Cryptographic keys in Cloud KMS
This section describes keys, key versions, and the grouping of keys into keyrings. The following diagram illustrates key groupings.
Key: A named object representing a cryptographic key. It is a pointer to a key, and the actual bits of the key may change as keys are rotated or newer versions are created.
Cloud KMS supports both symmetric and asymmetric keys. A symmetric key is used for symmetric encryption to protect a corpus of data, such as using AES-256 in GCM mode to encrypt a block of plaintext. An asymmetric key can be used for asymmetric encryption or for creating digital signatures.
Keyring: Keys are grouped into keyrings for better organization. A keyring belongs to a specific Google Cloud project and resides in a particular location. Keys inherit IAM policies from the keyring that contains them. Grouping keys with related permissions in a keyring lets you grant, revoke, or modify permissions on those keys at the keyring level, without needing to act on each key individually. Keyrings provide convenience and categorization, but if grouping is not useful to you, you can manage permissions directly on keys.
Key metadata: Resource names, properties of KMS resources such as IAM policies, key type, key size, and key state, and any data derived from these. Key metadata can be managed differently from the key material.
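This hierarchy shows up in the fully qualified resource name of every Cloud KMS key. A small helper (with placeholder project, keyring, and key names) makes the structure explicit:

```python
def crypto_key_name(project: str, location: str, keyring: str, key: str) -> str:
    """Build the fully qualified Cloud KMS resource name for a key.
    The project/keyring/key values used below are placeholders."""
    return (f"projects/{project}/locations/{location}"
            f"/keyRings/{keyring}/cryptoKeys/{key}")

name = crypto_key_name("my-project", "us-east1", "payments-ring", "card-data-key")
print(name)
# projects/my-project/locations/us-east1/keyRings/payments-ring/cryptoKeys/card-data-key
```

Client libraries and IAM policies both address keys through names of this form, which is why the project and location of a keyring matter when organizing keys.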
In this section, we discuss a few additional parameters associated with Google Cloud KMS resources such as keys and keyrings.
Project: Google Cloud KMS resources belong to a Google Cloud project, like all other Google Cloud resources. Users can host data in a project that is different from the project in which the Cloud KMS keys reside. This capability supports the best practice of separation of duties between key administrators and data administrators.
Locations: Within a project, Cloud KMS resources are created in one location.
The following diagram illustrates the key hierarchy of Google’s internal Key Management Service. Cloud KMS leverages Google’s internal KMS in that Cloud KMS-encrypted keys are wrapped by Google KMS. Cloud KMS uses the same root of trust as Google KMS.
Data encryption key (DEK): A key used to encrypt data.
Key encryption key (KEK): A key used to encrypt, or wrap, a data encryption key. All Cloud KMS platform options (software, hardware, and external backends) let you control the key encryption key.
KMS Master Key: The key used to encrypt the key encryption keys (KEK). This key is distributed in memory. The KMS Master Key is backed up on hardware devices. This key is responsible for encrypting your keys.
The Cloud KMS platform supports multiple cryptographic algorithms and provides methods to encrypt and digitally sign using both hardware and software-backed keys.
The diagram shows the main components of the Cloud KMS platform. Administrators access key management services by using the Google Cloud Console, the gcloud command-line tool, or applications implementing the REST or gRPC APIs. Applications access key management services using the REST API or gRPC.
Applications can use Google services that are enabled to use customer-managed encryption keys (CMEK). CMEK, in turn, uses the Cloud KMS API. The Cloud KMS API lets users use either software (Cloud KMS) or hardware (Cloud HSM) keys. Both software and hardware-based keys leverage Google’s redundant backup protections.
With the Cloud KMS platform, users can choose a protection level when creating a key to determine which key backend creates the key and performs all future cryptographic operations on that key.
The Cloud KMS platform provides two backends (excluding Cloud EKM), which are exposed in the Cloud KMS API as protection levels:
Software protection level: The protection level software applies to keys that may be unwrapped by a software security module to perform cryptographic operations.
HSM protection level: The protection level HSM applies to keys that can only be unwrapped by Hardware Security Modules, which perform all cryptographic operations with the keys.
Google Cloud supports CMEK for several services.
CMEK lets users use the Cloud KMS platform to manage the encryption keys that these services use to help protect their data. Cloud KMS cryptographic operations are performed by FIPS 140-2 validated modules.
Keys with protection level software, and the cryptographic operations performed with them, comply with FIPS 140-2 Level 1.
Keys with protection level HSM, and the cryptographic operations performed with them, comply with FIPS 140-2 Level 3.
Encryption key management software is used to handle the administration, distribution, and storage of encryption keys. Proper management ensures that encryption keys, and therefore the encryption and decryption of sensitive information, are accessible only to approved parties. IT and security professionals use these solutions to keep access to sensitive data secure.
Encryption key management software also provides tools to protect the keys in storage and backup functionality to prevent data loss. Additionally, encryption key management software includes functionality to securely distribute keys to approved parties and enforce key sharing policies.
Certain general encryption software provides key management capabilities, but those solutions offer only limited features for key management, distribution, and policy enforcement.
To qualify for inclusion in the Encryption Key Management category, a product must:
Provide compliance management capabilities for encryption keys
Include key storage and backup functionality
Enforce security policies related to key storage and distribution
A software key management approach can be used instead of an HSM-based SaaS approach or a cloud KMS approach. Secrets management is likewise an efficient approach to managing secrets, passphrases, and similar material.
Software-based key management suits organizations that do not use advanced key management hardware on-premises but want to ensure their cloud providers do not own, and cannot be compelled to turn over, the keys that decrypt their data.
Run the organization’s key management application in the cloud.
Lower cost than HSMs and full control of key services, rather than delegating them to your cloud provider
Can perform all core functions of an HSM: key generation, key storage, key rotation, and API interfaces to orchestrate encryption in the cloud
Need to handle failover and replication yourself
Not compliant with regulatory requirements that specify FIPS-certified hardware
The approach is only suitable for IaaS, as there is a need to install and configure your servers to perform key management
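The core functions listed above (generation, storage, rotation) can be sketched as a minimal in-memory key manager. This is an illustrative toy: a real deployment would persist key versions in hardened storage, handle replication, and schedule rotation automatically.

```python
import secrets
from datetime import datetime, timezone

class SoftwareKeyManager:
    """Minimal sketch of software-based key management: generate,
    store, and rotate key versions. Keys live in process memory here,
    which is exactly the property a real deployment must harden."""

    def __init__(self):
        self._versions = []  # newest version last

    def rotate(self) -> int:
        """Generate a new 256-bit key version and make it primary."""
        self._versions.append({
            "key": secrets.token_bytes(32),
            "created": datetime.now(timezone.utc),
        })
        return len(self._versions)  # 1-based version number

    def primary(self) -> bytes:
        """Return the current (newest) key version's material."""
        return self._versions[-1]["key"]

km = SoftwareKeyManager()
v1 = km.rotate()
v2 = km.rotate()
print(v1, v2)                   # 1 2
print(len(km.primary()) == 32)  # True
```

Keeping old versions around (rather than deleting them on rotation) is what allows data encrypted under earlier versions to remain decryptable.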
BYOE, or Bring Your Own Encryption, is also known as Hold Your Own Key (HYOK). BYOE is used when an organization wants the control of BYOK but does not wish to leave a copy of its key with the cloud service. In BYOE, the HSM acts as a proxy between the organization and the Cloud Provider’s storage systems and handles all cryptographic processing.
Today more than ever, organizations have a need for high level security of their data and the keys that protect that data. The lifecycle of cryptographic keys also requires a high degree of management, thus automation of key lifecycle management is ideal for the majority of companies. This is where Hardware Security Modules, or HSMs, come in. HSMs provide a dedicated, secure, tamper-resistant environment to protect cryptographic keys and data, and to automate the lifecycle of those same keys. But what is an HSM, and how does an HSM work?
What is an HSM?
A Hardware Security Module is a specialized, highly trusted physical device which performs all major cryptographic operations, including encryption, decryption, authentication, key management, key exchange, and more. HSMs are specialized security devices, with the sole objective of hiding and protecting cryptographic materials. They have a robust OS and restricted network access protected via a firewall. HSMs are also tamper-resistant and tamper-evident devices. One of the reasons HSMs are so secure is because they have strictly controlled access, and are virtually impossible to compromise.
For these reasons and more, HSMs are considered the Root of Trust in many organizations. The Root of Trust is a source in a cryptographic system that can be relied upon at all times. The strict security measures used within an HSM allow it to be the perfect Root of Trust in any organization’s security infrastructure. Hardware Security Modules can generate, rotate, and protect keys, and the keys generated by an HSM are always random: HSMs contain dedicated hardware that lets them generate truly random keys, unlike a regular computer, which cannot create truly random values. HSMs are also generally kept off the organization’s computer network to further defend against breaches, meaning an attacker would need physical access to the HSM even to view the protected data.
Types of HSMs
There are two main types of Hardware Security Module:
General Purpose: General Purpose HSMs support the most common cryptographic algorithms and expose standard interfaces such as PKCS#11, CAPI, and CNG. They are primarily used with Public Key Infrastructures, cryptowallets, and other general sensitive data.
Payment and Transaction: The other type of HSM is a payment and transaction HSM. These HSMs are designed to protect payment card information and other sensitive transaction data. They serve a narrower range of organizations, but they are ideal for complying with the Payment Card Industry Data Security Standard (PCI DSS).
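One small piece of the payment data these HSMs protect is easy to demonstrate: card numbers must satisfy the Luhn checksum. The numbers below are well-known test values, not real cards.

```python
def luhn_valid(number: str) -> bool:
    """Luhn checksum used by payment card numbers: double every second
    digit from the right (subtracting 9 when the result exceeds 9) and
    require the total to be divisible by 10."""
    digits = [int(ch) for ch in number if ch.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4539 1488 0343 6467"))  # True (a published test number)
print(luhn_valid("4539 1488 0343 6468"))  # False (checksum broken)
```

DLP scanners often pair a digit-pattern match with this checksum to cut false positives when detecting card numbers.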
As HSMs are used so often for security, many standards and regulations have been put in place to ensure Hardware Security Modules properly protect sensitive data. The first of these is the Federal Information Processing Standard (FIPS) 140-2, a standard that validates the effectiveness of hardware performing cryptographic operations. FIPS 140-2 is a federal standard in both the USA and Canada, is recognized around the world in both the public and private sectors, and defines four levels of compliance.
Level 1, the lowest level, requires only basic security measures, such as the use of at least one approved cryptographic algorithm, and allows a general-purpose device with any operating system. The requirements for FIPS 140-2 Level 1 are minimal, just enough to provide some protection for sensitive data.
Level 2 builds off of level 1 by also requiring a tamper-evident device, role-based authentication, and an operating system that is Common Criteria EAL2 approved.
Level 3 requires everything level 2 does, along with tamper-resistance, tamper-response, identity-based authentication, and a logical separation of the interfaces through which critical security parameters enter and leave the system. Private keys can only be imported or exported in encrypted form. FIPS 140-2 Level 3 is the most commonly sought compliance level, as it ensures the strength of the device while not being as restrictive as Level 4.
Level 4 is the most restrictive FIPS level; it requires advanced intrusion-protection hardware and is designed for products operating in physically unprotected environments. Another standard used to test the security of HSMs is Common Criteria (ISO/IEC 15408), a certification standard for IT product and system security. It is recognized around the world and defines seven assurance levels; as with FIPS 140-2, level 1 is the lowest and level 7 the highest.
The final standard is the Payment Card Industry PTS HSM Security Requirements. This is a more in-depth standard, focusing on the management, shipment, creation, usage, and destruction of HSMs used with sensitive financial data and transactions.
Advantages to HSMs
Hardware Security Modules have a number of benefits including:
Meeting security standards and regulations
High levels of trust and authentication
Tamper-resistant, tamper-evident, and tamper-proof systems to provide extremely secure physical systems
Providing the highest level of security for sensitive data and cryptographic keys on the market
Quick and efficient automated lifecycle tasks for cryptographic keys
Storage of cryptokeys in one place, as opposed to several different locations
The adoption of Public Key Infrastructure (PKI) has been rising steadily in enterprises across industry sectors, as described in earlier articles. PKI mechanisms such as certificate-based authentication, encrypted communication, certificate management, and code signing all combine to secure an enterprise. However, all the security benefits offered by PKI can come to naught if the private keys used for various purposes are compromised. The critical success factor (and biggest vulnerability) in PKI, and in any cryptographic system, is therefore the safe storage and management of private keys. This is where a Hardware Security Module (HSM) comes in.
An HSM is a specialized, dedicated, physical cryptographic device or ‘appliance’ designed and built for key lifecycle management – generation, storage, management and exchange of cryptographic keys. HSMs are also used for offloading of cryptographic functionality from application servers – examples being authentication, encryption, decryption, and digital signing. HSMs offer certified mechanisms for physical and logical security, tamper resistance, intrusion prevention and detection, event logging, and secure APIs to access the HSM. HSMs allow for the segregation of the cryptographic tasks from application business logic, with unparalleled performance for any cryptographic function. For example, while software running on the best hardware might achieve a few thousand digital signatures per second, an HSM can achieve millions.
Traditionally, HSMs were set up “on-premise”, within the enterprise data center. The prevalence of cloud computing, especially over the last few years, has seen the emergence of “Cloud-based HSMs” or “HSM as a Service”. Regardless of type, on-premise or cloud-based, enterprises should keep the following features in mind while selecting an HSM.
Certifications: Any HSM needs to be certified to international security standards such as Common Criteria and FIPS (Federal Information Processing Standards). Certification provides assurance that the design and build of the device meet certain baseline criteria. While certification is necessary, it is not sufficient, and other criteria need to be considered while selecting an HSM.
User Interface (UI):
The UI for HSM administration is often command-line based. Some providers may have a centralized management portal with a graphical user interface (GUI) and dashboard, which can ease some of the administration tasks.
Cryptographic Algorithms: Any HSM should provide a range of cryptographic algorithms (both symmetric and asymmetric) that can be used for multiple functions such as authentication, encryption, decryption, signing, and timestamping. A related factor is future readiness, such as support for new technologies like quantum cryptography.
Automation and Maintenance: Once the HSM is deployed, ongoing maintenance and management tasks take up most of the administration work. Automation features provided by the HSM vendor can reduce ongoing administration effort and cost.
Backup and Replication: Keys need to be backed up to an environment with security levels similar to those provided by the HSM. Remote backup management and key replication are additional factors to consider.
Integration Capabilities: The HSM is not a standalone entity and needs to work in conjunction with other applications, so integration capabilities are an important feature to evaluate. Since the HSM will need to support multiple applications over time, out-of-the-box, proven integration interfaces with multiple applications can be a significant advantage.
Total Cost of Ownership (TCO):
On-premise HSMs will have a higher upfront investment or Capital Expenditure (Capex) and possibly lower annual costs. Cloud HSMs will have much lower or no Capex but may have higher annual costs or Operational Expenditure (Opex). The decision factor therefore typically is the TCO over a period of time, say five years. The cost factors to compute TCO include hardware, tools needed, network and security infrastructure, data center, operational model, payment model, software licenses, support, service levels, training, compliance, and personnel (staffing) costs.
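The comparison reduces to simple arithmetic once the cost inputs are gathered. The figures below are invented purely to illustrate the calculation; real numbers come from vendor quotes and internal cost models.

```python
def tco(upfront: int, annual: int, years: int = 5) -> int:
    """Total cost of ownership over a planning horizon:
    Capex (upfront) plus Opex (annual) accumulated over the years."""
    return upfront + annual * years

# Hypothetical figures for illustration only.
on_prem = tco(upfront=120_000, annual=20_000)  # hardware, data center, staff
cloud   = tco(upfront=0,       annual=45_000)  # subscription Opex

print(on_prem, cloud)                              # 220000 225000
print("on-prem" if on_prem < cloud else "cloud")   # on-prem (at 5 years)
```

Note how the break-even point shifts with the horizon: at shorter horizons the cloud model's zero Capex dominates, while longer horizons favor the on-premise investment.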
Random Number Generation:
It is important that the HSM vendor uses an approved or certified process for Random Number Generation, since this could be a critical factor from a regulatory and compliance perspective.
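The same principle applies in application code: key material should come from a cryptographically secure random number generator. In Python, for instance, the standard `secrets` module wraps the operating system's CSPRNG.

```python
import secrets

# CSPRNG output suitable for keys and nonces; by contrast, the
# `random` module is NOT safe for cryptographic use.
key = secrets.token_bytes(32)      # 256 bits of key material
nonce_hex = secrets.token_hex(12)  # 96-bit nonce, hex-encoded

print(len(key))        # 32
print(len(nonce_hex))  # 24
```

An HSM goes further by deriving randomness from dedicated hardware, but the rule is the same: never generate keys from a predictable source.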
Once the basic features are evaluated, the next step is to decide whether to invest in an on-premise or cloud based HSM. Some of the scenarios that can help enterprises make this decision are indicated below.
HSMs originated decades ago as physical devices built from the ground up specifically for cryptographic operations and deployed on-premise. The hardware, firmware, operating system, network access, and overall functionality of an HSM were all designed to ensure the devices were tamper-resistant and intrusion-proof.
An on-premise HSM is a good option for enterprises with one or more of the following scenarios:
Large organizations which require complete and isolated control over their key management mechanisms, and who have a clear business case for the high investments needed in an on-premise HSM.
Applications which require very low latency, where an HSM being in the same data center as the application can make a big difference.
Applications with intensive cryptographic operations and a need for high performance, where offloading the cryptographic functions from an application server to a local HSM can result in a significant performance improvement for the application.
Organizations which operate in countries with strict requirements on data localization, and where cloud providers may not have a local data center in that geographic location.
Organizations with predictable workloads, where it is unlikely that the business requirements and transaction volumes will exceed the capacity of the HSM in the near future.
Cloud based HSM
A recent research report from Flexera indicates that around 94% of organizations today leverage some form of cloud services. As workloads of all types move to the cloud, HSMs are no exception. The simplicity, flexibility and agility offered by Cloud based HSMs make them an attractive value proposition, especially when enterprises face one or more of the following scenarios:
Small and medium organizations that already use many cloud services, and for whom the high investment in an on-premise HSM may not be feasible.
Organizations who want to test or pilot multiple HSM services with minimal upfront investments, before committing to a vendor.
Organizations with lighter workloads, whose application performance and latency requirements do not demand a dedicated, on-premise HSM.
Organizations with highly variable workloads which might require elasticity i.e. scaling up and scaling down of the HSM infrastructure.
Organizations who prefer a predictable, operational expenditure (Opex) based financial model offered by the cloud rather than high upfront capital investments needed by an on-premise HSM.
There are two types of cloud based HSMs: public cloud based, and third party. Both types offer the HSM-as-a-Service model. Depending on the vendor, both types may also offer single tenant as well as multi-tenant solutions, and additional key management services apart from HSMs. The main difference between the two cloud based HSMs is vendor lock-in. Public cloud based HSMs are typically tied to that public cloud provider such as AWS or Azure and are therefore suitable for enterprises which leverage only one public cloud provider. Third party cloud based HSMs usually work across multiple public cloud providers and therefore are a good choice for enterprises which have multi-cloud scenarios2. Third party cloud HSMs, being specialized offerings, may also have more sophisticated features such as automation, scaling, back-ups, and better administration. In general, the choice of a cloud based HSM is closely linked with the enterprise cloud strategy.
The question “Which is a better option: an on-premise HSM or a cloud based HSM?” has no single answer. Enterprises will need to choose the best option depending on their use cases and business scenarios. One thing however remains clear: the benefits offered by Public Key Infrastructure (PKI) can be completely undermined if private keys are compromised. Protecting and managing those keys is therefore a critical requirement to ensure enterprise security. HSMs, whether on-premise or cloud based, are the best options today to fulfil that requirement.
2 A recent research report on cloud trends from Flexera indicates that more than 80% of organizations are moving to multi-cloud environments.
In the previous article, we explored the importance of Public Key Infrastructure (PKI) from an enterprise architecture perspective. We also saw some of the typical enterprise application scenarios that need to use PKI, including public facing web sites and web applications, Virtual Private Network (VPN) services, mobile applications and software (code signing). This article illustrates a few of the other enterprise application scenarios where PKI needs to be an integral part of enterprise architecture.
Enterprise cloud applications: Cloud computing is truly mainstream today: a recent Rightscale research report from Flexera indicates that 94% of organizations leverage some type of cloud technology. The benefits of cloud including elasticity, location independent access, and usage-based pricing for infrastructure have overcome the earlier apprehensions of moving applications and data to the cloud. However, enterprises need to be as vigilant as ever to ensure appropriate identity and access management (IAM) solutions are in place for their cloud applications. PKI is one of the best IAM options for enterprise cloud applications and is much more secure than alternatives based on a username and password approach.
User authentication: Even within the enterprise, the use of PKI is going up rapidly. Verifying the identity of enterprise users and other entities such as devices using digital certificates makes a lot of sense today – especially considering the risk of insider threats and the need to have strict access controls as well as audit mechanisms in place. For example, code signing, as explained in the previous article, requires DevOps teams and developers to sign code as a part of the deployment and release cycles. PKI based user authentication can also help to restrict and monitor access to sensitive enterprise applications and data, such as personally identifiable information (PII) of customers, company IP and trade secrets, and company financials. In fact, the risk of data breach incidents where customer data is leaked or sold to the outside world by insiders, can be significantly reduced by leveraging PKI based credential management within the enterprise.
Email: For most enterprises today, email continues to be the primary communication mechanism between employees, and with customers and partners. With PKI, digital certificates can be used to sign and encrypt email. More specifically, you use your private key to sign an email you are sending, and you use the recipient’s public key to encrypt that email. The technology used is called Secure/Multipurpose Internet Mail Extensions (S/MIME), which is based on PKI principles. Most email providers and clients support S/MIME. Securing email communication is extremely important to address threats related to phishing. The recipient might get an email from somebody impersonating a known person (e.g. a colleague, customer or partner), requesting some confidential information or asking to click a link in the email. With PKI, the email client will alert the recipient that the sender’s identity could not be verified. Enterprises cannot afford to underestimate the importance of email security: cyber security research indicates that most cyber incidents and breaches start with a simple phishing email. Apart from the advantages of identity verification and protecting the privacy of the communication through encryption, non-repudiation is an additional advantage: with PKI in place, the email sender cannot deny later that an earlier email was sent by her/him.
Document management: Enterprises deal with a variety of documents that need to be formally signed by an authorized person: orders, contracts, petitions, agreements, forms, authorizations, and so on. Digital signatures provide a convenient way to sign these documents, since the enterprise is likely to be leveraging PKI in some form anyway. The technology behind PKI ensures that the level of security for digitally signed documents is much higher than for manually signed ones. Document management applications therefore need to leverage PKI and digital signatures so that documents are signed (and timestamped) as soon as they are generated, and so that any unauthorized changes can be immediately and automatically detected by security tools and platforms.
Summary
In today’s world, enterprise application architecture needs to follow a “Security First” approach. For example, with cloud technology becoming mainstream, cloud security also needs to become a top priority for enterprises. Similarly, for application authentication, enterprises can no longer rely on just a username and password approach, since enterprise applications are accessed anytime and from anywhere. Threats like phishing have made email security a hygiene factor and not just a “good to have”. Digital signatures for documents have become the norm, replacing manual signatures. Overall, enterprise architecture today requires application security to keep three needs in mind: stronger authentication mechanisms, validation of the device or endpoint being used to access the application, and securing the communication channel between the application and the endpoint. PKI, through digital certificates, provides a way for enterprises to address all three of these needs. This also means that enterprises need to think about good certificate management practices, including the set-up of a private certificate authority (CA) where needed. That, however, is a different subject and will be covered in a future article. Stay tuned!
In an earlier article, we defined Public key infrastructure (PKI) as a set of roles, policies, hardware, software and procedures needed to create, manage, distribute, use, store and revoke digital certificates and manage public-key encryption. We also identified some of the trends in digitalization which have been driving the adoption of PKI. As the adoption of PKI increases, enterprises need to understand which of their applications need to be “PKI enabled”. In other words, enterprise architectural blueprints should clearly identify the application scenarios to use PKI, and also flag non-conforming scenarios as potential security risks. In this article, we look at some of the typical application scenarios leveraging PKI in order to protect the enterprise and lower its risk profile.
Public facing websites and web applications: The padlock symbol in the URL bar of any web browser is now a de-facto requirement for any public facing website and application. The padlock represents an SSL/TLS (Secure Sockets Layer / Transport Layer Security) encrypted connection between the browser and the website, set up using a digital certificate sent from the website or web application server to the browser. The certificate confirms the identity of the website or web application to the browser. How does the browser know whether the certificate itself is valid and genuine? It verifies that the certificate has not yet expired and has been issued and ‘signed’ by a trusted third party, called a Certificate Authority (CA). Once the certificate is verified, the browser uses the server’s public key (contained in the certificate) to securely negotiate a shared session key with the server; that session key is then used to encrypt the data flowing in both directions. All this magic happens seamlessly in the background, courtesy of PKI.
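The browser-side checks described above (certificate required, CA chain verified, expiry checked, hostname matched) are exactly what a default TLS client context performs. A small sketch using Python's standard `ssl` module shows the defaults; the commented-out connection illustrates where the full handshake would run:

```python
import ssl

# A default client context enforces the verification steps described above:
# it requires a server certificate, validates the CA chain and expiry,
# and matches the hostname against the certificate.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True

# Wrapping a socket with this context would run the full TLS handshake:
#   import socket
#   with ctx.wrap_socket(socket.socket(), server_hostname="example.com") as tls:
#       tls.connect(("example.com", 443))
```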
Virtual Private Network (VPN) services: The need to access an enterprise network remotely (for example from a different office location or a partner location) has existed for quite some time. The typical solution for this need is a leased line – a dedicated physical line between the remote location and the enterprise network. However, the leased line approach is not feasible for individual remote workers who need to access the enterprise network, such as employees who need to work from home. This is where VPN services, with the ability to provide a secure, encrypted communication channel for access to the enterprise network over the internet, have become extremely popular.
When a remote user accesses the enterprise network through a VPN, a single authentication mechanism, for example a username and password, is not secure enough. Multi-factor authentication (MFA) is important, with one of the authentication mechanisms being a digital certificate. The VPN channel (or ‘tunnel’) is set up only once the certificate validation is completed, as described earlier in this article.
Mobile applications: With every member of the workforce carrying at least one mobile device (and often more than one) and distributed teams (“work from anywhere”) becoming increasingly common, enterprise mobility is on the rise. Mobile Device Management (MDM) platforms address some of the requirements of enterprise mobility by providing the ability to provision devices, manage mobile applications, and enforce the enterprise security policy on the device. However, authentication of mobile devices and users remains a challenge. An approach based solely on a username and password is just not secure enough to ensure user identity. Mobile device validation and encrypted communication between the device and the enterprise network, using digital certificates, is a far superior approach. Both of the main mobile operating systems today, i.e. iOS and Android, offer native support for digital certificates.
Software applications (code signing): Companies who need to distribute software and the associated upgrades and patches to their customers, partners or employees usually do so today over the internet1. This, however, has brought new challenges to the forefront. Earlier, software distributed on physical media such as compact discs (CDs) was shrink-wrapped, i.e. packaged in a box and physically sealed; any tampering with the packaging and seal helped to alert the user. However, when software is distributed over the internet, how does a user know whether the software (s)he is downloading is from the actual author? How does the user know that the software has not been tampered with and some malicious code or malware inserted into it? Code signing provides the answer to these questions. Any company that wishes to distribute software over the internet uses a code signing certificate to digitally sign the software. At the client side, the browser being used for the software download and the operating system (OS) on which the software is being installed validate the code signing certificate and verify its authenticity. If the software has been tampered with, the user is alerted immediately. Without code signing, depending on the browser security settings, the user might get an alert, or the software may not get downloaded at all. If the user ignores the warning, downloads the software and attempts to install it, (s)he gets another warning, this time from the OS, that the publisher of the software could not be verified. As a result of these warnings, users may choose to abort the download or the installation of the software, which is why it is important for companies to sign the software that they publish.
These are some of the enterprise application scenarios that need to use PKI. There are a few others; these will be covered in part 2. Stay tuned!
1 Over the years, the sales and distribution of software CDs (compact discs) has declined significantly, with the internet becoming the primary means for software to be distributed to end users
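The tamper-detection step at the heart of code signing can be sketched with a digest comparison. This simplified example covers only the integrity half of the scheme: in real code signing the publisher's digest is additionally wrapped in an asymmetric signature (so the digest itself cannot be forged) and tied to a CA-issued certificate.

```python
import hashlib

# Simplified sketch of the integrity check in code signing: the publisher
# ships a digest of the software; the client recomputes it after download
# and compares. (Real code signing also signs this digest with the
# publisher's private key so it cannot be replaced by an attacker.)

software = b"installer-bytes-v1.0"                        # hypothetical payload
published_digest = hashlib.sha256(software).hexdigest()   # shipped by publisher

def verify_download(downloaded: bytes, expected_digest: str) -> bool:
    """Return True only if the downloaded bytes match the published digest."""
    return hashlib.sha256(downloaded).hexdigest() == expected_digest

print(verify_download(software, published_digest))               # True
print(verify_download(software + b"-malware", published_digest)) # False
```

Any single-byte modification of the payload changes the SHA-256 digest, which is what triggers the "tampered" warning described above.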