The Luna PED is an authentication device that permits access to the administrative interface of a PED-authenticated HSM. Multi-factor (PED) authentication is only available with the Luna S series. The PED server and PED client are software components that allow the PED and HSM to communicate over a Transmission Control Protocol/Internet Protocol (TCP/IP) network. The PED server resides on the host computer to which a remote-capable Luna PED is connected via USB. The PED client resides on the system hosting the HSM and can request PED services from the PED server through the network connection. Once the data path is established, the PED and HSM authenticate each other and negotiate a common data encryption key (DEK) used for PED protocol data encryption. Sensitive data in transit between a PED and an HSM is therefore encrypted end-to-end.

Understanding What a PED Key is and Does

A PED Key is an electrically programmed device with a USB interface, embedded in a molded plastic body for ease of handling. Specifically, a PED Key is a SafeNet iKey model 1000 authentication device with FIPS configuration. In conjunction with a PED 2 or PED 2 Remote, a PED Key can be electronically imprinted with identifying information, which it retains until deliberately changed. A PED Key holds a generated secret that can unlock one or more HSMs.

That secret is created by initializing the first HSM. The secret can then be copied (using PED 2.x) to other PED Keys for backup purposes or to allow more than one person to access HSMs protected by that secret. The secret can also be copied to other HSMs (when those HSMs are initialized) so that one HSM secret can unlock multiple HSMs. The HSM-related secret might be the access control for one or more HSMs, the access control for Partitions within HSMs, or the Domain key that permits secure moving/copying/sharing of secrets among HSMs that share a domain.

The PED and PED Keys are the only means of authenticating and permitting access to the administrative interface of the PED-authenticated HSM. They are the first part of the two-part Client authentication of the FIPS 140-2 Level 3-compliant SafeNet HSM with Trusted Path Authentication. The PED and PED Keys prevent key-logging exploits on the HSM host, because the authentication information is delivered directly from the hand-held PED into the HSM via the independent, trusted-path interface. Users do not type the authentication information on a computer keyboard, and it never passes through the computer’s internals, where malicious software could intercept it.

The HSM or Partition never learns PED Key PINs; the PIN and secret are stored on the PED Key. The PIN is entered on the PED and unlocks the secret stored on the PED Key so that it can be presented to the HSM or Partition for authentication. The PED itself does not hold the HSM authentication secrets. The PED facilitates the creation and communication of those secrets, but the secrets themselves reside on the portable PED Keys. An imprinted PED Key can be used only with HSMs that share a particular secret, but PEDs are interchangeable.

Types of PED Roles

PED Keys are generated with a specific role in mind, determined when certain events take place on the HSM, such as initialization or partition creation. Using these roles, a quorum of M of N PED Keys can be created, where M is the number of keys necessary to run a command as that role and N is the total number of keys created. The following roles can be created on a Luna HSM:
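The M of N quorum works like a threshold secret-sharing scheme: an authentication secret is split into N shares, and any M of them suffice to reconstruct it, while M-1 shares reveal nothing. The Luna hardware implements this internally on the PED Keys; the sketch below only illustrates the underlying idea using Shamir's secret sharing in Python (function names and the prime modulus are illustrative, not the Luna implementation):

```python
import random

PRIME = 2**127 - 1  # a large prime; all arithmetic is done modulo this


def _eval_poly(coeffs, x):
    # Evaluate the polynomial (coeffs[0] is the secret) at x, mod PRIME
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % PRIME
    return acc


def split_secret(secret, m, n):
    """Split `secret` into n shares; any m of them reconstruct it."""
    # A random degree-(m-1) polynomial whose constant term is the secret
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(m - 1)]
    return [(x, _eval_poly(coeffs, x)) for x in range(1, n + 1)]


def recover_secret(shares):
    """Recover the secret from >= m shares via Lagrange interpolation at x=0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

For a 3-of-7 quorum (the recommendation repeated below), `split_secret(secret, 3, 7)` yields seven shares, and any three key holders together can reconstruct the secret, while any two cannot.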

Security Officer – SO

The first actions with a new SafeNet HSM involve creating an SO PIN and imprinting an SO PED Key. A PED PIN (an additional, optional password typed on the PED touchpad) can be added. SO PED Keys can be duplicated for backup and shared among HSMs by imprinting subsequent HSMs with an SO PIN already on a PED Key. The SO identity is used for further administrative actions on the HSMs, such as creating HSM Partition users and changing passwords, backing up HSM objects, and controlling HSM Policy settings. It is recommended that a quorum of 3/7 be used with Blue keys.

Partition User or Crypto Officer

HSM Partition User key. This PED Key is required to log in as the HSM Partition Owner or Crypto Officer. It is needed for Partition maintenance and for the creation and destruction of key objects, etc. The local portion of the login is necessary to permit remote Client (or Crypto User) access to the partition. A PED Key Challenge (an additional, optional password typed in LunaCM) can be added. Black User PED Keys can be duplicated and shared among HSM Partitions using the “Group PED Key” option. It is recommended that a quorum of 3/7 be used with Black keys.

Crypto User

The Crypto User has restricted read-only administrative access to application partition objects. The challenge secret generated by the Crypto User can grant client applications restricted, sign-verify access to partition objects. It is recommended that a quorum of 3/7 be used with Gray keys.

Key Cloning Vector (KCV) or Domain ID key

This PED Key carries the domain identifier for any group of HSMs for which key-cloning/backup is used. The red PED Key is created/imprinted upon HSM initialization; another is created/imprinted with each HSM Partition. A cloning domain key carries the domain (via PED) to other HSMs or HSM partitions to be initialized with the same domain, thus permitting backup and restore among (only) those containers and tokens. The red Domain PED Key receives a domain identifier the first time it is used, at which time a random domain is generated by the HSM and sent to both the red Domain key and the current HSM Partition. Once imprinted, that domain identifier is intended to be permanent on the red Domain PED Key – and on any HSM Partitions or tokens that share its domain. Any future operations with that red Domain PED Key will copy that domain onto future HSM Partitions or backup tokens (via PED) so that they can participate in backup and restore operations. Red PED Keys can be duplicated for backup or multiple key copies. It is recommended that a quorum of 3/7 be used with Red keys.

Remote PED

The orange Remote PED Key (RPK) carries the Remote PED Vector (RPV), which allows a PED connected to a remote PED workstation to provide PED services to an HSM over a network connection.

Audit Key

Audit is an HSM role that manages audit logging under independent control. The Audit role is initialized and imprints a white PED Key without the need for the SO or another role. The Auditor configures and maintains the audit logging feature, determining which HSM activity is logged and other logging parameters, such as the rollover period. The purpose of the separate Audit role is to satisfy certain security requirements while ensuring that no one else – including the HSM SO – can modify the logs or hide any actions on the HSM. The Audit role is optional until initialized.

PED Key Management Best Practices

Number of off-site full sets

Does the organization intend to use common authentication for many Luna HSMs? There is no limit. The authentication secret on a single blue SO PED Key, for example, can be used with as many HSMs as they like. However, if the organization wishes to limit the risk of compromising a common blue PED Key, they will need groups of HSMs with a distinct blue PED Key for each group. Each time the organization initializes an HSM, the PED allows them to “Reuse an existing keyset” – make the current HSM part of an existing group that is unlocked by an already-imprinted PED Key (or an already-imprinted M of N keyset) – or to use a fresh, unique secret generated by the current HSM.

Number of HSMs per group

That will tell the organization the number of groups, and therefore how many different blue PED Keys they need. Now double that number, at least, so that off-premises backup copies can be kept in secure storage in case one is lost or damaged. In most cases, the contents of an HSM are of some value, so at least one backup per blue PED Key must exist. If the organization has only one blue PED Key for a group of HSMs, and that PED Key is lost or damaged, the HSMs of that group must be re-initialized (all contents lost) and a new blue PED Key imprinted.

One for One

The organization might prefer a separate blue SO PED Key containing a distinct/unique Security Officer authentication secret for each HSM in their system. No single blue PED Key can unlock more than one HSM in that scenario. The number of blue keys they need is the number of HSMs. Now double that number to have at least one backup of each blue key.

Many for One or M of N (Recommended)

How far does the organization’s security policy allow it to trust individual personnel? Perhaps the organization wishes to spread the responsibility – and reduce the possibility of unilateral action – by splitting the SO authentication secret and invoking multi-person authentication. Choose the M of N option so that no single blue PED Key is sufficient to unlock an HSM. Two or more blue PED Keys (their choice, up to a maximum of 16 splits of each SO secret) would be needed to access each HSM. Distribute each split to a different person, ensuring that no one person can unlock the HSM alone.

Partition BLACK PED Keys

Each HSM can have multiple partitions. The number depends upon operational requirements and the number the organization purchased, up to the product maximum per HSM. Each partition requires its own authentication – a black PED Key.

The organization has all the same options as with the blue SO PED Key(s), and should have at least one backup per primary black PED Key. The organization might give each partition a unique authentication secret, in which case each has a unique PED Key; or it might group partitions under common ownership, so that groups of partitions (on one or more HSMs) share black PED Keys.

As with the SO secret, the organization can also elect to split the partition black PED Key secret by invoking the M of N option (when prompted by the PED for “M value” and “N value” – those prompts do not appear if the organization chose to “Reuse an existing keyset” at the beginning of the partition creation operation).

Domain RED PED Keys

Each HSM has a domain. Each HSM partition has a domain. That domain is carried on a red PED Key and must be shared with another HSM if the organization wishes to clone the HSM content from one to another, for example, when making a backup.

Domains must match across partitions for the organization to clone or back up their partitions or assemble HSM partitions into an HA group.

For the red PED Keys, the organization can make arrangements regarding uniqueness, grouping, M of N (or not), etc.

Other PED Keys

The organization might have orange PED Keys if they are using the Remote PED option (orange Remote PED Keys (RPK) containing the Remote PED Vector (RPV)). The organization might have white PED Keys if they invoke the Audit role option (white Audit PED Keys containing the authentication for the Auditor, who controls the audit logging feature). The organization can invoke M of N, or not, as they choose, which affects the number of orange or white PED Keys that they must manage.

Orange Remote PED Keys and white Audit PED Keys can be shared/common among multiple HSMs and PED workstations, just like all other PED Key colors.

Any PED Key (any color) can be overwritten with a new secret. A warning is given if a key is not blank, and the organization can choose to overwrite it, or to pause while it finds an empty or outdated key. (“Outdated” here means a previously imprinted PED Key that has been made irrelevant by re-initializing an HSM, deleting and re-creating a partition, or other activities that make the secret on that PED Key no longer relevant. PED Keys do not “age” and become invalid during their service life – only deliberate action on an HSM causes the secret on a PED Key to become invalid.)

With all of the above in mind, it is not possible to suggest one “correct” number of PED Keys for their situation. It depends upon the choices that the organization makes at several stages. In all cases, we repeat the recommendation to have at least one backup in case a PED Key (any color) is lost or damaged.

To learn more about the Thales Luna HSM, visit Thales’ website: https://cpl.thalesgroup.com/encryption/hardware-security-modules

About the Author

Riley Dickens is a Consultant at Encryption Consulting, working with PKIs, creating Google Cloud applications, and working as a consultant with high-profile clients.


Each year, more and more cyber attacks occur against organizations big and small. Ransomware attacks, supply chain attacks, and new types of attacks are created and used by threat actors to steal information and money. Without the proper safety precautions in place, even the biggest organizations have been affected, as recent months and years have shown. That is why so many organizations are focusing their efforts on cybersecurity tools and protection methods such as Data Loss Prevention, or DLP. As organizations increase the amount of data they store and transmit, these tools become ever more vital to the protection of an organization.

Data Loss Prevention, or DLP, protects and monitors data-in-transit, data-at-rest, and data-in-use. It tracks the data anywhere it is stored in the organization, thus alerting the security team or teams to any use of the data. These tools and methods work with the encryption policies and standards in an organization to ensure that the users within the organization, as well as applications and third-party solutions, are abiding by the rules set forth in these policies and standards. DLP tools work by creating a centralized location for managing, tracking, and remediating the improper use of an organization’s information. By supporting the standards and policies of an organization, those accessing and using information can be monitored to ensure that no confidential data leaves the organization and is used for improper purposes.
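At its simplest, a DLP rule engine is a set of content patterns checked against data in motion, with a policy action (block, encrypt, or audit) applied on a match. The sketch below is illustrative only – the rule names and regular expressions are hypothetical, not any vendor's API:

```python
import re

# Hypothetical DLP rules: rule name -> regex for a common PII format
DLP_RULES = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}


def scan_outgoing(text):
    """Return the names of every rule that matched, so the DLP engine
    can decide whether to block, encrypt, or audit the transfer."""
    return [name for name, pattern in DLP_RULES.items() if pattern.search(text)]
```

A real DLP product adds context (who is sending, where to, classification labels) on top of such content matching, but the principle – centrally managed rules applied to data wherever it moves – is the same.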

Why an Organization Should Use DLP

There is more than one reason why an organization should use DLP tools in its cybersecurity framework. Below are a few reasons to implement DLP safety measures in an organization:

  1. Some organizations do not know where all their data is stored and sent to.

    Many organizations do not have proper insight into where their data resides. Data discovery and classification should be the first step any organization takes to become cryptographically secure and to meet regulations such as those from the National Institute of Standards and Technology (NIST). If this is done improperly, or not at all, data may go unnoticed or be classified incorrectly, allowing threat actors to take it for their own uses. Using tools like DLP, an organization can get a better view of the data it stores and the types of data it stores, and can keep a better eye on that data whether it is at rest, in transit, or in use.

  2. Most organizations need to maintain a certain level of security for state and country regulations.

    As mentioned previously, many regulations and standards exist in certain countries and states that detail how an organization stores and otherwise protects their data cryptographically. These regulations come from a number of different bodies, including the NIST, and they have different names, such as the Health Insurance Portability and Accountability Act, or HIPAA. The standards and regulations employed by these bodies focus on protecting customers’ Personally Identifiable Information, or PII. These standards are vital, as this information being stolen could cause a customer to lose their identity, their money, or their livelihood. Using tools like DLP, data can be tracked and protected to the levels that standards and regulations require.

  3. Outside threats are considered, but insider threats are overlooked.

    Most organizations are on alert for outsider threats to the organization, such as lone-wolf hackers or hacker groups, but many fail to keep an eye on insider threats. DLP assists organizations with watching how data is accessed and transmitted, especially with employees of an organization. Keeping track of who accesses data when, and how that data is used, is the basis of what DLP does. This is why many organizations are recommended to put DLP products into their organization’s environment.

  4. An audit will be occurring in the near future.

    Although DLP should be an early step taken by organizations, some will put DLP implementations into place due to an audit occurring in the near future. Failed audits can lose organizations money, reputation, and compliance status if the proper encryption and cryptography steps are not in place. DLP goes a long way toward making organizations compliant with standards and regulations. They can use DLP to ensure proper encryption and cryptography practices are being followed and find out where they are lacking in security. This can lead to better practices being put in place across an organization, thus reaching compliance and passing an audit successfully.

  5. The organization may want to defend against threat actors before they appear, as opposed to fixing a problem after a data breach occurs.

    Many different organizations tend to focus on dealing with cyber attacks after they have already occurred. What organizations should instead be doing is putting mitigating factors in place before a threat occurs. This is the preferred method since protecting sensitive customer data before any threat actors can get near it assists in the process of saving the data from getting stolen in the first place, as well as deterring any attackers from going after that information. DLP is a great first line of defense, as using Data Loss Prevention tools helps track information and identify any gaps in security.

  6. Automation of data management and tracking is a high priority for many organizations.

    Organizations tend to begin their security systems with manual security processes. This means data is tracked and identified by personnel and teams within the organization manually when they choose to do it. Instead, automated processes can be used, which will automatically check and track data and users in the organization’s environment. DLP is an example of a tool many organizations use to automatically track data and users who use and transmit that data. This is why so many companies use Data Loss Prevention to create a strong cyber security presence.

  7. An organization may work with a number of outside organizations that can access their systems.

    Companies tend to work with a number of outside providers and customers, with a handful of those providers having access to the network and infrastructure provided by the company. If these users aren’t properly tracked and their access and use of data aren’t monitored, then data could be stolen or misused. DLP keeps an automated eye on data in use, data at rest, and data in motion, so anyone with access to the network will be noted if they use PII data. Another method organizations use to protect their data is to ensure only those people who need the data have access to it, and only for approved purposes. This is what is known as Enterprise Workflow Management. Approval is required to use data, and those requests for data are tracked.

Types of DLP Tools and Platforms

When talking about DLP, Data Loss Prevention comes in three different types: Network DLP, Endpoint DLP, and Cloud DLP. Network DLP is the type of DLP I have talked about the most. This type of DLP deals with data moving inside the company. Network DLP sets up a defensive fence to track and monitor data within the organization. The idea behind this is that when data is about to be sent out, via email or any other method, automated actions take place, such as encrypting, blocking, or auditing the data transfer. This can be configured within the organization ahead of time. Additionally, a message will usually alert administrators if sensitive data is attempting to leave the organization when it shouldn’t.

Endpoint DLP is more complicated to manage than network DLP, but it is usually considered stronger than network DLP. Endpoint DLP focuses on the devices that are part of the network, as opposed to the network itself. Each device that uses the network will have this endpoint DLP installed on it, tracking the data in motion and the data at rest on the device. Additionally, endpoint DLP tools can also detect if data is stored on the device unencrypted when it should actually be encrypted. As can be seen, installing and managing endpoint DLP on every device in a network is complicated and when done manually would take a lot of man hours to complete and keep up with. The final type of DLP is cloud DLP. This type of DLP is set up with certain cloud accounts, enforcing DLP rules and policies. Cloud tools, such as Office 365, integrate with cloud DLP tools to ensure these policies are met.

Conclusion

Having proper cyber security tools and platforms in place is extremely important to the safety of a company. Using DLP, any organization can get ahead of threat actors, whether they are inside or outside the organization. Protecting sensitive customer and organizational data is vital in any company, especially banks and health organizations. At Encryption Consulting, we make cyber security our highest priority. We work with organizations to create the most secure environment possible using methods such as DLP, Public Key Infrastructure (PKI), and encryption assessments. We provide assessment, implementation, and development services for PKI, encryption, and Hardware Security Modules (HSMs). If you have any questions, visit our website at www.encryptionconsulting.com.


In an age where quantum computing is more than just a theory, organizations like the National Institute of Standards and Technology (NIST) are looking for a way to standardize post-quantum cryptography algorithms. The NIST creates compliance standards, best practices, and regulations for cyber security. It works to provide a standardized framework for different encryption algorithms and methods to ensure the best possible security is in place within different organizations. Quantum computing is an asset that can be used in the coming years, but it can also be a detriment to the security of organizations that are not prepared. That is why the NIST has turned its sights to post-quantum cryptography standardization with its post-quantum cryptography (PQC) standardization project.

What is the PQC standardization project?

As previously mentioned, the NIST sets standards and best practices for cyber security that it suggests organizations follow. Quantum computing has the potential to cause large issues in the cyber security community, as it will make the majority of today's cryptographic algorithms obsolete. The reason is that, with a powerful enough quantum computer, algorithms that would take classical computers many years to crack could be broken in weeks or days. This is the biggest reason the NIST has begun the PQC standardization project. The idea behind the project is to prepare organizations for quantum attacks before they become a real threat. This would allow companies to have the proper encryption algorithms in place throughout the organization so that, once practical quantum computers exist, these attacks can be defended against. The types of encryption algorithms the PQC standardization project is working to standardize are quantum-safe algorithms. A quantum-safe algorithm is resistant to attacks from both classical computers, the types of computers we use today, and quantum computers. This keeps private information stored on devices or in transit in organizations as secure as possible, since even a quantum computer will not be able to break a quantum-safe algorithm within hours or days.

Determining the most quantum-safe algorithms

Many times in the past, the NIST has run projects like the PQC standardization project, in which a number of algorithms are submitted to see which best meet the criteria to be considered the standard for that type of cryptography. At the time of writing, the NIST has just completed its 3rd round of selection for cryptographic algorithms. The finalists and alternative options are as follows:

  1. Key-Encapsulation Mechanism (KEM) Algorithms: Kyber, NTRU, SABER, and Classic McEliece
  2. Digital Signature Algorithms: Dilithium, Falcon, and Rainbow
  3. Alternative KEM Algorithms: BIKE, FrodoKEM, HQC, NTRU Prime, and SIKE
  4. Alternative Digital Signature Algorithms: GeMSS, PICNIC, and SPHINCS+

Once the current round ends, one or two of the KEM algorithms and one or two of the Digital Signature algorithms will be selected as quantum-resistant algorithms strong enough for standardization across the cyber security landscape. After completing the third round, NIST mathematicians and researchers will continue to look at other algorithms and newly emerging algorithms to see if they are powerful enough to be considered a part of the standardized group of quantum-resistant algorithms.

How can Organizations Prepare for the Future?

Although the NIST has not yet released its list of recommended quantum-resistant cryptography algorithms, organizations can begin preparing themselves for quantum computers now. The following are a few different ways organizations can prepare for the future:

  1. Quantum Risk Assessment

    Performing a quantum risk assessment for your organization will give the security teams within your organization a good idea of where gaps exist in relation to quantum computing. A quantum risk assessment also helps create a list of applications that will be affected by the creation of quantum computers, thus providing the organization with a detailed list of applications that must be updated when moving to quantum-resistant algorithms. This will also help with the next step, identifying at-risk data.

  2. Identify at-risk data

    Identifying an organization’s data at risk is extremely important, even just relating to cyber security in general. Having data classification and identification systems in place in an organization is vital to keep track of data and ensure it is properly protected.

  3. Use cryptographically agile solutions

    The NIST has indicated that the use of crypto-agile solutions is a great way to begin the process of moving toward having quantum-safe security in place. Crypto-agility is the ability to switch between algorithms, primitives, and other encryption mechanisms without causing significant issues in the organization’s infrastructure.

  4. Develop an understanding of quantum computing and its risks

    By training employees on what to look out for in the future of quantum computing, and methods of becoming quantum-resistant, they will have a mindset that is already prepared for the post-quantum age.

  5. Track the NIST’s PQC Standardization Project

    By keeping track of the PQC Standardization Project, an organization can keep up to date on any changes to the quantum-resistant algorithms in the running and change to the selected algorithms when the time is right.
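Point 3 above – crypto-agility – can be illustrated with a small sketch: calling code names a policy rather than hard-coding an algorithm, so migrating to a quantum-safe primitive later becomes a one-line configuration change rather than a code rewrite. The registry and policy names below are hypothetical, and only standard-library hashes are used for illustration:

```python
import hashlib

# Hypothetical registry mapping policy names to algorithms. When a
# quantum-safe primitive is standardized, only this table changes;
# the calling code is untouched.
HASH_REGISTRY = {
    "legacy": hashlib.sha1,
    "current": hashlib.sha256,
    "next-gen": hashlib.sha3_512,  # placeholder for a future choice
}

ACTIVE_POLICY = "current"


def digest(data, policy=None):
    """Hash `data` under the named policy (default: the active one)."""
    algo = HASH_REGISTRY[policy or ACTIVE_POLICY]
    return algo(data).hexdigest()
```

The same indirection applies to signatures and key exchange: an application that asks for "the signing policy" instead of "RSA-2048" is far easier to migrate when the NIST's selected PQC algorithms become available.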


In a time when keeping data and users secure on the Internet is so important, digital certificates have been used for a number of different purposes. Digital certificates are used for authentication, secure Internet connections, code signing, and more. This allows data-in-transit or data-at-rest to be protected from outside attackers. There are a number of different certificate types, but we will be taking a look at self-signed certificates today. The first question we have to answer is what exactly is a self-signed certificate?

What is a Self-Signed Certificate?

A self-signed certificate is a certificate signed by the same user or device that uses it, rather than by a trusted Certificate Authority. This works for code signed for internal use, or for an application used by its creator, but not for software that is used by external users. If an application or piece of software is being sent to external customers, then another type of certificate needs to be used, because a self-signed certificate cannot be verified when an external user attempts to use the software associated with it.
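As an illustration of how little ceremony is involved, a key pair and a self-signed certificate can be produced with a single OpenSSL command (the subject name and file names here are only examples):

```shell
# Generate a 2048-bit RSA key and a self-signed certificate valid for one year
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
    -days 365 -nodes -subj "/CN=internal-test.example.com"

# For a self-signed certificate, subject and issuer are the same entity
openssl x509 -in cert.pem -noout -subject -issuer
```

The second command prints identical subject and issuer fields, which is the defining property of a self-signed certificate.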

When a user runs an application or piece of software, the certificate that signed that code is authenticated by the Operating System. You have likely seen that a pop-up occurs when a new software or application is downloaded and run on your computer. This pop-up checks the identity of the publisher of the certificate associated with the software. Additionally, the certification path is checked in the certificate as well. All of these details are checked to authenticate the publisher of the software. If the identity of the publisher cannot be properly verified, then the software would likely be deemed unsafe to run by the user.

Now, when using a self-signed certificate to sign software and applications, only the device which signed the certificate will be able to authenticate it. This is why self-signed certificates are not generally used for external software or applications. Instead, software publishers will use a Public Key Infrastructure with an external, and well-known, Root Certificate Authority which creates Certificate Authority generated (CA-generated) certificates. This allows the publisher to generate code signing certificates that can be recognized by any Operating System, thus authenticating the software sent to a customer and allowing them to trust that software or application enough to be downloaded and used on their device.
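This trust gap can be demonstrated directly with OpenSSL: verifying a self-signed certificate against the normal trust store reports a "self-signed certificate" error, and verification only succeeds once the certificate itself is explicitly trusted (file names illustrative):

```shell
# Create a throwaway self-signed certificate
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
    -days 1 -nodes -subj "/CN=demo" 2>/dev/null

# Reports a "self-signed certificate" error: it chains only to itself,
# and the trust store does not contain it
openssl verify cert.pem || true

# Succeeds once the certificate itself is supplied as a trusted CA
openssl verify -CAfile cert.pem cert.pem
```

A CA-generated certificate avoids this because it chains to a Root CA that operating systems already trust.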

Pros and Cons

When dealing with self-signed certificates, there are many pros and cons that should be looked at before creating these types of certificates. Some of these we have already touched on, and others we will take a look at now.

Pros

  1. Self-signed certificates are very easy to create compared to other methods of certificate creation. There is no in-depth process: just the creation of a key pair and then the creation of the certificate itself. Other processes may require multiple steps and access to a Public Key Infrastructure (PKI) to create certificates.
  2. Self-signed certificates are best utilized in test environments or for applications that only need to be privately recognized. Applications used only within the organization that created them mainly use self-signed certificates.
  3. Self-signed certificates involve no reliance on another organization for certificates to be issued or keys to be protected. Obtaining publicly verifiable certificates from a well-known PKI can take considerable time, causing hang-ups in the process of publishing and distributing software.
Cons
  • The most obvious issue with using a self-signed certificate for code signing is that the certificate will not be recognized outside the organization. A self-signed signature is only trusted on the device it was created on, or within the organization that generated it.
  • Self-signed certificates are not easily tracked within an organization. This causes a multitude of issues, especially if a self-signed certificate is compromised. Since self-signed certificates can be created at any time from any device, a compromise may go unnoticed for a long period, allowing a threat actor to misuse the certificate to suit their needs.
  • Self-signed certificates can't be revoked. With CA-generated certificates, the issuing CA can publish revocation information through Certificate Revocation Lists (CRLs) or OCSP, but no such mechanism exists for self-signed certificates.

Conclusion

There are a number of different types of certificates available to users for purposes ranging from authentication and communication to code signing. These certificates tend to fall into two categories: self-signed and CA-generated. As previously mentioned, self-signed certificates are mainly used in test environments or for software and applications used exclusively within an organization. For this reason, we at Encryption Consulting suggest that organizations looking to use certificates for code signing set up a PKI for their organization or work with a public PKI for code signing certificates. To learn how Encryption Consulting can help your organization create a strong and secure Public Key Infrastructure, visit our website at www.encryptionconsulting.com.

About the Author

Riley Dickens is a Consultant at Encryption Consulting who works with PKIs, builds Google Cloud applications, and serves high-profile clients.

HTTP and HTTPS are seen every day when using the Internet, whether you are in the cybersecurity field or not. You have likely seen a URL that looks like this:

https://www.google.com or http://www.fakewebsite.com.

These are vital parts of how searching a URL on the Internet works, but not everyone knows how HTTP and HTTPS work. So what are HTTP and HTTPS, and what is the difference between the two?

What is HTTP?

HTTP, or Hypertext Transfer Protocol, works to transfer data across a network. Data is put into a specified format and syntax to ensure it can be read and transferred correctly. HTTP is set up to send and receive both requests and responses. An HTTP request happens when a hyperlink is clicked or a website URL is entered into the browser. The request is sent using one of the HTTP methods, such as GET or POST, to retrieve information from or send information to a webpage. The web server, in turn, provides an HTTP response, which gives the user access to the desired webpage.
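The "specified format and syntax" is plain text. A sketch of what a browser's request and a server's response actually look like on the wire, with a minimal parser for the response (the response here is a canned example, not fetched from a real server):

```python
# The raw text a browser sends for http://example.com/ ...
request = (
    "GET / HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
)

# ...and a canned example of what the server sends back.
raw_response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "\r\n"
    "<html>Hello</html>"
)

def parse_response(raw):
    """Split an HTTP response into status code, headers, and body."""
    head, _, body = raw.partition("\r\n\r\n")
    status_line, *header_lines = head.split("\r\n")
    version, code, reason = status_line.split(" ", 2)
    headers = dict(line.split(": ", 1) for line in header_lines)
    return int(code), headers, body

code, headers, body = parse_response(raw_response)
print(code, headers["Content-Type"])  # 200 text/html
```

Note that everything, headers and body alike, is readable plain text: this is exactly what an eavesdropper sees when plain HTTP is used.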

The majority of web pages do not use HTTP but instead use HTTPS because HTTP is not a secure way to transfer data across a network.

What is HTTPS?

HTTPS, or Hypertext Transfer Protocol Secure, is the more secure way to transfer data between a web browser and a web server, which is why most websites use it. HTTPS utilizes a TLS/SSL connection to securely transfer data between your web browser and the webpage's server.

Requests and responses sent with HTTPS are encrypted, so any Man-in-the-Middle attack that may occur will be thwarted, since the data can't be read. HTTPS uses both asymmetric and symmetric encryption. With asymmetric encryption, the web server generates a public and private key pair, and the public key is stored in an SSL certificate; the private key, as the name suggests, is kept private to the web server. When an HTTPS connection is made, the client and server complete a TLS handshake. During the handshake, the client generates a symmetric session key and sends it encrypted under the server's public key, which the server decrypts with its private key. From then on, messages in both directions are encrypted with the shared session key. This keeps messages encrypted in transit and authenticates that they come from the server, since only the holder of the private key could have recovered the session key.
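The key exchange described above can be sketched with textbook RSA and deliberately tiny numbers. This is a toy for intuition only: real TLS uses large keys and, in modern versions, typically derives the session key with a Diffie-Hellman exchange rather than RSA key transport.

```python
# Toy textbook-RSA key exchange (tiny numbers, NOT secure).

p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent (in the SSL certificate)
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent, kept on the server

session_key = 42                    # symmetric key chosen by the client

# Client side: encrypt the session key with the server's public key (n, e).
encrypted = pow(session_key, e, n)

# Server side: recover the session key with the private key (n, d).
recovered = pow(encrypted, d, n)

print(recovered == session_key)  # True
```

Only the server can perform the last step, which is what ties the session to the identity proven by the certificate.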

Comparing HTTP and HTTPS

Now that we know what HTTP and HTTPS are, let us look at the differences and similarities between the two.

  1. HTTP is insecure, whereas HTTPS is secure

    As we talked about in the HTTPS section, HTTPS is secure because data transferred over the network is encrypted. It also requires that the server present a valid TLS/SSL certificate to identify itself (and, with mutual TLS, the client presents one as well), authenticating the messages sent. HTTP, on the other hand, sends messages to the requestor unencrypted. This means attacks such as Man-in-the-Middle attacks can succeed, allowing the attacker to capture the information transferred to the server, which could include credit card information or other Personally Identifiable Information (PII).

  2. Data sent via ports

    With HTTP, data is sent via port 80, which allows unencrypted data to be sent to requestors. HTTPS instead uses port 443, which allows encrypted communications to occur.

  3. OSI Layers and URLs

    One final difference between HTTP and HTTPS is the OSI layer they work in and how URLs are structured. The Open Systems Interconnection (OSI) model is a model that shows the seven different layers that computers communicate through.

    The seven layers are:

    • The Application Layer
    • The Presentation Layer
    • The Session Layer
    • The Transport Layer
    • The Network Layer
    • The Data Link Layer
    • The Physical Layer

HTTP works in the Application Layer. HTTPS is HTTP run over TLS/SSL, so it also operates at the Application Layer; the TLS/SSL encryption itself sits between the Transport and Application Layers.

URLs with HTTP start with http://, and browsers typically mark them with an unlocked padlock or a "Not secure" warning next to the URL. HTTPS URLs start with https:// and, because the connection is secure, show a locked padlock next to the URL.

Conclusion

Utilizing encryption and digital certificates is important for both connections across the Internet as well as within an organization’s internal network. Security systems like Public Key Infrastructures (PKIs) provide users and devices in an organization with certificates to identify them and allow encryption of messages. To learn how Encryption Consulting can help you with setting up a PKI within your organization, visit our website at www.encryptionconsulting.com.

As humans learn more about Artificial Intelligence (AI) and develop what it can do, more and more organizations are implementing AI into their processes. Increased investment in AI development has made it affordable for most, if not all, organizations. Companies are learning that now is the perfect time to implement AI into their repetitive processes, creating a level of automation that increases productivity and allows people within the organization to focus on more detailed work that AI cannot do. But before we discuss how to implement AI in your organization, we should first look at the steps a company should take before beginning with AI.

Steps an Organization Should Take Before Implementing AI

  1. The first step any organization should take before implementing AI into their processes is to get familiar with Artificial Intelligence. Learning more about AI will help in later steps of the implementation and help you determine the places in your organization that will benefit most from AI. There are many free resources for learning about AI, including YouTube videos, university lectures, and open-source libraries and kits that will help you develop a better understanding of what goes into implementing an AI solution.
  2. Once you've familiarized yourself with the basics of AI, identify the problems you want Artificial Intelligence to solve and how much that may cost. AI can be implemented in an organization's existing products and services, or it can be something as simple as a chatbot on the main website. If your organization is looking for something simple, a chatbot is a good first step. Chatbots are very easy to set up, and they take care of users asking the same repetitive questions, freeing your support teams to focus on other projects and step in only when the chatbot does not already have an answer.
  3. After identifying current processes that would benefit from AI, or discovering gaps that AI could fill, it is time to start designing your solution. For the first few AI projects, it is best to combine external and internal teams: you get experts who already know how to implement an AI solution alongside internal team members who can learn for future AI projects. Starting small with your first project is important, as you don't want to take on too much at once; a simple month-long project can turn into a six-month project without careful planning. Now that you know the steps an organization should take before implementing Artificial Intelligence, let us discuss the different models of AI implementation an organization may adopt.

AI Implementation Models

There are three implementation models an organization can choose from when deciding to adopt AI.

  1. The first model is the "hub" model. As the name suggests, it concentrates all AI and analytics systems into a central hub. A central hub is well suited to deploying new AI systems, as it provides a fully centralized team to handle every step of the implementation. A "hub" should be developed gradually, as building such a large unit of the business all at once would be very complicated and time consuming. In the "hub" model, the systems and teams involved in AI sit in a centralized location, loaning out their experience to the different business units whenever necessary. The hub's development should be driven by the AI tasks the organization has determined it needs, allowing the hub to grow slowly over time rather than all at once.
  2. Another implementation model is the "spoke" model. It is the opposite of the "hub" model, instead spreading the AI team members and systems across the company's different business units. This gives each business unit a support team on deck for any AI tools it has implemented in its section of the business, and allows business units to develop their own AI tools and systems for their specific use, rather than deploying them organization-wide.
  3. The final model is a "hybrid" model, called the "hub-and-spoke" model. It combines components of the "hub" and "spoke" models. The central hub handles a small set of responsibilities, with the AI team lead at the center, while the spokes work within the different departments to create business unit-specific tools. The spokes focus on execution team oversight, adopting AI solutions, and performance tracking, while the hub handles hiring AI team members, performance management, and AI governance.

Ways of Using AI within your Organization

There are several different ways to use Artificial Intelligence within your organization that are simple to implement and don’t involve high costs. A centralized knowledge center is a great initial way to start using AI in your organization. Having a central knowledge base offers users the ability to quickly find and parse through documents relating to their questions, without having to use the time of an employee to answer the same questions over and over.

Like the centralized knowledge center, you can also set up an automated live chat, such as a chatbot, to answer questions for users. Additionally, you can integrate with popular applications, like Salesforce or Jira, and automate processes within those applications, allowing employees to save time and increase their productivity.
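A FAQ-style chatbot like the one described above can start very small. A minimal sketch (the FAQ entries and the 0.6 similarity cutoff are invented for illustration) that matches an incoming question against a knowledge base and falls back to a human when nothing fits:

```python
import difflib

# Hypothetical knowledge base: question -> canned answer.
FAQ = {
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
    "where can i download the datasheet": "Datasheets are under Free Downloads on our site.",
    "how do i contact support": "Email support or open a ticket in the portal.",
}

def answer(question, cutoff=0.6):
    """Return the closest FAQ answer, or hand off to a human."""
    match = difflib.get_close_matches(question.lower(), FAQ, n=1, cutoff=cutoff)
    if match:
        return FAQ[match[0]]
    return "Let me connect you with a team member."

print(answer("How do I reset my password?"))
```

Production chatbots use much richer language models, but the escalation pattern is the same: answer the repetitive questions automatically, and route everything else to the support team.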

Microsoft Azure is one of the three biggest Cloud Service Providers used by organizations today. The other two mainly used by organizations are Amazon Web Services (AWS) and Google Cloud Platform (GCP). With the current state of the world, many companies are moving their services to a partially or fully cloud-based platform like Azure or AWS. The reason behind this is the large number of managed services that these Cloud Service Providers offer, as well as the more easily usable and accessible options available for web servers and the like. Recently, many healthcare providers have been moving specifically to Microsoft Azure, because Azure has been upgrading its security systems to help these healthcare providers be HIPAA compliant, among other compliance standards it is targeting.

What does being Compliant Mean?

When talking about compliance, each company has different standards and practices it must conform to. These cyber security compliance standards are written by organizations that specialize in online security and know what types of protection should be in place for specific kinds of companies. The standards outline practices that should be in place, at a minimum, to be considered fully compliant. Not all organizations follow the same standards, either. There are general cyber security standards, such as the NIST Cybersecurity Framework (CSF), which focuses on critical infrastructure, and compliance standards for organizations in specific countries, but there are also compliance standards for certain types of organizations. The types of organizations that tend to have their own standards are banks or companies holding customer banking/credit card information, and healthcare companies. Some of the biggest healthcare standards you may have heard of are the Health Insurance Portability and Accountability Act (HIPAA) and the Health Information Technology for Economic and Clinical Health (HITECH) Act. These are vital for a healthcare organization to follow to maintain proper security within its environment. If a healthcare organization is found not to be following these and other standards, it will face legal action and likely have to pay thousands in fines.

How does an Organization become Compliant?

An organization can become compliant in a number of ways. Following these standards to the letter, and ensuring at least the minimum security they outline is in place, is the most crucial step. Organizations can also follow cyber security best practices, like those outlined in NIST SP 800-30 and other NIST recommendations, to further harden their security and support compliance. Additionally, security audits of an organization's cyber security framework should be completed at least annually. This helps ensure any updates to security standards are being followed; if they are not, this can be noted and remedied in the audit. There are also a number of security tools in platforms like Microsoft Azure that help organizations maintain compliance with less manual work. We will touch on this in the next section of this blog.

What is Microsoft Azure doing to help with compliance?

Microsoft has worked to ensure that its databases, as well as every other part of its cloud platform, can help a healthcare organization reach and stay in compliance with every healthcare standard it must follow. Using the Azure Security Center, users can keep track of the different cloud systems in use and ensure they meet the necessary compliance standards. The Security Center lets an organization stay up-to-date on the status of its compliance within Microsoft Azure, and lets Azure recommend changes to current practices to further comply with standards such as HIPAA. Microsoft also takes care of the deployment and maintenance of systems within Azure, relieving the organization of that hassle and manpower. Azure also offers the ability to complete third-party audits of the systems in place to check for proper compliance. This allows security audits to happen quickly and easily, so organizations can stay updated on security standards year-round. Organizations can also download compliance documentation via Microsoft Azure, further speeding up the audit process and providing easy access to the documentation for new hires.

There are also different tools available in Microsoft Azure for compliance purposes. Azure Blueprints is a service for creating frameworks for the services developers are building. These frameworks can be created by upper-level management and pre-loaded into Azure Blueprints for developer use; because a high-ranking member of the organization created the framework, developers know it is approved for use where necessary. Azure Policy acts similarly, but deals with policy and governance instead. By setting business rules and policy definitions within Azure Policy, a user can ensure that compliance standards are being met: Azure Policy evaluates resources in Azure by comparing their properties to those business rules.
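The evaluation described above, comparing each resource's properties against a business rule and flagging the non-compliant ones, can be sketched in a few lines. The resource and rule shapes below are invented for illustration and are not the real Azure Policy schema:

```python
# Hypothetical resource inventory and business rule.
resources = [
    {"name": "db1", "location": "eastus", "encryption_enabled": True},
    {"name": "vm7", "location": "westeu", "encryption_enabled": False},
]

rule = {"encryption_enabled": True}  # business rule: encryption required

def non_compliant(resources, rule):
    """Return the names of resources whose properties violate the rule."""
    return [r["name"] for r in resources
            if any(r.get(k) != v for k, v in rule.items())]

print(non_compliant(resources, rule))  # ['vm7']
```

The real service runs this kind of check continuously across a subscription and surfaces the results in compliance dashboards, but the core idea is this property-by-property comparison.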

Conclusion

Tools and services within Cloud Service Providers are a great way to maintain the integrity of your data in the Cloud. Azure Policy and Azure Blueprints work hand in hand to ensure that existing and new data entering the Cloud are properly protected. As time goes on, the cyber security world will likely see more tools like those Microsoft Azure provides, offering even more ways to ensure data security compliance is followed. Another great way to ensure compliance within an organization is to have experts look over your systems and documentation.

At Encryption Consulting, we provide data security assessments to ensure that your security tools and methods are being used properly. Our team of experts will ensure that your Public Key Infrastructure, Hardware Security Modules, and data encryption in general are up to the proper compliance standards your organization requires. We can also help implement new data security practices if a company’s infrastructure seems to be lacking. Encryption Consulting can help organizations create new governance documentation as well. To inquire about the different services we offer, visit our website at www.encryptionconsulting.com.

Hardening an organization's security grows more important as time goes on, since new infiltration techniques are discovered often. Attacks can come from several different vectors, and some of the more common attacks executed today are code signing attacks. These attacks exploit several different weaknesses, but there are methods to harden security against them. By following code signing best practices, you can harden your organization's security against these attacks.

Why You Should Follow Code Signing Best Practices

As many organizations know, some of the most prevalent attacks today are supply chain attacks. Supply chain attacks target organizations that provide software or tools to a number of smaller organizations. This allows threat actors to infect a tool or piece of software provided by a single organization and, in turn, infect all of the smaller organizations that use that tool. Examples of supply chain attacks have been seen in recent news, such as the SolarWinds and Kaseya attacks.

Many of these supply chain attacks succeeded due to a lack of code signing best practices. All it takes is a small gap for attackers to exploit and infect thousands of customers. Code signing is a common attack vector for supply chain attacks because tools distributed to many different organizations must be updated regularly. As long as code signing is in place, these updates are known to come from a trusted source: the organization that created the tool or software. Without it, anyone could send along an update that infects everyone who installs it, and this is exactly what happened in a number of supply chain attacks.
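The check that signed updates make possible can be sketched as follows. This is a toy: real code signing uses asymmetric signatures and certificates, but HMAC with a shared secret (the key name below is hypothetical) stands in here so the example runs with the standard library alone.

```python
import hashlib
import hmac

publisher_key = b"publisher-signing-secret"  # hypothetical signing secret

def sign_update(update_bytes, key):
    """Produce a tag over the update bytes (stand-in for a real signature)."""
    return hmac.new(key, update_bytes, hashlib.sha256).hexdigest()

def verify_update(update_bytes, signature, key):
    """Recompute the tag and compare in constant time."""
    expected = sign_update(update_bytes, key)
    return hmac.compare_digest(expected, signature)

update = b"legitimate update v2.1"
sig = sign_update(update, publisher_key)

print(verify_update(update, sig, publisher_key))              # True
print(verify_update(b"tampered update", sig, publisher_key))  # False
```

The point is the second call: an update that was altered, or signed by someone without the publisher's key, fails verification and is never installed.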

Code Signing in the Industry

Though code signing is not a new technology, as companies have used it for many years, gaps are still found in code signing techniques regularly. Though not related to code signing, a flaw was recently found in Log4j, a widely used Java logging library: the Log4j vulnerability, which had been present in the code for years. Even though it was only recently discovered, this vulnerability sits underneath a large portion of the Java code on the Internet, and its disclosure sent many of the world's companies scrambling to patch it. Many of these organizations will need to harden their security due to this flaw and keep up-to-date on patches from the Log4j project. This type of vulnerability is why it is so important to keep your systems aligned with code signing best practices, as a similarly large flaw may be found in the future.

Top Code Signing Best Practices

Below are some of the top code signing best practices that any organization can use to harden their existing security system.

  • Utilization of Hardware Security Modules: One of the most important code signing best practices is using a Hardware Security Module, or HSM, to protect the private keys used for code signing certificates. Though software-based storage methods exist and are viable, HSMs are ultimately a much safer method of private key storage. HSMs use tamper-evident and tamper-resistant protections to ensure that no one without proper access can use the keys held within. A threat actor would essentially need to steal the HSM from its server rack and then defeat its physical protections, which are designed to destroy the keys when tampering is detected. The strength of this security is why HSMs are so highly recommended for code signing.
  • Proper Access Control: Access to keys and HSMs must be carefully curated within an environment to ensure that unwanted users cannot use code signing for malicious ends. Users within the environment should have access only to the code signing processes and tools they absolutely need to complete their job. This method of access control is known as the Principle of Least Privilege, and it is commonly followed by most organizations as a great way to control access to keys, files, and data in general within a secure environment.
  • Use of a Secure Public Key Infrastructure: Another important part of a strong code signing environment is a strong and trusted Public Key Infrastructure. A Public Key Infrastructure (PKI) uses Certificate Authorities (CAs) to distribute certificates, whether for code signing, authentication, or other purposes, to users and devices within an organization. A PKI can be external, where a trusted secondary organization manages all of its components, or internal and run by the organization itself. When using an external PKI, the organization must ensure it is working with a trusted provider that runs a secure PKI. If using an internal PKI, the organization should ensure it is properly set up, with every security detail in place. The average internal PKI uses a two-tier hierarchy, with an offline Root CA and an online Issuing CA; the Issuing CA is the one that actually distributes certificates to users and devices within the organization.
  • Proper Workflow Management: Another important part of code signing is ensuring that proper workflow management is in place. Workflow management means that any code signing activity is logged and requires approval from a secondary, trusted user. Logging of code signing activity is vital: when a code signing breach occurs, the organization can audit the trail of the breach and ensure the gap found in the environment is promptly fixed. Approvals are equally important; if an insider threat attempted to push malware through a properly signed update, a secondary user reviewing the code to be signed could notice it is not a proper update and stop the signing.
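The workflow controls in the last bullet, logging every request and enforcing a two-person rule, can be sketched as follows. The function names and log structure are illustrative, not a real signing product's API:

```python
import datetime

audit_log = []  # every request lands here, approved or not

def request_signing(artifact, requester, approver=None):
    """Record a signing request; sign only with a distinct approver."""
    entry = {
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "artifact": artifact,
        "requester": requester,
        "approver": approver,
        "signed": False,
    }
    if approver is not None and approver != requester:  # two-person rule
        entry["signed"] = True
    audit_log.append(entry)  # logged either way, for later audits
    return entry["signed"]

print(request_signing("installer.msi", "alice"))         # False: no approval
print(request_signing("installer.msi", "alice", "bob"))  # True
print(len(audit_log))                                    # 2
```

Note that a requester approving their own request is rejected, and denied requests are still logged, since an audit trail of failed attempts is exactly what reveals an insider probing the process.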

Conclusion

As you can tell, hardening security wherever possible is vital to an organization's continued safety. Following best practices in all areas of computer security matters, because a major flaw like the Log4j vulnerability could be discovered at any time. Another great way to stay on top of best practices is to monitor cybersecurity news and apply patches or new methods of securing systems when necessary. To learn more about how to implement our code signing product, visit our website at www.encryptionconsulting.com.

Internet of Things, or IoT, devices are everywhere in the world, whether you are at home, in the office, or just on the Internet in general. An IoT device is any type of device that connects to a network to access the Internet, so Personal Computers, cellphones, some speakers, and even some outlets are considered IoT devices. Today, even cars and airplanes use IoT devices, meaning if these devices are attacked by threat actors, then cars or airplanes could be hijacked or stolen. With such a widespread use of IoT devices in place in our world, authenticating and authorizing IoT devices within your organization’s network has become vital. Allowing unauthorized IoT devices onto your network can lead to threat actors leveraging these unauthorized devices to perform malware attacks within your organization.

Software-Based IoT Authentication

Before talking about specific ways to give authorization to IoT devices, we should first take a look at some of the general, software-based authentication methods available to Internet of Things devices.

  • One-way authentication: When two devices are both attempting to communicate with each other, one-way authentication can be used to authenticate only one of the devices as opposed to both. This is similar to how a client-server relationship works, where the client is just authenticating itself with the server, not the other way around. An example of one-way authentication could be signing onto a server with a username and password.
  • Two-way authentication: Similar to one-way authentication is two-way (mutual) authentication, where both parties authenticate themselves to each other. An example of two-way authentication is a mutual SSL/TLS handshake, in which both the client and the server present certificates.
  • Three-way authentication: Three-way authentication uses a central point, like a server, to authenticate both devices attempting to communicate, with the central point as well as with each other. An example is two parties establishing trust in each other through a server that both already trust.
  • Distributed authentication: Another method of authentication used with IoT devices is Distributed authentication. Distributed authentication uses a distributed system to authenticate the two communicating parties.
  • Centralized authentication: Similar to distributed authentication is centralized authentication. Instead of a distributed system, a centralized system is used to authenticate the parties.
  • Two-factor authentication: One final, and more common, way to authenticate devices is two-factor authentication. When logging into a network, a user may supply a username and password plus a second factor, such as a code sent by email or text message, or a scanned QR code, thus authenticating that device.

These are commonly used, largely software-based methods, but the hardware-based methods that follow are found more often in larger organizations.


Hardware-Based Authorization Methods

As I mentioned previously, hardware-based methods are more commonly used within organizations, as they provide the most widespread and secure way of authenticating IoT devices on a network. One of these hardware-based methods is the Hardware Security Module. Hardware Security Modules, or HSMs, are used to securely store the private keys of asymmetric key pairs. An asymmetric key pair consists of a public key and a private key that are mathematically linked. The private key, as the name suggests, is kept secret, while the public key can be viewed by anyone. For IoT device authentication, each device on the network has an asymmetric key pair, and a digital certificate associated with that key pair, tied to the device being authenticated. If the certificate presented to the HSM contains a public key linked to a private key stored within the HSM, the device is allowed access to the network. If not, its access is denied.
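The public/private key check described above can be sketched as follows, using textbook-sized toy RSA numbers in place of the HSM's real keys; the challenge string and function names are hypothetical:

```python
import hashlib

# Toy RSA key pair (textbook-sized numbers, for illustration only; a real
# HSM protects 2048-bit or larger keys that never leave the hardware).
P, Q = 61, 53
N = P * Q      # modulus: 3233
E = 17         # public exponent (appears in the device's certificate)
D = 2753       # private exponent (stored inside the HSM)

def digest_int(data: bytes) -> int:
    """Reduce a SHA-256 digest into the toy modulus range."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % N

def hsm_sign(challenge: bytes) -> int:
    """The HSM signs a challenge with the private key it protects."""
    return pow(digest_int(challenge), D, N)

def verify(challenge: bytes, signature: int, cert_public_exp: int) -> bool:
    """The network checks the signature with the certificate's public key."""
    return pow(signature, cert_public_exp, N) == digest_int(challenge)

def device_is_authorized(cert_public_exp: int) -> bool:
    """Admit a device only if its certificate's public key matches the
    HSM-held private key, proven by a signature over a challenge."""
    challenge = b"network-admission-challenge"
    return verify(challenge, hsm_sign(challenge), cert_public_exp)
```

The design point is that the private key never leaves the HSM: the network only ever sees the signature, which anyone holding the certificate's public key can check.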

Another method, usually used in conjunction with HSMs, is a Public Key Infrastructure. A Public Key Infrastructure, or PKI, is a hierarchy of Certificate Authorities stemming from a Root Certificate Authority, which together create and distribute certificates to authorized devices on a network. These certificates can be traced back to the trusted Root Certificate Authority (Root CA), authorizing the IoT device tied to each certificate to use the organization's network. Most organizations integrate an HSM into their PKI to provide the highest level of security: the HSM handles storage of the private keys for the certificates generated by the CAs. If a valid certificate, with a valid certificate chain connecting it to the Root CA, is not found, the device is denied access to the network the PKI protects.
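The chain-walking logic can be sketched as follows. The certificate names are hypothetical, and the dictionary stands in for the signature checks a real PKI performs at each link of the chain:

```python
# Hypothetical certificate records mapping subject -> issuer. In a real PKI
# each link is a cryptographic signature check; here the dict stands in for
# an already-verified issuance relationship.
CERT_ISSUERS = {
    "device-001-cert": "issuing-ca",
    "issuing-ca": "intermediate-ca",
    "intermediate-ca": "root-ca",
    "root-ca": "root-ca",  # the Root CA is self-signed
}

def chains_to_root(cert: str, trusted_root: str = "root-ca") -> bool:
    """Walk the issuer chain; grant access only if it reaches the Root CA."""
    seen = set()
    while cert not in seen:
        seen.add(cert)
        if cert == trusted_root:
            return True
        issuer = CERT_ISSUERS.get(cert)
        if issuer is None:
            return False  # broken chain: unknown issuer
        cert = issuer
    return False          # cycle that never reached the trusted root
```

A device presenting "device-001-cert" is admitted because the chain ends at the Root CA; an unknown certificate breaks the chain and is rejected.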

Some organizations will set up a Trusted Execution Environment (TEE) to protect their network and any sensitive data stored within it. A TEE is a secure area set up within a device that connects to an organization's network, and it uses strong encryption to authorize that device to connect to and use the network. TEEs are used in many organizations because they do not overtax the systems in place on a device, instead using a minimal amount of computing power to function. One final authentication method that organizations often use is the Trusted Platform Module. A Trusted Platform Module, or TPM, is a microchip embedded in an IoT device that completes device authentication using the host-specific encryption keys stored within it. The chip, and the keys it holds, are not accessible from software, so an attacker cannot leverage the chip to gain access to the network. When a device connects to a network using TPMs, the chip provides a key and the network compares it to its known host keys. If it matches one of the known host keys, access is granted.
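The TPM key-matching step can be sketched as follows, assuming a hypothetical registry of known host keys; `hmac.compare_digest` is used so the comparison does not leak timing information:

```python
import hmac

# Hypothetical registry of host-specific keys the network already trusts,
# one per provisioned device (32-byte placeholder values for illustration).
KNOWN_HOST_KEYS = [b"\x11" * 32, b"\x22" * 32, b"\x33" * 32]

def tpm_grants_access(presented_key: bytes) -> bool:
    """Grant access only if the TPM-provided key matches a known host key.

    Each comparison is constant-time, so an attacker cannot narrow down a
    key byte-by-byte by measuring response times.
    """
    return any(hmac.compare_digest(presented_key, known)
               for known in KNOWN_HOST_KEYS)
```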

Conclusion

These are just a few of the many solutions for IoT device authentication available to organizations. Choosing the right one is very important, as not every organization has the same needs for its IoT device security. It is important to have a detailed discussion within your cybersecurity team to determine what threats the authentication method must address and how widely it needs to be deployed. If your organization is massive and holds minimal sensitive information, a TPM would likely not be the way to go: security does not need to be that strict, and putting a chip in every device on the network would be extremely expensive. Something to note with these systems is that many of them would otherwise need to be handled manually; IoT management platforms can help, as they let an organization manage its security tools and get health reports on hundreds of IoT devices from a single portal. For any consultation needs relating to PKI or HSM work, visit our website at www.encryptionconsulting.com.

About the Author

Riley Dickens is a Consultant at Encryption Consulting, working with PKIs, creating Google Cloud applications, and working as a consultant with high-profile clients.


Protecting your online environment in today’s world has never been more necessary. COVID-19 has caused many organizations to rethink how they secure their network and Internet of Things (IoT) devices within that network. To begin the process of protecting IoT devices and Personal Computers in your network, you can start with Secure Boot. Much like the code signing process, Secure Boot verifies that the signatures and keys used by the boot hardware and the OS software are all valid and have not been tampered with.

What is Secure Boot exactly?

Secure Boot works by authenticating the code and boot images used by the operating system against the hardware before the system is allowed to boot. They are authenticated against the hardware because the hardware is pre-configured to verify code using trusted credentials. This ensures that the images and code have not been tampered with or changed by threat actors attempting to use malware to infect your network or the devices on it. As you can tell, this makes enabling Secure Boot on networked devices significant, as it thwarts many common malware attacks. Many malware attacks change Operating System code, or install a new boot loader, so that when a system is rebooted, the malware launches and spreads throughout the device. Enabling Secure Boot ensures this does not occur: the bootloader will not have a valid key and signature matching the hardware, so Secure Boot stops the boot-up process. If malware got through because Secure Boot was not enabled, an organization could face massive repercussions, such as losing millions of dollars or vital information it would otherwise not want public.

How does Secure Boot work?

The process behind Secure Boot is not as complicated as you might think. When a device with Secure Boot enabled is turned on, the first step is that the CPU's internal bootloader verifies the authenticity of the bootloader. It does this by checking the signature, generated with the manufacturer's private key, against the public key embedded in the device. Code signing and Secure Boot both use asymmetric encryption to validate the manufacturer and the authenticity of the software. Asymmetric encryption works by first generating two mathematically linked keys, a public key and a private key. The private key is kept secret, known only to the key's creator, while the public key is available to anyone. Because the keys are mathematically linked, a piece of software can be signed with the private key and that signature can be verified with the public key. This establishes that the software in question was created by the key owner and has not been tampered with.


The next step in the Secure Boot process is verifying the authenticity of the Operating System and any applications started at boot. Using the same process as the first step, the embedded public key is used to verify that the Operating System and applications are valid. Once all these parts of the boot-up process are verified, the device can boot up and run normally. If, at any step, the Operating System, bootloader, or any application fails verification against the embedded public key, the boot-up process stops and remediation steps are taken.
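The two verification steps above can be sketched as one chain-of-trust loop, using toy textbook RSA numbers in place of the manufacturer's real keys; the stage names and image contents are hypothetical:

```python
import hashlib

# Toy RSA key standing in for the manufacturer's signing key; real firmware
# embeds a 2048-bit or larger public key in hardware.
N, E_PUB, D_PRIV = 3233, 17, 2753

def _digest(image: bytes) -> int:
    """Reduce a SHA-256 digest of the image into the toy modulus range."""
    return int.from_bytes(hashlib.sha256(image).digest(), "big") % N

def sign(image: bytes) -> int:
    """Manufacturer signs an image at build time with the private key."""
    return pow(_digest(image), D_PRIV, N)

def verify(image: bytes, signature: int) -> bool:
    """The device checks an image against the embedded public key."""
    return pow(signature, E_PUB, N) == _digest(image)

def secure_boot(stages: list[tuple[str, bytes, int]]) -> str:
    """Verify each stage in order: bootloader, OS, then boot-time apps."""
    for name, image, signature in stages:
        if not verify(image, signature):
            return f"halted at {name}"  # remediation steps would begin here
    return "booted"
```

A tampered image or signature at any stage halts the boot before later stages ever run, which is exactly the behavior that stops bootloader-replacement malware.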

Roadblocks for Secure Boot

Since Secure Boot uses a process very similar to code signing, it faces many of the same problems. The most pressing issue is protecting the asymmetric signing keys used in the Secure Boot process. I mentioned previously that the public key of the public/private key pair is embedded in the device; in practice, this takes the form of a digital certificate generated for that public key. This certificate, much like a code signing certificate, contains the public key's information and is signed by the private key, which allows the key information of the public and private keys to be matched. Protecting these keys is the first major issue many organizations face.

If the private key used to sign the digital certificate is compromised by a malicious threat actor, they can use that certificate to pass bootloaders or Operating System code through the Secure Boot process successfully, allowing them to infect users with malware. These keys can be protected with either hardware-based or software-based key storage. Software-based storage is the weaker of the two, as keys can still be extracted from it. Hardware-based key storage, such as a hardware security module, protects keys far more strongly. Hardware Security Modules, or HSMs, are tamper-evident and tamper-resistant, and so protect encryption keys much more reliably.

Another way to protect data, beyond Secure Boot, is setting strong encryption policies within your organization. These policies provide uniformity across your organization, allowing the different teams within it to follow similar protection methods. Additionally, implementing Intrusion Prevention Systems (IPS) and Intrusion Detection Systems (IDS), securing your code at the source, and using organizations like Encryption Consulting to identify gaps in your security systems are other ways to protect the data in your organization.

Conclusion

At the end of the day, enabling Secure Boot on all of the devices in your organization is a great way to start defending your network from malicious threat actors. Secure Boot provides a built-in method of checking your Operating System and bootloader for malicious code, thus allowing you to feel secure in the device you are using. Other methods, like setting up IPS and IDS or having a third-party assess your security plans, can work hand-in-hand with Secure Boot to provide you with the best possible security systems for your home or enterprise network. To learn more about how Encryption Consulting can help, visit www.encryptionconsulting.com.
