When dealing with cybersecurity and the security of an organization, it is vital to prepare for every eventuality. One of the most important forms this takes, especially for organizations creating and distributing software to customers, is code signing. At its core, code signing is a relatively simple process. A software publisher or distributor wants to send code out to be used by a customer, but first they must ensure that the code can be trusted by the user. This is where code signing comes in. First, the publisher generates a key pair and submits something called a Certificate Signing Request to a trusted, usually external, Public Key Infrastructure (PKI).

A Certificate Signing Request, or CSR, asks a trusted PKI to generate and sign a certificate associated with a key pair. A key pair consists of a public key and a private key which, as the names imply, are made publicly available and kept private, respectively. The CSR, which contains the public key (the private key never leaves the requestor), is sent to the Certificate Authority, or CA, of the PKI, and the requestor's identity is verified by the CA. After this verification, the CA bundles the public key with the requestor's identity and signs the bundle, creating a valid code signing certificate. The requestor then receives the certificate and can begin signing code.

Once the code signing certificate is created, the code signing process truly begins. First, the code to be signed is run through a hashing algorithm, which takes a file as input and produces a hash digest: a series of numbers and letters that is unique to the contents of the file. If even a single letter of the file changed, the resulting digest would be completely different. The digest and the private key corresponding to the certificate are then passed into a signing algorithm, which produces a signature. The signature is attached to the code and sent to the user, who can verify it against the publisher's certificate. Before we delve into the best practices of code signing, let us take a short look at why code signing is so important.
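The hash-then-sign flow just described can be sketched in a few lines of Python. The standard library has no asymmetric signing primitive, so this sketch stands in an HMAC for the real signing algorithm; an actual signer would use RSA or ECDSA with the certificate's private key, ideally held in an HSM:

```python
import hashlib
import hmac

def hash_file(path: str) -> bytes:
    """Hash a file's contents with SHA-256; changing even one byte of
    the file produces a completely different digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.digest()

def sign_digest(digest: bytes, private_key: bytes) -> bytes:
    """Stand-in for the signing step. Real code signing applies an
    asymmetric algorithm (e.g. RSA or ECDSA) to the digest with the
    certificate's private key; HMAC is used here only so the sketch
    runs with the standard library alone."""
    return hmac.new(private_key, digest, hashlib.sha256).digest()
```

The signature produced in the second step is what gets attached to the distributed code alongside the certificate.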

The Importance of Code Signing

Code signing is vital to many organizations' security. If an attacker gains the ability to sign malware under your organization's name, the organization's reputation is at risk. Code signing also lets users authenticate both the software itself and its developer. It additionally opens the door to the major application stores, which require that the applications they distribute be signed. This means that, with the relatively simple process of code signing in place, even smaller application owners can get their apps onto the big app stores. Now that we know why code signing is so important, let us take a look at code signing best practices.

Code Signing Best Practices

  • Virus Scanning

    An important part of any code signing process is having malware and virus scanning in place. Scanning should be done when the file is uploaded to be hashed, because if code with embedded malware is hashed and signed, that malware may go undetected for the rest of the process. Once your organization signs and approves the code, malware and all, a user will install it onto their computer without hesitation, since a trusted organization has vouched for it. Once downloaded, the malware can begin infecting the user's computer, harming users who trusted the signature and giving your organization a bad name. It is best to select a code signing tool that can integrate with the virus and malware scanning your organization already has in place.
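As a sketch of this practice, the signing pipeline can be gated on a scan of the exact file that is about to be hashed. Both `scan` and `sign` here are hypothetical callables standing in for your organization's scanner and signer integrations, not any real product's API:

```python
def sign_if_clean(path, scan, sign):
    """Refuse to sign any file that has not passed a malware scan.

    `scan` is a placeholder callable returning True only if the file is
    clean; `sign` is a placeholder callable that hashes and signs the
    file. Scanning the same path that gets signed avoids a gap where a
    clean file is swapped for a malicious one between the two steps.
    """
    if not scan(path):
        raise ValueError(f"refusing to sign {path}: malware scan failed")
    return sign(path)
```

Tying the scan and the signature to the same artifact in one step is the point; scanning at some earlier, unrelated stage leaves room for tampering.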

  • Secure Private Key Storage

    One of the most vulnerable parts of the code signing process is the private key associated with the certificate. If the key is improperly protected, the whole process of code signing is for nothing: once the key is stolen, an adversary can use it to sign their own code under the organization's name, and users will treat that code as trusted software when it is not. The adversary can then distribute malware-laden code, claimed to be from your organization, infecting every user who downloads and runs it. In recent years, many supply chain attacks have occurred in exactly this way: with improper protections on private keys, adversaries got hold of the keys and signed code with malware embedded in it.

    The signed code was then sent to service providers, each of which has thousands of users across many different companies. The malware was then able to infect all of these users, allowing the attackers to steal information, money, and more. Software-based key storage is possible, but it is much less secure than hardware-based storage. It is recommended that a code signing solution be selected based on its use of a hardware-based storage method, such as a Hardware Security Module (HSM). HSMs store keys so securely that an attacker would need to steal the device itself before even attempting to extract the keys, and by that point the keys could be rotated or revoked, rendering them useless.

  • Secondary Verification

    Other tools can be used alongside code signing to keep signing operations secure. Tools like Active Directory, Multi-Factor Authentication (MFA), and Two-Factor Authentication (2FA) are great ways to control who can sign code and when. With these tools, a user must not only sign in with a password but also pass a secondary layer of verification, both before accessing the page that creates code signing key pairs and before the actual signing process. This stops outside attackers from signing when they shouldn't and also helps track any insider threats that may occur within an organization. Any code signing product in use by an organization should utilize some type of secondary verification to protect the code sent out by that organization.

  • Time Stamping

    Time stamping is another important tool in the code signing process. It involves imprinting the exact time and date of signing into the signature, for the purposes of logging and tracking signing operations, and is usually done by contacting a trusted timestamping server during the signing process. This timestamp also supports the next best practice I will discuss: logging signing operations.
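A rough sketch of what a timestamp binds together: a real RFC 3161 Time Stamping Authority returns a token that it has itself signed over the signature's hash and the current time, whereas this toy version only records those two fields:

```python
import hashlib
import json
import time

def timestamp_signature(signature, now=None):
    """Record the two fields an RFC 3161 timestamp token binds together:
    a hash of the signature and the UTC time of signing. A real Time
    Stamping Authority would sign this pair with its own key; this
    sketch only shows what gets bound, not the TSA's countersignature."""
    token = {
        "signature_sha256": hashlib.sha256(signature).hexdigest(),
        "signed_at_utc": int(time.time() if now is None else now),
    }
    return json.dumps(token, sort_keys=True)
```

Because the token proves the signature existed at a given moment, a timestamped signature can still be validated after the signing certificate itself expires.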

  • File Logging

    One other important factor in code signing is logging each signing operation and other related tasks. Operations like creating a key pair, creating a code signing certificate, actually signing code, and changing code signing certificates or key pairs should all be logged for future use. Storing these logs in one central location allows auditors to review exactly what has occurred within the code signing tool.

    Additionally, these logs are important if an insider or outsider threat does occur, as they allow an organization to trace the operations that took place and discover who signed what. Logging can also be used to stop a malicious signing mid-process: if a signing operation has not been approved by the proper teams, it can be halted before it is too late.
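A minimal sketch of such logging: one structured record per operation, written to a central logger, with unapproved operations stopped before they complete. The field names here are illustrative, not any product's schema:

```python
import json
import logging

audit_log = logging.getLogger("codesign.audit")

def record_operation(operation, user, artifact, approved):
    """Write one structured record per code-signing-related operation
    (key pair creation, certificate creation, signing, certificate
    changes) so auditors can later reconstruct who did what and when.
    Unapproved operations are logged and then stopped."""
    entry = json.dumps({"op": operation, "user": user,
                        "artifact": artifact, "approved": approved})
    if not approved:
        audit_log.warning(entry)
        raise PermissionError("operation not approved: " + entry)
    audit_log.info(entry)
    return entry
```

In practice the logger would be wired to append-only, centrally collected storage so that an insider cannot quietly erase their own trail.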


If you are wondering where you can gain access to a code signing tool, look no further than Encryption Consulting. Our code signing tool, Code Sign Secure, utilizes all the best practices mentioned in this blog, and more. It uses Hardware Security Modules to securely store the private keys behind code signing certificates, Active Directory for logging in to the website where keys and certificates are created, and Multi-Factor Authentication before any code is signed. We also use role-based access control, so that only those with the proper permissions may use any part of the product. Finally, we virus-scan every file before it is hashed, timestamp every signature that is created, and provide centralized logging accessible via our webpage.

To find out more about Code Sign Secure, or to try our Proof of Concept, reach out to us at Encryption Consulting.

Free Downloads

Datasheet of Code Signing Solution

Code signing is a process to confirm the authenticity and originality of digital information such as a piece of software code.

secure and flexible code signing solution

About the Author

Riley Dickens is a Consultant at Encryption Consulting, working with PKIs, creating Google Cloud applications, and working as a consultant with high-profile clients.


When working with different security products, one of the many devices you run into is the Hardware Security Module, or HSM. HSMs are devices designed to securely store encryption keys for use by applications or users. The Thales Luna HSM can be purchased as an on-premises, cloud-based, or on-demand device; we will be focusing on the on-demand version. The Luna HSM also comes in a PED-authenticated version and a password-authenticated version. The PED-authenticated Hardware Security Module uses a PED device with labeled keys for roles within the HSM, with PINs correlating to the PED keys, creating role separation between the different roles on the HSM. The password-authenticated version only requires a typed-in password, and no PED device. This guide can be used for both types of HSM.

The CipherTrust Manager from Thales works as a centralized key management device, allowing users to generate, manage, destroy, export, and import encryption keys for client and application use. Working alongside the CipherTrust Manager (CM) can be an HSM that stores the keys the CM will use. The CM has the same authentication options as the Luna HSM; however, if the HSM in use is PED-authenticated, then the CM must also be PED-authenticated, and likewise for password-authenticated HSMs. Before we move into the integration steps, there are a few pre-integration steps that must be discussed.

Pre-Integration Steps 

Before integrating an HSM and CM, a few steps must be completed. The most important is ensuring that both the HSM and CM have been fully configured, including the network configuration, SSH configuration, and NTP server setup. When integrating a Hardware Security Module and a CipherTrust Manager, they must use the same NTP (Network Time Protocol) server so that their clocks agree. Additionally, the HSM must be reachable across the network by the CM, meaning the two must be on the same subnetwork so that the CM can ping and access the HSM. Remote access to both devices is also important, as working from inside a data center quickly becomes overwhelming, so being able to reach the devices remotely is vital. We will not cover how to configure an HSM and CM in this guide, as this is a long and involved process; you can reach out to www.encryptionconsulting.com for assistance. Instead, we will focus on the major steps involved in integrating a Hardware Security Module and a CipherTrust Manager.


When working through these integration steps, first ensure a number of elements are in place and working properly. Begin by confirming the CM can reach the HSM; the simplest check is to SSH into the CM and ping the IP address of the HSM, which confirms that communication between the two devices is in place. Next, ensure that both devices are updated to the latest software and firmware versions, so that all necessary security patches are applied. Finally, confirm that the NTP server is working properly with both devices; without NTP running properly, the integration of the two devices will fail.
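The reachability check described above can be sketched as a simple TCP probe run from the CM side. The port numbers are deployment-specific assumptions (22 for the SSH interface is typical); confirm which ports your HSM actually exposes before relying on a specific number:

```python
import socket

def reachable(host, port, timeout=3.0):
    """Probe whether a TCP service on the HSM answers from this host.
    A successful connect proves basic network reachability; a failure
    means a routing, subnet, or firewall problem to fix before
    attempting the HSM/CM integration."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

An ICMP ping (as described in the text) answers the same question at a lower layer; the TCP probe has the advantage of also confirming the specific service is listening.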

HSM Integration Steps 

To begin integrating the HSM and CM, certificates must first be created. This can be done through the Lunaclient software used with the HSM, which is downloaded from the Thales Knowledge Center using a customer support portal login. Once the Lunaclient is installed, run the following command:

cd “C:\Program Files\Safenet\lunaclient” 

From here, you can run the command vtl createcert -n <cert name>. This creates a certificate and private key in the C:\Program Files\Safenet\lunaclient\cert\client directory. From the same lunaclient directory, run the command pscp.exe admin@<HSM_IP>:server.pem server_<HSM hostname>.pem for each HSM being integrated. This transfers each HSM's server certificate to the lunaclient directory under the name server_<HSM hostname>.pem; we use a name other than server.pem so that, when multiple HSMs are being integrated, each newly imported certificate does not overwrite the previous one. Next, note down the partition label and partition serial number of the partitions associated with the CM. To do this, SSH into the HSM via the command ssh admin@<HSM IP>. Once logged in, run par list to show all of the partitions on the HSM along with their serial numbers and labels. Once those are noted down, the final step is to make the CM a client of the HSM using the certificate we created. Transfer the certificate file from the client device to the HSM with the following command from the lunaclient directory:

pscp.exe ./cert/client/<client cert name>.pem admin@<HSM_IP>

Now that the client certificate is on the HSM, register the client with the command client register -n <name of the client> -h <name of the client certificate without the .pem extension>. Once the client is successfully registered, assign a partition to it via the command

client assignpartition -c <client name> -p <partition name>.

These steps should be done for each HSM being integrated.  

CM Integration Steps 

Now that the CM has been set as a client of the HSM, we must provide these files to the CM itself. To do this, log in to the GUI of the CipherTrust Manager via one of the IP addresses set up during network configuration. Once at the webpage https://<IP of the CM>, log in and go to the Admin Settings > HSM tab. From here, follow the steps as prompted: you will need to provide the HSM IP address, the HSM server certificate, both the client certificate and key, the partition label, and the partition serial number. For the certificates, open the files and copy and paste their contents into the GUI. The Crypto Officer password for the partition will need to be provided as well.

Once these are provided, the GUI will ask if you want to replicate the keys in the partition across the environment. You should select yes, as the data in the partition will otherwise be lost. The CM will then restart itself and its services, ensuring it can connect to the HSM properly. Once the reboot completes, you can log back into the GUI and see that the HSM is now integrated. From here, you can add additional HSMs to the integration, which requires only the partition password, serial number, and label, plus the HSM IP and the HSM server certificate. You have now successfully integrated all of your HSMs with your CipherTrust Manager!

Known Issues and Workarounds 

Problem: HSM Seal not reachable from the CM GUI. 

Solution: If an error says the HSM seal cannot be retrieved when attempting to connect to the GUI, there is likely an issue with your client certificate. A rare bug in vtl createcert can cause a certificate to be created with an 11-day validity instead of ten years. Change the extension of your client certificate to .cert and check the certificate's expiration date. To recover, SSH into the CM and run the command kscfg system reset. You will lose the HSM configuration in place, but the SSH and network configurations are kept. From there, re-follow the steps for creating a client of the HSM with a new certificate.

Problem: IP Address unreachable for the GUI 

Solution: If, after updating the software of the CM, you cannot reach your IP addresses, you will need to try the same IP address on each interface until you find the correct one. When updating from version 2.0 to version 2.9, a bug along the way may associate the network interfaces with the wrong MAC addresses: an interface may report itself as interface 1 or 0 when it is actually interface 2 or 3. The fix is to configure the same IP address on each interface in turn until it works properly and you can connect to it.


As I mentioned, we did not go into detail here about the general configuration of the HSM and CM. These steps can be done on your own or with our assistance. To reach out to us for configuration help or assistance, go to our website www.encryptionconsulting.com. We can assist with assessments, planning, and implementation of HSMs, PKIs, and CMs, as well as encryption advisory.


The Luna PED is an authentication device that permits access to the administrative interface of the PED-authenticated HSM. Multi-factor (PED) authentication is only available with the Luna S series. The PED client and server are software components that allow the HSM and PED to communicate over a Transmission Control Protocol/Internet Protocol (TCP/IP) network. The PED server resides on the host computer to which a remote-capable Luna PED is connected via USB. The PED client resides on the system hosting the HSM and can request PED services from the PED server over the network connection. Once the data path is established and the PED and HSM communicate, they authenticate each other and create a common data encryption key (DEK) used for PED protocol data encryption. Sensitive data in transit between a PED and HSM is thus encrypted end to end.

Understanding What a PED Key is and Does

A PED Key is an electrically programmed device with a USB interface embedded in a molded plastic body for ease of handling. Specifically, a PED Key is a SafeNet iKey authentication device model 1000 with FIPS configuration. In conjunction with PED 2 or PED 2 Remote, a PED Key can be electronically imprinted with identifying information, which it retains until deliberately changed. A PED Key holds a generated secret that might unlock one or more HSMs.

That secret is created by initializing the first HSM. The secret can then be copied (using PED 2.x) to other PED Keys for backup purposes or to allow more than one person to access HSMs protected by that secret. The secret can also be copied to other HSMs (when those HSMs are initialized) so that one HSM secret can unlock multiple HSMs. The HSM-related secret might be the access control for one or more HSMs, the access control for Partitions within HSMs, or the Domain key that permits secure moving/copying/sharing of secrets among HSMs that share a domain.

The PED and PED Keys are the only means of authenticating and permitting access to the administrative interface of the PED-authenticated HSM. They are the first part of the two-part Client authentication of the FIPS 140-2 level 3 compliant SafeNet HSM with the Trusted Path Authentication. PED and PED Keys prevent key-logging exploits on the host HSM. The authentication information is delivered directly from the hand-held PED into the HSM via the independent, trusted-path interface. Users do not type the authentication information on a computer keyboard, and the authentication information does not pass through the computer’s internals, where malicious software could intercept it.

The HSM or Partition does not know PED Key PINs, the PIN and secret are stored on the PED key. The PIN is entered on the PED and unlocks or allows the secret, stored on the PED key, to be presented to the HSM or Partition for authentication. The PED does not hold the HSM authentication secrets. The PED facilitates the creation and communication of those secrets, but the secrets themselves reside on the portable PED Keys. An imprinted PED Key can be used only with HSMs that share a particular secret, but PEDs are interchangeable.

Types of PED Roles

PED Keys are generated with a specific role in mind, determined when certain events take place on the HSM. Using these roles, a quorum of M of N PED Keys can be created, where M is the number of keys necessary to run a command as that role and N is the total number of keys created. The following are the types of roles that can be created on a Luna HSM:
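The counting rule behind an M of N quorum can be sketched as follows; note that the real HSM enforces this cryptographically by splitting the role's secret across the N keys, not by merely counting presented keys:

```python
def quorum_met(presented_keys, m):
    """M of N policy check: at least M *distinct* imprinted PED Keys
    must be presented before a command runs as the protected role.
    Presenting the same key twice does not count twice. This models
    only the counting rule, not the HSM's secret-splitting mechanism."""
    return len(set(presented_keys)) >= m
```

With a 3-of-7 split, for example, any three key holders together can authenticate the role, but no one or two holders can act unilaterally, and up to four keys can be lost before the role is locked out.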

Security Officer – SO

The first actions with a new SafeNet HSM involve creating an SO PIN and imprinting an SO PED Key. A PED PIN (an additional, optional password typed on the PED touchpad) can be added. SO PED Keys can be duplicated for backup and shared among HSMs by imprinting subsequent HSMs with an SO PIN already on a PED Key. The SO identity is used for further administrative actions on the HSMs, such as creating HSM Partition users and changing passwords, backing up HSM objects, and controlling HSM Policy settings. It is recommended that a quorum of 3/7 be used with Blue keys.

Partition User or Crypto Officer

HSM Partition User key. This PED Key is required to login as the HSM Partition Owner or Crypto Officer. It is needed for Partition maintenance, creation, and destruction of key objects, etc. The local portion of the login is necessary to permit remote Client (or Crypto User) access to the partition. A PED Key Challenge (an additional, optional password typed in LunaCM) can be added. Black User PED Keys can be duplicated and shared among HSM Partitions using the “Group PED Key” option. It is recommended that a quorum of 3/7 be used with Black keys.

Crypto User

The Crypto User has restricted read-only administrative access to application partition objects. The challenge secret generated by the Crypto User can grant client applications restricted, sign-verify access to partition objects. It is recommended that a quorum of 3/7 be used with Gray keys.

Key Cloning Vector (KCV) or Domain ID key

This PED Key carries the domain identifier for any group of HSMs for which key-cloning/backup is used. The red PED Key is created/imprinted upon HSM initialization. Another is created/imprinted with each HSM Partition. A cloning domain key carries the domain (via PED) to other HSMs or HSM partitions to be initialized with the same domain, thus permitting backup and restore among (only) those containers and tokens. The red Domain PED Key receives a domain identifier the first time it is used, at which time a random domain is generated by the HSM and sent to both the red Domain key and the current HSM Partition. Once imprinted, that domain identifier is intended to be permanent on the red Domain PED Key – and on any HSM Partitions or tokens that share its domain. Any future operations with that red Domain PED Key shall copy that domain onto future HSM Partitions or backup tokens (via PED) so that they are able to participate in backup and restore operations. Red PED Keys can be duplicated for backup or multiple key copies. It is recommended that a quorum of 3/7 be used with Red keys.

Remote PED

The orange Remote PED Key (RPK) carries the Remote PED Vector (RPV), which allows a PED connected to a remote workstation to authenticate to the HSM over the network rather than through a direct local connection.

Audit Key

Audit is an HSM role that takes care of audit logging under independent control. The Audit role is initialized and imprints a white PED Key without the need for the SO or another role. The Auditor configures and maintains the audit logging feature, determining what HSM activity is logged and other logging parameters, such as the rollover period. The purpose of the separate Audit role is to satisfy certain security requirements while ensuring that no one else, including the HSM SO, can modify the logs or hide any actions on the HSM. The Audit role is optional until initialized.

PED Key Management Best Practices

Number of off-site full sets

Does the organization intend to use common authentication for many Luna HSMs? There is no limit: the authentication secret on a single blue SO PED Key, for example, can be used with as many HSMs as desired. However, if the organization wishes to limit the risk of compromising a common blue PED Key, it will need groups of HSMs with a distinct blue PED Key for each group. Each time the organization initializes an HSM, the PED allows it to “Reuse an existing keyset” — making the current HSM part of an existing group that is unlocked by an already-imprinted PED Key (or an already-imprinted M of N keyset) — or to use a fresh, unique secret generated by the current HSM.

Number of HSMs per group

That will tell the organization the number of groups and how many different blue PED Keys they need. Now double that number, at least, to allow off-premises backup copies to be kept in secure storage if one is lost or damaged. In most cases, the contents of an HSM are of some value, so at least one backup per blue PED Key must exist. If the organization has only one blue PED Key for a group of HSMs, and that PED Key is lost or damaged, the HSMs of that group must be re-initialized (all contents lost) and a new blue PED Key imprinted.

One for One

The organization might prefer a separate blue SO PED Key containing a distinct/unique Security Officer authentication secret for each HSM in their system. No single blue PED Key can unlock more than one HSM in that scenario. The number of blue keys they need is the number of HSMs. Now double that number to have at least one backup of each blue key.

Many for One or M of N (Recommended)

Does the organization’s security policy allow them to trust its personnel? Perhaps the organization wishes to spread the responsibility – and reduce the possibility of unilateral action – by splitting the SO authentication secret and invoking multi-person authentication. Choose the M of N option so that no single blue PED Key is sufficient to unlock an HSM. Two or more blue PED Keys (their choice, up to a maximum of 16 splits of each SO secret) would be needed to access each HSM. Distribute each split to a different person, ensuring that no one person can unlock the HSM.

Partition BLACK PED Keys

Each HSM can have multiple partitions. The number depends upon their operational requirement and the number the organization purchased, up to the product maximum per unit per HSM. Each partition requires authentication – a black PED Key.

The organization has all the same options for the blue SO PED Key(s) – the organization shall have at least one backup per primary black PED Key. The organization might have multiple partitions with a unique authentication secret; therefore, each would have a unique PED Key. Or, the organization might elect to group their partitions under common ownership so that groups of partitions (on one or more HSMs) might share black PED Keys.

As with the SO secret, the organization can also elect to split the partition black PED Key secret by invoking the M of N option (when prompted by the PED for “M value” and “N value” – those prompts do not appear if the organization chose to “Reuse an existing keyset” at the beginning of the partition creation operation).

Domain RED PED Keys

Each HSM has a domain. Each HSM partition has a domain. That domain is carried on a red PED Key and must be shared with another HSM if the organization wishes to clone the HSM content from one to another, for example, when making a backup.

Domains must match across partitions for the organization to clone or back up their partitions or assemble HSM partitions into an HA group.

For the red PED Keys, the organization can make arrangements regarding uniqueness, grouping, M of N (or not), etc.

Other PED Keys

The organization might have orange PED Keys if they are using the Remote PED option (orange Remote PED Keys (RPK) containing the Remote PED Vector (RPV)). The organization might have white PED Keys if they invoke the Audit role option (white Audit PED Keys containing the authentication for the Auditor, who controls the audit logging feature). The organization can invoke M of N, or not, as they choose, which affects the number of orange or white PED Keys that they must manage.

Orange Remote PED Keys and white Audit PED Keys can be shared/common among multiple HSMs and PED workstations, just like all other PED Key colors.

As with all other PED Key roles, any key (any color) can be overwritten with a new secret. A warning is given if a key is not blank, but the organization can choose to proceed with the overwrite or stop. An “outdated” key in this context means a previously imprinted PED Key that has been made irrelevant, for example by re-initializing an HSM, deleting and re-creating a partition, or other activity that makes the secret on that key no longer relevant. PED Keys do not “age” and become invalid during their service life; only deliberate action on an HSM causes the secret on a PED Key to become invalid, so empty or outdated keys can safely be reused.

With all of the above in mind, it is not possible to suggest one “correct” number of PED Keys for their situation. It depends upon the choices that the organization makes at several stages. In all cases, we repeat the recommendation to have at least one backup in case a PED Key (any color) is lost or damaged.

To learn more about the Thales Luna HSM, visit Thales’ website: https://cpl.thalesgroup.com/encryption/hardware-security-modules


Data Loss Prevention

The What, When & How of Data Loss Prevention


Each year, more and more cyber attacks hit organizations big and small. Ransomware attacks, supply chain attacks, and entirely new types of attacks are created and used by threat actors to steal information and money. Without the proper safety precautions in place, even the biggest organizations have been affected, as we have seen in recent months and years. That is why so many organizations are focusing their efforts on cybersecurity tools and protection methods such as Data Loss Prevention, or DLP. As organizations increase the amount of data they store and transmit, these tools become ever more vital to protecting the organization.

Data Loss Prevention, or DLP, protects and monitors data-in-transit, data-at-rest, and data-in-use. It tracks the data anywhere it is stored in the organization, thus alerting the security team or teams to any use of the data. These tools and methods work with the encryption policies and standards in an organization to ensure that the users within the organization, as well as applications and third-party solutions, are abiding by the rules set forth in these policies and standards. DLP tools work by creating a centralized location for managing, tracking, and remediating the improper use of an organization’s information. By supporting the standards and policies of an organization, those accessing and using information can be monitored to ensure that no confidential data leaves the organization and is used for improper purposes.
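As a toy illustration of the detection side of DLP, a scanner over outbound text might look like the following. The patterns are illustrative assumptions only; real DLP products combine validated detectors (checksums, keywords, context, classification labels) rather than relying on bare regular expressions:

```python
import re

# Illustrative PII-like patterns; not production detection rules.
PII_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){12}\d{1,4}\b"),  # 13-16 digits
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def scan_outbound(text):
    """Return the PII-like matches found in outbound text, so a policy
    engine can block the transfer or alert the security team."""
    return {name: pattern.findall(text)
            for name, pattern in PII_PATTERNS.items()
            if pattern.search(text)}
```

A policy engine would sit on top of this: block the transfer, quarantine the file, or raise an alert depending on the classification of the matched data and the destination.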

Why an Organization Should Use DLP

There is more than one reason an organization should use DLP tools in its cybersecurity framework. Below are a few reasons to implement DLP safety measures in an organization:

  1. Some organizations do not know where all their data is stored and sent to.

    Many organizations do not have proper insight into where their data resides. Data discovery and classification should be the first step any organization takes to become cryptographically secure and to meet regulations from bodies like the National Institute of Standards and Technology (NIST). If this is done improperly, or not at all, data may go unnoticed or be classified incorrectly, allowing threat actors to take it for their own uses. Using tools like DLP, an organization can get a better view of the data it stores and the types of data involved, and can keep a closer eye on that data whether it is at rest, in transit, or in use.

  2. Most organizations need to maintain a certain level of security for state and country regulations.

    As mentioned previously, many regulations and standards exist in certain countries and states that detail how an organization stores and otherwise protects their data cryptographically. These regulations come from a number of different bodies, including the NIST, and they have different names, such as the Health Insurance Portability and Accountability Act, or HIPAA. The standards and regulations employed by these bodies focus on protecting customers’ Personally Identifiable Information, or PII. These standards are vital, as this information being stolen could cause a customer to lose their identity, their money, or their livelihood. Using tools like DLP, data can be tracked and protected to the levels that standards and regulations require.

  3. Outside threats are considered, but insider threats often are not.

    Most organizations are on alert for outside threats, such as lone-wolf hackers or hacker groups, but many fail to keep an eye on insider threats. DLP helps organizations watch how data is accessed and transmitted, especially by employees. Keeping track of who accesses data, when, and how that data is used is the basis of what DLP does, and it is why DLP products are recommended for many organizations' environments.

  4. An audit will be occurring in the near future.

    Although DLP should be an early step taken by organizations, some will put DLP implementations into place due to an audit occurring in the near future. Failed audits can lose organizations money, reputation, and compliance status if the proper encryption and cryptography steps are not in place. DLP goes a long way toward making organizations compliant with standards and regulations. They can use DLP to ensure proper encryption and cryptography practices are being followed and find out where they are lacking in security. This can lead to better practices being put in place across an organization, thus reaching compliance and passing an audit successfully.

  5. The organization may want to defend against threat actors before they appear, as opposed to fixing a problem after a data breach occurs.

    Many organizations focus on dealing with cyber attacks after they have already occurred. What organizations should do instead is put mitigating measures in place before a threat materializes. This is the preferred approach: protecting sensitive customer data before any threat actor can get near it helps keep the data from being stolen in the first place and deters attackers from going after that information. DLP is a great first line of defense, as Data Loss Prevention tools help track information and identify gaps in security.

  6. Automation of data management and tracking is a high priority for many organizations.

    Organizations tend to begin their security systems with manual security processes. This means data is tracked and identified by personnel and teams within the organization manually when they choose to do it. Instead, automated processes can be used, which will automatically check and track data and users in the organization’s environment. DLP is an example of a tool many organizations use to automatically track data and users who use and transmit that data. This is why so many companies use Data Loss Prevention to create a strong cyber security presence.

  7. An organization may work with a number of outside organizations that can access their systems.

    Companies tend to work with a number of outside providers and customers, with a handful of those providers having access to the network and infrastructure provided by the company. If these users aren’t properly tracked and their access and use of data aren’t monitored, then data could be stolen or misused. DLP keeps an automated eye on data in use, data at rest, and data in motion, so anyone with access to the network will be noted if they use PII data. Another method organizations use to protect their data is to ensure only those people who need the data have access to it, and only for approved purposes. This is what is known as Enterprise Workflow Management. Approval is required to use data, and those requests for data are tracked.

Types of DLP Tools and Platforms

When talking about DLP, Data Loss Prevention comes in three types: Network DLP, Cloud DLP, and Endpoint DLP. Network DLP is the type discussed most so far. It deals with data moving inside the company, setting up a defensive perimeter to track and monitor that data. The idea is that when an attempt is made to send data out, via email or any other method, automated actions take place, such as encrypting, blocking, or auditing the data transfer. These actions can be configured within the organization ahead of time. Additionally, an alert will usually notify administrators if sensitive data is attempting to leave the organization when it shouldn't.
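The content inspection at the heart of such a rule can be sketched in a few lines of Python. The patterns and the `inspect` helper below are purely illustrative; real DLP products combine many detectors (pattern sets, document fingerprints, machine learning) rather than two regular expressions:

```python
import re

# Two toy detectors for data that looks sensitive. Real DLP rules use far
# richer pattern sets plus validation (e.g. Luhn checks for card numbers).
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def inspect(message: str) -> list[str]:
    """Return the names of the sensitive-data patterns found in a message."""
    return [name for name, rx in PATTERNS.items() if rx.search(message)]

outbound = "Customer SSN is 123-45-6789, card 4111 1111 1111 1111."
findings = inspect(outbound)
if findings:
    # At this point a real network DLP system would encrypt, block,
    # or audit the transfer and alert an administrator.
    print("ALERT:", findings)   # → ALERT: ['ssn', 'credit_card']
```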

Endpoint DLP is more complicated to manage than network DLP, but it is usually considered stronger. Endpoint DLP focuses on the devices that are part of the network, as opposed to the network itself. Each device that uses the network has endpoint DLP installed on it, tracking the data in motion and the data at rest on that device. Endpoint DLP tools can also detect when data is stored on the device unencrypted but should be encrypted. As can be seen, installing and managing endpoint DLP on every device in a network is complicated and, done manually, would take many hours to complete and keep up with. The final type is cloud DLP, which is set up with specific cloud accounts to enforce DLP rules and policies. Cloud tools, such as Office 365, integrate with cloud DLP tools to ensure these policies are met.


Having proper cyber security tools and platforms in place is extremely important to the safety of a company. Using DLP, any organization can get ahead of threat actors, whether they are inside or outside the organization. Protecting sensitive customer and organizational data is vital in any company, especially banks and health organizations. At Encryption Consulting, we make cyber security our highest priority. We work with organizations to create the most secure environment possible using methods such as DLP, Public Key Infrastructure (PKI), and encryption assessments. We provide assessment, implementation, and development services for PKI, encryption, and Hardware Security Modules (HSMs). If you have any questions, visit our website at www.encryptionconsulting.com.


Read time: 7 minutes

In an age where quantum computing is more than just a theory, organizations like the National Institute of Standards and Technology (NIST) are looking for ways to standardize post-quantum cryptography algorithms. The NIST creates compliance standards, best practices, and regulations for cyber security. It works to provide a standardized framework for different encryption algorithms and methods to ensure the best possible security is in place within organizations. Quantum computing can be an asset in the coming years, but it can also damage the security of organizations that are not prepared. That is why the NIST has turned its sights to post-quantum cryptography standardization with its Post-Quantum Cryptography (PQC) standardization project.

What is the PQC standardization project?

As previously mentioned, the NIST sets standards and best practices for cyber security that it suggests organizations follow. Quantum computing has the potential to cause large issues in the cyber security community, as it will make the majority of today's cryptographic algorithms obsolete. The reason is that, with a powerful enough quantum computer, public-key algorithms that would take classical computers many years to crack could be broken in days or hours. This is the biggest reason the NIST began the PQC standardization project. The idea behind the project is to prepare organizations for quantum computers before they become a real threat, allowing companies to have the proper encryption algorithms in place throughout the organization so that, once practical quantum computing arrives, these attacks can be defended against. The algorithms the PQC standardization project is working to standardize are quantum-safe algorithms. A quantum-safe algorithm is resistant to attacks from both classical computers, the types of computers we use today, and quantum computers. This keeps private information stored on devices or in transit as secure as possible, since even a quantum computer will not be able to break a quantum-safe algorithm within hours or days.

Determining the most quantum-safe algorithms

Many times in the past, the NIST has run projects like the PQC standardization project, in which a number of algorithms are submitted and evaluated to determine which best meet the criteria to become the standard for that type of cryptography. At the time of writing, the NIST has just completed its third round of selection for cryptographic algorithms. The finalists and alternates are as follows:

  1. Key-Encapsulation Mechanism (KEM) Algorithms: Kyber, NTRU, SABER, and Classic McEliece
  2. Digital Signature Algorithms: Dilithium, Falcon, and Rainbow
  3. Alternate KEM Algorithms: BIKE, FrodoKEM, HQC, NTRU Prime, and SIKE
  4. Alternate Digital Signature Algorithms: GeMSS, Picnic, and SPHINCS+

Once the current round ends, one or two of the KEM algorithms and one or two of the Digital Signature algorithms will be selected as quantum-resistant algorithms strong enough for standardization across the cyber security landscape. After completing the third round, NIST mathematicians and researchers will continue to look at other algorithms and newly emerging algorithms to see if they are powerful enough to be considered a part of the standardized group of quantum-resistant algorithms.

How can Organizations Prepare for the Future?

Although the NIST has not yet released its list of recommended quantum-resistant cryptography algorithms, organizations can begin preparing themselves for quantum computers now. The following are a few different ways organizations can prepare for the future:

  1. Quantum Risk Assessment

    Performing a quantum risk assessment for your organization will give the security teams within your organization a good idea of where gaps exist in relation to quantum computing. A quantum risk assessment also helps create a list of applications that will be affected by the creation of quantum computers, thus providing the organization with a detailed list of applications that must be updated when moving to quantum-resistant algorithms. This will also help with the next step, identifying at-risk data.

  2. Identify at-risk data

    Identifying an organization’s data at risk is extremely important, even just relating to cyber security in general. Having data classification and identification systems in place in an organization is vital to keep track of data and ensure it is properly protected.

  3. Use cryptographically agile solutions

    The NIST has indicated that the use of crypto-agile solutions is a great way to begin the process of moving toward having quantum-safe security in place. Crypto-agility is the ability to switch between algorithms, primitives, and other encryption mechanisms without causing significant issues in the organization’s infrastructure.

  4. Develop an understanding of quantum computing and its risks

    By training employees on what to look out for in the future of quantum computing, and methods of becoming quantum-resistant, they will have a mindset that is already prepared for the post-quantum age.

  5. Track the NIST’s PQC Standardization Project

    By keeping track of the PQC Standardization Project, an organization can keep up to date on any changes to the quantum-resistant algorithms in the running and change to the selected algorithms when the time is right.
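The crypto-agility idea from step 3 can be illustrated with a short sketch. The `APPROVED` policy list and `digest` helper below are hypothetical, and hash algorithms stand in for whatever primitives an organization actually uses, but the shape is the point: callers name an algorithm through one choke point, so replacing a primitive with a quantum-safe one later is a policy change rather than a code rewrite:

```python
import hashlib

# Hypothetical approved-algorithm policy; in a real deployment this would be
# driven by configuration and updated as NIST finalizes PQC algorithms.
APPROVED = {"sha256", "sha3_256", "blake2b"}

def digest(data: bytes, algorithm: str = "sha256") -> str:
    """Hash data with an approved algorithm -- the single choke point callers use."""
    if algorithm not in APPROVED:
        raise ValueError(f"{algorithm!r} is not an approved algorithm")
    return hashlib.new(algorithm, data).hexdigest()

# Swapping primitives is a one-line change at the policy level; the call
# sites below do not change.
print(digest(b"payload"))              # current default
print(digest(b"payload", "sha3_256"))  # migrated algorithm, same interface
```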



Self-Signed Certificates: What are they and why should you use them?

Read time: 6 minutes

In a time when keeping data and users secure on the Internet is so important, digital certificates have been used for a number of different purposes. Digital certificates are used for authentication, secure Internet connections, code signing, and more. This allows data-in-transit or data-at-rest to be protected from outside attackers. There are a number of different certificate types, but we will be taking a look at self-signed certificates today. The first question we have to answer is what exactly is a self-signed certificate?

What is a Self-Signed Certificate?

A self-signed certificate is a certificate signed by the same user or device that uses it, rather than by a trusted Certificate Authority. This works for code signed for internal use, or for an application used by its creator, but not for software distributed to external users. If an application or piece of software is being sent to external customers, another type of certificate must be used, because a self-signed certificate cannot be verified when an external user attempts to run the software associated with it.

When a user runs an application or piece of software, the certificate that signed that code is authenticated by the Operating System. You have likely seen that a pop-up occurs when a new software or application is downloaded and run on your computer. This pop-up checks the identity of the publisher of the certificate associated with the software. Additionally, the certification path is checked in the certificate as well. All of these details are checked to authenticate the publisher of the software. If the identity of the publisher cannot be properly verified, then the software would likely be deemed unsafe to run by the user.

Now, when using a self-signed certificate to sign software and applications, only the device which signed the certificate will be able to authenticate it. This is why self-signed certificates are not generally used for external software or applications. Instead, software publishers will use a Public Key Infrastructure with an external, and well-known, Root Certificate Authority which creates Certificate Authority generated (CA-generated) certificates. This allows the publisher to generate code signing certificates that can be recognized by any Operating System, thus authenticating the software sent to a customer and allowing them to trust that software or application enough to be downloaded and used on their device.

Pros and Cons

When dealing with self-signed certificates, there are many pros and cons that should be looked at before creating these types of certificates. Some of these we have already touched on, and others we will take a look at now.

• Self-signed certificates are very easy to create compared to other methods of certificate creation. There is no in-depth process: just the generation of a key pair and then the creation of the certificate itself. Other processes may require multiple steps and access to a Public Key Infrastructure (PKI).
• Self-signed certificates are best suited to test environments or applications that only need to be recognized privately. Applications used only within the organization that created them are the main users of self-signed certificates.
• With self-signed certificates there is no reliance on another organization for certificates to be issued or keys to be protected. Obtaining publicly verifiable certificates from a well-known PKI can take considerable time, causing hold-ups in publishing and distributing software.
• The most obvious issue with using a self-signed certificate for code signing is that the certificate will not be trusted outside the organization. A self-signed certificate is only trusted by the device it was created on, or by devices that have explicitly chosen to trust it.
• Self-signed certificates are not easily tracked within an organization. This causes a multitude of issues, especially if a self-signed certificate is compromised. Because self-signed certificates can be created at any time from any device, a compromise may go unnoticed for a long period, allowing a threat actor to misuse the certificate to suit their needs.
• Self-signed certificates can't be revoked. CA-generated certificates can be invalidated through Certificate Revocation Lists (CRLs) or the Online Certificate Status Protocol (OCSP), but no equivalent mechanism exists for a self-signed certificate.
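The ease of creation mentioned above can be seen concretely with OpenSSL, which can generate a key pair and a self-signed certificate in a single command (the file names and subject fields here are placeholders):

```shell
# Generate an RSA key pair and a certificate signed with its own private key,
# valid for one year; -nodes leaves the key unencrypted for simplicity.
openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
  -keyout selfsigned.key -out selfsigned.crt \
  -subj "/CN=internal.example.com/O=Example Org"

# For a self-signed certificate the subject and issuer are identical:
openssl x509 -in selfsigned.crt -noout -subject -issuer
```

Note that nothing here involves a Certificate Authority, which is exactly why no external party will trust the result.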


There are a number of different types of certificates available to users for purposes ranging from authentication and communication to code signing. These certificates tend to fall into two categories: self-signed and CA-generated. As previously mentioned, self-signed certificates are mainly used in test environments or for software and applications used exclusively within an organization. For this reason, we at Encryption Consulting suggest that organizations looking to use certificates for code signing set up a PKI of their own or work with a public PKI for code signing certificates. To learn how Encryption Consulting can help your organization create a strong and secure Public Key Infrastructure for use within your organization, visit our website at www.encryptionconsulting.com.


Read time: 6 minutes

HTTP and HTTPS are seen every day when using the Internet, whether you are in the cybersecurity field or not. You have likely seen a URL that looks like this:

https://www.google.com or http://www.fakewebsite.com.

These are vital parts of how searching a URL on the Internet works, but not everyone knows how HTTP and HTTPS work. So what are HTTP and HTTPS, and what is the difference between the two?

What is HTTP?

HTTP, or Hypertext Transfer Protocol, transfers data across a network. Data is put into a specified format and syntax to ensure it can be read and transferred correctly. HTTP operates as a request-response protocol. An HTTP request is made when a hyperlink is clicked or a website URL is entered into the browser; the request uses one of the HTTP methods (such as GET or POST) to retrieve information from, or send information to, a web page. The web server, in turn, returns an HTTP response, which gives the user access to the desired web page.

The majority of web pages do not use HTTP but instead use HTTPS because HTTP is not a secure way to transfer data across a network.

What is HTTPS?

HTTPS, or Hypertext Transfer Protocol Secure, is the more secure way to transfer data between a web browser and a web server, which is why most websites use it. HTTPS utilizes a TLS/SSL connection to securely transfer data between your web browser and the web page's server.

Requests and responses sent with HTTPS are encrypted, so a Man-in-the-Middle attack that intercepts them yields only unreadable ciphertext. HTTPS uses both asymmetric and symmetric encryption. The web server generates a public and private key pair, and the public key is distributed in an SSL/TLS certificate; the private key, as the name suggests, is kept private to the web server. When an HTTPS connection is made, the client and server complete a TLS handshake. During the handshake, the client sends key material encrypted with the server's public key, which only the server can decrypt with its private key; from this material, both sides derive a shared symmetric session key. All subsequent messages are encrypted with the session key, keeping them confidential in transit, and the handshake authenticates the server, since only the holder of the private key mathematically linked to the certificate's public key could have completed it.
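On the client side, Python's standard `ssl` module shows the verification settings a browser-like HTTPS client applies before this handshake (a minimal sketch; no network connection is made):

```python
import ssl

# Build a client-side TLS context the way an HTTPS client would.
context = ssl.create_default_context()

# The defaults enforce the authentication described above: the server's
# certificate chain must verify, and its hostname must match the certificate.
print(context.verify_mode == ssl.CERT_REQUIRED)  # → True
print(context.check_hostname)                    # → True
print(context.minimum_version)                   # lowest TLS version accepted
```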

Comparing HTTP and HTTPS

Now that we know what HTTP and HTTPS are, let us look at the differences and similarities between the two.

  1. HTTP is insecure, whereas HTTPS is secure

    As discussed in the HTTPS section, HTTPS is secure because the data transferred over the network is encrypted. Additionally, it requires the web server to present a valid TLS/SSL certificate, which identifies the server and authenticates the messages it sends (client certificates can optionally be required as well). HTTP, on the other hand, sends messages unencrypted. This means attacks such as Man-in-the-Middle attacks can succeed, allowing the attacker to capture the information transferred to the server, which could include credit card information or other Personally Identifiable Information (PII).

  2. Data sent via ports

    With HTTP, data is sent via port 80, which allows unencrypted data to be sent to requestors. HTTPS instead uses port 443, which allows encrypted communications to occur.

  3. OSI Layers and URLs

    One final difference between HTTP and HTTPS is where they sit in the OSI model and how their URLs are structured. The Open Systems Interconnection (OSI) model describes the seven layers through which computers communicate.

    The seven layers are:

    • The Application Layer
    • The Presentation Layer
    • The Session Layer
    • The Transport Layer
    • The Network Layer
    • The Data Link Layer
    • The Physical Layer

HTTP operates in the Application Layer. HTTPS is also an Application Layer protocol, but the TLS/SSL encryption it adds operates between the Transport and Application Layers.

URLs with HTTP start with http://, and the browser shows no padlock (often a "Not secure" warning) next to the URL. Because HTTPS is secure, HTTPS URLs start with https:// and show a locked padlock next to the URL.
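The default ports from point 2 are baked into client libraries as well; Python's standard `http.client` module exposes them as constants:

```python
import http.client

# Well-known default ports used when a URL does not specify one explicitly.
print(http.client.HTTP_PORT)   # → 80
print(http.client.HTTPS_PORT)  # → 443
```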


Utilizing encryption and digital certificates is important for both connections across the Internet as well as within an organization’s internal network. Security systems like Public Key Infrastructures (PKIs) provide users and devices in an organization with certificates to identify them and allow encryption of messages. To learn how Encryption Consulting can help you with setting up a PKI within your organization, visit our website at www.encryptionconsulting.com.



Why Does Every Organization Need Artificial Intelligence?

Read time: 6 minutes

As humans learn more about Artificial Intelligence (AI) and develop what it can do, more and more organizations are implementing AI into their processes. Increased investment in AI development has helped make it affordable for most, if not all, organizations. Companies are learning that now is the perfect time to apply AI to their repetitive processes, creating a level of automation that increases productivity and allows individuals within the organization to focus on more detailed work that AI cannot do. But before we discuss how to implement AI in your organization, we should first look at the steps a company should take before beginning with AI.

Steps an Organization Should Take Before Implementing AI

  1. The first step any organization should take before implementing AI into its processes is to get familiar with Artificial Intelligence. Learning more about AI will help in later steps of the implementation, and will help you determine the places in your organization that will benefit the most from an AI implementation. There are many free resources for learning about AI, including YouTube videos, university lectures, and open-source libraries and kits that will help you develop a better understanding of what goes into implementing an AI solution.
  2. Once you've familiarized yourself with the basics of AI, you should identify the problems you want Artificial Intelligence to solve and how much that may cost. AI can be implemented in an organization's existing products and services, or it can be something as simple as a chatbot on the main website. If your organization is looking for something simple, a chatbot is a good first step. Chatbots are easy to set up, and they take care of users asking many of the same repetitive questions. This frees up your support teams to focus on other projects; they are only needed when the chatbot does not already have an answer.
  3. After identifying current processes that would benefit from AI, or discovering gaps that AI would fill, it is time to start designing your solution. For the first few AI projects, it is best to use both external and internal teams: experts who already know how to implement an AI solution, and internal team members who can learn for future AI projects. Starting small with your first project is important, as you don't want to take on too much at once with an AI implementation. A simple month-long project can turn into a six-month project without careful planning. Now that you know some of the steps an organization should take before implementing Artificial Intelligence, let us discuss the different models of AI implementation that an organization may adopt.

AI Implementation Models

There are three different implementation models an organization can choose from when deciding to bring AI into the organization.

  1. The first model is the "hub" model. The "hub" model, as the name suggests, concentrates all AI and analytics systems in a central hub. A central hub for Artificial Intelligence is well suited to deploying new AI systems, as it provides a fully centralized team to handle every step of the implementation. A "hub" model should be developed gradually over time, as building such a large unit of the business all at once would be very complicated and time-consuming. In the "hub" model, the systems and teams involved in AI sit in a centralized location, loaning out their expertise to the different business units whenever necessary. The development of the hub should be driven by the AI tasks the organization has determined it needs, allowing the hub to grow slowly over time rather than all at once.
  2. Another AI implementation model is the "spoke" model. This model is the opposite of the "hub" model, spreading the AI team members and systems across the company's different business units. It gives each business unit a support team on deck for any AI tools implemented in its section of the business, and it allows each business unit to develop its own AI tools and systems for its specific use, as opposed to deploying them organization-wide.
  3. The final model is a hybrid of the two, called the "hub-and-spoke" model, which combines components of both. The central hub handles a small set of responsibilities, with the AI team lead at the center, while the spokes work within the different departments to create business-unit-specific tools. The spokes focus on execution-team oversight, adoption of AI solutions, and performance tracking, while the hub handles hiring AI team members, performance management, and AI governance.

Ways of Using AI within your Organization

There are several different ways to use Artificial Intelligence within your organization that are simple to implement and don’t involve high costs. A centralized knowledge center is a great initial way to start using AI in your organization. Having a central knowledge base offers users the ability to quickly find and parse through documents relating to their questions, without having to use the time of an employee to answer the same questions over and over.

Like the centralized knowledge center, you can also set up an automated live chat, such as a chatbot, that will answer questions for users. Additionally, you can integrate with popular applications, like Salesforce or Jira, and automate processes within them, allowing employees to save time and increase their productivity.
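A minimal sketch of the FAQ-style chatbot described above, using simple keyword overlap against a hypothetical knowledge base (commercial chatbot platforms use far more capable language models, but the routing logic is the same idea):

```python
# Hypothetical FAQ knowledge base mapping keyword phrases to canned answers.
FAQ = {
    "reset password": "Use the 'Forgot password' link on the login page.",
    "business hours": "Support is available 9am-5pm, Monday through Friday.",
    "contact sales": "Email sales@example.com to reach the sales team.",
}

def answer(question: str) -> str:
    """Return the FAQ answer whose keywords best overlap the question."""
    words = {w.strip("?!.,") for w in question.lower().split()}
    best_key, best_overlap = None, 0
    for key in FAQ:
        overlap = len(words & set(key.split()))
        if overlap > best_overlap:
            best_key, best_overlap = key, overlap
    if best_key is None:
        # Nothing matched: escalate to a person instead of guessing.
        return "I don't know that one -- routing you to a human agent."
    return FAQ[best_key]

print(answer("How do I reset my password?"))
# → Use the 'Forgot password' link on the login page.
```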


Read time: 5 minutes

Microsoft Azure is one of the three biggest Cloud Service Providers used by organizations today; the other two most widely used are Amazon Web Services (AWS) and Google Cloud Platform (GCP). Many companies are now moving their services to a partially or fully cloud-based platform like Azure or AWS. The reason is the large number of managed services these Cloud Service Providers offer, as well as the more usable and accessible options available for web servers and similar workloads.

Recently, many healthcare providers have been moving specifically to Microsoft Azure. They are doing this because Azure has been upgrading its security systems to help these providers become HIPAA compliant, among other compliance standards it is targeting.

What does being Compliant Mean?

When talking about compliance, each organization has different standards and practices it must conform to. These cyber security compliance standards are written by organizations that specialize in online security and know what types of protection should be in place for specific types of organizations. The standards outline the practices that should be in place, at a minimum, to be considered fully compliant. Not all organizations follow the same standards, either.

There are general cyber security standards, such as the NIST Cybersecurity Framework (CSF), which focuses on critical infrastructure, and compliance standards for organizations in specific countries, but there are also compliance standards for certain types of organizations. The organizations that tend to have their own set of standards are banks, or companies holding customer banking and credit card information, and healthcare companies. Some of the best-known healthcare standards are the Health Insurance Portability and Accountability Act (HIPAA) and the Health Information Technology for Economic and Clinical Health (HITECH) Act. These are vital for a healthcare organization to follow to maintain proper security within its environment. If a healthcare organization is found not to be following these and other standards, it will face legal action and likely have to pay thousands of dollars in fines.

How does an Organization become Compliant?

An organization can become compliant in a number of different ways. Following these standards to the letter, and ensuring at least the minimum level of security they outline is in place, is the most crucial step.

Organizations can also follow cyber security best practices, like those outlined in NIST SP 800-30 and other NIST recommendations, to better harden their security and ensure compliance. Additionally, security audits of an organization’s cyber security framework should be completed at least annually. This helps ensure any updates to security standards are being followed; if they are not, the gap can be noted and remedied in the audit. There are also a number of security tools in platforms like Microsoft Azure that help organizations maintain compliance with much less manual work. We will touch on this in the next section of this blog.

What is Microsoft Azure doing to help with compliance?

Microsoft has worked to ensure that its databases, as well as every other part of its cloud platform, can help a healthcare organization reach and stay in compliance with every healthcare standard it must follow. Using the Azure Security Center, users can keep track of the different cloud systems in use and ensure they meet the necessary compliance standards. The Security Center keeps the organization up to date on the status of its compliance within Microsoft Azure, and also allows Azure to recommend changes to current practices to further comply with standards such as HIPAA.

Microsoft also takes care of the deployment and maintenance of systems within Azure, removing that hassle and manpower burden from the organization. Azure also offers the ability to have third-party audits performed on the systems in place to check for proper compliance. This allows security audits to happen quickly and easily, giving organizations the ability to stay current on security standards year-round.

Organizations can also download compliance documentation via Microsoft Azure, further speeding up the audit process and providing easy access to the documentation for new hires. There are also different tools available in Microsoft Azure to use for compliance purposes. Azure Blueprints is a service that offers the ability to create frameworks for services developers are creating. These frameworks can be created by upper-level management, and pre-loaded into Azure Blueprints for developer use. Since a high-ranking member of the organization has created this framework, the developers using that framework know that it is approved for use where necessary in the organization.

Azure Policy acts similarly to Azure Blueprints, but it deals with policy and governance instead. By setting business rules and policy definitions within Azure Policy, a user can ensure that compliance standards are being met. Azure Policy evaluates resources in Azure by comparing their properties to the business rules set out in the policy definitions.
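The property-comparison idea behind a policy engine can be sketched simply. This is an illustrative sketch only; the rule format and sample resources below are simplified, hypothetical examples, not the actual Azure Policy definition schema.

```python
# Illustrative sketch of how a policy engine evaluates resources against
# business rules, in the spirit of Azure Policy. The "require"/"equals"
# rule format and the sample resources are hypothetical simplifications,
# not the real Azure Policy JSON schema.

def evaluate(resource, rules):
    """Return the names of every rule the resource violates."""
    violations = []
    for rule in rules:
        prop = rule["require"]      # property the rule inspects
        expected = rule["equals"]   # value the property must have
        if resource.get(prop) != expected:
            violations.append(rule["name"])
    return violations

rules = [
    {"name": "storage-must-be-encrypted", "require": "encryption", "equals": "enabled"},
    {"name": "allowed-region", "require": "region", "equals": "eastus"},
]

compliant = {"name": "db1", "encryption": "enabled", "region": "eastus"}
noncompliant = {"name": "db2", "encryption": "disabled", "region": "eastus"}

print(evaluate(compliant, rules))     # → []
print(evaluate(noncompliant, rules))  # → ['storage-must-be-encrypted']
```

A real policy engine runs checks like this continuously over every resource, which is what lets it flag non-compliant assets as soon as they appear.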


Tools and services within Cloud Service Providers are a great way to maintain the integrity of your data in the Cloud. Azure Policy and Azure Blueprints work hand in hand to ensure that existing data and new data entering the Cloud are properly protected. As time goes on, I am certain the cyber security world will see more tools like these roll out, providing even more ways to ensure data security compliance. Another great way to ensure compliance within an organization is to have experts look over your systems and documentation.

At Encryption Consulting, we provide data security assessments to ensure that your security tools and methods are being used properly. Our team of experts will ensure that your Public Key Infrastructure, Hardware Security Modules, and data encryption in general are up to the proper compliance standards your organization requires. We can also help implement new data security practices if a company’s infrastructure seems to be lacking. Encryption Consulting can help organizations create new governance documentation as well. To inquire about the different services we offer, visit our website at www.encryptionconsulting.com.


Code Signing

Why Every Organization Needs To Follow Code Signing Best Practices

Read time: 5 minutes

Hardening the security of an organization is extremely important, as new techniques for infiltration are discovered often. Attacks can come from several different vectors, and one of the more common attacks executed today is the code signing attack. These attacks exploit several different weaknesses, but there are ways to harden security against them. By following code signing best practices, you can harden your organization’s security against these attacks.

Why You Should Follow Code Signing Best Practices

As many organizations know, some of the most prevalent types of attacks today are supply chain attacks. Supply chain attacks target organizations that provide software or tools to a number of smaller organizations. This allows threat actors to infect a tool or piece of software provided by a single organization, and in turn infect all the smaller organizations that use that tool. Recent examples of supply chain attacks in the news include the SolarWinds attack and the Kaseya attack.

Many of these supply chain attacks succeeded due to a lack of code signing best practices. All it takes is one small gap for attackers to exploit and infect thousands of customers. Code signing is a common attack vector for supply chain attacks because tools distributed to many different organizations must be updated regularly. As long as code signing is in place, these updates are known to come from a trusted source, meaning the organization that created the tool or software. Without code signing, anyone could send along an update that would infect each person who installed it, and this is exactly what happened in a number of supply chain attacks.

Code Signing in the Industry

Though code signing is not a new technology, as companies have used it for many years, gaps are still found in code signing techniques regularly. Though not related to code signing, a flaw was recently found in Log4j, a widely used Java logging library: the Log4j vulnerability, which had been in the code for years. This vulnerability, even though it was only recently discovered, sits inside a huge share of the Java software running on the Internet. The flaw sent much of the world’s companies scrambling to patch their systems, and many of these organizations will need to harden their security because of it and keep up to date on patches as they are released. This type of vulnerability is why it is so important to keep your systems updated with the best practices for code signing, as a similarly large flaw may be found in the future.

Top Code Signing Best Practices

Below are some of the top code signing best practices that any organization can use to harden their existing security system.

  • Utilization of Hardware Security Modules: One of the most important code signing best practices is using a Hardware Security Module, or HSM, to protect the private keys used for code signing certificates. Though software-based storage methods exist and are viable, HSMs are ultimately a much safer method of private key storage. HSMs use tamper-evident and tamper-resistant hardware to ensure that no one without proper access can use the keys held within. To get at the keys without the proper permissions, a threat actor would essentially need to physically steal the HSM from its server rack and then break into the device itself. The strength of this security is why HSMs are so highly recommended in code signing best practices.
  • Proper Access Control: Access to keys, and to HSMs in general, must be carefully controlled within an environment to ensure that unwanted users cannot use code signing for malicious ends. Users within the environment should have access only to the code signing processes and tools they absolutely need to complete their jobs. This method of access control is known as the Principle of Least Privilege. Least Privilege is followed by most organizations, as it is a great way to control access to keys, files, and data in general within a secure environment.
  • Use of a Secure Public Key Infrastructure: Another important part of a strong code signing environment is a strong and trusted Public Key Infrastructure. A Public Key Infrastructure (PKI) uses Certificate Authorities (CAs) to distribute certificates, whether for code signing, authentication, or other purposes, to users and devices within an organization. A PKI can be external, where a trusted secondary organization manages all the components of the PKI, or internal and run by the organization itself. When using an external PKI, the organization must ensure that the trusted external party runs a secure PKI. If using an internal PKI, the organization should ensure that it is properly set up, with every security detail in place. The average internal PKI uses a two-tier hierarchy, with an offline Root CA and an online Issuing CA; the Issuing CA is the one that actually distributes certificates to users and devices within the organization.
  • Proper Workflow Management: Another important part of code signing is ensuring that proper workflow management is in place. Workflow management means that every code signing activity is logged and requires approval from a secondary, trusted user. Logging of code signing activity is vital: when a code signing breach occurs, the organization can audit the trail of the breach and ensure that the gap found in the environment is promptly fixed. Approvals are also important, as they ensure that if an insider threat attempted to push malware through a properly signed update, a secondary user reviewing the code to be signed would notice it is not a proper update and stop the signing.
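The hash-then-sign flow that these practices protect can be sketched in a few lines. This is a toy illustration only: the tiny RSA key below is the classic textbook example and offers no real security, and production code signing uses an HSM-protected key, an X.509 certificate, and a vetted signing tool (such as SignTool or jarsigner) rather than hand-rolled math.

```python
import hashlib

# Toy sketch of hash-then-sign, the core of code signing. The tiny RSA
# key (p=61, q=53) is a textbook example and wildly insecure; it exists
# only to show the private-key/public-key roles in signing.

p, q = 61, 53
n = p * q       # public modulus (3233)
e = 17          # public exponent, shipped in the certificate
d = 2753        # private exponent: (e * d) % ((p - 1) * (q - 1)) == 1

def digest(code: bytes) -> int:
    """Hash the code, then reduce the digest to fit the toy modulus."""
    return int.from_bytes(hashlib.sha256(code).digest(), "big") % n

def sign(code: bytes) -> int:
    """Apply the private key to the hash digest to produce a signature."""
    return pow(digest(code), d, n)

def verify(code: bytes, signature: int) -> bool:
    """Recompute the hash and check it against the signature."""
    return pow(signature, e, n) == digest(code)

release = b"print('hello, customer')"
signature = sign(release)

print(verify(release, signature))              # → True: untampered code
print(verify(b"print('malware')", signature))  # tampered code fails verification
```

Because only the hash digest is signed, any change to the code changes the digest and breaks verification, which is exactly why a properly signed update can be trusted to come from its publisher.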


As you can tell, hardening security whenever possible is very important to the continued safety of an organization. Following best practices in all areas of computer security matters, as a major flaw like the Log4j vulnerability could be found at any time by any organization. Another great way to ensure an organization is following best practices is to monitor cybersecurity news and apply any patches or new methods of securing systems when necessary. To learn more about how to implement our code signing product, visit our website at www.encryptionconsulting.com.

Free Downloads

Datasheet of Code Signing Solution

Code signing is a process to confirm the authenticity and originality of digital information such as a piece of software code.


