Question

I inherited some projects in which secrets were in source control in App.config and similar files. Fortunately it's not a public repository so the risk isn't as serious as it could have been. I'm looking into better ways to manage this, such as Azure KeyVault. (But I don't intend this question to be specific to that solution.)

In general, aren't we just moving the problem? For example, with Azure KeyVault, the app ID and app secret become the things that you need to keep out of source control. Here's an unfortunately typical example of how this tends to go wrong. Other approaches end up being similar, with API keys or keystore files that you have to protect.

It seems to me that products like Azure KeyVault are no better, and pointlessly more complicated, than keeping your secrets in a separate config file and making sure it's in your .gitignore or equivalent. This file would have to be shared as needed via a side channel. Of course, people will probably insecurely email it to each other...

Is there a way to manage secrets that doesn't just move the problem? I believe this question has a single clearly defined answer. By analogy, if I were to ask how HTTPS doesn't just move the problem, the answer would be that CA keys are distributed with your OS, and we trust them because we trust the distribution method of the OS. (Whether we should is a separate question...)

Solution

You could say you are just moving the problem. Ultimately, there will have to be a secret stored somewhere that your app has access to in order to have passwords, ssh keys, whatever.

But, if done right, you are moving the problem from somewhere that is hard to secure properly to somewhere you can guard better. For example, putting secrets in a public GitHub repo is pretty much like leaving your house keys taped to your front door. Anybody who wants them won't have trouble finding them. But if you move to, say, a key store on an internal network with no outside connectivity, you stand a much better chance of keeping your passwords secure. That's more like keeping your keys in your pocket. It's not impossible to lose them (the equivalent of giving out your Azure password, for instance), but it limits your exposure vs. taping your keys to your door.
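To make the "keys in your pocket" picture concrete, here is a minimal sketch of what vault-backed secret retrieval looks like with the Azure SDK for Python. The vault URL and secret name are hypothetical, and the imports are kept inside the function so the sketch loads even without the SDK installed. Note that when the app runs with a managed identity, `DefaultAzureCredential` needs no app secret at all, which is exactly the point: the app holds no secret at rest.

```python
def fetch_db_password(vault_url: str = "https://my-vault.vault.azure.net") -> str:
    """Sketch: fetch a secret from Azure Key Vault at runtime.

    Vault URL and secret name are placeholders; imports are local so the
    module loads even where the Azure SDK is not installed.
    """
    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    # DefaultAzureCredential tries managed identity, environment variables,
    # and developer logins in turn -- no credential is hard-coded here.
    client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())
    return client.get_secret("db-password").value
```

The design point: the secret is fetched at runtime and lives only in process memory, so there is nothing to accidentally commit.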

OTHER TIPS

Secrets like encryption keys and credentials should not be checked into source control, for a few reasons. The first is obvious: encryption keys and credentials should always be on a need-to-know basis, and source control is not a reliable way to protect information from disclosure. The other reason is that secrets are usually (but not always) specific to a particular attribute of the environment your application runs in. (E.g. the private key used to create a digital signature for a web service authorization: if the endpoint of that web service is running in a QA environment, it will require a QA signature, not the production one.)

The correct way to treat an environmental (or global) secret is to treat it like any other environmental configuration attribute, with additional security controls for good measure. A well-designed, independent and versionable code module should be identical across environments, so that the environment it is deployed to informs the application of its attributes (e.g. database connection details, credentials, web service endpoints, file paths, etc.). The configuration details crucial to your application are then externalized and become configuration parameters of your environment.
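A minimal sketch of that externalization, reading configuration from the deployment environment rather than from a checked-in file. The variable names are hypothetical:

```python
import os


def load_config() -> dict:
    """Sketch: identical code in every environment; the environment supplies
    the attributes (connection details, credentials, endpoints)."""
    return {
        "db_url": os.environ["DATABASE_URL"],        # differs per environment
        "db_password": os.environ["DB_PASSWORD"],    # injected by the platform or a vault
        "service_endpoint": os.environ.get(
            "SERVICE_ENDPOINT",
            "https://qa.example.internal",           # hypothetical QA fallback
        ),
    }


# Demo: simulate what a QA deployment would inject.
os.environ.update({"DATABASE_URL": "postgres://qa-db/app", "DB_PASSWORD": "dummy"})
print(load_config()["db_url"])  # → postgres://qa-db/app
```

The same build artifact can now be promoted from QA to production untouched; only the injected environment changes.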

Now to address some of your arguments:

In general, aren't we just moving the problem?

There is no such thing as perfect security; however, "moving" the problem to a place where additional measures and controls can be applied increases the difficulty, and reduces the likelihood, of accidental or malicious disclosure of secrets. A good rule of thumb when designing a system that must protect confidential data is to always put controls in twos. What I mean by that is to ensure that, for an accidental or malicious disclosure of confidential information or a security incident to occur, two or more controls would have to fail.

A good example of this might be storing an encrypted file on a server, along with a secret decryption key, in another file, that I must keep confidential.

  • Store the key and the encrypted file on the same server (0 controls: anybody with access to the server can trivially acquire the confidential information).
  • Do the above, but restrict file access so both files are readable only by the application's runtime user on the OS (1 control: compromising the password of the root user or of the application runtime user lets an attacker acquire the confidential information).
  • Store the key in an external key vault, and release it only to the application, protected by multiple security measures such as IP address whitelisting and certificate authentication; the application then decrypts the file on its filesystem. (Multiple controls: several security controls must fail before the confidential data is compromised.)
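The layered options above can be sketched in a few lines. This is an illustration of the "controls in twos" pattern only: the file permissions are control one, and the key arriving from outside the filesystem (an environment variable standing in for a vault here) is control two. The XOR "cipher" is a deliberate placeholder so the sketch stays dependency-free; real code should use an authenticated cipher such as AES-GCM from a proper library.

```python
import base64
import os
import stat


def decrypt(path: str) -> bytes:
    """Sketch: refuse to proceed unless both controls are in place."""
    # Control 1: ciphertext must be readable by the app user only (0600).
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & 0o077:
        raise PermissionError("ciphertext must be readable by the app user only")
    # Control 2: the key never lives on this filesystem; it is injected
    # at runtime (a stand-in for fetching it from a key vault).
    key = os.environb[b"APP_DATA_KEY"]
    data = base64.b64decode(open(path, "rb").read())
    # Placeholder XOR, NOT real cryptography -- illustration only.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


# Demo: write a "ciphertext", lock it down, then decrypt.
os.environb[b"APP_DATA_KEY"] = b"not-a-real-key"
plaintext = b"db-password"
ct = bytes(b ^ b"not-a-real-key"[i % 14] for i, b in enumerate(plaintext))
with open("secret.enc", "wb") as f:
    f.write(base64.b64encode(ct))
os.chmod("secret.enc", 0o600)
print(decrypt("secret.enc").decode())  # → db-password
```

An attacker now needs both the server's filesystem access and the runtime key source; neither alone is enough.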

Again, there is no such thing as perfect security, but the goal of layering controls is that multiple failures must coincide before a disclosure can happen.

It seems to me that products like Azure KeyVault are no better, and pointlessly more complicated,

Complicated, yes; pointless is entirely subjective. We can't discuss the pointlessness of additional security without realistically accounting for how serious it would be for the confidential data to be exposed. If the confidential data could be used to send illicit wire transfers out of your financial institution, then something like a key vault is about the furthest thing from pointless there is.

than keeping your secrets in a separate config file and making sure it's in your .gitignore or equivalent.

That works until someone accidentally checks the file into source control anyway, and then the secret is embedded in the source control history forever.
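A quick demonstration of why that matters: deleting the file in a later commit does not remove it from history; the secret remains one `git log -p` away. (Throwaway repo; the key value is fake.)

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
echo "ApiKey=s3cr3t-value" > App.config
git add App.config
git -c user.name=demo -c user.email=demo@example.com commit -qm "add config"
# Someone notices and "removes" the secret:
git rm -q App.config
git -c user.name=demo -c user.email=demo@example.com commit -qm "remove secret"
# The working tree is clean, but the key is still in the history:
git log --all -p | grep -o "s3cr3t-value" | head -1
```

At that point the only real remedies are rotating the secret and rewriting history, both of which are far more work than keeping it out in the first place.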

Of course, people will probably insecurely email it to each other...

Security isn't just a technical issue; it is a people issue as well. That is off topic here, but at this point I feel you are trying to talk yourself out of doing what needs to be done.

Is there a way to manage secrets that doesn't just move the problem? I believe this question has a single clearly defined answer. By analogy, if I were to ask how HTTPS doesn't just move the problem, the answer would be that CA keys are distributed with your OS, and we trust them because we trust the distribution method of the OS.

Security doesn't always make problems go away; much of the time it puts controls around them. Your analogy is apt, because that is in fact exactly what public-private key cryptography does. We are "moving the problem" to the CA by placing unmitigated and complete trust in the CA to vouch for the identity of the entity owning the public cert. Basically, nothing short of a catastrophic string of failures (e.g. losing trust in the CA) would have to occur for a security issue to arise.

As with many things, security is a line you have to draw based on subjective criteria, the nature of the data, the consequences of disclosure, the risk appetite, budget, etc...

It's worth considering the git-secret project (http://git-secret.io/), which keeps secret files in the git repo encrypted with asymmetric keys. The public keys are also in the repo; the private keys are not.

Features of this approach:

  • when a new user or machine needs to decrypt the secured files, they publish or send their public GPG (GNU Privacy Guard, an OpenPGP implementation) key. A user who can already decrypt the secured files re-encrypts them with the new key added; after that, the new user can decrypt the secured files with their private key.
  • obviously, you need to control who can commit or push to the git repository. Otherwise an attacker could commit their own public key and wait until somebody authorized re-encrypts the secured files with that compromised key.
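The workflow described above looks roughly like this, per the git-secret documentation. The teammate's email address and filename are hypothetical, and the sketch guards for environments where the tool is not installed:

```shell
if command -v git-secret >/dev/null 2>&1; then
    git secret init                      # creates .gitsecret/ in the repo
    git secret tell dev@example.com      # add a teammate's public GPG key
    git secret add App.secrets.config    # register the file to protect
    git secret hide                      # encrypt -> App.secrets.config.secret
    git secret reveal                    # decrypt with your own private key
else
    echo "git-secret not installed; commands shown for illustration only"
fi
workflow_shown=yes
```

The plaintext file stays in `.gitignore`; only the encrypted `.secret` copy is committed, so an accidental `git add .` no longer leaks anything.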
Licensed under: CC-BY-SA with attribution