Question

At my company we are developing a large system comprising several servers. The system consists of about five logical components. Data is stored in XML files, MS SQL, and SQLite. It's a (mostly) .NET system; the components communicate using WCF and some custom UDP. Clients access the system mostly through the custom UDP or the web (ASP.NET & Silverlight).

Protecting the communication is easy: some SSL, some security on the WCF, and we're done.

The main problem we are facing is that the system needs to be deployed on a client's site, and we don't necessarily trust that client. We need to defend both the data on the servers and the software itself from reverse engineering. Both are crucially important to us.

We also need a kill switch: something that destroys the data and the software on command, or when the system is unable to call home for a certain period of time.

The direction I was thinking of is using a TPM or something similar (some hardware encryption solution), in combination with a service that we keep internally, to encrypt all the software and data on the servers, so that the keys come safely from our own server at our site, perhaps with memory curtaining from the TPM as well.

How do you suggest solving such a problem?


UPDATE 04/02: I'm looking for practical suggestions, or advice on products that could help, so I'm starting a bounty...

Look, guys, we're basically putting our machine on the client's site (for business and practicality reasons). We own that machine, the client receives everything he's paying for within hours, and he can do whatever he wants with that data. But the algorithms running on that machine, and some of the data stored there, are our trade secrets, which we want to protect. Ideally I would want the machine not to work at all, not even boot, unless I say it's OK, and without my OK everything on the machine should remain encrypted. Memory curtaining also looks like a nice way to protect the machine while it is executing.

Also, ideally I would want the HDs and the storage on all the machines to explode as soon as someone gets near them with a screwdriver... :-) but I think that would be taking it too far...


UPDATE 10/02: OK, after doing some research, I think we are going to try something in the same direction as the PS3 encryption system, except that we're going to bring in the keys for decrypting the software and the data from our servers. That way we can decide, on our machines, whether we trust the server requesting the keys, and we get a kill switch just by resetting the machine. This will probably be based on a TPM or something similar, maybe Intel's TXT... I'm also really interested in memory curtaining as an important security feature...

BTW, we can't solve this by moving the valuable parts of our system to our site, both because of business requirements and because it's not technologically feasible: we would need a huge amount of bandwidth...

Solution

What you're asking for, in effect, is the holy grail. This is roughly equivalent to what's done for game consoles, where you have a trusted platform running in an untrusted environment.

Consider whether or not you can treat the machine as compromised from day 1. If you can work under that assumption, then things become considerably easier for you, but that doesn't sound terribly viable here.

In terms of actually securing it, there are a few concerns:

  • You must encrypt the filesystem and use hardware decryption
  • You must isolate your applications from each other, so that security issues in one don't compromise others
  • You must plan for security issues to occur, which means putting mitigation strategies like a secure hypervisor in place

I know these are fairly vague, but this is really the history of game console protections over the last couple of years -- if you're curious as to how this has been solved (and broken) over and over, look to the console manufacturers.

It's never been done completely successfully, but you can raise the barrier to entry significantly.

OTHER TIPS

... To be honest, it sounds like you're asking how to write a virus into your application, which makes me think your client probably has more reason not to trust you than the other way around.

That being said, this is a terrible idea for a number of reasons:

  1. What happens if their internet connection dies or they move offices and disconnect the machine for a bit?
  2. What if you code it wrong and it misfires, deleting data even though the customer is using the product correctly?
  3. I can only assume your request implies that your application offers no backup capabilities. Am I correct? That sounds exactly like a product I wouldn't buy.
  4. How valuable is the data your application manages? If it is deleted what kind of financial losses would this result in for the client? Has your legal department signed off on this and verified you can't be held liable?

This question is asked on SO two or three times a week, and the answer is always the same: whatever you have given to the user is not yours anymore.

You can make it harder for the user to get to the data, but you can't prevent him from getting there completely. You can encrypt the data, and you can keep the decryption key on a USB cryptotoken (which doesn't expose the secret key), but if your code can ask the cryptotoken to decrypt a chunk of data, then in theory a hacker can duplicate your code and make it ask the cryptotoken to decrypt all the data.
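The "duplicate the calls" argument can be sketched in a few lines. This is a toy model (a hypothetical `CryptoToken` class, with XOR-against-a-hash standing in for a real cipher), not any real token API; the point is only that the attacker drives the same decrypt interface the legitimate code does:

```python
import hashlib

class CryptoToken:
    """Toy stand-in for a USB cryptotoken: it never exposes its secret key,
    it only offers a decrypt() operation. (Hypothetical API for illustration.)"""
    def __init__(self, secret):
        self._secret = secret  # never readable from outside the "token"

    def decrypt(self, chunk):
        # Toy XOR "cipher" keyed by the hidden secret; NOT real cryptography.
        pad = hashlib.sha256(self._secret).digest()
        return bytes(c ^ p for c, p in zip(chunk, pad))

def encrypt(secret, chunk):
    pad = hashlib.sha256(secret).digest()
    return bytes(c ^ p for c, p in zip(chunk, pad))

token = CryptoToken(b"hidden-key")
stored = [encrypt(b"hidden-key", b"record-1"),
          encrypt(b"hidden-key", b"record-2")]

# Legitimate application path: decrypt one record on demand.
plain = token.decrypt(stored[0])

# The attacker's path is identical: he never learns the key, but by driving
# the same API the application uses, he decrypts every chunk anyway.
dumped = [token.decrypt(c) for c in stored]
```

The token keeps its secret, yet the secret was never what the attacker needed; access to the decrypt operation was enough.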

In practice the task can be made complicated enough that getting to the data is unfeasible. At that point you should assess how important the decrypted data really is to the user.

About the kill switch: this doesn't work. Ever. The user can make a copy and restore it from a backup if needed. He can change the computer's clock. He can probably even slow the computer's clock down (if the data is so valuable that investing in custom emulation hardware is feasible).

About critical data: sometimes it turns out that your valuable asset is really of little value to anybody else [and some other aspect of your solution is]. Example: we ship the source code of our driver products. It's our most valuable asset, but users pay not for lines of code, but for support, updates, and other benefits. A user would not be able to use the [stolen] source code effectively without investing a sum comparable to the cost of our license.

About obfuscation: virtualization of pieces of code (e.g. the VMProtect product) seems to be quite effective; however, it too can be bypassed with a certain amount of effort.

In general, I can imagine some custom hardware with a custom-built operating system, sealed like a cash machine (so that the client can't get in without breaking the seal), with regular inspections, etc. This might work. So the task is not just technical but largely organizational: you will need to arrange regular inspections of the machine, and so on.

To summarize: if the data is that valuable, keep it on your servers and offer only an Internet connection. Otherwise you can only minimize the risks, not avoid them completely.

As everyone else said, there is no magic bullet. The user could turn off the machine, mount the HD as a slave in another machine, back everything up, reverse engineer your code, and then crack it successfully. Once the user has physical access to the executable, it is potentially compromised, and there is nothing you can do to stop that in 100% of cases.

The best you can do is make a potential cracker's work hard as hell, but no matter what you do, it will not be unbreakable.

Self-destruction when something looks wrong can be worked around by a cracker who has backed everything up.

Using a key on a USB drive helps make the cracker's life harder, but it can ultimately be defeated by a competent, determined cracker: the code that decrypts things can't itself be in an encrypted state (including the part that fetches the key), so that code is the big weak point. Patching that part of the code to save the key somewhere else defeats the key.

If the software authenticates against a remote server, this can be worked around by attacking the client and circumventing the authentication. If it gets a key from the server, sniffing the network could intercept the server data that contains the key. If the server data is encrypted, the cracker can decrypt it by analyzing the software that decrypts it and fishing out the decrypted data.

In particular, everything becomes much easier for a cracker if he runs your software in an emulator capable of saving snapshots of memory (including an unencrypted version of the algorithm). Easier still if he can manipulate and pin the memory directly while running your software.

If you don't expect your untrusted client to be very determined, you can just complicate things and hope they never muster enough energy and skill to make breaking it worthwhile.

The better solution, in my opinion, is to keep all the software on your trusted server and have their server simply ask yours to do the job, so that your algorithms never leave your server. This is much safer and simpler than everything else, because it removes the fundamental problem: the user no longer has physical access to the algorithm. You should really, really think about a way to eliminate the need to keep the code at the client. Even this is not unbreakable, however; a hacker can deduce what the algorithm does by analyzing the output as a function of the input. In most scenarios (though it does not look like this is your case), the algorithm is not the most important part of the system; the data is.
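As a minimal sketch of that split (names and transport are illustrative; a real deployment would use TLS and authentication, not plain HTTP on localhost), the algorithm lives only in the server process, and the client sees nothing but inputs and outputs:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

def proprietary_algorithm(values):
    # The trade secret stays here, on hardware you control.
    return sum(v * v for v in values)

class AlgorithmHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        values = json.loads(body)["values"]
        result = json.dumps({"result": proprietary_algorithm(values)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(result)))
        self.end_headers()
        self.wfile.write(result)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Run the "trusted" server in the background (in production: your own site).
server = HTTPServer(("127.0.0.1", 0), AlgorithmHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client box only ever sends inputs and receives outputs.
url = "http://127.0.0.1:%d" % server.server_port
req = Request(url, data=json.dumps({"values": [1, 2, 3]}).encode(),
              headers={"Content-Type": "application/json"})
answer = json.loads(urlopen(req).read())["result"]
server.shutdown()
```

Note that even here the client can probe input/output pairs, which is exactly the residual attack the paragraph above describes.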

So, if you really can't avoid running the algorithm at the untrusted party, you can't do much more than what you already said: encrypt everything (preferably in hardware), authenticate and check everything, destroy important data before someone thinks of backing it up if you suspect something is wrong, and make it hard as hell for someone to crack it.


BUT, IF YOU REALLY WANT SOME IDEAS, AND REALLY WANT TO DO THIS, HERE WE GO:

I suggest making your program mutate. I.e.: when you decrypt your code, re-encrypt it with a different key and throw away the old one. Get the new key from the server, and make sure the key is itself encoded in a way that makes it very hard to mock the server with something that hands out compromised new keys. Guarantee that each key is unique and never reused. Again, this is not unbreakable (and the first thing a cracker will do is attack this very feature).
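A rough sketch of that mutation scheme, using only the standard library, with a toy XOR keystream in place of a real cipher and a locally generated key where the real design would fetch one from your server:

```python
import hashlib
import secrets

def xor_cipher(key, data):
    # Toy keystream cipher (SHA-256 in counter mode); for illustration only.
    # Substitute real authenticated encryption (e.g. AES-GCM) in practice.
    out = bytearray()
    for i in range(0, len(data), 32):
        pad = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out += bytes(c ^ p for c, p in zip(data[i:i + 32], pad))
    return bytes(out)

class MutatingBlob:
    """Each time the payload is used, it is re-encrypted under a fresh key
    and the old key is thrown away, so yesterday's ciphertext and key are
    both useless to an attacker."""
    def __init__(self, payload, key):
        self._key = key
        self._blob = xor_cipher(key, payload)

    def use(self):
        payload = xor_cipher(self._key, self._blob)  # decrypt to run it
        self._key = secrets.token_bytes(32)          # fresh key (from your server, really)
        self._blob = xor_cipher(self._key, payload)  # re-encrypt under the new key
        return payload

blob = MutatingBlob(b"secret algorithm bytes", secrets.token_bytes(32))
first = blob.use()
second = blob.use()  # same payload, but the blob on disk/RAM has changed
```

As the answer notes, the re-keying step itself is the obvious attack target, since it must see the plaintext.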

One more thing: put in a lot of non-obvious red herrings that perform nonsensical, strange consistency checks, along with plenty of non-functional bogus versions of your algorithm, and add a lot of complex bloat that effectively does nothing but asserts that it runs as expected, just like real code. Make the real code do some things that look strange and nonsensical too. This makes debugging and reverse engineering even harder, because the cracker will need a lot of effort to separate what is useful from what is junk.

EDIT: And obviously, make part of the junk code look better than the real code, so a cracker will look there first, losing time and patience. Needless to say, obfuscate everything, so that even if the cracker gets the plain, unencrypted running code, it still looks confusing and very strange.

I know others will probably poke holes in this solution - and feel free to do so, as I do this sort of thing for a living and would welcome the challenge! - but why not do this:

  1. Since you are clearly using Windows, enable BitLocker drive encryption on the hard drive with the maximum security settings. This will help mitigate people cloning the drive, as my understanding - if I am wrong, say so! - is that its contents are encrypted based on that system's hardware configuration.

  2. Enable the TPM on the hardware and configure it correctly for your software. This will help stop hardware sniffing.

  3. Disable any accounts you don't use, and lock down the system accounts and groups to only what you need. Bonus points for setting up Active Directory and a secured VPN so you can access their network remotely via a back door to check the system without making an official on-site visit.

  4. To raise the technical bar required to get into this, write the software in C++ or some other non-.NET language, since MSIL byte-code is easily decompiled into source code by publicly available free tools, and it takes more technical skill to reverse assembly, even if it is still very doable with the right tools. Make sure you enable all CPU instructions for the hardware you will be using, to further complicate matters.

  5. Have your software validate the hardware profile (unique hardware IDs) of the deployed system every so often. If this fails (i.e. the hardware has changed), have it self-destruct.

  6. Once the hardware has been validated, load your software from an encrypted binary image into an encrypted RAM disk, which is then itself decrypted in (non-pinned!) memory. Don't pin it or use a constant memory address; that is a bad idea.

  7. Be very careful that, once decryption is done, the keys are removed from RAM, as some compilers will stupidly optimize away non-secured bzero/memset calls and leave your key in memory.

  8. Remember that security keys can be detected in memory by their randomness relative to other blocks of memory. To help mitigate this, make sure you use multiple "dummy" keys that, if used, trigger an intrusion-detection-and-explode scenario. Since you should not be pinning the memory used by the keys, this will allow people to trigger the same dummy keys multiple times. Bonus points if you can have all dummy keys randomly generated, and the real key different each time thanks to #12 below, so that they cannot simply look for the key that doesn't change... because they all do.

  9. Make use of polymorphic assembly code. Remember that assembly is really just numbers, and code can be made self-modifying based on the instructions and the state of the stack / what was called before. For example, on a simple i386 system, 0x0F97 (set byte if above) can easily become the opposing instruction (set byte if below) by simply subtracting 5. Use your keys to initialize the stack, and leverage the CPU's L1/L2 cache if you really want to go hardcore.

  10. Make sure your system knows the current date/time and validates that it is within acceptable ranges. Starting the day before deployment and giving it a limit of 4 years would be compatible with the bell curve of hardware failure for hard drives under warranty/support, so you can take advantage of such protection AND allow yourself good time between hardware updates. If this validation fails, make it kill itself.

  11. You can help mitigate people screwing with the clock by making sure your PID file is updated with the current time every so often; comparing its last-modified time (both as encrypted data and as its file attributes on the file system) to the current time will be an early-warning system for clock tampering. On detecting a problem, explode.

  12. All data files should be encrypted with a key that updates itself on your command. Set your system to update it at least once a week, and on every reboot. Add this to the update-from-your-servers feature that your software should have.

  13. All cryptography should follow FIPS guidelines: use strong crypto, use HMACs, etc. You should try to hit FIPS 140-2 Level 4 specs given your situation, but understandably some of the requirements may not be feasible from an economic standpoint, and realistically FIPS 140-2 Level 2 may be your limit.

  14. In all self destruct cases, have it phone home to you first so you know immediately what happened.
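To make point 11 concrete, here is a minimal sketch of the heartbeat-file clock check. The file name and skew tolerance are illustrative, and the encrypted copy of the timestamp mentioned above is omitted for brevity:

```python
import os
import time

HEARTBEAT = "app.pid"   # hypothetical heartbeat/PID file name
MAX_SKEW = 60 * 60      # tolerate up to one hour of drift

def heartbeat():
    # Record the current wall-clock time. The file's mtime is set by the OS,
    # giving a second, independent record to compare against.
    with open(HEARTBEAT, "w") as f:
        f.write(str(int(time.time())))

def clock_looks_tampered():
    with open(HEARTBEAT) as f:
        recorded = int(f.read())
    mtime = int(os.path.getmtime(HEARTBEAT))
    now = int(time.time())
    # If "now" is well before either record, the clock was rolled back.
    return now + MAX_SKEW < recorded or now + MAX_SKEW < mtime

heartbeat()
ok = not clock_looks_tampered()   # True on an untampered machine
os.remove(HEARTBEAT)
```

As the other answers point out, this only raises the bar: an attacker who restores a disk image and the clock together will pass this check.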

And finally some non-software solutions:

  1. If it can't phone home... as a last-ditch effort, a custom hardware device connected to an internal serial/USB port that activates a relay and sets off a block of thermite if it detects case, hardware, or software tampering. Putting it on top of the hard drives and placing these over the motherboard will do the job best. You will, however, need to check with your legal department for the permits, etc., required if this is not a US-military-approved situation, as I am assuming you are in the USA.

  2. To make sure the hardware is not tampered with, see the FIPS physical security requirements for more details on making the system physically secure. Bonus points if you can see about bolting/welding the modern racks you are using into an old AS/400 case as camouflage to help mitigate movement/tampering of the hardware. Younger guys will not know what to do and will be worried about breaking "such old stuff", older guys will wonder "wtf?", and most everybody will leave blood behind that can be used later as evidence of tampering if they tamper with the often sharp-edged case, at least based on my own experience.

  3. In the case of an intrusion notification, nuke it from orbit... it's the only way to be sure. ;) Just make sure you have all the legal forms and access requirements filled out so legal is happy with the mitigation of risk and liability... Or set up your notification system to email/text/phone people automatically once you get a notification telling you it exploded.

"The only way to have a totally secure system is to smash it with a hammer"

That said, it is possible to frustrate would-be hackers enough to make it more trouble than it is worth. If the machine is a 'black box' that they can't truly access directly, but instead only deal with through programs, then your greatest threat is physical access. You can lock cases down, and even install a small, breakable item in the case that will snap if the case is opened... make sure your service people always replace this item... it will let you know if someone has opened it without authorization (yes, it's an old teenager trick, but it works). As for the box itself, physically disable any bits of hardware (like USB ports) that you don't absolutely need.

If you are dealing with a machine that isn't a black box, encrypt the hell out of everything... 256-bit encryption is effectively impossible to crack without the key... then the trick becomes getting the key.

In theory, you could have the key change (by re-encrypting the data) and be retrievable only by a process that communicates directly with your (safe) servers.

Additionally, track everything that happens to the box, especially anything in the software that is outside normal use. Much of this can't protect you from someone who is really, really determined... but it can alert you that your system has been compromised (upon which you can sue the heck out of whoever broke in).

As for the kill switch... well, sleeper viruses are out there, but as has been said, they can be fooled or set off by accident. I would suggest that rather than wiping itself clean, if you suspect a breach, have the system encrypt everything it can with a randomly generated key, send the key to your servers (so you can undo the damage), and then 'shred' the file that held the key (many file shredders can destroy data well enough that it is (almost) impossible to recover).
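That encrypt-and-escrow kill switch might look roughly like this (a toy XOR keystream stands in for real authenticated encryption, and `send_key_home` is a placeholder for the call to your servers):

```python
import hashlib
import os
import secrets

def toy_encrypt(key, data):
    # Toy symmetric keystream (SHA-256 counter mode); encrypting twice with
    # the same key decrypts. Use a real cipher in practice.
    out = bytearray()
    for i in range(0, len(data), 32):
        pad = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out += bytes(c ^ p for c, p in zip(data[i:i + 32], pad))
    return bytes(out)

def crypto_shred(path, send_key_home):
    key = secrets.token_bytes(32)
    with open(path, "rb") as f:
        data = f.read()
    with open(path, "wb") as f:
        f.write(toy_encrypt(key, data))
    send_key_home(key)  # escrow the key so *you* can undo the damage later
    # The key only ever existed in RAM here; a native implementation
    # would also explicitly zero the buffer before returning.

# --- demo: "breach" detected, scramble the data, escrow the key ---
escrow = []
with open("data.bin", "wb") as f:
    f.write(b"valuable client-side data")
crypto_shred("data.bin", escrow.append)
with open("data.bin", "rb") as f:
    scrambled = f.read()
recovered = toy_encrypt(escrow[0], scrambled)  # only possible with the escrowed key
os.remove("data.bin")
```

The same caveat from earlier answers applies: a client who backed up the plaintext beforehand is unaffected by this.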

Summarizing the answers: yes, there is no 'perfectly safe' solution to this problem, as it would require homomorphic encryption (which currently exists only in the form of limited prototypes that require ridiculous amounts of computation).

In practice, what you need is a combination of proper requirements engineering and security engineering (evaluate the stakeholders, their interests, the valuable assets within the deployed system, the possible attacks, and the damage from each successful attack scenario vs. the cost of defending against it).

After that, you'll either see that the protection is not really needed, or you can deploy some reasonable measures and cover the remaining 'holes' with legal agreements, or you can re-engineer the system altogether, starting with the business model (unlikely, but possible too).

Generally, security is a systems engineering problem, and you should not limit yourself to technical approaches only.

Licensed under: CC-BY-SA with attribution