Question

I've been trying my hand at building apps with Flutter and Dart. I noticed in my apps that if someone decompiled my app they could access a whole lot of things I didn't want them to access.

For example, if I am calling my database to set a user's 'active' status to False when they cancel their plan, they could just comment out that bit of code and get access to the entire app again despite having cancelled their plan.

Since this is my first app, my backend is Firebase. The app handles everything and calls Firestore when it needs to read or write data.

  1. Is this something to really worry about?

  2. If so should I be using something like Firebase Cloud Functions?

  3. Should I be creating a proper backend? If so, what would its structure be? Would my app just be a client for the backend?


Solution

I used to be a full-time binary reverse engineer, and I still spend about 80% of my time reverse-engineering software (legally).

There are some good answers here already, but I wanted to add a few touches.

Legal Aspect

I'm not a lawyer. But as far as I'm concerned (and many others agree), reverse engineering doesn't really become an enforceable legal matter until you've done something with the knowledge. Think about this situation:

Say I'm a reverse engineer and I download your app. I disconnect my "lab" machine from the network. Now, I decompile, disassemble, and debug your app, taking detailed notes of how it works. After doing all of this, I wipe out my lab machine and it never sees the network.

Then, I do nothing with that knowledge, because it's a weekend hobby and I just enjoy decompiling things.

It's debatable whether or not this is illegal, and more importantly, it's unenforceable. There is no way you, your lawyer, or anyone else will ever know that I did this unless I'm already suspected of copyright violations, patent violations, or some other crime. Even if you sued me, what would you sue me for? I never published, distributed, advertised, told anyone, or did any kind of monetary damage to your business whatsoever. What would your "damages" be? For this reason, the vast majority of the time (see the EFF page linked in an earlier comment), real prosecution stems from some (usually major) perceived loss by the software development firm or copyright/patent holder.

The trick is that a reverse engineer may actually use some of the knowledge gained from your app's code and do things that will be hard for you to detect or prove. If a reverse engineer copied your code word for word and then sold it in another app, that would be relatively easy to detect. However, if they write code that does the same thing but is structured entirely differently, it would be difficult to detect or prove.

Learn who would target your app, and why

What type of people would want to reverse engineer your app? Why? What would they get out of it?

Are they hobbyists who enjoy your app and could potentially even be helping your business by fostering a community of hacker enthusiasts? Are they business competitors? If so, who? What is their motive? How much would they gain?

These questions are all very important to ask, because at the end of the day, the more time you invest in locking down your code, the more costly it is to you, and the more costly it is for an adversary to reverse engineer. You must find the sweet spot: enough application hardening that most technically capable people won't want to bother spending time trying to thwart your app's defenses, without the effort costing you more than it's worth.

Five Suggestions

  1. Create a so-called "threat model." This is where you sit down, think through your application's modules and components, and research which areas would most likely be compromised and how. You map these out, often in a diagram, and then use that threat model to address the threats as best you can in the implementation. Perhaps you model out 10 threats, decide that only 3 are likely, and address those 3 in the code or architecture.

  2. Adopt an architecture that trusts the client application as little as possible. While the device owner can always view the app's code and its network traffic, they cannot always access the server. There are certain things you can keep on the server, such as sensitive API keys, that the attacker can then never access. Look into "AWS Secrets Manager" or "HashiCorp Vault", for example. For every client module, ask yourself "Would it be OK if an attacker could see the inner workings of this?" "Why not?" and make the necessary adjustments.

  3. Apply obfuscation if your threat model requires it. With obfuscation, the sky is the limit, and the reality is that it is an effective protection mechanism in many cases. I hear people bashing obfuscation a lot. They say things like

    Obfuscation will never stop a determined attacker because it can always be reversed, the CPU needs to see the code, and so on.

    The reality is, as a reverse engineer, if whatever you've done has made cracking into your app take 2-3 weeks instead of an hour (or even 3 hours instead of 5 minutes), I'm only cracking into your app if I really, really want something. Most people's apps are, frankly, not that popular or interesting. Sectors which need to take extra measures include finance, government, video game anti-hacking/anti-cheat, and so on.

    Furthermore, the above argument is nonsensical. Cryptography doesn't stop people from getting your data; it just slows them down... Yet you're viewing this page right now over TLS. Most door locks are easily picked by a skilled lockpicker in seconds, people can be shot through bullet-proof vests, and people sometimes die in car accidents while wearing a seatbelt... So should we not lock doors, wear vests, and wear our seatbelts? No, that would be silly: these devices reduce the likelihood of a problem, just like obfuscation, symbol stripping, a more secure architecture, a secrets-manager service for your API secrets, and other hardening techniques that help prevent reverse engineering.

    Say I'm a competitor and I want to learn how to make an app like yours. I go to the app store and search for similar apps. I find 10 and download them all. I do a string search through each. Seven of them turn up nothing useful, but in three I find unstripped symbols, credentials, or other hints... Which apps do you think I'm going to be copying? Those three. You don't want to be one of those three.

  4. Scan your source code for sensitive strings such as API secrets, sensitive keys, admin passwords, database passwords, email addresses, AWS keys, and so on. I usually search for words like "secret", "password", "passphrase", ".com", and "http" using a tool called ripgrep. There will be false positives, but you may be surprised at what you find. There are automated tools which help accomplish this, such as truffleHog.

  5. After you build your application, run the strings utility or a similar utility on it. View the output both manually and with a text search like ripgrep or grep. You'll be surprised at what you find. Example commands for this and the previous suggestion follow below.
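For example, a minimal sketch of both steps (the search patterns and file paths are assumptions; adjust them to your project):

# Scan the source tree for likely-sensitive strings (ripgrep, case-insensitive).
rg -i "secret|password|passphrase|\.com|http" lib/

# After building, dump printable strings from the compiled Dart library
# (in a Flutter Android build this ends up as libapp.so inside the APK)
# and search the output the same way.
strings libapp.so | rg -i "secret|password|key"

There will be noise in the output, but anything that should not be there, such as a credential or an internal URL, tends to stand out quickly.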

Know about deobfuscators and look for them

Lastly, know that various obfuscators out there have corresponding deobfuscators and "unpackers." One such example is de4dot, which deobfuscates the output of about 20 different C#/.NET obfuscators. So, if your idea of protecting something sensitive is just running a commodity obfuscator, there's a high chance that a deobfuscator already exists, or that folks online are discussing how to deobfuscate it, and it would be worth researching them before you decide which obfuscator to use.

Why bother obfuscating when I can search "[insert language here] deobfuscator," open de4dot, and deobfuscate your entire program in 2 seconds? On the other hand, if your team uses some custom obfuscation techniques, it may actually be harder for your adversaries, because they would need a deeper understanding of deobfuscation and obfuscation techniques, rather than just searching the web for a deobfuscator and quickly running it.

OTHER TIPS

Once someone has a copy of your app, they can do anything with it. Your security model has to assume that nothing in your app is secret, and that actions that look like they have been made by your app might actually be malicious. As an approximation, a native app is about as secure as a web app.

That means that you must not store any API tokens or similar secrets in your app. If you need to keep something secret, you have to write a server backend that manages the secret material and have your app talk to that backend. FaaS approaches might also work if you're not expecting many requests.
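As a minimal sketch of what "talking to the backend" looks like from a Flutter client (the endpoint here is hypothetical; the point is that the third-party API key lives only on your server, which makes the privileged call on the app's behalf):

import 'package:http/http.dart' as http;

// The app never holds the third-party API key. It asks a backend you
// control to perform the privileged operation and return the result.
Future<http.Response> requestGeocode(String address) {
  return http.get(
    Uri.https('api.example.com', '/geocode', {'address': address}),
  );
}

Anything this function can reach, the user can also reach directly, so the backend must still authenticate the caller and enforce its own rules.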

Firebase does have server-side authentication capabilities that, for example, prevent a user from modifying other users' data, if you configure everything appropriately. You can also apply some amount of validation to check that the data sent by the user makes sense. But in general, once your rules grant a user access to a document, they can change whatever they want within it. Please read the Firebase security documentation carefully to avoid security breaches.
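As an illustration only (the collection and field names are assumptions), a minimal Firestore rules sketch that lets a signed-in user read and update their own document while preventing them from flipping their own 'active' flag:

rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Hypothetical 'users' collection keyed by the Firebase Auth UID.
    // (Document creation would happen server-side, e.g. in a Cloud Function.)
    match /users/{userId} {
      allow read: if request.auth != null && request.auth.uid == userId;
      // Updates are allowed only by the owner, and only if the incoming
      // data leaves the 'active' field unchanged.
      allow write: if request.auth != null
                   && request.auth.uid == userId
                   && request.resource.data.active == resource.data.active;
    }
  }
}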

On mobile devices that haven't been rooted, apps do enjoy some basic security guarantees: for example, it is possible to check that an app is actually running on a specific device and that it has not been modified. This means that 2FA apps or banking apps can be reasonably secure, but it does not defend against decompilation. You must still ensure that your backend never trusts anything from the client.

Never trust the client. Make sure that anything you need to keep private is stored on the server and requires user-specific credentials to access.

Is this something to really worry about?

This depends very much on the product. A lot of the time, someone doing it will "cost" you $30 a month; who cares if four or five (or, most likely, zero!) people do it? You can monitor the situation over time and make changes if necessary. It's a bit like profiling code: engineers are notoriously bad at estimating which bits actually matter.

Also, think rationally: if you are "angry" at people who do it, put that feeling aside.

HOWEVER!!

they could just comment out that bit of code and get access to the entire app again

If this is a problem, there is a good chance your users could do more serious things you haven't thought of, like impersonating other users, messing with their profiles, or buying things with their money.

If so should I be using something like Firebase Cloud Functions?

Yes, "something like" that. For 95% of people asking this question, the problem is pretty much eliminated if you perform authentication and authorisation and sensitive functionality on the server/cloud rather than the client (and follow best practices correctly.) You don't necessarily need Firebase Functions if you can set up Firebase security rules to do the job. It depends on your application.

However, in some cases code really needs to run on the client (e.g. in games, or proprietary number-crunching algorithms), or round-tripping to a server would be too slow. In those cases, obfuscation is where to put your attention. Nobody has mentioned anti-debugging techniques yet: malware authors use these to shut a program down if it detects it is being run in a debugger or VM, and they make reverse engineering even more time-consuming.

Should I be creating a proper backend? If so, what would its structure be? Would my app just be a client for the backend?

Backends tend to implement behaviour, and your client may access some functionality through the backend and some directly. If you have complex rules, like users managing other users or teams, loyalty points, and so on, that belongs on the backend. It is madness to try to securely authorise that sort of thing on the client.

Otherwise, it's a matter of taste how much functionality to put on the server. On the one hand, it creates an extra layer to implement and maintain. On the other hand, you can update backend code "in one go", so if you want to add new features or fixes, you don't need to worry about rollouts and conflicting versions of your client app everywhere. Doing intensive work on the backend is also good for client battery life (at the expense of server $). And so on.

As Jörg W Mittag mentioned, there is the legal aspect of what you are talking about, and then the technical. As long as the app embeds critical logic and database access inside it, someone with enough patience can reverse engineer it and do the bad things you are talking about. There are different approaches you can take to protect your efforts:

  • Use Digital Rights Management (DRM) to protect the app; it can still be defeated, but it is harder to do
  • Use a code obfuscator to make the code harder to reverse engineer
  • Encrypt the module that performs the critical access (you still have to decrypt it when loading it into memory)
  • Move all critical behaviour to services hosted remotely (i.e. in the cloud)

None of these solutions are mutually exclusive, but the one that provides the best protection is to move your database access and critical business logic into a service-oriented architecture (i.e. web services you control). That way it is never part of your app to begin with, and none of the code you are worried about is even available for someone to reverse engineer.

It also means you are free to change how that information is stored and managed without having to release a new version of the app. Of course, you'll have to provide the appropriate protection to make sure that a user can only see or interact with their own data, but now you don't have to worry about the app being hacked.

Many apps are built this way now. The app communicates with servers over HTTP using JSON, YAML, Protobuf, BSON, or some other structured exchange format. The app authenticates to get a session token that is good for a few minutes at a time, and that token is presented to your service on each request so you don't have to worry about server-side sessions.
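For instance, with Firebase the client can fetch a short-lived ID token and present it on each request; the endpoint shown here is hypothetical, and your server must verify the token (e.g. with the Firebase Admin SDK) before acting on it:

import 'package:firebase_auth/firebase_auth.dart';
import 'package:http/http.dart' as http;

// Fetch the signed-in user's short-lived ID token and present it as a
// bearer token; the server verifies it and derives the user's identity
// from the token, never from anything else the client claims.
Future<http.Response> fetchMyInvoices() async {
  final idToken = await FirebaseAuth.instance.currentUser!.getIdToken();
  return http.get(
    Uri.parse('https://api.example.com/invoices'),
    headers: {'Authorization': 'Bearer $idToken'},
  );
}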

How do app developers protect their app when a user decompiles it?

In practice, they don't.

AFAIK, in Europe, decompilation of software is legally permitted for interoperability purposes; please check with your lawyer, since I am not a lawyer. Be aware of the GDPR. A related legal question is the patentability of software. This is discussed by the FSF, the EFF, APRIL, and AFUL (note that I am a member of both APRIL and AFUL).

But your question makes little sense: you are trying to find a technical answer to a legal, social, and contractual issue.

A good way to protect software is through a legal contract, such as an EULA.

Writing a contract requires as much expertise as writing software. You need to contact your lawyer.

In most countries, an unhappy former IT professional could report software license violations to a court, and that threat is dissuasive enough for most businesses.

A dual, or symmetrical, question is discussed in the paper The Simple Economics of Open Source; the paper Big Other: Surveillance Capitalism and the Prospects of an Information Civilization is also relevant.

See also, of course, SoftwareHeritage.

You could technically write your own GCC plugin to obfuscate code, or customize Clang for such purposes. I don't know whether that is legal; please check with your lawyer. See also this draft report for technical insights.

PS. Embedded code in ICBMs or aircraft (see Common Criteria and DO-178C) is probably not obfuscated. Such software-intensive systems are protected by other means (including personnel armed with machine guns).

There are two aspects to this.

First off, what you are describing is illegal in many, if not most, jurisdictions.

  • Decompilation: In the EU, for example, decompiling is only legal for purposes of interoperability, and only if the copyright holder refuses to make interoperability documentation available under reasonable terms. So, unless the user is developing an app that requires interoperating with your service, has contacted you and asked for the information required to interoperate with your service, and has been refused, they are not legally allowed to decompile or otherwise reverse engineer your app, your service, or your network protocol.
  • Circumventing a digital protection device is illegal in the EU, the US, and many other jurisdictions.
  • Fraud: Using your app without paying is fraud, which is a crime pretty much everywhere.

So, since what you are describing is highly illegal, one potential way of dealing with the problem is to simply not do anything, on the assumption that no one is willing to go to jail to save the price of your app. Simply put: don't do business with criminals.

Since that is not always possible, we have to talk about the second aspect: the user owns the device. That is information security 101. You cannot trust anything that is on that device or is sent by that device. Period. The user can manipulate everything you send, everything you store.

Computers are stupid. Much stupider than humans. In order to execute code, the computer has to understand it. You can compile it, obfuscate it all you want, the computer still has to be able to understand it in order to execute it. Since computers are stupider than humans, this means that the user can understand it, too, i.e. decompile / disassemble / reverse engineer it.

You can encrypt it, but the computer has to decrypt it to understand it. Therefore, you have to store the decryption key somewhere on the user's device. Since the user owns the device, the user can extract the key. Or you send the key over the network. Since the user owns the device, the user can intercept the key. (Or the user can log the device into a WiFi under the user's control, or …)

There is no way in which you can protect the code.

You have to design your security under the assumption that the user can read and change your entire code on the device, read and change your entire data on the device, read and change everything your app sends over the network, read and change everything your app receives over the network. You cannot trust the user, the user's device, or your own app. Period.

The security models of mobile devices are designed to protect the user from apps, not the other way around.

You should learn more about how to secure your database with security rules because, as others have said, you cannot be sure the user won't access your code.

You should implement Cloud Functions for every sensitive piece of code you want to run on the server. For example, you could have one function that sets the user to premium when they present valid credentials.

You should also restrict premium access in your database (set security rules) so that only premium users can access it (you can store the premium flag in the user's auth token as a custom claim).
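A sketch of how that looks from the Flutter side, using the cloud_functions plugin (the function name and payload are assumptions; the function itself runs on Firebase's servers, where it can verify the purchase and set a 'premium' custom claim on the user's token):

import 'package:cloud_functions/cloud_functions.dart';

// Ask a server-side callable function (hypothetical name) to upgrade
// the account. The server verifies the purchase and sets the custom
// claim; the client has no way to grant itself premium status.
Future<void> activatePremium(String purchaseToken) async {
  final callable = FirebaseFunctions.instance.httpsCallable('activatePremium');
  await callable.call(<String, dynamic>{'purchaseToken': purchaseToken});
}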

You should always keep in mind that anyone could gain access to your database.

I think another part to your question is about the granularity of operations.

Your question seems to be framed such that your app has two actions:

  1. Cancel the plan
  2. Set the user status to inactive

And that these are separate, so a canny user could comment out (2) and let (1) still run.

In this case, these actions would be much better placed in a back-end function, and importantly, there should be only a single function that does both of these things in a transactional fashion, e.g.

CancelUserPlan() {
   // Run both steps as one atomic unit: either the plan is cancelled
   // AND the user is marked inactive, or neither change is applied.
   CancelPlan();
   SetStatusInactive();
   CommitChanges();  // commit the transaction, persisting both changes
}

At present you have another issue in your architecture beyond a malicious user: what happens if your second call fails (a network blip, for example)? Is that user now in a 'non-paying' but full-access state?

Having this as a single action that the user sees (and can manipulate) means that they can either cancel and be set inactive, or they can do neither of these things.


In short, this is a deeper problem than securing your code on the mobile device. As stated in other answers to this question, there are valid reasons to obfuscate deployed code, but if you haven't architected your application in a secure and robust fashion from the start, then you have another problem to rectify before you even get to obfuscation.

I believe you are looking for the concept of obfuscation. It basically makes code more difficult for humans to read. There is, in fact, some documentation on the Flutter website on how to achieve this.

Code obfuscation is the process of modifying an app’s binary to make it harder for humans to understand. Obfuscation hides function and class names in your compiled Dart code, making it difficult for an attacker to reverse engineer your proprietary app.

Documentation can be found at Obfuscating Dart code
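Concretely, the documentation shows enabling it at build time with the --obfuscate flag, paired with --split-debug-info so you keep the symbol maps needed to symbolicate your own crash reports later:

flutter build apk --obfuscate --split-debug-info=/<project-name>/<directory>

Note that this only renames symbols in the compiled Dart code; it does not hide string literals or protect secrets embedded in the app.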

Whether it is something to really worry about depends on the sensitivity of the application you are building. If this is a platform for business customers, they will often ask for penetration test results to verify the security of your app, and one of the things testers do is decompile the application.

I would also suggest hiding any sensitive keys (e.g. API keys) in the secure storage of whatever OS you target; on iOS, for example, this would be the Keychain. Otherwise someone could get hold of these keys and either impersonate you or leave you with a hefty bill if you have a usage-based subscription.
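For example, a sketch using the community flutter_secure_storage package (an assumption on my part; any wrapper over the iOS Keychain and Android Keystore works the same way):

import 'package:flutter_secure_storage/flutter_secure_storage.dart';

// Store and retrieve a value via the platform's secure storage
// (Keychain on iOS, Keystore-backed storage on Android).
const storage = FlutterSecureStorage();

Future<void> saveApiKey(String value) =>
    storage.write(key: 'api_key', value: value);

Future<String?> readApiKey() => storage.read(key: 'api_key');

This protects the keys at rest on the device; it does not change the earlier point that anything the running app can read, a determined user with full control of the device can eventually read too.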

Licensed under: CC-BY-SA with attribution