I've created a simple Mac app that gives you statistics on your typing behavior over time: for example, your average words per minute, which language you are typing in, how often you use the delete key, and so on. Interesting stuff! However, some test users have said they wouldn't use the app if they didn't know me personally, since it collects keystrokes like a keylogger.

Is there some certification I can get to show that I'm not doing anything nefarious? (I never keep more than one word in memory!) Or will it be enough to have my app signed? Or open-source that part of the code? (Other parts I know I cannot make open source.)
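
To make the "one word in memory" idea concrete, here is a rough sketch of what such a collector could look like (this is not the actual app; the class name, the key-code handling, and the counters are invented purely for illustration, and a real global key monitor also requires the user to grant Accessibility/Input Monitoring permission):

```swift
import AppKit

// Rough sketch only: keep just the word currently being typed, update
// aggregate counters when it ends, and never store raw keystrokes.
final class TypingStatsCollector {
    private var currentWord = ""            // at most one word held in memory
    private(set) var wordCount = 0
    private(set) var deleteKeyPresses = 0
    private var monitor: Any?

    func start() {
        // Global key-down monitor; only fires once the user has granted
        // the necessary permission in System Settings.
        monitor = NSEvent.addGlobalMonitorForEvents(matching: .keyDown) { [weak self] event in
            self?.handle(event)
        }
    }

    func stop() {
        if let monitor = monitor { NSEvent.removeMonitor(monitor) }
    }

    private func handle(_ event: NSEvent) {
        if event.keyCode == 51 {            // 51 = delete (backspace)
            deleteKeyPresses += 1
            if !currentWord.isEmpty { currentWord.removeLast() }
            return
        }
        guard let chars = event.characters, !chars.isEmpty else { return }
        if chars.rangeOfCharacter(from: .whitespacesAndNewlines) != nil {
            if !currentWord.isEmpty { wordCount += 1 }
            currentWord = ""                // the finished word is discarded immediately
        } else {
            currentWord.append(chars)
        }
    }
}
```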

Solution 2

I would think that Gatekeeper would be adequate for most users. If an app turns out to be doing bad things, Apple can pull the plug on the malware developer. That, plus some time in the wild, should establish your program as 'safe' to those who are not technically inclined (i.e. who cannot read your source).

Simply distributing it under your own or your company's name can do a lot to build trust in an app (provided, of course, that your other products have not violated users' trust).

Other tips

Distributing through the Mac App Store will help, since users can see that Apple has reviewed your application and found nothing nefarious in it. [Added:] Also, sandboxing your app means that it is restricted to an explicit set of abilities (entitlements), which technically-skilled users can inspect. Anything not listed there, your app cannot do, so this is an easy way to prove that you don't send anything back over the internet.
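
For instance, a sandboxed app whose entitlements file looks roughly like the illustrative one below (not taken from the question's app) cannot open outgoing network connections at all, because the network-client entitlement is simply never requested:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- App Sandbox is on; com.apple.security.network.client is deliberately
         absent, so the app cannot make outgoing network connections. -->
    <key>com.apple.security.app-sandbox</key>
    <true/>
    <!-- Read/write access only to files the user explicitly chooses. -->
    <key>com.apple.security.files.user-selected.read-write</key>
    <true/>
</dict>
</plist>
```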

Another thing would be to save all data in user-readable files. No binary plists, no Core Data stores, etc. (Whether the XML variants of either of those should count as user-readable would be more arguable, but for this purpose, I think at least an XML plist would be readable enough. Not sure about Core Data.)

If the user can read all of the raw data you store using applications that they trust (such as TextEdit), and not just your usual fancy in-app presentation of it, then they can check for themselves, and eventually trust, that you're not storing anything they wouldn't want you to.
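
As a minimal sketch of that idea (the TypingStats type and its fields are invented here purely for illustration), Foundation's PropertyListEncoder can be asked explicitly for XML output rather than a binary plist, which keeps the saved file readable in TextEdit:

```swift
import Foundation

// Illustrative only: an invented statistics record, written out as a
// human-readable XML property list the user can open in TextEdit.
struct TypingStats: Codable {
    var averageWordsPerMinute: Double
    var deleteKeyPresses: Int
    var detectedLanguages: [String]
}

func save(_ stats: TypingStats, to url: URL) throws {
    let encoder = PropertyListEncoder()
    encoder.outputFormat = .xml            // XML rather than a binary plist
    let data = try encoder.encode(stats)
    try data.write(to: url, options: .atomic)
}
```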

If any concerned potential users email you to ask whether you report their keystrokes to your own server over the internet, and assuming you don't make any internet connections at all (not even an update check), you can recommend that they install Little Snitch, which pops up a confirmation alert whenever any app tries to connect to something. When they never see such an alert for your app, they know you're not phoning home.

You might also, on your product webpage, include a link to a tech profile. Here's Jesper's article proposing them, and here's one example of such a document, for one of his products.

If you can get the application onto Apple's App Store, that means Apple will have checked it for exactly these kinds of problems; there's no way they'd knowingly allow a key-logging app in there. Also, signing the app with an Apple certificate means that if it is downloaded from the App Store and later found to be nefarious, Apple can blacklist it.

Open-sourcing code would also be a good idea. I assume you can't open-source all of it because it doesn't belong to you? If so, then make it clear what technologies it uses, and be as open and honest as you can about what the application does and how it goes about doing it.
