Distributing through the Mac App Store will help, since users can see that Apple has reviewed your application and found nothing nefarious in it. Also, sandboxing your app means that it is restricted to an explicit set of abilities (its entitlements), which technically skilled users can inspect. Anything not listed there, your app can't do, so shipping sandboxed with no network entitlements is an easy way to prove that you don't send anything back over the internet.
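For illustration, here's roughly what such an app's `.entitlements` file might look like (the key names are the standard App Sandbox entitlement keys; the exact set your app needs will differ):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Opt the app into the App Sandbox. -->
    <key>com.apple.security.app-sandbox</key>
    <true/>
    <!-- Let the user read and write files they explicitly choose. -->
    <key>com.apple.security.files.user-selected.read-write</key>
    <true/>
    <!-- Note what is *absent*: no com.apple.security.network.client or
         com.apple.security.network.server keys, so the sandbox denies
         all outgoing and incoming network connections. -->
</dict>
</plist>
```

Anyone who extracts the entitlements from your signed app can confirm the network keys aren't there.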
Another thing would be to save all data in user-readable files. No binary plists, no Core Data stores, etc. (Whether the XML variants of either of those should count as user-readable would be more arguable, but for this purpose, I think at least an XML plist would be readable enough. Not sure about Core Data.)
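As a rough Swift sketch of the plist case (the file path and dictionary contents here are made up for the example), you can ask Foundation for the XML format explicitly instead of letting it write a binary plist:

```swift
import Foundation

// Hypothetical example data; in a real app this would be whatever you persist.
let settings: [String: Any] = [
    "recentDocuments": ["Notes.txt", "Ideas.txt"],
    "windowWidth": 480
]

let url = URL(fileURLWithPath: "/tmp/ExampleSettings.plist")
do {
    // Request the XML flavor explicitly so the saved file stays
    // human-readable in TextEdit.
    let data = try PropertyListSerialization.data(fromPropertyList: settings,
                                                  format: .xml,
                                                  options: 0)
    try data.write(to: url, options: .atomic)
} catch {
    print("Couldn't save settings: \(error)")
}
```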
If the user can read all of the raw data you store using applications that they trust (such as TextEdit), and not just your usual fancy in-app presentation of it, then they can check for themselves, and eventually trust, that you're not storing anything they wouldn't want you to.
If any concerned potential users email you to ask whether you report their keystrokes to your own server over the internet, and assuming that you don't make any internet connections at all (not even an update check), you can recommend that they install Little Snitch, which pops up a confirmation alert any time an app tries to connect to something. When they never see such an alert for your app, they'll know you're not phoning home.
You might also, on your product webpage, include a link to a tech profile. Here's Jesper's article proposing them, and here's one example of such a document, for one of his products.