Facts: if you have to accept any form submission from anywhere, at any time, by anyone, you have basically no recourse against bots. That's all bots do: submit data to your server, from anywhere, at any time.
CSRF tokens have the job of requiring the user to fetch something from the server before submitting data to it. That gives the server something to distinguish random submissions from "real" submissions, and it can throttle the rate at which it hands out these tokens. But that really only protects against browser-based cross-site attacks driven by JavaScript; it doesn't do much against bots, which can fetch such a token at any time just like a browser would.
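To make the mechanism concrete, here's a minimal sketch of that fetch-before-submit handshake, assuming tokens are derived from a per-session identifier with an HMAC. The function names and the `session_id` parameter are illustrative, not any particular framework's API:

```python
import hashlib
import hmac
import secrets

# Hypothetical server-side secret; in practice this comes from configuration
# and must stay stable across requests (and across worker processes).
SERVER_SECRET = secrets.token_bytes(32)

def issue_csrf_token(session_id: str) -> str:
    """Hand out a token tied to the visitor's session; the form must
    include it so we know the client fetched the page from us first."""
    return hmac.new(SERVER_SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf_token(session_id: str, token: str) -> bool:
    """Reject submissions whose token doesn't match this session.
    compare_digest avoids leaking information via timing."""
    expected = issue_csrf_token(session_id)
    return hmac.compare_digest(expected, token)
```

Note that nothing here proves a human is involved; it only proves the submitter talked to your server before submitting, which is exactly why it stops cross-site tricks but not dedicated bots.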
If you tie the tokens to a user and require form submissions to come from authenticated users, you get a much better handle on limiting submissions: you can control the rate at which each user can submit data, and you can control who is allowed to sign up and how. That gives you a handle on both who may submit and how often.
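The per-user throttling part can be sketched as a sliding-window rate limiter keyed by the authenticated user. The limits and the in-memory store are assumptions for illustration; a real deployment would back this with something shared like Redis:

```python
import time
from collections import defaultdict

# Hypothetical limits; tune these per application.
MAX_SUBMISSIONS = 5     # allowed submissions per user
WINDOW_SECONDS = 60.0   # within this sliding window

_submissions = defaultdict(list)  # user_id -> list of submission timestamps

def allow_submission(user_id, now=None):
    """Return True if this authenticated user may submit right now.
    Keeps only timestamps inside the window, then checks the count."""
    now = time.monotonic() if now is None else now
    recent = [t for t in _submissions[user_id] if now - t < WINDOW_SECONDS]
    if len(recent) >= MAX_SUBMISSIONS:
        _submissions[user_id] = recent
        return False  # over the limit: reject without recording
    recent.append(now)
    _submissions[user_id] = recent
    return True
```

Because the key is an authenticated user rather than an IP address, a bot can't dodge the limit just by rotating addresses; it would have to register accounts, which is exactly the chokepoint you control.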
Without all this, you don't really have a handle on anything regarding the validity or frequency of submissions. You mentioned tracking a user's mouse movements. I'm not sure how you intend to implement this, but if all a bot needs to do is submit some extra data that "looks like mouse movements", that's easily circumvented too. "Mouse movements" are just data submitted to the server; you have no idea whether that data was actually generated by a mouse.
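To illustrate how cheap that circumvention is, here's a hypothetical sketch of a bot synthesizing a plausible-looking mouse trail. The point format (`x`, `y`, `t` in milliseconds) is an assumption about what such a scheme might collect:

```python
import random

def fake_mouse_trail(n_points=30):
    """Generate a trail of points that drifts across the screen with
    human-ish jitter and timing. To the server it's indistinguishable
    from real telemetry: it's just numbers in a request body."""
    x, y, t = 100.0, 100.0, 0.0
    trail = []
    for _ in range(n_points):
        x += random.gauss(3.0, 1.5)   # drift right with jitter
        y += random.gauss(1.0, 1.5)   # slight downward drift
        t += random.uniform(8, 25)    # millisecond-scale intervals
        trail.append({"x": round(x, 1), "y": round(y, 1), "t": round(t)})
    return trail
```

A dozen lines like these defeat the scheme entirely, which is why behavioral signals only raise the effort bar slightly rather than providing any guarantee.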
In short: protecting a web form against bots is possible through various techniques, including hidden honeypot fields, authentication tokens and CAPTCHAs. But if you need an open-for-all API that anybody can submit to, all of this is pretty pointless.
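For completeness, the honeypot idea can be sketched in a few lines. The field name `website` and the form shape are assumptions; the trick is simply a field hidden via CSS that real users never see, while naive bots fill in every field they find:

```python
# The form would include something like:
#   <input type="text" name="website" style="display:none" tabindex="-1">
# Real users never see or fill it; many naive bots fill every field.

def looks_like_bot(form):
    """Honeypot check on the submitted form dict: any non-blank value
    in the hidden field flags the submission as likely automated."""
    return bool(form.get("website", "").strip())
```

This only filters out the laziest bots, of course; anything targeted at your site specifically will just skip the hidden field.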