Sorry, this response will be somewhat off topic; my rationale is explained below.
Why does classifying your test cases as positive or negative matter? You have to perform both kinds of tests anyway. I prefer classifying tests as "primary" and "alternate". Primary test cases describe happy paths, i.e. the paths that allow the end user to actually do her job with your software. Alternate test cases describe the paths that impede the end user from doing her job or achieving a result (invalid input, wrong credentials, resource unavailable, ...).
This is my definition for classifying tests. These are my own words (borrowed from outside articles, with sharpened definitions). This classification is useful to me because I write and perform primary tests first, then alternate tests. I write primary tests thinking about what the user wants to do and how (UX mode). I write alternate tests thinking about what could actually go wrong: corrupt system data, leaked corporate information, and all the other evils I can think of (QA mode).
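To make the split concrete, here is a minimal sketch in Python. The `withdraw` function and its tests are purely illustrative (not from any real codebase): the primary test exercises the happy path where the user gets her result, and the alternate test exercises a path that impedes her (an overdraft attempt).

```python
def withdraw(balance, amount):
    """Return the new balance after withdrawing `amount`."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Primary (happy-path) test: the user achieves her goal.
def test_withdraw_primary():
    assert withdraw(100, 30) == 70

# Alternate test: an input that blocks the user (overdraft).
def test_withdraw_alternate_overdraft():
    try:
        withdraw(100, 200)
    except ValueError:
        pass  # the failure path behaved as expected
    else:
        raise AssertionError("expected ValueError for overdraft")

test_withdraw_primary()
test_withdraw_alternate_overdraft()
```

Writing `test_withdraw_primary` first mirrors the UX-mode/QA-mode order described above: specify what success looks like, then enumerate the ways it can be denied.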
Using two particular words to classify tests (positive/negative) matters only if many people around you on the same project use those words. In that case you should agree on a common definition (and on classification criteria). I don't think you'll get absolute answers by googling these words, just echoes of local testing cultures... unless you find a single central authority that provides precise definitions, such as a book on testing, a website on testing, or a corporate document on testing.