That process can be guided by a component approach, where you identify coherent sets of files (an application, a project, a library).
In terms of history (in a source control tool), a coherent set means it will be labelled, branched or merged as a whole, independently of the other sets of files.
For a distributed version control system (like Git), each of those sets of files is a good candidate for a Git repo of its own, and you can then group the ones you need for a specific project in a parent repo with submodules.
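As a minimal sketch of that grouping, assuming a `server` and a `client` component (all names and paths below are illustrative, with local repos standing in for remote URLs), a parent repo can record each component as a submodule pinned to a precise commit:

```shell
set -e
# Identity config so the demo commits work in any environment.
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com \
       GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
tmp=$(mktemp -d) && cd "$tmp"

# Each coherent set of files gets its own repo.
for name in server client; do
  git init -q "$name"
  ( cd "$name" && echo "$name code" > main.txt \
    && git add . && git commit -qm "init $name" )
done

# The parent "project" repo groups both components as submodules.
# (protocol.file.allow=always is needed for local-path submodules
# in recent Git versions; real projects would use remote URLs.)
git init -q project && cd project
git -c protocol.file.allow=always submodule add -q "$tmp/server" server
git -c protocol.file.allow=always submodule add -q "$tmp/client" client
git commit -qm "Group server and client as submodules"

git submodule status    # one line per component, pinned to a commit
```

A fresh `git clone --recurse-submodules` of the parent repo would then check out both components at exactly the recorded revisions.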
I describe this approach, for instance, in:
- "Git repository setup for a project that has a server and client" (server and client being two obvious coherent separate sets which benefit from having their own repo)
- "What is Component-Driven Development?"
The opposite (keeping everything in one repo) is called the "system-based approach", but it can lead to a huge Git repo which, as I mentioned in "Performance for Git", isn't compatible with how Git is implemented.
The OP onionjake asks in the comments:
Could you please include more information on the subtleties of identifying components?
This process (of identifying "components", which in turn become Git repos) is guided by the software architecture of your system.
Any subset which acts as an independent set of files is a good candidate for its own repo. It can be a library or DLL, but also part of an application (a GUI, a client vs. a server, a dispatcher, ...).
Each time you identify a group of tightly linked files (meaning modifying one will likely affect the others), they should be part of the same component, or, in Git terms, the same repo.
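One rough heuristic for spotting such tightly linked files is to look at which files keep appearing in the same commits. The sketch below builds a toy repo (filenames `gui.c`, `gui.h`, `README` are invented for the demo) and then, for a chosen file, counts the other files most often committed alongside it; in a real repo you would run only the final pipeline:

```shell
set -e
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com \
       GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

# Toy history: gui.c and gui.h always change together,
# while README changes on its own.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
echo a > gui.c; echo a > gui.h; echo a > README
git add . && git commit -qm "init"
echo b >> gui.c; echo b >> gui.h
git add . && git commit -qm "tweak gui"
echo b >> README
git add . && git commit -qm "docs"

# For every commit touching gui.c, list the other files in that commit
# and count how often each one co-changes with it.
git log --format=%H -- gui.c \
  | while read c; do git show --name-only --format= "$c"; done \
  | grep -v '^$' | grep -vx 'gui.c' \
  | sort | uniq -c | sort -rn
```

Here `gui.h` tops the list (it shares every commit with `gui.c`), suggesting both belong in the same component; files that rarely or never co-change are candidates for a separate one. This is only a starting point, of course: the architecture, not the commit log, has the final say.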