I think the main thing you haven't considered is the likelihood of collisions in your random file names.
With such short names, you only have 36^6 = 2,176,782,336 unique names, which means you have roughly even odds of a collision by the time you store about 55,000 files (http://en.wikipedia.org/wiki/Birthday_problem) - not a huge number (and of course there's always a chance of getting one much earlier).
I like your colleague's approach simply because it guarantees unique names, regardless of how it affects the filesystem.
If you absolutely don't want to rely on the database to generate a consistent sequence, you can use UUIDs for the names.
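In Python that's a one-liner with the standard library; uuid4 gives you 122 random bits, so collisions stop being a practical concern (the ".pdf" extension here is just a placeholder):

```python
import uuid

# uuid4() is 122 bits of randomness - no database round-trip needed,
# and the name space is vastly larger than 36^6.
name = uuid.uuid4().hex        # 32 hex characters
filename = name + ".pdf"       # extension is illustrative only
print(filename)
```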
You also seem to be planning for very deep trees - how many files (and tenants) do you expect to have? A reasonable rule of thumb is up to 10,000 files (actual, not just possible) per directory, and modern filesystems can comfortably handle more. Three levels of sub-directories are almost certainly overkill.
Also, if you do need to split the tenants into multiple directories, hash them first (or use database ids) - using natural names can lead to very unbalanced "buckets" (probably not a big issue here, but it doesn't cost anything to do it correctly).
Finally, how big are the files? Depending on your actual use case, storing them in the database may be a perfectly reasonable approach.