To answer your question: you would only be likely to have issues with hash collisions once you approach roughly 2^128 files in your table (the birthday bound for a 256-bit output). This assumes that inputs may be of any length and that the hash algorithm is ideal (SHA-256 is considered collision-resistant in practice, though this is not proven in theory) with an output size of exactly 256 bits.
If you have under a few billion files, you should be fine.
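To put a number on "you should be fine", here is a minimal sketch of the standard birthday-bound approximation, p ≈ 1 − exp(−n² / 2^(b+1)), for n random b-bit hashes (the function name and the 5-billion-file figure are illustrative, not from the question):

```python
from math import exp

def collision_probability(n, bits=256):
    # Birthday-bound approximation: probability of at least one
    # collision among n uniformly random `bits`-bit hash values.
    return 1.0 - exp(-(n * n) / 2.0 ** (bits + 1))

# A few billion files against a 256-bit hash: the exponent is on the
# order of 1e-58, so the probability underflows to zero in a float.
p = collision_probability(5_000_000_000)
print(p)
```

In other words, at a few billion rows the collision risk is not merely small; it is indistinguishable from zero at double precision.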
Now for my recommendation: you would need to tell us more about your intended use, but your first thought is closer to correct than the hashing approach.
I would use a table like this (T-SQL Syntax for SQL Server):
CREATE TABLE [File]
(
    [Id]   BIGINT IDENTITY NOT NULL,
    -- MAX types cannot be used as index key columns in SQL Server;
    -- NVARCHAR(450) stays within the 900-byte index key limit.
    [Path] NVARCHAR(450) NOT NULL,
    PRIMARY KEY ([Id])
);

CREATE NONCLUSTERED INDEX [File_Path_IX] ON [File] ([Path]);
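For completeness, a lookup against that table is then a straightforward index seek (the path literal here is just an example, not from the question):

```sql
-- Existence check; the nonclustered index on [Path] lets the engine
-- seek directly to the row instead of scanning the table.
SELECT [Id]
FROM [File]
WHERE [Path] = N'C:\data\example.txt';
```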
Then let your database take care of indexing and making the searches fast. If, and only if, you experience a major performance issue later down the road, demonstrated by profiling, should you consider switching to a hashing approach. Hashing imposes a significant computational cost on your preprocessing and introduces complications such as detecting and resolving hash collisions if and when they occur.