Question

Git internally stores objects (blobs, trees) in the .git/objects/ folder. Each object can be referenced by a SHA1 hash that is computed from the contents of the object.

However, objects are not stored directly inside the .git/objects/ folder. Instead, each object is stored inside a folder named after the first two characters of its SHA1 hash, with the remaining characters used as the file name. So an object with the hash b7e23ec29af22b0b4e41da31e868d57226121c84 would be stored at .git/objects/b7/e23ec29af22b0b4e41da31e868d57226121c84
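For concreteness, here is a minimal Python sketch (the function name is just for illustration) of how an object's location on disk follows from its SHA1: git hashes a small header plus the content, then uses the first two hex characters as the directory name.

import hashlib

# Sketch of git's loose-object naming: the hash is SHA1 over
# "<type> <size>\0" + content (the file on disk is additionally
# zlib-compressed, but the hash covers the uncompressed data).
def loose_object_path(content: bytes, obj_type: str = "blob") -> str:
    header = f"{obj_type} {len(content)}\0".encode()
    sha1 = hashlib.sha1(header + content).hexdigest()
    return f".git/objects/{sha1[:2]}/{sha1[2:]}"

print(loose_object_path(b"hello world\n"))
# should print .git/objects/3b/18e512dba79e4c8300dd08aeb37f8e728b8dad,
# the same hash `echo "hello world" | git hash-object --stdin` reports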

Why does Git subdivide its object storage this way?

The resources I could find, such as the page on Git's internals on git-scm, only explained how, not why.


Solution

It is possible to put all the files in one directory, though sometimes that directory can become rather large. Many file systems have a limit on how many files a single directory can hold. Want to put a git repository on a FAT32-formatted USB stick? You can only store 65,535 files in a single directory. This means it is necessary to subdivide the directory structure so that filling a single directory is less likely.

This can even become a problem with other file systems and larger git repositories. A relatively small git repo that I've got hanging around (about 360 MiB) has 181,546 objects for 11k files. Pull the Linux repo and you've got 4,374,054 objects. If you were to put these all in one directory, it would be impossible to check out and would crash (for some meaning of 'crash') the file system.
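To put those numbers in perspective, here is a back-of-the-envelope Python sketch (assuming, as a worst case, that every object is stored loose rather than packed): SHA1 output is effectively uniform, so each of the 256 prefix directories ends up with roughly count / 256 entries.

# Rough arithmetic for the repositories mentioned above.
for name, count in [("360MiB repo", 181_546), ("Linux repo", 4_374_054)]:
    print(f"{name}: about {count // 256:,} objects per prefix directory")
# 360MiB repo: about 709 objects per prefix directory
# Linux repo: about 17,086 objects per prefix directory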

So? You split it up by byte. Similar approaches are used by applications such as Firefox:

~/Li/Ca/Fi/Pr/7a/Cache $ ls
0/           4/           8/           C/           _CACHE_001_
1/           5/           9/           D/           _CACHE_002_
2/           6/           A/           E/           _CACHE_003_
3/           7/           B/           F/           _CACHE_MAP_
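The same kind of bucketing is easy to retrofit onto any flat directory of hash-named files. A rough Python sketch of the idea (purely illustrative; this is not what Firefox actually runs) might look like this:

import os
import shutil

# Shard hash-named files into one subdirectory per leading hex digit
# (16 buckets, as in the listing above; git uses two digits for 256).
def shard_by_prefix(cache_dir: str, prefix_len: int = 1) -> None:
    for name in os.listdir(cache_dir):
        src = os.path.join(cache_dir, name)
        if not os.path.isfile(src) or name[0] not in "0123456789abcdefABCDEF":
            continue  # skip subdirectories and non-hash bookkeeping files
        bucket = os.path.join(cache_dir, name[:prefix_len].upper())
        os.makedirs(bucket, exist_ok=True)
        shutil.move(src, os.path.join(bucket, name))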

Beyond this, it also comes down to a question of performance. Consider NTFS Performance with Numerous Long Filenames:

Windows NT takes a long time to perform directory operations on Windows NT file system (NTFS) formatted drives that contain a large number of files with long file names (names that do not conform to the 8.3 convention) in a single directory.

When NTFS enumerates files in a directory, it has to look up the 8.3 names associated with the long file names. Because an NTFS directory is maintained in a sorted state, corresponding long file names and 8.3 names are generally not next to one another in the directory listing. So, NTFS uses a linear search of the directory for every file present. As a result, the amount of time required to perform a directory listing increases with the square of the number of files in the directory. For small numbers of files (less than a few hundred) the time delay is negligible. But as the number of files in a directory increases to several thousand, the time required to perform a listing can increase to minutes, hours, or even days. The problem is aggravated if the long file names are very similar -- differing only in the last few characters.

With files named after SHA1 checksums, this could be a recipe for disaster and abysmal performance.
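To see why that hurts, here is a toy Python model of the behaviour the note describes (illustrative arithmetic, not NTFS measurements): if every entry looked up during enumeration costs a linear search, listing n files takes on the order of n * n / 2 name comparisons.

# Quadratic growth of a linear-search-per-entry directory listing.
for n in (100, 1_000, 10_000, 100_000):
    print(f"{n:>7} files -> ~{n * n // 2:,} name comparisons")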

While that tech note dates from Windows NT 3.5 (and NTFS 1.2, commonly used from 1995 to the early 2000s), the same problem can be seen in file systems such as ext3, whose directory implementation was a linked list requiring O(n) lookup. And even with the later HTree indexing change:

While the HTree algorithm significantly improved lookup times, it could cause some performance regressions for workloads that used readdir() to perform some operation on all of the files in a large directory.
...
One potential solution to mitigate this performance issue, which has been suggested by Daniel Phillips and Andreas Dilger, but not yet implemented, involves the kernel choosing free inodes whose inode numbers meet a property that groups the inodes by their filename hash. Daniel and Andreas suggest allocating the inode from a range of inodes based on the size of the directory, and then choosing a free inode from that range based on the filename hash. This should in theory reduce the amount of thrashing that results when accessing the inodes referenced in the directory in readdir order. It is not clear that this strategy will result in a speedup, however; in fact it could increase the total number of inode blocks that might have to be referenced, and thus make the performance of readdir() + stat() workloads worse. Clearly, some experimentation and further analysis is still needed.

Incidentally, this bit on how to improve performance was from 2005, the same year git was released.

As seen with Firefox and many other applications that keep lots of hash-named cache files, splitting up the cache by byte has negligible performance cost, and when used cross-platform with systems that may be a bit on the old side, it could very well be the difference between the program working or not.

Other tips

There are two reasons why this is desirable.

Directories cannot be arbitrarily large. E.g. some (reasonably modern!) filesystems are limited to 32,000 entries in a single directory. The number of commits in the Linux kernel is in that order of magnitude. Subdividing the objects by the first two hex digits of their hash limits the top-level directory to 256 entries. The sub-directories will be much smaller for typical git repos.

Directories are scanned linearly. In some file systems (e.g. the Ext* family), a directory is a linked list or table of entries. To look up a file, the whole list is scanned until a matching file name is found. Clearly, this is undesirable for performance. Many modern file systems additionally use hash tables or B-trees for fast lookup, but not everyone may have those. Keeping each directory small means fast access times.

These 256 buckets allow git to store larger repositories on file systems that limit the number of files in a directory, and provide decent performance on file systems that become slower with directories containing many files.
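If you want to check the bucket sizes on a repository of your own, a short Python sketch (assumes it is run from the top of a git work tree; objects in pack files are not counted) does the job:

import os
from collections import Counter

# Count the loose objects under each two-character prefix directory.
def fanout_sizes(objects_dir: str = ".git/objects") -> Counter:
    sizes = Counter()
    for entry in os.listdir(objects_dir):
        path = os.path.join(objects_dir, entry)
        if len(entry) == 2 and os.path.isdir(path):
            sizes[entry] = len(os.listdir(path))
    return sizes

sizes = fanout_sizes()
if sizes:
    print(f"{len(sizes)} buckets; the largest holds {max(sizes.values())} loose objects")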

There are some filesystems and/or filesystem implementations and/or libc implementations where performance degrades with large numbers of directory entries.

Licensed under: CC-BY-SA with attribution