Question

I know Git gets slower as a repository grows.
But why?
Since Git stores content as separate directories and files under .git, I can't see why operations would slow down. Take the commit operation: recently I cloned the WebKit repository, branched from master, and committed a 2 KB file to the branch, and it felt slower than the same operation on my small repository.
Since I have not read through the Git source code, I am guessing that the commit operation comprises storing the file to disk, inserting the commit log entry, updating the index, and updating HEAD to the new SHA value.

The write is fast.
The insert is fast (I assume, if inserting means appending to a log file).
The index update is fast.
The HEAD update is fast.
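The steps guessed at above map closely onto Git's actual plumbing commands. This is a rough sketch (assuming `git` is installed; the file name and message are made up) of what a single `git commit` does under the hood:

```shell
# Decompose `git commit` into its plumbing steps in a throwaway repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "Example"
git config user.email "example@example.com"
echo "hello" > file.txt

# 1. Store the file's contents as a blob object and stage it in the index.
git update-index --add file.txt

# 2. Write the current index out as a tree object.
tree=$(git write-tree)

# 3. Create a commit object pointing at that tree.
commit=$(echo "initial commit" | git commit-tree "$tree")

# 4. Move the ref that HEAD points to onto the new commit.
git update-ref HEAD "$commit"

git log --oneline   # shows the new commit
```

Each step writes at most a handful of small objects and updates a few small files, which is why one would expect commit to be cheap regardless of repository size.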

So why is it slow? Can anyone explain it to me?
Thanks.

Some answers are helpful but not entirely convincing; it would be great if you could provide some code snippets to support your answer.


Solution

Committing should be roughly constant in time, since it only needs to write the changed blobs and trees (git write-tree), create a new commit object (git commit-tree), and update the HEAD ref.

I did benchmarks of different SCMs in the past and git commit was indeed not affected by tree size, repository size, history length, etc.
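One way to see this without a full benchmark is to count loose objects before and after a commit: regardless of how many files the repository contains, committing a one-file change in a flat tree writes only three new objects (the blob, the root tree, and the commit). A sketch, assuming `git` is installed (the file names and counts are arbitrary examples):

```shell
# Show that a small commit adds a fixed number of objects,
# independent of how many files already exist in the repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "Example"
git config user.email "example@example.com"

# Populate the repo with many files (a flat tree).
for i in $(seq 1 200); do echo "$i" > "f$i.txt"; done
git add .
git commit -qm "bulk import"

before=$(git count-objects | awk '{print $1}')

# Commit a change to a single file.
echo "changed" > f1.txt
git add f1.txt
git commit -qm "small change"

after=$(git count-objects | awk '{print $1}')
echo $((after - before))   # prints 3: new blob + new root tree + new commit
```

With nested directories the cost grows with the depth of the changed path (one new tree per directory level), but not with the total number of files in the repository.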

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow