Question

I have a very cool GitLab setup here:

  • Apache 2.2.22-1ubuntu1.4
  • GitLab 6.5 (integrated with Apache using mod_proxy)
  • Unicorn v4.3.1 (Rails web server)
  • 2 MBit up/down connection to the internet

However, when doing a 'git clone' or 'git pull', it fails for repositories > 10 MiB in size.

ubuntu~/Projects/git/myRepo(master|✔) % git pull 
Username for 'https://example.org': my.username@mydomain.de 
Password for 'https://my.username@mydomain.de@example.org': 
remote: Counting objects: 7798, done. 
remote: Compressing objects: 100% (4132/4132), done. 
fatal: The remote end hung up unexpectedlyiB | 222 KiB/s     
fatal: early EOF
fatal: index-pack failed 

It seems that it is able to copy about 8 MiB of data and runs for about 30 seconds at most. The problem is reproducible every time and shows the same symptoms over and over.

I have read http://jinsucraft.wordpress.com/2013/01/08/when-github-git-clone-fails-with-early-eof-and-index-pack-failed/ and tried:

git config --global http.postBuffer 524288000

on the client to no avail.

Does anyone have an idea what could cause this?

Solution

The cause of this problem can be a timeout (or a similar limit, e.g. on the amount of data): a server-side timeout occurs, which closes the HTTP connection and results in the client-side "early EOF" error message. Such timeouts can be configured in several places (I'm listing them here because other web servers may have similar settings, so they might give you a hint where to look):

  • Apache's Timeout determines the time of absolute silence (i.e. no data transmitted) before the connection is severed. Since data was received continuously, this was not the problem here.
  • Apache mod_proxy's ProxyTimeout is a specialized variant of the aforementioned Timeout. Again, since it is not a limit on the total request time, it was not the problem here.
  • Apache can limit the size of a POST request using LimitRequestBody; the default is no limit, but this may vary in your distribution's configuration (see the Apache sketch after this list).
  • GitLab's Unicorn configuration example suggests a timeout of 30 seconds. This is an absolute timeout, i.e. every request taking longer than 30 seconds will be terminated.
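
For reference, the Apache directives mentioned above are set in the server or virtual host configuration. This is only a sketch with placeholder values, not a recommendation for your setup:

# Maximum period of network silence before Apache drops the connection
Timeout 300
# The same idea for proxied backend connections (mod_proxy)
ProxyTimeout 300
# Maximum request body size in bytes; 0 means no limit (the default)
LimitRequestBody 0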

Increasing the timeout in the Unicorn config should solve your problem. Keep in mind that the number of parallel requests is also limited by Unicorn: cloning a large repository blocks one request but causes almost no CPU load. If your GitLab server does not have a high-traffic profile, you should consider increasing the worker_processes number.
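
As a sketch, both values live in GitLab's config/unicorn.rb (the exact path and surrounding settings depend on your installation; the numbers below are examples only, pick values that match your hardware and traffic):

# config/unicorn.rb (example values only)
worker_processes 3   # number of requests Unicorn can serve in parallel
timeout 120          # absolute per-request timeout in seconds (the example config ships with 30)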

As a side note: the gitlab.yml configuration also offers a git timeout; this timeout limits git operations such as calculating the diff of several commits. It has no impact on the timeout when cloning/pulling.
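
That setting lives in the git: section of gitlab.yml and looks roughly like this (a sketch; the exact keys may differ between GitLab versions):

git:
  bin_path: /usr/bin/git
  timeout: 10   # limits internal git operations such as diff calculation, not HTTP clone/pull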

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow