Question

I have my git bare repo initialized in remote server folder /home/bare/mygit.git

I've cloned this repo:

git clone user@ip.of.my.server:/home/bare/mygit.git .

Then I was working with the project, doing commits, pushes, etc.

But today, when doing a push, I got this error:

user@host:/var/www/mygit (master)$ git push origin master
user@ip.of.my.server's password: 
Counting objects: 5, done.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 297 bytes, done.
Total 3 (delta 2), reused 0 (delta 0)
fatal: Unable to create '/home/bare/mygit.git/refs/heads/master.lock': Invalid argument
fatal: The remote end hung up unexpectedly
fatal: recursion detected in die handler

I searched for this issue, but it seems most people run into a permissions problem, and in that case the error looks different (Permission denied or similar).

Permissions are fine, and cloning/pulling/fetching all work. There are no errors in the logs either.


Solution

It was not a Git-related issue. It turned out I was unable to create certain new files at all.

When I ran dmesg I saw a lot of kernel errors. I decided to reboot the server first and dig deeper afterwards, but after the server was restarted the issue was gone.

Thanks everyone for helping!

Other tips

Note, with Git 2.14.x/2.15 (Q3 2017), that error message will be less frequent.

See commit 4ff0f01 (21 Aug 2017) by Michael Haggerty (mhagger).
(Merged by Junio C Hamano -- gitster -- in commit f2dd90f, 27 Aug 2017)

The code to acquire a lock on a reference (e.g. while accepting a push from a client) used to immediately fail when the reference is already locked.

Now it waits for a very short while and retries, which can make it succeed if the lock holder was holding it during a read-only operation.

More precisely:

refs: retry acquiring reference locks for 100ms

The philosophy of reference locking has been, "if another process is changing a reference, then whatever I'm trying to do to it will probably fail anyway because my old-SHA-1 value is probably no longer current".

But this argument falls down if the other process has locked the reference to do something that doesn't actually change the value of the reference, such as pack-refs or reflog expire.
There actually is a decent chance that a planned reference update will still be able to go through after the other process has released the lock.

So when trying to lock an individual reference (e.g., when creating "refs/heads/master.lock"), if it is already locked, then retry the lock acquisition for approximately 100 ms before giving up. This should eliminate some unnecessary lock conflicts without wasting a lot of time.
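The conflict described above is easy to reproduce by hand in a throwaway repo (a hypothetical sketch, not from the original question): if a leftover `.lock` file exists for a ref, as if another process held it, an update of that ref fails; with retrying disabled via `core.filesRefLockTimeout 0` (Git >= 2.15) it fails immediately.

```shell
# Sketch: a stale per-ref lock file blocks a ref update,
# similar to the failed push in the question.
tmp=$(mktemp -d)
git -c init.defaultBranch=master init -q "$tmp"
git -C "$tmp" -c user.name=t -c user.email=t@example.com \
    commit -q --allow-empty -m init
branch=$(git -C "$tmp" symbolic-ref --short HEAD)

# Simulate another process holding the lock on this ref:
touch "$tmp/.git/refs/heads/$branch.lock"

# Disable retrying so the conflict surfaces immediately (Git >= 2.15):
git -C "$tmp" config core.filesRefLockTimeout 0
if git -C "$tmp" update-ref "refs/heads/$branch" HEAD 2>/dev/null; then
    status=updated
else
    status=lock-conflict
fi
echo "$status"
rm -rf "$tmp"
```

With the default 100 ms retry window the update would still fail here, since nothing ever removes the stale lock; the retry only helps when the other holder releases it quickly.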

Add a configuration setting, core.filesRefLockTimeout, to allow this setting to be tweaked.
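The knob can be adjusted with plain `git config`; a quick sketch in a throwaway repo (the value is in milliseconds; per the Git docs, `0` disables retrying and `-1` retries indefinitely):

```shell
# Sketch: tweak the per-ref lock retry window (Git >= 2.15).
tmp=$(mktemp -d)
git init -q "$tmp"
git -C "$tmp" config core.filesRefLockTimeout 300   # retry for up to 300 ms
timeout=$(git -C "$tmp" config core.filesRefLockTimeout)
echo "$timeout"
rm -rf "$tmp"
```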

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow