Question

When I run mongorestore I get this error: error running create command: 24: Too many open files.

I've updated my launchctl limit and ulimit.

When I run launchctl limit I get:

cpu         unlimited      unlimited      
filesize    unlimited      unlimited      
data        unlimited      unlimited      
stack       8388608        67104768       
core        0              unlimited      
rss         unlimited      unlimited      
memlock     unlimited      unlimited      
maxproc     709            1064           
maxfiles    256000         256000

When I run ulimit -a I get:

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 256000
pipe size            (512 bytes, -p) 1
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 709
virtual memory          (kbytes, -v) unlimited

My mongo version is 3.4.7

My macOS is 10.13.2

How can I complete mongorestore?

Also worth mentioning: if I try running mongorestore more than once without logging out or restarting the computer, I get cannot make pipe for command substitution: Too many open files and my computer crashes (this was a brand-new iMac as of August 2017).

Solution

ulimit can be set for the current terminal session by running ulimit -n 10000.
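
For example (a sketch, assuming a POSIX shell such as bash or zsh):

ulimit -n          # show the current open-files limit for this shell
ulimit -n 10000    # raise it for this shell and any commands it starts
ulimit -n          # confirm the new value took effect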

On Linux you can make the change persistent by setting the nofile value to 10000 in the /etc/security/limits.conf file.
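
For example, a hypothetical limits.conf entry for a user named mongodb (the format is domain, type, item, value; the user must log in again before it takes effect):

mongodb    soft    nofile    10000
mongodb    hard    nofile    10000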

The open-files limit commonly recommended for MongoDB on Linux/Unix systems is 64000; be sure your system actually honors the number you set. If the database you are restoring is huge, break the job into smaller pieces: create the database, then restore one collection (or a few collections) at a time. If you have a memory limit such as 4-8 GB of RAM, it is better to restore collection by collection instead of the whole database.
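
A sketch of restoring a few collections at a time (the database name mydb and the collection names are made up; the dump/ directory layout is mongodump's default):

mongorestore --db mydb --collection users dump/mydb/users.bson
mongorestore --db mydb --collection orders dump/mydb/orders.bson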

Since you are running mongorestore on a local system, I assume this is not a production machine and nothing business-critical; if that assumption is correct, please try restoring a few collections at a time and see whether you still face the same issue.

Thanks @wylliam-judd for sharing the solution. I didn't face this issue myself, as I used Linux on a remote host accessed through PuTTY in a single window; anyway, I'm adding your solution here to help others:

The solution was to run ulimit -n 64000 in the console running mongod rather than in the console running mongorestore. I did run it in both consoles to be safe, so I can't be sure whether the ulimit in the mongorestore console was necessary. I also attempted many other solutions over those months, and I can't be entirely sure that none of them had an impact, but the difference that finally mattered was running ulimit in the console running mongod.
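
Putting that together, a sketch of the sequence (the dbpath is hypothetical). In the console that will run mongod:

ulimit -n 64000
mongod --dbpath /data/db

Then, in the console that will run mongorestore:

ulimit -n 64000    # possibly unnecessary, but run to be safe
mongorestore dump/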

OTHER TIPS

Chunks are just key-range metadata; you cannot restore chunk by chunk. ulimit values (limits) are per user, so you need to give a higher open-files value to the specific user ID that you use for the restore.
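
One way to check the open-files limit a given user actually gets (the mongodb username here is hypothetical):

sudo -u mongodb sh -c 'ulimit -n'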

You wrote: "if I try running mongorestore more than once without logging out of the computer or restarting, I get cannot make pipe for command substitution: Too many open files and my computer crashes."

That sounds like a problem: it means your kernel isn't reaping the file handles when the process that owns them is killed.
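
A rough way to see whether file handles are piling up between runs (lsof counts sockets and pipes as well as plain files, and the second command assumes a single mongod process, so treat the numbers as a gauge):

lsof -n | wc -l                        # approximate count of open file handles system-wide
lsof -n -p "$(pgrep mongod)" | wc -l   # handles held by the mongod process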

  1. You may want to set up your ulimit in the fashion mentioned here.
  2. Restart (killing those file handles).
  3. Check that the limits are right (see the commands after this list).
  4. Try again.
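
To check that the limits are right after the restart (step 3), the same commands from the question apply:

launchctl limit maxfiles
ulimit -n
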
Licensed under: CC-BY-SA with attribution
Not affiliated with dba.stackexchange