Question

I'm performing a dump from Elasticsearch using Elasticsearch-Exporter on OS X Mavericks:

node /usr/bin/node_modules/elasticsearch-exporter/exporter.js -j ${esIndexName} -f esbackup

I have an application that runs two Elasticsearch nodes, which along with the node started by the elasticsearch command adds up to a total of three nodes; the node started by the elasticsearch command is the master. When I run the export command against my index, I get this after a few seconds of successful loading:

2014-05-07T14:31:38.325-0700 [elasticsearch[Rancor][[es][1]: Lucene Merge Thread #0]] [WARN] merge.scheduler [][] - [Rancor] [es][1] failed to merge
java.io.FileNotFoundException: /private/var/data/core/elasticsearch_me/nodes/0/indices/es/1/index/_f_es090_0.tip (Too many open files)
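To confirm that the process really is exhausting file descriptors, I count its open files while the export runs. This is a rough check; the pgrep pattern assumes the standard ES 1.x main class:

# Find the Elasticsearch java process (assumes the 1.x bootstrap class name)
ES_PID=$(pgrep -f org.elasticsearch.bootstrap.Elasticsearch | head -n 1)

# Count the file descriptors it currently holds open
lsof -p "$ES_PID" | wc -l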

I've tried the following:

launchctl limit 10000

sudo launchctl limit 40000 65000

elasticsearch soft nofile 32000

elasticsearch hard nofile 32000

adding -XX:-MaxFDLimit to my application's JVM arguments

None of these solved the problem. Occasionally the export finishes with no errors, but most of the time I run into the error above. Does anyone have any ideas or hints about what my issue might be?

Edit:

$ launchctl limit
cpu unlimited unlimited
filesize unlimited unlimited
data unlimited unlimited
stack 8388608 67104768
core 0 unlimited
rss unlimited unlimited
memlock unlimited unlimited
maxproc 709 1064
maxfiles 10000 10240

$ sudo launchctl limit
Password:
cpu unlimited unlimited
filesize unlimited unlimited
data unlimited unlimited
stack 8388608 67104768
core 0 unlimited
rss unlimited unlimited
memlock unlimited unlimited
maxproc 709 1064
maxfiles 40000 65000


Solution

OK: if you are running multiple Elasticsearch nodes plus Node.js apps on a single Mac, I'd definitely make certain that your number of open files is bumped to the limits that ES recommends:

File descriptors

Make sure to increase the number of open file descriptors on the machine (or for the user running Elasticsearch). Setting it to 32k or even 64k is recommended.

In order to test how many open files the process can open, start it with -Des.max-open-files set to true. This will print the number of open files the process can open on startup.
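For example, with a 1.x install (path assumed to be the standard bin/elasticsearch launcher):

./bin/elasticsearch -Des.max-open-files=true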

Alternatively, you can retrieve the max_file_descriptors for each node using the Nodes Info API, with:

curl localhost:9200/_nodes/process?pretty
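If you only want the limit itself, you can filter the pretty-printed JSON; a quick sketch using grep (the max_file_descriptors field appears in the full response at the end of this answer):

curl -s 'localhost:9200/_nodes/process?pretty' | grep max_file_descriptors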

You need to make certain this is done for the user running ES, not just root (unless of course you are running it as root).

To do this I'd follow these directions: http://elasticsearch-users.115913.n3.nabble.com/quot-Too-many-open-files-quot-error-on-Mac-OSX-td4034733.html. Assuming you want 32k and 64k as the limits:

In /etc/launchd.conf put:

limit maxfiles 32000 64000
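/etc/launchd.conf usually doesn't exist by default and is root-owned, so one way to create or append to it, assuming you have sudo access:

echo 'limit maxfiles 32000 64000' | sudo tee -a /etc/launchd.conf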


Make sure your ~/.bashrc file is not setting the ulimit with something like "ulimit -n 1024".
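A quick way to check, assuming your shell init files live in the usual places:

grep -n 'ulimit' ~/.bashrc ~/.bash_profile ~/.profile 2>/dev/null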

Open a new terminal, and run:

launchctl limit maxfiles
ulimit -a
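If the change took effect, the maxfiles line should now show the new limits, e.g. (illustrative output; actual spacing varies):

maxfiles 32000 64000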

Don't forget to restart after you make these changes. Then, when you start Elasticsearch, pass in this command-line parameter:

elasticsearch -XX:-MaxFDLimit
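If your launch script doesn't forward JVM flags directly, an alternative sketch is to pass the flag through ES_JAVA_OPTS, assuming a standard 1.x bin/elasticsearch script (which appends that variable to the java command line):

ES_JAVA_OPTS="-XX:-MaxFDLimit" ./bin/elasticsearch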

After the above steps, on my Mac I get the following response from Elasticsearch:

curl http://localhost:9200/_nodes/process?pretty

{
  "cluster_name" : "elasticsearch",
  "nodes" : {
    "XXXXXXXXXXXXXXXXXXXXXXX" : {
      "name" : "Marrina Smallwood",
      "transport_address" : "inet[XX.XX.XX.XX:9300]",
      "host" : "MacBook-Pro-Retina.local",
      "ip" : "XX.XX.XX.XX",
      "version" : "1.1.1",
      "build" : "f1585f0",
      "http_address" : "inet[/XX.XX.XX.XX:9200]",
      "process" : {
        "refresh_interval" : 1000,
        "id" : 538,
        "max_file_descriptors" : 32768,
        "mlockall" : false
      }
    }
  }
}