Question

I am using Rackspace as a hosting provider, using their Cloud server hosting, with 256mb plan.

I am using Geronimo 2.2 to run my java application.

The server starts up with no problem and loads Geronimo quite fast. However, when I started to deploy my web application, it took forever, and once it is deployed, it takes forever to navigate through pages.

I've been monitoring the server activity: the CPU is not very busy, but 60% of the memory is being used up. Could this be the problem?

If so, what are my options? Should I consider upgrading this cloud server to something with more RAM, or changing a host provider to better suit my needs?


Edit: I should note that even if I don't deploy my application and just have Geronimo loaded, I sometimes get a connection timeout when I try to shut Geronimo down.

Also, the database is on the same server as the application (however, I wouldn't say it's query intensive).


Update:
After what @matiu suggested, I tried running free -m, and this is the output that I get:

             total       used       free     shared    buffers     cached
Mem:           239        232          6          0          0          2
-/+ buffers/cache:        229          9
Swap:          509        403        106

This is a totally different result from running ps ux, which is how I got my earlier 60% figure.

I also did an iostat check: about 25% iowait time, and the device is constantly reading and writing.


update:
Upgraded my hosting to 512 MB, and now it is up to speed! One thing I should note: I forgot about Java's Permanent Generation memory, which Geronimo also uses. So it turns out I do need more RAM, and more RAM did solve my problem. (As expected.) Yay.

Was it helpful?

Solution

I'm guessing you're running into 'swapping'.

As you'll know, Linux swaps some memory out to disk. This is great for memory that isn't accessed very often.

When Java starts eating heaps and heaps of memory, Linux starts:

  1. Swapping memory block A out to disk to make space to read in block B
  2. Reading/writing block B
  3. Swapping block B to disk to make space for some other block.

Since disk is thousands of times slower than RAM, as memory usage increases your machine grinds closer and closer to a halt.

With 256 MB Cloud Servers you get 512 MB of Swap space.


Checking:

You can check if this is the case with free -m .. this page shows how to read the output:
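As a concrete illustration, here is a minimal shell sketch of why the "-/+ buffers/cache" row is the one to read (the figures are copied from the free -m output quoted in the question; buffers and cache are reclaimable, so they don't count as "really used"):

```shell
# "free -m" output as quoted in the question.
free_output='             total       used       free     shared    buffers     cached
Mem:           239        232          6          0          0          2
-/+ buffers/cache:        229          9
Swap:          509        403        106'

# The "-/+ buffers/cache" row subtracts reclaimable buffers/cache from
# "used": here 229 MB is really used and only 9 MB is really free.
echo "$free_output" | awk '/^-\/\+/ {print "really used: " $3 " MB, really free: " $4 " MB"}'
```

Note the Swap row as well: 403 MB of a 509 MB swap partition in use, which fits the swapping diagnosis.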

Next I'd check with 'iostat 5' to see what the disk I/O rate on the swap partition is. I'd say a write rate of 300 blocks a second or more means you're almost dead in the water. Ideally you'd keep the swap partition's write rate below 50 blocks a second and its read rate below 500 blocks a second; if possible, both should be zero most of the time. Remember, disk is thousands of times slower than RAM.
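To make those thresholds concrete, here is a sketch against a fabricated iostat interval (the device names and figures are made up, not taken from the question; check /proc/swaps or swapon -s to see which device actually holds your swap). It flags any device writing more than 300 blocks a second:

```shell
# One fabricated "iostat 5" interval; xvda2 stands in for a swap device.
sample='Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
xvda1            12.40       480.00        36.00       2400        180
xvda2           310.50      1520.00       890.00       7600       4450'

# Columns: 1=device, 2=tps, 3=blocks read/s, 4=blocks written/s.
# Flag anything writing over 300 blocks a second -- "dead in the water".
echo "$sample" | awk 'NR > 1 && $4 + 0 > 300 {print $1 " writes " $4 " blk/s"}'
```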

You can check whether it's Java eating the RAM by running top and hitting shift+m to order the processes by memory consumption.
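A non-interactive equivalent (GNU ps assumed) that lists the top memory consumers in one shot:

```shell
# Top five processes by resident memory share (GNU ps; header included).
ps aux --sort=-%mem | head -6
```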

If you want, you can disable the swap partition with swapoff -a .. then open up the web console and hit the site a bit .. you'll soon see error messages in the console like 'OOM Killed process xxx' (OOM stands for Out of Memory). If you see those, that's Linux trying to satisfy memory requests by killing processes. Once that happens, it's best to hard reboot.
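Those OOM-killer messages also land in the kernel log, so dmesg (or /var/log/syslog, depending on the distro) is the place to look after the fact. The exact wording varies by kernel version; here is a grep sketch against a fabricated sample line:

```shell
# A fabricated kernel-log line in the usual OOM-killer format.
logline='Oct  1 12:00:01 host kernel: Out of memory: Killed process 4321 (java) total-vm:812340kB'

# Pull out which process the kernel killed.
echo "$logline" | grep -oiE 'killed process [0-9]+ \([^)]+\)'
```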


Fixing:

If it's Java using the RAM .. this link might help.

I think the easy fix would be just to upgrade the size of the Cloud Server.

You may find a different Java runtime works better.

If you run it in a 32 bit chroot it may use less RAM.

Other tips

You should consider running a virtual dedicated Linux server, from something like linode. You'd have to worry about how to start a Java service, and about things like firewalls, etc., but once you get it right, you are in effect your own hosting provider, able to do anything a standalone physical Linux box can do.

As for memory, I wouldn't upgrade until you have evidence that you do not have enough. 60% being used up is less than 100% used up...

Java normally assumes it can take whatever is assigned to it. Meaning, if you give it a max of 200 MB, it thinks it's OK to take 200 MB even though it's using much less. There is a way to make Java use less memory: the -Xincgc incremental garbage collector. It actually ends up giving chunks of memory back to the system when it no longer needs them. This is a bit of a kept secret, really. You won't see anyone point this out...
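As a sketch, the heap cap and the incremental collector can be passed through the JAVA_OPTS environment variable that most Java server startup scripts honor (the flag values below are illustrative for a 256 MB box, and the PermGen cap reflects the asker's later discovery; check your server's startup script, e.g. bin/setenv, or pass the flags directly on the java command line if it doesn't read JAVA_OPTS):

```shell
# Illustrative settings: small initial/max heap, a PermGen cap, and the
# -Xincgc incremental collector described above.
export JAVA_OPTS="-Xms64m -Xmx128m -XX:MaxPermSize=96m -Xincgc"
```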

Based on my experience, memory and CPU load on VPSes are quite related. Meaning, when the application server takes up all available memory, CPU usage starts to skyrocket, finally making the application inaccessible.

This is just a side effect, though - you really need to investigate where your problem originates!

If the memory consumption is very high, then you can have multiple causes:

  1. It's normal - maybe you have reached a point where all processes (application server, applications within it, background processes, daemons, the operating system, etc.) together need that huge amount of memory. This is the least probable scenario.
  2. Memory leak - can happen due to a bug in a framework or library (not likely) or in your own code (possible). This can be monitored and solved.
  3. Huge number of requests - each request takes both CPU and memory to process. Have a look at the correlation between requests per second and memory consumption; again, this can be monitored and resolved.

If you are interested in CPU usage:

  1. Again, monitor requests to your application. For a constant rate of requests, nothing extraordinary should happen.
  2. One component is exhausting most of the resources (maybe your database is installed on the same server and uses all the CPU power due to inefficient queries? A slow-query log would help.)
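If the database is MySQL (an assumption; the question doesn't say which database is in use), the slow-query log can be switched on with a my.cnf fragment like this. Paths and the threshold are illustrative, and option names vary by MySQL version (older versions use log_slow_queries instead):

```ini
# my.cnf -- log queries slower than 1 second (hypothetical paths/values)
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 1
```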

As you can see, it's not a trivial task, but you have tool support that can help you out. I personally use JavaMelody and Probe.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow