Question

I'm writing a web application to monitor a furniture factory's production flow. It has thousands of records to handle. So far, I'm running RoR on Mongrel + MySQL and it's really, really slow (2-4 minutes for some views). When I look at the RoR logs, the database queries don't seem slow (0-10ms).

Is RoR slow when it converts database rows to objects? Is Mongrel slow?

Edit: First thing: I was in the development environment. In the production environment, the slowest view takes 2 minutes (which would probably drop to less than 1 minute on a good computer; mine is 5 years old). With ruby-prof and a bit of common sense, I've found out which methods were slowing down the application. The problem is that single SQL queries are called in loops over large datasets:

ofs = Ofkb.find_by_sql ["..some large sql query..."]

for of in ofs # About 700-1000 elements
   ops = Operation.find(..the single query..)
   etc.
end

Here are ruby-prof results on those methods:

 %self     total     self     wait    child    calls  name
 32.19     97.91    97.91     0.00     0.00       55  IO#gets (ruby_runtime:0)
 28.31     86.39    86.08     0.00     0.32    32128  Mysql#query (ruby_runtime:0)
  6.14     18.66    18.66     0.00     0.00    12432  IO#write (ruby_runtime:0)
  0.80      2.53     2.42     0.00     0.11    32122  Mysql::Result#each_hash (ruby_runtime:0)

Problem is: I can't really avoid those single queries. I've got thousands of events from which I have to compute complex data. Right now I'm using memcached on those methods, which is OK unless you're the first to request the page.
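For reference, the caching is roughly like this (a minimal sketch; the method name and cache key are placeholders, not the real ones):

# Sketch of the memcached wrapper; compute_flow_stats and the cache key
# are illustrative, not the actual names in the app.
def flow_stats
  Rails.cache.fetch("ofkb/flow_stats/#{params[:id]}", :expires_in => 1.hour) do
    compute_flow_stats # the slow loop of single queries only runs on a cache miss
  end
end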


Solution

I'll agree with everyone else. You have to profile. There is no point in doing anything to your code until you know what specifically is causing the slowness. Trying to fix a problem without understanding the cause is like feeling ill and deciding to have lots of surgery until you feel better. Diagnose your problem first. It might be something small like a network setting, or it could be one bad line in your code.

Some tips for profiling:

How to Profile Your Rails Application

Performance Testing Rails Applications

At the Forge - Profiling Rails Applications

Once you have found the bottleneck you can figure out what to do.

I recommend these videos: Railslab Scaling Rails

Revised based on the prof results:

OK. Now that you can see that the problem is a calculation that loops through the results of one Active Record query and issues another query per iteration, I'd advise you to look into building a custom SQL statement that combines your initial selection criteria and the loop calculation, as sketched below. You can definitely speed this up by optimizing the SQL.
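For instance (a hedged sketch only; the table and column names are guesses, since the real schema isn't shown):

# One query instead of ~700-1000: join operations to the ofkbs selected
# by the original criteria. Column names and date bounds are hypothetical.
ops = Operation.find_by_sql([<<-SQL, start_date, end_date])
  SELECT operations.*
  FROM operations
  INNER JOIN ofkbs ON ofkbs.id = operations.ofkb_id
  WHERE ofkbs.created_at BETWEEN ? AND ?
SQL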

OTHER TIPS

How many of those 0-10ms queries are being executed per view access? What parts of your data model are being referenced? Are you using :include to get eager loading on your associations?

Rails is as slow as you make it. With understanding comes speed (usually!)

Expanding on the above, do you have has_many associations where, in particular, your view is referencing the "many" side without an :include? Without it, each reference to the detail records fires its own query; with it, your find(:all) on the master table is executed with a join to (or eager load of) the detail table. If you have large numbers of detail records and are processing all of them individually, this can get expensive.

Something like this:

Master.find(:all, :include => :details)

...might help. Still guessing from sparse info, though.

There's an old Railscast on the subject here.

While Ruby on Rails has a reputation for being slow, this sounds too extreme to be a simple problem with the language.

You should run a profiler to determine exactly which functions are slow and why. The most common thing slowing down a web application is the "n+1 problem": when you have n data items in your database, the app makes one query to fetch the list and then n more queries, one per item, instead of a single query that gets everything (illustrated below). But you can't know until you run the profiler. ruby-prof is one profiler I've used.
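A minimal illustration, assuming Ofkb has_many :operations (an assumption; the question doesn't show the associations):

# n+1: one query for the parents plus one query per parent.
Ofkb.find(:all).each do |ofkb|
  total = ofkb.operations.length # loads each ofkb's operations with its own query
end

# Eager-loaded: the operations come along in the same pass.
Ofkb.find(:all, :include => :operations).each do |ofkb|
  total = ofkb.operations.length # no extra query; records are already in memory
end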

Edit, based on the profile results:

I firmly believe that you can always remove a query loop. As Mike Woodhouse says, the Rails way to do this is to specify the relations between your tables with a has_many or other association and then let Rails generate the table join itself (a sketch follows below); this is clear, fast and "the Rails way". But if you are starting out with bare SQL, or if the associations don't work in this case, you can simply write the appropriate joins yourself. And if all else fails, you can create a database view or a denormalized table which holds the results that were previously computed in a loop. Indeed, the fact that you have to iterate through generated queries might be a sign that your table design itself has some flaws.
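A sketch of what declaring the associations buys you, under the same hypothetical schema as above:

# Hypothetical associations; the question's schema isn't shown.
class Ofkb < ActiveRecord::Base
  has_many :operations
end

class Operation < ActiveRecord::Base
  belongs_to :ofkb
end

# With the associations declared, Rails can build the join itself
# (start_date and end_date stand in for the real selection criteria):
ops = Operation.find(:all, :joins => :ofkb,
                     :conditions => ["ofkbs.created_at BETWEEN ? AND ?", start_date, end_date])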

All that said, if caching your query results works well enough for you, then stay with it. Optimize when needed.

This is not normal. You have some logic that is slowing you down. Try commenting out bits and pieces of your code that you think are taking a long time and see if that helps. If it does, then you need to figure out how to optimize that logic.

If you are doing lots of calculation in a loop iterating over a very large number of objects, then of course it will be slow.

These types of issues can come up in any language or framework. While Ruby is not as fast as some other languages, it's fast enough most of the time. If you need to constantly run calculations over large data sets, then Ruby may not be the right language for you; look into writing a Ruby C extension to handle the performance-draining code. But first, just try to diagnose and refactor.

Lastly, check out RubyProf to see if it can help you find the bottleneck.

The previous two answers are helpful, especially the suggestions to use performance monitoring tools. I use New Relic RPM and it's helped me a great deal in the past.

However, these sorts of tools are really best when you're trying to speed up from, say, 3 seconds to under 1 second.

2-4 minutes for a view to render is absolutely not normal under any circumstances.

Could you show us some of your development logs to figure out where the bottlenecks are?

Are you including the time the browser takes to load images, JavaScript, or other files in this total measurement?

Execution times this long would make me suspect a network issue - maybe a DNS query is timing out on a primary DNS server?

You could try JRuby or switch to Ruby 1.9.
Both of them should result in a big performance boost.
The problem with JRuby is that gems using C extensions won't compile or work. There are Java equivalents that are installed by JRuby's "gem" command, but some gems simply don't work.

You'll basically have the same problem with Ruby 1.9. A little bit of syntax changed, but the main problem is that a huge number of gems don't work anymore. People are in the process of updating them, though (check progress at http://isitruby19.com/).

Why not pre-fetch all the data and have your for loop look it up locally in memory, instead of querying the database each time (see the sketch below)? Thousands of queries for a single view indicate that something is seriously wrong with your design.
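A hedged sketch of that approach, assuming operations carry an ofkb_id foreign key (not shown in the question):

# Fetch every related operation in one query, then index them in memory
# by foreign key; ofs is the array from the question's first query.
ops_by_ofkb = Operation.find(:all,
  :conditions => ["ofkb_id IN (?)", ofs.map(&:id)]
).group_by(&:ofkb_id)

for of in ofs
  ops = ops_by_ofkb[of.id] || [] # in-memory lookup, no query per iteration
  # ...same computation as before...
end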

There are some good screencasts on this topic at http://railslab.newrelic.com/scaling-rails

Things like fragment caching and using :include (to avoid n+1 queries) can help. It sounds like you're already using memcached, so why not curl the URL to prefetch the cache? A sketch of that follows.
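A minimal cache-warming sketch, which could run from cron or a post-deploy hook (the paths are placeholders for your slow views):

require 'net/http'

# Hit each slow page once so the first real visitor gets a warm cache.
%w[/reports/flow /reports/operations].each do |path|
  Net::HTTP.get(URI.parse("http://localhost:3000#{path}"))
end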

Binding the server to the box's IP address instead of 0.0.0.0 sped things up for me.

You should profile the code before doing anything; that said, queries inside for loops are a very common cause of performance problems, and at first sight this seems to be your problem. You might find a practical profiler here:

As already said in the other answers, if the two models are related you should eager-load the association, which means instructing Active Record to perform the join (or preload) for you:

# Eager-loads the operations (a LEFT OUTER JOIN or a second query,
# depending on the Rails version)
ofkbs = Ofkb.includes(:operations).where(name: "banana")

If you do not need the ofkbs themselves but only the operations, you could perform an inner join instead:

# Inner join (discards the Ofkbs that do not have any operations);
# the where hash references the joined table by name
operations = Operation.joins(:ofkb).where(ofkbs: { name: "banana" })

This solution performs only one query and lets you afterwards iterate over data that has already been fetched from the DB:

operations = ofkbs.map { |of| of.operations }.flatten

operations.each do |o|
  do_whatever_you_want_with_operation(o)
end

If the queries are very complicated, you could use Arel instead.
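For example (a sketch; the model and column names are the ones assumed above):

# The same inner join expressed with Arel predicates, which compose
# better once the conditions get complicated.
ofkbs = Ofkb.arel_table
operations = Operation.joins(:ofkb).where(ofkbs[:name].eq("banana"))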

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow