Question

Data loss after deadlock - SQL Server 2008, Ruby on Rails, Phusion Passenger, Linux, FreeTDS

I am confronted with a mysterious problem that caused data loss in a Ruby on Rails intranet application I am responsible for. Apologies if this is not, strictly speaking, a programming problem; at least I maintain the application's Ruby code. The problem has occurred three times in two years so far.

The environment:

  • Linux - RedHat Enterprise Server 5.2
  • Apache 2 web server (httpd-2.2.3-11.el5_2.4.rpm)
  • Phusion Passenger 2.2.15
  • Ruby 1.8.7, Rails 2.3.8, with gems:
    • actionmailer (2.3.8)
    • actionpack (2.3.8)
    • activerecord (2.3.8)
    • activerecord-sqlserver-adapter (2.3.8)
    • activeresource (2.3.8)
    • activesupport (2.3.8)
    • akami (1.2.0)
    • builder (3.0.0)
    • exception_notification (2.3.3.0)
    • fastthread (1.0.7)
    • gyoku (0.4.6)
    • httpi (1.1.1)
    • mime-types (1.16)
    • nokogiri (1.4.4)
    • nori (1.1.3)
    • passenger (2.2.15)
    • rack (1.1.0)
    • rails (2.3.8)
    • rake (0.8.7)
    • ruby-net-ldap (0.0.4)
    • rubyjedi-actionwebservice (2.3.5.20100714122544)
    • savon (1.1.0)
    • wasabi (2.5.1)
    • will_paginate (2.3.14)
  • SQL Server 2008 database server
  • Database access through ActiveRecord
  • Database driver: freetds-0.82, unixODBC-2.3.0.tar.gz, ruby-odbc-0.99991.tar.gz

Symptoms:

  • User actions requesting locks on database resources became involved in a deadlock.
  • SQL Server resolved the deadlock by killing one or more of the processes involved, so that at least some of them could complete successfully.
  • On the Rails application side, the deadlocks resulted in unhandled exceptions (which I was notified of through the exception_notification gem).
  • After the deadlock, the number of active Rails processes kept increasing (which triggered another notification from our monitoring system); the processes appeared to be hanging.
  • Why this happened is unknown. According to the Rails logs, the processes seemed to be hanging in database operations. Normally I would expect SQL Server's deadlock resolution not to leave blocked processes hanging around.
  • In the first two cases, I restarted the web server in reaction to the exceptions/hanging processes. In the third case (I was on vacation), nobody reacted to the notifications, but a cronjob running on the weekend apparently stopped the processes as well (a soft restart through Passenger by touching "restart.txt", with the same effect).
  • After the web server restarts, users reported data loss. Before the restart, data was processed as expected, from the users' point of view. Rails logs and data in other systems that communicate with ours seem to indicate that the transactions had been committed properly. After the restart, all database changes since the time the deadlocks occurred were suddenly missing. E.g., we have a "users" table with a "last_access" column that is updated on every user action. After the web server restart, the newest "last_access" value was one day old. All transactions seemed to be missing; only the @@IDENTITY values continued from where they had been before the data loss.
  • I have received information from our IT department (who maintain the database server) indicating that all of the lost DB operations were part of one huge transaction that was missing a final COMMIT. Of course, I would expect every Rails user action to run one or more separate transactions, but the SQL Server transaction log shows all of the operations as part of that one huge transaction.

It looks to me as if something like this happened:

  • A bug in one of the involved components (e.g. Phusion Passenger, FreeTDS, SQL Server) caused the Rails processes that were running in parallel to share a database connection, and maybe caused also the hanging of processes.
  • One of the involved processes was in a transaction and hanging somewhere before the COMMIT
  • Since the other processes shared the same connection (as I assume), they were also in the same transaction
  • Since the processes shared the connection, the users were able to see the data changes (before the web server restart), even though a COMMIT was pending.
  • The web server restart forced the connection to abort and the transaction to be rolled back.

Would that make sense? I'm wondering if anybody has had similar experiences, or has hints on where to look further. I suspected a bug in Passenger which may have shared the file descriptor of the database connection across forks, but I cannot reproduce it. Passenger seems to properly create new DB connections on every fork.
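For what it's worth, the usual safeguard against forked workers sharing a connection is to re-establish it in Passenger's fork hook. The initializer file name below is hypothetical, and Passenger 2.2 is supposed to do this on its own, so treat this as belt and braces rather than a confirmed fix:

```ruby
# config/initializers/passenger_fork_safety.rb (hypothetical file name)
# Re-open the ActiveRecord connection in every freshly forked worker so
# no worker can inherit and keep using the parent's database socket.
if defined?(PhusionPassenger)
  PhusionPassenger.on_event(:starting_worker_process) do |forked|
    # 'forked' is false when Passenger spawns the worker directly
    # (conservative spawn mode); then no reconnect is needed.
    ActiveRecord::Base.establish_connection if forked
  end
end
```

With smart spawning, each worker then opens its own socket to SQL Server right after the fork.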

I am considering changing the database's isolation model to "read committed snapshot" to reduce the number of deadlocks, but I'm aware that this doesn't fix the root cause and might cause other problems.
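Independent of the isolation level, the error message's own advice ("Rerun the transaction") can be automated. Below is a minimal, framework-independent retry sketch; the helper name is mine, and in the application the rescued class would be ActiveRecord::StatementInvalid rather than the StandardError default:

```ruby
# Rerun a block of work when the database reports it was chosen as the
# deadlock victim (SQL Server error 1205). Any other error is re-raised
# immediately. Helper name and backoff values are illustrative only.
def with_deadlock_retry(error_class = StandardError, max_retries = 3)
  attempts = 0
  begin
    yield
  rescue error_class => e
    raise unless e.message =~ /deadlock victim/i
    attempts += 1
    raise if attempts > max_retries
    sleep(0.05 * attempts)  # brief backoff before rerunning the transaction
    retry
  end
end
```

In the app this would wrap the model-level transaction, e.g. `with_deadlock_retry(ActiveRecord::StatementInvalid) { Order.transaction { ... } }`.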


Solution

I have now been able to track down the problem myself and would like to share the solution with anyone who might be facing similar problems, now or in future.

The following was happening:

  • An UPDATE operation running as part of a transaction was involved in a deadlock
  • SQL Server had chosen the UPDATE operation as a deadlock victim and rolled back the transaction
  • This resulted in an ActiveRecord::StatementInvalid exception like this:

    A ActiveRecord::StatementInvalid occurred in (...):
    
    ODBC::Error: 37000 (1205) [FreeTDS][SQL Server]Transaction (Process ID 55) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.: UPDATE [(....tablename....)] SET [position] = 1 WHERE [id] = 795419
    /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.8/lib/active_record/connection_adapters/abstract_adapter.rb:221:in `log'
    
  • In ActiveRecord::ConnectionAdapters::DatabaseStatements.transaction(), the exception is handled as follows:

    rescue Exception => database_transaction_rollback
      if transaction_open && !outside_transaction?
        transaction_open = false
        decrement_open_transactions
        if open_transactions == 0
          rollback_db_transaction
        else
          rollback_to_savepoint
        end
      end
      raise unless database_transaction_rollback.is_a?(ActiveRecord::Rollback)
    end
    
  • transaction_open is true at this point. The method outside_transaction? is implemented this way in the SQL Server adapter:

    def outside_transaction?
      info_schema_query { select_value("SELECT @@TRANCOUNT") == 0 }
    end
    
  • @@TRANCOUNT is 0 at this point because SQL Server has already rolled back the transaction - as shown in my debug output:

    SQL (1.0ms)   SELECT @@TRANCOUNT
    => TRANCOUNT=0
    
  • Therefore outside_transaction? returns true, and the exception handling code shown above does not perform a rollback. So far, so good.

  • The exception is re-thrown by the code shown above, and caught by my ApplicationController because of:

    class ApplicationController < ActionController::Base
      rescue_from Exception, :with => :render_error
    
  • The render_error method stores the exception message in a flash variable:

    flash[:exception_message] = exception.message
    
  • Flash variables are stored in the session. And since I'm using the active_record_store, the session data is stored in the database table sessions. (I'm actually using smart_session_store, but that makes no difference in this respect.) So another transaction starts...

    EXECUTE (1.2ms)   BEGIN TRANSACTION
    SQL (1.1ms)   SELECT session_id, data,id FROM sessions WHERE id=150091
    EXECUTE (1.3ms)   UPDATE sessions SET updated_at=CURRENT_TIMESTAMP, data='BAh7FDoWdW9faGlk(........)' WHERE id=150091
    CACHE (0.0ms)   SELECT @@TRANCOUNT
    => TRANCOUNT=0
    
  • Although a transaction has started, SELECT @@TRANCOUNT returns 0 - the value is taken from the query cache! This is where the disaster takes its course.

  • The transaction method now believes that the transaction is not active anymore and therefore does not do a COMMIT:

    if outside_transaction? # (this does the SELECT @@TRANCOUNT)
      @open_transactions = 0      # Getting here!
    elsif transaction_open
      decrement_open_transactions
      begin
        if open_transactions == 0
          commit_db_transaction   # NOT getting here!
        else
          release_savepoint
        end
      rescue Exception => database_transaction_rollback
        if open_transactions == 0
          rollback_db_transaction
        else
          rollback_to_savepoint
        end
        raise
      end
    end
    

  • Since the database connection is kept open (this seems to happen only in production mode, not in development), subsequent Rails actions processed by the same worker process all append to the transaction that was left open here. To the users it still looks like data is being processed normally, since they see everything through the eyes of the transaction. Only this one worker process remains responsive; any other worker processes that start hang, since the open transaction locks all kinds of database resources. When I restarted the web server, the one responsive worker process was stopped and its transaction was rolled back. This is where the data loss became visible in the application.
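The cache interaction described above can be reproduced without Rails at all. The following plain-Ruby model (MiniQueryCache is my stand-in, not an ActiveRecord class) shows why the second SELECT @@TRANCOUNT is answered with a stale 0, and why an uncached block avoids it:

```ruby
# A plain-Ruby model of the failure: the query cache keys results by SQL
# string, so a repeated "SELECT @@TRANCOUNT" is served from the cache
# even though a new transaction has started on the server in between.
class MiniQueryCache
  def initialize(&server)
    @server = server   # stands in for the real round trip to SQL Server
    @cache  = {}
  end

  # Mimics a cached select_value: a repeated SQL string never reaches
  # the server again.
  def select_value(sql)
    @cache[sql] ||= @server.call(sql)
  end

  # Mimics QueryCache#uncached: run the block with an empty cache,
  # then restore the old one.
  def uncached
    saved, @cache = @cache, {}
    yield
  ensure
    @cache = saved
  end
end

trancount = 0                      # server-side @@TRANCOUNT
conn = MiniQueryCache.new { |sql| trancount }

first = conn.select_value("SELECT @@TRANCOUNT")   # 0, and now cached
trancount = 1                                     # BEGIN TRANSACTION on the server
stale = conn.select_value("SELECT @@TRANCOUNT")   # still 0: cache hit!
fresh = conn.uncached { conn.select_value("SELECT @@TRANCOUNT") }  # 1

puts "first=#{first} stale=#{stale} fresh=#{fresh}"
```

In the real adapter, wrapping the @@TRANCOUNT check in the connection's uncached block would be one way to sidestep the stale hit.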

I had a brief look at newer (Rails 3.x) versions of the code shown above, and it looks to me like the problem could no longer occur with them. The transaction method no longer seems to call outside_transaction?, but relies on the internally maintained transaction state instead. However, since I cannot upgrade at this time, I will fix the problem locally, probably by changing the transaction method in a way similar to what was done for Rails 3.x.
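For reference, a sketch of what such a local fix could look like: track the open-transaction state in the adapter itself and never ask the server. This is heavily simplified compared to the real DatabaseStatements#transaction (no savepoints, no ActiveRecord::Rollback handling, no namespacing), so treat it as an illustration of the approach rather than a drop-in patch:

```ruby
# Sketch only: rely on an internally maintained counter (the Rails 3.x
# approach) instead of SELECT @@TRANCOUNT to decide whether to commit.
module PatchedTransactions
  def transaction
    begin_db_transaction
    @open_transactions = (@open_transactions || 0) + 1
    committed = false
    begin
      result = yield
      commit_db_transaction   # reached whenever the block did not raise
      committed = true
      result
    ensure
      @open_transactions -= 1
      # If the block raised (e.g. a deadlock victim), request a rollback.
      # The server may already have aborted the transaction itself, which
      # a real patch would have to tolerate.
      rollback_db_transaction unless committed
    end
  end
end
```

Against a stub connection that merely logs begin/commit/rollback calls, this produces [:begin, :commit] on success and [:begin, :rollback] when the block raises, regardless of what @@TRANCOUNT would report.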

OTHER TIPS

First, I will admit that I have not read through this issue thoroughly, but I have a gut feeling and a suggestion. I have seen issues like this, and they typically happen in large legacy applications with many models, some of which have a connection pool to another database. In such a scenario it is easy to reference a model that uses another connection during ActiveRecord's callback phase. This leads to cross-database transactions, which are not supported by ActiveRecord out of the box.

If you think this may be the cause, review your code for the following. Make sure that you never use #establish_connection, and if you do, use it only once in a base or connection class that others inherit from. I have a GitHub gist here (https://gist.github.com/metaskills/4065702) that shows what I am talking about. Second, make sure that your models are not talking to a model in another connection pool during saves/transactions.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow