Question

I'm trying to find out whether I should put business-critical logic in a trigger or a constraint inside my database.
So far I've added logic in triggers, as that gives me control over what happens next and means I can provide custom user messages instead of an error that will probably confuse the users.

Is there any noticeable performance gain in using constraints over triggers, and what are the best practices for determining which to use?


Solution

Constraints hands down!

  • With constraints you specify relational principles, i.e. facts about your data. You will never need to change your constraints unless some fact changes (e.g. new requirements).

  • With triggers you specify how to handle data (in inserts, updates etc.). This is a "non-relational" way of doing things.

To explain myself better with an analogy: the proper way to write a SQL query is to specify "what you want" instead of "how to get it" – let the RDBMS figure out the best way to do it for you. The same applies here: if you use triggers you have to keep in mind various things like the order of execution, cascading, etc... Let SQL do that for you with constraints if possible.
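A minimal sketch of that contrast in T-SQL (table and column names are hypothetical): the constraint states the fact once, and the RDBMS enforces it on every write path.

    -- Declarative: state the fact; the RDBMS enforces it everywhere.
    ALTER TABLE dbo.Payment
        ADD CONSTRAINT CK_Payment_Amount CHECK (Amount >= 0);

    -- The trigger equivalent would have to spell out when to check and
    -- what to do on failure, for every insert and update path:
    -- CREATE TRIGGER trg_Payment_Amount ON dbo.Payment AFTER INSERT, UPDATE ...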

That's not to say that triggers don't have uses. They do: sometimes you can't use a constraint to specify some fact about your data. It is extremely rare though. If it happens to you a lot, then there's probably some issue with the schema.

OTHER TIPS

Best Practice: if you can do it with a constraint, use a constraint.

Triggers are not quite as bad as their reputation suggests (if used correctly), although I would always use a constraint wherever possible. In a modern RDBMS, the performance overhead of triggers is comparable to that of constraints (of course, that doesn't mean someone can't place horrendous code in a trigger!).

Occasionally it's necessary to use a trigger to enforce a 'complex' constraint, such as the situation of wanting to enforce that one and only one of a table's two foreign key columns is populated (I've seen this situation in a few domain models).
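For what it's worth, when the rule involves only columns of a single row, some engines can still express this case declaratively; a T-SQL sketch with hypothetical names (a trigger would be needed only if the rule spanned multiple rows or tables):

    -- Hypothetical rule: a Payment must reference exactly one of
    -- PersonId / CompanyId (an "exclusive arc").
    ALTER TABLE dbo.Payment
        ADD CONSTRAINT CK_Payment_ExactlyOneOwner CHECK (
            (PersonId IS NOT NULL AND CompanyId IS NULL) OR
            (PersonId IS NULL AND CompanyId IS NOT NULL)
        );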

Whether the business logic should reside in the application rather than the DB depends to some extent on the environment; if you have many applications accessing the DB, both constraints and triggers can serve as a final guard that the data is correct.

Triggers can blossom into a performance problem, and by the time that happens they've also become a maintenance nightmare. You can't figure out what's happening, and (bonus!) the application behaves erratically with "spurious" data problems. [Really, they're trigger issues.]

No end-user touches SQL directly. They use application programs. Application programs contain business logic in a much smarter and more maintainable way than triggers. Put the application logic in application programs. Put data in the database.

As long as you and your "users" share a common language, you can explain the constraint violations to them. The alternative -- not explaining -- turns a simple database into a problem, because it conflates the data and the application code into an unmaintainable quagmire.

"How do I get absolute assurance that everyone's using the data model correctly?"

Two (and a half) techniques.

  1. Make sure the model is right: it matches the real-world problem domain. No hacks or workarounds or shortcuts that can only be sorted out through complex hand-waving explanations, stored procedures and triggers.

  2. Help define the business model layer of the applications. The layer of application code that everyone shares and reuses.

    a. Also, be sure that the model layer meets people's needs. If the model layer has the right methods and collections, there's less incentive to bypass it to get direct access to the underlying data. Generally, if the model is right, this isn't a profound concern.

Triggers are a train-wreck waiting to happen. Constraints aren't.

In addition to the other reasons to use constraints, the Oracle optimizer can use constraints to its advantage.

For example, if you have a constraint saying (Amount >= 0) and then you query with WHERE (Amount = -5) Oracle knows immediately that there are no matching rows.
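A sketch of that example in Oracle SQL (hypothetical names): given the CHECK constraint, the optimizer can prove the predicate is unsatisfiable without touching the table.

    CREATE TABLE orders (
        order_id NUMBER PRIMARY KEY,
        amount   NUMBER NOT NULL CHECK (amount >= 0)
    );

    -- The optimizer can deduce from the constraint that no row can
    -- satisfy amount = -5, so it never needs to scan the table.
    SELECT * FROM orders WHERE amount = -5;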

Constraints and triggers are for two different things. Constraints are used to constrain the domain (valid inputs) of your data. For instance, an SSN would be stored as char(9), but with a constraint of [0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9] (all numeric).
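In T-SQL, where LIKE supports [0-9] character ranges, that domain constraint might look like this (hypothetical table name):

    CREATE TABLE dbo.Person (
        PersonId int IDENTITY PRIMARY KEY,
        Ssn      char(9) NOT NULL,
        -- Every one of the nine characters must be a digit.
        CONSTRAINT CK_Person_Ssn CHECK (
            Ssn LIKE '[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
        )
    );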

Triggers are a way of enforcing business logic in your database. Taking SSN again, perhaps an audit trail needs to be maintained whenever an SSN is changed - that would be done with a trigger.

In general, data integrity issues in a modern RDBMS can be handled with some variation of a constraint. However, you'll sometimes get into a situation where improper normalization (or changed requirements, resulting in now improper normalization) prevents a constraint. In that case, a trigger may be able to enforce your constraint - but it is opaque to the RDBMS, meaning it can't be used for optimization. It's also "hidden" logic, and can be a maintenance issue. Deciding whether to refactor the schema or use a trigger is a judgment call at that point.

Generally speaking, I would prefer constraints, and my code would catch SQL Server errors and present something more friendly to the user.

@onedaywhen

You can have a query as a constraint in SQL Server; you just have to be able to fit it in a scalar function: http://www.eggheadcafe.com/software/aspnet/30056435/check-contraints-and-tsql.aspx
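A sketch of that technique (the rule and all names are hypothetical): wrap the query in a scalar function and call it from a CHECK constraint. Note the caveats raised about this approach further down.

    -- Hypothetical rule: a member may have at most 3 active loans.
    CREATE FUNCTION dbo.ActiveLoanCount (@MemberId int)
    RETURNS int
    AS
    BEGIN
        RETURN (SELECT COUNT(*) FROM dbo.Loan
                WHERE MemberId = @MemberId AND ReturnedOn IS NULL);
    END;
    GO

    -- The CHECK calls the function, smuggling a query into a constraint.
    ALTER TABLE dbo.Loan
        ADD CONSTRAINT CK_Loan_MaxActive
        CHECK (dbo.ActiveLoanCount(MemberId) <= 3);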

@Mark Brackett: "Constraints are used to constrain the domain... Triggers are a way of enforcing business logic": It's not that simple in SQL Server, because its constraint functionality is limited -- for example, it's not yet full SQL-92. Take the classic example of a sequenced 'primary key' in a temporal database table: ideally I'd use a CHECK constraint with a subquery to prevent overlapping periods for the same entity, but SQL Server can't do that, so I have to use a trigger. Also missing from SQL Server is the SQL-92 ability to defer the checking of constraints; instead they are (in effect) checked after every SQL statement, so again a trigger may be necessary to work around SQL Server's limitations.
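For illustration, a minimal sketch of such an overlapping-periods trigger in T-SQL (table, key, and column names are hypothetical):

    CREATE TRIGGER trg_Rate_NoOverlap
    ON dbo.Rate
    AFTER INSERT, UPDATE
    AS
    BEGIN
        -- Reject the statement if any new or changed period overlaps
        -- a different period for the same entity.
        IF EXISTS (
            SELECT 1
            FROM dbo.Rate AS r
            JOIN inserted AS i
              ON r.EntityId = i.EntityId
             AND r.RateId  <> i.RateId
            WHERE r.StartDate < i.EndDate
              AND i.StartDate < r.EndDate
        )
        BEGIN
            RAISERROR ('Overlapping periods for the same entity.', 16, 1);
            ROLLBACK TRANSACTION;
        END
    END;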

If at all possible, use constraints. They tend to be slightly faster. Triggers should be used for complex logic that a constraint can't handle. Trigger writing is tricky as well; if you find you must write a trigger, make sure to use set-based statements, because triggers operate against the whole insert, update or delete (yes, there will be times when more than one record is affected -- plan on that!), not just one record at a time. Do not use a cursor in a trigger if it can be avoided.
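A sketch of that multi-row pitfall in T-SQL (hypothetical names): the wrong version silently processes only one row of a multi-row statement, while the right version treats the inserted pseudo-table as a set.

    -- WRONG: grabs an arbitrary single row from a multi-row insert.
    -- DECLARE @OrderId int;
    -- SELECT @OrderId = OrderId FROM inserted;

    -- RIGHT: join against the whole inserted set.
    CREATE TRIGGER trg_Order_Stamp
    ON dbo.[Order]
    AFTER INSERT
    AS
    BEGIN
        UPDATE o
        SET    LastStampedOn = GETDATE()
        FROM   dbo.[Order] AS o
        JOIN   inserted    AS i ON o.OrderId = i.OrderId;
    END;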

As far as whether to put the logic in the application instead of a trigger or constraint: DO NOT DO THAT!!! Yes, the applications should have checks before they send the data, but data integrity and business logic must be at the database level, or your data will get messed up when multiple applications hook into it, when global inserts are done outside the application, etc. Data integrity is key to databases and must be enforced at the database level.

@Meff: there are potential problems with the approach of using a function because, simply put, SQL Server CHECK constraints were designed with a single row as the unit of work, and they have flaws when working on a resultset. For some more details, see David Portas' blog post "Trouble with CHECK Constraints": http://blogs.conchango.com/davidportas/archive/2007/02/19/Trouble-with-CHECK-Constraints.aspx

Same as Skliwz. Just to let you know, a canonical use of triggers is an audit table. If many procedures update/insert/delete a table you want to audit (who modified what, and when), a trigger is the simplest way to do it. One way is to simply add a flag in your table (active/inactive, with some uniqueness constraint) and insert something in the audit table.

Another way, if you don't want the table to hold the historical data, is to copy the former row into your audit table...

Many people have many ways of doing it, but one thing is for sure: you'll have to perform an insert for each update/insert/delete on this table.

To avoid writing that insert in dozens of different places, you can use a trigger here, as sketched below.
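A minimal sketch of such an audit trigger in T-SQL (all names are hypothetical); note it is set-based, so it handles multi-row statements correctly:

    CREATE TRIGGER trg_Customer_Audit
    ON dbo.Customer
    AFTER UPDATE, DELETE
    AS
    BEGIN
        -- "deleted" holds the pre-change rows: copy them, plus who
        -- made the change and when, into the audit table.
        INSERT INTO dbo.CustomerAudit
            (CustomerId, Name, Email, ChangedBy, ChangedOn)
        SELECT d.CustomerId, d.Name, d.Email, SUSER_SNAME(), GETDATE()
        FROM deleted AS d;
    END;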

I agree with everyone here about constraints. Use them as much as possible.

There is a tendency to overuse triggers, especially among new developers. I have seen situations where a trigger fires another trigger, which fires another trigger that repeats the first, creating a cascade that ties up your server. This is a non-optimal use of triggers ;o)

That being said, triggers have their place and should be used when appropriate. They are especially good for tracking changes in data (as Mark Brackett mentioned). You need to answer the question "Where does it make the most sense to put my business logic"? Most of the time I think it belongs in the code but you have to keep an open mind.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow