Question

I'm investigating the benefits of upgrading from MS SQL 2012 to 2014. One of the big selling points of SQL 2014 is the memory optimized tables, which apparently make queries super-fast.

I've found that there are a few limitations on memory optimized tables, such as:

  • No (max) sized fields
  • Maximum row size of 8,060 bytes
  • No timestamp fields
  • No computed columns
  • No UNIQUE constraints

These all qualify as nuisances, but if I really want to work around them in order to gain the performance benefits, I can make a plan.
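
For illustration, a SQL Server 2014 memory-optimized table that stays inside those restrictions looks roughly like this (the table and column names here are just made up for the example):

    -- Requires a database with a MEMORY_OPTIMIZED_DATA filegroup (SQL Server 2014)
    CREATE TABLE dbo.SessionCache
    (
        SessionId INT           NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        UserName  NVARCHAR(100) NOT NULL,  -- no (max) types allowed
        LastSeen  DATETIME2     NOT NULL   -- no rowversion/timestamp column
        -- no computed columns; no UNIQUE constraints beyond the primary key
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);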

The real kicker is the fact that you can't run an ALTER TABLE statement at all, and you have to go through a drop-and-re-create rigmarole every time you so much as add a field to the INCLUDE list of an index. Moreover, it appears that you have to shut users out of the system in order to make any schema changes to memory-optimized (MO) tables on the live DB.

I find this totally outrageous, to the extent that I actually cannot believe that Microsoft could have invested so much development capital into this feature and left it so impractical to maintain. This leads me to the conclusion that I must have gotten the wrong end of the stick; I must have misunderstood something about memory-optimized tables that has led me to believe they are far more difficult to maintain than they actually are.

So, what have I misunderstood? Have you used MO tables? Is there some kind of secret switch or process that makes them practical to use and maintain?


Solution

No, in-memory really is this unpolished. If you are familiar with Agile, you will know the concept of a "minimal shippable product"; in-memory is that. I get the feeling that MS needed a response to SAP's HANA and its ilk. This is what they could get debugged in the timeframe for a 2014 release.

As with anything else, in-memory has costs and benefits associated with it. The major benefit is the throughput that can be achieved. One of the costs is the overhead for change management, as you mentioned. This doesn't make it a useless product, in my opinion; it just reduces the number of cases where it will provide a net benefit. Just as columnstore indexes are now updatable and indexes can be filtered, I have no doubt that the functionality of in-memory will improve over coming releases.


SQL Server 2016 is now generally available. Just as I supposed, In-Memory OLTP has received a number of enhancements. Most of the changes implement functionality that traditional tables have enjoyed for some time. My guess is that future features will be released at the same time for both in-memory and traditional tables. Temporal tables are a case in point: new in this version, they are supported for both in-memory and disk-based tables.

OTHER TIPS

One of the problems with new technology - especially a V1 release that has been disclosed quite loudly as not feature-complete - is that everyone jumps on the bandwagon and assumes that it is a perfect fit for every workload. It's not. Hekaton's sweet spot is OLTP workloads under 256 GB with a lot of point lookups on 2-4 sockets. Does this match your workload?

Many of the limitations have to do with in-memory tables combined with natively compiled procedures. You can of course bypass some of these limitations by using in-memory tables but not using natively compiled procedures, or at least not exclusively.
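
For instance (table names here are hypothetical), a plain interpreted "interop" query can freely mix memory-optimized and disk-based tables and use constructs that 2014-era natively compiled procedures do not allow, such as OUTER JOIN:

    -- Interop T-SQL: no natively compiled procedure involved, so the
    -- native-compilation restrictions (e.g. no OUTER JOIN in 2014) don't apply
    SELECT c.CustomerId, c.Name, o.OrderId
    FROM dbo.Customers AS c        -- memory-optimized
    LEFT JOIN dbo.Orders AS o      -- disk-based
        ON o.CustomerId = c.CustomerId
    WHERE c.Region = N'EMEA';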

Obviously you need to test if the performance gain is substantial in your environment, and if it is, whether the trade-offs are worth it. If you are getting great performance gains out of in-memory tables, I'm not sure why you're worried about how much maintenance you're going to perform on INCLUDE columns. Your in-memory indexes are by definition covering. These should only really be helpful for avoiding lookups on range or full scans of traditional non-clustered indexes, and these operations aren't really supposed to be happening in in-memory tables (again, you should profile your workload and see which operations improve and which don't - it's not all win-win). How often do you muck with INCLUDE columns on your indexes today?

Basically, if it's not worth it for you yet in its V1 form, don't use it. That's not a question we can answer for you, except to tell you that plenty of customers are willing to live with the limitations, and are using the feature to great benefit in spite of them.

SQL Server 2016

If you are on your way toward SQL Server 2016, I have blogged about the enhancements you will see in In-Memory OLTP, as well as the elimination of some of the limitations. Most notably:

  • Increase in maximum durable table size: 256 GB => 2 TB
  • LOB/MAX columns, indexes on nullable columns, removal of BIN2 collation requirements
  • Alter & recompile of procedures
  • Some support for ALTER TABLE - it will be offline, but you should be able to alter and/or drop/re-create indexes (this does not seem to be supported on current CTP builds, however, so do not take this as a guarantee; see the sketch after this list)
  • DML triggers, FK/check constraints, MARS
  • OR, NOT, IN, EXISTS, DISTINCT, UNION, OUTER JOINs
  • Parallelism
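
As a rough sketch of what that ALTER TABLE syntax is expected to look like (object names are made up; as noted above, the operation is offline and the details may still change):

    -- Add a column (offline; the table is rebuilt behind the scenes)
    ALTER TABLE dbo.SessionCache
        ADD ClosedDate DATETIME2 NULL;

    -- Indexes on memory-optimized tables are added and dropped via ALTER TABLE
    ALTER TABLE dbo.SessionCache
        ADD INDEX ix_LastSeen NONCLUSTERED (LastSeen);

    ALTER TABLE dbo.SessionCache
        DROP INDEX ix_LastSeen;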

You cannot right-click a memory-optimized table in SQL Server Management Studio to pull up a designer and add new columns as you like, and you cannot click on the table name to rename the table (as of SQL Server 2014, at the time of writing).

Instead, you can right-click the table and script out a CREATE command to a new query window. This CREATE command can then be amended by adding any new columns.

So, to modify the table, you could store the data in a new table, temp table, or table variable, then drop and re-create the table with the new schema, and finally copy the actual data back in. This three-container shell game is only a little less convenient for most use cases.
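
A rough sketch of that shell game, with made-up object names (in practice you would also re-create any other indexes and do this while users are locked out):

    -- 1. Park the existing rows in an ordinary disk-based staging table
    SELECT * INTO dbo.SessionCache_staging FROM dbo.SessionCache;

    -- 2. Drop and re-create the memory-optimized table with the amended schema
    --    (the CREATE script comes from right-clicking the table in SSMS)
    DROP TABLE dbo.SessionCache;

    CREATE TABLE dbo.SessionCache
    (
        SessionId INT           NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        UserName  NVARCHAR(100) NOT NULL,
        LastSeen  DATETIME2     NOT NULL,
        NewColumn INT           NULL    -- the column being added
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    -- 3. Copy the data back and clean up
    INSERT INTO dbo.SessionCache (SessionId, UserName, LastSeen)
    SELECT SessionId, UserName, LastSeen
    FROM dbo.SessionCache_staging;

    DROP TABLE dbo.SessionCache_staging;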

But you'd have no reason to bother with memory-optimized tables if there isn't a performance problem you are trying to solve.

Then you'll have to weigh whether the limitations and workarounds are worth it for your use case. Do you have a performance problem? Have you tried everything else? Will this improve your performance by 10-100x? Using it or not using it will likely end up being a bit of a no-brainer either way.

You can use In-Memory OLTP on operational servers without any major problems. We used this technology at a banking and payments company.

In general, memory-optimized tables are worth considering when the workload is very high. By using In-Memory OLTP you can achieve up to 30x better performance. Microsoft corrected most of these limitations in SQL Server 2016 and 2017. Memory-optimized tables have a completely different architecture compared with disk-based tables.

Memory-optimized tables come in two types: durable and non-durable. Both keep their table data in memory, but durable tables additionally persist data to disk so that both schema and data can be recovered. In most operational scenarios you should use durable tables, because data loss is critical there. In some scenarios, such as ETL loading and caching, you can use non-durable tables.
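
The difference is just the DURABILITY option when the table is created; a minimal sketch, with made-up names:

    -- Durable: schema AND data are recovered after a restart (typical OLTP choice)
    CREATE TABLE dbo.Payments
    (
        PaymentId BIGINT         NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 2000000),
        Amount    DECIMAL(18, 2) NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    -- Non-durable: only the schema survives a restart, the data does not,
    -- which is acceptable for ETL staging or caching scenarios
    CREATE TABLE dbo.EtlStaging
    (
        RowId   BIGINT         NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        Payload NVARCHAR(2000) NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);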

You can use these books to learn how to use this technology:

Kalen Delaney: https://www.red-gate.com/library/sql-server-internals-in-memory-oltp

Dmitri Korotkevitch: https://www.apress.com/gp/book/9781484227718
