In-memory OLTP Part 1: Introduction to In-memory OLTP

SQL Server 2014 introduced what is probably one of the biggest changes to the SQL Server engine in years: “In-memory OLTP” (or Hekaton). In-memory OLTP lets you move tables from traditional, disk-based storage directly into the memory (RAM) of your SQL Server instance. Traditional storage throughput and latency can often be a bottleneck for your SQL Server instance, and In-memory OLTP gives you a way to move away from those problems. While you can choose to move all your tables into memory, the most common practice is to move the tables that experience the heaviest OLTP workload into memory while leaving the less frequently accessed tables on disk. If you manage a database that fits the OLTP description of being highly transactional and concurrent, I am sure you can name one or more tables that are the “hotspot” of the database. Moving those tables to memory can not only increase performance for queries that hit those memory-optimized tables, but can also improve storage performance for access to your traditional, disk-based tables. So how much performance benefit do we get when we use In-memory tables? Microsoft, in an MSDN article, tells us the following:

“The actual performance gain depends on many factors, but 5-to-20 times performance improvements are common.”

Gaining 5-20 times the performance compared to traditional disk-based storage sounds like a dream come true for many of the SQL Server instances I manage! The performance benefit of using In-memory tables comes at a price, though; sadly, we can’t get all the good stuff for free, so there will have to be some kind of trade-off.

In this series of articles we will discuss the new In-memory OLTP in great depth. We will take a look at how you can create In-memory tables, natively compiled stored procedures and hash indexes, and at how the new lock- and latch-free data structures work. Since Microsoft promised 5-20 times performance gains, we will also dedicate an article to performance testing.

This article will serve as an introduction to In-memory OLTP; we will take a look at requirements, architecture and limitations.

Requirements for running In-memory OLTP

Before you are able to create those awesome In-memory tables you will have to comply with the minimum system requirements for SQL Server 2014:

  • Microsoft Windows Server 2008 R2
  • 1GB RAM
  • 811MB of free space for the Database engine

For using In-memory OLTP we have some additional requirements:

  • 64-bit edition of SQL Server 2014
  • SQL Server 2014 Enterprise or Developer edition

Yes, there we have it: as we know, all the cool stuff comes with Enterprise, and In-memory OLTP is no exception.

Of course the minimum specs are…minimal. If you are seriously considering using In-memory OLTP, or SQL Server 2014 for that matter, you have to make sure your server is sized for the workload you are expecting, with enough RAM, CPUs and disk space to accommodate that workload.

As you can see from the system specs above, the requirements for running In-memory OLTP are mostly the same as for running SQL Server 2014 itself; you will need a 64-bit architecture though!

Installing In-memory OLTP

The installation for the In-memory OLTP options is very straightforward…there are no additional features to install!

In-memory OLTP is integrated into the SQL Server 2014 engine whether you will use it or not!



No In-memory OLTP option to see here, it’s all integrated into the Database Engine, awesome!


In-memory OLTP architecture

So what makes In-memory OLTP tick? As we mentioned in the section above, the In-memory engine is directly integrated into the SQL Server engine itself:


SQL Server 2014 Engine architecture

The main advantage of having In-memory OLTP integrated into the SQL Server engine is that it can interact with “traditional” database objects, like disk-based tables, as well. This means that a single query can reference both In-memory tables and disk-based tables!
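As a quick sketch of such a cross-container query (all table and column names here are made up for illustration), joining a memory-optimized table to a disk-based table looks just like any other join:

```sql
-- Hypothetical example: dbo.Orders_InMemory is a memory-optimized table,
-- dbo.Customers is a traditional disk-based table. Names are illustrative.
SELECT o.OrderID,
       o.OrderDate,
       c.CustomerName
FROM dbo.Orders_InMemory AS o          -- memory-optimized
JOIN dbo.Customers       AS c          -- disk-based
    ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '2014-01-01';
```

From the query author’s point of view there is nothing special to do; the engine handles the fact that the two tables live in different storage engines.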

As you can see in the image above there is also a “native compiler” for In-memory Stored Procedures. We will get into Natively Compiled Stored Procedures later in the article series, but as you probably guessed, Natively Compiled Stored Procedures generally perform better than “traditional” interpreted stored procedures. The main reason for this is the time and target of compilation: Natively Compiled Stored Procedures are compiled into native machine code at creation time, while “traditional” stored procedures are compiled into a query plan when you first execute them and are interpreted at run time, which adds some overhead.
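To give you a feel for the syntax (the procedure and table names are made-up examples, and we will cover the details later in the series), a minimal natively compiled stored procedure in SQL Server 2014 looks like this:

```sql
-- Sketch of a natively compiled stored procedure; dbo.Orders_InMemory is a
-- hypothetical memory-optimized table. NATIVE_COMPILATION, SCHEMABINDING,
-- EXECUTE AS and the BEGIN ATOMIC block are all required for native procs.
CREATE PROCEDURE dbo.usp_GetOrder
    @OrderID int
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

    SELECT OrderID, CustomerID, OrderDate
    FROM dbo.Orders_InMemory
    WHERE OrderID = @OrderID;
END;
```

Note the BEGIN ATOMIC block: the whole procedure body runs as one atomic transaction, with the isolation level and language fixed at compile time.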


Limitations

Like I said in the introduction, there has to be a trade-off somewhere. Most of the trade-off lies in the limitations on data types and features for memory-optimized tables. Let’s bring in the list of unsupported data types and features:

Unsupported data types:

  • LOB data types (varchar(max), image, xml, etc.)
  • User-defined data types
  • CLR functions

Unsupported features:

  • Database mirroring
  • No AUTO_CLOSE (but you aren’t using that anyway are you?)
  • Computed columns
  • Triggers
  • Clustered indexes
  • Data compression
  • IDENTITY columns
  • Column store indexes
  • ALTER TABLE (That’s right, to change an In-memory table you need to drop and recreate)

As you can see we have to keep a “couple” of things in mind when we want to create an In-memory table…
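With those limitations in mind, here is a small sketch of what creating an In-memory table involves (we will go through this step by step in the next article; database, filegroup, path and table names below are all illustrative). The database first needs a filegroup for memory-optimized data, and the table itself avoids everything on the list above:

```sql
-- A database needs a MEMORY_OPTIMIZED_DATA filegroup before it can hold
-- In-memory tables (names and file path are hypothetical).
ALTER DATABASE MyDatabase
    ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE MyDatabase
    ADD FILE (NAME = 'imoltp_file', FILENAME = 'C:\Data\imoltp_file')
    TO FILEGROUP imoltp_fg;

-- A minimal memory-optimized table: a NONCLUSTERED HASH primary key
-- (no clustered index) and no IDENTITY column, per the limitations above.
CREATE TABLE dbo.Orders_InMemory
(
    OrderID    int       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    CustomerID int       NOT NULL,
    OrderDate  datetime2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

DURABILITY = SCHEMA_AND_DATA keeps the data recoverable after a restart; SCHEMA_ONLY would keep only the table definition.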

When you want to convert your “traditional” Stored Procedures to Natively Compiled Stored Procedures you have to keep a few things in mind as well. A complete list of what you can and can’t do can be found on the Microsoft webpage here.

A good way of checking whether the table you want to migrate to an In-memory table uses any of the unsupported data types or features is the Memory Optimization Advisor. For migrating Stored Procedures you can use the Native Compilation Advisor.


Running the Memory Optimization Advisor or the Native Compilation Advisor is easy and straightforward. Before you begin using either advisor, make sure you selected the “Complete” option for SQL Server Management Studio during your SQL Server 2014 installation.

Memory Optimization Advisor

To use the Memory Optimization Advisor, connect to your SQL Server 2014 instance, expand your table list, right-click the table you want to check and select “Memory Optimization Advisor”.


Just click “Next >” on the Welcome screen to start the analysis.

As you can see on the report below the table I have selected does have some issues if I wanted to migrate it to an In-memory table:


Whoops….Some work needs to be done here…

If you have a more suitable table already you will (hopefully) see the screen below:


Much better! Let’s rock!

I won’t be going into detail for the steps that follow after the Migration Warnings since those are intended for actually migrating the table into an In-memory table.

Native Compilation Advisor

The Native Compilation Advisor works the same way as the Memory Optimization Advisor. Right-click a Stored Procedure you want to convert to a Natively Compiled Stored Procedure and select the Native Compilation Advisor:


Again, click “Next >” on the Welcome screen and the Stored Procedure will be analyzed:


More unsupported stuff…

Clicking “Next >” here will give you an overview of the unsupported T-SQL elements in your Stored Procedure including the line numbers:


And again, loads of stuff you need to fix before migrating…

Wrap up

In this article we gave you an introduction to In-memory OLTP; we took a look at the system requirements and the unsupported features for both In-Memory tables and Natively Compiled Stored Procedures, and we did some migration checking.

As you have probably concluded, there is a lot you need to take care of before you can convert your tables into In-Memory tables. This feature isn’t for everyone, so think carefully and test it before making changes to your database(s)!

In the next article in this series we will start with the fun stuff, creating In-Memory tables!
