Introduction to MemSQL

Unlike established database products, MemSQL compiles a query all the way to machine code. This, combined with in-memory row store storage structures designed with code generation in mind, allows query processing rates on the order of 20 million rows per second per core against an in-memory skip list row store table. In many cases, that is about 10X faster than the per-core processing rate for scans of the B-tree indexes found in most legacy databases.

For columnstore tables, MemSQL uses a high-performance vectorized query execution engine that operates very efficiently on blocks of 4K rows at a time. This vectorized execution engine also makes use of the single-instruction, multiple-data (SIMD) instructions available on Intel and compatible processors that support the AVX2 instruction set. Processing rates on columnstore tables are often over 100 million rows per second per core, and sometimes as much as 2 billion rows per second per core when using SIMD and operating on encoded (compressed) data.

MemSQL also supports high-performance data movement for broadcast and shuffle operations. This implementation gets its speed by sending data over the wire in its native in-memory format, so it does not have to be serialized on the sending side or deserialized on the receiving side. Rather, it can be operated on directly after it is received, saving CPU instructions and thus total execution time.

Query Processing Summary

Together, compiled query plans, flexible storage options, lock-free data structures, MVCC, and a mature optimizer allow for better performance on multiple cores with high data accessibility, even under high concurrency. This is part of the "secret sauce" that makes MemSQL different from other data platforms out there.
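To picture what compiling a query "all the way to machine code" means, the following C++ sketch shows the kind of specialized loop a code-generating engine might produce for a query such as SELECT COUNT(*) FROM t WHERE a = 5. It is illustrative only and is not MemSQL's generated code; the function name and column layout are assumptions. The point is that the predicate becomes an inlined comparison in a tight loop, with no interpreter overhead and no per-row virtual dispatch.

    #include <cstddef>
    #include <cstdint>

    // Illustrative specialized loop for: SELECT COUNT(*) FROM t WHERE a = 5
    // (hypothetical layout: column `a` stored as a contiguous array of int32_t)
    int64_t compiled_count_a_eq_5(const int32_t* a, size_t n) {
        int64_t count = 0;
        for (size_t i = 0; i < n; ++i)
            count += (a[i] == 5);   // predicate inlined as a plain comparison
        return count;
    }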

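The vectorized, SIMD-accelerated columnstore scan described above can be pictured with the sketch below. It is a minimal illustration, not MemSQL's engine: it assumes a block of rows whose column values are already dictionary-encoded as 32-bit codes, and it counts the rows matching one code using AVX2 intrinsics, eight values per compare (compile with -mavx2).

    #include <immintrin.h>
    #include <cstdint>

    // Count rows in one block whose encoded column value equals `code`.
    // Hypothetical layout: one 32-bit dictionary code per row, n up to 4096.
    int count_matches(const uint32_t* codes, int n, uint32_t code) {
        __m256i needle = _mm256_set1_epi32((int)code);
        int count = 0;
        int i = 0;
        for (; i + 8 <= n; i += 8) {
            __m256i vals = _mm256_loadu_si256((const __m256i*)(codes + i));
            __m256i eq   = _mm256_cmpeq_epi32(vals, needle);    // 8 compares at once
            unsigned mask =
                (unsigned)_mm256_movemask_ps(_mm256_castsi256_ps(eq)); // 1 bit per row
            count += __builtin_popcount(mask);
        }
        for (; i < n; ++i)                                      // scalar tail
            count += (codes[i] == code);
        return count;
    }

Because the comparison runs on the compressed dictionary codes rather than on decoded values, this also illustrates why operating directly on encoded data pushes the per-core rate higher.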
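The data-movement point, that rows travel in their native in-memory format and need no serialization or deserialization, can be sketched as follows. This is a toy example, not MemSQL's wire protocol; the Row struct and function names are hypothetical, and it assumes sender and receiver share the same layout (same architecture and endianness), which is the premise of the optimization.

    #include <cstdint>
    #include <cstring>
    #include <type_traits>
    #include <vector>

    // Hypothetical fixed-layout row shared by sender and receiver.
    struct Row {
        int64_t key;
        int64_t value;
    };
    static_assert(std::is_trivially_copyable<Row>::value,
                  "memory layout doubles as wire format");

    // Sender: the in-memory batch IS the payload -- one bulk copy, no per-field encoding.
    std::vector<char> to_wire(const std::vector<Row>& batch) {
        std::vector<char> buf(batch.size() * sizeof(Row));
        std::memcpy(buf.data(), batch.data(), buf.size());
        return buf;
    }

    // Receiver: copy the bytes back into rows and operate on them directly,
    // with no field-by-field decode step.
    int64_t sum_values(const std::vector<char>& wire) {
        std::vector<Row> rows(wire.size() / sizeof(Row));
        std::memcpy(rows.data(), wire.data(), rows.size() * sizeof(Row));
        int64_t total = 0;
        for (const Row& r : rows) total += r.value;
        return total;
    }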