
Introducing Microsoft SQL Server 2014: Technical Overview

Ross Mistry and Stacia Misner

PUBLISHED BY
Microsoft Press
A Division of Microsoft Corporation
One Microsoft Way
Redmond, Washington 98052-6399
Copyright © 2014 by Microsoft Corporation
All rights reserved. No part of the contents of this book may be reproduced or transmitted in any form or by any
means without the written permission of the publisher.
Library of Congress Control Number: 2014934033
ISBN: 978-0-7356-8475-1
Printed and bound in the United States of America.
First Printing
Microsoft Press books are available through booksellers and distributors worldwide. If you need support related
to this book, email Microsoft Press Book Support at mspinput@microsoft.com. Please tell us what you think of
this book at http://www.microsoft.com/learning/booksurvey.
Microsoft and the trademarks listed at http://www.microsoft.com/en-us/legal/intellectualproperty/Trademarks/
EN-US.aspx are trademarks of the Microsoft group of companies. All other marks are property of their respective
owners.
The example companies, organizations, products, domain names, email addresses, logos, people, places, and
events depicted herein are fictitious. No association with any real company, organization, product, domain name,
email address, logo, person, place, or event is intended or should be inferred.
This book expresses the author’s views and opinions. The information contained in this book is provided without
any express, statutory, or implied warranties. Neither the authors, Microsoft Corporation, nor its resellers, or
distributors will be held liable for any damages caused or alleged to be caused either directly or indirectly by
this book.
Acquisitions Editor: Devon Musgrave
Developmental Editor: Devon Musgrave
Project Editor: Carol Dillingham
Editorial Production: Flyingspress and Rob Nance
Technical Reviewer: Stevo Smocilac; Technical Review services provided by Content Master, a member of CM
Group, Ltd.
Copyeditor: John Pierce
Indexer: Lucie Haskins
Cover: Twist Creative • Seattle

I dedicate this book to my Kyanna and Kaden. Follow your
passions, and never let anyone hold you back. And to the next
chapter in the game of life, “Return of the Autumn Wind.”
—Ross Mistry

I dedicate this book to my SQL family all around the world. You
all bring a lot of enthusiasm and passion of your own to this profession, which inspires me and spurs me on to learn more so that
I can share more. Thanks to each of you for what you give
to me.
—Stacia Misner

Contents at a glance

PART I      DATABASE ADMINISTRATION
CHAPTER 1   SQL Server 2014 editions and engine enhancements   3
CHAPTER 2   In-Memory OLTP investments   23
CHAPTER 3   High-availability, hybrid-cloud, and backup enhancements   39

PART II     BUSINESS INTELLIGENCE DEVELOPMENT
CHAPTER 4   Exploring self-service BI in Microsoft Excel 2013   63
CHAPTER 5   Introducing Power BI for Office 365   89
CHAPTER 6   Big data solutions   101
Contents

Introduction   xi

PART I      DATABASE ADMINISTRATION

Chapter 1   SQL Server 2014 editions and engine enhancements   3
    SQL Server 2014 enhancements for database administrators   4
        Availability enhancements   4
        Backup and restore enhancements   7
        Scalability and performance enhancements   8
        Security enhancements   10
        Platform for hybrid cloud   11
    SQL Server 2014 editions   13
        Enterprise edition   13
        Standard edition   14
        Business Intelligence edition   15
        Specialized editions   16
    SQL Server 2014 licensing overview   16
    Hardware and software requirements   17
    Installation, upgrade, and migration strategies   18
        In-place upgrade   18
        Side-by-side migration   20

Chapter 2   In-Memory OLTP investments   23
    In-Memory OLTP overview   23
    In-Memory OLTP fundamentals and architecture   25
        Four In-Memory OLTP architecture pillars   26
        In-Memory OLTP concepts and terminology   27
        Hardware and software requirements for memory-optimized tables   28
        In-Memory OLTP use cases   28
        Myths about In-Memory OLTP   29
    In-Memory OLTP integration and application migration   29
        Will In-Memory OLTP improve performance?   31
        Using the Memory Optimization Advisor to migrate disk-based tables   31
        Analyzing In-Memory OLTP behavior after memory-optimized table migration   32
    Using In-Memory OLTP   33
        Enabling In-Memory OLTP in a database   34
        Create memory-optimized tables and natively compile stored procedures   34
        Execute queries to demonstrate performance when using memory-optimized tables   35
    Appendix   36
        Memory-optimized table   36
        Natively compiled stored procedure   37

Chapter 3   High-availability, hybrid-cloud, and backup enhancements   39
    SQL Server high-availability and disaster-recovery enhancements   40
        AlwaysOn Availability Groups enhancements   40
        AlwaysOn Failover Cluster Instances enhancements   43
    Platform for hybrid cloud   46
        Cloud disaster recovery   46
        Deploy a database to a Windows Azure Virtual Machine   50
        Storing SQL Server data files in Windows Azure   51
        Extending on-premises apps to the cloud   52
    Backup and restore enhancements   53
        SQL Server backup to a URL   53
        Encryption for backups   56
        SQL Server Managed Backup to Windows Azure   59

PART II     BUSINESS INTELLIGENCE DEVELOPMENT

Chapter 4   Exploring self-service BI in Microsoft Excel 2013   63
    Excel Data Model and Power Pivot   63
        Working with the Data Model   64
        Managing data as a Power Pivot model   65
    Power Query   67
        Searching for data   68
        Importing data   69
        Loading the worksheet or Data Model   71
        Shaping data   71
        Combining data   75
    Power View   77
        Creating a Power View report   77
        Working with visualizations   78
        Sharing a Power View report   79
    Power Map   79
        Creating a Power Map   80
        Visualizing geographic data   80
        Exploring the 3-D map   84
        Displaying values over time   84
        Enhancing a map   84
        Working with tours, scenes, and layers   86
        Sharing Power Map   87

Chapter 5   Introducing Power BI for Office 365   89
    Power BI sites   89
        Configuring featured workbooks   91
        Opening Power View in HTML5   91
        Adding favorite reports to My Power BI   92
    Shared queries   93
        Creating a shared query   93
        Using a shared query   94
        Managing query metadata   94
        Reviewing usage analytics   94
    Power BI Q&A   95
    Power BI for Mobile   97
        Preparing a workbook for mobile viewing   97
        Using Power BI for Mobile   97
        Sharing a report from Power BI for Mobile   99
    Power BI administration   99

Chapter 6   Big data solutions   101
    Big data   101
        Describing big data   101
        Exploring the history of Hadoop   102
    HDInsight   104
        Creating a storage account for HDInsight   105
        Provisioning an HDInsight cluster   106
        Loading data into a storage account   106
        Working with Hive   108
        Analyzing data from Hive   110
    PolyBase   115
        Exploring the benefits of PolyBase   115
        Accessing HDFS data from PDW   116

Index   119

What do you think of this book? We want to hear from you!
Microsoft is interested in hearing your feedback so we can continually improve our books and learning
resources for you. To participate in a brief online survey, please visit:

microsoft.com/learning/booksurvey

Introduction

Microsoft SQL Server 2014 is the next generation of Microsoft’s information platform, with new features that deliver faster performance, expand capabilities in the
cloud, and provide powerful business insights. In this book, we explain how SQL Server
2014 incorporates in-memory technology to boost performance in online transactional processing (OLTP) and data-warehouse solutions. We also describe how it eases
the transition from on-premises solutions to the cloud with added support for hybrid
environments. SQL Server 2014 continues to include components that support analysis,
although no major new features for business intelligence were included in this release.
However, several advances of note have been made in related technologies such as
Microsoft Excel 2013, Power BI for Office 365, HDInsight, and PolyBase, and we describe
these advances in this book as well.

Who should read this book?
This book is for anyone who has an interest in SQL Server 2014 and wants to understand its capabilities. Many new improvements have been made to SQL Server 2014,
but in a book of this size we cannot cover every improvement in its entirety—or cover
every feature that distinguishes SQL Server from other databases or SQL Server 2014
from previous versions. Consequently, we assume that you have some familiarity with
SQL Server already. You might be a database administrator (DBA), an application developer, a business intelligence solution architect, a power user, or a technical decision
maker. Regardless of your role, we hope that you can use this book to discover the
features in SQL Server 2014 that are most beneficial to you.

Assumptions
We assume that you have at least a minimal understanding of SQL Server from both a
database administrator’s perspective and a business-intelligence perspective, including
a general understanding of Microsoft Excel, which is often used with SQL Server. In
addition, having a basic understanding of Windows Azure is helpful for getting the
most from the topics associated with private, public, and hybrid-cloud solutions.



Who should not read this book
As mentioned earlier, the purpose of this book is to provide readers with a high-level
preview of the capabilities and features of SQL Server 2014. This book is not intended to
be a step-by-step, comprehensive guide.

How is this book organized?
SQL Server 2014, like its predecessors, is more than a database engine. It is a collection
of components that you can implement separately or as a group to form a scalable,
cloud-ready information platform. In broad terms, this platform is designed for two
purposes: to help you manage data and to help you deliver business intelligence. Accordingly, we divided this book into two parts to focus on the new capabilities in each
of those areas.
Part 1, “Database administration,” is written with the database administrator (DBA)
in mind and introduces readers to the numerous innovations in SQL Server 2014. Chapter 1, “SQL Server 2014 editions and engine enhancements,” discusses the key enhancements affiliated with availability, scalability, performance, manageability, security, and
programmability. It then outlines the different SQL Server 2014 editions; hardware and
software requirements; and installation, upgrade, and migration strategies. In
Chapter 2, “In-Memory OLTP investments,” readers learn about the new in-memory
feature that provides significant performance gains for OLTP workloads. Chapter 3,
“High-availability, hybrid-cloud, and backup enhancements,” aims to bring readers up
to date on these important capabilities that are fully integrated into SQL Server 2014
and Windows Azure.
Part 2, “Business intelligence development,” is for readers who need to understand
how SQL Server 2014 and related technologies can be used to build analytical solutions that enable deeper insights through the combination of all types of data—big or
small, structured or unstructured. Chapter 4, “Exploring self-service BI in Microsoft Excel
2013,” introduces add-ins for Excel that expand beyond the data mash-up capabilities
of Power Pivot by supporting the abilities to find and manipulate data, create rich
interactive visualizations, and explore location-based data on a 3-D map. Chapter 5,
“Introducing Power BI for Office 365,” shows you not only how to move Excel workbooks to the cloud for centralized access in SharePoint Online, but also how to use
Power BI features online to manage workbooks and even get answers to natural-language questions from published workbooks. Chapter 6, “Big data solutions,”


explores technologies that extend the reach of analytical tools beyond relational data
to Hadoop by using Microsoft HDInsight in the cloud or PolyBase on-premises.

Conventions and features in this book
This book presents information using the following conventions, which are designed to
make the information more readable and easy to follow:
■■ Step-by-step instructions consist of a series of tasks, presented as numbered steps (1, 2, and so on) listing each action you must take to complete a task.

■■ Boxed elements with labels such as “Note” provide additional information.

■■ Text that you type (apart from code blocks) appears in bold.

■■ Transact-SQL code is provided to help you further understand specific examples.

Pre-release software
To help you become familiar with SQL Server 2014 as soon as possible after its release,
we wrote this book using examples that work with the Community Technology Preview (CTP) 2 version of the product. Consequently, the final version might include new
features, and features we discuss might change or disappear. Refer to the “What’s New
in SQL Server 2014” topic in Books Online for SQL Server at http://msdn.microsoft.com/en-us/library/bb500435(v=sql.120).aspx for the most up-to-date list of changes to the
product. Be aware that you might also notice some minor differences between the CTP
2 version of the product and the descriptions and screen shots that we provide.

Acknowledgments
First, I would like to thank my colleagues at Microsoft Press for providing me with 
another writing opportunity, which allows me to enhance the careers of many data-platform professionals around the world. Special kudos go out to Devon Musgrave,
Colin Lyth, Anne Hamilton, Karen Szall, Carol Dillingham, John Pierce, Stevo Smocilac,
Rob Nance, Carrie Wicks, and Lucie Haskins. The publishing team’s support throughout
this engagement is much appreciated.
Second, this book would not have been possible without support from colleagues on
the SQL Server team who provided introductions, strategic technology guidance,


technical reviews, and edits. I would like to thank Sunil Agarwal, Luis Carlos Vargas
Herring, Sethu Srinivasan, Darmadi Komo, and Luis Daniel Soto Maldonado.
Third, I would like to acknowledge Shirmattie Seenarine for assisting me on another
Microsoft Press title. Shirmattie’s hard work, contributions, edits, and rewrites are much
appreciated. And to my author partner, Stacia Misner, I want to thank you for once
again doing an excellent job on the business intelligence part of this book.
Finally, I would like to thank my mentors at Microsoft and my amazing Canadian team
at the Microsoft Technology Center who have allowed me to achieve success north of
the border.
–Ross Mistry

Here I am collaborating yet again with Ross and thank him for the opportunity. Like
Ross, I appreciate very much the team at Microsoft Press that helped us through the
process of creating this book. Various other people also helped behind the scenes by
answering questions and exploring alternative options for working with the new
technologies available for the Microsoft business-intelligence stack. At Microsoft, I
thank Cristian Petculescu, Ed Price, Igor Peeve, and Blair Neumann. In the SQL Server
community, I have Joey D’Antoni and Aaron Nelson to thank for their willingness to
answer random questions at odd hours that ultimately helped me save much time
during the writing of this book.
–Stacia Misner

Errata & book support
We’ve made every effort to ensure the accuracy of this book. Any errors that have been
reported since this book was published are listed at:
http://aka.ms/IntroSQLServer2014/errata
If you discover an error that is not already listed, please submit it to us at the same
page. If you need additional support, email Microsoft Press Book Support at
mspinput@microsoft.com.
Please note that product support for Microsoft software and hardware is not offered
through the previous addresses. For help with Microsoft software or hardware, go to
http://support.microsoft.com.


We want to hear from you
At Microsoft Press, your satisfaction is our top priority, and your feedback our most
valuable asset. Please tell us what you think of this book at:
http://aka.ms/tellpress
The survey is short, and we read every one of your comments and ideas. Thanks in
advance for your input!

Stay in touch
Let’s keep the conversation going! We’re on Twitter: http://twitter.com/MicrosoftPress.


PART I

Database administration

CHAPTER 1   SQL Server 2014 editions and engine enhancements   3
CHAPTER 2   In-Memory OLTP investments   23
CHAPTER 3   High-availability, hybrid-cloud, and backup enhancements   39

CHAPTER 1

SQL Server 2014 editions and engine enhancements

SQL Server 2014, the latest complete information platform from Microsoft, embodies the new
era of Microsoft’s Cloud OS, which provides organizations and customers with one consistent
platform for infrastructure, apps, and data that span customer data centers, hosting service-provider data centers, and the Microsoft public cloud. The benefits that customers experience
with a consistent platform include common development, management, data, identity, and virtualization, no matter where an application is being run.
SQL Server 2014 also offers organizations the opportunity to efficiently protect, unlock,
and scale their data across the desktop, mobile devices, data centers, and a private, public, or
hybrid cloud. Furthermore, SQL Server 2014 builds on the success of SQL Server 2012 by making a strong impact on organizations worldwide with significant new capabilities that are tightly
aligned to the trends transforming the IT industry, including Microsoft’s Cloud OS. SQL Server
provides mission-critical performance for the most demanding database applications while delivering the highest levels of security, scalability, high availability, and support. SQL Server 2014’s
mission is to deliver faster insights into big data, small data—all data—and, most importantly,
deliver business intelligence in a consumable manner through familiar tools such as Microsoft
Excel.
Finally, SQL Server 2014 enables new unique hybrid-cloud solutions based on the Cloud OS.
These solutions can positively affect an organization’s bottom line and allow an organization to
create innovative solutions for its database applications. Figure 1-1 shows the mission statement
for SQL Server 2014, based on three pillars.

FIGURE 1-1  SQL Server 2014, a cloud-ready information platform.

This chapter examines the new features, capabilities, and editions of SQL Server 2014 from a database administrator’s perspective. It also describes hardware and software requirements and installation strategies.



SQL Server 2014 enhancements for database administrators
Organizations today are looking for a trusted, cost-effective, and scalable database platform
that offers mission-critical confidence, breakthrough insights, and flexible cloud-based offerings.
These organizations regularly face changing business conditions in the global economy and numerous challenges to remaining competitive, such as gaining valuable business insights, using the right
information at the right time, and working successfully within budget constraints.
In addition, organizations must be fluid because new and important trends regularly change the
way software is developed and deployed. Some of these trends focus on in-memory databases that
can improve performance and efficiency, a capability that has most likely been influenced by the dramatic drop in the cost of memory. Another important trend sweeping over IT is the adoption of cloud
computing by customers who require an infrastructure that provides a generational leap in agility,
elasticity, and scalability across a set of shared resources with more automation of cloud services and
self-service provisioning. Last but not least is the data-explosion trend, where discussions about storage capacity naturally involve the word zettabytes. Customers want to gain new insights about their
businesses faster, so they are analyzing larger data sets, both internal and external. These new insights
are helping customers obtain a competitive edge, but they are also leading to massive volumes of
data.
Microsoft has made major investments in SQL Server 2014 as a whole. The new features and capabilities that should interest database administrators are introduced in this chapter, including availability, backup and restore, scalability, performance, security, manageability, programmability, and
the platform for the hybrid cloud. Other chapters in this book offer a deeper explanation of the major
technology investments.

Availability enhancements
A tremendous number of high-availability enhancements were added to SQL Server 2012, which increased both the confidence that organizations have in their databases and the maximum uptime for
those databases. Microsoft built on this work with new high-availability enhancements in SQL Server
2014.

AlwaysOn Availability Groups
In SQL Server 2012, AlwaysOn Availability Groups was the most highly anticipated feature related to
the Database Engine for DBAs. This high-availability and disaster-recovery capability provided protection by allowing one or more databases to fail over as a single unit. Shared storage was not needed,
and replicas could be leveraged to offload backups and reporting workloads from the primary.
Enhancements in SQL Server 2014 have given databases even better data redundancy, protection,
and availability. First, the maximum number of secondary replicas has increased from four to eight.
This change allows organizations to further offload read-only operations such as reporting and backups to additional secondary replicas. The additional secondary replicas can also be placed in more
data centers for higher levels of protection and disaster recovery. Moreover, with SQL Server 2014, the

secondary replicas have been enhanced and can be used for read-only operations even in the case
of network failures or loss of quorum between replicas. Second, whether a manual or an automatic
failover is performed, the operation no longer needs to fail over databases one at a time. Numerous
databases can now fail over simultaneously, which increases availability.
Finally, SQL Server 2014 introduces the Add Azure Replica wizard for organizations looking to
create a secondary replica that is stored in Windows Azure, Microsoft’s public cloud. (See Figure 1-2.)
Placing a secondary replica in Windows Azure is a great way for an organization to achieve additional
disaster-recovery protection in the unlikely event that all data centers hosting its secondary replicas
become unavailable.

FIGURE 1-2  A global organization using AlwaysOn Availability Groups, including Windows Azure, to achieve high availability and disaster recovery.




In Figure 1-2, company X is an organization with a global presence. It achieves both high
availability and disaster recovery for mission-critical databases by using AlwaysOn Availability Groups.
Secondary replicas are placed in data centers around the world, including in Windows Azure, and are
being used to offload reporting and backups.
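Under the hood, secondary replicas, whether on-premises or in Windows Azure, are added with the same ALTER AVAILABILITY GROUP syntax that SQL Server 2012 introduced. The following is a minimal Transact-SQL sketch of adding an asynchronous-commit, readable secondary; the availability group, server, and endpoint names are hypothetical and not taken from this book:

ALTER AVAILABILITY GROUP [SalesAG]
ADD REPLICA ON N'AZUREVM01'
WITH (
    ENDPOINT_URL = N'TCP://azurevm01.contoso.cloudapp.net:5022',
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,        -- transactions are shipped asynchronously to the cloud replica
    FAILOVER_MODE = MANUAL,                         -- fail over manually to this replica during a disaster
    BACKUP_PRIORITY = 50,
    SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY)  -- allow the replica to service reporting and backups
);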

AlwaysOn Failover Cluster Instances (FCI)
AlwaysOn Failover Cluster Instances (FCIs) is a feature that provides superior instance-level protection by using Windows Server Failover Clustering and shared storage. Traditionally, each SQL Server
failover cluster instance required at least one logical unit number (LUN) because the LUN was the unit
of failover. This requirement imposed a significant limitation because when a database administrator
ran out of drive letters or mount points, the administrator also lost the opportunity to host any more
failover cluster instances. This restriction has been addressed in SQL Server 2014 with the use of the Cluster Shared Volumes (CSVs) feature, as shown in Figure 1-3, which requires fewer LUNs. Note that CSVs are included in Windows Server 2012 and later releases.

FIGURE 1-3  Using the Cluster Shared Volume feature associated with Windows Server 2012 R2 for storing SQL Server 2014 virtual machines.


CSVs reduce the number of LUNs (disks) required for SQL Server clustered instances because many
failover cluster instances can use a single CSV LUN and can fail over without causing the other FCIs
on the same LUN to also fail over. Finally, new dynamic management views introduced in SQL Server
2014 help administrators troubleshoot and return information pertaining to FCIs.
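The sketch below shows the kind of cluster metadata these views expose. sys.dm_os_cluster_nodes has been available for several releases; sys.dm_io_cluster_valid_path_names is one of the newer views and, on a CSV deployment, indicates whether a path is backed by a Cluster Shared Volume. Column names reflect the CTP builds used for this book and may change:

-- Cluster nodes that can host this failover cluster instance
SELECT NodeName, status_description, is_current_owner
FROM sys.dm_os_cluster_nodes;

-- Shared paths that are valid destinations for data and log files
SELECT path_name, is_cluster_shared_volume
FROM sys.dm_io_cluster_valid_path_names;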

Backup and restore enhancements
Regardless of how many database replicas an organization has within its enterprise, there is still a
need to protect data with backups. Hence, Microsoft continued its investments in backup and recovery to protect data with SQL Server 2014. The new enhancements include the following:
■■ SQL Server Managed Backups to Windows Azure  Backups in SQL Server 2014 natively support the Windows Azure Blob storage service for simplifying backups to the cloud. Hybrid-cloud backups reduce capital expenditures (CAPEX) and operational expenditures (OPEX) and improve disaster recovery for an organization’s backups because the backups stored in the Windows Azure cloud are automatically replicated to multiple data centers around the world. The process to exploit this new enhancement is fairly straightforward. First, create a Windows Azure storage account and a blob container, and then generate a SQL Server credential that will be used to store security information and access the Windows Azure storage account. Finally, create a backup that will use the Windows Azure Blob storage service.

■■ SQL Server backups to URLs  SQL Server backups have been updated to use URLs as one of the destination options when backups are performed with SQL Server Management Studio. Backups are stored in Windows Azure because the Windows Azure Blob storage service is used. Previously, only Transact-SQL, PowerShell, and SQL Server Management Objects (SMO) were supported when using SQL Server 2012 SP1 CU2 and later. A database backup to a URL destination is depicted in Figure 1-4, and a Transact-SQL sketch follows the figure.

■■ Encryption for backups  For years, DBAs have been asking for the ability to natively encrypt data while creating a backup. This task can now be performed in SQL Server 2014 by specifying an encryption algorithm and an encryptor—a certificate or an asymmetric key—to secure the encryption key. The industry-standard encryption algorithms that are supported include AES 128, AES 192, AES 256, and Triple DES. Encrypted backups are supported in Windows Azure storage or on-premises.


FIGURE 1-4  Backing up a database to an Azure storage container by using a URL prefix.
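The following Transact-SQL sketch illustrates the backup-to-URL and backup-encryption enhancements described above. The storage account, container, database, and certificate names are placeholders; the credential stores the storage account name and access key, and the certificate used for encryption must already exist in the master database:

-- Credential holding the Windows Azure storage account name and access key
CREATE CREDENTIAL AzureBackupCredential
WITH IDENTITY = 'mystorageaccount',
     SECRET = '<storage account access key>';

-- Back up directly to the Windows Azure Blob storage service
BACKUP DATABASE AdventureWorks2012
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/AdventureWorks2012.bak'
WITH CREDENTIAL = 'AzureBackupCredential',
     COMPRESSION, STATS = 10;

-- Same destination, this time encrypting the backup with AES 256 and a server certificate
BACKUP DATABASE AdventureWorks2012
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/AdventureWorks2012_encrypted.bak'
WITH CREDENTIAL = 'AzureBackupCredential',
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert),
     COMPRESSION, STATS = 10;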

Scalability and performance enhancements
The SQL Server product group made sizable investments to improve scalability and performance associated with the SQL Server Database Engine. Some of the main enhancements allow organizations
to improve their SQL Server workloads, especially when using Windows Server 2012 or later releases:
■■ In-Memory OLTP  In-Memory OLTP (project code name Hekaton) is considered the most important feature release and investment in SQL Server 2014. This new feature is fully integrated into the Database Engine component. Databases were originally designed to reside on disk because of the high costs associated with procuring memory. This situation has since changed, due mainly to the significant drop in price for memory. It is now possible for most OLTP databases to fit into memory, which reduces I/O expense and in turn increases transaction performance. To date, organizations testing In-Memory OLTP have reported promising numbers: transaction speeds improved up to 30 times that of their past performance. As with any new capability, mileage will vary, and the best performance gains have been achieved when the business logic resides in the database and not in the applications.

■■ Computing resources  At the heart of the Microsoft Cloud OS vision, Windows Server 2012 R2 offers a tremendous amount of computing resources to provide scale for large, mission-critical databases not only in a physical environment but also in a virtual environment. Windows Server 2012 R2 supports up to 2,048 logical processors for a Hyper-V host, which can handle the largest database applications. In a virtual environment you can use up to 64 virtual CPUs, up to 1 terabyte (TB) of memory, and up to 64 TB of virtual disk capability for each Hyper-V virtual guest. In addition, you now have up to 64 nodes in a SQL Server cluster and up to 8,000 virtual machines within a Hyper-V cluster. Enterprise scale on this magnitude has never been achieved before.

■■ Scale networking  Windows Server 2012 R2 introduced many new capabilities in the area of virtual networking that bolster the SQL Server experience. Network virtualization provides a needed layer of abstraction, which allows SQL Server workloads to be moved from one data center to another. NIC teaming, which was introduced with Windows Server 2012, still exists and can be used to provide fault tolerance by enabling multiple network interfaces to work together as a team. Finally, SMB Multichannel and Network Quality of Service (QoS) can be used in conjunction with SQL Server to improve database application availability over physical and virtual networks by ensuring that multiple paths are available to application shares and that sufficient bandwidth is reserved for the application.

■■ Scale storage  Windows Server 2012 introduced Storage Spaces, a feature that was enhanced in Windows Server 2012 R2. Storage Spaces allow database administrators to take advantage of sophisticated virtualization enhancements to the storage stack that can distribute or tier SQL Server workloads across storage pools. For example, high-capacity spinning disks can be used to store less frequently used data, while high-speed solid-state disks can be used to store more frequently used data. Windows Server 2012 R2 recognizes the tiers and optimizes them by placing hot data in the fastest tier and less-utilized data in lower tiers, improving performance without increasing costs.

■■ Resource Governor enhancements  In previous versions of SQL Server, it was possible to pool CPU and memory to manage SQL Server workloads and system-resource consumption. In SQL Server 2014, I/O has been added to Resource Governor, which lets I/O be pooled and tiered following an organization’s criteria. This ensures greater scale and performance predictability for your SQL Server workloads, especially when running applications in private clouds and environments managed by hosting organizations.

■■ Buffer pool extension  Buffer pool extension enables integration of a nonvolatile random-access memory extension with the Database Engine buffer pool to significantly improve I/O throughput. Solid-state drives (SSDs) are a typical form of nonvolatile random-access memory and can be added to the SQL Server system to improve query performance. Benefits that can be achieved when you use buffer pool extension include increased random I/O throughput, reduced I/O latency, increased transaction throughput, improved read performance with a larger buffer pool, and a caching architecture that can take advantage of present and future low-cost memory drives.

■■ Sysprep enhancements  SQL Server 2014 supports new functionality in Sysprep, which allows you to fully support deployments of clustered SQL Server instances. This capability reduces deployment time for SQL Server failover clusters and is great for building private or public clouds.

■■ Columnstore enhancements  Columnstore indexes are used to accelerate query performance for data warehousing that primarily performs bulk loads. In the previous version of SQL Server, tables that had columnstore indexes could not be updated. In the past, you had to drop the index, perform the update, and then rebuild the index; use partition switching; or maintain two tables—one with a columnstore index and another for updating—and then use UNION ALL queries to return data. As you can imagine, this could be challenging from an administrative perspective. With SQL Server 2014, the in-memory columnstore has been modified to support updateable operations such as inserts, updates, and deletes. A short Transact-SQL sketch of several of these enhancements follows this list.
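Several of the enhancements in the preceding list are surfaced through short Transact-SQL statements. The sketch below is illustrative rather than prescriptive: the file path, resource pool name, IOPS values, and table name are placeholders, and the syntax reflects the CTP 2 builds used for this book:

-- Buffer pool extension: extend the buffer pool onto a file stored on a local SSD
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'E:\SSDCACHE\BufferPoolExtension.bpe', SIZE = 64 GB);

-- Resource Governor: I/O can now be governed per resource pool in addition to CPU and memory
CREATE RESOURCE POOL ReportingPool
WITH (MIN_IOPS_PER_VOLUME = 100, MAX_IOPS_PER_VOLUME = 400);
ALTER RESOURCE GOVERNOR RECONFIGURE;

-- Updateable columnstore: a clustered columnstore index now supports inserts, updates, and deletes
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;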

Security enhancements
Approximately a decade has passed since Microsoft initiated its Trustworthy Computing initiative.
Compared with other major database players in the industry, SQL Server has had the best track record
since then, with the fewest vulnerabilities and exposures. Based on results from an April
2013 study conducted by the National Institute of Standards and Technology (NIST), shown in Figure
1-5, SQL Server led the way five years in a row as the least-vulnerable database among the top data
platforms tracked. Moreover, it is currently the most-utilized database in the world, with 42 percent of
market share. (Source: ITIC 2013: “SQL Server Delivers Industry-Leading Security.”)

FIGURE 1-5  Common vulnerabilities and exposures reported to NIST from January 2006 to January 2013.

With SQL Server 2014, the product continues to expand its solid foundation to deliver enhanced
security and compliance within the database platform. By splitting the database administrator role
from the system administrator role and allowing organizations to further customize the rights of each
DBA or system administrator, greater compliance and security can be achieved. Here is a snapshot of
some of the enhanced enterprise-ready security capabilities and controls that enable organizations to
meet strict compliance policies and regulations:


■■ Redefined engineering security process
■■ CC certification at high assurance level
■■ Enhanced separation of duty
■■ Transparent data encryption (TDE)
■■ Encryption key management
■■ Support for Windows Server Core

A new set of explicit server-level and database-level permissions for securables has been introduced in SQL Server 2014 to further enhance access and security (a short GRANT sketch follows this list):
■■ CONNECT ANY DATABASE  A server-level permission that grants a login the ability to connect to all databases that currently exist and to any new databases that might be created in the future.

■■ IMPERSONATE ANY LOGIN  Another new server-level permission, IMPERSONATE ANY LOGIN allows a middle-tier process to impersonate the account of clients connecting to it as it connects to databases.

■■ SELECT ALL USER SECURABLES  When granted, this new server-level permission allows a login to view data in all databases that the user can connect to.

■■ ALTER ANY DATABASE EVENT SESSION  Unlike the other permissions listed here, ALTER ANY DATABASE EVENT SESSION is a database-level permission. It is typically used to give a role the ability to read metadata associated with a database for monitoring purposes. A perfect example would be Microsoft System Center Operations Manager agents used to proactively monitor a SQL Server database.
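A brief Transact-SQL sketch of granting these permissions follows; the logins, database, and role names are hypothetical. The three server-level permissions are granted to logins in the master database, while ALTER ANY DATABASE EVENT SESSION is granted to a database principal:

USE master;
GRANT CONNECT ANY DATABASE TO [CONTOSO\MonitoringSvc];
GRANT SELECT ALL USER SECURABLES TO [CONTOSO\AuditReader];
GRANT IMPERSONATE ANY LOGIN TO [MidTierAppLogin];

-- Database-level permission, granted to a role used for monitoring
USE SalesDB;
GRANT ALTER ANY DATABASE EVENT SESSION TO MonitoringRole;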

Platform for hybrid cloud
SQL Server 2014 empowers a diverse set of hybrid-cloud opportunities that can reduce both capital
and operational expenditures for an organization. These include backing up to the cloud, extending
high availability to the cloud, hybrid application development, and improved on-premises disaster
recovery. Let’s review the different types of hybrid-cloud solutions that organizations can take advantage of:
■■ Deploy a database to SQL Server in a Windows Azure Virtual Machine  A new wizard in SQL Server 2014 allows a DBA to use SQL Server Management Studio to seamlessly deploy and transition a database from an on-premises deployment to a Windows Azure Virtual Machine. Windows Azure Virtual Machines are an Infrastructure-as-a-Service (IaaS) offering included in Windows Azure. The implementation is based on a few simple steps:

   1. Specify the source connection settings such as the SQL Server instance, database name, and temporary location for backup files.
   2. Provide Windows Azure sign-in credentials that include a management certificate.
   3. Enter information for the Windows Azure Virtual Machine or, if you plan on deploying your database to an existing Windows Azure Virtual Machine, provide the DNS name for the cloud service and the credentials for the virtual machine and SQL Server instance.

■■ Deploy a database to Windows Azure SQL Database  Another new feature in SQL Server 2014 allows a DBA to easily deploy an on-premises database to Windows Azure SQL Database. Windows Azure SQL Database is a relational-database service in the Windows Azure Platform-as-a-Service (PaaS) environment. The wizard used in this type of deployment can also be used to move databases from Windows Azure SQL Database to an on-premises SQL Server instance or to move databases from one instance of Windows Azure SQL Database to another. The deployment process is very straightforward. Only credentials associated with the Windows Azure SQL Database account are required; the wizard takes care of the rest.

■■ Simplified cloud backup and cloud disaster recovery  As mentioned earlier, SQL Server 2014 and Windows Azure are tightly integrated, providing organizations with the ability to implement hybrid scenarios such as backing up on-premises databases to the cloud. To achieve this goal, all that is required is a URL and a storage key. Then, a straightforward policy can be created and used to back up a single database or all databases within a SQL Server instance directly to Windows Azure storage. This process can be automatic or manual. Windows Azure storage provides additional benefits through out-of-the-box geo-replication. The use of geo-replication protects databases because the backups are stored in multiple Windows Azure data centers around the world, hence offering superior disaster recovery. Finally, the backups stored in Windows Azure can be restored on Windows Azure Virtual Machines should a disaster take out an organization’s on-premises data center (a restore-from-URL sketch follows this list).

■■ Better together: AlwaysOn and Windows Azure  Many organizations need to account for offsite disaster recovery, but they do not have a secondary data center, or their secondary data center is within close proximity to the primary data center. Therefore, a single disaster could potentially take out both the primary and the secondary data centers, causing a major outage. In these cases, the hybrid cloud provides organizations with better disaster-recovery scenarios, and these scenarios also lower the recovery time objective (RTO) and the recovery point objective (RPO). As mentioned earlier, the Add Azure Replica wizard in SQL Server 2014 can be used to create secondary AlwaysOn asynchronous replicas in Windows Azure Virtual Machines. Therefore, in the event of a disaster, a replica can be failed over to the Windows Azure public cloud because all transactions committed on-premises will be sent asynchronously to the Windows Azure replica.

■■ Extend on-premises applications  An organization can take advantage of hybrid scenarios to extend and scale on-premises applications. For example, a physical retailer could use its on-premises SQL Server and supporting infrastructure to continue to serve physical retail transactions and utilize cloud-based Windows Azure infrastructure services to support online sales. Another scenario is a pizza organization that needs to scale to support the irregular peak demands of its business, which occur only on Super Bowl Sunday. The pizza maker could transition the web tier of its online pizza-ordering system to Windows Azure while maintaining the back-end database on-premises. Windows Azure would provide the automatic scale-out functionality to support the irregular demand without the need to overprovision on-premises infrastructure, which would be costly and seldom used throughout the year.

■■ Enhancing backups with Windows Azure Blob storage  Windows Azure Blob storage allows for a flexible and reliable backup option that does not require the overhead of traditional hardware management, including the hassle and additional costs associated with storing backups offsite. In addition, by leveraging a direct backup to Windows Azure Blob storage, you can bypass the 16-disk limit, which was a concern in the past.
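To round out the cloud disaster-recovery scenario above, the sketch below restores a database on a SQL Server instance running in a Windows Azure Virtual Machine directly from a backup kept in Windows Azure Blob storage. The URL, credential, logical file names, and drive letters are placeholders:

RESTORE DATABASE AdventureWorks2012
FROM URL = 'https://mystorageaccount.blob.core.windows.net/backups/AdventureWorks2012.bak'
WITH CREDENTIAL = 'AzureBackupCredential',
     MOVE 'AdventureWorks2012_Data' TO 'F:\Data\AdventureWorks2012.mdf',
     MOVE 'AdventureWorks2012_Log' TO 'F:\Log\AdventureWorks2012_log.ldf',
     STATS = 10;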

As you can see, SQL Server 2014 delivers many new capabilities for building hybrid-cloud solutions
that use Microsoft’s Cloud OS vision. This vision entails a consistent experience with a common set of
tools across the entire application life cycle, no matter where you are running your data platform.

SQL Server 2014 editions
Similar to the previous version, SQL Server 2014 is available in three principal editions. All three editions have tighter alignment than their predecessors and were designed to meet the needs of almost
any customer. Each edition comes in a 32-bit and 64-bit version. The principal editions, as shown in
Figure 1-6, are the following:
■■ Enterprise edition
■■ Standard edition
■■ Business Intelligence edition

FIGURE 1-6  The main editions of SQL Server 2014.

Enterprise edition
The Enterprise edition of SQL Server 2014 is the uppermost SKU and is considered the premium offering. It is designed to meet the highest demands of large-scale data centers and data warehouse solutions by providing mission-critical performance and availability for tier 1 applications and the ability
to deploy private-cloud, highly virtualized environments and large, centralized, or external-facing
business-intelligence solutions.




Note  The Datacenter edition included in the previous version of SQL Server is now retired.
However, all capabilities of the Datacenter edition are in the Enterprise edition of SQL
Server 2014.

The Enterprise edition features include the following:
■■ A maximum number of cores, subject to the operating system being used
■■ Unlimited virtualization
■■ AlwaysOn to achieve advanced high availability
■■ Unlimited virtualization for organizations with software assurance
■■ Support for the new columnstore indexing feature
■■ Advanced auditing
■■ Transparent data encryption
■■ Compression and partitioning

In addition, all the features and capabilities of the Business Intelligence edition are available, including:
■■ Reporting
■■ Analytics
■■ Multidimensional BI semantic model
■■ Data quality services
■■ Master data services
■■ In-memory tabular BI semantic model
■■ Self-service business intelligence

Standard edition
The Standard edition is a data-management platform tailored toward departmental databases and
limited business-intelligence applications that are typically appropriate for medium-class solutions,
smaller organizations, or departmental solutions. It does not include all the bells and whistles of the
Enterprise and Business Intelligence editions, although it continues to offer best-in-class manageability and ease of use. Compared with the Enterprise and Business Intelligence editions, the Standard
edition supports up to 16 cores and includes the following features:


■■ Spatial support
■■ FileTable
■■ Policy-based management
■■ Corporate business intelligence
■■ Reporting
■■ Analytics
■■ Multidimensional BI semantic model
■■ AlwaysOn 2-Node failover clustering to achieve basic high availability
■■ Up to four processors, up to 64 GB of RAM, one virtual machine, and two failover clustering nodes

Business Intelligence edition
The Business Intelligence edition offers organizations the full suite of powerful BI capabilities, such as
scalable reporting and analytics, Power View, and PowerPivot. It is tailored toward organizations that
need corporate business intelligence and self-service capabilities but do not require the full online
transactional processing (OLTP) performance and scalability found in the Enterprise edition. Here is a
high-level list of what the Business Intelligence edition includes:
■■ Up to a maximum of 16 cores for the Database Engine
■■ Maximum number of cores for business-intelligence processing
■■ All of the features found in the Standard edition
■■ Corporate business intelligence
   • Reporting
   • Analytics
   • Multidimensional BI semantic model
■■ Self-service capabilities
   • Alerting
   • Power View
   • PowerPivot for SharePoint Server
■■ Enterprise data management
   • Data quality services
   • Master data services
■■ In-memory tabular BI semantic model
■■ Basic high availability can be achieved with AlwaysOn 2-Node failover clustering

Specialized editions
Above and beyond the three principal editions of SQL Server 2014, Microsoft continues to deliver
specialized editions for organizations that have a unique set of requirements. Some examples include
the following:
■■ Developer  The Developer edition includes all the features and functionality found in the Enterprise edition; however, it is meant strictly for the purpose of development, testing, and demonstration. Note that you can transition a SQL Server Developer installation directly to production (without reinstallation) by upgrading it to SQL Server 2014 Enterprise.

■■ Web  Available at a much more affordable price than the Enterprise and Standard editions, the SQL Server 2014 Web edition is focused on service providers hosting Internet-facing web-services environments. Unlike the Express edition, this edition doesn’t have database size restrictions; it supports four processors and up to 64 GB of memory. SQL Server 2014 Web edition does not offer the same premium features found in the Enterprise and Standard editions, but it still remains the ideal platform for hosting websites and web applications.

■■ Express  This free edition is the best entry-level alternative for independent software vendors, nonprofessional developers, and hobbyists building client applications. Individuals learning about databases or learning how to build client applications will find that this edition meets all their needs. In a nutshell, this edition is limited to one processor and 1 GB of memory, and it can have a maximum database size of 10 GB. Also, Express is integrated with Microsoft Visual Studio.

Note  To compare the different editions of SQL Server 2014 based on their key capabilities, review “Features Supported by the Editions of SQL Server 2014” at http://msdn.microsoft.com/en-us/library/cc645993(v=sql.120).aspx and http://msdn.microsoft.com/en-us/library/ms144275(v=sql.120).aspx.

SQL Server 2014 licensing overview
The licensing models for SQL Server 2014 are simplified to better align with customer solutions and are optimized for virtualization and cloud deployments. Organizations should pay close
attention to the information that follows to ensure that they understand the licensing model. With
SQL Server 2014, the licensing for computing power is core-based and the Business Intelligence and
Standard editions are available under the Server + Client Access License (CAL) model. In addition,
organizations can save on cloud-based computing costs by licensing individual database virtual machines. Unfortunately, because each organization’s environment is unique, this section cannot provide

an overview of how the licensing changes affect an organization’s environment. For more information
on the licensing changes and how they influence your organization, please contact your Microsoft
representative or partner.

Hardware and software requirements
The recommended hardware and software requirements for SQL Server 2014 vary depending on the
component being installed, the database workload, and the processor class that will be used. Review
Table 1-1 and Table 1-2 to understand the hardware and software requirements for SQL Server 2014.
Because SQL Server 2014 supports many processor types and operating systems, Table 1-1 covers
the hardware requirements only for a typical SQL Server 2014 installation. Typical installations include
SQL Server 2014 Standard or Enterprise running on the Windows Server 2012 R2 operating system.
Readers needing information for other scenarios should reference “Hardware and Software
Requirements for Installing SQL Server 2014” at
http://msdn.microsoft.com/en-us/library/ms143506(v=SQL.120).aspx.
TABLE 1-1  Hardware requirements

Hardware Component    Requirements

Processor             Processor type (64-bit): x64
                      Minimum: AMD Opteron, AMD Athlon 64, Intel Xeon with Intel EM64T support,
                      Intel Pentium IV with EM64T support
                      Processor speed: minimum 1.4 GHz; 2.0 GHz or faster recommended
                      Processor type (32-bit): Intel Pentium III-compatible processor or faster
                      Processor speed: minimum 1.0 GHz; 2.0 GHz or faster recommended

Memory (RAM)          Minimum: 1 GB
                      Recommended: 4 GB or more
                      Maximum: operating system maximum

Disk Space            A minimum of 6 GB of available disk space; disk-space requirements vary
                      depending on the components you install.
                      Database Engine: 811 MB
                      Analysis Services: 345 MB
                      Reporting Services: 304 MB
                      Integration Services: 591 MB
                      Client components: 1,823 MB
TABLE 1-2  Software requirements

Software Component              Requirements

Operating system                Windows Server 2012 R2 64-bit Datacenter, Enterprise, Standard, or Web edition;
                                Windows Server 2012 64-bit Datacenter, Enterprise, Standard, or Web edition; or
                                Windows Server 2008 R2 SP1 64-bit Datacenter, Enterprise, Standard, or Web edition

.NET Framework                  Microsoft .NET Framework 3.5 SP1 and
                                Microsoft .NET Framework 4.0

Windows PowerShell              Windows PowerShell 2.0

SQL Server support tools        SQL Server 2014 - SQL Server Native Client
and software                    SQL Server 2014 - SQL Server Setup Support Files
                                Minimum: Windows Installer 4.5

Internet Explorer               Minimum: Windows Internet Explorer 7 or a later version

Virtualization                  Supported in virtual machine environments running on the Hyper-V role in:
                                Windows Server 2008 SP2 Standard, Enterprise, and Datacenter editions;
                                Windows Server 2008 R2 SP1 Standard, Enterprise, and Datacenter editions;
                                Windows Server 2012 Datacenter and Standard editions;
                                Windows Server 2012 R2 Datacenter and Standard editions

Note  Server hardware has supported both 32-bit and 64-bit processors for several
years, but Windows Server 2008 R2 and later releases are 64-bit only. Take this into
consideration when planning SQL Server 2014 deployments.

Installation, upgrade, and migration strategies
Like its predecessors, SQL Server 2014 is available in both 32-bit and 64-bit editions. Both can be
installed with the SQL Server installation wizard, from a command prompt, or with Sysprep for
automated deployments requiring minimal administrator intervention. SQL Server 2014 also supports
installation on Server Core, an installation option of Windows Server 2008 R2 SP1 or later. Finally,
database administrators have the option to upgrade an existing installation of SQL Server or to
conduct a side-by-side migration when installing SQL Server 2014. The following sections elaborate
on the different strategies.

In-place upgrade
An in-place upgrade is the upgrade of an existing SQL Server installation to SQL Server 2014. When
an in-place upgrade is conducted, the SQL Server 2014 setup program replaces the previous SQL
Server binaries on the existing machine with the SQL Server 2014 binaries. SQL Server data is automatically converted from the previous version to SQL Server 2014. This means data does not have
to be copied or migrated. In the example in Figure 1-7, a database administrator is conducting an
in-place upgrade on a SQL Server 2012 instance running on Server 1. When the upgrade is complete,
Server 1 still exists, but the SQL Server 2012 instance and all of its data are upgraded to SQL Server
2014.

Note  Organizations can perform an in-place upgrade to SQL Server 2014 if they are running SQL Server 2008 SP3 or later, SQL Server 2008 R2 SP2 or later, or SQL Server 2012
SP1 or later. Unfortunately, earlier versions of SQL Server, including SQL Server 2005, SQL
Server 2000, SQL Server 7.0, and SQL Server 6.5 cannot be upgraded to SQL Server 2014.


FIGURE 1-7  An in-place upgrade from SQL Server 2012 to SQL Server 2014.

Review the information available through the following link for a detailed list of supported upgrade
paths from earlier versions of SQL Server to SQL Server 2014:
http://msdn.microsoft.com/en-us/library/ms143393(SQL.120).aspx.

In-place upgrade pros and cons
The in-place upgrade strategy is usually easier and considered less risky than the side-by-side
migration strategy. Upgrading is fairly fast, and additional hardware is not required. Because the
names of the server and instances do not change during the upgrade process, applications continue
to point to the same instances. As a result, this strategy is less time-consuming because no changes
need to be made to application connection strings.
The disadvantage of an in-place upgrade is that it provides less granular control over the upgrade
process. For example, when running multiple databases or components, a database administrator
does not have the flexibility to choose individual items for upgrade. Instead, all databases and
components are upgraded to SQL Server 2014 at the same time. In addition, the instance remains
offline during the in-place upgrade, which means that a planned outage is required if a
mission-critical database or an important line-of-business application is running. Furthermore, if a
disaster occurs during the upgrade, the rollback strategy can be complex and time-consuming: a
database administrator might have to install the operating system from scratch, reinstall SQL Server,
and then restore all the SQL Server data.

SQL Server 2014 high-level in-place strategy
The high-level in-place upgrade strategy for upgrading to SQL Server 2014 consists of the following
steps:
1. Ensure that the instance of SQL Server to be upgraded meets the hardware and software
   requirements for SQL Server 2014.

2. Review the deprecated and discontinued features in SQL Server 2014. For more information,
   refer to “Deprecated Database Engine Features in SQL Server 2014” at
   http://msdn.microsoft.com/en-us/library/ms143729(v=sql.120).aspx.

3. Ensure that the version and edition of SQL Server that will be upgraded is supported. To
   review all the upgrade scenarios supported for SQL Server 2014, see “Supported Version and
   Edition Upgrades” at http://msdn.microsoft.com/en-us/library/ms143393(v=sql.120).aspx. A
   quick way to check the current version and edition is shown in the sketch after this list.

4. Run the SQL Server 2014 Upgrade Advisor. The Upgrade Advisor is a tool included with SQL
   Server 2014, or it can be downloaded directly from the Microsoft website. It analyzes the
   installed components on the SQL Server instance you plan to upgrade to ensure that the
   system supports SQL Server 2014. The Upgrade Advisor generates a report identifying
   anomalies that require fixing or attention before the upgrade can begin. The Upgrade Advisor
   analyzes any SQL Server 2012, SQL Server 2008 R2, SQL Server 2008, or SQL Server 2005
   components that are installed.

5. Install the SQL Server 2014 prerequisites.

6. Begin the upgrade to SQL Server 2014 by running setup.
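
Before starting, it can be helpful to confirm exactly what is running on the instance to be upgraded.
The following minimal T-SQL sketch uses the built-in SERVERPROPERTY function to return the current
version, service-pack level, and edition; the column aliases are illustrative only.

-- Check the version, service-pack level, and edition of the instance to be upgraded.
SELECT
    SERVERPROPERTY('ProductVersion') AS ProductVersion,  -- for example, 11.0.xxxx for SQL Server 2012
    SERVERPROPERTY('ProductLevel')   AS ProductLevel,    -- for example, SP1
    SERVERPROPERTY('Edition')        AS Edition;         -- for example, Enterprise Edition (64-bit)

Compare the output against the supported upgrade paths listed in “Supported Version and Edition
Upgrades” before scheduling the planned outage.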

Side-by-side migration
The term side-by-side migration describes the deployment of a brand-new SQL Server 2014 instance
alongside a legacy SQL Server instance. When the SQL Server 2014 installation is complete, a database administrator migrates data from the legacy SQL Server database platform to the new SQL
Server 2014 database platform. Side-by-side migration is depicted in Figure 1-8.

Note  You can conduct a side-by-side migration to SQL Server 2014 on a single server:
the new SQL Server 2014 instance is simply installed alongside the legacy instance on the
same machine.

FIGURE 1-8  Side-by-side migration from SQL Server 2012 to SQL Server 2014.


Side-by-side migration pros and cons
The greatest advantage of a side-by-side migration over an in-place upgrade is the opportunity to
build out a new database infrastructure on SQL Server 2014 and avoid potential migration issues that
can occur with an in-place upgrade. A side-by-side migration also provides more granular control
over the upgrade process because an administrator can migrate databases and components
independently of one another. In addition, the legacy instance remains online during the migration
process, and the new environment typically runs on newer, more powerful hardware. Moreover, when
the two instances are running in parallel, additional testing and verification can be conducted, and
performing a rollback is easy if a problem arises during the migration.
However, there are disadvantages to the side-by-side strategy. Additional hardware or a virtual
machine may be required, and applications might need to be redirected to the new SQL Server 2014
instance. This strategy might also be impractical for very large databases because of the duplicate
storage required during the migration process.

SQL Server 2014 high-level side-by-side strategy
The high-level side-by-side migration strategy for upgrading to SQL Server 2014 consists of the following steps:
1. Ensure that the instance of SQL Server you plan to migrate meets the hardware and software
   requirements for SQL Server 2014.

2. Review the deprecated and discontinued features in SQL Server 2014 by referring to
   “Deprecated Database Engine Features in SQL Server 2014” at
   http://technet.microsoft.com/en-us/library/ms143729(v=sql.120).aspx.

3. Although a legacy instance that is no longer supported will not be upgraded to SQL Server
   2014, it is still beneficial to run the SQL Server 2014 Upgrade Advisor to ensure that the data
   being migrated to SQL Server 2014 is supported and that no blocker will prevent the
   migration.

4. Procure the hardware and install your operating system of choice. Windows Server 2012 is
   recommended.

5. Install the SQL Server 2014 prerequisites and desired components.

6. Migrate objects from the legacy SQL Server to the new SQL Server 2014 database platform,
   for example by using backup and restore, as sketched after this list.

7. Point applications to the new SQL Server 2014 database platform.

8. Decommission the legacy servers after the migration is complete.
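
One common way to carry out step 6 is a simple backup and restore of each user database. The
following is a minimal sketch under assumed names: SalesDB, its logical file names SalesDB and
SalesDB_log, the backup share, and the file paths are all hypothetical and should be replaced with
your own.

-- On the legacy instance: back up the database to be migrated.
BACKUP DATABASE SalesDB
    TO DISK = N'\\FileShare\Backups\SalesDB.bak'
    WITH CHECKSUM;

-- On the new SQL Server 2014 instance: restore the backup. Restoring a database
-- from a supported earlier version upgrades it to the SQL Server 2014 format.
RESTORE DATABASE SalesDB
    FROM DISK = N'\\FileShare\Backups\SalesDB.bak'
    WITH MOVE N'SalesDB'     TO N'D:\Data\SalesDB.mdf',
         MOVE N'SalesDB_log' TO N'L:\Logs\SalesDB_log.ldf';

-- Optionally, raise the database compatibility level to take advantage of
-- new SQL Server 2014 behavior after testing.
ALTER DATABASE SalesDB SET COMPATIBILITY_LEVEL = 120;

Other options for this step include detach/attach, transactional replication, or the Copy Database
Wizard, depending on the size of the databases and the allowable downtime.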




CHAPTER 2

In-Memory OLTP investments
In the previous two releases of SQL Server, Microsoft built into the product a number of in-memory
capabilities to increase speed and throughput and to accelerate analytics. The first of these
capabilities was an in-memory analytics add-in for Excel, also known as PowerPivot, in SQL Server
2008 R2.
SQL Server 2012 included in-memory Analysis Services and in-memory columnstore. SQL Server 2014
comes with a new capability, known as In-Memory OLTP, designed to accelerate OLTP workloads. This
feature, along with the other in-memory capabilities, provides organizations with a holistic approach
to drive real-time business with real-time insights. This chapter focuses on the new capabilities associated with In-Memory OLTP.

In-Memory OLTP overview
Microsoft first announced work on the In-Memory OLTP feature during the 2012 PASS Summit in
Seattle. For the worldwide database community, which has most likely known of this feature by its
project name, Hekaton, the wait has now come to an end. Microsoft SQL Server 2014 includes this
impressive new database engine feature that allows organizations to achieve significant performance
gains for OLTP workloads while also reducing processing times. In many cases, when In-Memory OLTP
is combined with the new lock-free and latch-free algorithms that are optimized for accessing
memory-resident data and with natively compiled stored procedures, performance improves by up
to 30 times.
To help database administrators appreciate this new feature, this chapter not only explains how
In-Memory OLTP works, it also aims to dispel some common misconceptions about it. The chapter
addresses a series of questions, including the following:
■■  What are In-Memory OLTP and memory-optimized tables?

■■  How does In-Memory OLTP work?

■■  Are there real-world cases that demonstrate In-Memory OLTP performance gains?

■■  Can existing database applications be migrated to In-Memory OLTP?

Let’s look under the hood to see how organizations can benefit from In-Memory OLTP.
The proliferation of data being captured across devices, applications, and services today has led
organizations to work continuously on ways to lower the latency of applications while aiming to
achieve maximum throughput of performance-critical data at a lower cost. Consider a financial
organization that offers credit-card services to its customers. This organization must be able to
validate, authorize, and complete millions of transactions per second, or it will lose financial
opportunities for both itself and the vendors who use its service. Online gaming is another industry
that requires maximum throughput to serve millions of customers who want to gamble online. Gone
are the days when people made static bets on the outcome of a game. Today, people place bets in
real time based on events transpiring in real time. Take, for example, a football game for which your
bet depends on whether you believe the kicker will kick the winning field goal in the Super Bowl. In
situations like this, the database platform must be well equipped to process millions of transactions
concurrently at low latency, or the online gaming organization risks financial ruin.
The SQL Server product group recognized that customer requirements were quickly changing in
the data world and that the group needed to provide new capabilities to decrease processing times
and deliver higher throughput at lower latency. Fortunately, steady trends in the hardware industry
made these new capabilities possible. First, the product group realized that the cost of memory had
vastly decreased over the past 20 to 25 years, while memory capacities continued to increase.
Memory had reached a price point and a capacity point at which it was now viable to keep large
amounts of data in memory.
This trend is illustrated in Figure 2-1.

FIGURE 2-1  The price of RAM has drastically decreased over the past 20 years.

Second, the group recognized that CPU clock rates had plateaued and were not getting any faster,
even as the number of cores per processor drastically increased, as shown in Figure 2-2. Armed with
knowledge of these trends, the SQL Server team reevaluated
the way SQL Server processes data from disk and designed the new In-Memory OLTP engine, which
can take full advantage of the larger memory sizes that are available and use processors with more
cores to significantly improve performance of OLTP applications.


FIGURE 2-2  Stagnating growth in CPU clock speeds while the number of cores increases.

Most OLTP databases today are 1 terabyte or smaller. As such, the majority of today’s production
OLTP databases can reap the performance benefits of In-Memory OLTP because the whole database
can fit into memory. Just imagine the possibilities should the current hardware trend continue.
Perhaps in the next decade, servers will support petabytes of memory, making it possible to move the
largest databases and workloads to memory. It will be interesting to see what the future holds.

In-Memory OLTP fundamentals and architecture
The next few sections discuss In-Memory OLTP fundamentals, architecture, concepts, terminology,
hardware and software requirements, and some myths about how it is implemented in SQL Server
2014.



Four In-Memory OLTP architecture pillars
Before thinking about how to use In-Memory OLTP, it is important to understand the underlying
architecture. In-Memory OLTP is built on four pillars. The pillars were developed in the context of
industry hardware and business trends to offer customer benefits. Figure 2-3 summarizes the four pillars and associated customer benefits.

FIGURE 2-3  Pillars of In-Memory OLTP architecture.

Main memory optimized
As a result of the steady decline in the price of memory and the rapid rate at which the size of
memory was growing, putting tables in main memory became feasible, thereby replacing the need
to place and access tables on disk. With this change came a significant reduction in the time required
to gain access to tables, because pages were no longer required to be read into cache from disk. New
functionality, such as hash indexes and nonclustered range indexes, can exploit data that is in physical
memory rather than on disk, which allows for faster access and higher performance in data operations.
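
To make this concrete, the following is a minimal sketch of a memory-optimized table that uses hash
indexes. The table name, column names, and bucket counts are illustrative only, and the database is
assumed to already contain a memory-optimized filegroup (an example of adding one appears later
in this chapter).

-- A memory-optimized table with hash indexes on the key columns.
-- BUCKET_COUNT is typically sized to roughly the expected number of unique values.
CREATE TABLE dbo.ShoppingCart
(
    ShoppingCartId INT       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId         INT       NOT NULL
        INDEX ix_ShoppingCart_UserId NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CreatedDate    DATETIME2 NOT NULL,
    TotalPrice     MONEY     NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);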

T-SQL compiled to machine code
The SQL Server product group recognized that if it could reduce the number of instructions needed
to execute the same logic, it could do the same work in less processing time. The product group
implemented this idea by transforming T-SQL stored procedures into C code and then compiling that
code into a DLL by using the Visual C compiler. The resulting machine code replaces the usual
interpretation of stored-procedure logic during query execution. This makes it possible to run a
stored procedure by using far fewer instructions, leading to more efficient business-logic processing
that is significantly faster. With this optimization, internal testing at the lower level of the storage
engine verified that machine code could reduce the instruction count by 30 to 50 times, which
resulted in a proportional increase in throughput and lower latency.
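
The following minimal sketch shows what a natively compiled stored procedure looks like, using the
hypothetical dbo.ShoppingCart table from the earlier sketch; the procedure and parameter names are
assumptions for illustration.

-- A natively compiled stored procedure. SCHEMABINDING, EXECUTE AS, and the
-- ATOMIC block with isolation level and language settings are required.
CREATE PROCEDURE dbo.usp_AddShoppingCart
    @ShoppingCartId INT,
    @UserId         INT,
    @CreatedDate    DATETIME2
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

    INSERT INTO dbo.ShoppingCart (ShoppingCartId, UserId, CreatedDate)
    VALUES (@ShoppingCartId, @UserId, @CreatedDate);
END;

At creation time, SQL Server translates the procedure into C code and compiles it into a DLL that is
loaded into the SQL Server process, so executing it avoids most of the interpretation overhead
described above.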

High concurrency
SQL Server has scaled extremely well because of the performance and scalability improvements made
over the past releases. Unfortunately, certain application patterns—for example, a last page insert in
the clustering key order or concurrent updates of hot pages—still suffered latch contention and did
not scale as well as a result. The additional improvements implemented through In-Memory OLTP in
SQL Server 2014 allow for higher concurrency. First, the product group did away with page
structures for accessing memory-optimized tables. This means that no paging or latching occurs to
create bottlenecks. Second, the core engine uses lock-free algorithm structures that are based on
multiversion optimistic concurrency control with full ACID (atomic, consistent, isolated, and durable)
support. These improvements remove common scalability bottlenecks and provide high concurrency
and frictionless scale-up opportunities to increase overall performance when memory-optimized
tables are used.

SQL Server integration
The SQL Server product group decided that In-Memory OLTP should be easy to consume and that
performance-critical tables should take advantage of this feature. What evolved from this idea is an
In-Memory OLTP engine that is fully integrated into the SQL Server Database Engine and managed
with a familiar set of tools. People who are familiar with SQL Server can quickly make use of the
benefits of In-Memory OLTP because the management, administration, and development experiences
are the same. Moreover, In-Memory OLTP works seamlessly with other features, such as AlwaysOn
Availability Groups, AlwaysOn Failover Cluster Instances, replication, backups, and restores.

In-Memory OLTP concepts and terminology
The following section reviews In-Memory OLTP concepts and terminology:

■■  Disk-based tables  This is the traditional way SQL Server has stored data since the product’s
    inception. Data in a table is stored in 8-KB pages and read from and written to disk. Each table
    also has its own data and index pages.

■■  Memory-optimized tables  Memory-optimized tables are the alternative to traditional
    disk-based tables and follow the new structures associated with In-Memory OLTP. The primary
    store for memory-optimized tables is main memory, but a second copy in a different format is
    maintained on disk for durability purposes.

■■  Native compilation  To achieve faster data access and more efficient query execution, SQL
    Server natively compiles stored procedures that access memory-optimized tables into native
    DLLs. When stored procedures are natively compiled, the need for additional compilation and
    interpretation is reduced, and compilation provides additional performance gains beyond
    those of memory-optimized tables alone.

■■  Interop  In this process, interpreted Transact-SQL batches and stored procedures are used
    instead of a natively compiled stored procedure when accessing data in a memory-optimized
    table. Interop is used to simplify application migration.

■■  Cross-container transactions  This is a hybrid approach in which transactions use both
    memory-optimized tables and disk-based tables.

■■  Durable and nondurable tables  By default, memory-optimized tables are completely
    durable and offer full ACID support. Memory-optimized tables that are not durable are still
    supported by SQL Server, but the contents of such a table exist only in memory and are lost
    when the server restarts. The syntax DURABILITY=SCHEMA_ONLY is used to create nondurable
    tables, as shown in the sketch after this list.
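
For example, the following minimal sketch creates a nondurable table suitable for transient data such
as session state; the table and column names are illustrative assumptions.

-- A nondurable, memory-optimized table: the schema is persisted, but the rows
-- live only in memory and are lost when the server restarts.
CREATE TABLE dbo.SessionState
(
    SessionId   INT             NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
    SessionData VARBINARY(4000) NULL,
    LastRefresh DATETIME2       NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);

Because nothing is written to the transaction log or to data files for this table, inserts and updates
avoid all storage I/O, which is what makes nondurable tables attractive for staging and caching
scenarios.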

Hardware and software requirements for memory-optimized
tables
The tight integration of In-Memory OLTP with SQL Server 2014 gives organizations a unified
experience in every area, including but not limited to deployment and support. However, before you
try this new capability, you should become acquainted with the requirements for using
memory-optimized tables. In addition to the general hardware and software requirements for
installing SQL Server 2014 (described in Chapter 1, “SQL Server 2014 editions and engine
enhancements”), here are the requirements for using memory-optimized tables:

■■  Production environments require the 64-bit Enterprise edition of SQL Server 2014 with the
    Database Engine Services component. The Developer edition can also be used for development
    and testing. 32-bit environments are not supported.

■■  SQL Server requires sufficient memory to store the data in memory-optimized tables and their
    indexes. You must configure memory so that memory-optimized tables and indexes are fully
    resident in memory.

■■  When configuring memory for SQL Server, you should also account for the size of the buffer
    pool needed for the disk-based tables and for other internal structures. A minimal configuration
    sketch follows this list.

■■  The processor used for the instance of SQL Server must support the cmpxchg16b instruction.
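
As a rough illustration of the memory-sizing point, the following sketch sets the instance-level
memory cap; the 58000 MB value is purely an assumption for a server with 64 GB of RAM hosting
both disk-based and memory-optimized tables.

-- Make sure 'max server memory' is large enough to hold the buffer pool for
-- disk-based tables plus the fully resident memory-optimized tables and indexes,
-- while leaving memory for the operating system.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 58000;
RECONFIGURE;

Estimating how much memory the memory-optimized tables themselves will need (row data plus
indexes, with headroom for row versions) is an important part of this exercise.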

In-Memory OLTP use cases
Many use cases show the benefits of In-Memory OLTP. Consider these scenarios:

■■  An application that is incurring high latch contention can alleviate this contention and scale up
    by converting tables from disk-based tables to memory-optimized tables.

■■  Natively compiled stored procedures can be used to address low-latency scenarios because
    In-Memory OLTP reduces the response times associated with poorly performing procedures
    (assuming that the business logic can be compiled).

■■  Many scale-out operations that require only read access suffer from CPU bottlenecks. Moving
    the data to In-Memory OLTP can significantly reduce CPU utilization, which improves scalability
    and lets you use existing processing resources to achieve higher throughput.

■■  Think about the data-staging and load phases of a typical ETL process. Numerous operations
    may need to be completed, including gathering data from an outside source and uploading it
    to a staging table in SQL Server, making changes to the data, and then transferring the data to
    a target table. For these operations, nondurable memory-optimized tables provide an efficient
    way to store staging data by completely eliminating storage cost, including transaction logging,
    as illustrated in the sketch after this list.
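
The following is a minimal sketch of that staging pattern. It assumes a nondurable memory-optimized
staging table named dbo.Staging_Orders (created with DURABILITY = SCHEMA_ONLY, like the example
earlier in this chapter) and a durable target table named dbo.Orders; all object and column names are
illustrative.

-- 1. Load the incoming batch into the nondurable staging table
--    (from an application, a BULK INSERT, or an SSIS data flow, for example).
INSERT INTO dbo.Staging_Orders (OrderId, CustomerId, OrderTotal)
VALUES (1001, 42, 129.95);

-- 2. Transform and move the staged rows into the durable target table.
INSERT INTO dbo.Orders (OrderId, CustomerId, OrderTotal)
SELECT OrderId, CustomerId, OrderTotal
FROM dbo.Staging_Orders;

-- 3. Clear the staging table for the next batch. (TRUNCATE TABLE is not
--    supported on memory-optimized tables in SQL Server 2014, so use DELETE.)
DELETE FROM dbo.Staging_Orders;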

Myths about In-Memory OLTP
Before moving on to the next section and walking through some In-Memory OLTP examples, it’s
useful to dispel some of the misconceptions surrounding In-Memory OLTP.

■■  Myth 1: SQL Server In-Memory OLTP is a recent response to competitors’
    offerings  Work on In-Memory OLTP commenced approximately four years ago in response
    to business and hardware trends occurring in the industry.

■■  Myth 2: In-Memory OLTP is like DBCC PINTABLE  DBCC PINTABLE was an old operation
    in SQL Server 7.0 that made it possible for the pages associated with a table to be read into
    the buffer pool and remain in memory instead of being evicted. Although there are some
    similarities, In-Memory OLTP is a new design focused on optimizing in-memory data
    operations. There are no pages or buffer pool for memory-optimized tables.

■■  Myth 3: In-memory databases are new, separate products  Unlike many competing
    offerings, In-Memory OLTP is fully integrated into SQL Server 2014. If you know SQL Server,
    you know In-Memory OLTP.

■■  Myth 4: You can use In-Memory OLTP in an existing SQL Server application without
    any changes  In reality, a few changes are required. At the very least, some changes to the
    schema will need to be made.

■■  Myth 5: Because tables are in memory, the data is not durable or highly available,
    and it will be lost after a server crash  In reality, In-Memory OLTP is fully durable and
    works with high-availability features, including the AlwaysOn features. Data is persisted on
    disk and will survive a server crash.

In-Memory OLTP integration and application migration
A before-and-after illustration is the best way to compare the internal behavior of SQL Server when
transactions are processed using traditional disk-based tables as opposed to memory-optimized
tables. Figure 2-4 shows how a traditional transaction from a client application is processed using
disk-based tables. Figure 2-6, shown later in the chapter, demonstrates the processing behavior when
the same tables are migrated to memory-optimized tables and the In-Memory OLTP engine is used.
Both figures also illustrate how tightly coupled In-Memory OLTP is with the Database Engine
component.

FIGURE 2-4  Client application process using disk-based table access.

In Figure 2-4, the SQL Server Database Engine communicates with the client application by using a
Microsoft communication protocol called Tabular Data Stream (TDS). The transaction goes through
the parser, catalog, and optimizer, and the T-SQL query is compiled for execution. During execution,
the data is fetched from storage into the buffer pool for changes. At the time the transaction is
committed, the log records are flushed to disk. The changes to the data and index pages are flushed
to disk asynchronously.


Will In-Memory OLTP improve performance?
Figure 2-4 depicts a traditional scenario using disk-based tables. Although the processing times are
sufficient in this example, it is not hard to imagine that further optimization of the database
application’s performance will be needed one day. When that day comes, organizations can use the
native tools in SQL Server to help them determine whether In-Memory OLTP is right for their
environment. Specifically, organizations can use the Analysis, Migrate and Report (AMR) tool built
into SQL Server Management Studio. The following steps help determine whether In-Memory OLTP
is right for an organization:
1. Establish a system performance baseline.
2. Configure the Management Data Warehouse (MDW).
3. Configure data collection.
4. Run a workload.
5. Run the AMR tool.
6. Analyze results from AMR reports and migrate tables.
7. Migrate stored procedures.
8. Run workload again and collect performance metrics.
9. Compare new workload performance results to the original baseline.
10. Complete.

Ultimately, the AMR tool analyzes the workload to determine whether In-Memory OLTP will
improve performance. It also helps organizations plan and execute their migration to
memory-optimized tables. In addition, the report provides scan statistics, contention statistics,
execution statistics, table references, and migration issues, giving organizations a wealth of
information to further assist with their analysis and, eventually, their migration.

Using the Memory Optimization Advisor to migrate disk-based
tables
After running the AMR tool and identifying a table to port to In-Memory OLTP, you can use the Table
Memory Optimization Advisor to help migrate specific disk-based tables to memory-optimized
tables. Do this by right-clicking a table in Management Studio and then selecting Memory
Optimization Advisor. This step invokes a wizard that begins by conducting validation tests and
providing migration warnings, as illustrated in Figure 2-5. The wizard also requires users to make a
number of decisions about memory optimization, such as selecting which memory-optimized
filegroup to use, the logical file name, and the file path. Finally, the wizard allows users to rename the
original table, estimates the current memory cost in megabytes, and prompts users to specify
whether to copy the table data to the new memory-optimized table and whether that table should
be durable.




FIGURE 2-5  Using the Table Memory Optimization Advisor checklist to migrate disk-based tables.

Analyzing In-Memory OLTP behavior after memory-optimized
table migration
Now let’s turn to Figure 2-6. Assume that the AMR tool made a recommendation to migrate Tables 1
and 2 in the example depicted in Figure 2-4 to memory-optimized tables. Figure 2-6 focuses on
In-Memory OLTP behavior after migration to memory-optimized tables has occurred and stored
procedures have been natively compiled.
A new area of memory is added for memory-optimized tables and indexes. In addition, a full
suite of new DMVs, XEvents, and instrumentation is also added, allowing the engine to keep track
of memory utilization. Finally, a memory-optimized filegroup, which is based on the semantics of
FILESTREAM, is also added. Access to memory-optimized tables can occur via query interop, natively
compiled stored procedures, or a hybrid approach. In addition, indexes for In-Memory OLTP are not
persisted. They reside only in memory and are loaded when the database is started or is brought
online.
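
The memory-optimized filegroup is added with ordinary ALTER DATABASE statements. The following
minimal sketch assumes a database named InMemoryDemo and an illustrative container path; both
are placeholders.

-- Add a memory-optimized filegroup (based on FILESTREAM semantics) and a container.
ALTER DATABASE InMemoryDemo
    ADD FILEGROUP InMemoryDemo_mod CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE InMemoryDemo
    ADD FILE (NAME = N'InMemoryDemo_mod_container1',
              FILENAME = N'D:\Data\InMemoryDemo_mod_container1')
    TO FILEGROUP InMemoryDemo_mod;

The container holds the checkpoint files that persist durable memory-optimized data; a database
needs exactly one memory-optimized filegroup, although that filegroup can have multiple containers.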
Query interop is the easiest way to migrate an application to In-Memory OLTP and access
memory-optimized tables. This method does not use native compilation; it uses either ad hoc
interpreted Transact-SQL or traditional stored procedures, which is the approach depicted by
option 1 in Figure 2-6. As an alternative, natively compiled stored procedures are the fastest way to
access data in memory-optimized tables. This approach is depicted by option 2 in Figure 2-6.
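
For example, with query interop, ordinary T-SQL can reference memory-optimized and disk-based
tables together without any code changes; the tables below are the hypothetical ones used earlier in
this chapter plus an assumed disk-based dbo.Users table.

-- An interpreted (interop) query joining a memory-optimized table with a
-- disk-based table; no native compilation is involved.
SELECT c.ShoppingCartId, c.TotalPrice, u.UserName
FROM dbo.ShoppingCart AS c      -- memory-optimized
JOIN dbo.Users        AS u      -- disk-based
    ON u.UserId = c.UserId;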


FIGURE 2-6  Client application process based on memory-optimized table access.

Using In-Memory OLTP
The following examples outline how to use In-Memory OLTP and memory-optimized tables to improve performance of OLTP applications through efficient, memory-optimized data access and native
compilation of business logic.




