Dallas/Houston Launch Event Questions and Answers (9-18/9-19 2012)

Good afternoon everyone!  I hope everyone has enjoyed the events so far, and I know I had a lot of fun meeting all of you.  Both in Dallas and in Houston I received a lot of great questions.  Here are some great resources for you to use from the event:

Resources:

So here you go, the questions from the session:

Q: How do you perform multiple live migrations at one time?
A:
  Kind of: simultaneous migrations are designed for cluster networks, so you can perform multiple migrations at the same time.  The environment I used during the event, with my two laptops, consisted of stand-alone servers.  By the way, in case you're wondering, here is the PowerShell command you could use to run the live migration (I could not resist):

Move-VM critical winserver2012rtm2 -IncludeStorage -DestinationStoragePath D:\critical

Q: What are the drive requirements for Storage Spaces, and how do the drives need to be connected?
A:
Storage Spaces in Windows Server 2012 enables cost-effective, optimally used, highly available, scalable, and flexible storage solutions.  It lets you take your physical drives and combine them into a pool from which you carve out virtual drives.  The drives need to be Serial ATA (SATA) or Serial Attached SCSI (SAS) connected disks (in an optional just-a-bunch-of-disks [JBOD] enclosure).  My good friend John Savill wrote a great article on Storage Spaces here: Windows Server 2012 Storage Spaces 
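If you want to try this from PowerShell, here is a minimal sketch; the pool and disk names are illustrative, and it assumes you have pool-eligible SAS or SATA disks attached:

```powershell
# Find disks that are eligible for pooling (SATA/SAS, not in use).
$disks = Get-PhysicalDisk -CanPool $true

# Create a storage pool from those disks (names here are examples).
New-StoragePool -FriendlyName "Pool1" `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks $disks

# Carve a mirrored virtual disk (a "space") out of the pool.
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Space1" `
    -ResiliencySettingName Mirror -Size 100GB
```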

Q: Can you provide some more detail on ReFS?
A:
ReFS stands for the Resilient File System and is Microsoft’s next-generation file system.   The file system is designed to detect and correct corruption before it leads to data loss and is an exciting addition to the platform.  The engineering team really did a good job of rewriting the file system from the ground up while building on the past success of NTFS.  It is truly a new on-disk storage engine.  The main goals of ReFS are:

  • Maintain a high degree of compatibility with a subset of NTFS features that are widely adopted while deprecating others that provide limited value at the cost of system complexity and footprint.
  • Verify and auto-correct data. Data can get corrupted due to a number of reasons and therefore must be verified and, when possible, corrected automatically. Metadata must not be written in place to avoid the possibility of “torn writes,” which we will talk about in more detail below.
  • Optimize for extreme scale. Use scalable structures for everything. Don’t assume that disk-checking algorithms, in particular, can scale to the size of the entire file system.
  • Never take the file system offline. Assume that in the event of corruptions, it is advantageous to isolate the fault while allowing access to the rest of the volume. This is done while salvaging the maximum amount of data possible, all done live.
  • Provide a full end-to-end resiliency architecture when used in conjunction with the Storage Spaces feature, which was co-designed and built in conjunction with ReFS.
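If you want to experiment, putting ReFS on a volume is a one-liner in PowerShell (the drive letter and label here are just examples):

```powershell
# Format an existing volume with ReFS; E: is a placeholder drive letter.
Format-Volume -DriveLetter E -FileSystem ReFS -NewFileSystemLabel "Data"
```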

You can read more here: Building the next generation file system for Windows: ReFS 

Q: How do the Distributed File System (DFS) and continuously available file shares relate?
A:
This was really a great question, and the answer is that they work hand in hand.  Imagine the power of combining the namespace capability with the new file share technology to provide powerful access to your Windows Server 2012 file services.    Remember, DFS provides some great access to your infrastructure.  DFS Namespaces and DFS Replication in Windows Server 2012 are role services in the File and Storage Services role.

  • DFS Namespaces: Enables you to group shared folders that are located on different servers into one or more logically structured namespaces. Each namespace appears to users as a single shared folder with a series of subfolders. However, the underlying structure of the namespace can consist of numerous file shares that are located on different servers and in multiple sites.
  • DFS Replication: Enables you to efficiently replicate folders (including those referred to by a DFS namespace path) across multiple servers and sites. DFS Replication uses a compression algorithm known as remote differential compression (RDC). RDC detects changes to the data in a file, and it enables DFS Replication to replicate only the changed file blocks instead of the entire file.
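As a quick taste of the new PowerShell support, here is a hedged sketch that creates a domain-based namespace and adds a folder target; the domain, server, and share names are all made up:

```powershell
# Create a Windows Server 2008 mode (DomainV2) namespace root.
New-DfsnRoot -Path "\\contoso.com\Public" -TargetPath "\\FS01\Public" -Type DomainV2

# Add a folder in the namespace that points at a share on another server.
New-DfsnFolder -Path "\\contoso.com\Public\Software" -TargetPath "\\FS02\Software"
```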

You can read more about the changes to DFS (including the incorporation of PowerShell) here: DFS Namespaces and DFS Replication Overview 

Q: What are some of the costs of Azure and what is included?
A:
   Azure provides some great capabilities for your IaaS environment, including the ability to create running virtual machines you can leverage in the cloud for your needs.  Azure uses the same VHD format that is supported with Windows virtualization today (and which we’ve released as an open spec), which enables you to easily migrate existing workloads you might already have virtualized into Windows Azure. We also make it easy to download VHDs from Windows Azure, which provides the flexibility to easily migrate cloud-based VM workloads to an on-premises environment. All you need to do is download the VHD file and boot it up locally; no import/export steps are required.  So you truly have a persistent VHD to work with in the cloud.  There is a great article covering some of the capabilities: Infrastructure as a Service Series: Virtual Machines and Windows 
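To give you a feel for the VHD download story, here is a sketch using the Windows Azure PowerShell module; the storage account URL and local path are placeholders, and it assumes you have already imported your subscription settings:

```powershell
# Download a VHD from Windows Azure blob storage to a local file.
Save-AzureVhd -Source "https://mystorage.blob.core.windows.net/vhds/myvm.vhd" `
    -LocalFilePath "D:\VHDs\myvm.vhd"
```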

You can also learn more here: Azure Virtual Machines.  The Azure calculator is here: Pricing overview

Q: What is the best way to clone a .vhd?
A:
There is actually a very quick way to do this.  In the New Virtual Hard Disk Wizard you have the ability to copy the contents of an existing VHD into your newly created one, and there are PowerShell cmdlets for this as well.  See the screenshot below:

[Screenshot: New Virtual Hard Disk Wizard]
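If you prefer the command line, one approach (the paths here are illustrative) is Convert-VHD, which writes a full copy of the source disk to a new file:

```powershell
# Copy/convert an existing VHD into a new VHDX file.
Convert-VHD -Path "D:\VHDs\base.vhd" -DestinationPath "D:\VHDs\clone.vhdx"
```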

Q: What are some of the changes to NTFS?
A:
One of the big improvements to NTFS is around health and chkdsk.  As we chatted about, the repair will take less than 8 seconds; pretty amazing technology!  The new model of chkdsk has the following benefits:

  • Customers can confidently deploy large volumes. Corruption-related downtime is now proportional to only the number of corruptions on the volume.
  • Customers who are using clustered shared volumes do not see any downtime, even for correcting corruption events that would normally require a remount.
  • Windows Server 2012 actively monitors the health state of the file system volume, and it always provides the health state to the administrator.
  • Customers do not see any downtime for transient corruption events.
  • Customers experience significantly fewer corruption events.
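To see the new model in action, here is a sketch with the Repair-Volume cmdlet (the drive letter is an example): you scan online first, and the brief spot-fix touches only the corruptions that were logged:

```powershell
# Scan the volume online and log any corruptions found.
Repair-Volume -DriveLetter D -Scan

# Briefly take the volume offline to fix only the logged corruptions.
Repair-Volume -DriveLetter D -SpotFix
```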

This article talks a little bit about that: NTFS Health and Chkdsk 

Q: How do you get Active Directory in Azure?
A:
There are a couple of capabilities for this.  With Windows Azure Active Directory you can:

  • Integrate with your on-premises Active Directory
  • Offer access control for your applications
  • Build social connections across the enterprise
  • Provide single sign-on across your cloud applications

This article gives you a quick overview: Windows Azure Active Directory