By James Kehr, Networking Support Escalation Engineer
When I started writing this article it was going to be about Container networking, and nothing but Container networking. As the article progressed, I realized there was a lot of useful information that applies to all of Windows network virtualization (WNV). Those who program shouldn't be surprised. There's no need to reinvent code when perfectly good code already exists to do the job.
This article goes deeper into how network virtualization works than many articles on the subject. Those who are idly curious will likely fall asleep somewhere in the middle of this introductory piece. This series of articles is designed more for Windows and network admins who use, or are planning to use, Windows network virtualization technologies and want a better understanding of how they work, how to capture and read virtualized network data, and how to apply that knowledge to make life easier.
Don’t be afraid of big, bad PowerShell.
There are two new technologies in Windows Server 2016 that are a huge departure for Windows administrators: Containers and Nano Server. Why? Because there is no UI. But wait, some may say, there's no UI in Server Core! That's only partly correct. There is some UI in Core, just not a lot. There is zero UI in Nano Server and Containers. As in, not even a dialog box, pop-up, manager, or menu in sight.
While Linux and Unix admins will likely yawn at the lack of UI, a lot of Windows administrators are less than thrilled. We like our menus and our managers, thank you very much. People paying attention may have already noticed this shift to a more command-line-driven Windows than in the past — or a more PowerShell-driven one, to be accurate.
Working in Containers, Core, and Nano server is going to require a paradigm shift for Windows admins. There’s no easy way to say this: it’s time to embrace the command line…er… PowerShell.
What is a Container?
There are a lot of articles out there explaining what a Container is. The link below has the Windows Containers documentation, for those who are curious about the official stuff.
The word I use to describe Containers is pseudo-virtualization. A virtual machine is an operating system running on synthetic, or virtualized, hardware. It has its own file system and its own resource pool. The guest knows it's a virtual machine, but behaves as though it were physical hardware separate from the physical host.
Containers are kind of like that… except Containers think they are the host. Log on to a Container via PowerShell remoting, pull up the hardware information, and you will see a virtual copy of the host system. The biggest difference deals with the OS kernel. A virtual machine has a kernel completely independent of the host operating system. A Container shares the host's kernel. The applications inside the Container think they are running on the host, act like they are running on the host, and technically they are — yet Container applications are isolated from the host.
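You can see this for yourself with PowerShell Direct for Containers, which shipped alongside Windows Server 2016. A minimal sketch, assuming the Docker CLI is installed and at least one Windows Server Container is running (the session and variable names here are arbitrary examples):

```powershell
# Grab the ID of the first running container (assumes the Docker CLI is present)
$id = docker ps -q | Select-Object -First 1

# Open an interactive PowerShell session inside the container
Enter-PSSession -ContainerId $id -RunAsAdministrator

# Inside the session, the "hardware" reported is a virtual copy of the host
Get-CimInstance Win32_ComputerSystem
Get-CimInstance Win32_Processor

# Leave the container session
Exit-PSSession
```

The computer system and processor details returned inside the session mirror the host, which is the pseudo-virtualization described above.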
Think of Containers as living in a world between application virtualization and virtual machines. The purpose of a Container is to make it easy to develop and deploy applications, without truly virtualizing them in the traditional App-V or Hyper-V style.
The exception to this rule is the Hyper-V Container. This Container style lives on a highly optimized virtual machine, and shares the kernel with the VM instead of the host. This adds more isolation between application and host, providing more security, while not compromising the ease of deployment and development that Containers offer.
Making your head hurt
All this lengthy explanation serves a purpose. I’m warming up your brain before I make it hurt.
The primary goal of my investigation into Containers was to see if there was anything new with Container networking. So, I built a Container, remoted in, and tried a network trace. Except the network trace failed. Turns out network capture tools don’t work inside a Container. Not even the tools built into Windows.
Then I took a packet capture on the host to see where the Container traffic interfaced with the host and… I saw the Container traffic. Not the hand-off, not the interface — the actual Container traffic, including the network stack events.
Remember that part where I said the Container and the host share a kernel? Part of that kernel sharing means they share the same network stack (kind of), the same hardware (sort of), and the same firewall (for real).
Take a minute to think about that.
To see the network traffic in a Container you run the packet capture on the host.
Does your head hurt yet? Not yet? Let me help you out.
Not only do you capture the network data on the host, but you can't use traditional tools like Wireshark or Network Monitor. Those will only show the traffic on the host NIC and the NAT NIC, but nothing beyond. To capture Container network traffic you need to use an NDIS-based packet capture, such as Message Analyzer or the NetEventPacketCapture module in PowerShell.
From there you need to understand how to process, read, and follow the traffic through Windows and to the Container network stack – which is actually the host network stack, kind of.
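A host-side capture with the built-in NetEventPacketCapture cmdlets looks roughly like the sketch below. The session name and output path are arbitrary examples; run this on the Container host, not inside the Container:

```powershell
# Create a capture session that writes to an ETL file (path is an example)
New-NetEventSession -Name "ContainerTrace" -LocalFilePath "C:\traces\ContainerTrace.etl"

# Attach the NDIS packet capture provider to the session
Add-NetEventPacketCaptureProvider -SessionName "ContainerTrace"

# Start capturing, reproduce the Container traffic, then stop
Start-NetEventSession -Name "ContainerTrace"
# ... generate the Container network traffic you want to inspect ...
Stop-NetEventSession -Name "ContainerTrace"

# Clean up the session when finished
Remove-NetEventSession -Name "ContainerTrace"
```

The resulting .etl file can then be opened in a tool such as Message Analyzer to follow the traffic through the virtual switch to the Container endpoint.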
And that should complete the brain bleed.
Part 2 covers how network virtualization works inside of Windows. This is possibly the most critical part of the series, as the data will likely not make sense without understanding part 2.