RRAS Performance results

Hello Customers,

 

Many of you have asked, directly or through the field channels, about performance results of RRAS for different VPN tunnel types, specifically SSTP. I am writing this blog to share the results of tests done internally by our test team (thanks to Sai and the other test team members).

 

First, a few guidelines to help you better interpret the test and results:

· The main goal of this performance test is to validate a target VPN server deployment against its performance requirements, i.e. x simultaneous VPN connections carrying y Mbps of aggregate data transfer on specific hardware.

· The number of simultaneous VPN connections and the aggregate VPN server throughput used in these tests are internal benchmark numbers, based on our understanding of common customer deployment scenarios for RRAS. This is by no means the only deployment scenario for RRAS.

· Performance numbers without a hardware specification are meaningless. Hence, we have shared the details of the server hardware used in our performance lab. However, this should not be read as the Microsoft-recommended hardware platform for RRAS.

· All tests were run inside our lab environment, which means minimal delay (< 10 ms) and close to zero data loss. You may say that is not close to a real deployment. The way I see it, delay and loss may change the data throughput as experienced by a VPN client for a given tunnel type. However, delay and loss do not drastically change the aggregate throughput as seen on the VPN server, and our focus has been VPN server performance, hence this setup.

· This blog is focused on Windows Server 2008 as the VPN server, and performance is compared between PPTP and SSTP. I will extend this blog with Windows 7 results shortly, including results for "VPN Reconnect", the IKEv2-based VPN tunnel.

 

Setup:

· 1 VPN server: HP DL165 G5 with two AMD Opteron™ 2352 quad-core processors at 2.10 GHz, 16 GB RAM, and two 1 Gigabit Ethernet ports, running the released version of Windows Server 2008.

· 10 client machines, with code changed to emulate n VPN client connections per machine. Each of these machines runs internal tools to manage the VPN connections and generate the data load.

· All machines connected on a Gigabit Ethernet switched network.

 


Performance Test 1:

· Generate a 100 Mbps data load using 1000 VPN clients. Measure the average CPU utilization on the VPN server.

Tunnel Type | Average CPU usage (sum of per-core usage / number of cores), in % | Data throughput on VPN server, in Mbps
PPTP        | 13.23                                                             | 100
SSTP        | 33.65                                                             | 100
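To make the "average CPU usage" column concrete, here is a small Python sketch (not the internal test tool; the per-core samples are hypothetical) of the averaging described in the table header, along with the per-connection share of the aggregate load in Test 1:

```python
# Average CPU usage as defined in the tables: sum of per-core usage
# divided by the number of cores. The samples below are hypothetical;
# the test server has 8 cores (two quad-core processors).
per_core_usage = [12.0, 14.0, 13.0, 13.0, 14.0, 12.0, 13.0, 13.0]  # % per core
avg_cpu = sum(per_core_usage) / len(per_core_usage)
print(avg_cpu)  # 13.0

# Per-connection share of the aggregate load in Test 1:
# 100 Mbps spread across 1000 VPN clients.
per_client_mbps = 100 / 1000
print(per_client_mbps)  # 0.1
```

In other words, each of the 1000 clients in Test 1 carries only about 0.1 Mbps, so this test stresses connection scale rather than per-connection throughput.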

Performance Test 2:

· Generate the maximum data load from a single VPN client connection. Measure the aggregate data throughput and average CPU utilization on the VPN server.

Tunnel Type | Average CPU usage (sum of per-core usage / number of cores), in % | Maximum data throughput on VPN server, in Mbps
PPTP        | 40.29                                                             | 644.78
SSTP        | 71.44                                                             | 685.96
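One way to read the Test 2 numbers is as CPU efficiency, i.e. how much aggregate throughput each percentage point of average CPU buys for each tunnel type. A quick back-of-the-envelope calculation using the values from the table above:

```python
# Throughput per percentage point of average CPU, from the Test 2 table:
# tunnel type -> (aggregate throughput in Mbps, average CPU in %).
results = {"PPTP": (644.78, 40.29), "SSTP": (685.96, 71.44)}
for tunnel, (mbps, cpu) in results.items():
    print(tunnel, round(mbps / cpu, 1), "Mbps per % CPU")
# PPTP comes out around 16.0 Mbps per % CPU and SSTP around 9.6,
# consistent with the extra SSL encryption/encapsulation work SSTP does.
```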

Performance Test 3:

· Generate a constant background data load (0, 25 Mbps, or 100 Mbps) using 1000 VPN clients and, in parallel, start a 580 megabyte file transfer from one of the VPN clients to a machine behind the VPN server. Measure the file transfer time and average CPU utilization on the VPN server.

Tunnel Type                  | Average CPU usage (sum of per-core usage / number of cores), in % | Time to transfer 580 MB file, in seconds
PPTP with 0 Mbps of traffic  | 26.42                                                             | 14

PPTP with 25 Mbps of traffic