This article will show you how to test your network latency to different Azure regions so you can make the best possible decision when choosing where to deploy your resources.

Why you should care about latency

The distance between you and services hosted in Azure, and the network path in between, can have a profound impact on performance and responsiveness. We're bound by the laws of physics, so in general, the closer you are physically to an Azure region, the lower the latency. This is not a hard-and-fast rule, though. The internet is a messy place, and network traffic doesn't always take the shortest or fastest route. You may be surprised to find that the region you thought was the best place to host your services, because it was physically the closest, actually has higher latency than one farther away.

But how do you determine which Azure region has the lowest latency? Funnily enough, Microsoft does not provide a publicly discoverable endpoint that we can simply ping to measure the response time. We'll have to come up with our own method to test each region.

The simplest way to test latency to a region is to download a small file from Azure Blob storage and measure the time difference between the start of the operation and the end.

Let's get started

To get started, let's log in to Azure and create a resource group to hold our storage account. We're going to use the westus Azure region for this. If you'd like to test another region, simply substitute your desired region name below. To make things easy, here are the currently available Azure regions around the globe (after the list, I'll show how to retrieve it programmatically).

australiaeast
australiasoutheast
brazilsouth
canadacentral
canadaeast
centralindia
centralus
eastasia
eastus
eastus2
francecentral
japaneast
japanwest
koreacentral
koreasouth
northcentralus
northeurope
southcentralus
southeastasia
southindia
uksouth
ukwest
westcentralus
westeurope
westindia
westus
westus2
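
This list changes as Microsoft brings new regions online. If you'd rather pull the current list straight from your subscription, the AzureRM module we're using throughout this article can do that:

# Retrieve the region names available to your subscription
Get-AzureRmLocation | Select-Object -ExpandProperty Location | Sort-Object

With a region picked, we log in and create the resource group:
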
$region = 'westus'
Login-AzureRmAccount > $null
$rg = New-AzureRmResourceGroup -Name speedtest -Location $region

We'll need a storage account to perform our test. In Azure, a storage account's name becomes part of a public DNS name, so it must be globally unique. Let's create a random identifier using the first eight characters of a GUID.

$uniqueId = (New-Guid).ToString().Split('-')[0]

Now we can create the storage account.

$storageAccountParams = @{
    ResourceGroupName      = $rg.ResourceGroupName
    Name                   = "$region$uniqueId"
    SkuName                = 'Standard_LRS'
    Location               = $region
    Kind                   = 'StorageV2'
    EnableHttpsTrafficOnly = $true
}
$storageAccount = New-AzureRmStorageAccount @storageAccountParams

We're using locally redundant storage because in the event of an Azure region failure, we don't want our storage account to fail over to another region, as that would defeat the purpose of the region-specific latency test.

Now we need to create a test file for uploading. For this, we'll just use some dummy text. To gather our latency metric, we're going to measure the time it takes to download this file.

$text = @'
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor
incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud
exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in
reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint
occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id.
'@
$text | Out-File -FilePath ./test.txt

Now let's move on to creating a storage container and uploading our test file. To do this, we'll need the storage context from the newly created storage account. We'll also set the permissions to Blob. This will grant anonymous read access to a blob but won't allow read access to the container itself. After uploading this, we'll also need to get the URI of the resultant blob.

$ctx = $storageAccount.Context
$container = New-AzureStorageContainer -Context $ctx -Name test -Permission Blob
$blob = Set-AzureStorageBlobContent -File ./test.txt -Container $container.Name -Blob test.txt -Context $ctx
$blobUri = $blob.ICloudBlob.Uri.AbsoluteUri

Great! We now have a storage account in the westus Azure region and some test data to work with. Now let's measure the time it takes to download this file.

$iterations = 100
$stopwatch = [System.Diagnostics.Stopwatch]::new()
$origProgressPref = $ProgressPreference

$rawResults   = [System.Collections.ArrayList]::new()
$regionResult = @{
    PSTypeName   = 'AzureRegionLatencyResult'
    Region       = 'westus'
    ComputerName = $env:COMPUTERNAME
}

$ProgressPreference = 'SilentlyContinue'
for ($i = 0; $i -lt $iterations; $i++) {
    $stopwatch.Start()
    Invoke-WebRequest -Uri $blobUri -UseBasicParsing > $null
    $stopwatch.Stop()

    $rawResults.Add($stopwatch.ElapsedMilliseconds) > $null

    $stopwatch.Reset()
}
$ProgressPreference = $origProgressPref

$regionResult.Average = ($rawResults | Measure-Object -Average).Average
$regionResult.Minimum = ($rawResults | Measure-Object -Minimum).Minimum
$regionResult.Maximum = ($rawResults | Measure-Object -Maximum).Maximum

$finalResult = [PSCustomObject]$regionResult

The above section is a long one, so let's break it down.

Simply downloading the file once wouldn't give us a representative result. The internet is an unpredictable place, and we don't want a single errant slow response throwing off our measurement. To smooth out the results, we'll perform 100 iterations and use the [System.Diagnostics.Stopwatch] .NET class to measure the time each one takes. We could have used Measure-Command, but calling the Stopwatch class directly avoids the cmdlet invocation overhead that Measure-Command would add to every measurement.
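
For comparison, here's what a single measurement with Measure-Command would look like, reusing the $blobUri we created above:

# Measure-Command returns a [TimeSpan]; each call adds cmdlet invocation
# overhead to the number you record
$elapsed = Measure-Command {
    Invoke-WebRequest -Uri $blobUri -UseBasicParsing > $null
}
$elapsed.TotalMilliseconds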

We create an ArrayList collection to hold our raw results and a hashtable called $regionResult to hold our final region results. We're not using a normal PowerShell array for our raw results, since again, we want an accurate measurement, and appending to PowerShell arrays is slow. Using the raw .NET collection classes is much faster.
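
To see why, compare these two loops (an illustrative aside, not part of the test itself). The += operator allocates a new array and copies every existing element on each append, while ArrayList.Add grows the collection in place:

# Slow: += allocates a new array and copies all elements on every append
$array = @()
foreach ($i in 1..10000) { $array += $i }

# Fast: Add() appends in place (it returns the new index, so we discard it)
$list = [System.Collections.ArrayList]::new()
foreach ($i in 1..10000) { $list.Add($i) > $null }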

We now enter our for loop. Notice that before entering the loop, we've disabled the progress bar. Invoke-WebRequest has a bad reputation for being slow because by default, it displays a progress bar when executing a request to show the amount of data received and sent. It calculates the size and time remaining, and by doing that calculation, it slows down the operation substantially. We don't want that calculation throwing off our results, so we'll disable it.

We now start the stopwatch, use Invoke-WebRequest to download the file, then stop the stopwatch. Notice we also redirected the output of Invoke-WebRequest to $null with the redirection operator >. We don't really care about the output of Invoke-WebRequest, just the time it took.
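
For reference, these are all equivalent ways to throw away output in PowerShell; redirecting or assigning to $null avoids the extra pipeline stage that Out-Null introduces:

Invoke-WebRequest -Uri $blobUri -UseBasicParsing > $null     # redirection
$null = Invoke-WebRequest -Uri $blobUri -UseBasicParsing     # assignment
Invoke-WebRequest -Uri $blobUri -UseBasicParsing | Out-Null  # pipeline (slowest)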

Now we add the elapsed milliseconds to our $rawResults collection and reset the stopwatch for the next round in the loop.

Once we've completed all 100 iterations, we'll be good citizens and restore the $ProgressPreference variable to its original value.

Using the raw results we've collected, we now calculate the average, minimum, and maximum times and add them to our region result.
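
As a side note, Measure-Object accepts all three switches at once, so the statistics could also be computed in a single pass over the results:

# One pipeline pass instead of three
$stats = $rawResults | Measure-Object -Average -Minimum -Maximum
$regionResult.Average = $stats.Average
$regionResult.Minimum = $stats.Minimum
$regionResult.Maximum = $stats.Maximum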

Finally, we'll cast the hashtable to a PSCustomObject to display it better.
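
The PSTypeName key we put in the hashtable becomes the object's type name after the cast. If you wanted to go a step further, you could pair it with Update-TypeData to control which properties display by default; a minimal, optional sketch:

# Optional: give our custom type a default display property set
$typeDataParams = @{
    TypeName                  = 'AzureRegionLatencyResult'
    DefaultDisplayPropertySet = 'Region', 'Average', 'Minimum', 'Maximum'
}
Update-TypeData @typeDataParams -Force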

Result

Let's look at our results for the westus region.


$finalResult
Final region latency results

Not too bad; it's about 40 ms to westus. I wonder what the latency would be to the France Central region.

I'll leave that as an exercise for the reader.
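
If you'd like a head start on that exercise, here's a minimal sketch that wraps everything above into a reusable function. The function name Test-AzureRegionLatency is my own invention, the sketch assumes you're already logged in, and it uses the same AzureRM cmdlets as the rest of this article:

function Test-AzureRegionLatency {
    # A sketch, not a hardened tool: creates a throwaway storage account in
    # the given region, measures blob download times, and cleans up after
    # itself. Assumes you've already run Login-AzureRmAccount.
    param(
        [Parameter(Mandatory)]
        [string]$Region,

        [int]$Iterations = 100
    )

    $uniqueId = (New-Guid).ToString().Split('-')[0]
    $rg = New-AzureRmResourceGroup -Name "speedtest$uniqueId" -Location $Region

    # Storage account names must be 3-24 lowercase alphanumeric characters,
    # so we use a short fixed prefix instead of the region name
    $storageAccountParams = @{
        ResourceGroupName      = $rg.ResourceGroupName
        Name                   = "latency$uniqueId"
        SkuName                = 'Standard_LRS'
        Location               = $Region
        Kind                   = 'StorageV2'
        EnableHttpsTrafficOnly = $true
    }
    $storageAccount = New-AzureRmStorageAccount @storageAccountParams

    'Test data for measuring Azure region latency.' | Out-File -FilePath ./test.txt
    $ctx = $storageAccount.Context
    $container = New-AzureStorageContainer -Context $ctx -Name test -Permission Blob
    $blob = Set-AzureStorageBlobContent -File ./test.txt -Container $container.Name -Blob test.txt -Context $ctx
    $blobUri = $blob.ICloudBlob.Uri.AbsoluteUri

    $stopwatch  = [System.Diagnostics.Stopwatch]::new()
    $rawResults = [System.Collections.ArrayList]::new()
    $origProgressPref = $ProgressPreference
    $ProgressPreference = 'SilentlyContinue'
    for ($i = 0; $i -lt $Iterations; $i++) {
        $stopwatch.Start()
        Invoke-WebRequest -Uri $blobUri -UseBasicParsing > $null
        $stopwatch.Stop()
        $rawResults.Add($stopwatch.ElapsedMilliseconds) > $null
        $stopwatch.Reset()
    }
    $ProgressPreference = $origProgressPref

    $stats = $rawResults | Measure-Object -Average -Minimum -Maximum

    # Tidy up the throwaway resources before returning the result
    Remove-AzureRmResourceGroup -Name $rg.ResourceGroupName -Force > $null

    [PSCustomObject]@{
        PSTypeName   = 'AzureRegionLatencyResult'
        Region       = $Region
        ComputerName = $env:COMPUTERNAME
        Average      = $stats.Average
        Minimum      = $stats.Minimum
        Maximum      = $stats.Maximum
    }
}

Test-AzureRegionLatency -Region francecentral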
