Azure File Sync became generally available to the public this month and I decided to implement it in my lab to gauge its strengths and weaknesses. The proposition of fully replicated, managed, and secured file synchronization across all branch offices of an organization makes for one of the strongest stand-alone use cases for the Cloud after Backup & DR, as long as it solves more problems than it introduces.
First off, don’t confuse Azure Files with Azure File Sync. The former provides SMB 3.0 access to Cloud volumes over port 445, allowing you to mount remote shares directly. Azure File Sync, however, is agent-based and communicates with Cloud endpoints over HTTPS. You still get local files; they’re just on a file server rather than your client.
The goal of Azure File Sync is to replace any DFS-R synchronization solutions you may currently have in place, facilitate the transfer of your files into Azure, and begin to familiarize admins with Azure management. Stand-alone use cases such as this are a great way to introduce Cloud offerings as a solution to on-premises pain points.
So how does it work?
Essentially you create your Cloud storage resources, install and register sync agents, and define a Sync Group consisting of a cloud storage endpoint (the file share where your files are stored in Azure) and one or more server endpoints (paths on a Windows Server).
Here’s the process in a bit more depth:
- Log in to Azure and go to Storage Accounts
- You can select an existing storage account, but you should probably create a new one
- Under that account go to Files and then click + File Share
- Give the share a name and an optional quota
- From Azure Marketplace search Azure File Sync and click Create. Give it a name and resource group
- From the home page click All Services and star “Storage Sync Services” so it shows up in the shortcut column
- Install AzureRM PS Module on the local server
- From an elevated prompt: Install-Module -Name AzureRM
- Download Azure File Sync client and install
- This can also be done through PowerShell
- Microsoft has published a page to aid in Azure File Sync Troubleshooting
- After the installer completes and asks to reboot, close the window and wait. A wizard will appear asking for credentials. Log in and register the agent, then reboot
- In Azure go to Storage Sync Services, click the Storage Sync Service name, and then Registered Servers. Make sure your server is online and reporting in.
- Now go to Sync Groups and click + Sync Group. Point it to the File Share you created above
- Open the Sync Group and add a server endpoint
- The first endpoint you add should be an existing server with the current master copy of the document share you wish to replicate. Since ideally this share is not located on the System volume we’ll refer to it as D:\Documents. Choose the registered server from the dropdown list, add the path to an existing local document share on that server (D:\Documents), and for now leave Cloud Tiering Disabled.
- In Azure go to Storage Accounts > the account for this share > Files > File Share name and you should see the same directory structure as on your server
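The module and agent installation steps above can be sketched from an elevated PowerShell prompt. This is a hedged sketch: the MSI filename and path are assumptions based on the typical download, and the /quiet switch defers the registration wizard until the agent is launched.

```powershell
# Install the AzureRM module used to manage Azure resources
Install-Module -Name AzureRM

# Silently install the downloaded Azure File Sync agent
# (filename and working directory are assumptions)
Start-Process -FilePath ".\StorageSyncAgent.msi" -ArgumentList "/quiet" -Wait
```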
From this point forward, the Cloud copy is considered the master copy, and all endpoints are hot replicas.
Now install additional sync agents on servers in branch offices, go back into the Sync Group, and add each endpoint pointing to the local path of an empty folder. By this I mean that if on the branch office server you want all the files to land in F:\Synced_Documents, then create the Synced_Documents folder but be sure it is empty. Once you add the endpoint in Azure, you’ll see the folder structure and files appear.
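Preparing that branch endpoint folder can be sketched as below, assuming the F:\Synced_Documents path from above; the only requirement is that the folder exists and is empty before you add the endpoint.

```powershell
# Create the target folder for the new server endpoint
$path = 'F:\Synced_Documents'
New-Item -ItemType Directory -Path $path -Force | Out-Null

# Sanity check: the folder must be empty before adding the endpoint in Azure
if (Get-ChildItem -Path $path -Force) {
    Write-Warning "$path is not empty - clear it before adding the endpoint"
}
```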
So how’s the sync speed?
Well, first the good news. Most changes to a local file trigger a write to the Windows USN journal that the agent picks up on. This triggers an immediate sync, and in a very short time that change should be propagated throughout the sync group. The bad news is that similar operations on the copies stored in the Cloud, meaning the creation or modification of files directly in the Cloud volume (potentially through the API), don’t get registered and don’t trigger a sync. That capability just doesn’t exist yet, and the Azure team knows it’s a problem. To overcome this, Azure File Sync has a change detection job that runs once every 24 hours and does a full local/Cloud sync comparison.
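If you suspect cloud-side changes are waiting on that 24-hour job, one rough way to spot-check is to mount the Azure file share over SMB (this is plain Azure Files access over port 445, as described earlier) and compare its listing against the local endpoint. The Z: drive letter and both paths here are assumptions for illustration.

```powershell
# Compare filenames between the local endpoint and the cloud share
# (cloud share assumed mounted as Z: against the storage account)
$local = Get-ChildItem -Path 'D:\Documents' -Recurse -File
$cloud = Get-ChildItem -Path 'Z:\' -Recurse -File
Compare-Object -ReferenceObject @($local.Name) -DifferenceObject @($cloud.Name)
```

Files that appear only on the Z: side are cloud-side changes the sync group has not yet picked up.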
What are the strong positives?
ACL Permissions - The permissions on your local file shares are copied up to Azure along with the files and replicated to all endpoints. Just set your AD permissions in one place.
Encryption - Works with BitLocker, Azure Information Protection (AIP), and AD Rights Management Services (RMS). Does not work with EFS.
Replication - Easy synchronization of files across branch offices.
Local Access - Retain the ability to access shares in a traditional manner.
Centralized Storage - Having files stored in Azure Storage adds capabilities like snapshotting and no-delete locks.
Rapid Namespace Restore - Restores are accomplished by simply installing the sync agent on a new server and adding the endpoint. Restores of this kind complete quickly because the files aren’t downloaded up front. Stubs for the files appear almost immediately, and the files themselves are downloaded as they are accessed.
Cloud Tiering - You can enable Cloud tiering to keep infrequently accessed (cold) files off your expensive storage, with the full copy remaining online-only. A stub file is created locally and the file is downloaded on demand when requested by a user. More below.
Tiering comes with a few caveats: files must be larger than 64 KB, a volume with deduplication enabled cannot use cloud tiering, and tiered files will not be indexed by Windows Search.
Tiering is made possible by the Azure agent and the StorageSync.sys Azure File Sync file system filter. This technology is remarkable in that it transparently tracks which bits of a file are stored locally and which still need to be downloaded. This is readily seen in one of the Microsoft promo videos for Azure File Sync. Take an MP4 video file stored in an Azure File share and check the properties of the local copy: it may report a 20 MB file size but 0 bytes on disk. As you open the file and begin playing it, the necessary bits are streamed down. After a few minutes of playing you can check the properties again and see perhaps 5 MB on disk. This continues until the file is 100% downloaded.
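You can see which files are currently tiered by looking for the Offline attribute that the filter driver sets on stubs. A minimal sketch, assuming the F:\Synced_Documents endpoint path used earlier:

```powershell
# List online-only (tiered) files: stubs carry the Offline attribute
Get-ChildItem -Path 'F:\Synced_Documents' -Recurse -File |
    Where-Object { $_.Attributes -band [IO.FileAttributes]::Offline } |
    Select-Object FullName, Length
```

Note that Length reports the logical file size, not the bytes actually on disk, which is exactly the discrepancy described above.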
Considerations & Limitations
Management Software compatibility - The first consideration is the ability of your other endpoint agents to understand the “O” (offline) attribute set by Azure File Sync. Tiering your files won’t work if agents touch every file daily, since each access triggers a full download of the file. Anti-virus and backup software must be able to work with the sync agent.
Part of this is negated by no longer needing to back up the files locally. Your master copy is now in Azure Storage and any restores will originate from there, so there’s no need to back up the local copy. If anything, you should use Azure Backup to ensure the safety of the master copy.
System Drive Folders - If you synchronize files from the System/OS volume then you lose the Cloud Tiering capability and Rapid Namespace Restore.
Supported configurations - Only local NTFS volumes on Windows Server 2012 R2 and 2016 are supported.
File Locking - There is no mechanism for file locks in Azure Files. If two people in two branch offices edit the same document at the same time, two copies will be saved. There is no way to merge versions or edits.
The combination of no file locks and 24 hours between full synchronizations severely limits the use case for Azure File Sync. If you’re not planning on manipulating the files in Azure Storage directly, then the 24-hour sync doesn’t affect you, but that still leaves users with the job of manually reconciling differences, potentially many differences depending on the frequency of file access. Until the Azure team finds a better way to version files and reconcile differences, I don’t see myself endorsing this product.