Azure Blob storage
Xperience by Kentico supports file system providers that allow you to map parts of the file system to Microsoft Azure Blob Storage. You can use Blob storage when:
- deploying Xperience projects to the SaaS environment
- deploying to private cloud
- storing parts of the application file system (for example, media library files) in shared storage
Blob storage is particularly suitable for storing content item assets, media library files, and any other unmanaged binary files referenced by the Xperience application. Some deployment environments, such as Azure Web Apps, do not guarantee a persistent file system for files created outside of the original deployment package. If Azure recycles the application due to rolling infrastructure updates or unexpected outages, everything except the original deployment image is lost. Blob storage does not suffer from these limitations, as the infrastructure ensures redundancy in case of an outage.
Follow the instructions on this page to create Azure Blob storage providers for:
- the SaaS environment (Kentico-managed storage)
- private cloud deployments (self-managed storage)
File name case
Unlike regular file systems (NTFS, VFAT), Azure Blob storage is case-sensitive. To ensure consistent behavior, Xperience automatically converts all file and folder names to lowercase when processing files on Azure storage.
Media library files in Azure Blob storage
Media library files stored in Azure Blob storage have the following limitations:
- Storing a large number (thousands) of media library files in a single media library can significantly affect the performance and user experience of the Media libraries application.
- We recommend structuring media library files into multiple media libraries and storing at most 100 files in a single media library folder.
- Mapping subfolders of media libraries is not supported. You can map either the directory containing the media libraries (~/assets/media), or individual media libraries (~/assets/media/<MediaLibraryName>).
- The system’s automatic clearing of files from the server-side cache does not work for files stored in external storage. If you modify a media file, the website may still display the old version until the cache expires (unless you manually clear the application’s cache). See also: File caching.
Azure Blob storage for Kentico’s SaaS
When developing an Xperience application that you want to deploy to the SaaS environment, you need to create an Azure Blob storage provider (AzureStorageProvider) for folders with external application files and media library files that you want to deploy with your application.
Azure Blob storage is part of your Xperience by Kentico SaaS subscription and is used when deploying projects to the SaaS environment, as storing persistent data alongside Xperience application binaries deployed in Azure App Service is not recommended.
Managed and unmanaged data
Only data that is not handled by the CI/CD features and is stored outside of the Xperience database needs to be deployed to the Azure Blob Storage.
- Media library metadata is stored in the Xperience database; only the media library (binary) files are deployed.
- Files stored in the content item asset fields do not need to be deployed as they are handled by CI/CD.
Default Kentico-managed Azure Blob storage configuration
Xperience provides Azure Blob storage with the required accounts as part of every subscription. Projects created with the --cloud parameter contain the StorageInitializationModule.cs file with a sample storage initialization module and the Export-DeploymentPackage.ps1 deployment script.
The default configuration maps:
- ~/assets (content item assets, media libraries, files uploaded via the Upload file form component) project folder to an Azure Blob storage provider for the Qa, Uat, and Production environments. The default Azure Blob storage container name is default.
- Use Azure Blob storage containers to categorize files.
- ~/assets/media (media libraries) project folder to a local storage provider configured for the development environment. The media library files are stored in the ~/$StorageAssets/default/assets/media folder (simulating a container named default), but are accessed under the ~/assets/media path in your application.
- Using local storage providers for development is a good practice, as it is not recommended to use production containers during development.
The default configuration ensures that when your application accesses the ~/assets/media folder, the mapping automatically forwards the request to the ~/$StorageAssets/default/assets/media folder during development. When you deploy the application to the SaaS environment, your application stores the folders in the Azure Blob storage container named default. You can use multiple containers to store files.
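For orientation, the sketch below approximates what the default mapping looks like when expressed with the MapAzureStoragePath and MapLocalStoragePath helper methods described in the next section. It is an approximation only; the generated StorageInitializationModule.cs in your project is the authoritative version and may differ in details.
```csharp
// Sketch only: approximates the default mapping, assuming the MapAzureStoragePath and
// MapLocalStoragePath helper methods shown later on this page.
protected override void OnInit()
{
    base.OnInit();

    if (Environment.IsQa() || Environment.IsUat() || Environment.IsProduction())
    {
        // SaaS environments: "~/assets" is stored in the "default" Azure Blob storage container
        MapAzureStoragePath("~/assets", "default");
    }
    else
    {
        // Local development: "~/assets/media" is stored under "~/$StorageAssets/default"
        MapLocalStoragePath("~/assets/media", "default");
    }
}
```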
The configuration can be further customized – see Map folders to a Kentico-managed Azure Blob storage.
Map folders to a Kentico-managed Azure Blob storage
To use a different configuration (a different container name or multiple containers for multiple folders), create, configure, and map Azure Blob storage providers and local file system providers for each folder:
- Get a basic understanding of Azure Blob storage containers.
- Use container names conforming to the naming requirements in Container names.
- Use folder structure inside containers and file name requirements conforming to Naming and Referencing Containers, Blobs, and Metadata.
- Edit the custom module in the StorageInitializationModule.cs file that was automatically created during installation.
- Get familiar with the module code and modify it to suit your application:
Modify the MapAzureStoragePath method to map a directory to a specific Azure Blob storage provider:
- Use the path method argument to set the folder you want to map to an Azure Blob storage provider, where ~ is your project root folder.
- Set the PublicExternalFolderObject provider property to true if you want the files from a container to be publicly accessible.
- Use the containerName method argument to set the target Azure Blob storage container. Use a separate container for each folder that you want to deploy separately.
Containers do not need to have an existing folder in the storage assets root folder, but they need a valid mapping in your application – Xperience automatically creates the folder structure in Azure Blob storage when your deployed application first accesses the mapped folders.
Containers in deployment environments are isolated. You can map folders in different environments to containers with identical names.
Mapping a folder to an Azure Blob storage provider:
```csharp
private void MapAzureStoragePath(string path, string containerName)
{
    // Creates a new StorageProvider instance for Azure Blob storage
    var provider = AzureStorageProvider.Create();

    // Specifies the target container
    provider.CustomRootPath = containerName;
    provider.PublicExternalFolderObject = false;

    StorageHelper.MapStoragePath(path, provider);
}
```
Modify the MapLocalStoragePath method to create a local storage provider for storage assets. The mapped files are stored under ~/$StorageAssets/<container_name>:
- Use the path method argument to set the folder you want to map to a local storage provider, where ~ is the root folder of your project.
- Use the containerName method argument to set the target Azure Blob storage container. Use a separate container for each folder that you want to deploy separately.

Mapping a folder to a local storage provider:
```csharp
private void MapLocalStoragePath(string path, string containerName)
{
    // Creates a new StorageProvider instance for local storage
    var provider = StorageProvider.CreateFileSystemStorageProvider();

    // A local path where to map the folder
    provider.CustomRootPath = $"{LOCAL_STORAGE_ASSETS_DIRECTORY_NAME}/{containerName}";

    StorageHelper.MapStoragePath(path, provider);
}
```
You can use a different root folder than $StorageAssets to store the local binary files. Change the $StorageAssetsFolderName variable in the Export-DeploymentPackage.ps1 script and the LOCAL_STORAGE_ASSETS_DIRECTORY_NAME variable in StorageInitializationModule.cs.
Modify the OnInit() method to map the folder path either to the Azure Blob storage provider or to a local storage provider, depending on the current environment.

Mapping project folders to managed Azure storage:
```csharp
// Changed from "$StorageAssets". "Export-DeploymentPackage.ps1" needs to reflect this.
private const string LOCAL_STORAGE_ASSETS_DIRECTORY_NAME = "CustomStorageAssetsFolder";

private const string CONTAINER_NAME_LIBRARIES = "media-libraries-container";
private const string CONTAINER_NAME_CONTENT_ITEM_ASSETS = "content-item-assets-container";
private const string CONTAINER_NAME_FORM_FILES = "biz-form-files-container";
private const string CONTAINER_NAME_DOCUMENTS = "documents-container";

// Initialization code executed on application startup
protected override void OnInit()
{
    base.OnInit();

    // NOTE: The "~" gets resolved into the project root
    // NOTE: The "documents" folder is an example folder with arbitrary document files

    // Mapping for SaaS deployment environments:
    // -- media libraries to "media-libraries-container"
    // -- content item assets to "content-item-assets-container"
    // -- form files to "biz-form-files-container"
    // -- "documents" folder to "documents-container"
    if (Environment.IsQa() || Environment.IsUat() || Environment.IsProduction())
    {
        MapAzureStoragePath("~/assets/media", CONTAINER_NAME_LIBRARIES);
        MapAzureStoragePath("~/assets/contentitems", CONTAINER_NAME_CONTENT_ITEM_ASSETS);
        MapAzureStoragePath("~/assets/bizformfiles", CONTAINER_NAME_FORM_FILES);
        MapAzureStoragePath("~/assets/documents", CONTAINER_NAME_DOCUMENTS);
    }
    // Mapping for the local development environment, under the "~/CustomStorageAssetsFolder/" folder:
    // -- media libraries to the "media-libraries-container" folder
    // -- "documents" folder to the "documents-container" folder
    // NOTE: The "~/CustomStorageAssetsFolder/" folder is exported into the deployment package.
    else
    {
        MapLocalStoragePath("~/assets/media", CONTAINER_NAME_LIBRARIES);
        MapLocalStoragePath("~/assets/documents", CONTAINER_NAME_DOCUMENTS);
    }
}
```
- Rebuild the solution.
- Create and deploy the deployment package.
The binary files are now deployed according to the defined configuration and environment – either to Azure Blob storage or to local storage.
Azure Blob storage for private cloud deployments
To map parts of the file system to Azure Blob storage when deploying to private cloud:
Follow these recommendations:
- Use separate Azure Blob storage containers for each deployment environment of the same project (for example, production and testing environments). Such environments often contain identically named files that would otherwise overwrite each other. To avoid collisions, use the CustomRootPath property to map folders for each environment to a different container (see the sketch after this list).
- Use HTTPS to connect to Azure Blob Storage accounts (this behavior can be enabled via the Secure transfer required setting available in the Azure portal). Xperience is by default configured to use HTTPS with Azure Blob Storage.
If you need to use HTTP (for example, because a storage account supports only unencrypted HTTP connections), include the CMSAzureBlobEndPoint configuration key in your application’s configuration file. Set the value of the key to the full endpoint URL of the external storage account and explicitly specify the HTTP protocol:

appsettings.json:
```json
{
  ...
  "CMSAzureBlobEndPoint": "http://_StorageAccountName_.blob.core.windows.net"
}
```
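A minimal sketch of the per-environment container recommendation follows. The environment check via the ASPNETCORE_ENVIRONMENT variable and the container names are illustrative assumptions; use whatever environment detection and naming your project already relies on.
```csharp
using System;
using CMS.IO;
using Kentico.Xperience.AzureStorage;

// ...inside your custom module's OnInit() method:
var provider = AzureStorageProvider.Create();

// Picks a dedicated container per deployment environment to avoid file name collisions
// (environment detection and container names are only examples)
bool isProduction = string.Equals(
    Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT"),
    "Production", StringComparison.OrdinalIgnoreCase);
provider.CustomRootPath = isProduction ? "production-assets" : "testing-assets";

StorageHelper.MapStoragePath("~/assets", provider);
```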
Specify the storage account name and primary access key in your application configuration file (appsettings.json by default).
Open the Azure Management Portal.
Open Storage accounts.
Select your storage.
Switch to the Access keys tab.
Use the Storage account name and one of the provided access key values.
appsettings.json:
```json
{
  ...
  "CMSAzureAccountName": "StorageAccountName",
  "CMSAzureSharedKey": "PrimaryAccessKey"
}
```
Open the Xperience project in Visual Studio.
Add a custom Class Library project and add the Kentico.Xperience.AzureStorage NuGet package as a dependency.
Create a custom module class in the created library.
Override the module’s OnInit method and, for each folder that you want to store in the blob storage:
- Create a new instance of the Azure storage provider.
- (Optional) Specify the target container using the CustomRootPath property of the provider.
- (Optional) Specify whether you want the container to be publicly accessible using the PublicExternalFolderObject property of the provider. True means the container is publicly accessible.
- Map the folder to the provider.

As some deployment environments don’t provide a persistent file system, we recommend mapping the ~/assets/ project folder to prevent possible loss of files due to redeployment, swapping of slots, etc.

Mapping assets to self-managed Azure storage:
```csharp
/* The following code snippet demonstrates the recommended mappings for the ~/assets/ folder
   mentioned in the note above. You can use the same approach to also map other folders
   from your project's file system. */

using CMS;
using CMS.DataEngine;
using CMS.IO;

using Kentico.Xperience.AzureStorage;

// Registers the custom module into the system
[assembly: RegisterModule(typeof(CustomInitializationModule))]

public class CustomInitializationModule : Module
{
    // Module class constructor, the system registers the module under the name "CustomInit"
    public CustomInitializationModule()
        : base("CustomInit")
    {
    }

    // Contains initialization code that is executed when the application starts
    protected override void OnInit()
    {
        base.OnInit();

        // Creates a new StorageProvider instance for Azure Blob storage
        var assetsProvider = AzureStorageProvider.Create();

        // Specifies the target container, the provider ensures its existence in the storage account
        assetsProvider.CustomRootPath = "myassetscontainer";

        // Makes the 'myassetscontainer' container publicly accessible
        assetsProvider.PublicExternalFolderObject = true;

        // Maps the local directory to the storage provider
        StorageHelper.MapStoragePath("~/assets", assetsProvider);
    }
}
```
(Optional) Set Optional application settings for Azure storage.
The application deployed in the Azure instance now stores files from the ~/assets project folder in the myassetscontainer Azure Blob storage container.
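To sanity check the mapping, you can work with files under the mapped folder through the CMS.IO API, which routes operations on mapped paths to the configured provider. The snippet below is a sketch only: the file name is made up for illustration, and it assumes physical paths built from SystemContext.WebApplicationPhysicalPath resolve into the mapped ~/assets folder.
```csharp
using CMS.Base;
using CMS.IO;  // CMS.IO, not System.IO, so mapped storage providers are applied

// Illustrative file path under the mapped "~/assets" folder
string filePath = Path.Combine(SystemContext.WebApplicationPhysicalPath, "assets", "documents", "example.txt");

// Because "~/assets" is mapped to the Azure storage provider, the file ends up
// in the "myassetscontainer" blob container instead of on the local disk
File.WriteAllText(filePath, "Stored in Azure Blob storage");
```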
Optional application settings for Azure storage
The optional application settings are applicable only for private cloud deployments.
| Key | Description |
| --- | --- |
| CMSAzureTempPath | The system uses the specified folder to store temporary files on a local disk, for example when transferring large files to or from the storage account. If not set, the system creates and uses an ~/AzureTemp directory in the project’s root. |
| CMSAzureCachePath | Specifies a folder on a local disk where files requested from the storage account are cached. This helps minimize the number of blob storage operations, which saves time and resources. If not set, the system creates and uses an ~/AzureCache directory in the project’s root. |
| CMSAzureBlobEndPoint | Sets the endpoint used for the connection to the blob service of the specified storage account. If you wish to use the default endpoint, remove the setting completely from the appropriate files. |
| CMSAzurePublicContainer | Indicates if the blob container used to store the application’s files is public. If true, files can be accessed directly through the URL of the appropriate blob service, for example: https://<StorageAccountName>.blob.core.windows.net/media/imagelibrary/logo.png |
| CMSDownloadBlobTimeout | Specifies the timeout interval in minutes for importing files from Azure Blob storage into Xperience. The default is 1.5 minutes. Increase the interval if you encounter problems when importing large files (2GB+). |