Azure CLI is a versatile tool that allows you to automate file transfers and manage Azure Storage efficiently. This guide provides practical examples and step-by-step instructions to streamline your cloud workflows.

Installation and setup

Install Azure CLI on your system:

On Windows

winget install -e --id Microsoft.AzureCLI

On macOS

brew install azure-cli

On Linux (Ubuntu/Debian)

# Install prerequisites
sudo apt-get update
sudo apt-get install -y ca-certificates curl apt-transport-https lsb-release gnupg

# Download and install the Microsoft signing key
sudo mkdir -p /etc/apt/keyrings
curl -sLS https://packages.microsoft.com/keys/microsoft.asc |
    gpg --dearmor |
    sudo tee /etc/apt/keyrings/microsoft.gpg > /dev/null
sudo chmod go+r /etc/apt/keyrings/microsoft.gpg

# Add the Azure CLI software repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/microsoft.gpg] https://packages.microsoft.com/repos/azure-cli/ $(lsb_release -cs) main" |
    sudo tee /etc/apt/sources.list.d/azure-cli.list

# Update repository information and install the azure-cli package
sudo apt-get update
sudo apt-get install azure-cli

After installation, authenticate with Azure:

az login

For automated or headless authentication, use Microsoft Entra ID with managed identities or service principals:

# Sign in with a service principal (RBAC)
az login --service-principal \
    --username $APP_ID \
    --password $PASSWORD \
    --tenant $TENANT_ID

# Assign an appropriate RBAC role
az role assignment create \
    --assignee $APP_ID \
    --role "Storage Blob Data Contributor" \
    --scope "/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Storage/storageAccounts/$STORAGE_ACCOUNT"

Verify the installation and check the version:

# Print the CLI version and its component versions
az --version

Creating a storage account

Create a resource group and a storage account with hardened security settings. Note that --public-network-access Disabled blocks all public traffic, so transfers from your workstation will require a private endpoint (see the best practices below) or relaxed network rules:

# Create resource group
az group create --name myResourceGroup --location eastus

# Create storage account with security settings
az storage account create \
    --name mystorageaccount \
    --resource-group myResourceGroup \
    --location eastus \
    --sku Standard_LRS \
    --min-tls-version TLS1_2 \
    --allow-blob-public-access false \
    --public-network-access Disabled \
    --https-only true \
    --encryption-services blob

Using Microsoft Entra ID authentication

Microsoft Entra ID authorization is available on every storage account; to ensure requests cannot fall back to account keys, disable shared key access:

# Disallow shared key (account key) authorization
az storage account update \
    --name mystorageaccount \
    --resource-group myResourceGroup \
    --allow-shared-key-access false

# Assign yourself the Storage Blob Data Contributor role
az role assignment create \
    --role "Storage Blob Data Contributor" \
    --assignee-object-id $(az ad signed-in-user show --query id -o tsv) \
    --scope "/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Storage/storageAccounts/$STORAGE_ACCOUNT"

Transferring files using Azure CLI

Uploading files

Upload files using Microsoft Entra ID authentication:

# Create a container
az storage container create \
    --name mycontainer \
    --account-name mystorageaccount \
    --auth-mode login

# Upload a file
az storage blob upload \
    --container-name mycontainer \
    --file /path/to/local/file.txt \
    --name remote-file.txt \
    --account-name mystorageaccount \
    --auth-mode login
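To push a whole directory in one command, az storage blob upload-batch is a convenient alternative (container, account, and path names below follow the examples above):

```shell
# Upload every file under a local directory to the container
az storage blob upload-batch \
    --destination mycontainer \
    --source /path/to/local/files \
    --account-name mystorageaccount \
    --auth-mode login
```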

Downloading files

Retrieve files using Microsoft Entra ID authentication:

az storage blob download \
    --container-name mycontainer \
    --name remote-file.txt \
    --file /path/to/local/destination.txt \
    --account-name mystorageaccount \
    --auth-mode login

Automating batch file transfers with Azure CLI

Create a script to handle multiple file transfers with error handling:

#!/bin/bash

# Configuration
source_dir="/path/to/local/files"
container_name="mycontainer"
account_name="mystorageaccount"

# Check if storage account exists
if ! az storage account show --name "$account_name" --resource-group myResourceGroup >/dev/null 2>&1; then
    echo "Storage account does not exist"
    exit 1
fi

# Upload all files in directory with retry logic
for file in "$source_dir"/*; do
    [ -f "$file" ] || continue  # skip subdirectories
    filename=$(basename "$file")
    max_retries=3
    retry_count=0

    while [ $retry_count -lt $max_retries ]; do
        if az storage blob upload \
            --container-name "$container_name" \
            --file "$file" \
            --name "$filename" \
            --account-name "$account_name" \
            --auth-mode login; then
            echo "Uploaded: $filename"
            break
        else
            retry_count=$((retry_count + 1))
            if [ $retry_count -lt $max_retries ]; then
                echo "Retry $retry_count for $filename"
                sleep 5
            else
                echo "Failed to upload $filename after $max_retries attempts"
            fi
        fi
    done
done

Optimizing transfer performance

For large files or many files, az storage copy performs concurrent transfers by delegating to AzCopy, using your current az login session for authentication:

# Copy a directory recursively (runs AzCopy under the hood)
az storage copy \
    --source "/path/to/source/directory/*" \
    --destination "https://$account_name.blob.core.windows.net/$container_name" \
    --recursive \
    --put-md5

Managing file access with Azure storage

Generate user delegation SAS tokens for secure file sharing:

# Set the SAS expiry to 30 minutes from now (GNU date syntax)
end_time=$(date -u -d "30 minutes" '+%Y-%m-%dT%H:%MZ')

az storage container generate-sas \
    --name mycontainer \
    --account-name mystorageaccount \
    --permissions r \
    --expiry $end_time \
    --auth-mode login \
    --as-user \
    --https-only
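The token printed by generate-sas can be handed to any HTTPS client. As a sketch (assuming a blob named remote-file.txt exists in the container and $end_time is set as above), a recipient could fetch the file with curl:

```shell
# Capture the SAS token, then download a blob over plain HTTPS
sas=$(az storage container generate-sas \
    --name mycontainer \
    --account-name mystorageaccount \
    --permissions r \
    --expiry "$end_time" \
    --auth-mode login \
    --as-user \
    --https-only \
    --output tsv)

curl -fsS "https://mystorageaccount.blob.core.windows.net/mycontainer/remote-file.txt?$sas" \
    -o downloaded-file.txt
```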

Monitoring and logging

Track file transfer operations and enable diagnostic logging:

# Enable diagnostic settings
# Enable diagnostic logging on the blob service
az monitor diagnostic-settings create \
    --name "storage-diagnostics" \
    --resource "$storage_account_id/blobServices/default" \
    --logs '[{"category": "StorageRead","enabled": true},{"category": "StorageWrite","enabled": true}]' \
    --metrics '[{"category": "Transaction","enabled": true}]' \
    --workspace "$log_analytics_workspace_id"

# List all blobs in a container
az storage blob list \
    --container-name mycontainer \
    --account-name mystorageaccount \
    --auth-mode login \
    --output table
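The same listing can feed JMESPath queries for quick stats; for example (assuming the same container and account names), the total bytes stored in the container:

```shell
# Sum blob sizes (in bytes) across the container
az storage blob list \
    --container-name mycontainer \
    --account-name mystorageaccount \
    --auth-mode login \
    --query "sum([].properties.contentLength)" \
    --output tsv
```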

Best practices

  1. Use managed identities: Implement Microsoft Entra ID managed identities for authentication instead of storage account keys.

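On a resource that has a managed identity assigned (an Azure VM or App Service, for example), the CLI can sign in without any stored secret:

```shell
# Sign in using the resource's system-assigned managed identity
az login --identity
```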
  2. Enable soft delete: Protect against accidental deletions.

    az storage blob service-properties delete-policy update \
        --days-retained 7 \
        --enable true \
        --account-name mystorageaccount \
        --auth-mode login
    
  3. Use private endpoints: Secure access to storage accounts.

    az network private-endpoint create \
        --name "storage-endpoint" \
        --resource-group myResourceGroup \
        --vnet-name myVNet \
        --subnet mySubnet \
        --private-connection-resource-id "$storage_account_id" \
        --group-id blob \
        --connection-name "storage-connection"
    
  4. Enable versioning: Maintain multiple versions of your files.

    az storage account blob-service-properties update \
        --account-name mystorageaccount \
        --resource-group myResourceGroup \
        --enable-versioning true
    
  5. Implement lifecycle management: Automate data lifecycle by creating a management policy.

    az storage account management-policy create \
        --account-name mystorageaccount \
        --resource-group myResourceGroup \
        --policy @policy.json
    
  6. Require infrastructure encryption: Infrastructure (double) encryption can only be set when an account is created, so enable it up front for new accounts.

    az storage account create \
        --name mystorageaccount \
        --resource-group myResourceGroup \
        --location eastus \
        --sku Standard_LRS \
        --require-infrastructure-encryption
    
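The lifecycle management command in step 5 references a policy.json file. A minimal example (the rule name and day thresholds are illustrative) that tiers blobs to cool storage after 30 days and deletes them after a year might look like:

```shell
# Write an illustrative lifecycle policy to policy.json
cat > policy.json <<'EOF'
{
  "rules": [
    {
      "enabled": true,
      "name": "tier-then-delete",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 },
            "delete": { "daysAfterModificationGreaterThan": 365 }
          }
        },
        "filters": { "blobTypes": ["blockBlob"] }
      }
    }
  ]
}
EOF
```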

Next steps

  • Set up your first automated file transfer.
  • Configure lifecycle management for your storage account.
  • Enable versioning for critical files.

Need to handle complex file processing workflows? Check out Transloadit for comprehensive file importing and exporting services.

Troubleshooting common issues

  1. Check connectivity issues:

    az network watcher test-connectivity \
        --resource-group myResourceGroup \
        --source-resource myVM \
        --dest-address "mystorageaccount.blob.core.windows.net" \
        --dest-port 443
    
  2. View operation logs:

    az monitor log-analytics query \
        --workspace $workspace_id \
        --analytics-query "StorageBlobLogs | where TimeGenerated > ago(1h)"
    
  3. Check whether a storage account name is valid and still available:

    az storage account check-name \
        --name mystorageaccount