Automatically change Data Factory Integration Runtime proxy settings

According to the self-hosted Integration Runtime (IR) setup instructions, it is possible to route outbound traffic from the IR to Data Factory through a proxy.

There is a configuration option “Use system proxy”, which despite its name is not a real system proxy, but a .NET application setting stored in the files diahost.exe.config and diawp.exe.config. Consequently, changes to WinHTTP or Internet Explorer proxy settings do not affect the IR services.

Here is a very simple way to configure the proxy for the IR automatically, without complicated XML parsing:

$config_proxy = '<defaultProxy enabled="true"><proxy bypassonlocal="true" proxyaddress="http://[proxy-address]:[port]" /></defaultProxy>'

$diawp_config_file = "C:\\Program Files\\Microsoft Integration Runtime\\4.0\\Shared\\diawp.exe.config"
$diahost_config_file = "C:\\Program Files\\Microsoft Integration Runtime\\4.0\\Shared\\diahost.exe.config"

$diawp_config = Get-Content -Path $diawp_config_file
$diawp_config = [string]::Join(" ", $diawp_config)
$diawp_config = $diawp_config -replace '<defaultProxy useDefaultCredentials="true" />', $config_proxy
$diawp_config | Out-File -FilePath $diawp_config_file -Encoding utf8

$diahost_config = Get-Content -Path $diahost_config_file
$diahost_config = [string]::Join(" ", $diahost_config)
$diahost_config = $diahost_config -replace '<defaultProxy useDefaultCredentials="true" />', $config_proxy
$diahost_config | Out-File -FilePath $diahost_config_file -Encoding utf8

Run the script and restart the Integration Runtime service; the IR should then appear in the Ready state on the Data Factory side.

If you need bypass URLs or proxy authentication, there is more information about proxy settings customization in the “Configure proxy server settings” section of the documentation and in the defaultProxy element reference.
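If the config files have already been modified and a plain string replacement no longer matches, the same change can be made with real XML parsing. The following is a hypothetical Python sketch (the element names follow the .NET defaultProxy configuration schema; the function name is made up):

```python
import xml.etree.ElementTree as ET

def set_default_proxy(config_path, proxy_address):
    # Insert or replace <system.net><defaultProxy> in a .NET exe.config file.
    tree = ET.parse(config_path)
    root = tree.getroot()

    system_net = root.find('system.net')
    if system_net is None:
        system_net = ET.SubElement(root, 'system.net')

    # Drop any existing defaultProxy element before adding the new one.
    old = system_net.find('defaultProxy')
    if old is not None:
        system_net.remove(old)

    default_proxy = ET.SubElement(system_net, 'defaultProxy', {'enabled': 'true'})
    ET.SubElement(default_proxy, 'proxy',
                  {'bypassonlocal': 'true', 'proxyaddress': proxy_address})

    tree.write(config_path, encoding='utf-8', xml_declaration=True)
```

Apply it to both diahost.exe.config and diawp.exe.config, then restart the service as above.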

Get Virtual Machines names attached to Virtual Network

Azure Resource Graph Explorer is a very useful tool for auditing cloud resources across multiple subscriptions under the same tenant.

Currently, the Azure Portal does not have a view that shows which resources exactly are connected to a Virtual Network, because most resources are attached to a VNet through their Network Interfaces. So, when we look at the Connected Devices tab, we see only the NIC names, not the names of the resources themselves.

It is possible to build more complex views with the Kusto query language and join information about several resource types. In our case, to get the list of VMs with their associated VNets, we join VMs, NICs, and VNets:

Resources
| where type == "microsoft.compute/virtualmachines"
| project name, vmnics = (properties.networkProfile.networkInterfaces)
| mv-expand vmnics
| project name, vmnics_id = tostring(vmnics.id)
| join (Resources | where type == "microsoft.network/networkinterfaces" | project nicname = (name), vmnics_id = tostring(id), properties) on vmnics_id
| mv-expand ipconfigs = (properties.ipConfigurations)
| extend subnet_resource_id = split(tostring(ipconfigs.properties.subnet.id), '/')
| order by name asc, nicname asc
| project vmname = (name), nicname, vnetname = subnet_resource_id[8], subnetname = subnet_resource_id[10]
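To see why the last line reads indices 8 and 10, split a subnet resource ID by `/`. A quick Python illustration (the subscription, resource group, and resource names below are made up):

```python
# A subnet resource ID follows the standard ARM format:
# /subscriptions/{sub}/resourceGroups/{rg}/providers/Microsoft.Network
#   /virtualNetworks/{vnet}/subnets/{subnet}
subnet_id = ("/subscriptions/00000000-0000-0000-0000-000000000000"
             "/resourceGroups/my-rg/providers/Microsoft.Network"
             "/virtualNetworks/my-vnet/subnets/my-subnet")

parts = subnet_id.split('/')   # leading '/' puts an empty string at index 0
vnet_name = parts[8]
subnet_name = parts[10]
print(vnet_name, subnet_name)  # → my-vnet my-subnet
```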

To launch the query, paste it into Azure Resource Graph Explorer in the Azure Portal and click Run query.

Terraform workaround: Azure RM template deployment parameters types

If you are using the azurerm_template_deployment Terraform resource and getting errors like the following:

  • ‘[parameter]’ expected type ‘string’, got unconvertible type ‘array’
  • ‘[parameter]’ expected type ‘string’, got unconvertible type ‘object’
  • ‘[parameter]’ expected type ‘string’, got unconvertible type ‘int’
  • etc.

then you are using the parameters argument of this resource, which can only pass string-type parameters.

For example, the problem begins when you need to pass a reference to a Key Vault secret through parameters.

There is a closed issue about this on the AzureRM Terraform provider's GitHub repository, which seems to be impossible to resolve on the provider side.

The only workaround I have found is to use the parameters_body argument instead. In this case we lose all Terraform-side validation except a check that the parameters JSON is valid, but we are able to pass parameters of any type. Further validation of the parameters is done by ARM.


variable "array_variable_example" {
  default = <<EOF
[
  "value1",
  "value2"
]
EOF
}

variable "object_variable_example" {
  default = <<EOF
{
  "key": {
    "subkey": "value"
  }
}
EOF
}

variable "int_variable_example" {
  default = "1"
}

variable "string_variable_example" {
  default = "string"
}

resource "azurerm_template_deployment" "terraform_resource_name" {
    name                   = "_deployment_${var.environment}"
    resource_group_name    = "${var.resource_group_name}"

    template_body          = "${file("${path.module}/arm/azuredeploy.json")}"

    parameters_body  = <<EOF
{
    "keyVaultSecretParameter": {
      "reference": {
        "keyVault": {
          "id": "${var.key_vault_id}"
        },
        "secretName": "${var.secret_name}"
      }
    },
    "objectParameter": {
      "value": ${var.object_variable_example}
    },
    "stringParameter": {
      "value": "${var.string_variable_example}"
    },
    "arrayParameter": {
      "value": ${var.array_variable_example}
    },
    "integerParameter": {
      "value": ${var.int_variable_example}
    }
}
EOF

    deployment_mode = "Incremental"
}
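Since Terraform only verifies that parameters_body is valid JSON, it can help to pre-check the shape of the payload before a deployment. A minimal Python sketch (the parameter names mirror the hypothetical example above):

```python
import json

# Hypothetical mixed-type ARM deployment parameters, mirroring what
# parameters_body would contain after Terraform interpolation.
parameters_body = json.dumps({
    "stringParameter":  {"value": "string"},
    "integerParameter": {"value": 1},
    "arrayParameter":   {"value": ["value1", "value2"]},
    "objectParameter":  {"value": {"key": {"subkey": "value"}}},
})

# json.loads raises ValueError on malformed JSON, which is essentially
# the same (and only) check Terraform performs on parameters_body.
decoded = json.loads(parameters_body)
print(sorted(decoded))
```

Note that only string values are quoted; array, object, and integer values are embedded as raw JSON, which is exactly why the quoted-string-only parameters argument cannot carry them.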

Programmatically create Azure AD Domain Services with Azure CLI

Azure AD Domain Services is not yet very well documented, but from the existing documentation and the Swagger API specification we can find a way to create Azure AD Domain Services with LDAPS enabled.

The Swagger REST API specification for Azure AD Domain Services (Azure AD DS, AAD DS) describes the available properties, which can be passed to az resource create:
az resource create  --subscription '[subscription-id]' \
                    --resource-group '[resource-group-name]' \
                    --name '[managed-domain-name]' \
                    --resource-type 'Microsoft.AAD/DomainServices' \
                    --properties '{
                            "DomainName"  : "[managed-domain-name]",
                            "SubnetId"    : "[subnet-id]",
                            "domainSecuritySettings": {
                                "ntlmV1": "Enabled",
                                "tlsV1": "Disabled",
                                "syncNtlmPasswords": "Enabled"
                            },
                            "ldapsSettings" : {
                                "ldaps": "Enabled",
                                "pfxCertificate": "[pfx-content-in-base64]",
                                "pfxCertificatePassword": "[pfx-password]",
                                "externalAccess": "Disabled"
                            }
                        }'
My issue on GitHub for the documentation update:

How to work with Azure Blobs when only Azure CLI installed

To avoid using Azure Storage account keys and to give users just-enough access, it is recommended to use Azure AD authentication and RBAC.

To download or read a blob from a storage account with a private container, the user needs at least the “Storage Blob Data Reader” role (even if they are the owner of the Storage Account resource).

Azure CLI script example:


az login
az account set -s $subscription_id
az storage blob download  --account-name "$storage_account_name" \
                          --container-name "$container_name" \
                          --name "$blob_path" \
                          --file "$output_file_path" \
                          --auth-mode login

On Linux you also have Python by default, and Python is included in the Azure CLI installation (with all the Azure, Azure AD, and Azure Storage Python modules), so the following Python script can be used to get a result similar to the Azure CLI one, using device login:

import adal

from azure.storage.blob import BlockBlobService
from azure.storage.common import TokenCredential

storage_account_name  = "<storage-account-name>"
container_name        = "<container-name>"
blob_path             = "<blob-name>"
output_file_path      = "<local-file-path>"

def get_device_login_token():
    # Azure CLI Application ID, used here only as an example
    client_id     = '04b07795-8ddb-461a-bbee-02f9e1bf7b46'
    # Your organisation's tenant ID, which is used for Storage RBAC
    tenant_id     = '<tenant-id>'
    authority_uri = 'https://login.microsoftonline.com/' + tenant_id
    resource_uri  = 'https://storage.azure.com/'

    context = adal.AuthenticationContext(authority_uri, api_version=None)
    code = context.acquire_user_code(resource_uri, client_id)
    # Show the device-login instructions to the user
    print(code['message'])
    mgmt_token = context.acquire_token_with_device_code(resource_uri, code, client_id)

    return TokenCredential(mgmt_token['accessToken'])

block_blob_service = BlockBlobService(
        account_name     = storage_account_name,
        token_credential = get_device_login_token()
)

block_blob_service.get_blob_to_path(container_name, blob_path, output_file_path)

If you use your own custom application in the Python code, its service principal must be registered in your organisation's tenant. The multi-tenant application ID of Azure CLI is used here only as an example; in this case, logins from the Python script will appear as coming from Azure CLI. How to create and register a multi-tenant application is explained in the Azure AD documentation.

Check the size of AD Connect MS SQL Express database

MS SQL Express has a database size limit of 10 GB, so how do you know when to switch to another edition of MS SQL Server?

To check the current size of the AD Connect database, you can use the following commands:

"C:\Program Files\Microsoft SQL Server\110\Tools\Binn\SqlLocalDB.exe" info .\ADSync

Name: ADSync 
Shared name: ADSync
Instance pipe name: np:\\.\pipe\LOCALDB#SHD66A55\tsql\query

"C:\Program Files\Microsoft SQL Server\110\Tools\Binn\SQLCMD.EXE" -S "np:\\.\pipe\LOCALDB#SHD66A55\tsql\query" -Q "sp_helpdb ADSync;" > ADSync.txt

You can also connect to the ADSync database with the Invoke-Sqlcmd cmdlet from the SQLPS PowerShell module:

Invoke-Sqlcmd -ServerInstance "(localdb)\.\AdSync" -Query 'sp_helpdb ADSync'
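sp_helpdb reports db_size as a string in megabytes. A tiny hypothetical helper (the function name and 80% threshold are my own choices) to flag when the database approaches the 10 GB Express limit:

```python
# sp_helpdb reports db_size like "8192.00 MB"; flag when the database
# approaches the 10 GB SQL Express limit (default threshold: 80%).
def near_express_limit(db_size_str, limit_gb=10, threshold=0.8):
    size_mb = float(db_size_str.strip().split()[0])
    return size_mb >= limit_gb * 1024 * threshold

print(near_express_limit("8192.00 MB"))  # 8 GB is 80% of 10 GB → True
print(near_express_limit("1024.00 MB"))  # 1 GB → False
```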

iptables – source host not found

If you see something like this in your logs:

iptables v1.4.21: host/network `' not found 
Try `iptables -h' or 'iptables --help' for more information. 
iptables v1.4.21: host/network `' not found 
Try `iptables -h' or 'iptables --help' for more information.

iptables resolves hostnames at the moment the rules are loaded, so authorize your DNS traffic before any rule that uses a hostname as a source:

# ...
# DNS 
/sbin/iptables -t filter -A OUTPUT -p tcp --dport 53 -j ACCEPT 
/sbin/iptables -t filter -A OUTPUT -p udp --dport 53 -j ACCEPT 
/sbin/iptables -t filter -A INPUT -p tcp --dport 53 -j ACCEPT 
/sbin/iptables -t filter -A INPUT -p udp --dport 53 -j ACCEPT 
/sbin/iptables -A INPUT -i eth0 -p icmp --source [allowed-host] -j ACCEPT
# ...one rule per allowed source host or network
# ...

Arrays and objects in ARM Templates

Please review the article about ARM templates first: Authoring Azure Resource Manager templates.

Before thinking about nested templates, it is possible to imagine a lightweight scenario based on standard ARM template capabilities. Variables and parameters in an ARM template are not limited to scalars; they can also represent objects and arrays.

Template parameters can take an object, an array, or a secureObject as input. Variables are type-agnostic. Thanks to JSON, the syntax for these types is essentially that of JavaScript objects and arrays.
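A minimal parameters fragment with these types, sketched here as a Python dict (the parameter names and default values are made up for illustration):

```python
import json

# Hypothetical ARM template parameter definitions showing array,
# object, and string types side by side.
template_parameters = {
    "adminUsers": {"type": "array",  "defaultValue": ["alice", "bob"]},
    "tags":       {"type": "object", "defaultValue": {"env": "dev"}},
    "namePrefix": {"type": "string", "defaultValue": "demo"},
}

# Serialize to the JSON shape ARM expects under the "parameters" key.
print(json.dumps({"parameters": template_parameters}, indent=2))
```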

Continue reading “Arrays and objects in ARM Templates”

Test your PowerShell DSC configuration on Azure VM

Troubleshooting and debugging PowerShell DSC configurations could be sometimes very painful. If you are preparing your DSC configuration to use in Azure or even on-premises, you would like to test it in the real environment.

Continue reading “Test your PowerShell DSC configuration on Azure VM”

How to transfer resources between subscriptions in Microsoft Azure?

With the general availability of the Azure Ibiza portal, we have got an update on a feature request from the feedback forums; see Move services between subscriptions. Unfortunately, I do not have the full list of services and resources which support this feature; Microsoft's product teams add new ones to the list every day. Today, you can try it with Azure VMs, Azure Automation, Azure Storage, Azure Cache, Azure Search, Azure Batch, Azure DocumentDB, and HDInsight.

Change subscription feature of the Azure Portal

There might be a few situations when you need it:

  • You have to change your contract or create another subscription.
  • You want to transfer some resources to your client from your sandbox subscription (in my case it can be MSDN developer access) and you do not want to, or cannot, use ARM templates.
  • Some other cases? Please put them in the comments.