Automatically change Data Factory Integration Runtime proxy settings

According to the self-hosted Integration Runtime (IR) setup instructions, it is possible to proxy outbound traffic from the IR to Data Factory.

There is a configuration option “Use system proxy”, which is not a real system proxy but a .NET application setting in the files diahost.exe.config and diawp.exe.config. So, changes to WinHTTP or Internet Explorer proxy settings do not affect the IR services.

There is a very simple way to configure the proxy for the IR automatically, without complicated XML parsing:

$config_proxy = '<defaultProxy enabled="true"><proxy bypassonlocal="true" proxyaddress="http://10.10.10.10:3128" /></defaultProxy>'

$config_files = @(
    "C:\Program Files\Microsoft Integration Runtime\4.0\Shared\diawp.exe.config",
    "C:\Program Files\Microsoft Integration Runtime\4.0\Shared\diahost.exe.config"
)

foreach ($config_file in $config_files) {
    # Join the file into a single string so the replacement works regardless of line breaks
    $config = [string]::Join(" ", (Get-Content -Path $config_file))
    # Swap the default <defaultProxy /> element for our proxy definition
    $config = $config -replace '<defaultProxy useDefaultCredentials="true" />', $config_proxy
    $config | Out-File -FilePath $config_file -Encoding utf8
}

Run the script and restart the Integration Runtime service, and the IR should appear in the Ready state on the Data Factory side.
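For example, with PowerShell (DIAHostService is the default service name of the self-hosted IR; verify it with Get-Service if in doubt):

# Restart the self-hosted Integration Runtime service so the new proxy settings are loaded
Restart-Service -Name "DIAHostService"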

If you need bypass URLs or proxy authentication, there is more information about proxy settings customization in the “Configure proxy server settings” section of the IR documentation and in the system.net proxy element reference.
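For instance, a bypass list could be added to the same replacement string (a sketch only: the bypasslist element comes from the standard system.net configuration schema, and the addresses below are hypothetical examples):

# Hypothetical example: bypass the proxy for a 10.x range and an internal domain
$config_proxy = '<defaultProxy enabled="true"><proxy bypassonlocal="true" proxyaddress="http://10.10.10.10:3128" /><bypasslist><add address="10\.\d+\.\d+\.\d+" /><add address="\.internal\.contoso\.com$" /></bypasslist></defaultProxy>'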

Get Virtual Machines names attached to Virtual Network

Azure Resource Graph Explorer is a very useful tool for auditing cloud resources across multiple subscriptions under the same tenant.

Currently, the Azure Portal does not have any view that shows which resources exactly are connected to a Virtual Network, because most resources are attached to a VNet through their Network Interfaces. So, when we look at the Connected devices tab, we see only NIC names and not the names of the resources behind them.

It is possible to build more complex views with the Kusto query language and join information about several resources. In our case, to get the list of VMs with their associated VNets, we combine VMs, NICs and subnet IDs:

Resources
| where type == "microsoft.compute/virtualmachines"
| project name, vmnics = (properties.networkProfile.networkInterfaces)
// a VM can have several NICs, so expand them into separate rows
| mv-expand vmnics
| project name, vmnics_id = tostring(vmnics.id)
| join (Resources | where type == "microsoft.network/networkinterfaces" | project nicname=(name), vmnics_id = tostring(id), properties) on vmnics_id
| mv-expand ipconfigs = (properties.ipConfigurations)
// the subnet resource ID contains the VNet name (segment 8) and the subnet name (segment 10)
| extend subnet_resource_id = split(tostring(ipconfigs.properties.subnet.id), '/')
| order by name asc, nicname asc
| project vmname=(name), nicname, vnetname=subnet_resource_id[8], subnetname=subnet_resource_id[10]

How to launch the query in Azure Resource Graph Explorer: https://docs.microsoft.com/en-us/azure/governance/resource-graph/first-query-portal
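The same query can also be run from a terminal with the Azure CLI resource-graph extension (a sketch; shown here with a shortened one-line query):

# One-time installation of the Resource Graph extension for Azure CLI
az extension add --name resource-graph

# Run a query from the command line
az graph query -q "Resources | where type == 'microsoft.compute/virtualmachines' | project name"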

Terraform workaround: Azure RM template deployment parameters types

If you are using the azurerm_template_deployment Terraform resource and are getting errors like the following:

  • ‘[parameter]’ expected type ‘string’, got unconvertible type ‘array’
  • ‘[parameter]’ expected type ‘string’, got unconvertible type ‘object’
  • ‘[parameter]’ expected type ‘string’, got unconvertible type ‘int’
  • etc.

then you are using the parameters argument of this resource, and only string-type parameters can be passed through that mechanism.

For example, the problem appears when you need to pass a reference to a Key Vault secret through parameters.

There is a closed issue about this on the AzureRM Terraform provider GitHub repository, which seems impossible to resolve: https://github.com/terraform-providers/terraform-provider-azurerm/issues/34

The only way I have found to avoid this error is to use the parameters_body argument. In this case we lose all validation by Terraform except the validity of the parameters JSON, but we are able to pass parameters of any type. Further validation of the parameters is done by ARM.

Example:

variable "array_variable_example" {
  default = <<EOF
    [ 
        "Element1",  
        "Element2", 
        "Element3" 
    ]
  EOF
}

variable "object_variable_example" {
  default = <<EOF
    {
      "key": {
        "subkey": "value"
      }
    }
  EOF
}

variable "int_variable_example" {
  default = "1"
}

variable "string_variable_example" {
  default = "string"
}


resource "azurerm_template_deployment" "terraform_resource_name" {
    name                   = "_deployment_${var.environment}"
    resource_group_name    = "${var.resource_group_name}"

    template_body          = "${file("${path.module}/arm/azuredeploy.json")}"

    parameters_body  = <<EOF
      {
            "keyVaultSecretParameter": {
              "reference": {
                "keyVault": {
                  "id": "${var.key_vault_id}"
                },
                "secretName": "${var.secret_name}"
              }
            },

            "objectParameter": {
              "value": "${var.object_variable_example}"
            },

            "stringParameter": {
              "value": "${var.string_variable_example}"
            },

            "arrayParameter": {
              "value": ${var.array_variable_example}
            },

            "integerParameter": {
              "value": ${var.int_variable_example}
            }
      }
    EOF

    deployment_mode = "Incremental"
}
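On Terraform 0.12 and later, the same parameters_body could be assembled with the built-in jsonencode() function instead of a hand-written heredoc, which rules out JSON syntax errors (a sketch with the example values above inlined; note the 0.12+ syntax):

locals {
  # Pass this to the resource as: parameters_body = local.parameters_body
  parameters_body = jsonencode({
    arrayParameter = {
      value = ["Element1", "Element2", "Element3"]
    }
    objectParameter = {
      value = { key = { subkey = "value" } }
    }
    integerParameter = {
      value = 1
    }
  })
}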

Programmatically create Azure AD Domain Services with Azure CLI

Azure AD Domain Services is not yet very well documented, but from the existing documentation and the Swagger API specification we can find a way to create Azure AD Domain Services with LDAPS enabled.

Azure AD Domain Services (Azure AD DS, AAD DS) Swagger REST API specification: https://github.com/Azure/azure-rest-api-specs/blob/master/specification/domainservices/resource-manager/Microsoft.AAD/stable/2017-06-01/domainservices.json

az resource create --subscription [subscription-id] \
                   --resource-group [resource-group-name] \
                   --name [managed-domain-name] \
                   --resource-type 'Microsoft.AAD/DomainServices' \
                   --properties "$(cat <<EOF
{
    "DomainName": "[managed-domain-name]",
    "SubnetId": "[subnet-id]",
    "domainSecuritySettings": {
        "ntlmV1": "Enabled",
        "tlsV1": "Disabled",
        "syncNtlmPasswords": "Enabled"
    },
    "ldapsSettings": {
        "ldaps": "Enabled",
        "pfxCertificate": "[pfx-content-in-base64]",
        "pfxCertificatePassword": "[pfx-password]",
        "externalAccess": "Disabled"
    }
}
EOF
)"
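The creation takes a while; the provisioning state can be checked afterwards, for example with az resource show (same placeholders as above):

az resource show --resource-group [resource-group-name] \
                 --name [managed-domain-name] \
                 --resource-type 'Microsoft.AAD/DomainServices' \
                 --query properties.provisioningState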

My issue on GitHub for the documentation update: https://github.com/MicrosoftDocs/azure-docs/issues/40480#issuecomment-540573164

How to work with Azure Blobs when only Azure CLI installed

To avoid using Azure Storage account keys and to give users just enough access, it is recommended to use Azure AD authentication and RBAC.

To download or read a blob from a Storage Account with a private container, a user needs at least the “Storage Blob Data Reader” role (even if they are an Owner of the Storage Account resource).
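The role can be granted at the storage account scope with the Azure CLI, for example (a sketch; the placeholders match the script below):

az role assignment create --assignee "<user-or-service-principal>" \
                          --role "Storage Blob Data Reader" \
                          --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>"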

Azure CLI script example:

subscription_id="00000-0000-0000-0000-00000000"
storage_account_name="<storage-account-name>"
container_name="<container-name>"
blob_path="<blob-name>"
output_file_path="<local-file-name>"

az login
az account set -s $subscription_id
az storage blob download  --account-name "$storage_account_name" \
                          --container-name "$container_name" \
                          --name "$blob_path" \
                          --file "$output_file_path" \
                          --auth-mode login

On Linux, Python is available by default, and Python is also included with the Azure CLI installation (together with the Azure, Azure AD and Azure Storage Python modules). The following Python script can be used to get a result similar to the Azure CLI one, using device login:

import adal

from azure.storage.blob import BlockBlobService
from azure.storage.common import TokenCredential

storage_account_name  = "<storage-account-name>"
container_name        = "<container-name>"
blob_path             = "<blob-name>"
output_file_path      = "<local-file-path>"


def get_device_login_token():
    # application ID of the Azure CLI, used here only as an example
    client_id     = '04b07795-8ddb-461a-bbee-02f9e1bf7b46'
    # your organisation's tenant ID, which is used for Storage RBAC
    tenant_id     = '<tenant-id>'
    authority_uri = ('https://login.microsoftonline.com/' + tenant_id + '/')
    resource_uri  = 'https://storage.azure.com'

    context = adal.AuthenticationContext(authority_uri, api_version=None)
    code = context.acquire_user_code(resource_uri, client_id)
    print(code['message'])
    mgmt_token = context.acquire_token_with_device_code(resource_uri, code, client_id)

    return TokenCredential(mgmt_token['accessToken'])

block_blob_service = BlockBlobService(
        account_name  = storage_account_name, 
        token_credential  = get_device_login_token()
    )

block_blob_service.get_blob_to_path(container_name, blob_path, output_file_path)

If you use your own application for the Python code, its service principal must be registered in your organisation's tenant. The multi-tenant application ID of the Azure CLI is used here only as an example; in this case, logins from the Python script will appear as logins from the Azure CLI. How to create and register a multi-tenant application is explained here: https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-convert-app-to-be-multi-tenant

Check the size of AD Connect MS SQL Express database

MS SQL Express has a database size limit of 10 GB, so how do you know when it is time to switch to another edition of MS SQL?

To check the current size of the AD Connect database, you can use the following commands:

"C:\Program Files\Microsoft SQL Server\110\Tools\Binn\SqlLocalDB.exe" info .\ADSync

Name: ADSync 
Shared name: ADSync
Owner: NT SERVICE\ADSync 
Instance pipe name: np:\\.\pipe\LOCALDB#SHD66A55\tsql\query

"C:\Program Files\Microsoft SQL Server\110\Tools\Binn\SQLCMD.EXE" -S "np:\\.\pipe\LOCALDB#SHD66A55\tsql\query" -Q "sp_helpdb ADSync;" > ADSync.txt

You can also connect to the ADSync database with the Invoke-Sqlcmd cmdlet from the SQLPS PowerShell module:

Invoke-Sqlcmd -ServerInstance "(localdb)\.\AdSync" -Query 'sp_helpdb ADSync'
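To get just the total file size of the database as a number, you can query sys.database_files in the same way (a sketch; size is reported in 8 KB pages, hence the conversion to MB):

# Sum the database file sizes in MB (size is a count of 8 KB pages)
Invoke-Sqlcmd -ServerInstance "(localdb)\.\ADSync" -Database "ADSync" -Query "SELECT SUM(size) * 8 / 1024 AS SizeMB FROM sys.database_files"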

iptables – source host not found

If you see something like this in your logs:

iptables v1.4.21: host/network `proxy.ovh.net' not found 
Try `iptables -h' or 'iptables --help' for more information. 
iptables v1.4.21: host/network `proxy.p19.ovh.net' not found 
Try `iptables -h' or 'iptables --help' for more information. 
iptables v1.4.21: host/network `proxy.rbx.ovh.net' not found 
Try `iptables -h' or 'iptables --help' for more information. 
iptables v1.4.21: host/network `proxy.sbg.ovh.net' not found 
Try `iptables -h' or 'iptables --help' for more information. 
iptables v1.4.21: host/network `proxy.bhs.ovh.net' not found 
Try `iptables -h' or 'iptables --help' for more information. 
iptables v1.4.21: host/network `ping.ovh.net' not found 
Try `iptables -h' or 'iptables --help' for more information.

iptables resolves host names at the moment a rule is inserted, so authorize your DNS traffic before any host-based rules:

# ...
# DNS 
/sbin/iptables -t filter -A OUTPUT -p tcp --dport 53 -j ACCEPT 
/sbin/iptables -t filter -A OUTPUT -p udp --dport 53 -j ACCEPT 
/sbin/iptables -t filter -A INPUT -p tcp --dport 53 -j ACCEPT 
/sbin/iptables -t filter -A INPUT -p udp --dport 53 -j ACCEPT 
/sbin/iptables -A INPUT -i eth0 -p icmp --source proxy.ovh.net -j ACCEPT
/sbin/iptables -A INPUT -i eth0 -p icmp --source proxy.p19.ovh.net -j ACCEPT 
/sbin/iptables -A INPUT -i eth0 -p icmp --source proxy.rbx.ovh.net -j ACCEPT 
/sbin/iptables -A INPUT -i eth0 -p icmp --source proxy.sbg.ovh.net -j ACCEPT 
/sbin/iptables -A INPUT -i eth0 -p icmp --source proxy.bhs.ovh.net -j ACCEPT
# ...
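As an alternative to opening inbound port 53, a single stateful rule accepting established connections also lets DNS replies back in (a common pattern, not part of the original ruleset):

# Accept replies to connections initiated from this host
/sbin/iptables -t filter -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT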

Arrays and objects in ARM Templates

Please review the article about ARM templates first: Authoring Azure Resource Manager templates.

Before thinking about nested templates, it is possible to imagine a lightweight scenario based on standard ARM template capabilities. Variables and parameters in an ARM template are not limited to scalars: they can also represent objects and arrays.

Template parameters can take objects, arrays and secureObjects as input, while variables are type-agnostic. Thanks to JSON, the syntax for using these types is essentially that of JavaScript objects and arrays.
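For instance, an array parameter with a default value looks like this (a minimal sketch):

"parameters": {
    "subnetNames": {
        "type": "array",
        "defaultValue": [ "subnet-1", "subnet-2" ]
    }
}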

Continue reading “Arrays and objects in ARM Templates”

Test your PowerShell DSC configuration on Azure VM

Troubleshooting and debugging PowerShell DSC configurations can sometimes be very painful. If you are preparing a DSC configuration for use in Azure, or even on-premises, you will want to test it in a real environment.

Continue reading “Test your PowerShell DSC configuration on Azure VM”

How to transfer resources between subscriptions in Microsoft Azure?

With the global availability of the new Azure portal (“Ibiza”), we have got an update on a feature request from the Feedback Forums; see Move services between subscriptions. Unfortunately, I do not have the full list of services and resources which support this feature; Microsoft product teams add new ones to the list every day. Today, you can try it with Azure VM, Azure Automation, Azure Storage, Azure Cache, Azure Search, Azure Batch, Azure DocumentDB and HDInsight.

Change subscription feature of the Azure Portal

There might be a few situations when you need it:

  • You have to change your contract or create another subscription.
  • You want to transfer some resources to a client from your sandbox subscription (in my case, an MSDN Developer subscription) and you do not want to, or cannot, use ARM templates.
  • Some other cases? Please share them in the comments.