Automatically change Data Factory Integration Runtime proxy settings

According to the self-hosted Integration Runtime (IR) setup instructions, it is possible to send the outbound traffic from the IR to Data Factory through a proxy.

There is a configuration option “Use system proxy”, which is not a real system proxy but a .NET application setting in the files diahost.exe.config and diawp.exe.config. So, any changes to the WinHTTP or Internet Explorer proxy settings do not affect the IR services.

There is a very simple way to configure the proxy for the IR automatically, without complicated XML parsing:

$config_proxy = '<defaultProxy enabled="true"><proxy bypassonlocal="true" proxyaddress="http://10.10.10.10:3128" /></defaultProxy>'

$diawp_config_file = "C:\Program Files\Microsoft Integration Runtime\4.0\Shared\diawp.exe.config"
$diahost_config_file = "C:\Program Files\Microsoft Integration Runtime\4.0\Shared\diahost.exe.config"

# Read each config as a single string (-Raw keeps the line breaks intact),
# swap the default <defaultProxy /> element for our proxy configuration
# and write the file back in UTF-8
$diawp_config = Get-Content -Path $diawp_config_file -Raw
$diawp_config = $diawp_config -replace '<defaultProxy useDefaultCredentials="true" />', $config_proxy
$diawp_config | Out-File -FilePath $diawp_config_file -Encoding utf8

$diahost_config = Get-Content -Path $diahost_config_file -Raw
$diahost_config = $diahost_config -replace '<defaultProxy useDefaultCredentials="true" />', $config_proxy
$diahost_config | Out-File -FilePath $diahost_config_file -Encoding utf8

Run the script and restart the service, and the IR should show the Ready state on the Data Factory side.
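The restart can be scripted too; a minimal sketch, assuming the self-hosted IR Windows service is named DIAHostService (verify the name with Get-Service if in doubt):

# Restart the IR service so that the new proxy settings are loaded
# NOTE: the service name DIAHostService is an assumption here,
# check it with: Get-Service -DisplayName "*Integration Runtime*"
Restart-Service -Name "DIAHostService"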

If bypass URLs or proxy authentication are needed, there is more information about proxy settings customization in the “Configure proxy server settings” section and in the system.net proxy element reference.
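For instance, to skip the proxy for some destinations, a bypasslist element can be embedded in the same $config_proxy string from the script above; a sketch, where the address regex is just a placeholder to adapt:

# The <bypasslist> entries are regular expressions matched against the host name
$config_proxy = '<defaultProxy enabled="true"><proxy bypassonlocal="true" proxyaddress="http://10.10.10.10:3128" /><bypasslist><add address="[a-z]+\.contoso\.com$" /></bypasslist></defaultProxy>'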

My Microsoft Ignite 2019 notes, Donovan Brown: Empowering every developer to innovate with Microsoft Azure

My Microsoft Ignite 2019 notes, Kirk Koenigsbauer: Microsoft’s roadmap for security, compliance and identity

Announcements: https://www.microsoft.com/security/blog/2019/11/04/microsoft-announces-new-innovations-in-security-compliance-and-identity-at-ignite/

My Microsoft Ignite 2019 notes, Erin Chapple: Create more value for your business on Azure infrastructure, from the cloud to the edge

Azure Updates: https://azure.microsoft.com/en-us/updates/ 

My Microsoft Ignite 2019 notes, Satya Nadella's Vision Keynote with links

Microsoft Ignite 2019 Page: https://www.microsoft.com/en-us/ignite 

Videos on-demand: https://aka.ms/msigniteondemand 

Azure 

Trust  

Developers 

Microsoft Power Platform https://powerplatform.microsoft.com/en-us/ 

  • + Power Automate – robotic process automation, a UI clicker with a recorder
  • + Power Virtual Agents – conversational bots with a flow designer

Dynamics 365  

Microsoft 365 

  • Project Cortex – Knowledge Center – AI to simplify navigating organisational knowledge and people connections
  • Fluid Framework – live collaboration on documents over different interfaces (Outlook, Teams, etc.) 
  • Whiteboard content camera  
  • Microsoft’s AI-powered eye gaze tech for Surface
  • Teams Integrations and Improvements 
  • Edge and Bing https://www.microsoft.com/en-us/windows/microsoft-edge 
    • Edge built on Chromium engine 
    • Ships independently of OS, can be launched on any OS 
    • 100% Internet Explorer 11 compatibility inside Edge 
    • Privacy settings 
    • Page collections with export to Excel and share capabilities 
    • Bing Internet and Intranet search (via Connectors) 

Minecraft Earth 

Get Virtual Machines names attached to Virtual Network

There is Azure Resource Graph Explorer, which is a very useful tool for auditing cloud resources across multiple subscriptions under the same tenant.

Currently, the Azure Portal does not have any view that shows which resources exactly are connected to a Virtual Network, because most resources are attached to a VNet through their Network Interfaces. So, when we look at the Connected devices tab, we see only NIC names and not the names of the underlying resources.

It is possible to build more complex views with the Kusto Query Language (KQL) and join information about several resource types. In our case, to get the list of VMs with their associated VNets, we join VMs, NICs and VNets:

Resources
| where type == "microsoft.compute/virtualmachines"
// a VM can have several NICs: expand them to one row per NIC reference
| project name, vmnics = (properties.networkProfile.networkInterfaces)
| mv-expand vmnics
| project name, vmnics_id = tostring(vmnics.id)
// join the NIC resources on their resource id to get NIC names and IP configurations
| join (Resources | where type == "microsoft.network/networkinterfaces" | project nicname=(name), vmnics_id = tostring(id), properties) on vmnics_id
| mv-expand ipconfigs = (properties.ipConfigurations)
// the subnet resource id contains the VNet name (segment 8) and the subnet name (segment 10)
| extend subnet_resource_id = split(tostring(ipconfigs.properties.subnet.id), '/')
| order by name asc, nicname asc
| project vmname=(name), nicname, vnetname=subnet_resource_id[8], subnetname=subnet_resource_id[10]

How to launch the query in Azure Resource Graph Explorer: https://docs.microsoft.com/en-us/azure/governance/resource-graph/first-query-portal
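The same query can also be run outside the portal; a minimal sketch with the Az.ResourceGraph PowerShell module, assuming the query above is saved to a local file vm-vnet-query.kql (the file name is just an example):

# One-time setup: install the Resource Graph module and sign in
Install-Module -Name Az.ResourceGraph -Scope CurrentUser
Connect-AzAccount

# Load the saved KQL query and run it across all accessible subscriptions
$query = Get-Content -Path .\vm-vnet-query.kql -Raw
Search-AzGraph -Query $query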

Terraform workaround: Azure RM template deployment parameters types

If you are using the azurerm_template_deployment Terraform resource and getting the following errors:

  • ‘[parameter]’ expected type ‘string’, got unconvertible type ‘array’
  • ‘[parameter]’ expected type ‘string’, got unconvertible type ‘object’
  • ‘[parameter]’ expected type ‘string’, got unconvertible type ‘int’
  • etc.

Then you are using the parameters argument of this resource; only string-type parameters can be passed that way.

For example, the problem arises when you need to pass a reference to a Key Vault secret through the parameters.

There is a closed issue about this on the AzureRM Terraform provider on GitHub which, it seems, cannot be resolved: https://github.com/terraform-providers/terraform-provider-azurerm/issues/34

The only way I have found to avoid this error is to use the parameters_body argument. In this case we lose any validation by Terraform except a check that the parameters JSON is valid, but we are able to pass parameters of any type. The further validation of the parameters is done by ARM.

Example:

variable "array_variable_example" {
  default = <<EOF
    [ 
        "Element1",  
        "Element2", 
        "Element3" 
    ]
  EOF
}

variable "object_variable_example" {
  default = <<EOF
    {
      "key": {
        "subkey": "value"
      }
    }
  EOF
}

variable "int_variable_example" {
  default = "1"
}

variable "string_variable_example" {
  default = "string"
}


resource "azurerm_template_deployment" "terraform_resource_name" {
    name                   = "_deployment_${var.environment}"
    resource_group_name    = "${var.resource_group_name}"

    template_body          = "${file("${path.module}/arm/azuredeploy.json")}"

    parameters_body  = <<EOF
      {
            "keyVaultSecretParameter": {
              "reference": {
                "keyVault": {
                  "id": "${var.key_vault_id}"
                },
                "secretName": "${var.secret_name}"
              }
            },

            "objectParameter": {
              "value": "${var.object_variable_example}"
            },

            "stringParameter": {
              "value": "${var.string_variable_example}"
            },

            "arrayParameter": {
              "value": ${var.array_variable_example}
            },

            "integerParameter": {
              "value": ${var.int_variable_example}
            }
      }
    EOF

    deployment_mode = "Incremental"
}
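For reference, parameters_body only works if azuredeploy.json declares the matching parameter types. The template itself is not shown in this post, so the following parameters block is a sketch of what it is assumed to contain; note that a Key Vault reference in particular requires a securestring (or secureObject) parameter on the ARM side:

"parameters": {
    "keyVaultSecretParameter": { "type": "securestring" },
    "objectParameter":         { "type": "object" },
    "stringParameter":         { "type": "string" },
    "arrayParameter":          { "type": "array" },
    "integerParameter":        { "type": "int" }
}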

Programmatically create Azure AD Domain Services with Azure CLI

Azure AD Domain Services is not yet very well documented, but from the existing documentation and the Swagger API specification we can find a way to create Azure AD Domain Services with LDAPS enabled.

Azure AD Domain Services (Azure AD DS, AAD DS) Swagger REST API specification: https://github.com/Azure/azure-rest-api-specs/blob/master/specification/domainservices/resource-manager/Microsoft.AAD/stable/2017-06-01/domainservices.json

# --properties must receive the JSON as a command-line argument,
# hence the "$(cat <<EOF ... EOF)" wrapper around the heredoc
az resource create  --subscription [subscription-id] \
                    --resource-group [resource-group-name] \
                    --name [managed-domain-name] \
                    --resource-type 'Microsoft.AAD/DomainServices' \
                    --properties "$(cat <<EOF
{
    "domainName": "[managed-domain-name]",
    "subnetId": "[subnet-id]",
    "domainSecuritySettings": {
        "ntlmV1": "Enabled",
        "tlsV1": "Disabled",
        "syncNtlmPasswords": "Enabled"
    },
    "ldapsSettings": {
        "ldaps": "Enabled",
        "pfxCertificate": "[pfx-content-in-base64]",
        "pfxCertificatePassword": "[pfx-password]",
        "externalAccess": "Disabled"
    }
}
EOF
)"

My issue on GitHub for the documentation update: https://github.com/MicrosoftDocs/azure-docs/issues/40480#issuecomment-540573164
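Provisioning of a managed domain takes a while; as a sketch, its state can be polled with az resource show (the provisioningState property comes from the same Swagger specification):

az resource show  --resource-group [resource-group-name] \
                  --name [managed-domain-name] \
                  --resource-type 'Microsoft.AAD/DomainServices' \
                  --query properties.provisioningState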

How to work with Azure Blobs when only Azure CLI installed

To avoid using Azure Storage account keys and to give a user just enough access, it is recommended to use Azure AD authentication and RBAC.

To download or read a blob from a Storage Account with a private container, the user needs at least the “Storage Blob Data Reader” role (even if they are an owner of the Storage Account resource).
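As a sketch, the role can be granted with Azure CLI; the assignee and scope values below are placeholders to adapt:

az role assignment create --role "Storage Blob Data Reader" \
                          --assignee "<user-principal-name-or-object-id>" \
                          --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>"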

Azure CLI script example:

subscription_id="00000-0000-0000-0000-00000000"
storage_account_name="<storage-account-name>"
container_name="<container-name>"
blob_path="<blob-name>"
output_file_path="<local-file-name>"

az login
az account set -s $subscription_id
az storage blob download  --account-name "$storage_account_name" \
                          --container-name "$container_name" \
                          --name "$blob_path" \
                          --file "$output_file_path" \
                          --auth-mode login
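
Uploading works the same way, but requires at least the “Storage Blob Data Contributor” role; a sketch reusing the variables above:

az storage blob upload  --account-name "$storage_account_name" \
                        --container-name "$container_name" \
                        --name "$blob_path" \
                        --file "<local-file-to-upload>" \
                        --auth-mode login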

On Linux, Python is usually available by default, and Python is also included with the Azure CLI installation (with all the Azure, Azure AD and Azure Storage Python modules). The following Python script can be used to get a result similar to the Azure CLI one, using device login:

import adal

from azure.storage.blob import BlockBlobService
from azure.storage.common import TokenCredential

storage_account_name  = "<storage-account-name>"
container_name        = "<container-name>"
blob_path             = "<blob-name>"
output_file_path      = "<local-file-path>"


def get_device_login_token():
    # Azure CLI's well-known application ID, used here only as an example
    client_id     = '04b07795-8ddb-461a-bbee-02f9e1bf7b46'
    # your organisation's tenant ID, which is used for the Storage RBAC
    tenant_id     = '<tenant-id>'
    authority_uri = ('https://login.microsoftonline.com/' + tenant_id + '/')
    resource_uri  = 'https://storage.azure.com'

    context = adal.AuthenticationContext(authority_uri, api_version=None)
    code = context.acquire_user_code(resource_uri, client_id)
    print(code['message'])
    mgmt_token = context.acquire_token_with_device_code(resource_uri, code, client_id)

    return TokenCredential(mgmt_token['accessToken'])

block_blob_service = BlockBlobService(
        account_name  = storage_account_name, 
        token_credential  = get_device_login_token()
    )

block_blob_service.get_blob_to_path(container_name, blob_path, output_file_path)

If you use your own custom application for the Python code, then its service principal must be registered in your organisation's tenant. The multi-tenant application ID of Azure CLI is used here only as an example; this way the sign-ins from the Python script will show up as coming from Azure CLI. How to create and register a multi-tenant application is explained here: https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-convert-app-to-be-multi-tenant

Create a table in PowerShell

Sometimes we need a CSV as the output of a script. Manual string concatenation is not an elegant solution for object-oriented PowerShell. The easiest way for me to create a table and export it to CSV:

$records = @()

# Note the semicolons: they separate hashtable entries written on a single line
$records += New-Object psobject -Property @{ Column1 = "Value1"; Column2 = "Value2" }
$records += New-Object psobject -Property @{ Column1 = "Value3"; Column2 = "Value4" }

$records | Export-Csv -NoTypeInformation -Path .\table.csv
$records | Out-GridView
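
On PowerShell 3.0 and later, the same table can be built with the [PSCustomObject] accelerator, which preserves the column order (a plain hashtable does not guarantee it); a minimal sketch:

# [PSCustomObject] keeps the properties in declaration order
$records = @(
    [PSCustomObject]@{ Column1 = "Value1"; Column2 = "Value2" }
    [PSCustomObject]@{ Column1 = "Value3"; Column2 = "Value4" }
)
$records | Export-Csv -NoTypeInformation -Path .\table.csv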