Configure MSDTC using PowerShell

As of blackpearl 4.6.7, MSDTC is required to deploy workflows.

You can use the following script to automate the MSDTC configuration during VM provisioning.

Note: The execution policy might have to be changed to run this script. (Set-ExecutionPolicy RemoteSigned -Force)

# ---------------------------------
# Enable MSDTC for Network Access
# ---------------------------------
Write-Host "Enabling MSDTC for Network Access..." -ForegroundColor Yellow
$System_OS = (Get-WmiObject -Class Win32_OperatingSystem).Caption
If ($System_OS -match "2012 R2")
{
    # Windows Server 2012 R2 ships the MSDTC cmdlets
    Set-DtcNetworkSetting -DtcName "Local" -AuthenticationLevel "Incoming" -InboundTransactionsEnabled -OutboundTransactionsEnabled -RemoteClientAccessEnabled -Confirm:$false
}
else
{
    # Older OS versions: fall back to editing the registry
    .\ConfigureMSDTC.ps1 | Out-Null
}
Restart-Service MSDTC
Write-Host "------ MSDTC has been configured ------" -ForegroundColor Green

The script above uses the built-in Set-DtcNetworkSetting cmdlet if your OS is Windows Server 2012 R2; otherwise it falls back to the traditional approach of modifying the registry.

# Save the following script as a separate file: ConfigureMSDTC.ps1
$DTCSecurity = "Incoming"
$RegPath = "HKLM:\SOFTWARE\Microsoft\MSDTC"

# Enable network DTC access under the Security subkey
$RegSecurityPath = "$RegPath\Security"
Set-ItemProperty -Path $RegSecurityPath -Name "NetworkDtcAccess" -Value 1
Set-ItemProperty -Path $RegSecurityPath -Name "NetworkDtcAccessClients" -Value 1
Set-ItemProperty -Path $RegSecurityPath -Name "NetworkDtcAccessTransactions" -Value 1
Set-ItemProperty -Path $RegSecurityPath -Name "NetworkDtcAccessInbound" -Value 1
Set-ItemProperty -Path $RegSecurityPath -Name "NetworkDtcAccessOutbound" -Value 1
Set-ItemProperty -Path $RegSecurityPath -Name "LuTransactions" -Value 1

# The RPC security values depend on the chosen security mode
if ($DTCSecurity -eq "None")
{
    Set-ItemProperty -Path $RegPath -Name "TurnOffRpcSecurity" -Value 1
    Set-ItemProperty -Path $RegPath -Name "AllowOnlySecureRpcCalls" -Value 0
    Set-ItemProperty -Path $RegPath -Name "FallbackToUnsecureRPCIfNecessary" -Value 0
}
elseif ($DTCSecurity -eq "Incoming")
{
    Set-ItemProperty -Path $RegPath -Name "TurnOffRpcSecurity" -Value 0
    Set-ItemProperty -Path $RegPath -Name "AllowOnlySecureRpcCalls" -Value 0
    Set-ItemProperty -Path $RegPath -Name "FallbackToUnsecureRPCIfNecessary" -Value 1
}
else # mutual authentication
{
    Set-ItemProperty -Path $RegPath -Name "TurnOffRpcSecurity" -Value 0
    Set-ItemProperty -Path $RegPath -Name "AllowOnlySecureRpcCalls" -Value 1
    Set-ItemProperty -Path $RegPath -Name "FallbackToUnsecureRPCIfNecessary" -Value 0
}
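The mode-dependent registry values written by ConfigureMSDTC.ps1 can be summarized as a small lookup. The sketch below (Python, purely illustrative; the function name is not part of any K2 tooling) returns the values the script writes for each $DTCSecurity mode.

```python
# Sketch of the RPC-security values ConfigureMSDTC.ps1 writes under
# HKLM:\SOFTWARE\Microsoft\MSDTC for each $DTCSecurity mode.
# The function name is illustrative only.

def msdtc_rpc_values(mode: str) -> dict:
    """Return the registry values for a given DTC security mode."""
    if mode == "None":       # RPC security fully off
        return {"TurnOffRpcSecurity": 1,
                "AllowOnlySecureRpcCalls": 0,
                "FallbackToUnsecureRPCIfNecessary": 0}
    if mode == "Incoming":   # allow callers that cannot authenticate
        return {"TurnOffRpcSecurity": 0,
                "AllowOnlySecureRpcCalls": 0,
                "FallbackToUnsecureRPCIfNecessary": 1}
    # any other value is treated as mutual authentication
    return {"TurnOffRpcSecurity": 0,
            "AllowOnlySecureRpcCalls": 1,
            "FallbackToUnsecureRPCIfNecessary": 0}
```

Note that only "None" turns RPC security off; the "Incoming" mode keeps RPC security on but permits a fallback to unsecured calls when the caller cannot authenticate.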

Enable or Disable K2 blackpearl hostserver logging using PowerShell

If you are on a development machine and need to frequently enable or disable file-based host server logging, the following script will come in handy.

Note: The execution policy has to be changed to run this script. (Set-ExecutionPolicy RemoteSigned -Force)

## set-hostserverlogging.ps1 ##

# Set K2 HostServer parameters
[int]$SecondsToWaitForResponse = 30
$HostServerLogFile = "C:\Program Files (x86)\K2 blackpearl\Host Server\Bin\HostServerLogging.config"

# Check the current setting
if (Test-Path $HostServerLogFile)
{
    $xmldoc = (Get-Content $HostServerLogFile) -as [xml]
    $loglevelsetting = $xmldoc.SelectSingleNode('//LogLocationSettings/LogLocation[@Name="FileExtension"]/@LogLevel').'#text'
}
else
{
    Write-Host "$HostServerLogFile does not exist" -ForegroundColor Red
}

Function Restart-K2Service
{
    Write-Host "Stopping and restarting the K2 blackpearl Server service" -ForegroundColor Red
    Restart-Service -DisplayName "K2 blackpearl Server" -ErrorAction Stop
    Write-Host "K2 blackpearl Server service restarted successfully" -ForegroundColor Green
    Write-Host "Trying to open a connection within $SecondsToWaitForResponse seconds"
    $success = $false
    $x = 0
    # Requires the K2 client assembly, e.g.:
    # [Reflection.Assembly]::LoadWithPartialName("SourceCode.Workflow.Client") | Out-Null
    $k2con = New-Object SourceCode.Workflow.Client.Connection
    while (($x -lt $SecondsToWaitForResponse) -and (!$success))
    {
        try
        {
            $k2con.Open("localhost")  # adjust the K2 server name if needed
            $success = $true
        }
        catch
        {
            Write-Host "..Failed. Will try again" -ForegroundColor Yellow
            Start-Sleep 1
            $x++
        }
    }

    If ($success)
    {
        Write-Host "K2 service is running and file logging is $Status" -ForegroundColor Green
        if ($Status -eq "Enabled")
        {
            Write-Host "Check the Host Server\Bin directory for the latest log file"
        }
        Write-Host ""
    }
    else
    {
        Throw (New-Object System.Management.Automation.RuntimeException "K2 server did not respond in a timely fashion")
    }
}

Function Enable-FileLogging
{
    if ($loglevelsetting -ne "All")
    {
        $xmldoc.SelectSingleNode('//LogLocationSettings/LogLocation[@Name="FileExtension"]/@LogLevel').'#text' = 'All'
        $xmldoc.Save((Resolve-Path $HostServerLogFile))
        $script:Status = "Enabled"
        Write-Host "$HostServerLogFile is updated" -ForegroundColor Green
        Restart-K2Service
    }
    else
    {
        Write-Host "File logging is already enabled. No changes are made to the config file"
    }
}

Function Disable-FileLogging
{
    if ($loglevelsetting -eq "All")
    {
        $xmldoc.SelectSingleNode('//LogLocationSettings/LogLocation[@Name="FileExtension"]/@LogLevel').'#text' = 'Error'
        $xmldoc.Save((Resolve-Path $HostServerLogFile))
        $script:Status = "Disabled"
        Write-Host "$HostServerLogFile is updated" -ForegroundColor Green
        Restart-K2Service
    }
    else
    {
        Write-Host "File logging is already disabled. Current setting is $loglevelsetting. No changes are made to the config file"
    }
}

Function Select-Item
{
    Param([String[]]$choiceList,
          [String]$Caption = "Please make a selection",
          [String]$Message = "Choices are presented below",
          [int]$default = 0)
    $choicedesc = New-Object System.Collections.ObjectModel.Collection[System.Management.Automation.Host.ChoiceDescription]
    $choiceList | foreach { $choicedesc.Add((New-Object "System.Management.Automation.Host.ChoiceDescription" -ArgumentList $_)) }
    $Host.UI.PromptForChoice($Caption, $Message, $choicedesc, $default)
}

# Present the choice to the user
Switch (Select-Item -Caption "Set HostServer Logging" -Message "Do you want to: " -choiceList "&Enable","&Disable","&Cancel" -default 0)
{
    0 { Enable-FileLogging }
    1 { Disable-FileLogging }
}

Exchange Management Service Broker troubleshooting in K2 blackpearl

Points to take note:

  • The following screenshots were taken on a K2 Core 4.1 VM with blackpearl 4.5 Update KB001420.
  • Exchange 2010 and blackpearl are installed on the same machine.
  • The logged-in user is denallix\administrator.
  • Pre-requisites:
  • KB001189 states that you need a second service account. This is not compulsory; I used the K2 service account for all the operations, so a single account is sufficient.
  • All the errors start with “Please make sure the K2 service account has impersonation rights in Exchange”. This is a misleading message, so ignore that bit and look at the text after it.


Option 1: Set Authentication Mode to ServiceAccount [DENALLIX\K2Service]

“DENALLIX\K2Service” is a member of the following AD roles:

With these settings in place, we can create a meeting “On Behalf Of” any user, as shown in the next 3 screenshots.

Error: The SMTP address has no mailbox associated with it.
Cause: This happens if the “On Behalf Of” field has an invalid email address or the email is not found in Exchange

Error: The SMTP address format is invalid.
Cause: This happens if the “On Behalf Of” field does not have a value in the format of a valid email address


Option 2: Set Authentication Mode to Impersonate [Enforce Impersonation is unchecked]

The following Exchange Management Shell screenshot shows “DENALLIX\K2Service” is part of Exchange’s ApplicationImpersonation role. This was done during installation by the K2 setup manager.

Get-ManagementRoleAssignment -Role "ApplicationImpersonation"

If you want to know more about Role Based Access Control (RBAC), refer to the Exchange RBAC documentation.

With the above settings in place, we can create a meeting “On Behalf Of” any user, as shown in the next 2 screenshots. The SmartObject tester tool is running under the context of the logged-in user denallix\administrator.


Option 3: Set Authentication Mode to Impersonate and check “Enforce Impersonation”

Since the logged-in user is denallix\administrator, the SmartObject tester tool runs using those credentials:

Error: The account does not have permission to impersonate the requested user.
Cause: ‘denallix\administrator’ account is not part of Exchange’s ApplicationImpersonation role

New-ManagementRoleAssignment -Name "_suImpersonateRoleAsg" -Role "ApplicationImpersonation" -User ""

Note: After executing the New-ManagementRoleAssignment PowerShell cmdlet, you must wait 5-10 minutes (depending on the Exchange setup) for the SmartObject to pick up those changes.

After waiting for the Exchange role refresh, you should be able to create the meeting with Enforce Impersonation enabled.


Option 4: Launch the SmartObject tester tool with ‘Run as different user’, set Authentication Mode to Impersonate, and check “Enforce Impersonation”

The error is the same as explained previously. But if you try to execute the New-ManagementRoleAssignment PowerShell cmdlet, you will get an error like the one below:

This is due to the use of the same name “_suImpersonateRoleAsg” in the command. So delete the existing entry and then add the user to the role.

Remove-ManagementRoleAssignment “_suImpersonateRoleAsg”

The above screenshot confirms that the _suImpersonateRoleAsg role assignment, which has Administrator as a member, will be removed. Once it has been deleted, you can add the “Run as” user to the Exchange role.

As mentioned earlier, you must wait 5-10 minutes before trying to create the meeting request.

Windows Azure Affinity Groups

Note: The following content is taken almost verbatim from TechNet (I like this explanation of Affinity Groups and am posting it here for my own benefit).

People sometimes wonder about Affinity Groups in Windows Azure and their benefits, since for some this is nothing more than a way to logically group Compute and Storage.

To explain this, we need to dig a little deeper into how Windows Azure data centers are built. Basically, Windows Azure data centers are constructed from “Containers” that are full of clusters and racks. Each Container hosts specific services, for example Compute and Storage, SQL Azure, Service Bus, Access Control Service, and so on. These Containers are spread across the data center, and each time we subscribe to or deploy a service, the Fabric Controller (which chooses, based on our solution configuration, where the services should be deployed) can place our services anywhere across the data center. The Fabric Controller manages the nodes of a cluster. A higher-level process controls the choice of data center and cluster; this process is called RDFE, short for Red Dog Front End (Red Dog was the Azure project’s code name). The Azure portal web site and web service talk directly to RDFE.

One thing we need to be very careful about is where we create the various services: if we place the Hosted Service in North Central US and the Storage Account in South Central US, this is bad both for latency and for cost, since we are charged whenever traffic leaves the data center. But even if we choose the same data center, nothing guarantees the services will be close together; one can be placed at one end of the data center and the other at the opposite end. That removes the egress cost and improves latency, but it would be better still to place them in the same Container, or even in the same Cluster. The answer to this is Affinity Groups.

Basically, an Affinity Group is a way to tell the Fabric Controller that two elements, Compute and Storage, should always be kept together and close to one another. When the Fabric Controller searches for the best-suited Container in which to deploy those services, it looks for one where it can deploy both in the same Cluster, making them as close as possible, reducing latency and increasing performance.

So in summary, Affinity Groups provide:

  • Aggregation: our Compute and Storage services are grouped, giving the Fabric Controller the information it needs to keep them in the same data center, and even more, in the same Cluster.
  • Reduced latency: because the Fabric Controller knows the services should be kept together, we get much better latency when accessing Storage from the Compute nodes, which makes a difference in a highly available environment.
  • Lower cost: we avoid the possibility of one service landing in one data center and the other in another, whether because we chose the wrong locations or because we chose one of the “Anywhere” options for both.

Based on this, don’t forget to use Affinity Groups right from the start: it is NOT possible to move Compute or Storage into an Affinity Group after they have been deployed.

To finalize, in case you are now thinking this would be very interesting for other services too: no other services can take advantage of this affinity, since none of them share the same Container.

How to enable HostServer Logging in K2 blackpearl

To change Log level setting, follow the steps below:

  1. Make a copy of C:\Program Files (x86)\K2 blackpearl\Host Server\Bin\HostServerLogging.config
    • The path might vary according to your installation. The backup is just a precaution in case you edit the file incorrectly.
  2. Open the config file and search for the following section:

 <ApplicationLevelLogSetting Scope="Default">
    <LogLocation Name="ConsoleExtension" Active="True" LogLevel="Debug" />
    <LogLocation Name="FileExtension" Active="False" LogLevel="Debug" />
    <LogLocation Name="EventLogExtension" Active="False" LogLevel="Debug" />
    <LogLocation Name="ArchiveExtension" Active="False" LogLevel="Debug" />
 </ApplicationLevelLogSetting>

If you want a full file trace of all details, edit the FileExtension line in the config file: change the Active attribute to True and LogLevel to All, so the line looks like this:

<LogLocation Name="FileExtension" Active="True" LogLevel="All" />

Warning: For a production setup, you should enable FileExtension logging with LogLevel set to "Error". If you set it to "All", the log file can grow very quickly into a very large file and should be monitored at regular intervals.

  • Save the config file and restart the K2 host server service.
  • The trace information will be logged to a file called HostServer<current date>_1.log, located in the same folder.

  • To revert to your old settings, replace the modified config file with the original copy.
  • Restart the K2 host server service to stop the file logging.
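The edit described in step 2 is mechanical enough to script. The sketch below (Python, standard library only; the XML is an inline copy of the config fragment shown above, and the function name is illustrative) flips the FileExtension entry to Active="True" / LogLevel="All" the same way the how-to describes doing by hand.

```python
# Flip the FileExtension log location to Active="True", LogLevel="All",
# the same edit the steps above describe doing manually.
import xml.etree.ElementTree as ET

# Inline copy of the HostServerLogging.config fragment shown above
CONFIG = """
<ApplicationLevelLogSetting Scope="Default">
   <LogLocation Name="ConsoleExtension" Active="True" LogLevel="Debug" />
   <LogLocation Name="FileExtension" Active="False" LogLevel="Debug" />
   <LogLocation Name="EventLogExtension" Active="False" LogLevel="Debug" />
   <LogLocation Name="ArchiveExtension" Active="False" LogLevel="Debug" />
</ApplicationLevelLogSetting>
"""

def enable_file_logging(xml_text: str) -> str:
    """Return the config text with FileExtension logging fully enabled."""
    root = ET.fromstring(xml_text)
    loc = root.find('.//LogLocation[@Name="FileExtension"]')
    loc.set("Active", "True")
    loc.set("LogLevel", "All")
    return ET.tostring(root, encoding="unicode")

updated = enable_file_logging(CONFIG)
```

In a real script you would read the file from the Host Server\Bin path, write the result back, and restart the K2 service, as the PowerShell script earlier in these notes does.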

How to verify K2 blackpearl version

The version numbers are highlighted in the screenshots below.
Go to Control Panel -> Programs and Features.

In older versions of Windows, you might have to click “View installed updates”.
In the screenshot below, 4.5 (4.10060.1.0) is NOT the correct version number; the installed update, KB001230, carries the correct version number.

K2 server version

Export and Import BACPAC using command line

SQLPackage.exe can be used for the following tasks:

  • Extract: Creates a database snapshot (.dacpac) file from an on-premises (local) SQL Server or Windows Azure SQL Database.
  • Export: Exports an on-premises (local) database – including database schema and user data – from SQL Server or Windows Azure SQL Database to a BACPAC package (.bacpac file).
  • Import: Imports the schema and table data from a BACPAC package into a new user database in an instance of SQL Server or Windows Azure SQL Database.
  • Publish: Incrementally updates a database schema to match the schema of a source .dacpac file. If the database does not exist on the server, the publish operation will create it. Otherwise, an existing database will be updated.
  • DeployReport: Creates an XML report of the changes that would be made by a publish action.
  • DriftReport: Creates an XML report of the changes that have been made to a registered database since it was last registered.
  • Script: Creates a Transact-SQL incremental update script that updates the schema of a target to match the schema of a source.

Let’s see some examples

Export the local database to a bacpac file

“C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe” /a:Export /ssn:mysqlserver /sdn:Tailspintoys /tf:C:\DataExtraction\Tailspintoys.bacpac

ssn – Specifies the name of the server that hosts the source database.
sdn – Specifies the name of the source database.
tf – Specifies a disk file path where the .dacpac or .bacpac file will be written.
If the logged-in user executes the script, Integrated Windows Authentication is used, so there is no need to specify a username or password.
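The export invocation can also be composed programmatically. The sketch below (Python; the server, database, and path values are the example values from above, and the helper name is illustrative) builds the same argument list as the command line shown.

```python
# Build the sqlpackage.exe argument list for an Export action.
# The path and names are the example values used above; adjust as needed.

SQLPACKAGE = r"C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe"

def export_args(server: str, database: str, target_file: str) -> list:
    """Return the command line for exporting a database to a .bacpac."""
    return [SQLPACKAGE,
            "/a:Export",
            f"/ssn:{server}",       # source server name
            f"/sdn:{database}",     # source database name
            f"/tf:{target_file}"]   # target .bacpac file path

cmd = export_args("mysqlserver", "Tailspintoys",
                  r"C:\DataExtraction\Tailspintoys.bacpac")
```

Passing the arguments as a list (rather than one long string) avoids quoting problems with the spaces in the sqlpackage.exe path when the command is later handed to a process launcher.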

Import the bacpac file to SQL Azure

"C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /a:Import /sf:C:\DataExtraction\Tailspintoys.bacpac /tsn:cgrd7z8kac.database.windows.net /tdn:Tailspintoys /tu:mysysadmin@cgrd7z8kac /tp:Pa55w0rd

sf – Specifies a source file to be used as the source of action instead of database.
tsn – Specifies the name of the server that hosts the target database.
tdn – Specifies the name of the target database.
tu – Specifies the SQL Server user used to access the target database.
tp – Specifies the password used to access the target database.

If you are planning to automate this using a C# console application, there is a code sample available here.
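In the same spirit as that C# sample, the import can be driven from a script. The sketch below (Python, using subprocess; the server name is inferred from the example username above, and the helper names are illustrative) assembles the Import command and shows how it would be launched.

```python
# Sketch: drive a sqlpackage.exe Import from a script.
# Credentials and names are the example values from above; the target
# server name is inferred from the "mysysadmin@cgrd7z8kac" username.
import subprocess

SQLPACKAGE = r"C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe"

def import_args(source_file, server, database, user, password):
    """Return the command line for importing a .bacpac into SQL Azure."""
    return [SQLPACKAGE,
            "/a:Import",
            f"/sf:{source_file}",   # source .bacpac file
            f"/tsn:{server}",       # target server name
            f"/tdn:{database}",     # target database name
            f"/tu:{user}",          # target user
            f"/tp:{password}"]      # target password

def run_import(args):
    # check=True raises CalledProcessError if sqlpackage exits non-zero
    return subprocess.run(args, check=True)

cmd = import_args(r"C:\DataExtraction\Tailspintoys.bacpac",
                  "cgrd7z8kac.database.windows.net",
                  "Tailspintoys",
                  "mysysadmin@cgrd7z8kac",
                  "Pa55w0rd")
# run_import(cmd)  # uncomment on a machine with sqlpackage.exe installed
```

In practice the password would come from a secure store rather than being hard-coded, and the same builder pattern extends to the Extract, Publish, and Script actions listed above.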