Installing Java Runtime in Azure Cloud Services with Chocolatey

I recently wrote a blog post about installing Splunk on Azure web/worker roles with the help of a startup task; you can see that post here. In this post I will show you how to install the Java runtime in web/worker roles. Azure web/worker roles are stateless, so the only way to install third-party software or tweak Windows features on them is via startup tasks.

Linux users have long had command-line tools such as apt and yum to download and install software. Chocolatey provides similar functionality on the Windows platform. If you are into DevOps and automation on Windows, you should check out Chocolatey here. It already has nearly 15,000 packages available.

Once you have Chocolatey installed, installing Java is a breeze. It is as simple as

 choco install javaruntime -y  

The statement above is self-explanatory. The -y option answers yes to all questions, including accepting the license, so you are not prompted for anything.

I already provided detailed steps for defining startup tasks in my previous blog post, so here I will just share the startup script along with the service definition file that shows how to deploy the Java runtime in an Azure web/worker role with a startup task.

Step 1

Create a startup.cmd file and add it to your web/worker role project. It must be saved as “Unicode (UTF-8 without signature) – Codepage 65001”.

Set the “Copy to Output Directory” property of startup.cmd to “Copy if newer”.

Line 9 checks whether the startup task has already run successfully and exits if it has.

Line 16 installs Chocolatey.

Line 22 installs the Java runtime.

Line 26 only executes if Java was installed successfully; it creates the StartupComplete.txt file in the %RoleRoot% directory.

1:  SET LogPath=%LogFileDirectory%%LogFileName%  
2:     
3:  ECHO Current Role: %RoleName% >> "%LogPath%" 2>&1  
4:  ECHO Current Role Instance: %InstanceId% >> "%LogPath%" 2>&1  
5:  ECHO Current Directory: %CD% >> "%LogPath%" 2>&1  
6:     
7:  ECHO We will first verify if startup has been executed before by checking %RoleRoot%\StartupComplete.txt. >> "%LogPath%" 2>&1  
8:     
9:  IF EXIST "%RoleRoot%\StartupComplete.txt" (  
10:    ECHO Startup has already run, skipping. >> "%LogPath%" 2>&1  
11:    EXIT /B 0  
12:  )  
13:    
14:  Echo Installing Chocolatey >> "%LogPath%" 2>&1  
15:    
16:  @powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))" && SET PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin  >> "%LogPath%" 2>&1  
17:    
18:  IF NOT ERRORLEVEL 1 (  
19:    
20:       Echo Installing Java runtime >> "%LogPath%" 2>&1  
21:    
22:       %ALLUSERSPROFILE%\chocolatey\bin\choco install javaruntime -y >> "%LogPath%" 2>&1  
23:    
24:  IF NOT ERRORLEVEL 1 (            
25:                 ECHO Java installed. Startup completed. >> "%LogPath%" 2>&1  
26:                 ECHO Startup completed. >> "%RoleRoot%\StartupComplete.txt" 2>&1  
27:                 EXIT /B 0  
28:       ) ELSE (  
29:            ECHO An error occurred. The ERRORLEVEL = %ERRORLEVEL%. >> "%LogPath%" 2>&1  
30:            EXIT %ERRORLEVEL%  
31:       )  
32:  ) ELSE (  
33:    ECHO An error occurred while installing Chocolatey. The ERRORLEVEL = %ERRORLEVEL%. >> "%LogPath%" 2>&1  
34:    EXIT %ERRORLEVEL%  
35:  )  
36:    
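The marker-file pattern the script uses — run the installs once, then skip on every subsequent boot — can be sketched in a few lines of Python. The helper below is illustrative only, not part of the deployment:

```python
import os
import tempfile

def run_startup(role_root, install_steps):
    """Run the install steps once; skip when the completion marker already exists."""
    marker = os.path.join(role_root, "StartupComplete.txt")
    if os.path.exists(marker):
        return "skipped"
    for step in install_steps:
        step()  # a step raises on failure, so the marker is never written
    # Only write the marker after every step has succeeded.
    with open(marker, "w") as f:
        f.write("Startup completed.\n")
    return "completed"

role_root = tempfile.mkdtemp()
print(run_startup(role_root, [lambda: None]))  # first boot: completed
print(run_startup(role_root, [lambda: None]))  # reboot: skipped
```

Writing the marker last is the important detail: if any install step fails, the role recycles and the whole script runs again on the next boot.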

 

Step 2

Update the service definition file to define the startup task.

Lines 5 through 19 define the startup task.

Lines 23 to 25 define the local storage where logs will be stored.

1:  <?xml version="1.0" encoding="utf-8"?>  
2:  <ServiceDefinition name="AzureJavaPaaS" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition" schemaVersion="2015-04.2.6">  
3:   <WorkerRole name="MyWorkerRole" vmsize="Small">  
4:    <Startup>  
5:     <Task commandLine="Startup.cmd" executionContext="elevated" taskType="simple">  
6:      <Environment>  
7:       <Variable name="LogFileName" value="Startup.log" />  
8:       <Variable name="LogFileDirectory">  
9:        <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/LocalResources/LocalResource[@name='LogsPath']/@path" />  
10:       </Variable>  
11:       <Variable name="InstanceId">  
12:        <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/@id" />  
13:       </Variable>  
14:       <Variable name="RoleName">  
15:        <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/@roleName" />  
16:       </Variable>  
17:      </Environment>  
18:     </Task>  
19:    </Startup>  
20:    <ConfigurationSettings>  
21:     <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" />  
22:    </ConfigurationSettings>  
23:    <LocalResources>  
24:     <LocalStorage name="LogsPath" cleanOnRoleRecycle="false" sizeInMB="1024" />  
25:    </LocalResources>  
26:    <Imports>  
27:     <Import moduleName="RemoteAccess" />  
28:     <Import moduleName="RemoteForwarder" />  
29:    </Imports>  
30:   </WorkerRole>  
31:  </ServiceDefinition>  
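The RoleInstanceValue xpath entries resolve against the runtime RoleEnvironment document that the role host produces. The snippet below mocks up such a document (attribute values are illustrative, borrowed from the verification section) to show what the LogsPath xpath selects; ElementTree cannot express the trailing /@path step, so we read the attribute off the selected element:

```python
import xml.etree.ElementTree as ET

# Illustrative stand-in for the runtime document; the real one is
# generated by the Azure role host, not authored by hand.
sample = """
<RoleEnvironment>
  <CurrentInstance id="MyWorkerRole_IN_0" roleName="MyWorkerRole">
    <LocalResources>
      <LocalResource name="LogsPath"
          path="C:\\Resources\\Directory\\d063631e14c1485cb6c838c8f92cd7c3.MyWorkerRole.LogsPath\\" />
    </LocalResources>
  </CurrentInstance>
</RoleEnvironment>
"""

root = ET.fromstring(sample)
# Same selection as the csdef xpath, minus the /@path attribute step.
res = root.find("CurrentInstance/LocalResources/LocalResource[@name='LogsPath']")
print(res.get("path"))
```

This is why %LogFileDirectory% in Startup.cmd already ends with a backslash: the path attribute of the local resource includes the trailing separator.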

 

Step 3

Publish the cloud service to Azure. I enabled Remote Desktop so I could verify that the worker role was configured successfully.

Verification

I used Remote Desktop to log into the worker role. I looked in

C:\Resources\Directory\d063631e14c1485cb6c838c8f92cd7c3.MyWorkerRole.LogsPath and found startup.txt

It had the following content. As you can see below, Java was installed successfully.

1:  Current Role: MyWorkerRole   
2:  Current Role Instance: MyWorkerRole_IN_0   
3:  Current Directory: E:\approot   
4:  We will first verify if startup has been executed before by checking E:\StartupComplete.txt.   
5:  Installing Chocolatey   
6:  Installing Java runtime   
7:  Chocolatey v0.9.9.8  
8:  Installing the following packages:  
9:  javaruntime  
10:  By installing you accept licenses for the packages.  
11:    
12:  jre8 v8.0.45  
13:   Downloading jre8 32 bit  
14:    from 'http://javadl.sun.com/webapps/download/AutoDL?BundleId=106246'  
15:   Installing jre8...  
16:   jre8 has been installed.  
17:   Downloading jre8 64 bit  
18:    from 'http://javadl.sun.com/webapps/download/AutoDL?BundleId=106248'  
19:   Installing jre8...  
20:   jre8 has been installed.  
21:   PATH environment variable does not have D:\Program Files\Java\jre1.8.0_45\bin in it. Adding...  
22:   The install of jre8 was successful.  
23:    
24:  javaruntime v8.0.40  
25:   The install of javaruntime was successful.  
26:    
27:  Chocolatey installed 2/2 package(s). 0 package(s) failed.  
28:   See the log for details (D:\ProgramData\chocolatey\logs\chocolatey.log).  
29:  Java installed. Startup completed.   
30:    

I also verified that the e:\startupcomplete.txt file was created.

I verified that Java was installed in the D:\Sun\Java directory.

You can get the source code for this entire project from my GitHub Repository https://github.com/rajinders/azure-java-paas.


Installing Splunk Forwarder in Azure Web/Worker Roles with a Startup Task

Overview

I recently had to install the Splunk Universal Forwarder in Azure worker roles. Azure web/worker roles are stateless, so the only way to install any software is via Azure startup tasks. Azure startup tasks have been around for many years. The MSDN documentation about startup tasks can be reviewed here:

Run startup tasks in Azure

https://msdn.microsoft.com/en-us/library/azure/hh180155.aspx?f=255&MSPPError=-2147217396

Best practices for startup tasks

https://msdn.microsoft.com/en-us/library/azure/jj129545.aspx

Some of the best resources about Azure startup tasks are these two blog posts by my friend Chris Clayton.

http://blogs.msdn.com/b/cclayton/archive/2012/05/17/windows-azure-start-up-tasks-part-1.aspx

http://blogs.msdn.com/b/cclayton/archive/2012/05/17/windows-azure-start-up-tasks-part-2.aspx

Even though I will show you how to install the Splunk Universal Forwarder, this approach can be used to install any third-party software on a web/worker role.

Details

Requirements

  • We only need to install the Splunk forwarder once.
  • We need to install the forwarder from the command line.
  • We need to leave detailed logs to help debug any issues that may arise.
  • We need a reliable location from which to download the setup files.

A few more decisions need to be made upfront.

How do we make the installer available in the web/worker role?

Your choices are:

  1. Include the installer with the source code.
  2. Download the installer from an external location.

A large deployment package can slow down deployment, so I tend to prefer downloading the installer from either Azure blob storage or an external location such as the website of the vendor that created the installer.

Which scripting language to use for the startup task?

Your choices are:

1. Combination of a command batch file and a PowerShell script.

2. Do all the installation via a command batch file.

In my case I chose to use just a command batch file to install Splunk. Batch files have been around for 25 years, and I wanted to keep things as simple as possible.

Step 1

The Splunk Universal Forwarder MSI file can be downloaded from splunk.com. However, we cannot be certain that the download location will not change in the future, so I downloaded the MSI file and uploaded it into a blob container in a storage account. Create an Azure storage account or use an existing one; I used the Azure management portal to create a new storage account. This account should be created in the same region where you will deploy your cloud service. This is not a requirement, but having the storage account with the Splunk installer in the same region as your cloud service will increase the download speed.

Create a storage container where you will upload the Splunk installer. I created a public container called splunk in the Azure management portal.


Download the Splunk installer and upload it into the storage container. I used Azure Management Studio to upload the MSI file to the newly created container; you can use any tool of your choice to upload the MSI.

Step 2

Create a Startup.cmd file and add it to the web/worker role project. This is a standard cmd file; however, it must be saved as “Unicode (UTF-8 without signature) – Codepage 65001” or it will not execute as a startup task.


Select the newly added startup.cmd file in Solution Explorer and set its “Copy to Output Directory” property to “Copy if newer”.


 

Startup.cmd needs to download the Splunk installer. I chose to use the AzCopy command-line utility; my other option was to use the Azure Storage PowerShell cmdlets to download the file. You can learn more about AzCopy here:

https://azure.microsoft.com/en-us/documentation/articles/storage-use-azcopy/

I downloaded AzCopy and its dependencies and included them in my worker role project.

Set their “Copy to Output Directory” property to “Copy if newer” to make sure AzCopy and its dependencies get copied to the role instance during deployment.


You need to define your startup task in the role ServiceDefinition.csdef file. Here are a few things worth mentioning:

  1. You define a startup task by adding a <Startup> section to the WorkerRole element.
  2. You can define as many tasks as you want.
  3. The Task element specifies the command that will be executed during the startup task.
  4. executionContext defines the level of access the startup task will have.
  5. taskType can be simple, foreground, or background.
  6. You can define environment variables that read values from the role environment during startup task execution.
1:  <?xml version="1.0" encoding="utf-8"?>  
2:  <ServiceDefinition name="AzureStatupTask" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition" schemaVersion="2015-04.2.6">  
3:   <WorkerRole name="MyWorkerRole" vmsize="Small">  
4:    <Startup>  
5:     <Task commandLine="Startup.cmd" executionContext="elevated" taskType="simple">  
6:      <Environment>  
7:       <Variable name="LogFileName" value="Startup.log" />  
8:       <Variable name="LogFileDirectory">  
9:        <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/LocalResources/LocalResource[@name='LogsPath']/@path" />  
10:       </Variable>  
11:       <Variable name="InstanceId">  
12:        <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/@id" />  
13:       </Variable>  
14:       <Variable name="RoleName">  
15:        <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/@roleName" />  
16:       </Variable>  
17:      </Environment>  
18:     </Task>  
19:    </Startup>  
20:    <ConfigurationSettings>  
21:     <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" />  
22:    </ConfigurationSettings>  
23:    <LocalResources>  
24:     <LocalStorage name="LogsPath" cleanOnRoleRecycle="false" sizeInMB="1024" />  
25:    </LocalResources>  
26:    <Imports>  
27:     <Import moduleName="RemoteAccess" />  
28:     <Import moduleName="RemoteForwarder" />  
29:    </Imports>  
30:   </WorkerRole>  
31:  </ServiceDefinition>  

 

Here is the startup.cmd file I used to deploy the Splunk Universal Forwarder. A summary of the steps:

  1. Check whether StartupComplete.txt exists. If it exists, exit the script.
  2. Use AzCopy to download the Splunk installer from blob storage to local storage.
  3. Run the installer to install the Splunk forwarder.
  4. Update inputs.conf to set up monitoring for the application log files.
  5. Start the Splunk service.
  6. If there are no errors, create StartupComplete.txt.
  7. If the entire script succeeds, exit with a return code of 0.
  8. If there is a failure, exit with the errorlevel.
1:  SET LogPath=%LogFileDirectory%%LogFileName%  
2:     
3:  ECHO Current Role: %RoleName% >> "%LogPath%" 2>&1  
4:  ECHO Current Role Instance: %InstanceId% >> "%LogPath%" 2>&1  
5:  ECHO Current Directory: %CD% >> "%LogPath%" 2>&1  
6:     
7:  ECHO We will first verify if startup has been executed before by checking %RoleRoot%\StartupComplete.txt. >> "%LogPath%" 2>&1  
8:     
9:  IF EXIST "%RoleRoot%\StartupComplete.txt" (  
10:    ECHO Startup has already run, skipping. >> "%LogPath%" 2>&1  
11:    EXIT /B 0  
12:  )  
13:    
14:  AzCopy\AzCopy.exe /Source:https://rajpublic.blob.core.windows.net/splunk/ /Dest:%TEMP% /Pattern:splunkforwarder-6.2.3-264376-x64-release.msi /Y >> "%LogPath%" 2>&1  
15:    
16:  IF NOT ERRORLEVEL 1 (  
17:    
18:       Echo Installing Splunk Forwarder >> "%LogPath%" 2>&1  
19:    
20:       msiexec.exe /i %TEMP%\splunkforwarder-6.2.3-264376-x64-release.msi AGREETOLICENSE=Yes RECEIVING_INDEXER="10.0.0.68:9997" LAUNCHSPLUNK=0 SERVICESTARTTYPE=auto WINEVENTLOG_APP_ENABLE=1 SET_ADMIN_USER=1 PERFMON=cpu,memory,network,diskspace /quiet >> "%LogPath%" 2>&1  
21:     
22:       IF NOT ERRORLEVEL 1 (  
23:    
24:              
25:            Echo [monitor://D:\logs] >> "D:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf"  
26:            Echo disabled = false >> "D:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf"  
27:            Echo followTail = true >> "D:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf"  
28:            Echo index = main >> "D:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf"  
29:            Echo sourcetype = general >> "D:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf"  
30:    
31:            "D:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" start >> "%LogPath%" 2>&1   
32:    
33:            IF NOT ERRORLEVEL 1 (  
34:                 ECHO Splunk installed. Startup completed. >> "%LogPath%" 2>&1  
35:                 ECHO Startup completed. >> "%RoleRoot%\StartupComplete.txt" 2>&1  
36:                 EXIT /B 0  
37:            ) ELSE (  
38:    
39:                 ECHO An error occurred while starting Splunk. The ERRORLEVEL = %ERRORLEVEL%. >> "%LogPath%" 2>&1  
40:                 EXIT %ERRORLEVEL%  
41:            )  
42:       ) ELSE (  
43:            ECHO An error occurred. The ERRORLEVEL = %ERRORLEVEL%. >> "%LogPath%" 2>&1  
44:            EXIT %ERRORLEVEL%  
45:       )  
46:  ) ELSE (  
47:    ECHO An error occurred while downloading the Splunk forwarder. The ERRORLEVEL = %ERRORLEVEL%. >> "%LogPath%" 2>&1  
48:    EXIT %ERRORLEVEL%  
49:  )  
50:    
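Step 4 appends a [monitor://] stanza to inputs.conf one ECHO at a time. The same stanza can be sketched as a small helper; the function below is illustrative (the stanza keys and values are the ones written by Startup.cmd, the helper itself is hypothetical):

```python
def monitor_stanza(path, index="main", sourcetype="general", follow_tail=True):
    """Build the inputs.conf monitor stanza that Startup.cmd appends line by line."""
    return "\n".join([
        f"[monitor://{path}]",
        "disabled = false",
        f"followTail = {str(follow_tail).lower()}",
        f"index = {index}",
        f"sourcetype = {sourcetype}",
    ]) + "\n"

# The stanza the script writes for the application log directory:
print(monitor_stanza(r"D:\logs"))
```

Generating the whole stanza in one place avoids the classic batch-file pitfall of a stray trailing space after an ECHOed value ending up in the config key.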

 

Verification and Troubleshooting

If the startup task completed successfully, you will see a file called e:\startupcomplete.txt.

This file is used to determine whether the Splunk installer ran successfully.


Verify that Splunk forwarder was installed successfully.


Verify that the post-install configuration completed successfully.


 

Troubleshooting

If your deployment fails or Splunk is not installed successfully, follow these steps to troubleshoot:

Verify that AzCopy and Startup.cmd were copied to the e:\approot directory.


 

Look in C:\Resources\temp\xxxxxxxxxxxxxxxxxxxxxxxxx.MyWorkerRole\RoleTemp

You should see the Splunk installer in this directory.


Look for the full log file created by the startup.cmd in C:\Resources\Directory\a1be486474e348728ec129e4109d4e13.MyWorkerRole.LogsPath


Here is a sample startuplog.txt file that was created when there were no errors.

1:  Current Role: MyWorkerRole   
2:  Current Role Instance: MyWorkerRole_IN_0   
3:  Current Directory: E:\approot   
4:  We will first verify if startup has been executed before by checking E:\StartupComplete.txt.   
5:  [2015/07/03 22:42:35] Transfer summary:  
6:  -----------------  
7:  Total files transferred: 1  
8:  Transfer successfully:  1  
9:  Transfer skipped:    0  
10:  Transfer failed:     0  
11:  Elapsed time:      00.00:00:03  
12:  Installing Splunk Forwarder   
13:    
14:  Splunk> CSI: Logfiles.  
15:    
16:  Checking prerequisites...  
17:       Checking mgmt port [8089]: Loading 'screen' into random state - done  
18:  Generating a 1024 bit RSA private key  
19:  ....++++++  
20:  ..................................................................++++++  
21:  writing new private key to 'privKeySecure.pem'  
22:  -----  
23:  Loading 'screen' into random state - done  
24:  Signature ok  
25:  subject=/CN=RD000D3A909BA6/O=SplunkUser  
26:  Getting CA Private Key  
27:  writing RSA key  
28:  open  
29:       Checking conf files for problems...  
30:       Done  
31:  All preliminary checks passed.  
32:    
33:  Starting splunk server daemon (splunkd)...   
34:    
35:  SplunkForwarder: Starting (pid 2436)  
36:  Done  
37:    
38:  Splunk installed. Startup completed.   
39:    

Code Sample

You can get the source of this sample from my GitHub Repository here: https://github.com/rajinders/azure-startup-task

Summary

Azure startup tasks are the only way to install third-party software on your Azure web/worker roles. Failures in startup tasks can lead to role startup issues, so log extensively to help troubleshoot errors. Startup tasks do have access to your role environment.


How to migrate from Standard Azure Virtual Machines to DS Series Storage Optimized VMs

Background

We are implementing Azure solutions for a few clients. Most of our clients use cloud services and virtual machines to implement their solutions on the Azure platform. For many years the Azure platform offered just one performance tier for storage. You can see the sizes of virtual machines and cloud services, and the disk performance they offer, here:

https://msdn.microsoft.com/en-us/library/azure/dn197896.aspx

For standard Azure virtual machines each disk is limited to 500 IOPS. If you needed better performance you had to stripe across multiple disks. The number of disks you can add to an Azure virtual machine is constrained by the size of the VM: one core allows you to add two VHDs. Each VHD is a page blob with a maximum size of 1 TB. When we were deploying packaged software or custom applications with high IOPS requirements, it was challenging to meet the needs of our customers. All this changed with the following announcement by Mark Russinovich of the general availability of Azure Premium Storage.

http://azure.microsoft.com/blog/2015/04/16/azure-premium-storage-now-generally-available-2/

Azure Premium Storage offers durable SSD storage. Along with Premium Storage, Microsoft also released storage-optimized virtual machines called DS Series VMs. These are capable of up to 64,000 IOPS and 524 MB/sec. This enables many scenarios, like NoSQL or even large SQL databases, that need higher IOPS than standard Azure virtual machines offer. You can read about the specifications for DS Series VMs in the link posted above. If you are using a standard Azure VM you can easily scale up or down to another standard size using the portal, PowerShell, or the Azure CLI. Unfortunately, it is currently not possible to upgrade/migrate a standard Azure virtual machine to a DS Series virtual machine with premium storage in place. In this blog post I will show you how to migrate an existing virtual machine to a DS Series virtual machine with premium (durable SSD) storage, and I will provide a PowerShell script you can leverage to do the migration.

Details

Creating Premium Storage Account

A premium storage account is different from a standard storage account. If you want to leverage premium storage you need to create a new storage account in the Azure preview portal. The account type you need to select is “Premium Locally Redundant”.

It is not possible to use the existing Azure management portal to provision a premium storage account.

New Storage Account

Here is how you can use a PowerShell cmdlet to create a premium storage account. As you can see, it is similar to creating a standard storage account. I was unable to find what value to specify for Type, and had to read the actual source code to determine that it was 'Premium_LRS'.

$StorageAccountTypePremium = 'Premium_LRS'

$DestStorageAccount = New-AzureStorageAccount -StorageAccountName $DestStorageAccountName -Location $Location -Type $StorageAccountTypePremium -ErrorVariable errorVariable -ErrorAction SilentlyContinue

if (!($?)) 
{ 
    throw "Cannot create the Storage Account [$DestStorageAccountName] on $Location. Error Detail: $errorVariable" 
}

 

Premium storage and DS Series virtual machines are not available in all regions. The complete script I will provide validates your location preference and fails if you specify a location where premium storage and DS Series VMs are not available.

Creating a DS Series virtual machine is identical to creating a standard virtual machine.

Here are a few things I learned about DS Series virtual machines and premium storage:

  • Premium storage does not allow data disks smaller than 10 GB. If your VM has a disk smaller than 10 GB the script will fail.
  • The default host caching option for premium storage data disks is “Read Only”, compared with “None” for standard data disks.
  • The default host caching option for the premium storage OS disk is “Read Write”, which is the same as for standard OS disks.
  • Currently the script only migrates virtual machines within the same subscription. It can easily be extended to support migration between subscriptions.
  • It can migrate VMs to a different region as long as premium storage is available in that region.
  • It shuts down the existing source VM before making a copy of the VHDs for the virtual machine.
  • It validates that the virtual network for the destination VM exists, but does not validate that the subnet also exists.
  • It gives new names to the disks in the destination virtual machine.
  • Currently I only copy disks, endpoints, and VM extensions. I do not copy ACLs or other types of extensions such as the malware extension.
  • I only tested the script with PowerShell SDK version 0.9.2.
  • I tested migrating a standard VM in West US to a DS Series VM in West US only. I logged into the newly created VM and verified that all disks were present. This is the extent of my testing. My VM with three disks copied in 10 minutes.
  • If your destination storage account already exists it has to be of type “Premium_LRS”. If you have an existing account of a different type the script will fail. If the storage account does not exist it will be created.
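The fail-fast rules above (minimum disk size, required account type) can be sketched as a small validation routine. The function below is illustrative only; its name and return shape are not part of the PowerShell script:

```python
PREMIUM_ACCOUNT_TYPE = "Premium_LRS"
PREMIUM_MIN_DISK_GB = 10  # premium storage rejects data disks smaller than this

def validate_migration(disk_sizes_gb, dest_account_type):
    """Collect upfront validation errors instead of failing mid-copy."""
    errors = []
    for size in disk_sizes_gb:
        if size < PREMIUM_MIN_DISK_GB:
            errors.append(f"{size} GB disk is below the {PREMIUM_MIN_DISK_GB} GB premium minimum")
    if dest_account_type != PREMIUM_ACCOUNT_TYPE:
        errors.append(f"destination account must be {PREMIUM_ACCOUNT_TYPE}, not {dest_account_type}")
    return errors

# A 5 GB disk and a standard account would both be rejected before any VHD copy starts.
print(validate_migration([30, 5], "Standard_LRS"))
```

Doing every check before shutting down the source VM is the design goal: a validation failure should leave the source VM untouched.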

Sample Script

You can access the entire source code from my public GitHub repository

https://github.com/rajinders/migrate-to-azuredsvm

I have also pasted the entire source code here for your convenience.

<#
Copyright 2015 Rajinder Singh
 
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
 
    http://www.apache.org/licenses/LICENSE-2.0
 
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
#>

<#
.SYNOPSIS
Migrates an existing VM into a DS Series VM which uses Premium Storage.
 
.DESCRIPTION
This script migrates an existing VM into a DS Series VM which uses Premium Storage. At this time DS Series VMs are not available in all regions.
It currently expects the VM to be migrated within the same subscription. It supports migrating the VM to the same region or a different region.
It can be easily extended to support migrating to a different subscription as well.
 
.PARAMETER SourceVMName
The name of the VM that needs to be migrated
 
.PARAMETER SourceServiceName
The name of the service for the old VM
 
.PARAMETER DestVMName
The name of the new DS Series VM that will be created
 
.PARAMETER DestServiceName
The name of the service for the new VM
 
.PARAMETER Location
Region where the new VM will be created
 
.PARAMETER VMSize
Size of the new VM
 
.PARAMETER DestStorageAccountName
Name of the storage account where the VM will be created. It has to be a premium storage account
 
.PARAMETER DestStorageAccountContainer
Name of the storage container where the VHDs will be copied
 
.PARAMETER VNetName
Optional name of the virtual network for the destination VM
 
.PARAMETER SubnetName
Optional name of the subnet for the destination VM
 
.EXAMPLE
 
# Migrate a standalone virtual machine to a DS Series virtual machine with Premium Storage. Both VMs are in the same subscription
.\MigrateVMToPremiumStorage.ps1 -SourceVMName "rajsourcevm2" -SourceServiceName "rajsourcevm2" -DestVMName "rajdsvm12" -DestServiceName "rajdsvm12svc" -Location "West US" -VMSize Standard_DS2 -DestStorageAccountName 'rajwestpremstg18' -DestStorageAccountContainer 'vhds'
 
# Migrate a standalone virtual machine to a DS Series virtual machine with Premium Storage into a virtual network. Both VMs are in the same subscription
.\MigrateVMToPremiumStorage.ps1 -SourceVMName "rajsourcevm2" -SourceServiceName "rajsourcevm2" -DestVMName "rajdsvm16" -DestServiceName "rajdsvm16svc" -Location "West US" -VMSize Standard_DS2 -DestStorageAccountName 'rajwestpremstg19' -DestStorageAccountContainer 'vhds' -VNetName rajvnettest3 -SubnetName FrontEndSubnet
 
#>

[CmdletBinding(DefaultParameterSetName="Default")]
Param
(
    [Parameter (Mandatory = $true)]
    [string] $SourceVMName,

    [Parameter (Mandatory = $true)]
    [string] $SourceServiceName,

    [Parameter (Mandatory = $true)]
    [string] $DestVMName,

    [Parameter (Mandatory = $true)]
    [string] $DestServiceName,

    [Parameter (Mandatory = $true)]
    [ValidateSet('West US','East US 2','West Europe','East China','Southeast Asia','West Japan', ignorecase=$true)]
    [string] $Location,

    [Parameter (Mandatory = $true)]
    [ValidateSet('Standard_DS1','Standard_DS2','Standard_DS3','Standard_DS4','Standard_DS11','Standard_DS12','Standard_DS13','Standard_DS14', ignorecase=$true)]
    [string] $VMSize,

    [Parameter (Mandatory = $true)]
    [string] $DestStorageAccountName,

    [Parameter (Mandatory = $true)]
    [string] $DestStorageAccountContainer,

    [Parameter (Mandatory = $false)]
    [string] $VNetName,

    [Parameter (Mandatory = $false)]
    [string] $SubnetName
)

#print the version of the PowerShell cmdlets we are using
(Get-Module Azure).Version

#$VerbosePreference = "Continue"
$StorageAccountTypePremium = 'Premium_LRS'

#############################################################################################################
#validation section
#Perform as much upfront validation as possible
#############################################################################################################

#validate upfront that the service we are trying to create does not already exist
if((Get-AzureService -ServiceName $DestServiceName -ErrorAction SilentlyContinue) -ne $null)
{
    Write-Error "Service [$DestServiceName] already exists"
    return
}

#Determine whether we are migrating the VM to a virtual network. If so, verify that the VNet exists
if( !$VNetName -and !$SubnetName )
{
    $DeployToVNet = $false
}
else
{
    $DeployToVNet = $true
    $vnetSite = Get-AzureVNetSite -VNetName $VNetName -ErrorAction SilentlyContinue

    if (!$vnetSite)
    {
        Write-Error "Virtual Network [$VNetName] does not exist"
        return
    }
}

Write-Host "DeployToVNet is set to [$DeployToVNet]"

#TODO: add validation to make sure the destination VM size can accommodate the number of disks in the source VM

$DestStorageAccount = Get-AzureStorageAccount -StorageAccountName $DestStorageAccountName -ErrorAction SilentlyContinue

#check to see if the storage account exists and create a premium storage account if it does not exist
if(!$DestStorageAccount)
{
    # Create a new storage account
    Write-Output "";
    Write-Output ("Configuring Destination Storage Account {0} in location {1}" -f $DestStorageAccountName, $Location);

    $DestStorageAccount = New-AzureStorageAccount -StorageAccountName $DestStorageAccountName -Location $Location -Type $StorageAccountTypePremium -ErrorVariable errorVariable -ErrorAction SilentlyContinue

    if (!($?)) 
    { 
        throw “Cannot create the Storage Account [$DestStorageAccountName] on $Location. Error Detail: $errorVariable” 
    } 
   
    Write-Verbose “Created Destination Storage Account [$DestStorageAccountName] with AccountType of [$($DestStorageAccount.AccountType)]”    
}
else
{
    Write-Host "Destination Storage account [$DestStorageAccountName] already exists. Storage account type is [$($DestStorageAccount.AccountType)]"

    #make sure if the account already exists it is of type premium storage
    if( $DestStorageAccount.AccountType -ne $StorageAccountTypePremium )
    {
        Write-Error "Storage account [$DestStorageAccountName] account type of [$($DestStorageAccount.AccountType)] is invalid"
        return
    }
}

Write-Host "Source VM Name is [$SourceVMName] and Service Name is [$SourceServiceName]"

#Get VM Details
$SourceVM = Get-AzureVM -Name $SourceVMName -ServiceName $SourceServiceName -ErrorAction SilentlyContinue

if($SourceVM -eq $null)
{
    Write-Error "Unable to find Virtual Machine [$SourceVMName] in Service [$SourceServiceName]"
    return
}

Write-Host "VM name is [$($SourceVM.Name)] and VM status is [$($SourceVM.Status)]"

#need to shutdown the existing VM before copying its disks.
if($SourceVM.Status -eq "ReadyRole")
{
    Write-Host "Shutting down virtual machine [$SourceVMName]"
    #Shutdown the VM
    Stop-AzureVM -ServiceName $SourceServiceName -Name $SourceVMName -Force
}

$osdisk = $SourceVM | Get-AzureOSDisk

Write-Host "OS Disk name is $($osdisk.DiskName) and disk location is $($osdisk.MediaLink)"

$disk_configs = @{}

# Used to track disk copy status
$diskCopyStates = @()

##################################################################################################################
# Kicks off the async copy of VHDs
##################################################################################################################

# Copies to remote storage account
# Returns blob copy state to poll against
function StartCopyVHD($sourceDiskUri, $diskName, $OS, $destStorageAccountName, $destContainer)
{
    Write-Host "Destination Storage Account is [$destStorageAccountName], Destination Container is [$destContainer]"

    #extract the name of the source storage account from the URI of the VHD
    $sourceStorageAccountName = $sourceDiskUri.Host.Replace(".blob.core.windows.net", "")

    #last segment is the blob name, second-to-last is the container
    $vhdName = $sourceDiskUri.Segments[$sourceDiskUri.Segments.Length - 1].Replace("%20", " ")
    $sourceContainer = $sourceDiskUri.Segments[$sourceDiskUri.Segments.Length - 2].Replace("/", "")

    $sourceStorageAccountKey = (Get-AzureStorageKey -StorageAccountName $sourceStorageAccountName).Primary
    $sourceContext = New-AzureStorageContext -StorageAccountName $sourceStorageAccountName -StorageAccountKey $sourceStorageAccountKey

    $destStorageAccountKey = (Get-AzureStorageKey -StorageAccountName $destStorageAccountName).Primary
    $destContext = New-AzureStorageContext -StorageAccountName $destStorageAccountName -StorageAccountKey $destStorageAccountKey
    if((Get-AzureStorageContainer -Name $destContainer -Context $destContext -ErrorAction SilentlyContinue) -eq $null)
    {
        New-AzureStorageContainer -Name $destContainer -Context $destContext | Out-Null

        while((Get-AzureStorageContainer -Name $destContainer -Context $destContext -ErrorAction SilentlyContinue) -eq $null)
        {
            Write-Host "Pausing to ensure container $destContainer is created.." -ForegroundColor Green
            Start-Sleep 15
        }
    }

    # Save for later disk registration
    $destinationUri = "https://$destStorageAccountName.blob.core.windows.net/$destContainer/$vhdName"
   
    if($OS -eq $null)
    {
        $disk_configs.Add($diskName, "$destinationUri")
    }
    else
    {
        $disk_configs.Add($diskName, "$destinationUri;$OS")
    }

    #start async copy of the VHD. It will overwrite any existing VHD
    $copyState = Start-AzureStorageBlobCopy -SrcBlob $vhdName -SrcContainer $sourceContainer -SrcContext $sourceContext -DestContainer $destContainer -DestBlob $vhdName -DestContext $destContext -Force

    return $copyState
}
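The Host/Segments arithmetic in StartCopyVHD (the last URI segment is the blob name, the second-to-last is the container) is easy to get wrong, so here is a small language-agnostic sketch of the same parsing in Python. The sample URI is hypothetical, of the general shape a disk's MediaLink takes:

```python
from urllib.parse import urlparse, unquote

def parse_vhd_uri(vhd_uri: str):
    """Split a VHD blob URI into storage account, container, and blob name,
    mirroring the Host/Segments logic used in StartCopyVHD."""
    parsed = urlparse(vhd_uri)
    # strip the blob endpoint suffix to get the account name
    account = parsed.hostname.replace(".blob.core.windows.net", "")
    segments = [s for s in parsed.path.split("/") if s]
    container = segments[-2]        # Segments[Length - 2]
    blob = unquote(segments[-1])    # Segments[Length - 1], with %20 decoded
    return account, container, blob

# Hypothetical MediaLink-style URI
account, container, blob = parse_vhd_uri(
    "https://mystorageacct.blob.core.windows.net/vhds/my%20os%20disk.vhd")
print(account, container, blob)  # mystorageacct vhds my os disk.vhd
```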

##################################################################################################################
# Tracks status of each blob copy and waits until all the blobs have been copied
##################################################################################################################

function TrackBlobCopyStatus()
{
    param($diskCopyStates)
    do
    {
        $copyComplete = $true
        Write-Host "Checking Disk Copy Status for VM Copy" -ForegroundColor Green
        foreach($diskCopy in $diskCopyStates)
        {
            $state = ($diskCopy | Get-AzureStorageBlobCopyState).Status
            if($state -ne "Success")
            {
                $copyComplete = $true
                Write-Host "Current Status" -ForegroundColor Green
                $hideHeader = $false
                $inprogress = 0
                $complete = 0
                foreach($diskCopyTmp in $diskCopyStates)
                { 
                    $stateTmp = $diskCopyTmp | Get-AzureStorageBlobCopyState
                    $source = $stateTmp.Source
                    if($stateTmp.Status -eq "Success")
                    {
                        Write-Host (($stateTmp | Format-Table -HideTableHeaders:$hideHeader -AutoSize -Property Status,BytesCopied,TotalBytes,Source | Out-String)) -ForegroundColor Green
                        $complete++
                    }
                    elseif(($stateTmp.Status -like "*failed*") -or ($stateTmp.Status -like "*aborted*"))
                    {
                        Write-Error ($stateTmp | Format-Table -HideTableHeaders:$hideHeader -AutoSize -Property Status,BytesCopied,TotalBytes,Source | Out-String)
                        return $false
                    }
                    else
                    {
                        Write-Host (($stateTmp | Format-Table -HideTableHeaders:$hideHeader -AutoSize -Property Status,BytesCopied,TotalBytes,Source | Out-String)) -ForegroundColor DarkYellow
                        $copyComplete = $false
                        $inprogress++
                    }
                    $hideHeader = $true
                }
                if($copyComplete -eq $false)
                {
                    Write-Host "$complete Blob Copies are completed with $inprogress that are still in progress." -ForegroundColor Magenta
                    Write-Host "Pausing 60 seconds before next status check." -ForegroundColor Green
                    Start-Sleep 60
                }
                else
                {
                    Write-Host "Disk Copy Complete" -ForegroundColor Green
                    break 
                }
            }
        }
    } while($copyComplete -ne $true) 
    Write-Host "Successfully copied all disks" -ForegroundColor Green
}
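The polling loop in TrackBlobCopyStatus boils down to: query every copy's status, fail fast on a failed or aborted copy, report progress, sleep, and repeat until everything reports Success. That pattern can be sketched generically; the status strings below mirror what Get-AzureStorageBlobCopyState reports, but the status callback is a stand-in, not an Azure API:

```python
import time

def track_copies(get_status, copy_ids, poll_interval=60):
    """Poll all copies until each reports 'Success'; fail fast if any
    reports 'Failed' or 'Aborted'. get_status(copy_id) is a stand-in
    for Get-AzureStorageBlobCopyState."""
    while True:
        states = {cid: get_status(cid) for cid in copy_ids}
        if any(s in ("Failed", "Aborted") for s in states.values()):
            return False  # one bad copy fails the whole migration
        done = sum(1 for s in states.values() if s == "Success")
        if done == len(copy_ids):
            return True
        print(f"{done} complete, {len(copy_ids) - done} still in progress")
        time.sleep(poll_interval)

# Fake status source standing in for Azure: every copy already finished
print(track_copies(lambda cid: "Success", ["os-disk", "data-disk-1"], poll_interval=0))  # True
```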

# Mark the start time of the script execution
$startTime = Get-Date 

Write-Host "Destination storage account name is [$DestStorageAccountName]"

# Copy disks using the async API from the source URL to the destination storage account
$diskCopyStates += StartCopyVHD -sourceDiskUri $osdisk.MediaLink -destStorageAccountName $DestStorageAccountName -destContainer $DestStorageAccountContainer -diskName $osdisk.DiskName -OS $osdisk.OS

# copy all the data disks
$SourceVM | Get-AzureDataDisk | foreach {

    Write-Host "Disk Name [$($_.DiskName)], Size is [$($_.LogicalDiskSizeInGB)]"

    #Premium storage does not allow disks smaller than 10 GB
    if( $_.LogicalDiskSizeInGB -lt 10 )
    {
        Write-Warning "Data Disk [$($_.DiskName)] with size [$($_.LogicalDiskSizeInGB)] is less than 10 GB so it cannot be added"
    }
    else
    {
        Write-Host "Destination storage account name is [$DestStorageAccountName]"
        $diskCopyStates += StartCopyVHD -sourceDiskUri $_.MediaLink -destStorageAccountName $DestStorageAccountName -destContainer $DestStorageAccountContainer -diskName $_.DiskName
    }
}

#check the status of the blob copies. This may take a while if you are doing cross-region copies.
#even in the same region a 127 GB disk takes nearly 10 minutes
TrackBlobCopyStatus -diskCopyStates $diskCopyStates

# Mark the finish time of the script execution
$finishTime = Get-Date 
 
# Output the time consumed in seconds
$TotalTime = ($finishTime - $startTime).TotalSeconds
Write-Host "The disk copies completed in $TotalTime seconds." -ForegroundColor Green

Write-Host "Registering Copied Disks" -ForegroundColor Green

$luncount = 0   # used to generate unique lun value for data disks
$index = 0  # used to generate unique disk names
$OSDisk = $null

$datadisk_details = @{}

foreach($diskName in $disk_configs.Keys)
{
    $index = $index + 1

    $diskConfig = $disk_configs[$diskName].Split(";")

    #since we are using the same subscription we need to update the diskName for it to be unique
    $newDiskName = "$DestVMName" + "-disk-" + $index

    Write-Host "Adding disk [$newDiskName]"

    #check to see if this disk already exists
    $azureDisk = Get-AzureDisk -DiskName $newDiskName -ErrorAction SilentlyContinue

    if(!$azureDisk)
    {

        if($diskConfig.Length -gt 1)
        {
           Write-Host "Adding OS disk [$newDiskName] -OS [$($diskConfig[1])] -MediaLocation [$($diskConfig[0])]"

           #Expect OS Disk to be the first disk in the array
           $OSDisk = Add-AzureDisk -DiskName $newDiskName -OS $diskConfig[1] -MediaLocation $diskConfig[0]

           $vmconfig = New-AzureVMConfig -Name $DestVMName -InstanceSize $VMSize -DiskName $OSDisk.DiskName 

        }
        else
        {
            Write-Host "Adding Data disk [$newDiskName] -MediaLocation [$($diskConfig[0])]"

            Add-AzureDisk -DiskName $newDiskName -MediaLocation $diskConfig[0]

            $datadisk_details[$luncount] = $newDiskName

            $luncount = $luncount + 1  
        }
    }
    else
    {
        Write-Error "Unable to add Azure Disk [$newDiskName] as it already exists"
        Write-Error "You can use Remove-AzureDisk -DiskName $newDiskName to remove the old disk"
        return
    }
}
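The registration loop relies on a small convention built up earlier in the script: each disk_configs value is either "uri" (data disk) or "uri;OS" (OS disk), and unique names are derived as DestVMName-disk-index. Here is a sketch of that classification logic in Python; the URIs below are hypothetical:

```python
def plan_disk_registration(disk_configs, dest_vm_name):
    """Classify each copied disk as OS or data disk and assign a unique
    name, mirroring the 'uri' vs 'uri;OS' convention used in StartCopyVHD."""
    plan = []
    for index, (disk_name, config) in enumerate(disk_configs.items(), start=1):
        parts = config.split(";")
        plan.append({
            "new_name": f"{dest_vm_name}-disk-{index}",
            "media_location": parts[0],
            "os": parts[1] if len(parts) > 1 else None,  # OS disk when present
        })
    return plan

# Hypothetical copied-disk map: one OS disk, one data disk
configs = {
    "vm1-os": "https://acct.blob.core.windows.net/vhds/os.vhd;Windows",
    "vm1-data": "https://acct.blob.core.windows.net/vhds/data.vhd",
}
for entry in plan_disk_registration(configs, "destvm"):
    print(entry)
```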

#add all the data disks to the VM configuration
foreach($lun in $datadisk_details.Keys)
{
    $datadisk_name = $datadisk_details[$lun]

    Write-Host "Adding data disk [$datadisk_name] to the VM configuration"

    $vmconfig | Add-AzureDataDisk -Import -DiskName $datadisk_name  -LUN $lun
}

#read all the endpoints in the source VM and create them in the destination VM
#NOTE: I don't copy ACLs yet. I need to add this.
$SourceVM | get-azureendpoint | foreach {

    if($_.LBSetName -eq $null)
    {
        write-Host "Name is [$($_.Name)], Port is [$($_.Port)], LocalPort is [$($_.LocalPort)], Protocol is [$($_.Protocol)], EnableDirectServerReturn is [$($_.EnableDirectServerReturn)]"
        $vmconfig | Add-AzureEndpoint -Name $_.Name -LocalPort $_.LocalPort -PublicPort $_.Port -Protocol $_.Protocol -DirectServerReturn $_.EnableDirectServerReturn
    }
    else
    {
        write-Host "Name is [$($_.Name)], Port is [$($_.Port)], LocalPort is [$($_.LocalPort)], Protocol is [$($_.Protocol)], EnableDirectServerReturn is [$($_.EnableDirectServerReturn)], LBSetName is [$($_.LBSetName)]"
        $vmconfig | Add-AzureEndpoint -Name $_.Name -LocalPort $_.LocalPort -PublicPort $_.Port -Protocol $_.Protocol -DirectServerReturn $_.EnableDirectServerReturn -LBSetName $_.LBSetName -DefaultProbe
    }
}

#
if( $DeployToVnet )
{
    Write-Host "Virtual Network Name is [$VNetName] and Subnet Name is [$SubnetName]"

    $vmconfig | Set-AzureSubnet -SubnetNames $SubnetName
    $vmconfig | New-AzureVM -ServiceName $DestServiceName -VNetName $VNetName -Location $Location
}
else
{
    #Creating the virtual machine
    $vmconfig | New-AzureVM -ServiceName $DestServiceName -Location $Location
}

#get any vm extensions
#there may be other types of extensions in the source vm. I don't copy them yet
$SourceVM | get-azurevmextension | foreach {
    Write-Host "ExtensionName [$($_.ExtensionName)] Publisher [$($_.Publisher)] Version [$($_.Version)] ReferenceName [$($_.ReferenceName)] State [$($_.State)] RoleName [$($_.RoleName)]"
    get-azurevm -ServiceName $DestServiceName -Name $DestVMName -Verbose | set-azurevmextension -ExtensionName $_.ExtensionName -Publisher $_.Publisher -Version $_.Version -ReferenceName $_.ReferenceName -Verbose | Update-azurevm -Verbose
}

 

Conclusion

I had to look at many different code samples as well as MSDN documentation to create this script. I am grateful for all the open source samples folks are contributing, and this is my way of giving back to the Azure community. If you have questions and/or feature requests, drop me a line and I will do what I can to help.

Posted in DevOps, Virtual Machines, Windows Azure

Azure SDK 2.6 Diagnostics Improvements for Cloud Services

I haven't blogged for a while because I have been very busy at work. Things are slowing down a bit, so I will try to write more frequently.

History

Azure SDK 2.5 made big changes to Azure diagnostics. It introduced the Azure PaaS Diagnostics extension. Even though this was a good long-term strategy, the implementation was less than perfect. Here are a few issues introduced by Azure SDK 2.5:

  1. Local emulator did not support diagnostics
  2. No support for using different diagnostics storage account for different environments
  3. Manual editing was required to create the XML configuration file needed by Set-AzureServiceDiagnosticsConfiguration, the PowerShell cmdlet required to deploy the diagnostics extension
  4. To make matters worse, there was a bug in the PowerShell cmdlet which surfaced when you had a . in the name of a role

All these factors made it impossible to do continuous integration/deployment for Cloud service projects.

A few days ago Azure SDK 2.6 was released. I went through the release notes and read the documentation. I ran tests to see if sanity had been restored. I am glad to report that all the issues introduced by SDK 2.5 have been fixed. Here is a summary of the improvements.

  1. Local emulator now supports diagnostics.
  2. Ability to specify a different diagnostics storage account for each service configuration
  3. To simplify configuration of the PaaS diagnostics extension, the package output from Visual Studio contains the public configuration XML for the diagnostics extension for each role.
  4. PowerShell version 0.9.0, which was released along with Azure SDK 2.6, also fixed the pesky bug that occurred when you had a . in the name of a role.

Here is a document that provides all the gory details for Azure SDK 2.6 diagnostics changes.

https://msdn.microsoft.com/en-us/library/azure/dn186185.aspx

Overview

If you are developing applications and still not using continuous integration and continuous deployment, you should learn more about them. I will use the rest of this blog post to show how you can use PowerShell cmdlets to automate the installation and updating of the PaaS diagnostics extension for Cloud Services built with Azure SDK 2.6.

Details

I installed Azure SDK 2.6 on my development machine. I installed the PowerShell cmdlets (version 0.9.0) and Azure CLI as well.

I created a simple Cloud Service Project. I added a web role and a worker role to it.

I added one more service configuration called "Test" to this project.


I examined the properties of the WebRole1 to see what has changed with SDK 2.6

If you select "All Configurations", you can still enable/disable diagnostics like you used to do in SDK 2.5.


When I clicked the "Configure" button to configure the diagnostics, I found that we no longer have to select the diagnostics storage account in the "General" tab like we used to. The rest of the configuration is the same.


Returning to the configuration of WebRole1, I changed the service configuration to "Cloud".

In the past there was no way to configure a diagnostics storage account per configuration type.

But now we can define a different diagnostics storage account for each configuration type.


A quick examination of ServiceConfiguration.Cloud.cscfg confirmed that the diagnostics connection string was defined in it.

This makes a lot of sense because the rest of the environment-specific configuration settings are also defined in the same file.


I did not want to deploy this project directly from Visual Studio because most build servers do not use Visual Studio to deploy applications.

First I created a deployment package by selecting the cloud project and choosing Package.


I selected the "Cloud" service configuration and pressed the "Package" button.


The project was built and packaged successfully. Visual Studio opened the location where the package and related files were created.

It created a directory called app.publish in the bin\debug directory under the cloud service project.

This is no different from the past. However, there is a new directory called Extensions.


The Extensions directory has a PubConfig.xml file for each role type. In the past you had to create this file manually from diagnostics.wadcfg. These files are needed by the PowerShell cmdlets that are used to deploy the diagnostics extension.
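As the file names used later in this post show (PaaSDiagnostics.WebRole1.PubConfig.xml and so on), the generated files follow a predictable PaaSDiagnostics.&lt;RoleName&gt;.PubConfig.xml pattern, so a build script can derive the path for each role instead of hardcoding it. A small sketch; the app.publish directory and role names match this sample project:

```python
def pubconfig_path(publish_dir, role_name):
    """Path of the diagnostics public config that the SDK 2.6 packaging
    step emits for a given role."""
    return f"{publish_dir}/Extensions/PaaSDiagnostics.{role_name}.PubConfig.xml"

for role in ["WebRole1", "WorkerRole1", "Hard.WorkerRole"]:
    print(pubconfig_path("app.publish", role))
# app.publish/Extensions/PaaSDiagnostics.WebRole1.PubConfig.xml (and so on)
```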


We use AppVeyor for continuous integration and deployment. It uses msbuild to build the projects.

I ran the "Developer Command Prompt for Visual Studio 2013" and used the following command to build and package the cloud project.

msbuild <ccproj_file> /t:Publish /p:PublishDir=<temp_path>

I verified that msbuild also created the package and all the related files.

PowerShell Cmdlets for Azure Diagnostics


For new Cloud Services there are two ways to apply diagnostics extensions.

  1. You can pass the extension configuration to New-AzureDeployment via the -ExtensionConfiguration parameter.
  2. You can create the Cloud Service first and use Set-AzureServiceDiagnosticsExtension to apply the PaaS diagnostics extension.

You can learn about it here.

https://msdn.microsoft.com/en-us/library/azure/dn495270.aspx

I chose the first method because it was faster than applying the extension in a separate call.

Deploying PaaS Diagnostics Extension for the first time

The following script creates a new Cloud Service, creates the diagnostics configuration and deploys the package, which also deploys the PaaS diagnostics extension.

I am setting the diagnostics extension for each Role separately.

At the end of this script I use Get-AzureServiceDiagnosticsExtension to verify that the diagnostics extension has been installed.

You can also use Visual Studio Server Explorer to view the diagnostics.

<#
.SYNOPSIS
Provisions a new cloud service with web/worker role built with SDK 2.6 and applies diagnostics extension
 
.DESCRIPTION
This script will create a new cloud service, deploy cloud service and apply azure diagnostics extension to each role type.
This cloud service has a WebRole1 and WorkerRole2
#>

$VerbosePreference = "Continue"
$ErrorActionPreference = "Stop"

$SubscriptionName = "Your Subscription Name"
$VMStorageAccount = "storage account used during deployment"
$service_name = 'cloud service name'
$location = "Central US"
$package = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\DiagnosticsSDK26.cspkg"
$configuration = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\ServiceConfiguration.Cloud.cscfg"
$slot = "Production"
#diagnostics storage account
$storage_name = 'diagnostics storage account name'
#diagnostics storage account key
$key= 'storage account key'


# SDK 2.6 tools generate these pubconfig files for each role type
$webrolediagconfig = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\Extensions\PaaSDiagnostics.WebRole1.PubConfig.xml"
$workerrolediagconfig = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\Extensions\PaaSDiagnostics.WorkerRole1.PubConfig.xml"

#Print the version of the PowerShell Cmdlets you are currently using
(Get-Module Azure).Version

# Mark the start time of the script execution
$startTime = Get-Date 

#set the default storage account for the subscription
Set-AzureSubscription -SubscriptionName $SubscriptionName -CurrentStorageAccountName $VMStorageAccount

if(Test-AzureName -Service $service_name)
{
    Write-Host "Service [$service_name] already exists"
}
else
{
    #Create new cloud service
    New-AzureService -ServiceName $service_name -Label "Raj SDK 2.6 Diagnostics Demo" -Location $location
}

#create storage context
$storageContext = New-AzureStorageContext -StorageAccountName $storage_name -StorageAccountKey $key

$workerconfig = New-AzureServiceDiagnosticsExtensionConfig -StorageContext $storageContext -DiagnosticsConfigurationPath $workerrolediagconfig -Role "WorkerRole1"
$webroleconfig = New-AzureServiceDiagnosticsExtensionConfig -StorageContext $storageContext -DiagnosticsConfigurationPath $webrolediagconfig -Role "WebRole1"

#deploy to the new cloud service and apply the diagnostics extension
New-AzureDeployment -ServiceName $service_name -Package $package -Configuration $configuration -Slot $slot -ExtensionConfiguration @($workerconfig,$webroleconfig)

# Mark the finish time of the script execution
$finishTime = Get-Date 

#Display the details of the extension
Get-AzureServiceDiagnosticsExtension -ServiceName $service_name -Slot Production

 
# Output the time consumed in seconds
$TotalTime = ($finishTime - $startTime).TotalSeconds
Write-Output "The script completed in $TotalTime seconds."

 

 

Update PaaS Diagnostics Extension

I wanted to see how we can update diagnostics extension so I made these changes to my project.

I added a new worker role to the same project. I also changed the configuration of diagnostics.

Typically an extension is only deployed once. To deploy the extension again you have two options:

  1. You can change the name of the extension
  2. You can remove the extension and install it again

I chose the second option.

Here is what this script does:

It removes the PaaS Diagnostics extension from the cloud service

It creates PaaS diagnostics configuration for each role.

It updates the Cloud Service and applies PaaS diagnostics extension to each role including the new worker role Hard.WorkerRole.

Having a . in the name used to break Set-AzureServiceDiagnosticsExtension. It is nice to see it working now.

<#
.SYNOPSIS
Updates an existing Cloud service and applies azure diagnostics extension as well
 
.DESCRIPTION
This script removes diagnostics extension, updates cloud service, applies azure diagnostics extension to each role type.
This cloud service had a WebRole1 and WorkerRole2 initially. I added a new role called Hard.WorkerRole
I put . in the name because SDK 2.5 Set-AzureServiceDiagnosticsExtension had a bug where . in the name broke it.
#>

# Set the output level to verbose and make the script stop on error
$VerbosePreference = "Continue"
$ErrorActionPreference = "Stop"

$service_name = 'Cloud service name'
$storage_name = 'diagnostics storage account'
$key= 'storage account key'
$package = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\DiagnosticsSDK26.cspkg"
$configuration = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\ServiceConfiguration.Cloud.cscfg"

#Print the version of the PowerShell Cmdlets you are currently using
(Get-Module Azure).Version

# Mark the start time of the script execution
$startTime = Get-Date 

#remove the old diagnostics extension
Remove-AzureServiceDiagnosticsExtension -ServiceName $service_name -Slot Production -ErrorAction SilentlyContinue -ErrorVariable errorVariable
if (!($?)) 
{ 
        Write-Error "Unable to remove diagnostics extension from Service [$service_name]. Error Detail: $errorVariable"
        Exit
}

$storageContext = New-AzureStorageContext -StorageAccountName $storage_name -StorageAccountKey $key
$webrolediagconfig = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\Extensions\PaaSDiagnostics.WebRole1.PubConfig.xml"
$workerrolediagconfig = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\Extensions\PaaSDiagnostics.WorkerRole1.PubConfig.xml"
$hardwrkdiagconfig = "C:\Git\DiagnosticsSDK26\DiagnosticsSDK26\bin\Debug\app.publish\Extensions\PaaSDiagnostics.Hard.WorkerRole.PubConfig.xml"
 

#create extension config
$workerconfig = New-AzureServiceDiagnosticsExtensionConfig -StorageContext $storageContext -DiagnosticsConfigurationPath $workerrolediagconfig -Role "WorkerRole1"
$webroleconfig = New-AzureServiceDiagnosticsExtensionConfig -StorageContext $storageContext -DiagnosticsConfigurationPath $webrolediagconfig -Role "WebRole1"
$hardwrkconfig = New-AzureServiceDiagnosticsExtensionConfig -StorageContext $storageContext -DiagnosticsConfigurationPath $hardwrkdiagconfig -Role "Hard.WorkerRole"

#upgrade the existing deployment and apply the diagnostics extension at the same time
Set-AzureDeployment -Upgrade -ServiceName $service_name -Mode Auto -Package $package -Configuration $configuration -Slot Production -ErrorAction SilentlyContinue -ErrorVariable errorVariable -ExtensionConfiguration @($workerconfig,$webroleconfig,$hardwrkconfig)
if (!($?)) 
{ 
        Write-Error "Unable to upgrade Service [$service_name]. Error Detail: $errorVariable"
        Exit
}

# Mark the finish time of the script execution
$finishTime = Get-Date 

#Display the details of the extension
Get-AzureServiceDiagnosticsExtension -ServiceName $service_name -Slot Production

 
# Output the time consumed in seconds
$TotalTime = ($finishTime - $startTime).TotalSeconds
Write-Output "The script completed in $TotalTime seconds."

 

Summary

Azure SDK 2.6 has addressed most of the issues related to deploying diagnostics to Cloud Services that were introduced by SDK 2.5. The cleanest way to update diagnostics extensions is to remove the existing diagnostics extension and set it again during the deployment. When I tested deploying the diagnostics extension individually on each role, it took 3-4 minutes to deploy each extension, so if you have a large number of roles your deployment times may increase. In my case with 3 role types it was taking 12 minutes for the script to run. When I used the -ExtensionConfiguration parameter of New-AzureDeployment and Set-AzureDeployment it took only 5 minutes for the entire script to run.

Posted in Automation, Azure, DevOps, PowerShell

NLog Target for Azure ServiceBus Event Hub

NLog is a popular open source logging framework for .NET applications. It writes to various destinations via Targets, and it has a large number of Targets available. I created an NLog Target that can send messages to Azure Service Bus Event Hub. You can get the source code and documentation here: https://github.com/rajinders/nlog-targets-azureeventhub

I also created a NuGet package which you can download from here: https://www.nuget.org/packages/NLog.Targets.AzureEventHub/

If you already know how to use NLog it will take you a few minutes to start using the target.

Feel free to use it and let me know if you have any suggestions for improvements.

You may be wondering why anyone would want to send logs to Azure Event Hub. Most applications use logging frameworks to write application logs. These logs are not only helpful in debugging issues, they are also a source for business intelligence. There are already successful companies like Splunk, Logentries and Loggly who provide cloud-based log aggregation services. If you wanted to create your own log aggregation service without writing a lot of code, you could do so on the Azure platform. You can send your log messages to Event Hub with the NLog or Serilog targets for Event Hub. You can leverage the Azure Stream Analytics service to process your log streams. You can even send these logs to Power BI to create dashboards. Both Azure Event Hub and Stream Analytics are highly scalable. Scaling up can be achieved by simple configuration changes.

Posted in Azure, EventHub, NLog, ServiceBus