Linux + PowerShell + AzureCLI in DevOps Pipelines

A couple of months ago I had an exchange with Donovan Brown on Twitter about using the Azure CLI and PowerShell in Azure DevOps pipelines on Linux hosted agents. As is often the way, I didn’t have the time to take the advice I received and experiment straight away. As we ramp up on a new project I realized that I had a project where trying this approach made sense.

Eager to try this out, I dived in, adding an Azure PowerShell task to my pipeline to try blending the lovely JSON responses from the Azure CLI with the power of proper objects in my scripts. So, attempt #1:

Let’s run some PowerShell in a Linux pipeline
Version 3 of the Azure PowerShell task is Windows only 😦

So, a quick change to toggle the task to use Task version 4 (aka Preview)

And, after much anticipation….. ERROR: Please run 'az login' to setup account. 😦 Much disappointment. But Donovan said he does it, so how does this work?
Taking a look at the logs for an Azure CLI task shows me something interesting.

[command]/usr/bin/az cloud set -n AzureCloud
[command]/usr/bin/az login --service-principal -u *** -p *** --tenant *** 
[
   {
     "cloudName": "AzureCloud",
     "id": "0ff1ce000-1d10-t500-86f4-28b59b7dbf13",
     "isDefault": true,
     "name": "Not really the name :P",
     "state": "Enabled",
     "tenantId": "***",
     "user": {
        "name": "***",
        "type": "servicePrincipal"
     }
   }
]

And then inspecting the task options I can see a way of accessing those variables in an Azure CLI task.

Ohhh this looks promising….

Coupling this with some cunning use of the task.setvariable command, I can bubble these parameters out for use in the rest of my pipeline:

# Echo the authentication detail out to variables for use in a PowerShell step
echo "##vso[task.setVariable variable=servicePrincipalId]$servicePrincipalId"
echo "##vso[task.setVariable variable=servicePrincipalKey]$servicePrincipalKey"
echo "##vso[task.setVariable variable=tenantId]$tenantId" 

These then get passed into the PowerShell script as arguments, $(servicePrincipalId) $(servicePrincipalKey) $(tenantId), and used like so:

 az login --service-principal `
          -u $servicePrincipalId `
          -p $servicePrincipalKey `
          --tenant $tenantId 

Hey presto, Azure CLI calls from PowerShell on a Linux hosted build agent 😀
Hopefully someone out there will find this useful.


Using Azure DevOps to build your create-react-app packages

I’ve recently delved into the world of React and started using create-react-app, sometimes called CRA, to bootstrap my project and build tool-chain. As you start to build applications more complex than HelloWorld or TODO.js, sooner or later you’re going to want to separate some of your configuration from your code. This is incredibly good practice. For example, your application might depend on a RESTful service, and this service might have both production and test URLs. Now if you’re “doing it right”™ then you’ll have these URLs as configuration.

CRA has really good support for separating your code and config. This support is predicated on setting environment variables, or using .env files, to hold the configuration for a particular environment. I don’t want .env files in source control, even though the values that wind up in the JavaScript will be out there in plain text for anyone to read anyway. To facilitate this I make use of environment variables controlled by my build environment, in this case Azure DevOps.
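For context, a CRA .env file is just KEY=VALUE pairs, and only variables prefixed with REACT_APP_ are exposed to your application code. A hypothetical example (the URL is a placeholder):

```
# .env — kept out of source control
REACT_APP_API_URL=https://test.example.com/api
```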

For a simple example, let’s run with the idea of having our app depend on a service with test and production URLs. To accommodate this we might simply set up an environment variable and configure a simple HTTP client to use it.

import axios from 'axios';

const customApi = axios.create({
    baseURL: process.env.REACT_APP_API_URL
});
export default customApi;

Now all that needs to be done to build for test or production is to set the value of the REACT_APP_API_URL environment variable to match the desired target. In Azure DevOps any variables that you define for the build pipeline are exposed to your tasks as environment variables. Thus configuring a pipeline variable will allow you to build for your target environment.
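There’s nothing magic in that hand-off: pipeline variables become process environment variables, and react-scripts reads any REACT_APP_* entries from its environment at build time. A quick sketch of the mechanism (the URL is just a placeholder):

```shell
# Export the variable exactly as an Azure DevOps pipeline variable would be
# exposed to a script task, then show a child process (as the CRA build is)
# reading it from its environment.
export REACT_APP_API_URL="https://test.example.com/api"
printenv REACT_APP_API_URL   # prints https://test.example.com/api
```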

But what if you would like to build test and production packages in a single pipeline? Now you’ll need to use an obscure little bit of wizardry specific to Azure DevOps to change the value of the REACT_APP_* environment variable by echoing it out inside a special block of text. It’s documented here:
https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=vsts&tabs=yaml%2Cbatch#set-a-job-scoped-variable-from-a-script.

So, what I have done is configure a second pipeline variable:

Then, in a script task, use the magical echo to overwrite the value of REACT_APP_API_URL for the remainder of the pipeline execution:

echo "##vso[task.setvariable variable=REACT_APP_API_URL]$(_prod_REACT_APP_API_URL)"

Now I can run the build a second time in the same pipeline. Here’s the complete overview of my pipeline:
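My pipeline was built in the classic designer, but expressed as YAML the whole flow would look something like this (variable values and script layout here are illustrative, not lifted from my actual pipeline):

```yaml
# Illustrative sketch of the two-build pipeline
variables:
  REACT_APP_API_URL: 'https://test.example.com/api'        # test value
  _prod_REACT_APP_API_URL: 'https://prod.example.com/api'  # production value

steps:
- script: npm install
- script: npm run build            # builds the test package
- script: mv build build-test      # keep the test output aside
- script: echo "##vso[task.setvariable variable=REACT_APP_API_URL]$(_prod_REACT_APP_API_URL)"
- script: npm run build            # builds again with the production URL
```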

There you have it, a single build pipeline for a React application producing packages for both test and production environments.


SPTechCon: Hit the ground running with the Microsoft Graph

I had the pleasure of presenting at SPTechCon in Boston today. It was a fun talk with a great audience.

As promised here is my Postman collection and slides:


Disabling TLS 1.0 on Windows 10

I’m setting up a new PC and, as usual, I’m installing PoshGit from Chocolatey. This time it failed trying to download the zip file. It turns out that GitHub has disabled TLS 1.0 connections, a protocol which Windows 10 still ships with enabled.

The documentation on how to disable this is reasonably good: https://docs.microsoft.com/en-us/windows-server/security/tls/tls-registry-settings#tls-10

It is a small edit to the registry: just create the keys to denote whether you’re disabling the Client or Server TLS 1.0 protocol, or use the DisabledByDefault option for a little more flexibility, which is what I used.
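For reference, the client-side tweak expressed as a .reg file looks roughly like this (the key path and value names come from the linked documentation; the Protocols subkeys won’t exist until you create them, and a reboot is needed afterwards):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Client]
"DisabledByDefault"=dword:00000001
```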

Frankly I’m a bit surprised that this isn’t the default setting when you install Windows, but I suspect that it causes few headaches for the average user.


North American Collaboration Summit presentation and samples

Last week I was in Branson, Missouri to present at the North American Collaboration Summit, a great conference which has grown out of the strong community that exists because of the SharePoint Saturday movement.

I talked about building chat bots and some of the considerations around making your bots not suck, and gave a couple of demos at opposing ends of the spectrum: at the simple end I showed using QnA Maker to make an FAQ bot, while at the complex end I demoed authenticating with BotAuth and integrating with Azure Functions and the Microsoft Graph.

I’ve put the base code for both the Bot and the supporting Azure Functions up on GitHub for you to look at. I’m going to keep iterating on this code and making it suck less.


Microsoft Graph Community Call Recording

The Microsoft Graph team are awesome. Not only are they providing the Microsoft Graph but they are also really responsive and great to work with. Now they are looking to spread their love even further with a monthly community call.

If you missed the first call on December 5th, don’t worry, the recording and slides are now available. The next call is scheduled for January 2nd 2018, so start your year right with a dose of love from the Microsoft Graph team.


Docker on Windows – Angular Development

I’ve been on a bit of a Docker kick lately. Mostly to help my team reduce the time it takes to get up and running on a project. The end goal here being that, where possible, we’ll have containerized development and deployment environments.

A lot of this drive has come from needing to pick up a few older projects and encountering a number of issues getting these projects up and running. Some of these projects needed older versions of node, grunt or gulp installed to work correctly. Had the development environments for these projects been properly containerized, a lot of the issues encountered could have been mitigated.

As we’re using Angular and @angular/cli for a few of our front-end projects I’ve started there.

There are a number of public images available on Docker Hub that can provide a containerized runtime for your toolchain. I’ll use the teracy/angular-cli image in my examples here as it’s pretty well supported, allows for running tests, and is kept up to date by its maintainer.

In this post I’ll walk through getting a containerized dev environment set up and a few changes that you will need to make to your Angular project to have it run well under this arrangement.

Spin up your container

First of all, make a new directory on the command line, move into it, and spin up your container…

mkdir my-new-app
cd my-new-app
docker run -it --rm -p 4200:4200 -p 49153:49153 -v ${pwd}:/my-new-app teracy/angular-cli /bin/bash

Assuming that you don’t already have the teracy/angular-cli image, this will pull down the image and launch a new container for you. This command also mounts the host directory in which you want to store your source code into the container’s file system as /my-new-app.

Next we’ll use the @angular/cli tools to spin up a new Angular project.

Scaffold your Angular App

ng new my-new-app --skip-install
cd my-new-app
npm install

For some reason the npm install step in ng new always fails for me, so this sequence dodges that issue. The install will take a little while as it pulls down all the necessary node modules from npm. Let’s take advantage of this time to make some changes.

Make ng serve work as expected

  • Open the my-new-app using your editor of choice from the host machine.
  • Open the package.json file
  • Change the start entry in the scripts object to read
    ng serve --host 0.0.0.0 --poll 2000
    

This change binds the dev web server to all network interfaces on the default port (4200) and ensures that file changes made from the host machine are detected inside the container for the purposes of automatic rebuilds.
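After the edit above, the scripts section of package.json ends up looking something like this (other entries trimmed):

```json
{
  "scripts": {
    "start": "ng serve --host 0.0.0.0 --poll 2000"
  }
}
```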

Inside the container run up the development server by running this command:

npm start

From your host machine you can now go to http://localhost:4200 and you’ll see the skeleton app served to your browser. If you make changes to the code you’ll see a rebuild triggered when you save your changes.

All good so far. If this is all you want to do then you can even use my extremely lightweight image gavinbarron/ng-cli-alpine.

Configure for running tests

However, if you want to run tests then there’s still a little more to do, and my image won’t meet your needs. Let’s get the tests running successfully.

Open the karma.conf.js file and add the section below after the browsers entry:

customLaunchers: { 
    Chrome_no_sandbox: {
        base: 'Chrome',
        flags: ['--no-sandbox']
    }
},

Save all your changes and run this in the container (use Ctrl + C to stop the web server if you need to):

ng test --browsers Chrome_no_sandbox

Conclusion

I’ve shown you how easy it is to get a development environment for Angular up and running in a Docker container. You could take this further with docker-compose and a .yaml file as outlined in this great post that helped me get up and running: http://blog.teracy.com/2016/09/22/how-to-develop-angular-2-applications-easily-with-docker-and-angular-cli/
