Slot switching Azure Website from VSTS

In my last post I covered how to set up a simple release process that deploys to an Azure Website in classic mode.

That process involved an explicit web deploy to each slot. What might be useful for some of you is to actually swap slots instead. There are some advantages to this; principal among them is that, due to the way slot swaps work, you can avoid a cold start of the website.

Setting this up is reasonably straightforward: all it involves is having a simple PowerShell script in your code base to do the swap, ensuring that script is included in the output from your build, and calling it from a task in your release process.

A lot of what I’ll cover here is based on the great documentation provided by the VSTS team: https://www.visualstudio.com/en-us/docs/release/examples/azure/deployment-slots-webapps. However, I’ll give you a few tips on how you can use variables in your release process, and do it in the context of the process that I set up in my last post.

The script itself is trivial:

 
param (
   [string] $AzureWebsiteName,
   [string] $From,
   [string] $To
)
Switch-AzureWebsiteSlot -Name $AzureWebsiteName -Slot1 $From -Slot2 $To -Force -Verbose
 

In this example I have the script added to the root folder of my project.
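
If you want to sanity-check the script before wiring it into a release, you can run it by hand with the classic Azure PowerShell module loaded. A quick sketch (the subscription, file, and site names here are hypothetical, swap in your own):

Add-AzureAccount
Select-AzureSubscription -SubscriptionName "My Subscription"
.\SwapSlots.ps1 -AzureWebsiteName my-sample-webapp -From dev -To production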

Once you have this committed in source control you need to make a couple of adjustments to your build and release processes.

Open up the build definition in edit mode (I’ve gone back into the Simple WebDeploy build that I created in the last post) and click on Add build step.

In the dialog select Utility and then add a Copy Files task.

Select the new task and configure it as shown here. Of course, if you have your script in another folder you’ll want to set the Source Folder appropriately. I chose to use *.ps1 here so that as I add other PowerShell scripts that I want to use during release they’ll come along in future.

Then re-order the tasks by dragging and dropping so that the Copy Files to task occurs before the Publish Artifact task.

Click on Save, then OK in the pop-up dialog, adding a comment if you like. Then click Queue new build, accept the defaults in this dialog, and click OK.

You should probably wait for this build to complete and then click on Releases in the second level navigation.

Click on the little down arrow next to the Release definition that you want to modify and select Edit:

Because we set up the old release process to treat each slot as a separate environment, which the official docs from the VSTS team advise against, let’s bring this process in line with their recommendations. First let’s delete the “Production” environment.

Click OK in the dialog.

Click on Add tasks, then add the Azure PowerShell task and close the Task catalog.

Because we’re going to be using a few values in common between the two tasks that we have, let’s configure some variables. Select the Deploy step, copy the value in the Web App Name input, and replace it with a variable reference: $(webapp.Name)

We have a couple of options for the scope at which we can configure variables: either the entire release definition, which is available in the set of pivots just under the release definition name, or a single environment. In this case, as the value of the variable would change per target environment, let’s create the variable at the environment level. Click on the ellipsis in the environment card and then Configure variables.

Click Add variable and enter the new variable and value. Strangely, when I opened this I found some variables that I’d not set up, so I assume they were created when I initially created the environment; feel free to delete any existing variables that are not being used in your definition. Then click OK to save your new variable.
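
In this case the variable is the web app name we just replaced; the value below is a made-up site name, yours will be whatever your Web App is called:

webapp.Name = my-sample-webapp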

Click on the Azure PowerShell script task, select the Azure subscription, set the Script Path, and enter the Script Arguments as below:

-AzureWebsiteName $(webapp.Name) -From dev -To production

At this point you could save the release definition and have a successful deployment, but let’s look at some more of the power that we have as we use variables.

We could set up our slot names as variables if we were so inclined; actually, we can configure pretty much anything as a variable, and a definition-scoped variable can reference one defined at the environment level. So let’s set up the Script Path and the Script Arguments as variables.

Copy the value of the Script Path and enter $(script.swap.path), then click on the Variables link just under the release definition name. Click on Add Variable and enter the corresponding name and value. Repeat this process for the Script Arguments value using a variable named $(script.swap.args).

The variables pane should now look like this:
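
(The exact script path depends on your artifact source alias and folder layout, so treat the path below as illustrative; the arguments value is the one we entered earlier.)

script.swap.path = $(System.DefaultWorkingDirectory)/Simple WebDeploy/drop/SwapSlots.ps1
script.swap.args = -AzureWebsiteName $(webapp.Name) -From dev -To production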

And in the script task, the Script Path and Script Arguments inputs should now contain just the variable references.

Time to test out the process. Save the definition and create a new release. If you navigate into the release and go to the Logs pivot you can watch the deployment progress; or, if it’s done by the time you get to the logs, click on the Azure PowerShell step in the list and see that the nested variables have been successfully expanded and your slot swap script executed as expected.

If you needed to add a new environment now, all that would be necessary, once the target site and slots are created, would be to clone the existing environment in the release definition and change the value of the webapp.Name variable in the new environment.

As you can see, slot swapping is straightforward to set up, and by using variables it becomes trivial to add new environments or change deployment targets.

Posted in Azure, Deployment, Development, DevOps

Minimal path to awesome – VSTS Release Management and Classic mode Azure Website

I’ve been doing a whole lot of work with Visual Studio Team Services, formerly Visual Studio Online, and making heavy use of the Release Management feature, as I discussed recently on the Microsoft Cloud Show Episode 146.

As this can be a wee bit daunting, I’m going to do a few blog posts to help you get started with this tool. So that this doesn’t blow out into an epic boil-the-ocean post, I’m going to assume you have a few things:

  • A VSTS subscription. These are free for small teams, up to 5 people I think, so you can try this at home
  • A Team Project set up using git as the source control system of choice, because friends don’t let friends use TFS
  • git installed on your local machine
  • An Azure Subscription that you are an Administrator of. It’s pretty easy to get a trial subscription with $200 credit if you’re wanting to try this at home or without incurring costs for evaluation purposes
  • An Azure Web App with at least a Standard license and a second deployment slot configured in your subscription.

So you might need to get a few of those sorted out; don’t worry, you can pick back up here once you have those 5 things.

There are a few tasks that we need to do:

  • Get the code into a VSTS Team Project backed by git. I suppose if you really wanted to use TFS the steps beyond this one would all be the same too…
  • Set up a simple build process
  • Connect VSTS to Azure using classic mode
  • Setup a simple release process

Get the code into your VSTS Team Project

You’ll note that I didn’t mention Visual Studio, so that’s not a blocker for you; I’ve put the code together already. If you want to use your own project, great (in fact I’d prefer it), but I need code to build for this, so I just went File > New > Project > ASP.NET 4.6.2 Web site > SPA and pushed all that to GitHub.

Fire up your command prompt of choice. I personally use PowerShell with PoSHGit installed. Then clone the sample repo locally.

git clone https://github.com/gavinbarron/SPA-Template.git vsts-rm-mpta

This will pull down the repository into a new folder called vsts-rm-mpta

Change directory into the newly acquired repo

cd vsts-rm-mpta

Remove the link to the GitHub repository that you acquired this code from

git remote remove origin

Now we need to get this code into your VSTS Team Project. If you click on the Code tab in the VSTS UI, you’ll be presented with a page that helps here if there’s no code in there yet.

Towards the bottom of the page you’ll see the information you need:

So you can just copy and paste those from your VSTS instance.

git remote add origin <url-of-your-vsts-git-repo>
git push -u origin --all

These commands establish the link between your local repository and the one that backs your VSTS Team Project, and push the code from all branches in the local repo to the remote repository called origin. Note that the name origin is just a commonly used convention; if you really wanted to, that remote could have the name Slartibartfast.
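
If you want to double-check the link, git will list the configured remotes and their URLs:

git remote -v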

Now that the code is in your VSTS repo, let’s create a simple build process.

Set up a simple build process.

We’re going to configure the most minimal build process possible. This is not what you should be doing in the real world; for a real project I’d at the very least have a task in the process to run unit tests. I’m deliberately omitting that step for the sake of brevity.

In VSTS click on Builds

Note that your VSTS instance may still have the older top navigation that has Build and Release as separate entries at the top level.

Click the big “New definition” button

Choose the Visual Studio base template, as this gives us most of what we want, and click Next.

You can choose to make this a CI build, so that it kicks off every time you commit changes to your repository, by checking the box as I have; this is optional though. Then click Create.

Now that we have a base let’s set up our build.

First of all, delete the Test Assemblies, Publish symbols path, and Copy Files to tasks.

Next we’re going to re-configure the Build solution task so that it creates a Web Deploy package, which we’ll use in the release process. Click on the Build solution task.

Use the … to the right of the Solution text box and use the file explorer dialog to find the vsts-rm-mpta1.csproj file.

Set the MSBuild Arguments value to this:

/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:PackageLocation=$(build.stagingDirectory)

This creates a web deploy package, which we will use in the release process, writing it directly into the folder that is used by the Publish Artifact task.

Click on the Variables item in the Build Definition navigation.

On the Variables tab we need to edit BuildConfiguration and BuildPlatform to use Release and AnyCPU as shown here; this is because we’re building from the project file rather than the solution file. With these edited, click the Save button.
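
That is, the two variables should end up as:

BuildConfiguration = Release
BuildPlatform = AnyCPU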

Give your new definition a name and click OK.

Click the Queue new build button to test out the build.

Accept the defaults and click OK

Excellent, we have a web deploy package ready to push out to our website.

Connect VSTS to Azure in Classic Mode

In the top navigation click on the Services link in the hover menu under the cog. Note that if your tenant is using the old navigation then click on the cog and in the following page click the Services link in the top navigation.

Click on the New Service Endpoint dropdown and then Azure Classic

You could choose to use a credentials-based connection if you like; I prefer a certificate-based connection as it’s not going to become invalid if the password of the user account changes. Select the Certificate Based radio button and click on the publish settings link. You may need to sign in to your Azure account.
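
As an aside, if you’d rather not click through the portal, the classic Azure PowerShell module can download the same publish settings file for you (assuming you have that module installed):

Get-AzurePublishSettingsFile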

Choose the Azure Active Directory that is associated with the Azure subscription that you want to link in the drop down and click Submit.

Open the .publishsettings file that downloads, copy over the corresponding values, click Verify connection, and once the connection is verified click OK.

Now with a connection established between Visual Studio Team Services and your Azure Subscription you’re ready to create a release process.

Create a Release Process

Click on Releases in the Build & Release hover menu; if your tenant is using the older menu this will be a separate entry in the top navigation.

Click on New definition

Select Azure Website Deployment and click Next

On the next pane of the dialog you can choose to turn on Continuous Deployment. This will result in each successful build of the linked build definition triggering an automated release. As you only have one build definition at this stage it’s automatically selected as the default, so just click on Create.

Click on the Pencil icon and give your release definition a new name.

In the configuration pane for the Deploy Website to Azure task select your newly configured connection in the Azure Subscription (Classic) drop down. Choose the region that you deployed your target Azure Web App in. In the Web App Name field, type in the name of your Web App; the values that appear in the drop down are only the web sites associated with the default service plan in the selected region, so don’t be surprised if you don’t see the target website listed.

Enter the name of the slot that you created when you set up your Web App.

Use the file explorer to locate the vsts-rm-mpta1.zip file in the build output.

The additional arguments value can be cleared

Delete the Run Tests task

Click on Environment 1 and rename it to Dev (or whatever you like).

Click on the ellipsis (…) in the environment card and choose Clone Environment

In the add new environment dialog let’s assign an approver, so that our simple process has a sign-off step before deploying to production, and then click Create.

Rename the environment to Production

Clear the value set in the Slot setting for the Deploy Website to Azure task

Click Save and click OK in the dialog.

Open the + Release drop down and click on Create Release

Select the version of your build that you want to deploy with this release. You can also see here that you can change the deployment triggers for each of the environments that you have configured; in this case we’ll leave the options as configured in the release process and click Create.

Click on Release 1 to see this release in action.

After the release into the Dev environment has succeeded you can see that there is an approval to deploy to production pending. The users listed as approvers will also receive an email notifying them that there is a deployment pending approval.

At this point take a moment to verify that the slot you configured has had the code deployed successfully

Click on Approve or Deny and choose to approve the Production deployment. You’ll note here that you can defer the deployment to a later date; this can be extremely useful if your customer requires a 3am deployment (although you’ll probably want to wake up and verify that all is good after the deployment).

After a while the deployment will complete and you’ll see this reflected in the summary page

Congratulations, you’ve successfully configured a simple release process using VSTS Release Management.

Conclusion

As you can see, although we covered a lot of configuration, this basic process could easily be added to almost any existing Visual Studio Team Services based website build process. We’ve barely even scratched the surface of what is possible with this platform.

There are plenty of other ways that this could be configured, for example we could have our Production environment not use a web deploy task but run a script to swap the dev and production slots. We could use Azure Resource Manager to provision or update our environments. We might want to set Application Configuration specific to the environment that is being deployed into. I’m planning on covering each of these three scenarios in future blog posts.

Please let me know if you found this useful, or if there’s anything that you think needs clarification or improvement.

Posted in Azure, Deployment, Development, DevOps

The future of SharePoint development is cloudy

I’ve seen the future of SharePoint development and it’s decidedly cloudy. And that’s a good thing, no, that’s a great thing for the health of the SharePoint developer community.

A few months back I was fortunate enough to be invited to attend a SharePoint Dev Kitchen on the Microsoft campus in Redmond, Washington. This was a fantastic three day event to get some developers hands on with the new development approach that the SharePoint engineering team have been working hard on. First I have to thank Dan Kogan, Adam Harmetz and the rest of the team for putting on such a great event. This was probably one of the best technical events I’ve ever attended.

Over three days we got to explore a new way of building WebParts, custom lists and applications on SharePoint, all while the engineering team looked on and worked with us when we ran into issues. The engineers also took the time to understand why we were taking particular approaches and to discuss what we perceived as necessary for a successful v1 of this new development approach.

So, about this new way of developing against SharePoint. If you didn’t catch the #FutureofSharePoint event and haven’t picked up on the not so subtle hints that have been coming from those in the know it’s a JavaScript centred way of enhancing SharePoint. Before you freak out too much, the CSOM APIs are not going away, you can still write Full Trust Code, you can still make use of the Add-In model, all that great guidance in the PnP is still incredibly valid, this is just another tool in the toolbox.

But you know what? I think that it’s going to be a much better option in many cases. For example, in the case of WebParts, under the Add-In model those are actually IFrames to some code either hosted in another Site Collection or on a whole other server, depending on the Add-In flavor you’re using. In the new model, well, that code is just another script in the actual page where you inserted that WebPart, which comes with the benefit of easy access to any of the JSOM APIs and REST endpoints for the site.

What we got to play with was a custom Yeoman Generator which created one of three project types: WebPart, Custom List or Custom Application. For those folks coming from a pure .NET background, the best way to think of this is a command line tool that performs the same role as the multiple steps you walk through in Visual Studio when you hit File > New Project and set up a SharePoint or Office Add-In. This sets up a project that uses TypeScript as the language of choice and a Gulp task runner to help with your dev workflow. Most importantly, the tooling doesn’t force you to use any particular front end framework or libraries to build your WebParts or Applications. The engineering team showed us examples using React, but you can use AngularJS if that’s your preference; heck, I’d be willing to bet using Aurelia shouldn’t be too hard either.

From my perspective this strategy of providing developers working against SharePoint with a means of leveraging all that modern web development has to offer is a great thing. Offering developers more choice about the languages, tooling, and frameworks that they are able to use is a healthy thing. On the note of choice, just because the version of the tooling I played with set you up with a TypeScript project doesn’t mean that you can’t use ES6 or plain old JavaScript either; due to the extensible nature of Gulp you can change out the parts of the default build setup that you don’t want for options you do. This model of development is incredibly open and allows you to use pretty much any JavaScript tools or libraries you want.

You’ll note that I’m calling the future cloudy; that’s because all of the new innovations are going to land in SharePoint Online / Office 365 first. For me as a developer that means that’s where I want to be working by preference. I want the shiny new toys as soon as possible, with all the challenges and joy that comes with that decision. And if your client hasn’t got a compelling reason to keep their SharePoint instances On-Premises then they should probably be looking at moving into Office 365 to get all the benefits of the new engineering effort and new features sooner rather than later, or in the case of some features, at all.

This change is a huge shake up for developers in SharePointlandia; for too long SharePoint developers have been stuck working with the previous version of the .NET toolset, and now they have a chance to step up and move to the cutting edge, albeit with a slightly different set of tools to what they have traditionally been used to. Use this opportunity to embrace the change, learn new skills, and discover the joy of abandoning Visual Studio.

Posted in Development, News, SharePoint

Migrating questionable date strings

I’ve been having a lot of fun with data migration lately /s

Anyway, I have an evil source database that has free-form strings that allegedly represent dates. I look at this data and think to myself, this is why we use the correct data types and have validation, but I digress…

This data contains such wonderful dates as ‘N/A’, ‘feb 2009’, ‘2008’, ‘09/28/09’ and ‘19/11/2014’

So we have partial dates, mixed date formats and things that just are not dates, which I’d like to be dates or NULLs after migration.

I’ve crafted a bit of SQL that works in my case to perform the conversions that I need. Because of the varying formats of my inputs, TRY_PARSE and TRY_CONVERT have differing levels of success; and to add to this, when you feed an empty string into TRY_CONVERT you’ll get back ‘1900-01-01’, which I don’t want.

SELECT [Date] as SourceColumn,
    ISNULL(
        ISNULL(TRY_PARSE([Date] as DATE),
            TRY_PARSE([Date] as DATE USING 'en-GB')
        ),
        CASE WHEN LEN([Date])=0 THEN NULL ELSE TRY_CONVERT(Date, [Date]) END
    ) as OutputData
FROM [source_table]

The code works as TRY_PARSE returns NULL if it can’t successfully extract a date.

Combined with the ISNULL function this allows us to wrap extra parse attempts using other cultures, and then use a CASE statement to ensure I get NULL if the input string is ‘’.

Inside the CASE I optionally use TRY_CONVERT which handles the cases where I have inputs like ‘2009’ and ‘feb 2009’ or other text/partial dates.

Here’s a sample output from the query shown above

SourceColumn    OutputData
19/11/2000      2000-11-19
04-21-2010      2010-04-21
2010 June       2010-06-01
Dec-01-2010     2010-12-01
feb 2011        2011-02-01
2008            2008-01-01
xxxxxxxx        NULL
(empty string)  NULL
6/10            2016-06-10

Hopefully this saves someone else some time in the future, or if you know of a better way of doing this please do let me know.

Posted in Development, SQL

Don’t get burned by Redis ConnectionMultiplexer: a sample wrapper

Ever had a latent bug go undetected and then jump up and bite you?
Yeah, not the nicest feeling.

I’ve been working on a pretty interesting project that now makes use of Redis to provide a caching layer, as the system uses multiple servers and does some reasonably heavy computation to prepare the data for use in the front end. For reference, there are numerous REST calls to get the basic data and then a raft of search queries that are executed (again via REST) to build up a data payload of about 1MB. Anyhow, given this complexity we cache this. Previously we just used the old school System.Web.HttpRuntime.Cache. While this worked, it had a few limitations, most notably that ensuring consistency of cached data across multiple servers is all but impossible. So we elected to change our implementation to use a Redis cache server, which we can easily provision in Azure. WIN!

So we implemented a cache wrapper object to abstract away the complexity of connecting to the cache etc. Actually we borrowed the wrapper object we’d implemented on another project…

If you read the How to Use Azure Redis Cache article there is some great guidance in there on how to set up a connection to your cache. There’s a single line in that article that is EXTREMELY important: “The connection to the Azure Redis Cache is managed by the ConnectionMultiplexer class. This class is designed to be shared and reused throughout your client application, and does not need to be created on a per operation basis.” In fact this class MUST be shared in a single-instance manner. Now, if you’re using an IOC container, such as Unity or Ninject, all you need do is ensure that your IOC container holds a single instance of your wrapper class to treat as a singleton, and then this criterion is met.

Now, the project that we borrowed that wrapper class into isn’t using an IOC container, unlike where it came from. The net result was that we wound up creating a ConnectionMultiplexer instance every time our RedisCache wrapper object was instantiated, meaning that we slowly added more and more open client connections to the Redis server. Because this code was running on IIS, the app pool was recycling nightly, as they do, and closing all of those open connections…
So we didn’t notice our problem until the number of calls into the code that talked to Redis reached a certain level, at which point the Redis server came to a grinding halt with a load metric of 100% 😐

Full credit to the Azure Support team, they have been super responsive and helped me resolve the issue we had with our code. Personally I’d love the Azure team to include a full class listing in that article, or linked from it, that handles the connections properly. But until they do, I’m going to provide the one here that they came up with in the support thread I worked through with them.

using System;
using StackExchange.Redis;

public class RedisCache : IRepositoryCache
{
  private static ConfigurationOptions _configurationOptions;
  private readonly CachePrefix _prefix;

  public RedisCache(ConfigurationOptions configurationOptions, CachePrefix prefix)
  {
    if (configurationOptions == null) throw new ArgumentNullException("configurationOptions");
    _configurationOptions = configurationOptions;
    _prefix = prefix;
  }

  private static IDatabase Cache
  {
    get
    {
      return Connection.GetDatabase();
    }
  }

  // The Lazy<T> wrapper guarantees that the ConnectionMultiplexer is created
  // once, on first use, and then shared by every instance of this class.
  private static readonly Lazy<ConnectionMultiplexer> LazyConnection
    = new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect(_configurationOptions));

  public static ConnectionMultiplexer Connection
  {
    get
    {
      return LazyConnection.Value;
    }
  }

  public void ClearItem(string key)
  {
    // Check the caller's key before applying the prefix, otherwise the
    // null check can never fire.
    if (key == null) throw new ArgumentNullException("key");
    key = _prefix + key;
    Cache.KeyDelete(key);
  }

  // Other cache access methods omitted for brevity
}

The key things that make this implementation work are that the _configurationOptions member and the wrappers around the ConnectionMultiplexer are static, and therefore shared among all instances of this class.

Once I got this version of the code up into production, the number of open connections to the Redis server dropped right off and hasn’t grown out of control since 🙂

Anyway, hopefully this helps someone else avoid making the same mistake we did.

Posted in Azure, Best Practice, Development

Update the installed certificate for an Identity Provider

If you use ADFS or some form of federated identity in SharePoint eventually you’re likely to need to update the certificate you have installed. This is because SharePoint holds a copy of the public certificate to verify the incoming SAML Claims tokens. Thankfully it’s reasonably painless and requires no downtime for SharePoint.

Here’s a script I’ve used to get this job done quickly and painlessly.

Add-PSSnapIn Microsoft.SharePoint.PowerShell
# Load the new public certificate from the current directory
$cwd = Resolve-Path .
$certPath = Join-Path $cwd "NewCert.cer"
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("$certPath")
# Replace the certificate held by the trusted root authority
Get-SPTrustedRootAuthority "Trusted Root Authority Name" | Set-SPTrustedRootAuthority -Certificate $cert
# Import the new trust certificate into the trusted identity token issuer
Set-SPTrustedIdentityTokenIssuer -Identity "Trusted Token Issuer Name" -ImportTrustCertificate $cert
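
If you want to confirm the update took, you can read the certificate back off the token issuer; as best I recall the issuer object exposes the current signing certificate:

Get-SPTrustedIdentityTokenIssuer "Trusted Token Issuer Name" | Select-Object -ExpandProperty SigningCertificate
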
Posted in Uncategorized

Browsing localhost using “Project Spartan” AKA “Microsoft Edge”

I’m using Windows 10 as my primary OS now; maybe that’s a bit nuts, but hey, build 10162 is a lot more stable than some of the early builds, and the driver support is pretty good now too.

I fired up an Angular project I’d been working on and found I couldn’t load the site using Edge. The site loaded as expected in Chrome and IE but wouldn’t work in Edge; after a few seconds I saw this lovely error screen.
[Image: Edge can’t load localhost]

After some searching I found this thread on TechNet, which revealed that we need to add a loopback exemption for Edge, or Spartan, depending on which build of Windows 10 you’re running.

CheckNetIsolation LoopbackExempt -a -n=microsoft.microsoftedge_8wekyb3d8bbwe
CheckNetIsolation LoopbackExempt -a -n=microsoft.windows.spartan_cw5n1h2txyewy
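
If you want to check what’s registered, the same tool will list the current exemptions (the -s switch shows them):

CheckNetIsolation LoopbackExempt -s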

You may also need to open a new tab after adding the exemption in order to successfully browse to your locally hosted website.

What about that package name used in the command?
Well, take a look under C:\Windows\SystemApps and you can see the full package name listed there. I would imagine that if you wind up building your own apps that require access to localhost then you’ll need to add specific exemptions that use the full name of the app package.
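
A quick way to eyeball those package folder names from PowerShell (it’s just a directory listing, nothing special):

Get-ChildItem C:\Windows\SystemApps | Select-Object Name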

Posted in Microsoft Edge, Windows 10

Page mode and JavaScript

So I ran into a scenario recently where a customer had a jQuery script making some DOM modifications which they wanted, and everything looked good.

Until you went to edit the properties of some web parts. Unfortunately, due to the DOM manipulation that the script was doing, it was impossible for a user to edit these web parts. Not all that helpful.

Given that this is on a custom page layout, the answer is just to add an EditModePanel with the attribute PageDisplayMode=”Display” surrounding the offending script tag. Job done: the script is no longer in the page in design, AKA edit, mode.

But what about when the script still needs to do some changes or is being injected via a script editor web part?

The solution is luckily very simple. SharePoint kindly puts a hidden input field into the page while it’s in design mode.

<input name="MSOLayout_InDesignMode" id="MSOLayout_InDesignMode" type="hidden" value="1"/>

All you need to do is check this with a single line of jQuery and use this to control your logic flow.

if($('#MSOLayout_InDesignMode').val() !== "1"){
    //do display mode only stuff
}

Nice and easy, two simple methods of having scripts which only run in the display mode of your choosing.

Posted in Uncategorized

SharePoint MVPs do an AMA

Are you a redditor?

If you are, start collating some questions for the SharePoint MVP AMA, which will be held on October 30 at 6am (October 29 at 1pm EST according to the post in /r/sharepoint). If you’re not a redditor then just come along and lurk.

There will be a lot of well known SharePoint and Office 365 MVPs participating so this is a great time to ask those burning questions.

See you there 🙂

Posted in Uncategorized

Why do I need to use SPWeb.AllowUnsafeUpdates?

We have a customer who has a couple of custom features bound onto a WebTemplate that were giving them grief when they attempted to provision sites from this template via PowerShell, but worked just fine when using the web UI.

What the team was experiencing was that during the activation of certain features, errors were thrown with the message “The security validation for this page is invalid”. Now, the fix is simple: set AllowUnsafeUpdates = true in the custom feature activation code.

But why do we need to do this?

First let’s look at what’s happening in the context of making these changes via the web UI. In SharePoint there is a FormDigest control; this control places some security validation information into the page, which is included in the POST back to the server. SharePoint uses this information to verify that this request to change the contents of its databases does correspond to a request from a page that was served up by SharePoint.

Now, when we attempt to make these changes from custom code that’s getting executed from PowerShell, there’s no page, no POST, and no form digest information bundled along. So SharePoint attempts to verify the form digest, a.k.a. the “security validation”, and in the interests of self-protection quite rightly throws an exception. This behaviour of SharePoint wanting its changes to come from a web browser can also present via the message “Updates are currently disallowed for GET requests”.

Of course, because what we’re trying to do here is a valid use case, SharePoint supports disabling these checks via the AllowUnsafeUpdates property on the SPWeb object. Now, because setting this property to true opens up potential security risks, you shouldn’t just set it to true and leave it that way; toggle it on while you need to make these changes and flip it back to the way it was.

// Capture the current value so it can be restored afterwards
bool unsafeUpdates = web.AllowUnsafeUpdates;
web.AllowUnsafeUpdates = true;
// make some changes
web.AllowUnsafeUpdates = unsafeUpdates;

Hopefully this has helped to explain the WHY behind the use of AllowUnsafeUpdates that you’ll often see in custom server side code.

Posted in Uncategorized