Donald On Software

Just my thoughts, my interests, my opinions!

Security Configuration for Teams

Typically, if it does not matter whether team members can view the work of other teams, or they even work across teams (which is usually the case), then Contributor access at the TFS Project level is all that is needed. However, there are situations where you need to guard each team's data so that other teams cannot see its source or work items, yet still stay within the same TFS Project so that you get cross-team reporting that makes sense.

This post takes you through the steps needed to put that level of security in place.

Creating Teams

You create teams and administer their security from the TFS admin page. You need to be a Project Administrator to create teams, and a Collection Administrator to create TFS Projects. Assuming you have the appropriate permissions, start from the normal TFS web access page and click on the gear icon at the far right of the page.

~

Then just click on the New team button to create a new Team.

~

When creating a team it is important not to put it into any of the built-in TFS security groups. These groups are set up at the TFS Project level and their rights and permissions filter all the way down to every team. The end result is that you add a member to one team and they can still see the work and source of all the other teams, because they got their permissions from the TFS Project level.

~

When you create the team, make sure you set the permissions option to (Do not add to a security group). Although the dialog does not say so, the team still gets its own TFS security group with that name. This means that anyone we add to this team (provided they did not pick up higher permissions by being a member of some other team or a more elevated security group) will only have access to the things we have explicitly granted to this team.

Before we move on to the actual security, we have to set up this team's permissions from the perspective of the TFS Project. There are a few things to set here, otherwise the team members would not even be able to see their team. You do this by starting from the root team (it matches the name of the TFS Project) in the admin page. While still on the page where you created the team, click on the Security tab.

~

Here you want to select your new team and then allow the appropriate permissions at the TFS Project level. You might be tempted not to set View project-level information, but without it the members cannot even see the project, let alone get to their team. Things you definitely do not want to allow are the ability to Delete team project or Edit project-level information; that sort of thing should be reserved for the Project Administrators.

~

Area Path

The next thing we need to tackle is the area path. Starting with TFS 2012, the area path is what represents the team. Securing the team's area path is how we keep its work items visible only to the appropriate team.

~

When this security screen first pops up you can see all the security groups that come from the Project level. It is important to note that if you want to restrict any users, make sure they do not fall into any of these groups; otherwise it will leave you wondering why they can access things you never gave them permission to.

~

The first thing you will want to do is to add the team security group to the area.

~

Find your team security group (it already exists from the creation of the team) and click the Save changes button.

~

With the new TFS group selected you will see on the right that nothing is set by default. Click on all the permissions that you want to grant to the users of this group and then click on the Save changes.

~

Version Control Security

Version control security works in a similar way to what we set up for areas. Security is placed on a folder, and the permissions you set there apply to that folder and everything beneath it (they are recursive), so each team gets permissions only on its own folders.

~

The first step is to right-click on the folder where you want to apply the security, go down to Advanced in the context menu that pops up, and finally click on the Security tab.

~

When this dialog opens for the first time, the team's group will not be in the list of roles that have permissions. The first thing to do on this screen is to click on the Add button and choose the menu option “Add TFS group”.

~

Next you will need to select the team group and add the permissions that you want this new group to have and finally click on the Save changes.

~

That is really all it takes to set up security at the team level. The thing to keep in mind is that the members should not belong to any of the default roles; as you can see from the image above, all of those roles have at least some permission, at a minimum read (the Readers role). If you follow this pattern, where people are only members of their own team, then they will only see the source that their team group can see. It will be as though the other source is not even there.

Shared Folders Security

For each team to be able to show query tiles on its home page, those queries must exist in the Shared Queries area. Because each team has different needs and reports on different items than the other teams, it should have its own folder that only its members can see. One way to manage this is to create a query folder for each team under the Shared Queries folder and then add security specific to that team.

Start in the Shared Queries folder; you can do this in either Web Access or Visual Studio. Web Access is shown here since everyone has access to that tool, but the steps in Visual Studio are very similar. From the home page, click on the View queries link.

~

Expand the Shared Queries folder to expose all the folders and out-of-the-box queries. Then right-click on the Shared Queries folder and select “New query folder”.

~

Enter the name of the team for this query folder. After it has been created, right-click on the team folder and select Security…

~

Click on the Add dropdown control and choose “Add TFS group”. This opens another dialog box so that we can add the Donald Team group to this folder.

~

Find or enter the name of the Team and then click on the Save changes button.

~

With the team security group selected you can select the permissions that they are allowed to have. Typically this would be the Contribute and the Read permissions. Then click on the Save changes button.

~

Now, going back to the Shared Queries view, look at what a user who is only a member of this team would see. They can only see their team folder under Shared Queries; even the default queries are not visible.

~

Active Directory Groups

One final discussion in this area of security: how Active Directory groups play into the whole thing. The TFS groups are still used to manage the permissions, but instead of adding individuals to the group you add an AD group.

It pretty much has to be done this way because TFS automatically creates a TFS group at the time the new team is created. Another option would have been to create a separate TFS group and give it the permissions directly, but since TFS is going to create the team's group regardless, this is the cleaner way to go.

Start from the home page of the team and make sure you are in the team you want to add the Active Directory groups to. Next, click on the Manage all members link, which opens a new window.

~

In this window click on the Add… dropdown and choose “Add Windows user or group”. This is where you add the Active Directory (AD) group that will be used to manage the actual users. From this point on, as you add or remove people from the AD group they gain or lose the rights assigned to the corresponding team.

~

My New 3 Rules for Releases

Every one of my products has an automated build and a properly managed release pipeline. At the time I just thought of it as business as usual, since I was always working toward a well-performing DevOps operation in my personal development efforts. But something happened in the way I started approaching things that you don't really plan for: things just start to happen once everything is automated, or at least they should, and that is what this post is about.

I don’t have to wait

One of the first things I noticed was that I no longer felt I needed to wait for some big plan of mine before doing a release. In the past I used an Epic work item to plan out the features I would need to complete to get the next release out. Even before I had all these steps automated, I noticed that plans changed quite often. Priorities and well-meaning release plans would take a turn and become something different, like finding a critical bug that could affect current customers. I would want to release that bug fix or feature as quickly as possible.

Before everything was automated, these things bothered me, but there wasn't an easy way to just get the release out there; there were still enough manual steps that you wanted to limit how often you did it. Now, however, there is no reason not to take a build with a completed bug fix or feature, push it down the pipeline, and get it released into production. But if this rush to production is suddenly available to me, isn't there a possibility that something that wasn't quite ready gets into production by accident? That is why I came up with these three new rules that I set for myself, which need to be followed before a build can be pushed into production.

My New 3 Rules for Releases

  1. Don't allow any build that came from a branch other than master (git) or Main (TFVC) into production. If it is not master, it should simply be rejected in the deployment steps.
  2. A build that is successfully released into production will be locked indefinitely with regard to the retention policy.
  3. The build number must be incremented any time we successfully release into production.

What follows are the ways I automated these three rules and made them part of my operation. Now there is never a fear that something might get deployed into production that really should not be. I can push things into production when it is important to do so, and sometimes I might delay a release because there is no big benefit, which saves customers from having to download and install a release that could be packaged up with a later one. The point is that a release can be made any time it needs to be, and there is no more of this long-range planning that never happens the way you expected anyway.

No Builds into Production that did not come from Master

As you may have gathered from some of my earlier posts, my personal projects have pretty much all ended up in git repositories hosted remotely in Visual Studio Team Services, Microsoft's cloud implementation of TFS. With that I am following a very typical git workflow: every Product Backlog Item or Bug starts with a new feature or bug branch. This is really nice as it gives me a good level of isolation and the knowledge that my work will not affect the working code. It also gives me the flexibility to fix an important bug or PBI that changed in priority and know that the code I tore apart elsewhere will not affect the outcome.
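As a rough sketch, starting one of those branches locally looks something like this (the branch name is just a made-up example):

git checkout master
git pull
git checkout -b bug/1234-fix-login-timeout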

This also gives me the opportunity to test the code, confirm that it is complete and give it one more look through, because the only way code from a branch can get into master is through a pull request. The pull request has a number of options as well, such as squashing all the commits into a single commit (which gives me a very clean and clear way of answering the question, how did you add this feature?) and deleting the branch after the merge.

Master is always the branch that represents production, or ready for production. I want the code to come only from master because this is where all the branches come back to. A rule like this makes sure the merge always happens and that nothing gets left out. I have seen some very complicated branching structures when working with clients, and something I have seen quite often is that branches did not always get merged back to where they should. There would be complicated discussions about where the code that goes to production should really come from. Here I have eliminated all that complexity with a rule that says you cannot push a build that did not come from master into production.

Now, how do you enforce this automatically? Well, I could not find a task that would help me with this, but I did know how to do it with a simple PowerShell script.

$branch = "$Env:BUILD_SOURCEBRANCHNAME"

if ($branch -ne "master") {
    Write-Host "Cannot deploy builds from the $branch branch into Production" 
    Write-Error ("Can only deploy builds from the master branch into Production")
    exit 1
}
else {
    Write-Host "Carry on, this build qualifies for a deployment into Production"
}
Implementing the Master Branch Only Rule

I use a PowerShell task at the top of the release for the Production environment, running this as an inline script, to implement the rule. If for some reason I push a build that came from some other branch, this task fails and the release goes no farther. In my world I typically have one build definition that points to the master branch by default, but I override that when I am working on one of my feature branches to get feedback on how the code is building and deploying. I really like this because I am using the very same build and deployment scripts that I would use when going into production. So you can see how a build from one of these branches could accidentally get into production if I did not have this very simple rule enforcement.

Locking A Released Build

During development, builds and deployments are happening all the time. Most of these I don't really care about, as their only real value is feedback that the application still builds and deploys as it always has. So one thing I never want to do is lock down a build that came from anything other than the master branch. I used to have a task on the build definition that locked down any build created from the master branch, but that is not a good rule to live by either: there have been times when the deployment of a master build failed while going through the release pipeline, and other times it did not fail but there was a conscious decision to hold off on the release and ship that work later with a few more features.

What I needed was a task that would put an infinite retention lock on the build whenever it was successfully deployed into Production. I found one in the Microsoft Marketplace that does exactly that. It is part of a small collection of build tasks written by Richard Fennell, a long-time ALM MVP. In the Marketplace it is called “Build Updating Tasks”, and if you search for that, “Richard Fennell” or “Black Marble” I am sure you will find it.

I have this task near the end of my Prod deployment with the Build selection mode set to “On primary build artifact”, and that is it. It works like a charm: when I deploy to production and the deployment is successful, it finds that build and sets its retention to keep forever. I no longer have to think about making sure I don't lose the builds that are in production.

Increment the Build number

This rule has really allowed me to move freely into my new DevOps approach and no longer depend on the long-planned release which, as I explained earlier, never got released the way I thought it would. Things and priorities change; that is life. In my build definition I have a set of variables: one called MajorMinorNumber and the other BuildNumber. These, combined with the TFS revision number on the end, give me the version number of my release. So in the build definition, under the General sub tab, my Build number format looks similar to:

Product-v$(MajorMinorNumber).$(BuildNumber)$(rev:.r)

Now let's break this down a little. The MajorMinorNumber changes rarely, as it represents big changes in the application. This follows something close to semantic versioning: if there is going to be a breaking change I update the major number, and if there is going to be a big change that remains backwards compatible the minor number is incremented. When I am just adding features that are additive to the application, or fixing bugs, the build number is incremented. The fourth number, the revision, is left for TFS to guarantee that we always have a unique build number.
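For example, with made-up values of MajorMinorNumber = 2.1 and BuildNumber = 14, successive builds of the same definition come out as:

Product-v2.1.14.1
Product-v2.1.14.2

where the trailing .1, .2, and so on is the $(rev:.r) part that TFS appends to keep each build number unique.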

In the past I have been known to use a date-like version number for applications where I didn't think it would really matter. However, I noticed that even there some very important information gets lost. With a daily build, the day part of the version number increments every day even though I might still be working on the same PBI or Bug. Instead, I want a new build number after a successful deployment into Production. That way I know customers out there may have upgraded to a newer version, and I can even produce release notes describing what was part of that release. But I did not want to go and increment the build number in the build definition every time this happened; I wanted this to be automatic as well.

The solution is another task that is part of the same extension we installed for the previous rule. There is a task called “Update Build Variable”, and I have it as the very last task of the deployment into my Prod environment. It is very simple to set up: the Build selection mode is “Only primary build artifact”, the Variable to update is “BuildNumber”, and the Update mode is “Autoincrement”.

Now, after a successful deployment into Production, my build number is incremented and ready to go, whether for my next long-planned set of features or for getting out that really quick, important fix I just needed to ship.

My Experience with Git Sub-modules

I just replaced my phone with a new Microsoft Lumia 950 XL, which is a great phone. In my usual fashion of checking out the new features of my phone, I wanted to see how my web sites looked.

The operating system of this phone is the mobile version of Windows 10, and of course it uses the new browser, Edge. Well, it turns out my blog did not look good at all on this new platform and was in fact not even close to workable. Even with the fonts set to the smallest setting, what was displayed were huge letters, so hardly any words fit on a line and it just looked crazy. However, I noticed that other web sites looked just fine, especially the ones I recognized as being built on the Bootstrap framework.

I was also surprised at how many other web sites look bad in this browser, with the same problems I had. I may address some of that in a later post, but right now what I wanted to find out was whether changing the style of this blog would solve my problem. If I just changed the theme or something, could my site look great again? This was all very surprising to me, as I had tested the responsiveness of this site and it always looked good; I just don't know why my new phone made it look so bad.

New Theme, based on Bootstrap

Finding different themes for Hexo was not a problem; there are many of them and most are even free. I am really loving the work I have done with the Bootstrap framework, so when I found a Hexo theme built around Bootstrap, you know I just had to try it. This theme looked great, a much simpler look than what I was using, which was really the default theme with a few customizations. The new theme was also open source, in another GitHub repository. The instructions said to use some sub-module mumbo jumbo to pull the source into the build. Now I was curious, because I had seen something on the build definition when working with git repositories: a simple check box that says include sub-modules. Looks like it is time to find out what git sub-modules are all about.

Welcome to another git light bulb moment.

What is a git sub-module?

A git sub-module was a whole new concept for me as a developer who has, for most of my career, been using a centralized version control system of one sort or another. I looked up the help files for git sub-modules and read a few blog posts, and it can get quite complicated, but rather than going through everything it can do, let me explain how it worked for me to quickly update the theme for my blog. In short, a git sub-module is another git repository that provides source for certain parts of yet another git repository without being part of that repository.
In other words, instead of copying all the source from this other git repository into my existing Blog git repository, my repository just holds a reference to it and pulls that code down so I can use it during my build, both locally and on the build machine. And the nice thing is that it makes it really easy to keep up with the latest changes, because the sub-module handles pulling from that other repository; I don't have to manage it.

I started from my local git repository, and because I wanted this library in my themes folder I navigated to that folder, as this is where Hexo expects to find themes. Then, using posh-git (a PowerShell module for working with git), I entered the following command.

git submodule add https://github.com/cgmartin/hexo-theme-bootstrap-blog.git

This created the folder hexo-theme-bootstrap-blog, downloaded the whole git repository into my local workspace, and added a file called .gitmodules at the root of my Blog git repository. Looking inside the file, it contains the following:

[submodule "themes/bootstrap-blog"]
path = themes/bootstrap-blog
url = https://github.com/cgmartin/hexo-theme-bootstrap-blog.git

When I added these changes to my staging area by using the add command:

git add .

It only added the .gitmodules file, and of course the push only added that file to my remote git repository in TFS. Looking at the code of this Blog repository in TFS, there is no evidence that the theme has been added to the repository, because it has not been. Instead there is this file that tells the build machine, and any other local clone, where to find this theme and how to get it. The only thing left was to change my _config.yml file to use the bootstrap-blog theme and run my builds. Everything works like a charm.
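One thing worth noting: on a fresh clone of the Blog repository the theme folder starts out empty, because sub-module content is not pulled down automatically. A couple of standard git commands bring it in (the clone URL here is only a placeholder), and the sub-modules check box on the build definition does the equivalent on the build machine:

git clone https://youraccount.visualstudio.com/DefaultCollection/_git/Blog
cd Blog
git submodule update --init --recursive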

I really don't think there is any way to do something like this using centralized version control. Hmm, makes me wonder: where else can I use git sub-modules?

Some MSDeploy Tricks I've Learned

In an earlier post I talked about Hexo, the tool I use for this blog. In that post I mentioned how delighted I was with the process, except for one thing that bothered me: the deployment to the Azure website. For that step I was using FTP to push the files from the public folder to Azure. I was hoping instead for an MSDeploy solution, but that is harder than it sounds, especially when you are not using a Visual Studio project and MSBuild to create the application.

In this post I will take you on my journey to find a working solution that enables me to deploy my blog as an MSDeploy package to the Azure website.

What is in the Public Folder

First off, I guess we should talk about what is in this folder I call public. As I mentioned in my Hexo post, the hexo generate command takes all my posts, written in simple markup, creates the output that is my website, and places it in a folder called public.

It is the contents of this folder that I want to create the MSDeploy package from. This is quite straightforward, as I already knew that MSDeploy can not only deploy a package but also create one. It just requires knowing how to call MSDeploy from the command line.

Calling MSDeploy directly via Command Line

The basic syntax to create a package with MSDeploy is to call the program msdeploy.exe with the parameter -verb, where the verb is pretty much always sync. Then you pass the parameter -source, which says where the source is, and finally -dest, which tells it where to place the package, or where to deploy it to if the source is itself a package.
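In skeleton form it looks like this, all on one line and with placeholder paths:

msdeploy.exe -verb:sync -source:contentPath="C:\some\folder" -dest:package="C:\some\package.zip"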

Using Manifest files

MSDeploy is very powerful, with many options and things you can do with it. I have found it difficult to learn because, as far as I can tell, there is no good book or course that takes you into any real depth on this tool. I did come across a blog, DotNet Catch, that covers MSDeploy quite often. It was there that I learned about creating and deploying MSDeploy packages using manifest files.

In this scenario I have a small XML file that says where the content is found; in it I write the path to the public folder on my build machine. I call this file manifest.source.xml:

<?xml version="1.0" encoding="utf-8"?>
<sitemanifest>
  <contentPath path="C:\3WInc-Agent\_work\8\s\public" />
  <createApp path="" />
</sitemanifest>

With the source manifest in place, and the application I want to package sitting in the public folder at the path given above, I just have to call the following command to generate an MSDeploy package. If you are calling this from the command line on your machine, it should all be on one line.

"C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" 
-verb:sync
-source:manifest="C:\3WInc-Agent\_work\8\s\msdeploy\manifest.source.xml"
-dest:package=C:\3WInc-Agent\_work\8\s\msdeploy\blog.zip

If you are calling this from TFS you would use the Command Line task: in the first field, called Tool, you put the path to the msdeploy.exe program, and the remaining argument lines are combined into one line and entered into the Arguments box.
Build Task to Create Package from Manifest file

To deploy that package I need a similar XML file for the destination side, which tells MSDeploy that this package is a sync to a particular website. I called this file manifest.dest.xml:

<?xml version="1.0" encoding="utf-8"?>
<sitemanifest>
  <contentPath path="Default Web Site" />
  <createApp path="Default Web Site" />
</sitemanifest>

The syntax to deploy the blog.zip package using the destination manifest file is:

"C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe"
-verb:sync
-source:package="C:\3WInc-Agent\_work\8\s\msdeploy\blog.zip"
-dest:manifest="C:\3WInc-Agent\_work\8\s\msdeploy\manifest.dest.xml"

This works great, except that I cannot use the XML manifest files when deploying to my Azure website, as I do not have that kind of control over it. It is not a virtual machine I can log onto, or run a remote PowerShell script against, and this package will not deploy to that environment without that. I needed another approach to get this working the way I need it to.

Deploy to Build IIS to create a new MSDeploy package

This next idea is a little strange, and I had to get over the fact that I was configuring a web server on my build machine, but that is exactly what I did. My build machine is a Windows Server 2012 R2 virtual machine, so I turned on the Web Server role through Roles and Features. Then, using the commands above, called from a Command Line task just like the one I used to create the package from the public folder, I deployed the package to the build machine's IIS.

At this point I could even log into the build machine and confirm that I do indeed have a working web site with all my latest posts in it. I then called MSDeploy once more and created a new Blog.zip package from the web site.

"C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe"
-verb:sync
-source:iisApp="Default Web Site"
-dest:package="C:\3WInc-Agent\_work\8\s\msdeploy\blog.zip"

The resulting blog.zip deployed to my Azure website without any issue whatsoever. As you may have noticed, this blog.zip has the exact same name and location as the old one. There was no need to keep the first one, as it was only used to get the site deployed to the build machine so that we could create the package we really want. To make sure that went smoothly, I deleted the old package before calling this last command, which is also a Command Line task in the build definition.
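The cleanup itself is only a one-liner, shown here as a sketch using the same path as the earlier commands:

Remove-Item "C:\3WInc-Agent\_work\8\s\msdeploy\blog.zip" -Force -ErrorAction SilentlyContinue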

Success on Azure Website

In my release definition for the Azure web site deployment I just needed to use the built-in, out-of-the-box task called “Azure Web App Deployment”, point it at the blog.zip file, and tell it the name of my Azure web site; it took care of the rest.

Deploy the zip package to Azure

How I Use Chocolatey in my Releases

I have been using Chocolatey for a while as an ultra-easy way to install software. It has become the preferred way to install tools and utilities from the open source community. Recently I started to explore this technology in more depth, just to learn more about Chocolatey, and found some really great uses for it that I did not expect. This post is about that adventure and how and what I use Chocolatey for.

Built on NuGet

First off, I guess we should talk about what Chocolatey is. It is another packaging technology based on NuGet; in fact it is NuGet with some more features and elements added to it. If you have been around me over the last couple of years, you have heard me declare that NuGet is probably one of the greatest advancements the .NET community has had in the last ten years. Initially introduced back in 2010, it was a packaging tool to help resolve the dependencies in open source software. Even back then I could see that this technology had legs, and indeed it did, as it has solved so many hard development problems that we worked on for years: namely, being able to share code across multiple projects without interfering with the development of the underlying projects that depend on it. I will delve into this subject in a later post, as right now I want to focus on Chocolatey.

While NuGet is really about installing and resolving dependencies at the source code level, as in a Visual Studio project, Chocolatey takes the same package structure and focuses on the operating system. In other words, I can create NuGet-like packages (they even have the same extension, *.nupkg) that I can install, uninstall or upgrade in Windows. I have a couple of utility programs that run on the desktop which I use to support my applications. These utilities are never distributed as part of the application I ship through ClickOnce, but I need up-to-date versions of them on my test lab machines. Getting them installed and kept up to date on those machines has always been a problem. With Chocolatey, it is a problem I no longer have.

Install Chocolatey

Let's start with how to install Chocolatey. If you go to the Chocolatey.org web site there are three ways listed to download and install the package, all of them using PowerShell.
The first one assumes nothing, as it bypasses the execution policy and has the best chance of installing on your system.

@powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))" && SET PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin

This next one is going to assume that you are an administrator on your machine and you have set the Execution Policy to at least RemoteSigned

iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))

Then this last script is going to assume that you are an administrator, have the Execution Policy set to at least RemoteSigned and have PowerShell v3 or higher.

iwr https://chocolatey.org/install.ps1 -UseBasicParsing | iex

Not sure what version of PowerShell you have? Well the easiest way to tell is to bring up the PowerShell console (you will want to run with Administrator elevated rights) and enter the following:

$PSVersionTable

Making my own Package

Okay, so I have Chocolatey installed and a product that I want to install; how do I get the package created? Good question, so let's tackle that next. I start in File Explorer, go to the project, and create a new folder. In my case I was working with a utility program called AgpAdmin, so at the sibling level of that project I made a folder called AgpAdminInstall, and this is where I am going to build my package.

The file structure

Now bring up PowerShell running as an administrator, navigate to the new folder that was just created, and enter the following Chocolatey command.

choco new AGPAdmin

This creates a nuspec file with the same name I entered in the new command, as well as a tools folder containing two PowerShell scripts. There are a couple of ways you can build this package, because the final bits don't even need to be inside it; they can be referenced at other locations from which they are downloaded and installed. There is a lot of documentation and plenty of examples for doing it that way, and I would say most of the Chocolatey packages found on Chocolatey.org do exactly that. The documentation mentions that the binaries can also be embedded, but I never found an example, and that is how I wanted to package this, so that is the guidance I am going to show you here.
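On my machine the generated layout looked roughly like this (the exact script names can vary a little between Chocolatey versions):

agpadmin.nuspec
tools\
    chocolateyinstall.ps1
    chocolateyuninstall.ps1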

Let's start with the nuspec file. This is the file that contains the metadata and says where all the pieces can be found. If you are familiar with creating a typical NuGet spec this should all look pretty familiar, but there are a couple of things to be aware of. In the Chocolatey version of this spec file you must have a projectUrl (in my case it points to my VSTS dashboard page), a packageSourceUrl (in my case the URL of my git repository), and a licenseUrl, which needs to point to a page that describes your license. I never needed these when building a NuGet package, but they are required to get the Chocolatey package built. One more thing we need for the nuspec file to be complete is the files section, where we tell it what files to include in the package.

There will be one entry there already, which includes everything found in the tools folder and places it under tools in the NuGet package structure. We want to add one more file entry with a relative path to the setup file that is being built: up one folder and then down three folders through the AGPAdminSetup tree, with the target again being tools in the package structure. This line is what embeds my setup program into the Chocolatey package.

<?xml version="1.0" encoding="utf-8"?>
<!-- Do not remove this test for UTF-8: if “Ω” doesn’t appear as greek uppercase omega letter enclosed in quotation marks, you should use an editor that supports UTF-8, not this one. -->
<package xmlns="http://schemas.microsoft.com/packaging/2015/06/nuspec.xsd">
  <metadata>
    <!-- Read this before publishing packages to chocolatey.org: https://github.com/chocolatey/chocolatey/wiki/CreatePackages -->
    <id>agpadmin</id>
    <title>AGPAdmin (Install)</title>
    <version>2.2.0.2</version>
    <authors>Donald L. Schulz</authors>
    <owners>The Web We Weave, Inc.</owners>
    <summary>Admin tool to help support AGP-Maker</summary>
    <description>Setup and Install of the AGP-Admin program</description>
    <projectUrl>https://donald.visualstudio.com/3WInc/AGP-Admin/_dashboards</projectUrl>
    <packageSourceUrl>https://donald.visualstudio.com/DefaultCollection/3WInc/_git/AGPMaker-Admin</packageSourceUrl>
    <tags>agpadmin admin</tags>
    <copyright>2016 The Web We Weave, Inc.</copyright>
    <licenseUrl>http://www.agpmaker.com/AGPMaker.Install/index.htm</licenseUrl>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
  </metadata>
  <files>
    <file src="..\AGPAdminSetup\bin\Release\AGPAdminSetup.exe" target="tools" />
    <file src="tools\**" target="tools" />
  </files>
</package>

Before we move on to the automated steps, so that we don't even have to think about building this package every time, we need to make a couple of changes to the PowerShell script found in the tools folder. When you open this script you will see it is well commented, and the variable names are pretty clear about what they are for. You will notice that, out of the box, it expects you to provide a URL from which it can download the program to install. I want to use the embedded approach instead, so un-comment the first $fileLocation line and replace 'NAME_OF_EMBEDDED_INSTALLER_FILE' with the name of the file you want to run; I will also assume you have it in this same tools folder (inside the compiled nupkg file). In my package I created an install program using the WiX Toolset, which also gives it the ability to uninstall itself automatically. Next, I commented out the default silentArgs and validExitCodes found right under the #MSI comment. There is a long run of commented lines that all start with #silentArgs; what I did was un-comment the last one and set the value to '/quiet', then un-comment the validExitCodes line right below it, so the lines look like this:

silentArgs = '/quiet'
validExitCodes= @(0)

That is really all there is to it; the rest of the script should just work. There are a number of different cmdlets you can call, and they are all shown, fairly well commented, in the chocolateyinstall.ps1 file that appeared when you ran the choco new command. I was creating a Chocolatey wrapper around an install program, so I chose the cmdlet Install-ChocolateyInstallPackage. To summarize, ignoring the commented lines, the finished PowerShell script looks a lot like this:

$ErrorActionPreference = 'Stop';

$packageName  = 'MyAdminProg' # arbitrary name for the package, used in messages
$toolsDir     = "$(Split-Path -parent $MyInvocation.MyCommand.Definition)"
$fileLocation = Join-Path $toolsDir 'MyAdminProgSetup.exe'

$packageArgs = @{
  packageName    = $packageName
  unzipLocation  = $toolsDir
  fileType       = 'EXE' #only one of these: exe, msi, msu
  url            = $url
  url64bit       = $url64
  file           = $fileLocation
  silentArgs     = '/quiet'
  softwareName   = 'MyAdminProg*' #part or all of the Display Name as you see it in Programs and Features. It should be enough to be unique
  checksum       = ''
  checksumType   = 'md5' #default is md5, can also be sha1
  checksum64     = ''
  checksumType64 = 'md5' #default is checksumType
}

Install-ChocolateyInstallPackage @packageArgs

One thing we did not cover in all this is the fileType value. This will be exe, msi or msu, depending on how you created your setup file. I took the extra step in my WiX install program of creating a bootstrapper, which takes the initial msi, checks prerequisites such as the correct version of the .NET Framework, and turns it into an exe. You will need to set this value to match the type of install program you want to run.

Another advantage of using an installer package is that it knows how to uninstall itself. That means I did not need the other PowerShell script in the tools directory, the chocolateyuninstall.ps1 file. I deleted mine so that the package uses the automatic uninstaller that is managed and controlled by Windows (MSI). If this file exists in your package then Chocolatey is going to run it, and if you have not set it up properly it will give you issues when you run the choco uninstall command for the package.

Automating the Build in TFS 2015

We want to make sure we put the tools folder and the nuspec file into source control. Besides giving us a place where we can repeat this process and keep track of any changes, this lets us automate the entire operation. The goal here is that checking in a code change to the actual utility program kicks off a build, creates the package, and publishes it to our private Chocolatey feed.

To automate the building of the Chocolatey package I started with a build definition I already had that was building all these pieces: it built the program, created the AGPAdminPackage.msi file, and then turned that into a bootstrapper, giving me the AGPAdminSetup.exe file. Our nuspec file already says where to find the finished AGPAdminSetup.exe so that it gets embedded into the finished .nupkg file. Just after the steps that compile the code and run the tests, I add a PowerShell task, switch it to run inline, and write the following script:

# You can write your powershell scripts inline here. 
# You can also pass predefined and custom variables to this scripts using arguments

cpack

This command finds the .nuspec file and creates the .nupkg in the same folder as the nuspec. From there I copy the pieces I want in the drop into the staging workspace $(Agent.BuildDirectory)\b, and in the Copy Publish Artifacts step I just push everything in staging.

Private Feed

Because Chocolatey is based on NuGet technology it works on exactly the same principle of distribution, which is a feed, though it could also be a network file share. I chose a private feed because I need something I can access from home, from the cloud, and when I am on the road. You might be in the same or a similar situation, so how do you set up a Chocolatey server? With Chocolatey, of course.

choco install chocolatey.server -y

On the machine where you run this command, it creates a chocolatey.server folder inside a tools folder off the root of the drive. Point IIS at this folder and you have a Chocolatey feed ready for your packages. The packages themselves go into the App_Data\packages folder inside this ready-made chocolatey.server site. I will assume, however, that this server is not sitting right next to you but on a different machine or even in the cloud, so you will want to push your packages to it. To make that work you need to give the site's application pool Modify permission on the App_Data folder.
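One way to grant that permission is with icacls from an elevated prompt on the feed server; the application pool name here is an assumption, so substitute whichever pool you bound to the chocolatey.server site:

icacls "C:\tools\chocolatey.server\App_Data" /grant "IIS AppPool\chocolatey.server":(OI)(CI)M

Back in the build definition, after the Copy Publish Artifacts step, add another inline PowerShell script and this time call the following command: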

# You can write your powershell scripts inline here. 
# You can also pass predefined and custom variables to this scripts using arguments

choco push --source="https://<your server name here>/choco/" --api-key="<your api key here>" --force

That is really it: you have a package in a feed that can be installed and upgraded with a simple Chocolatey command.

Make it even Better

I went one step farther to make this even easier: I modified the Chocolatey configuration file so that it looks in my private repository first, before the public one that is set up by default. This way I can install and upgrade my private packages just as if they were published and exposed to the whole world, even though they are not. You find the chocolatey.config file in the C:\ProgramData\Chocolatey\config folder. When you open the file you will see an area called sources, probably with one source listed. Just add an additional source element, give it an id (I called mine Choco), set the value to the location of your Chocolatey feed, and set the priority to 1. That is it, but you do need to do this on every machine that is going to be getting your program and its latest updates.
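The added entry looks something like this (the exact attribute set varies between Chocolatey versions; id, value and priority are the ones that matter here, and the value is your feed URL from earlier):

<source id="Choco" value="https://<your server name here>/choco/" priority="1" />

Now, whenever a build is about to run tests on a virtual machine, a simple PowerShell script can do the install or upgrade for you: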

choco install agpadmin -y
Write-Output "AGPAdmin Installed"

choco upgrade agpadmin -y
Write-Output "AGPAdmin Upgraded"

Start-Sleep 120

The program I am installing is called agpadmin, and I pass -y so that it skips the confirmation, since this is almost always run as part of a build. I call both the install and then the upgrade because a single command does not seem to do both; the install is simply ignored if the package is already installed, and the upgrade then picks up any newer version that is out there.

Hope you enjoy Chocolatey as much as I do.

Who left the Developers in the Design Room

This post is about something that has been bugging me for quite a while. I have been quiet about it, aside from starting the conversation with different people at random, and now it is finally time to say my piece. Yes, this is soapbox time, so I am just going to unload here. If you don't like this kind of post, I promise to be more joyful and uplifting next month, but this month I am going to lay it out there and it might sound a bit harsh.

Developers are bad Designers

I come from the development community, with over 25 years spent on the craft. I originally got there because I was tired of bad workflows and interfaces built by people who thought they understood how accounting should work but just did not. I implemented a system that changed my workload from 12-hour days plus some weekends to getting everything done in ten normal days. Needless to say I worked my way out of a job, but that was okay, because it led me to opportunities that really allowed me to be creative. You would think that with a track record like that I should be able to design very usable software and be a developer, right?

It turns out that being a developer has given me developer characteristics, and that is that we are a bit geeky. As a geeky person you tend to like having massive control and clicking lots of buttons, but that might not be the best experience for a user who is just trying to get their job done. I once made the mistake of asking my wife, who was the Product Owner of a little product we were building, what the message should be when the user confirms that they want to save a Student. Her remarks threw me off guard for a moment when she asked: why do I need a save button? I made the change, so just save it; don't have a button at all.

Where’s the Beef

Okay, so far all I have enlightened you with is that I am not always the best designer, which is why I have gatekeepers like my wife who remind me every so often that I am not thinking about the customer. However, I have noticed that many businesses have been revamping their websites with what looks like a focus on mobile. I get that, but the end result is that it is harder for me to figure out how to use their sites, and some things that I was able to do before are just not possible anymore. You can tell right away that the changes were not based on how a customer might interact with the site; I don't think the customer was even considered.

One rule I always try to follow, and this is especially true for an eCommerce site, is that you need to make it easy for the customer if you want them to buy. Some of the experiences I have had lately almost leave you convinced that they don't want to sell their products or do business with me. For some of these I have sought out different vendors because the frustration level is just too high.

Who Tests this Stuff?

That leads right into my second peeve: no one seems to test this stuff. Sure, the developer probably tested their work for proper functionality, and there might even have been a product owner who understood the steps to take after talking to the developer and proved to him or herself that the feature was working properly. That is not testing, my friend; both of these groups test applications the very same way, and it's called the happy path. No one is thinking about all the ways a customer may expect to interact with the new site, especially when you have gone from an older design to a new one. No one thought of that, and now your sales numbers are falling because no one knows how to buy from you.

Testers have a special gene in their DNA that gives them the ability to think about all the ways a user may interact with the application, and even to attempt to do evil things with it. You want these kinds of people on your side; it is better to find a problem while the application is still under development than to have a customer find it, or worse yet to get hacked, which could really cost you financially as well as in trust.

In my previous post, “Let the Test Plan Tell the Story”, I laid out the purpose of the test plan. It is the report we can always go back to and see what was tested, how much of it was tested, and so on. I feel that the rush to get a new design out the door is hurting the future of many of these companies, because they are taking shortcuts: not designing these sites with the customer in mind and eliminating much of the much-needed testing. At least that is how it seems to me; my opinion.

Let the Test Plan Tell the Story

This post is the result of some discussions I had recently while trying to determine the workflow for a client. The topic has come up with others in the past, but what I had never used as an argument before was the role of the test plan in all this. Besides being an eye-opener and an aha moment for the client and myself, I thought I would explore the idea a little more, as others might also find it helpful in understanding and getting better control of their flows.

What is this flow?

There is a flow to the way software is developed and tested, no matter how you manage your projects. Things typically start from some sort of requirement work item that describes the business problem, what the client wants to do, and the benefit the client would receive if it were implemented. Yes, I just described the basics of a user story, which is where we should all be by now when it comes to software development. The developers, testers and whoever else is contributing to making this work item a reality start breaking the requirement down into the tasks they are going to work on to make it happen.

The developers get to work writing the code and completing their tasks, while the testers start writing the test cases they will use to prove whether the new requirement is working as planned or simply is not working. These test cases all go into a test plan that represents the current release you are working on. As the developers complete their coding the testers start testing, and any test cases that are not passing go back to the developers for re-work. How this is managed depends on how the teams are structured. Typically in a scrum team, where developers and testers are on the same team, this would be a conversation, and the developer might just add more tasks because this is work that got missed. In situations where the flow between developers and testers is still a separate hand-off, a holdover from the waterfall days, a bug might be filed that goes back to the developers, and you follow that through to completion.

As the work items move from the business to the developers they become Active. When the developers are code complete the work items become Resolved, and as the testers confirm that the code is working properly they become Closed. Any time a work item is not really resolved (developer wishful thinking) the state moves back to Active. In TFS (Team Foundation Server) there is an out-of-the-box report called Reactivations which tracks the work items that moved from Resolved or Closed back to Active. A lot of reactivations is the first sign that there are serious communication problems between development and test.

With all the Requirements and Bugs Closed How will I know what to test?

This is where I find many teams start to get a little weird and overcomplicate their workflows. I have seen far too many clients take the approach of adding extra states that say where the bug is by including the environment it is being tested in. For instance, they might have something that says Ready for System Testing or Ready for UAT and so on. Initially this might sound sensible and like the right thing to do. However, I am here to tell you that it is not beneficial: it loses the purpose of the states, and this workflow will drown you in the amount of work it takes to manage. Let me tell you why.

Think of the state as a measure of how far along that requirement or bug is. It starts off as New or Proposed, depending on your process template, and from there we approve it by changing the state to Approved or Active. Teams that use Active in their workflow don't start working on an item until it is moved into the current iteration. Teams whose process moves it to Approved also move it into a current iteration to start working on it, but they then change the state to Committed when work begins. At code completion the Active ones go to Resolved, where the testers begin their testing and, if satisfied, close the work item. In the Committed group the developers always work very closely with the testers, who have been testing all along, so when the test cases pass the work item moves to Done. The work on these items is done, so what happens next is that we take the build that represents all the completed work and move it through the release pipeline. Are you with me so far?

This is where I typically hear confusion, as the next question is usually something like this: if all the requirement and bug work items have been closed, how do we know what to test? The test plan, of course; this should be the report that tells you what state these builds are in. It should be from this one report, the results of the test plan, that we base our approvals for the build to move on to the next environment and eventually to production. Let the test plan tell the story. From the test plan we can not only see how the current functionality is working and whether it matches our expectations, but there should also be a certain amount of regression testing going on to make sure features that worked in the past are still working. We get all that information from this one single report, the test plan.

Test Plan Results

The Test Impact Report

As we test the various builds throughout the current iteration, with new requirements completed and bugs fixed, the testers run the test cases to verify that this work truly is completed. If you are using Microsoft Test Manager (MTM), this is a .NET application, and you have turned on the test impact instrumentation through the test settings, you get the added benefit of the Test Impact report. In MTM, as you update the build you are testing, it compares it to the previous build and to what has been tested before. When it detects that code has changed near code you previously tested (and probably passed), it includes those test cases in the Test Impact report as tests you might want to rerun, just to make sure the changes do not affect your passing tests.

Test Impact Results

The end result is that we have a test plan that tells the story of the quality of the code written in this iteration and specifically identifies the build we might want to push into production.

Living on a Vegan Diet

In all the blog posts that I have written over the years I have never talked about health or a healthy lifestyle. This will be a first, and as a technology person you might be wondering what living a vegan lifestyle has to do with software. After all, the blog title is "Donald on Software".

For years I would go through these decade birthdays and just remark how turning thirty was just like turning twenty, except I had all the extra knowledge called life. Going from thirty to forty, same thing, but things took a turn when I moved into my fifties. Doctors noticed that my blood pressure was a bit elevated. I took longer to recover from physical activities. I felt aches I had never noticed before, and I had promised my wife that I would live a long, long time, which wasn't feeling all that convincing. I didn't have the same get up and go that I had known before.

A Bit About My Family

My wife and stepdaughter have been vegetarian/vegan for many years. I was open to plant-based meals and would eat them on occasion when we were at a vegan restaurant or when that was what was being cooked at home. However, I travel a lot, so most of my food came from restaurants where I could eat anything I wanted. This went on for several years while I was taking a mild blood pressure pill every day. It kept my blood pressure under control, but there were other things that it appeared to be affecting in a negative way as well.

The Turning Point for Me

During Thanksgiving weekend in November 2014, Mary (my wife) and I watched a documentary on Netflix called "Forks over Knives", and at the end of it I vowed never to eat meat again and to start moving towards a vegan lifestyle.
The documentary is about two doctors, one from the medical field and one from the science side of things, and their adventure unravelling the truth about how the food we eat relates to our health. One of the biggest studies ever done is called "The China Study", a 20-year study that examines the relationship between the consumption of animal products (including dairy) and chronic illnesses such as coronary heart disease, diabetes, breast cancer, prostate cancer and bowel cancer.

The promise is not only reducing those numbers, but that once the toxic animal products are out of our system, our bodies start to repair some of the damage that we have always been told could never be repaired naturally.

Getting over the big Lie

Yes, there is a very large lie that we have all believed to be the truth, because we assumed it came from the medical field and was sanctioned by the government: the daily nutritional guide. This is the guide that told us to eat large amounts of meat and dairy products to give us energy and strong bones, but it did not come from any medical study; it came from the agriculture and dairy industries to sell more products.

Our bodies reject most of the animal protein we take in; only a very small amount is actually used. Common sense tells me that if my body is rejecting all this animal-based protein, it is working extra hard, and something is going to break down in the form of disease and other difficulties, especially as we get older. Oh wait, they now make a pill for that, so we can continue to live the way we always have. So now we are not only supporting an industry that never had that big a market before, we are also spending billions of dollars every year with pharmaceutical companies to correct the mistakes we made with the things we eat. One thing I did learn in physics is that every action creates an equal and opposite reaction, so this is not solving anything either; it just keeps making things worse, and now health care costs are through the roof for bodies that normally know how to heal themselves.

Now for the Good News

I know I have got you all depressed and disappointed because I just dissed your favorite food and called it bad and toxic, but there is a happy ending here. I felt like you do right now for about five minutes, and then decided to say "NO to meat". If you get a chance I would encourage you to look up that documentary "Forks over Knives", as one other thing that disturbed me was the way these animals were harvested while calling it ethical or within the approved guidelines. These animals were under stress, that stress goes into the meat, and you wonder why everyone seems so stressed; I know there is a relationship here.

Anyway, the good news is my latest checkup with my doctor. I am currently on no medication whatsoever, and my blood pressure numbers are very normal and very impressive for a guy my age. I did a stress test, reached my ideal heart rate easily and effortlessly, and I feel great. If I had any plaque buildup, it is certainly repairing itself. I still can't seem to lose the 15 pounds I have been working on for the last couple of years, but I know I will accomplish that soon enough. I am done with meat and all animal proteins, as in milk, eggs and honey, and I am going to live a long, long time and feel great. Won't you join me?

Migrate from TFVC to Git in TFS with Full History

Over the last year or so I have been experimenting with and learning about git. The more I learned about this distributed version control system, the more I liked it, and finally about 6 months ago I moved all my existing code into git repositories. They are still hosted on TFS, which is the best ALM tool on the market by a very, very, very long mile. Did I mention how much I love TFS and where this product is going? Anyway, back to my git road map, as this road is not as simple as it sounds because many of the concepts are so different and at first even seemed a bit weird to me. After getting my head around the concepts and the true power of this tool, there was no turning back. Just to be clear, I am not saying that the old centralized version control known as TFVC is dead; by no means. There are some things that I will continue to use it for and probably always will, like my PowerPoint slides and much of my training material.

Starting with Git

One thing about git is that there is just an enormous amount of support for it, and its availability on practically every coding IDE for every platform is just remarkable. What really made the migration simple for me was an open source project on CodePlex called Git-TF. In fact, the way I originally used this tool was that I made a separate TFS Project with a git repository. I would work in that new repository, with some CI builds to make sure things kept working, and then when I finished a feature I would push it back to TFVC as a single changeset. Because I always link my commits to a work item in the TFVC project, this had a side effect I was not expecting: if you opened the work item you would see the commits listed in the links section. Clicking on a commit link would open the code in compare mode against the previous commit so you could see what changes were made. Of course this only works if you are looking at work items from web access.
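Just to make that concrete, a push back to TFVC with Git-TF looked roughly like the sketch below. Treat it as an illustration only: the work item id and the message are made up, and by default git-tf squashes your local commits into a single changeset (you would pass --deep if you wanted one changeset per commit).

git-tf checkin --message "Feature complete" --associate=1234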

Git-TF also has some other uses, and one of those is the ability to take a folder from TFVC and convert it into a git repository with full history. That is what I am going to cover in this post. There are some rules to this that I would like to lay down here as best practices, because you don't want to just take a whole TFVC repository and turn it into one big git repository; that is just not going to work. One of the things to get your head around with git is that repositories need to be small and should stay small: remember that you are not getting latest when you clone a repository, you are getting the whole thing, which includes all the history.

Install Git-TF

One of the easiest ways to install Git-TF on a Windows machine is via Chocolatey, since it will automatically wire up the PATH for you.

choco install git-tf -y

If you don't have Chocolatey, or you just don't want to use this package management tool, you can follow the manual instructions on CodePlex: https://gittf.codeplex.com/

Clean up your Branches

If you have been a client of mine or have ever heard me talk about TFS, you will certainly have heard me recommending one collection and one TFS Project. You will also have heard me talk about minimizing the use of branches to only when you need them. If you have branches going all over the place and code that has never found its way back to main, you are going to want to clean this up, as we are only going to clone main for one of these solutions into a git repository. One of the things that is very different about the git-enhanced TFS is that a single TFS project can contain many git repositories. In fact, starting from TFS 2015 Update 1 you can have centralized version control (TFVC) and multiple git repositories in the same TFS project, which totally eliminates the need to create a new TFS project just to hold the git repositories. We can move the code with full history into a git repo of the same project we are pulling from.
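If you do need to fold a stray branch back into main before converting, one way to do it is with the stock tf.exe commands from a workspace mapped to main; the server paths below are just placeholders for your own branch structure, so treat this as a sketch rather than a recipe.

tf merge $/MyBigProject/MyFeatureBranch $/MyBigProject/MyMainBranch /recursive
tf checkin /comment:"Fold feature branch back into main"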

In our example we are pulling into the git repository from the solution level, as that is how most people using Visual Studio have worked for decades. However, the git-ideal view would be to go even smaller, to a single project per repository, and stitch the dependencies on all the other projects together through package management tools like NuGet. That is out of scope for this post, but I will delve into it in a future one.

Clone

Now that we have a nice clean branch to create your git repository from, it is time to run the clone command from the git-tf tool. From the command line, make a nice clean directory and then change into that directory, as this is where the clone will appear. Note: if you don't use the --deep switch you will just get the latest tip and not the full history.

mkdir C:\git\MySolutionName
cd c:\git\MySolutionName
git-tf clone https://myaccount.visualstudio.com/DefaultCollection $/MyBigProject/MyMainBranch --deep

You will then be prompted for your credentials (Alt credentials if using visualstudio.com). Once accepted, the download will begin and could take some time depending on the length of your changeset history or size of your repository.

Prep and Cleanup

Now that you have an exact replica of your team project branch as a local git repository, it's time to clean up some files and add some others to make things a bit more git friendly.

  • Remove the TFS source control bindings from the solution. You could do this from within Visual Studio, but it's just as easy to do it manually. Simply remove all the *.vssscc files and make a small edit to your .sln file in your favorite text editor, removing everything from GlobalSection(TeamFoundationVersionControl) through EndGlobalSection (see the sketch after this list for roughly what that section looks like).
  • Add a .gitignore file. It's likely your Visual Studio project or solution will have some files you won't want in your repository (packages, obj, etc.) once your solution is built. A nearly complete way to start is by copying everything from the standard VisualStudio.gitignore file into your own repository. This will ensure the build-generated files, packages, and even your ReSharper cache folder will not be committed into your new repo. If all you used to sling your code was Visual Studio, that would be that. However, with so much of our work now moving into more hybrid models where we might use several different tools for different parts of the application, trying to manage this .gitignore file can get pretty complicated. Recently I came across an online tool at https://www.gitignore.io/ where you pick the OS, IDEs or programming languages and it will generate the .gitignore file for you.
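For reference, the block you are deleting from the .sln file typically looks something like the snippet below; the project count, provider GUID and project names will of course differ per solution, so this is only an illustration.

GlobalSection(TeamFoundationVersionControl) = preSolution
	SccNumberOfProjects = 1
	SccEnterpriseProvider = {provider-guid-here}
	SccProjectUniqueName0 = MyProject\\MyProject.csproj
	SccLocalPath0 = .
EndGlobalSection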

Commit and Push

Now that we have a local git repository, it is time to commit the files, add the remote (back to TFS), and push the new branch (master) back to TFS so the rest of my team can clone it and continue to contribute to the source, which will have the full history of every check-in that was done before we converted it to git. From the root, add and commit any new files, as there may have been some changes from the previous Prep and Cleanup step.
git add .
git commit -a -m "initial commit after conversion"

We need a git repository on TFS to push this repository to. So from TFS, in the project where you want this new repository:

Create a new Repository
  1. Click on the Code tab
  2. Click on the repository dropdown
  3. Click on the big "+" sign to create a new repository.
Name your Repository
  1. Make sure the type is Git
  2. Give it a Name
  3. Click on the Create button.
Useful Git Information

The result page gives you all the information that you need to finish off your migration process.

  1. This command adds the remote address to your local repository so that it knows where to put it.
  2. This command will push your local repository to the new remote one.
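If you would rather type them yourself than copy them from the page, the two commands amount to something like this; the account, collection and repository names just reuse the earlier examples and will differ for your project.

git remote add origin https://myaccount.visualstudio.com/DefaultCollection/MyBigProject/_git/MySolutionName
git push -u origin --all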

That’s it! Project published with all history intact.

A New Start on an Old Blog

It has been quite a while since my last blog post, so today I thought I would bring you up to speed on what I have been doing with this site. The last time I did a post like this was back in June of 2008. Back then I talked about the transition from City Desk to Microsoft Content Management System, which eventually was merged into SharePoint, and from there we changed the blog over to DotNetNuke.

Since that time we have not created any new content, but we moved the material to BlogEngine.Net, which really is a great tool but not the way I wanted to work. I really do not want a content management system for my blog; I don't want pages that are rendered dynamically with content pulled from a database. What I really want are static pages, with the content for those pages stored and built the same way I build all my software: in version control.

Before I move on and tell you more about my new blog workflow, I thought I would share a picture from my backyard. That tree on the other side of the fence is usually green and does not change colors every fall, but this year the weather has been cooler than usual. So yes, we sometimes do get fall colors in California, and here is the proof.

Hexo

Hexo is a static page generator that takes simple markdown and turns it into static HTML pages. This means I can deploy it anywhere from a build, generated just like a regular ALM build, because all the pieces are in source control. It fully embraces git and is an open source project on GitHub. I thought that moving my blog to Hexo would help me in two ways: besides giving me the output I am really looking for, it also works as a teaching tool on how the new build system that is part of TFS 2015 fully embraces technologies outside of .NET and the Visual Studio family. From here I check my new posts into source control, which triggers a build that puts the generated site into a drop folder, which is then deployed to my web site hosted on Azure.

As of this post I am deploying the web site with FTP from a PowerShell script, which is not ideal. I am working on creating an MSDeploy package that can be deployed directly onto the Azure website hosting this blog.

The Work Flow

The process begins when I want to start a new post. Because my git repositories are available to me from almost any computer that I am working on, I go to the local workspace of my blog git repository, check out the dev branch, and at the command line enter the following command:

hexo new Post "A New Start on an Old Blog"

This will place a new .md file in the _posts folder with the same name as the title, but with the spaces replaced by hyphens ("-"). After that I like to open the folder at the root of my blog workspace in Visual Studio Code. The thing I like about using Visual Studio Code as my editor is that it understands simple markdown and gives me a pretty good preview as I am working; if my screen is wide enough I can even have one half of the screen for typing the raw markdown and the other half to see what it looks like.
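For reference, the generated file starts with a small block of front matter that Hexo fills in from its post scaffold; it typically looks something like the lines below (the date shown here is just an illustration), and everything after the closing dashes is the markdown for the post itself.

title: A New Start on an Old Blog
date: 2015-11-14 09:30:00
tags:
---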

The other thing that I like about this editor is that it understands and talks git. I can edit my files and save them, and Visual Studio Code will inform me that I have uncommitted changes so I can stage them, commit them to my local repository, and push them to my remote git repository. Above you may have noticed that before I began this process I checked out the dev branch, which means I do not write my new posts in the master branch. The reason is that I have a continuous integration trigger on the build server watching for anything checked into master on the remote git repository. Because I might start a post on one machine and finish it on another, I need some way to keep everything in sync, and that is what I use the dev branch for. Once I am happy with the post I merge the changes from dev into master, and that begins the build process.
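The merge itself is nothing fancy; from the local repository it is roughly the commands below (the branch names just match the workflow described above, and the push to master is what fires the CI trigger).

git checkout master
git merge dev
git push origin master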

Publishing the Post

Once I am happy with my post, all I need to do is merge the dev branch into master, and this starts the build process. The build is really just another Hexo command run against my source, which generates all the static pages, JavaScript, images and so on and puts them into a public folder.

hexo generate

It is the content of this folder that becomes my drop artifacts. Because Release Manager also has a CI trigger, once the build has succeeded it begins a release pipeline to get this drop onto my web site. My goal is to get this wrapped up into an MSDeploy package that can be deployed directly onto my Azure web site; I am still working on that and will provide a more detailed post on what I needed to do to get it to happen. In the meantime, I need to make sure that my test virtual machine is up and running in Azure, as one of the first things the release pipeline does is copy the contents of the drop onto this machine. Then it calls a Coded UI test, which really isn't testing anything; it just runs my PowerShell script that FTPs the pages to my Azure web site. It needs to do this as a user, and the easiest way without me having to do it manually is to have the Coded UI test run it to completion.

Summary

So there you have it: my blog is in source control, so I have no dependency on a database, and all the code to generate the web site along with my content pages are in source control, which makes it really easy if I ever need to move to a different site or location, or rebuild after a really bad crash. As an ALM guy I really like this approach, and what would be even better is a pre-production staging site to go over the site and give it a last and final approval before it goes live to the public.