Donald On Software

Just my thoughts, my interests, my opinions!

Publishing a Power BI Report to an Azure DevOps Dashboard

I have been working with Power BI lately using data from my Azure DevOps Service. A number of pre-built views are provided for us that you can use in both the cloud and the on-premises version of Azure DevOps Server (Update 1 and higher). But what really got me going was a set of templates available in the Marketplace called FlowViz. Basically, when you install this extension it gives you two Power BI templates, one for Scrum and one for Agile. You load the data using an OData connection and you get a beautiful report of 5 pages with about 4 charts on each page. It really is amazing.

Although this is all pretty cool, I am viewing this from within the Power BI Desktop tool. Where I would like to view this report is on a dashboard page on my Azure DevOps instance. That is what this post is all about: getting the report from the Power BI Desktop tool onto an Azure DevOps dashboard page.

  1. First we will start with downloading the FlowViz template from the Marketplace. This will consist of two .pbit files, which are Power BI templates. The site has some pretty good documentation on getting this set up for your environment, so I won’t go over that information here. Follow the instructions on the Marketplace page.
  2. There is one thing that did take me a little while to figure out and that was the authentication with an OData connection string. The trick is to use the Basic Authentication tab. For the user name I just put in a junk value because that part doesn’t even matter, but I use a Personal Access Token (PAT) for the password.
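Under the hood this is plain HTTP Basic authentication: the user name and PAT are base64-encoded together, and the service only checks the PAT half of the credential. A small Python sketch (the PAT value here is obviously a placeholder) shows why any junk user name works:

```python
import base64

def basic_auth_header(pat: str, username: str = "") -> str:
    """Build the Basic Authorization header value used when
    authenticating to Azure DevOps OData feeds with a PAT.
    The username half is ignored by the service, so it can be
    anything (including empty)."""
    credential = f"{username}:{pat}".encode("ascii")
    return "Basic " + base64.b64encode(credential).decode("ascii")

# Any username works -- only the PAT part of the pair matters.
print(basic_auth_header("my-placeholder-pat", username="junk"))
```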
  3. The charts will fill up like magic because the formulas and the data that we are pulling in are all set up to give us some very unique and powerful data about our work items.
  4. The next step is to publish the report to your Power BI account in the cloud. When you click on the File menu bar at the top it opens a selection of menu items. Click on the “Publish” menu item. This opens another window with a button that says “Publish to Power BI”. This will publish your report up to your Power BI in the cloud account, assuming that you have one. There are free and paid plans available. I am doing all this using the free plans, so if you don’t have an account then set one up.
  5. After you click on the “Publish to Power BI” button you will see “Select a destination”, which should default to “My workspace”. If you are using the free community version, this is the only workspace that you can have. Click on the Select button.
  6. After it finishes publishing up to the Power BI cloud you should see a link that says to open this report in Power BI. Click on this link and it will take you to your Power BI account in the cloud.
  7. Now that we have the report on the web version of Power BI it is time to share it with a dashboard on Azure DevOps. Right now everything looks like it did in Power BI Desktop except that it is in the web. Under the File menu you should see an item called “Publish to web”. Click on it and a dialog box pops up with a link that includes a token, and an HTML snippet that you can paste into a blog or a website in the form of an iframe. We are going to use the first one, the URL, so copy that whole line.
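For reference, the second option, the HTML snippet, looks something like this (the src value here is a made-up placeholder; the real one contains your own report token):

```html
<iframe width="800" height="600"
        src="https://app.powerbi.com/view?r=YOUR-REPORT-TOKEN-HERE"
        frameborder="0" allowFullScreen="true"></iframe>
```

Either way, the important part for the next step is the full URL in the src attribute.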
  8. In order for your dashboard to display the contents of your URL and token we need to install the Iframe Dashboard Widget from the Marketplace.
  9. You will want to start with an empty new dashboard in Azure DevOps as this report will take up the whole page. Add the Iframe Dashboard Widget that you installed onto your Azure DevOps instance to your dashboard. In the configuration just paste in the URL with the token that we copied in a previous step and set the width and height. I set my width to 8 and my height to 6 and it fit in there perfectly.
  10. Now you have your report showing up in an Azure DevOps dashboard and all the functionality works as well: you can go from page to page, and all the data is there.

How I Work With Work Item Templates

Azure DevOps (aka TFS, VSTS) has always had the ability to save the content of a work item as a template. In older versions of Team Foundation Server this was more obvious once you downloaded tools like the “TFS PowerTools”; then templates were right in your face. Today in Azure DevOps they still exist, they are just a little harder to find, and you can have a different set of templates for each team.

I have taken advantage of this team separation and, together with the Azure DevOps extension “1-Click Child-Links”, use it to create some repeatable tasks that happen every time I create a new blog post and want to get that work done within my current sprint. This makes it really easy for me to see my burn-down charts and get a clear idea of where I am in the process of getting another post out there.

Where Do I Find the Templates?

This all begins on the Project Settings page. Start by being on the Project where you want to add these templates.

In my case I have a bunch of Projects that are used for experiments and I do all my development and blog posts in one Project.

Then at the bottom on the left hand side of the screen you should see an icon that looks like a gear.

Click on this and it will open the Project Settings. Before we get too far, you will need to be a Project Administrator of this project to implement these changes.

Next, on the Project Settings list you should see a menu item called Team configuration just under the Boards main heading. When you click on that, a settings page for teams will show up on the right. Before you get too excited and click on the Templates sub menu that you can see, make sure that the correct team has been selected first. This is done with what may not seem obvious: a dropdown control at the end of the navigation breadcrumb. Select the team and then click on the Templates sub menu.

Why Am I Writing Blogs in a Git Repo?

In case you haven’t been following me on a regular basis you might be wondering what is going on here. I am talking about managing my blog posts as if it were software development, and you might be wondering why. This web site content is all in a git repo and it goes through a normal CI/CD pipeline just like it would if I were writing software. The only difference is that instead of compiling down to a dll or an exe, I am generating a static web site that then gets delivered to my web server in test and to an app service in Azure where you are reading this. There are many static web site generation tools out there that do this kind of thing; the one I am using is called Hexo. You can see more about the beginnings of this journey in an earlier blog post I wrote: A New Start on an Old Blog

Creating The Templates

Before we get into creating the templates let me explain what it is that I am trying to accomplish here. Whenever I come across a topic that I think would make a good blog post I create a Product Backlog Item (PBI); this would be the equivalent of a User Story if you are using the Agile process template or a Requirement if you are using the CMMI process template. From there I have about 7 tasks that are exactly the same, with the same amount of estimated time, to complete, publish and get the word out that I have a new blog post out there.

What I want are seven (7) child task work items that can be created automatically for me from the PBI that contains the topic name. The screen shows all the various work item types that you can create a template for. I want to create tasks, so make sure that “Task” is selected. Then click on the “New template” button.

This will open a New template window. The first thing you will want to do is give your step a name. This is so that you know what the step is for; the task will not be named this. To keep them in order I number mine. Then you start filling out the form, which will show one field that you can select and give a value to. For any additional fields, click on the “Add new field” button. One field you absolutely have to fill in is the Title field as this is a required field in all the process templates. After you have filled in the values you want set for the template, click on the Save button.

Keep adding templates until you have all your steps completed. Here we can see that I have seven steps that I follow when working on a new blog post. Okay, now that we have our templates in place, how do we use them? Let’s look at that next.

First make sure that you have the extension that you will need for this. Go to the Marketplace and enter “1-Click Child-Links” into the search box and you should see the extension. Install this onto your system.

With the extension installed, go to the backlog list of the Blog team (remember that each team has its own set of templates) and open the PBI, or User Story, or Requirement (depending on your process template) and click on the ellipsis button, which will display a menu where you should see the “1-Click Child-Links” menu item. Clicking on this will add the seven (7) child tasks that we set up as templates earlier. That is how I work with work item templates in Azure DevOps.

Linking Work Items to Commits

This post is about linking work items to a git commit within Azure DevOps. Doesn’t it do this almost automatically, you may be thinking? Well, that is why I was surprised when I linked a work item using Visual Studio, and then also tried to link the work items using the # symbol and the work item Id, and found that the commit was not linked. I am almost always on top of the changes that are released roughly every 3 weeks in the release notes for each sprint. However, this change seemed to have slipped in without me noticing.

A few months ago I created a new Project and then imported the git repositories from an older project. That is when I noticed that the work items I link to from the individual tasks were not creating links to the commits that contained that work. I am not sure when this change took place, but this is now part of the optional settings for the repository.

Start by going to the Project Settings, which is found at the bottom of the menu pillars. After the Project Settings menu opens up, go down to Repositories. This will expand the list of git repositories in this project. Each repository has a set of 3 sub menu tabs [Security, Options, Policies]. Click on the Options tab and there you can see that there is an option for linking work items to commits. Make sure that this is on for each of the git repositories.

Integration Testing DotNet Core

It is pretty easy and straightforward to create a dotnet core application and run your unit tests during a build process. However, I also have some integration tests that I run in my Dev and QA environments where I actually hit my test database. This is not something that you would run in the build process, as I do not have a database on the build machine and it would not be worth the trouble to install one, especially if you are using a hosted build agent.

dotnet test

This is a test that I would run as part of my deployment; after I have installed the application in one of my testing environments I want to know that the database is working correctly before I start running even slower tests, like my automated functional graphical user interface tests. The process that you used to run the dotnet core unit tests doesn’t work quite the same here. Let’s go over the normal steps that you would take in your build process to build and test, then we can cover why running integration tests during a deployment is not going to work quite the same way.

dotnet build
dotnet test

These are the actual commands that are used to build and then test the dotnet core application. In both cases the command is expecting source code that it builds and then tests. However, when we are deploying to an environment we are using the finished packages of the artifact to install the new version. Remember: we build once and deploy to every environment.

dotnet test /?
Usage: dotnet test [options] <PROJECT | SOLUTION> [[--] <RunSettings arguments>...]]

<PROJECT | SOLUTION> The project or solution file to operate on. If a file is not specified, the command will search the current directory for one.

If you look at the help for dotnet test you see that it is expecting either a project or a solution, which are both part of source, not the finished compiled code. This makes it fairly clear that dotnet test is not going to be the way to run any integration tests during a deployment to an environment.

dotnet vstest

If you run the help on just the dotnet command there is another command that might be the answer to all of this.

dotnet -h
Usage: dotnet [runtime-options] [path-to-application] [arguments]
SDK commands:
add Add a package or reference to a .NET project
build Build a .NET project
build-server Interact with servers started by a build
clean Clean build outputs of a .NET project
help Show command line help
list List project references of a .NET project
msbuild Run Microsoft Build Engine (MSBuild) commands
new Create a new .NET project or file
nuget Provides additional NuGet commands
pack Create a NuGet package
publish Publish a .NET project for deployment
remove Remove a package or reference from a .NET project
restore Restore dependencies specified in a .NET project.
run Build and run a .NET project output
sln Modify Visual Studio solution files.
store Store the specified assemblies in the runtime package store.
test Run unit tests using the test runner specified in a .NET project
tool Install or manage tools that extend the .NET experience.
vstest Run Microsoft Test Engine (VSTest) commands

Well, that looks like it might have some promise. The first thing we should do is confirm what it is expecting for parameters.

dotnet vstest /?
Usage: vstest.console.exe [arguments] [Options] [[--] <RunSettings arguments>...]]
Run tests from the specified files or wild card pattern. Separate multiple test file names or patterns by spaces. Set console logger verbosity to detailed to view matched test files.
Examples: mytestproject.dll
mytestproject.dll myothertestproject.dll
testproject*.dll my*project.dll

I was pretty sure that this was going to work, so I included the compiled test assemblies in the artifact package of the build and used that command to run them during deployment. However, it does not work just like that, because I started to see errors related to missing dependency files. These are not files that I created but things it is expecting from the framework.

Somewhere along the way I did see some documentation mentioning something about published files, though I was not sure what they were getting at. I run the dotnet publish command against the web site that I am deploying, which makes sense as this is something that we have always done in order to get the pieces that are needed for the web site, and this is usually zipped up. It turns out you do the same thing against the test project, which includes all those dependencies that I was missing. My commands would look something like this:

dotnet publish <integrationTestLocation> --output <ArtifactStagingLocation>\Tests

That folder would then contain all the files I need to run these tests in a new environment. Then all I would need to do from a command line task in the release pipeline is run the following command.

dotnet vstest <integrationTestLocation>\Integration.Test.dll /logger:trx

This runs the integration tests and reports to Azure DevOps just like any other full framework test run has done in the past.

Easy Configuration Updates During Deployment

In a proper CI/CD setup, where we are building once and deploying that build to various environments as it travels down the pipeline towards production, there is almost always a set of configuration files that are different for each environment. Over the years a number of different techniques have been used to manage this, like never deploying the web.config file during a website deployment, or storing the various configurations and copying them to their locations at the time of deployment. The problem with either of these techniques is that if there is a change in the configuration you need to track down all these config files and update them, and because of the mindless copy and paste that would occur with that exercise, it really opens you up to the possibility of errors. Because one of these environments could be production, the risk factor is very high.

Configuration Transformation

Along the way better solutions did appear, like the web.config that had a version for each environment. What actually went in those files were just the things that were different for each environment, done using Transform commands. The biggest problem with this was that you needed something like a build to drive it, and so in order to get a build with the proper QA web.config file I would have to create a separate QA build, which kills the whole build-once-deploy-to-many-environments policy that you want to keep in place.

There was also a transformation that could take place if you used MSDeploy or WebDeploy to deploy your website. This is the recommended way to build and deploy a website to IIS. You simply pass in the arguments "/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true" and the outcome would be a zip file of the content, a batch file and a SetParameters file. The problem with the SetParameters file is that although it was pretty good at getting your default connection set up, if you had other connections and things that you wanted, it was a very complicated process to implement in a complex web site.

Token Replacement

Another approach was to tokenize your configuration files with values that you want replaced. I have used this technique for many years on my own projects. Since the purchase of Release Manager, the idea was introduced to the Azure DevOps community that a replaceable variable begins and ends with a double underscore, for example: __MyVariable__

Then a token replacement tool replaces these tokens with real values from your variables table in the build or release pipelines. There is an extension task called “Replace Tokens” that looks for tokens like this, matches them with a variable in the variables table (in this case that would be MyVariable), and replaces it with the value for that scope (the scope is the environment that you are deploying to, like Dev, QA, Staging, Prod).
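To make the mechanics concrete, here is a minimal Python sketch of what a token replacement step does; this is an illustration of the idea, not the “Replace Tokens” task’s actual implementation, and the connection string value is a placeholder:

```python
import re

def replace_tokens(text: str, variables: dict) -> str:
    """Replace __Name__ style tokens with values from a variables
    table. Unknown tokens are left untouched so they are easy to
    spot in the deployed config."""
    def swap(match: re.Match) -> str:
        return variables.get(match.group(1), match.group(0))
    return re.sub(r"__(\w+)__", swap, text)

# Variables table for one scope (e.g. the QA environment).
variables = {"MyVariable": "Server=qa-sql;Database=AppDb"}
config = 'connectionString="__MyVariable__"'
print(replace_tokens(config, variables))
```

Each scope gets its own variables table, so the same tokenized file produces a different config in each environment.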

Like I said, this works quite well and I was happy with it, except that the tokens were stored in my source control and I don’t have a build process updating my configurations while I am running these programs locally, writing code or debugging an application. I used to have little logic methods that looked to see if what was returned was a token and then passed in my local connection string. Now I am doing something different in my work area than what is happening to the application in other environments.

A Better Approach and So Easy

The rest of this post is going to explore an even better way, and it is so easy: no complicated transform rules to learn or create. I will set up a sample web application and walk you through the process. There are a number of assumptions that I am going to make, so you can determine if you fall into the same sort of scenario; if so, this solution should work for you right out of the gate. If your situation is a little bit different you might need to make a few adjustments.

  1. I am using Azure DevOps Service
  2. The web application is an ASP.Net MVC full framework (uses web.config)
  3. The web application could be a ASP.NET Core (uses appsettings.json)
  4. The target can be an IIS running in a virtual machine
  5. The target can be an Azure App Service.

This is what we are going to cover here. Deploying to the targets is exactly the same; there is a slightly different approach between the full framework and the dotnet Core applications and I will cover them both.

The web.config files

For the older web sites that are built using the full .NET framework, the configuration is managed by the web.config file. In my sample application I have a web.config file that has two different connection strings, and they are different for each of the environments that I am deploying to.

As I am developing this web site I would have these connection strings pointing to either my local database, as is shown here, or a shared database that you are using with the rest of your team for development. The bottom line here is that you set up the connection string as you need it to work in your workspace.

A Word About the Build

Make sure that when you create the build you are creating an MSDeploy package, which really is just a zip file with all the pieces needed for MSDeploy to deploy your web site to your target. If you are creating a new build definition, when the choice of templates comes up, choose the ASP.NET template (if you are using the older full framework web application) or the ASP.NET Core template (if you are building the new dotnet core web application). This will give you all the necessary tasks to get you up and going quickly. When you do that, it already populates the MSBuild arguments with all the appropriate arguments to create the package for you.

If you already have a build definition setup and it is not creating the MSDeploy package, then add the following arguments to the MSBuild task.

  • /p:DeployOnBuild=true
  • /p:WebPublishMethod=Package
  • /p:PackageAsSingleFile=true
  • /p:SkipInvalidConfigurations=true
  • /p:PackageLocation="$(build.artifactStagingDirectory)"

Just list these one after the other, separated by a space, and you are going to end up with a zipped-up package in your artifactStagingDirectory. The new definition that was created from the template also included a Publish Artifact task, which then takes the output from artifactStagingDirectory and pushes it into the final build artifact. We will need that for our deployment steps, which is where the point of this post takes place. If you do not have a Publish Artifact task then you will need to add one at the end of your build definition.

If you search for Publish build artifacts you should see the task. You can pretty much leave the defaults as they are; just make sure that the Path to publish is: $(build.artifactstagingdirectory)
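If you happen to be defining the build in YAML rather than the classic editor, the same two tasks and the same MSBuild arguments apply. A sketch, assuming the VSBuild and PublishBuildArtifacts tasks (the solution path here is a placeholder for your own):

```yaml
steps:
- task: VSBuild@1
  inputs:
    solution: 'MyWebApp.sln'   # placeholder -- point this at your solution
    msbuildArgs: >-
      /p:DeployOnBuild=true
      /p:WebPublishMethod=Package
      /p:PackageAsSingleFile=true
      /p:SkipInvalidConfigurations=true
      /p:PackageLocation="$(Build.ArtifactStagingDirectory)"

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'
```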

The Release definition

I am assuming that you probably already have a release definition set up to deploy this web application to a couple of environments that may or may not include Production, likely because you had issues dealing with the different configurations, especially between your testing environments like Dev and QA and the Production environment, which is usually wildly different from the other two. I would also assume that is why you are reading this post: so that you can implement these changes and have the peace of mind that each environment will get the correct connection strings as the build travels through the pipeline.

This next section is where things differ between the older ASP.NET full framework and dotnet Core. I will cover each of these in their own section and you just need to follow the instructions for the type of web app you are deploying.

ASP.NET (older full framework)

For the legacy ASP.NET full framework the connection string is stored in the web.config file. So let’s look at that.

In this sample you see that I have two valid connection strings in the connectionStrings section of this web.config file. One is called “DefaultConnection” and the other is called “SecondConnection”. It is important to note the names of these two connection strings because we will need them in the next part as we add them to the variables table in the release definition.
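If you want a concrete picture, the connectionStrings section would look roughly like this; the names match the sample, but the server and database values here are placeholders for whatever your local development setup uses:

```xml
<connectionStrings>
  <add name="DefaultConnection"
       connectionString="Server=(localdb)\MSSQLLocalDB;Database=AppDb;Integrated Security=true"
       providerName="System.Data.SqlClient" />
  <add name="SecondConnection"
       connectionString="Server=(localdb)\MSSQLLocalDB;Database=AuditDb;Integrated Security=true"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```

The release variables we create next will be named DefaultConnection and SecondConnection to match these entries.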

Variable Table

In case you may not be aware of what I am talking about: in an Azure DevOps release definition there is a menu item at the top that says Variables. When you click on it, a page opens up similar to the following.

Except that you won’t have the entries that I already have here. One of the differences between the variable table in the Build Definition and the one in the Release Definition is the addition of the Scope. The Scope represents the environment that the change will be applied to. If you have a variable that is the same for all environments then you can leave it at the default scope, which is Release; that will apply to all the environments unless there is an entry for a particular environment. Specific scope variables will override the default Release ones. That means I can repeat the variable names that I have for each of the connection strings for each environment that I am deploying to, like Dev, QA, Staging, Production, and as the build goes through the pipeline the correct connection string will be applied.

IIS Web App Deployment Task

Now, before you run off and try this out, there is one more thing we need to do to this definition before it is ready to go. The magic which makes all this happen is the task “IIS Web App Deploy”. When you are searching for tasks, just enter IIS and that should be enough to see this task bubble to the top of the list.

In the property settings of this task you want to expand the File Transforms & Variable Substitution Options. Make sure the checkbox XML variable substitution has been checked as this is what will kick off the updating of the web.config with the correct values.

This just makes things so much simpler and easier to manage: no more situations where we are managing connection strings in a bunch of places or having to write complicated transform rules. You don’t even need to tokenize the web.config file; it all just happens almost like magic.

ASP.NET (newer Core)

For the newer ASP.NET Core, things are a little different because the web.config, if there even is one, is much cleaner, as ASP.NET Core is trying to follow convention over configuration even more than it has in the past. You are going to find that the connection strings are stored in a json file called appsettings.json. The principle is similar, but the actual syntax is a bit different. So let’s start by looking at a possible appsettings.json file and go from there.
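A possible appsettings.json, using the same two connection string names as the full framework sample; the server and database values are placeholders:

```json
{
  "ConnectionStrings": {
    "DefaultConnection": "Server=(localdb)\\MSSQLLocalDB;Database=AppDb;Integrated Security=true",
    "SecondConnection": "Server=(localdb)\\MSSQLLocalDB;Database=AuditDb;Integrated Security=true"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information"
    }
  }
}
```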

It is pretty clear that I do have two connection strings set up here, but how we manage this in the variables table is a bit different.

Variable Table

When we add these values to the variable table, you will notice that I have included the parent node that holds these connection strings as a prefix, followed by a “.” and then the actual connection string name, for example ConnectionStrings.DefaultConnection. This is something that is different from dealing with the web.config file.

You will notice that in this example I have given you a larger release definition where we have a different connection string for each of the different scopes (deployable environments in our pipeline).

IIS Web App Deployment Task

One more slight twist that we need to address is in the IIS Web App Deployment task. This one is probably a bit more obvious: as we are transforming a different type of file, there would be a change in the properties of this task for that.

The only thing you need here is to expand the File Transforms & Variable Substitution Options and add the name of the json file. In this case it is the appsettings.json file and the task will take care of it for you.

Teams, Sub-teams and Area Paths

Back around the 2012 release of TFS (now known as Azure DevOps) we were introduced to the concept of Teams. This was a logical breakdown of a single project that typically could represent an organization or at least a division. I come from the camp of one collection and one project, as this gives you the most value and best experience, and even though the product has evolved to be even more flexible than it was, I think this is still the best approach to take today. If you are unfamiliar with the one Project to rule them all, here are some excellent blog posts to get you up to speed on the concept:
Why You Should Use a Single TFS Team Project
One Team Project to rule them all


Almost as soon as we were given teams, many of my customers started asking about sub-teams: could we break a team up into smaller components but still have them rolling up to a parent or an overall team? Well, right from the beginning of Teams we were sort of able to do this, because a team is really defined by its Area Path. The Area Path is one of two controls that we have in TFS that is a tree control and capable of nested items.
Since the 2012 release of TFS, when you create a Team Project it creates a Team of the same name as the Project. This means that any team that you create is really a nested child of this parent team.
The default behavior of creating a team is to create an area path with the same name as the team just under the root name of the Project, which is the parent team. If we want to create any sub-teams we need to do things a little differently from the default way that we create them. Let’s walk through those steps.

  1. First, be sure you are in the Project and not at the Organization level. At the bottom of the menu you will find a gear icon called Project settings. Click on the icon.
  2. Under the General section of this menu you should see a menu item called Teams. Click on that, the line becomes a click item when you hover over it.
  3. Right under the header of this page you should see a button called “New team”. Click on the button.
  4. We are going to call this team “Web” as this will be our parent for the web development that we shall perform. Fill in the Team name and give it a description. Note: for the creation of the parent team we leave the checkbox checked that says “Create an area path with the name of the team.” In later steps, when creating the sub-teams, it is important that we uncheck that box; more on that a little later. Click on the Create team button.
  5. Still on the Project settings page, under the Boards section of this menu you should see a menu item called Project configuration. Click on that, the line becomes a click item when you hover over it.
  6. This by default takes you to the Iterations management area. Up close to the top of this page, beside Iterations, you should see Areas. Click on that; your mouse should turn into a hand when you hover over the word Areas.
  7. Here you will see that the Web area is listed, which happened when we created the Web Team (remember the checkbox to create an area path). Now select the Web area and click on the New Child button.
  8. I am going to name this new Area Team-1 and click on the Save and close button.
  9. With Team-1 still selected, we are going to create another area at the same level, a sibling to this one, so instead of the New child button we click on the New button as this will create a new Area at the same level as the currently selected area.
  10. I am going to name this new Area Team-2 and click the Save and close button.
  11. Still on the Project settings page, under the Boards section of this menu you should see a menu item called Team configuration. Click on that, again the line becomes a click item when you hover over it.
  12. You should still be on the Web Team. Click on the Areas link and there you should see the area path of the Web. You won’t see the child area paths in this view, but we do want their values to roll up to this master web team page. Click on the ellipsis button right beside the text default area.
  13. In the menu that pops up, select the item that says “Include sub areas”.
  14. After you have clicked that you should see the “sub-areas are included” message on the right of the default area. This is what you want to look for when you are trying to determine why some master teams are not getting their work items rolled up to the top.
  15. Now we are ready to go back to the teams page and create those other two teams that we created an area path for. Back on the Project Settings under the General area click on the Teams button.
  16. Click on the New team button
  17. This time, name the team “Web-Team-1”. Teams are displayed as a flat list, and the only way to tell at a glance which are parents and which are children is by naming convention. You will want to keep the parent together with its children, and the only way to do that is to make sure the parent name is the beginning of each child team’s name.
  18. This next part is very important: make sure that you un-check the “Create an area path with the name of the team” check box. We already have an area path and we will assign it to the team a couple of steps from now. Enter a description and then click on the Create team button.
  19. Click on the New team button once more.
  20. Name this team “Web-Team-2”, put in a description, remember to un-check the “Create an area path with the name of the team” check box, and then click on the Create team button.
  21. Now that we have the teams created, we need to attach them to the correct area path. First click on the Web-Team-1 link so that we are on that team.
  22. Click on the Team configuration link.
  23. You will see a warning message and this warning is related to exactly what we are going to fix right now. Currently those two sub-teams are in an invalid state.
  24. Click on the Areas sub tab at the top of this section.
  25. Lower on this page you should see a big green cross called Select area(s), click on this.
  26. This opens a new box and you want to expand the carets until you see the Team-1 area. Click on Team-1.
  27. Click on the Save and close Button.
  28. Repeat the same steps for Web-Team-2, starting from step 21, and hook it up to the Team-2 area path.
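If you need to repeat steps 12–14 and 21–28 for many teams, the “Include sub areas” setting can also be scripted. Azure DevOps exposes it through the team field values REST endpoint; the sketch below only builds the request body (the project and team names are illustrative, and the PATCH call itself is left out since it needs your account and a PAT):

```python
import json

def team_area_payload(project, parent_area, child_areas):
    """Build the body for the team field values endpoint: sets the
    team's default area and rolls the child areas up to it, which is
    the REST equivalent of the "sub-areas are included" setting."""
    default = f"{project}\\{parent_area}"
    values = [{"value": default, "includeChildren": True}]
    for child in child_areas:
        # Each child area can still be listed explicitly if needed.
        values.append({"value": f"{default}\\{child}", "includeChildren": False})
    return {"defaultValue": default, "values": values}

# Hypothetical names matching the walkthrough above.
payload = team_area_payload("Demo", "Web", ["Team-1", "Team-2"])
print(json.dumps(payload, indent=2))
```

You would PATCH this body to something like `{account}/{project}/{team}/_apis/work/teamsettings/teamfieldvalues`; check the REST reference for the exact version string your server supports.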

At this point you will have two sub-teams whose backlog lists roll up to the master Web team. On the Web team you will see all the work items from all the sub-teams below it. With this configuration each sub-team gets its own dashboard and Kanban board as well. But let’s say you want a similar benefit, like separate backlog lists, without the overhead of additional dashboards, Kanban boards, and sub-team names sitting flat in the full list of teams. With the new filtering of the backlog lists this is now possible. Let’s look at that solution next.

One Team to Manage my Sub-Teams

To work with the existing things that we already have I am just going to delete the two sub-teams but leave all the area paths in place and configured as they were.

  1. Deleting a team is just as easy as it was to create them. While we are still on the Project Settings page we want to click on the Teams button so that we see our list of Teams.
  2. Click on the ellipsis button and on the menu that pops up click on “Delete”.
  3. A warning window pops up and on here you click the Delete team button.
  4. Repeat the same operation to delete Team-2.
  5. So that we have some data for the various sub-teams, let’s add a few work items to see how the filtering works on the single Web team backlog list. On the Boards menu click on Work Items.
  6. Click on the New Work Item drop down and choose Bug.
  7. Give this bug a title of “This is a team-1 bug” and then select the correct area path, which should be \Web\Team-1. As you probably realized, even though we deleted the team, the area path still stays. Then click the Save button.
  8. Repeat the process and make another similar bug for Team-2
  9. Now, with two bugs from two different teams, let’s go to the backlog list for the Web team. Click on the Backlog menu item under the Boards menu and then expand All teams backlogs so you can click on the Web backlogs item.
  10. You will see both of these bugs listed here because back in our previous exercise where we were creating teams we set the default behavior of the areas to roll up to the parent which in this case is Web.
  11. Now, let’s say we just want to see the bugs from Team-1, which is what you would see if you still had a Team-1 team. In the upper right hand corner there is an icon that looks like a funnel. This enables the Filter tool bar.
  12. Click on the Area dropdown and you will see Team-1 and Team-2 (these are the sub-area paths).
  13. Click on the Team-2 checkbox; the filter kicks in instantly and you only see Team-2 work items.
  14. This also works on the Kanban boards; you have the same kind of filtering there as well. The thing I like about this method for handling sub-teams is that if I had several teams under my parent team, I could combine teams in any combination that I need, which is something I could not do on the fly with the sub-team configuration.
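If you prefer queries over the filter bar, the same any-combination-of-sub-teams view can be expressed in WIQL with the UNDER operator. A minimal sketch that only builds the query string (project and team names are illustrative):

```python
def area_filter_wiql(project, parent, teams):
    """Build a WIQL query that mimics the backlog's Area filter:
    match work items whose Area Path falls under any of the chosen
    sub-areas of the parent team."""
    paths = [f"{project}\\{parent}\\{t}" for t in teams]
    clause = " OR ".join(f"[System.AreaPath] UNDER '{p}'" for p in paths)
    return f"SELECT [System.Id], [System.Title] FROM WorkItems WHERE {clause}"

# Combine Team-1 and Team-2, just like ticking both checkboxes.
query = area_filter_wiql("Demo", "Web", ["Team-1", "Team-2"])
print(query)
```

A query like this can be saved as a shared query or run through the WIQL REST endpoint if you want the combined view outside the backlog page.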

These are the two ways that I can see of working with teams and sub-teams right now. If you don’t need the luxury of a separate burn down chart and capacity planning at the sub-team level, I think you get everything you need with the single major area, managing the smaller breakdown of work using nested area paths. Just remember to turn on the roll-up of sub-areas, as that is not the default.

Red Gate tools vs SQL Server Data Tools

Recently I have been tasked with showing a development team how to version their SQL databases using the Red Gate tools. Normally I mentor and give guidance in these kinds of projects using the SQL Server Data Tools, but because their databases were so large they found that the SSDT approach would just not work for them. They did find in their own experiments that the Red Gate tools did not impose these limitations and worked quite well despite their database size.

I was faced with the task of learning about a technology that I was not that familiar with. I would provide them with guidance from an ALM and DevOps perspective. In working with this tool I thought it would be a good exercise to blog my experience and show how these two tools are similar in what they do and yet have slightly different approaches.

Installing Red Gate Source Control

To start things off you will need to install Red Gate Source Control for SQL Server Management Studio (SSMS). This is probably the big difference between the two approaches: where the SSDT approach centers around Visual Studio to manage the code and the source control, Red Gate does this in SSMS. The following link will get you to the Red Gate page where you can download a 28-day free trial or buy a license. Installation is straightforward; just run the executable that you downloaded, which will add a couple of items to the toolbar of SSMS.

Adding a Database to Source Control with Red Gate

In this walk through I am using git as my repository and in fact my remote repository is going to be in Visual Studio Team Services (VSTS). We need to create a git repository before we can grab the code from our database. We will start with creating a local repository and push this up to VSTS afterwards. I use a variety of tools when it comes to git and for this first step the easiest way to get an empty git repository created is with the command prompt and my favorite way of doing that is in PowerShell with the Posh Git Extension module.

mkdir C:\git\Widget\dbScript
cd C:\git\Widget
git init

For the Red Gate solution we don’t use Visual Studio at all but instead fire up SSMS. If the SQL Source Control tab is not visible when the application finishes loading, click on the SQL Source Control button found in the tool bar, which will load up the screen. The next part is really simple: select the database on the left side of the IDE, right-click, and select “Link database to source control” from the context menu. Or, if the SQL Source Control tab’s sub menu is set to Setup, clicking on a database that is not in source control will give you the option to “Link to my source control system”. Either way it will get you to the same page. Select that first option (Link to my source control system) and click on the Next button.

The screen should now show you a selection of the repository types that it supports. Even though we are going to VSTS (TFS in the cloud), we don’t want to use that source control system type; instead select Git. For the folder, point it to the location of the dbScript folder (C:\git\Widget\dbScript) and click on the Link button. Click on the OK button to close the confirmation screen and you should see that your database icon in SSMS is now green, indicating that it is linked to source control. There is no source in this repository quite yet.

Click on the Commit sub tab of the SQL Source Control and there you will see a list of SQL files that the act of linking to source control generated from the database. These files, however, are not in your git repository workspace yet. You will need to enter a commit message (“Initial Commit” is a good one for the first commit) and press the Commit button. The files are now in the folder you designated and checked into your local git repository. If we had the remote set up, there would have been a visible option to push these changes up to the remote. Instead we will do this manually now.

Go to your account on VSTS and create a new git repository; call it Widget just to keep things consistent. Don’t add any extras to this like the README or the .gitignore options that are available to you; we want a clean, empty repository that we can push to easily. Click on the Create button. After the repository is created it will take you to the “Widget is empty” page.

Click on the copy button for the “or push an existing repository from command line” section of this page. Go to the PowerShell command window that you used to create the local git repository, paste it in, and run the two lines. This will push your local repository up to the remote server, and now you have your database versioned in source control. We will come back to this in a minute, but first let’s see what we need to do to get a SQL database into source control using SSDT.
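For reference, the two commands that page generates follow the standard pattern of adding the remote and pushing all local branches. A sketch of them as argument lists (the remote URL is a placeholder; use the exact one VSTS shows you on the empty-repository page):

```python
def push_existing_repo_commands(remote_url):
    """Return the git commands that attach the local repository to the
    remote and push every local branch up to it; run these from the
    repository root, e.g. C:\\git\\Widget."""
    return [
        ["git", "remote", "add", "origin", remote_url],
        ["git", "push", "-u", "origin", "--all"],
    ]

# Placeholder URL for illustration only.
cmds = push_existing_repo_commands("https://{account}.visualstudio.com/_git/Widget")
for cmd in cmds:
    print(" ".join(cmd))
```

These could be handed to `subprocess.run` from a setup script, or simply pasted into the PowerShell window as in the walkthrough.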

Adding a Database to Source Control with SSDT

For this exercise we start in Visual Studio and create a new project. For the project type select “SQL Server Database Project”, which you will find under Other Languages inside the SQL Server item of the New Project type tree. We will use the very same database that we used in the previous exercise, Widget, and now that we have a git repository started for our experiment we can push to it as well. To make sure we have a nice clean separation of the two approaches, let’s make a new folder in our git repository called SSDT, at the same level as the dbScript folder that we used in the Red Gate experiment.

Call this project Widget, and the location should be C:\git\Widget\SSDT if you have been following along. Uncheck both “Create directory for solution” and “Create new Git repository” and click on the OK button. At this point all we have done is create a project in Visual Studio and set the location, but there is no database source. We are missing a step, so let’s go and do that now.

In Visual Studio, right-click on the Widget project and from the context menu select Import and then Database. This opens a dialog box where you can select a connection to the database whose schema we want to import. From there you can find the same Widget database that we used for the Red Gate solution. Once a database has been selected the Start button becomes enabled, so click it. This starts a wizard that goes through the database and pulls in everything it finds. This is similar to what the Red Gate tool did when we linked a database to source control. When it completes, click on the Finish button. You will now see a bunch of folders with files, which are source code from the database. It is structured a little differently than with the Red Gate tool, but it is the same principle. I also had about 120 errors, and that was because the Widget database I am using is a Red Gate sample database: it has a number of unit tests that demonstrate the unit testing capabilities of the Red Gate unit tester, and those are not compatible with what I am pulling into Visual Studio because there are references outside the pure SQL approach. I just deleted the folder called tSQLt and all those errors went away with it.

If you go to the Team Explorer tab and click on the Changes tab, you will see all the files. Just enter a comment like “Initial Commit for SSDT” and click on Commit all and Sync. Since we are in the same git repository that we set up for the Red Gate solution, this will push the changes up to that same VSTS repository.

All done; now you have the database in source control from two different solutions with similar overall results, just different tools. For SSDT we used Visual Studio and for the Red Gate solution we used SSMS. Let’s go back to our Red Gate solution and see how we can get an artifact built in the build system that we can use to deploy.

Creating the Artifact from Red Gate Source

Okay, so now that we have our database source in source control, let’s make sure our target database can be kept up to date. To do that we will start with an artifact, which will be a NuGet package that represents a snapshot of the schema from the latest set of changes. We are going to do this on the VSTS build system. If you are not familiar with how builds work in VSTS, there are several blogs and articles out there that cover this topic in depth; for now I am just going to cover the steps we need to take to produce our artifact, the build.

We start by creating a new empty build, as there is no template (yet) for the Red Gate project. When we open this new build definition, the first thing we need to do is set the settings in the Process section, which is right at the top of the build definition. Here we can name our build definition; I have called mine “WidgetShop DLM”. Then you need to set the Agent queue. Here I have selected the Hosted VS2017, but if you have a private agent queue you certainly could use that. The only thing you will need to do is make sure that the Red Gate tool, DLM Automation (which is part of the SQL Toolbelt), is installed on that machine.

The second thing to do is select the git repository where the source code for our Widget database lives. The things I have set here are: the source is VSTS Git, my project, which I am calling Demo, and my repository, called WidgetShop. The default branch is set to master and that is fine for now. Leave the rest of the settings as they are.

You should have an Agent phase section (probably called Phase 1). We won’t change anything about the settings of this section, but I will mention that in a single build definition you could have several different phases, and each phase could use a different Agent queue if you desired; the default is to inherit from the definition (the one we set in the Process area). What we will do in this phase is add the Redgate DLM Automation 2: Build task, and you do that by clicking on the green “+” symbol. The easiest way to find the task we are looking for is to enter the letters “DLM” in the search box. If this set of tasks has never been installed in your instance of VSTS you will be able to do that from this page; otherwise just select the “Redgate DLM Automation 2: Build” task by hovering over the item and clicking the Add button.

There are just a few settings that we need to change here.

  • Operation: “Build a database package from Redgate SQL Source Control”
  • Database folder: “Database scripts folder is a sub-folder of the VCS root”. If you followed my steps and placed the source in the dbScript folder then this is the choice; if you put the source in the root instead, choose the other option.
  • Subfolder Path: “dbScript”. You can also use the ellipsis button on the right of this text box and navigate to the actual folder in the git repository.
  • Output NuGet package ID: “WidgetShop” This will be the name of the NuGet package that gets created.
  • Temporary server type: “SQL LocalDB (recommended)”. Since we are using a hosted agent and it has full versions of Visual Studio installed, I know that this option is available to me and simple to use. If you have a large database that uses the full feature set of SQL Server, then you might want to point to an actual SQL Server; when you change to that option it will prompt you for the Server Name, Database, and Authentication method, and if you use SQL authentication, a place to provide the user name and password.
  • Show advanced options and confirm that the Publish Artifact checkbox is checked.

Save your definition and queue a build. After the build completes you can click on the link of that specific build, where the sub menu displays an item named Artifacts. As you drill through it using the explorer you will see that you now have a WidgetShop.1.0.nupkg, which is the package that we will be using to deploy these changes to our environments. But first, let’s see how you would do this same process with an SSDT project.

Creating the Artifact from the SSDT Source

We start this process by creating a new build definition, and the first thing the wizard is going to ask for is the repository that this project is in. Again we are using the same WidgetShop git repository in our Demo project; this is exactly the same step that you did with the Red Gate project. For the SSDT project we take a slightly different approach for the selection of the build template. Here we select .NET Desktop and click on the Apply button. This populates the build definition with 7 tasks that are ready to go. Right out of the gate, the Agent queue is probably pointing to “Hosted VS2017” and that is what we want for this demonstration, but if you have a private build server you are certainly welcome to use that.

At this point we can just run the build as this template will know how to find the project and build it and publish our artifact for us. One thing you might notice here is that there is no mention of a database so it is using MSBuild to create this package directly from the source in source control.

In the end an artifact is created, so if we click on the build number of the completed build and then on the Artifacts sub menu, using the Artifacts explorer we walk down the tree, and at the end of that rainbow we will see probably about 4 files. The one we are really interested in is the Widget.dacpac file. This file represents a snapshot in time of the database schema at the time of this build, and it is the file we will use to do our deployments, which we will do next.

Deploying the Red Gate Artifact

Similar to the build steps that we used to create the artifact, Red Gate provides another task that we use in the release definition, called “Redgate DLM Automation 2: Release”. Let’s go over the steps on how to implement that for an environment. In my environment I am using Deployment Groups to deploy to a virtual machine that I have set up in Azure, but this is exactly the same process that you would use to deploy to a private machine in your network. I am not going to go into the details of setting up Deployment Groups; for more information follow this link:

We start this part of the exercise by clicking on the Create Release Definition button, and for the template select the Empty process, which is right at the top of the list of available templates. Then you can give your environment a proper name other than “Environment 1”, like Dev, Stage, or Test, to signify what this environment represents. Next you will want to add the artifacts from the build, so click on the + Add link beside the Artifacts header. Here you select the project (in my case this is Demo) and then the build definition. If you have been following the names I have been giving things, this should be “WidgetShop DLM”. The Default version will default to Latest, which is what we want; accept the Source alias that is provided. Click on the Add button.

Now we will start to add the tasks to the release for our environment: either click on the Tasks dropdown and select your task, or click on the link in the environment block where it probably says “1 phase, 0 task”. Either way this brings you to the same place, where you will see your environment block (another chance to rename your environment) and probably an Agent phase (the default). We don’t want the Agent phase, so click on it to select it and then, on the right hand side of the screen, click on the Remove link.

This will take you back to the environment block with a red message saying that your environment should have at least one phase. Click on the ellipsis button to the right of this block, select “Add deployment group phase”, and in the Deployment group dropdown select the deployment group that you set up. That is all you need to do for this part. Next we need to add some tasks to this deployment group, and you do that by clicking on the “+” button on the Deployment group phase block. Enter DLM in the search box; if you have the “Redgate DLM Automation 2: Release” task installed, select it and click on the Add button. If you have not installed it, you can select it in the Marketplace area and click on the Install button first, then add the task.

With the task added to the Deployment group phase, we configure it with the following settings.

  • Operation: choose “Deploy database changes from a package” from the dropdown control.
  • Package path: The best way to select this required file is to use the ellipsis button at the end of the text box and navigate to the file, probably called “WidgetShop.1.0.nupkg”.
  • Target SQL Server instance: I have used “.\SQLExpress” because that is what I have installed on my machine; you should put your instance name, or “.” if this is the default instance on the server.
  • Target database name: For this I named mine “WidgetCI”. One other thing to note is that this database needs to exist before this tool can work. It can be an empty database, that is fine, but it must exist or else this task will fail.
  • Authentication method: I chose “SQL Server authentication” from the drop down, as I am not in a domain so Windows authentication would have been harder to pull off. When you choose SQL Server authentication it will prompt you for a Username and Password.
  • Username: Enter the username for your SQL Server.
  • Password: Enter the variable name for the password, $(SQLPassword). Then, in the Variables section, create a new variable SQLPassword, put the actual password in the value, and click on the lock button to hide it.
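Because the Redgate release task fails when the target database is missing, it can be handy to pre-create it idempotently, for example in a preceding command-line task that runs sqlcmd. This sketch only builds the T-SQL, using the WidgetCI name from above:

```python
def ensure_database_sql(name):
    """T-SQL that creates the (empty) target database only if it does
    not already exist, satisfying the release task's precondition."""
    escaped = name.replace("]", "]]")  # escape ] inside a bracketed identifier
    return f"IF DB_ID(N'{name}') IS NULL CREATE DATABASE [{escaped}];"

sql = ensure_database_sql("WidgetCI")
print(sql)  # IF DB_ID(N'WidgetCI') IS NULL CREATE DATABASE [WidgetCI];
```

Run the generated statement against the target instance (e.g. .\SQLExpress) once before the first release, and re-runs are harmless.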

With that you can kick off a new release and the deployment to this environment will begin.

Deploying the SSDT Artifact

To deploy the SSDT dacpac file we follow the same steps to get an environment and get to the Deployment group phase. Then click on the “+” button on the deployment group, and in the search box type SQL; at the top appears the task we are looking for, “SQL Server Database Deploy”.

We just need to configure a few things on this task to complete this:

  • Deploy SQL Using: “Sql Dacpac” select this from the dropdown list.
  • DACPAC File: Click on the ellipsis button and walk down the linked artifacts tree until you get to the dacpac file you are going to deploy.
  • Specify SQL Using: “Server” select this from the dropdown.
  • Server Name: You could leave this as localhost if it is the default server instance on this machine. I used “.\SQLExpress” as that is the server and instance name.
  • Database Name: I named this database “WidgetSSDT” just to make sure this deployment won’t overwrite my Redgate one. This database does not need to exist, as it will be created if not found.
  • Authentication method: I chose “SQL Server authentication” from the drop down.
  • SQL User name: Enter the user name for your SQL Server
  • SQL Password: Because I used the same user name and password, I just entered the $(SQLPassword) variable that I set up while working with the Redgate tool task.

With that you can kick off a new release and the deployment to this environment will begin.


As you can see, the two tools have a very similar model, just a different approach from a different starting point. With SSDT we start in Visual Studio, and with the Redgate tools we start in SQL Server Management Studio. I am sure some DBAs who have lived most of their career in SSMS will lean towards the Redgate solution. That is okay; it does provide a good ALM/DevOps solution, but you must be careful as well. The version control is not true git version control but more of a sync operation. In a true git client you should be able to switch from one tool to another: because the tools read directly from the repository, you should be able to start in one tool and finish in another. The Redgate source control, however, does not work this way. It builds other workspaces outside of the git repo to manage some of these changes, so if you make a change outside of SSMS, the source control in SSMS may not see your change and things could fall out of sync. This is also a reason why they don’t have good support for branching. You can still use branching if you create the branch first and check it out before you start to work on your changes.

Linking the Iterations to all your Teams

I am sure that there are several development teams out there that work similar to me. I am a big fan of the one TFS Project to rule them all and then using teams to separate the work. In my case I am working on my own but support several products, so I have a team for each of the products even though I am pretty much the only member on all those teams. This gives me great visibility at the root where I can see everything that is going on and in many cases I might have several products that I am working on in the same sprint.

You are probably part of a larger organization than I am, and you might actually have multiple members on the teams, but similar to me you are all sharing the one set of iterations. This way everyone in the TFS Project is on the same sprint, which really makes the whole one-TFS-Project-to-rule-them-all approach powerful. We can easily jump around and see how progress is going overall and yet still have the individual teams with their separate backlog lists and burn down charts.

Adding New Iterations

The problem appears whenever I add additional iterations to my TFS Project. This doesn’t happen all that often, maybe once a year or so, but once I have the iterations defined I have to go to each team and select them into its iteration pool. Doing this manually can be quite tedious, and I did it this way for a while; “I only do this once a year” was my justification. However, I kept thinking that there should be an easy way to automate this. Well, today I thought I would tackle it and either write some scripts to accomplish this or find a good solution that maybe someone else has already built.


Turns out there is an open source project out there that does exactly that. I just needed to create a script to load it and then call the functions to go through all my teams and set them up with all the iterations that I have. The nice thing about this script is that if an iteration already exists for a team it just displays the information without changing it, and it adds the ones that are missing. First things first, let’s get this open source project, which is on GitHub; I would suggest cloning it.

git clone

With the repository cloned to our local computer we are ready to build our own script, which will use the scripts in this open source project to do the actual work. Okay, before we start writing this script, let’s make sure that we have a bunch of iterations set up; in case you are not sure what I am talking about here or how to do that, let’s cover that first.

Setting up the Iterations at the Project level

First make sure you are at the project level of the TFS Project you want to add iterations to. You should not be on a team within the project; you want to be at the root of the project container. Then click on the gear icon and in the drop down select Work.
While in the Work menu, select the Iterations sub menu. The next step depends on whether you are on the top level or on one of the existing sprints. As you can see, my selection is on the root of the iterations (indicated by the light blue background bar through 3WInc), so I would click on the New Child button, as all iterations fall under this root. If, however, I was on Sprint 1 or Sprint 2, I would click on New, which would give me a new iteration at the same level. If, while on one of these child iterations, I clicked on the New Child button, the new iteration would be a child of that iteration.

Enter the name of your new iteration, then the start and end dates. These dates should already be known by TFS: as soon as you click on these boxes, the date the next iteration should start shows up, and when you click on the end date it shows what the end date should be, based on the pattern you may have started. If this is your very first iteration you will need to set the dates manually; for every iteration after that, TFS will know and understand the pattern. Click on the Save and Close button. Add as many iterations as you need; as you can see from my images, I am already making iterations for next year.
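The pattern TFS follows when pre-filling those date pickers can be sketched in a few lines: the next iteration starts the day after the previous one ends and keeps the same length. The dates below are illustrative:

```python
from datetime import date, timedelta

def next_iteration(prev_start, prev_end):
    """Continue the sprint cadence: start the day after the previous
    iteration ends and keep the same iteration length."""
    start = prev_end + timedelta(days=1)
    return start, start + (prev_end - prev_start)

# A two-week sprint (Jan 1 - Jan 14) is followed by Jan 15 - Jan 28.
start, end = next_iteration(date(2018, 1, 1), date(2018, 1, 14))
print(start, end)
```

A loop over this function is an easy way to generate a whole year of sprint dates ahead of time, which pairs well with the bulk team assignment covered next.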

Getting a Token from TFS or VSTS

One of the other things we are going to need, before we really get into the custom PowerShell script that will apply all these iterations to all the teams, is a token. The token is how the PowerShell tool authenticates against your instance of TFS or VSTS to perform the work. From your instance of TFS or VSTS click on your profile. This is on the right hand side of the web page, represented by your picture if you have one in your profile, or it could just be your initials. Click on it and select Security.

On the Security page you want to make sure you have Personal Access Token (PAT) selected and then click on the Add button.

Give your token a name and select the length of time that the token can last (the maximum is a year). If you are on VSTS, the Account will already be filled in for you. You can accept the default scope, which is not to restrict the scope at all; keep in mind that this token is based on your profile, so you can’t create a token that has more permissions than your role provides. At the bottom of this screen there is a Create Token button.

A GUID-like string will appear; copy this and store it somewhere, because this is the only time that TFS or VSTS will show you this value. This is your token, which we will need in the next step.
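As an aside, this is also why the user name never matters when a PAT is involved: TFS/VSTS accepts the token over HTTP Basic authentication and ignores the user-name half of the pair. A minimal sketch of building such a header for your own REST calls:

```python
import base64

def pat_auth_header(pat):
    """Build an HTTP Basic Authorization header from a PAT; the
    user-name portion (before the colon) is ignored by the service,
    so an empty string works."""
    raw = f":{pat}".encode("ascii")
    return {"Authorization": "Basic " + base64.b64encode(raw).decode("ascii")}

# Placeholder value; substitute the token you just copied.
header = pat_auth_header("your-pat-goes-here")
```

The Relate-VstsIteration script handles this for you via its -Token parameter, so the sketch is only useful if you script against the REST API directly.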

Now We Put Together the Actual Script That we Run

With all these pieces in hand we begin the process of building our PowerShell Script that will iterate through our Teams and update the Iterations.
Create a new PowerShell script; I called mine “UpdateIterations.ps1”. The first line is just a comment saying that we want to load the module we are going to use into memory. The second line does a change directory (cd) to the location on your computer where you have the GitHub project cloned. The third line loads the module Relate-VstsIteration.ps1 into memory. It is important that you write this line exactly as shown: it is “period space period backslash Relate-VstsIteration.ps1”. The next part of the script assigns the variable $token the PAT that you stored in the previous step.

# This loads the Relate-VstsIteration scripts and modules into memory
cd C:\git\GitHub\AIT.VSTS.Scripts\Iterations\TeamAssignment\
. .\Relate-VstsIteration.ps1

# Before we really get started lets setup some variable that are sure to change like the token.
$token = "<Your Token Goes Here>"

All that is left is a line for each team within your TFS Project that you want to update with the list of iterations you have created in the Project. In the sample below you would replace the {account} with your actual VSTS account name and the {TFS Project} with the actual name of the TFS Project. For TFS you would replace that whole connection with the URL of your TFS instance, a forward slash and the name of the TFS Project. What you have in the -Username parameter does not matter. The -TeamList parameter is where you put the name of your team.

Relate-VstsIteration -Projecturi "https://{account}.visualstudio.com/{TFS Project}" -Username "" -Token $token -AuthentificationType "Token" -TeamList "Your First Team"
Relate-VstsIteration -Projecturi "https://{account}.visualstudio.com/{TFS Project}" -Username "" -Token $token -AuthentificationType "Token" -TeamList "Your Second Team"

Make as many copies of this line as you have Teams; I just have two listed here, but my real set of Teams is around 12. With all that in place, you just need to run this script, which jumps into the location where the open sourced AIT.VSTS.Scripts reside, loads the module and runs your commands. In PowerShell you will see messages about Iterations being updated for each team as it goes through the list. Once the script has completed, go to one of your teams, open the Backlog list and refresh the page, and like magic all the iterations appear.
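With a dozen or so teams, the repeated lines can also be collapsed into a loop. This is a sketch under the assumption that the team name is the only thing that varies between the lines; the team names here are placeholders:

```powershell
# Hypothetical list of team names -- replace with your own.
$teams = @("Your First Team", "Your Second Team", "Your Third Team")

foreach ($team in $teams) {
    # The same Relate-VstsIteration call as before, once per team.
    Relate-VstsIteration -Projecturi "https://{account}.visualstudio.com/{TFS Project}" `
        -Username "" -Token $token -AuthentificationType "Token" -TeamList $team
}
```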

When Should we Move the Work Items to DONE?

This is a very common question I get asked by different software development teams as I make my rounds helping clients with their ALM practices. There is a common pattern associated with this question, and I know it is in play when I see a lot of columns on their Kanban boards or, worse yet, a lot of states being tracked on the work items.

Fewer States (keep it down to no more than 4 or 5)

When I see more than the 4 or 5 states that an out of the box TFS template starts with, it tells me that the team is trying to micromanage the work items. They are adding more work to their plates than they need to. It gets hard to manage the work when it goes beyond doing the work, because then the question comes up: who is responsible for moving the work item to Done, and when is it Done?

The goal behind the work items, and here I am specifically referring to the Product Backlog Items (the requirement type) and the Bugs, is to track the work needed to complete the described work. This is in conflict with the pattern of thinking I spoke of earlier, where the thought is that we need to track the work item through all the environments as we are testing and deploying. I am telling you that you do not. Initially, when we are in the development cycle, we are working closely with the testing team, and as soon as we have something ready to test they can test it right away, because we have a proper CI/CD pipeline in place and can approve work that we have completed so that they can have a go at it and confirm that the new functionality or fix works as expected.

If the functionality is correct and the initial tests are passing, then we can go ahead and push the code to the parent branch (could be master or develop, depending on the process you are following). This starts the code review, and a new set of testing can begin, as this should trigger yet another CI/CD pipeline; this time, though, we are testing against other code as well, making sure that all the code in the build works nicely together.

The Wrong Assumptions

An incorrect assumption comes up when some of those very same test cases that were passing during the first round of functional testing now fail: that these are the same bugs. Or are they? In the first round of testing you were in an almost isolated environment alongside the development team, but now we are working from a merged branch such as master or develop. There is a good chance the failures are related, but it could also be some other piece of code acting badly that we just happened to catch using the test case we used to test that new bit of functionality.
Not making that assumption, and instead creating a new bug, gives us a cleaner slate from which to analyze this incorrect behaviour. Remember that test cases live on until they are no longer useful for the purposes of testing the application. Bugs, PBIs and Stories do not; they end after the work has been completed. They can come back, as there are times when we might have missed something, but do not assume that is what happened.

When Does the State for the Work Item switch to DONE?

The simple answer to that question is: when the work is done. The work is done when the coding and testing have been completed, but it is functional testing we are talking about here, and that testing was done from the active branch that was created for the development of this work. We have developers and testers working side by side, and in a CI/CD environment this is a very natural flow. Work gets checked into source control, the build kicks off and deploys to the development environment (not your laptop) where the developer can give it a quick smoke test. From there they can approve the build to move on into a QA environment. If the testing from QA is successful, then this could be a good place to implement a Pull Request.

The Pull Request does a couple of things: it provides a great opportunity to force a code review, to squash the multiple commits into one nice clean commit, and to automatically close the work item (set it to DONE).
That Pull Request will then start another Build, which deploys to Dev and then QA (this time functional and regression testing), as this could be a potential candidate for production.

Work Item is DONE but the testing continues

In a previous post, Let the Test Plan Tell the Story, I explain how the test plan is the real tool that tells us if the build we are testing is ready for a release into Production. This is the tool we use to verify, through test cases, that the functionality of the current new changes as well as the older features is working as expected. We are not testing the Stories and Bugs directly; those are DONE when the work is done.

Master Only in Production, an Improvement

Some time ago I wrote a blog post about My New 3 Rules for Releases, and one of those rules was to only release into production code that was built from the master branch. In that solution I wrote a PowerShell script that runs first thing on the deployment and only goes forward if the branch the build came from was master; otherwise it fails the deployment. This gave me a guarantee that builds that did not come from master would never get deployed into Production.

This solution worked very well and guaranteed that builds that did not come from master would never get into Production; it was my safety net. It still is, and I will probably continue to use it, but there has been an improvement in the process that makes this even cleaner. In my solution it was there as a safety net, just to make sure that one day, when I was clicking on things so fast and maybe doing more than one thing at a time, I did not cause this kind of error.
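The safety net can be sketched in a few lines of PowerShell. This is an illustration of the idea, not the exact script from that earlier post; it assumes it runs as the first task of the release and that the build's source branch is exposed through the BUILD_SOURCEBRANCH environment variable:

```powershell
# Fail the deployment right away if the build did not come from master.
$branch = $env:BUILD_SOURCEBRANCH   # e.g. "refs/heads/master"

if ($branch -ne "refs/heads/master") {
    Write-Error "Build came from '$branch', not master. Stopping the deployment."
    exit 1
}

Write-Output "Build came from master, carrying on with the deployment."
```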

Artifact Condition

The new improvement is what is called an Artifact condition, and it can be specific to each environment you are deploying to. In this case I have selected my Production environment and said to only trigger a deployment there when the Dev deployment succeeded and the branch is master. Of course it still includes all the approval and acceptance gates, but the key thing to note is that if those first two conditions are not met, it is not even going to trigger a Deployment to Production. In the past, when code from a non-master branch was successful in Dev or QA, I would have to fail it somewhere along the way to stop the pipeline; now the pipeline just nicely ends. Much, much cleaner.

How do you set it up

This is kind of tricky because in VSTS Microsoft has just deployed a new Release Editor that seems to be missing this piece for now. Not to worry, as the new Release editor is still in preview and you can easily switch back and forth. When you go to Releases and click on the Edit link, if the screen looks like the following, click on the Edit (Old Editor) link to switch back to the old style Release editor.

Next you select your Production environment, click on the ellipsis button and select the Deployment conditions link.

Finally the Configuration Screen

Now we are finally on the configuration page where all the real magic happens. I have listed 5 simple steps to follow to set up a deployment that will only trigger when the build came from the master branch and the previous environment's deployment was successful.

  1. First make sure that you have the option set to trigger a deployment into production after a successful deployment of the previous environment.
  2. Next, check the new checkbox, as this enables conditions on the new deployment.
  3. Click the Add artifact condition big green plus button.
  4. Set the repository to only include the master branch as the condition.
  5. Finally click the OK button to save all your adjustments.

Now, you won’t even be given the opportunity to promote the build into Production if it was not built from the master branch.

One Build Definition to Support Multiple Branches

Before I moved to git, I had the same situation that many of you have had when it comes to managing build definitions. I had a build definition for each branch and for a single product this could have been several all doing the same thing. Yea, sure they were clones of each other and all I really needed to do was to change the path to the source in each case. Then in order to keep track of what each of these builds was for and what might have triggered it I would develop some sort of naming convention so that I could sort of tell without having to open it up. This really felt dirty and raised a red flag for me because once again we were introducing something into our environment that was not the same, but sort of the same. Wouldn’t it be better to actually have one build definition that we can use for all these various types of builds and different branches?

Builds with Git

When you really look at git, you learn that a branch is nothing more than a pointer to a commit. Compare this to any of the centralized source control systems out there, including Team Foundation Version Control (TFVC), where a branch points to a copy of the source in a different location. With that said, I should be able to create one build definition and, with a wildcard, trigger a Continuous Integration (CI) build by checking in code against the appropriate branch. That is absolutely true, and for the remainder of this post we will go over the simple steps to make that happen.
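You can see this for yourself in any local git repository: a branch ref is just a tiny text file holding a commit hash. A quick sketch, assuming the ref has not been packed into .git\packed-refs:

```powershell
# In the root of a local git repository, a branch ref is a tiny text file
# containing nothing but the hash of the commit it points to.
Get-Content .git\refs\heads\master

# git reports the same hash for the tip of the branch.
git rev-parse master
```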

Same Build Definition for All Branches

I will assume that you have a build that is working and that the source code for this build is a git repository on TFS or VSTS. Because it is a Git repo, you can specify path filters to reduce the set of files that trigger a build. According to the documentation, if you don't set path filters, the root folder of the repo is implicitly included by default. When you add an explicit path filter, the implicit include of the root folder is removed.

This is exactly what we want to do, but we want to include a couple of different paths. So let's start by going to the Build Definition and clicking on the Triggers sub-menu. Make sure the Continuous Integration switch is turned on, and next turn our attention to the Branch Filters. In my branching schema I use three (3) kinds of paths. Master, of course, as this is where all the finished and releasable code ends up. I also use features for any new items I am implementing, and I usually include the Work Item Number in my branch as well as a short description. So an example of a feature branch for me would look something like:


With that said, I have a similar path for bugs, which are things that have incorrect behavior or something that needs to be fixed. In my Branch Filters I include 3 paths, and the feature and bug paths include a wildcard so that everything that is part of a feature or bug branch is included.
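As an illustration, the three branch filters could look something like this (the path names are assumptions based on the naming convention described above):

```
Include  master
Include  features/*
Include  bugs/*
```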

With this in place, a commit pushed to the remote repository will kick off a new build for any new features and bugs that I have been working on. Even better, the very same build definition kicks off whenever I complete a pull request into Master. Not a clone or a copy, but exactly the same build. There is never a question about what happened to the build, but rather what code change or merge did we introduce that caused this problem.

Before I discovered this I was happily flipping the branch name between my features and bugs; the definition defaulted to master. Because of that I wasn't even bothering with CI for the development branches, and the trick was to always remember to build from the correct branch. Now I don't even have to think about that, because the branch that triggered the build is the branch that is being built. Just another thing that I could have easily screwed up is out of the picture. I don't even have to think about kicking off a build and deployment, as this just happens every time I commit my code and push those commits up to the remote Git.

Sending an Email to the Developer when the Build Failed

Over the many versions of TFS there have been workarounds that allowed us to send an email to the developer that queued a build when it failed. Although these workarounds did work, I always felt this should have been handled by the alert system within TFS. What was lacking was some sort of condition that, if the build failed, the email should go to the developer that queued it up.

More recently I was tasked to find or build another workaround that would work within the vNext version of the Build engine. Well, I started down this quest collecting APIs that I could call, when I thought I would have one more look at the TFS alerts; maybe there were some updates to that part of the tool.

New Notification Engine

What do you know, there were a lot of changes made to this engine, but not for TFS 2015; these updates show up in TFS 2017 Update 1. One more reason to update to TFS 2017 for all those still using an on premise version of TFS, as this has been in VSTS for a while now. In the remainder of this post I will walk you through the steps to implement this big improvement in the notifications and how to solve the problem of sending an email only to the developer that caused the build to fail.

If you are on VSTS or TFS 2017 Update 1, the steps are exactly the same which is nice as in my line of work I always hate having to remember two different ways for doing the same thing.

New Name

First off, the alerts name has been changed to Notifications and you get to them by hovering on the Gear icon and selecting Notifications.

However, there is a difference in where you select this gear. Make sure that you are in a TFS Project; if the drop down on the left says Projects, that would indicate you are a level too high, and in that case click on the drop down and select one. After the page loads you should see a big blue button called “+ New”; click this button. The page changes to allow you to select “Build” under the Category and “A build fails” under the template. After you have done this, click on the “Next” button.

This opens up a very different looking screen, but the conditions to make this work are all there. First we select “Specific team members” for the Deliver to choice, and in the Roles choice select “Requested by”. This is the portion of the Notification that selects only the team member that queued up the build, in other words requested it.

Although we had to select a project before we could get to this Notification area, in the next section, the Filter, we can select “Any team project”, which applies this notification to all the TFS Projects. The filter criteria should be correct and not require any changes, as this basically fires when the build has Failed. You just click on the “Finish” button and the notification is ready for testing.

What did the Notification area above the Project do?

Well, just before I let you go setting up your Notifications using the proper tool, I thought I would let you know what would have happened if we had not selected a Project first before going to the Notification screen. If you do this, you will notice that some of the criteria we used to narrow the notification down to the developer that requested the build are not there. These Notifications are the subscription-only ones that have been in TFS since the beginning of the product. This does feel a bit strange to me, almost like these two concepts should be the opposite of what they currently are. It is what it is, but at long last we can use the Notification engine to better suit our needs.

When is Waterfall a Good Choice

In my work as an ALM consultant I am often asked, or simply told, that a team can't practice agile; they have to do waterfall. I think they are looking at this in the wrong way. One of the things to think about in waterfall versus agile is what these two methodologies are really all about. Is waterfall really all that bad? The answer to that question is: no, waterfall is actually a great methodology and a pattern that has worked for some projects. Not a lot in the software field, simply because in Waterfall you need to know all the requirements up front and work towards completing that plan. In other words, you are working the plan, and the schedule becomes king, not the actual priority or benefit that you can provide to the end user.

Inspection and Adaptation

In an agile setting, we know up front that we will not know everything there is to know about the solution we are designing and coding until we start. We recognize and acknowledge that, as we start to develop and get early feedback, we may have to go back to the work that we have done and make changes. This is part of the Inspection and Adaptation that goes on in Agile regularly. It is totally missing in Waterfall. Before you start attacking me with your arrows and spears: yes, I know you can make changes in Waterfall; heaven knows how many times I have heard the excuse that projects were not completed on time, or at all, because the scope kept on changing. However, let's explore that thought process for a minute. Let's go through the steps it takes to make a change in a Waterfall project.

First off, someone has to raise a change request in order to make that change. This is likely coming from the development team, as they ran into a roadblock and would not be able to complete the project the way the requirements were written. The change request would almost never come from the end user, because they won't likely be able to provide any feedback until the application has moved into testing, which is always done near the end of the project. Next, there has to be an impact analysis on how this change may affect the rest of the project. What I find interesting here is that we are still bound to theories. The requirements were developed on theories of how things should work in the mind of a Business Analyst, and now we have an impact analysis based on the same thin air: how we think it should work. One of the great things about any agile project is that it is based on living, breathing code. If something isn't working the way we need it to, we can make changes and continue to get feedback until everyone is happy.

Big Design Up Front

With waterfall, things need to happen in a very precise set of steps. After the requirements are gathered and everyone is in agreement on what should go into the application, it goes to the architects who come up with the design. Many of the choices made during this step are based on the specific known aspects of the requirements documentation. The problem here is that the organization may not know whether these are all the requirements, and there is a high likelihood that they aren't. There is a myth that the Scrum community calls out in its training material: the myth that the longer you spend studying and analyzing the problem, the more precise your solution will be.

The worst part of “Big Design Up Front” is that there might have been weeks, maybe even months, of work to create this design. Which might be okay if the project were to come together in exactly that way. Chances are, a change request is going to come along that breaks the design, sending the architects back to the drawing board. In any agile environment, we expect that the application and the design will probably change many times as we continue to inspect and adapt during development. The big difference is that agile does not spend a lot of time up front on the design, but continues to design and redesign as the project moves forward.

But We Need those Big Requirement Documents

Oh really? I have challenged many of my clients to prove me wrong on this. No one likes to read these documents because of the amount of boilerplate material in them. Way back when I was leading teams on a new project, I would often get these 60 to 70 page documents. I would go through them with my highlighter and find about a page and a half of things we needed to do; the rest was filler. I am not alone in this thinking, as I have seen lots of teams doing something similar to my highlighter exercise, except they may put the results into tools similar to Team Foundation Server. These teams would love to have the BAs enter the requirements directly into TFS, but they struggle with having to produce these big documents.

Again I ask you: who are you writing these documents for? Many hours are spent putting these documents together only for them to end up in an archive somewhere. Developers don't want them; they want the actual requirements or stories pulled out so they can track the things they need to build. Testers tend to follow what is going on in development more closely than the big document, simply because there is so much boilerplate in these documents that it is hard to work with. Now, you might be getting upset with me again, because there is important stuff in that boilerplate text. True, but I don't think it belongs in this document, which is just a document. The problem with a document is that unless it has some way of being enforced, it is just ink on a piece of paper. Everyone who has spent any time with your company knows what these requirements are that have to be in every product, so wouldn't it make more sense to have them in a Regression Test that gets run at least once before we release into Production? How about the really important ones being in a series of smoke tests which must be run before the Testers are even going to look at that build? This way those boilerplate requirements will be enforced.


Okay, I will admit I used the title of this blog post to attract development teams that are determined to work in a waterfall methodology, and you probably thought this post would give you some much needed ammunition to fire back at the agile folks. Waterfall works great if everything is known about the project and you have done this same kind of thing many, many times. In those circumstances it is going to work great, as you will know exactly how long it is going to take you and when the end user can have it in their hands. However, I have to say that very little of that kind of development is done in the United States or Canada. Those are the kinds of projects that can easily be done offshore for a lot less money, because they become just a labor exercise. Real development involves going where no one has gone before and doing things you were not even sure could be possible. That is why you need to adhere to an agile approach: take a couple of steps forward, expect to take a step back, adjust and move forward again. Development involves trying things and retrying things until you get the results that the end users are expecting.

An Argument against the Date Based Version Number

In the past I have followed two types of version numbers for the products that I build and support on the side. Products that were customer facing all followed the Semantic Versioning concept. If there was a big change, but not a breaking one, the minor number was incremented. If the change could contain breaking changes, then the Major number was incremented. This worked well in that every time the code changed, the third digit, the build number, was incremented. We ignored the fourth number, the revision, as that was just a number to keep the build ID, which is made up of the major, minor, build and revision, unique. If I have 1 through 18 in revision numbers all for the same build, it means that nothing in the code has changed since revision 1; we are working on changes to the actual build definition, and these are just builds of the same code.

Projects that are not Customer facing

For other types of products, things that I used internally, I used a different format, because at the time I didn't think it mattered and my only goal was to be able to look at the properties of an assembly and know which build it came from. For that I used a format that would change automatically for each build, so I would never have to change any of the version numbers. This format followed a pattern like YY.MM.DD.RR, the RR representing the revision number that I let TFS create automatically to keep the build number unique. So for a build that was run on, say, February 23, 2017, that version number could look like:


I would use a PowerShell script to write this to the assembly just prior to compile time and this would work as my version number.

I have used this format for years, and there have been many blog posts on how to do this automatically in TFS ever since the 2010 release. Back then we were building activities to be used in the xaml builds; since then the ALM Rangers have converted this into a PowerShell script as part of the Build Extensions, and many others are available in the TFS Marketplace as build tasks. The basic idea is to have this format as part of the build definition, and most of these tools will extract the version-like number out of the build name, which becomes the version number.

Build number format: MyBuildName_v$(Year:yy).$(Month).$(DayOfMonth)$(Rev:.rr)
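As a rough sketch of what those tools do (the regex and the file-stamping approach here are my own assumptions for illustration; the real Build Extensions task is more thorough), a pre-build script might pull the version out of the build number and stamp it into the AssemblyInfo files:

```powershell
# Extract a version-like number (e.g. 17.2.23.1) from the build number,
# which the agent exposes as BUILD_BUILDNUMBER, e.g. "MyBuildName_v17.2.23.1".
$buildNumber = $env:BUILD_BUILDNUMBER
if ($buildNumber -match '\d+\.\d+\.\d+\.\d+') {
    $version = $Matches[0]

    # Stamp the version into every AssemblyInfo.cs under the sources folder.
    Get-ChildItem -Path $env:BUILD_SOURCESDIRECTORY -Recurse -Filter AssemblyInfo.cs |
        ForEach-Object {
            (Get-Content $_.FullName) `
                -replace 'AssemblyVersion\("[^"]*"\)', "AssemblyVersion(""$version"")" |
                Set-Content $_.FullName
        }
}
```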

But I Have Changed My Mind

Since moving into a more DevOps mindset, if you please, I began to see that I was losing some valuable information about my internal builds. I had no way of knowing when an actual code change occurred, because if I built the product on Feb 23 and then built it again on Feb 24 because I wanted to try something on the build machine, there was no way to tell from the build number, or the version of the assembly, whether anything had changed. This is important stuff, but I also did not want to manually tweak the build number every time I pushed something new into production; and, looking back at my old post My New 3 Rules for Releases, the tools and solution to accomplish this were right at my fingertips.

But this is done on the Releases

Yes, they are, and guess what? I did not have a formal release pipeline for some of these internal products. Some of them were just packaged up as NuGet packages with a Chocolatey wrapper. You will want to check out my post on How I Use Chocolatey in my Releases to really understand what I am talking about here.

After thinking about this for a while, and having similar discussions with clients, I came up with the idea of having, at a minimum, a Dev and a Prod environment. The Dev environment does what I have pretty much always done: it deploys the application and maybe even runs some tests to verify that the build has been successful. Sometimes I find issues here, return to the source, fix them up and send out another build.

When I am happy with the results I promote it to Production. The promotion does not do anything to any environment or machine, but it does lock the build, increment the build number and, my newest thing, create a Release work item.

Why Create a Release Work item

I will talk about this feature in more detail, with some code samples, in a future post. Briefly, the whole reason for creating a Release work item when I deploy to Production is to keep track of how many releases I have done in the last quarter. I love good metrics, and this is one that lets me know I am pushing code out into production and not just tweaking it to death. Remember, you can't get good feedback if you don't get it out there.
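Pending that future post, the general shape is a call to the work item tracking REST API. This is a minimal sketch under a few assumptions: a custom work item type named Release exists in the process template, the account and project names are placeholders, and the api-version shown is the VSTS-era one:

```powershell
# Create a hypothetical "Release" work item via the REST API.
$token   = "<Your Token Goes Here>"
$headers = @{
    Authorization  = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$token"))
    "Content-Type" = "application/json-patch+json"
}

# JSON Patch body setting the title of the new work item.
$body = '[ { "op": "add", "path": "/fields/System.Title", "value": "Release - ' + (Get-Date -Format "yyyy-MM-dd") + '" } ]'

# Work items are created with a PATCH against the type name.
Invoke-RestMethod -Method Patch `
    -Uri "https://{account}.visualstudio.com/{TFS Project}/_apis/wit/workitems/`$Release?api-version=2.2" `
    -Headers $headers -Body $body
```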

In Conclusion

So there you have it: for all my products, internal or customer facing, I have much more clarity as to when a build has new code in it. I could have gone through source control, found the latest changeset number from the code history and seen the first time it was used in a build, but that is a lot of work for something I can now see at a glance without having to look anywhere else.

Security Configuration for Teams

Typically, if it does not matter whether team members can view the work of other teams, or they even work across teams, which is usually the case, then Contributor access at the TFS Project level is all that is needed and desired. However, there may be situations where you need to guard data so that teams cannot see each other's source or work items, yet remain within the same TFS Project so that you can get good cross team reporting that makes sense.

This post will take you through the steps that you will need to take in order to put that level of security in place.

Creating Teams

You create the teams and administer the security from the TFS admin page. You need to be a Project Administrator to create teams, and a Collection Administrator to create TFS Projects. Assuming that you have the appropriate permission, we start from the normal TFS web access page and click on the gear icon on the very far right of the page.

Then just click on the New team button to create a new Team.

When creating a team it is important not to put it into any of the built in TFS Security groups. These groups are set up at the TFS Project level, and their rights and permissions filter all the way down to include all the teams. The end result is that you add a member to one team and they can still see the work and source of all the other teams, because they got their permissions from the TFS Project level.

When you create the team, make sure that you set the permissions to (Do not add to a security group). Although the dialog does not say so, this team also gets its own TFS Security Group with that name. This means that anyone we add to this team would only have access to the things we have given this team permission to (provided they did not get higher permissions by being a member of some other team that does belong to a more elevated security group).

Before we move on to the actual security, we have to set up the permissions for this team from the perspective of the TFS Project. There are a few things to set here, otherwise the team members would not even be able to see their team. You do this by starting from the root team (this matches the name of the TFS Project) in the admin page. While still on the page where you created the team, click on the Security tab.

Here you want to select your new team and then allow the appropriate permissions at the TFS Project level. You might be tempted not to set View project-level information, but without it they cannot even see the project, let alone get to their team. Things you definitely do not want to allow are the ability to Delete team project or to edit project-level information; that sort of thing should be reserved for the Project Administrators.

Area Path

The next thing we need to tackle is the area path. Starting with TFS 2012, the area path is what represents the team, and it is the work items in the team's area path that let us keep work items visible only to the appropriate team.

When the security screen first pops up you can see all the security groups from the Project level. It is important to note that if you want to restrict any users, you must make sure they do not fall into any of these groups; otherwise it will leave you wondering why they are able to access things you did not give them permission to.

The first thing you will want to do is to add the team security group to the area.

Find your team security group (it will exist from the creation of the team) and click the Save changes button.

With the new TFS group selected, you will see on the right that nothing is set by default. Select all the permissions that you want to grant to the users of this group and then click on Save changes.

Version Control Security

Version control security works in a similar way to what we set up with the areas. The security is placed on a folder, and the permissions are then set on each folder for the team that should have access to that folder and everything below it (recursively).

The first step is to right-click on the folder where you want to apply the security, go down to Advanced in the context menu that pops up, and finally click on the Security tab.

When this folder's security opens up for the first time, the group for the team will not be in the list of roles that have permissions. The first thing you need to do on this screen is click on the Add button and choose the menu option "Add TFS group".

Next you will need to select the team group and add the permissions that you want this new group to have and finally click on the Save changes.

That is really all it takes to set up security at the team level. The thing to keep in mind is that the members should not belong to any of the default roles; as you can see from the image above, all of these roles have at a minimum Read permission (the Readers role). If you follow this pattern, where the members belong only to their team, they will only see source that their team group can see. It will be as if the other source were not even there.

Shared Folders Security

For each of the teams to be able to show query tiles on their home page, those queries must exist under Shared Queries. Because each team has different needs, and reports on items different from the other teams, each should have its own folder that only that team can see. One way to manage this is to create a query folder for each team under the Shared Queries folder and then add security specific to each team.

Start in the Shared Queries folder; you can do this in either Web Access or Visual Studio. Web Access is shown here since everyone has access to that tool, but the steps in Visual Studio are very similar. From the home page, click on the View queries link.

Expand the Shared Queries folder to expose all the folders and out of the box queries. Then right click onto the Shared Queries folder and select “New query folder”.

Enter the name of the team for this query folder. After it has been created, right-click on the team folder and select Security…

Click on the Add dropdown control and choose "Add TFS group". This will open another dialog box so that we can add the Donald Team group to this folder.

Find or enter the name of the Team and then click on the Save changes button.

With the team security group selected you can select the permissions that they are allowed to have. Typically this would be the Contribute and the Read permissions. Then click on the Save changes button.

Now, going back to that Shared Queries view, look at what a member who belongs only to this team would see. They can only see their team folder under Shared Queries; even the default queries are not visible.

Active Directory Groups

One final discussion in this area of security: how Active Directory groups play into this whole thing. The TFS groups are used to manage the permissions, but instead of adding individuals to the group you add the AD group instead.

It pretty much has to be done this way because TFS automatically creates a TFS group when the new team is created. Another approach would have been to create a separate TFS group and give it the permissions directly, but since TFS is going to create the team's group regardless, this is the cleaner way to go.

Start from the home page of the team, making sure you are in the team that you want to add the Active Directory groups to. Next click on the Manage all members link, which opens up a new window.

In this window click on the Add… dropdown and choose "Add Windows user or group". This is where you add the Active Directory (AD) group that will be used to manage the actual users. From this point on, as you add or remove people from the AD groups, they gain or lose the rights that were assigned to the appropriate team.

My New 3 Rules for Releases

Every one of my products has an automated build and a properly managed release pipeline. At the time I just thought of it as business as usual, since I was always on my way to a well-performing DevOps operation in my personal development efforts. But something happened in the way I started approaching things that you don't really plan for: things just start to happen when you get into a situation where everything is automated (or at least should be), and that is what this post is about.

I don’t have to wait

One of the first things I noticed was that I no longer felt I needed to wait for some big plan of mine before doing a release. In the past I used the Epic work item to plan out the features I would need to complete to get the next release out. Even before I had all these steps automated, I noticed that plans changed quite often. Priorities and well-meaning release plans would take a turn and become something different, such as finding a critical bug that affected current customers. I would want to release that fix or feature as quickly as possible.

Before everything was automated, these things bothered me, but there wasn't an easy way to just get the release out there; there were still enough manual steps that you wanted to limit releases. Now, however, there is nothing stopping me from taking a build that has a completed bug fix or feature, pushing it down the pipeline, and getting it released into production. But if this rush to production is suddenly available to me, isn't there the possibility that something that wasn't quite ready gets into production by accident? That is why I came up with these 3 new rules that I set for myself, which must be followed before a build can be pushed into production.

My New 3 Rules for Releases

  1. Don’t allow any builds that came from any branch other than Master (git) or Main (tfvc) into production. If it is not Master then it should just be rejected in the deployment steps.
  2. A build that is released successfully into Production will be locked indefinitely with regard to the retention policy.
  3. The build number must be incremented any time we successfully release into production.

What follows are the ways I automated these 3 rules and made them part of my operation. Now there is never a fear that something might get deployed into production that really should not be. I can push things into production when it is important to do so, and sometimes I might delay a release because there is no big benefit, which saves the customers from having to download and install a release that could be packaged up with a later one. The point is that a release can be made any time it needs to be, with no more of this long-range planning that never happens the way you expected anyway.

No Builds into Production that did not come from Master

As you may have gathered from some of my earlier posts, my personal projects have pretty much all ended up in git repositories remotely hosted in Visual Studio Team Services, which is Microsoft’s cloud implementation of TFS. With that, I am following a very typical git workflow: every Product Backlog Item or Bug starts with a new feature or bug branch. This is really nice, as it gives me a good level of isolation and the knowledge that my work will not affect the working code. It also gives me the flexibility to fix an important bug or PBI that changed in priority, knowing that the code I tore apart elsewhere will not affect the outcome.

This also gives me the opportunity to test the code, confirm that it is complete, and give it one more look through, as the only way code from a branch can get into master is through a pull request. The pull request has a number of options with it as well, such as squashing all the commits into a single commit (so I get a very clean and clear way of answering the question: how did you add this feature?) and deleting the branch after the merge.

Master is always the branch that represents production, or ready for production. I wanted the code to come only from master because this is where all the branches come back to. Having a rule like this makes sure that the merge always happens and that nothing gets left out. I have seen some very complicated branching structures when working with clients, and something I have seen quite often is that branches did not always get merged back to where they should be. There would be complicated discussions about where the code that goes to production should really come from. Here I have eliminated all that complexity with a rule that says you can't push a build that did not come from master into Production.

Now, how do you enforce this automatically? I could not find a task that would help me with this, but I did know how to do it with a simple PowerShell script.


# Assumption: $branch comes from the predefined build variable
$branch = $env:BUILD_SOURCEBRANCHNAME

if ($branch -ne "master") {
    Write-Host "Cannot deploy builds from the $branch branch into Production"
    Write-Error "Can only deploy builds from the master branch into Production"
    exit 1
}
else {
    Write-Host "Carry on, this build qualifies for a deployment into Production"
}

I use a PowerShell task at the top of the release for the Production environment, running this as an inline script, to implement this rule. If for some reason I push a build that came from some other branch, this task fails and the release goes no farther. In my world I typically have one build definition that points to the master branch by default, but I override that when I am working on one of my feature branches to get feedback on how the code is building and deploying. I really like this because I am using the very same build and deployment scripts that I use when going into production, so you can see how a build from one of these branches could accidentally get into production if I did not have this very simple rule enforcement.

Locking A Released Build

During development, several builds and deployments are happening all the time. Most of these I don't really care about, as their only real value is feedback that the application can still build and deploy as it always has. So one thing I never want to do is lock down a build that came from anything other than the master branch. I used to have a task on the build definition that would lock down any build created from the master branch, but that is not always a good rule to live by either: there have been times when the deployment of a master build failed while going through the release pipeline, and other times when it did not fail but there was a conscious decision to hold off on a release and ship it later with a few more features.

What I needed was a task that would put an infinite lock on the build whenever it was successfully deployed into Production. I found one in the Microsoft Marketplace that does exactly that. The task is part of a small collection of build tasks written by Richard Fennell, a long-time ALM MVP. In the Marketplace it is called “Build Updating Tasks”, and if you search for that, “Richard Fennell”, or “Black Marble” I am sure you will find it.

I have this task near the end of my Prod deployment, with the Build selection mode set to “On primary build artifact”, and that is it. It works like a charm: when I deploy to production successfully, it finds that build and sets its retention to keep forever. I no longer have to think about making sure I don’t lose the builds that are in Production.

Increment the Build number

This rule has really allowed me to move freely into my new DevOps approach and no longer have a dependency on the long-planned release, which, as I explained earlier, never got released the way I thought it would. Things and priorities change; that is life. In my build definition I have a set of variables: one called MajorMinorNumber and the other BuildNumber. These, combined with the TFS revision number on the end, give me the version number of my release. So in the build definition, under the General sub-tab, my Build number format looks similar to:


Now let's break this down a little. The MajorMinorNumber changes rarely, as it represents big changes in the application. This follows something close to semantic versioning: if there is going to be a breaking change I update the major number; if there is going to be a big change that remains backwards compatible, the minor number is incremented. In the case where I am just adding some new features that are additive to the application, or fixing some bugs, the build number is incremented. The 4th number, the revision, is left to TFS to guarantee that we always have a unique build number.
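The format string itself does not appear above, but given the variables described, it presumably looks something like the following (an assumption on my part; $(Rev:.r) is the build-number token TFS uses to append the auto-incrementing revision):

```
$(MajorMinorNumber).$(BuildNumber)$(Rev:.r)
```

With MajorMinorNumber set to 2.1 and BuildNumber set to 5, this would yield build numbers like 2.1.5.1, 2.1.5.2, and so on.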

In the past I have been known to use a date-like version number for applications where I didn't think it would really matter. However, I noticed even with those that some very important information gets lost. With a daily build, the day part of the version number would increment every day even though I might still be working on the same PBI or bug. Instead, I want a new build number after I have a successful deployment into Production. This means I have customers out there who may have upgraded to a newer version, and with that I can even produce release notes describing what was part of that release. But I did not want to go and increment the build number in the build definition every time this happened; I wanted this to be automatic as well.

The solution is another special task that is part of the extension we just installed. There is a task called “Update Build Variable”, and I have it as the very last task of the deployment into my Prod environment. It is very simple to set up: the Build selection mode is “Only primary build artifact”, the Variable to update is “BuildNumber”, and the Update mode is “Autoincrement”.

Now, after a successful deployment into Production, my build number is incremented and ready to go for either my next long-planned set of features or getting out that really quick, important fix or special feature that I just needed to ship.

My Experience with Git Sub-modules

I just replaced my phone with a new Microsoft Lumia 950 XL, which is a great phone. In my usual fashion of checking out the new features of my phone, I wanted to see how my web sites looked.

The operating system of this phone is the mobile version of Windows 10, which of course uses the new browser called Edge. Well, it seems that my blog did not look good at all on this new platform and was in fact not even close to being usable. Even though I had the fonts set to the smallest setting, what was displayed were huge letters, so hardly any words fit on a line; it just looked crazy. However, I noticed that other web sites looked just fine, especially the ones I recognized as truly being built around the Bootstrap framework.

I was also surprised at how many other web sites look bad in this browser, with the same problems mine had. I may address some of that in a later post, but right now what I wanted to find out was whether changing the style of this blog would solve my problem. If I just changed the theme or something, could my site look great again? This was all very surprising to me, as I had tested the responsiveness of this site and it always looked good; I just don't know why my new phone made it look so bad.

New Theme, based on Bootstrap

Looking for different themes for Hexo was not a problem; there are many of them, and most are even free. I am really enjoying the work I have done with the Bootstrap framework, so when I found a Hexo theme built around Bootstrap, you know I just had to try it. This theme looked great: a much simpler-looking theme than the one I was using, which was really the default theme with a few customizations. The new theme was also open source, in another GitHub repository. The instructions said to use some submodule mumbo jumbo to pull the source into the build. Now I was curious, as there was something I had seen on the build definition when working with git repositories: a simple checkbox that says include submodules. It looked like it was time to find out what git submodules are all about.

Welcome to another git light bulb moment.

What is a git submodule?

The concept of a git submodule was entirely new to me, as a developer who has used, for the most part, a centralized version control system of one sort or another for most of my career. I looked up the help files for git submodules and read a few blog posts, and it can get quite complicated; but rather than going through everything it can do, let me explain how it worked for me to quickly update the theme for my blog. In short, a git submodule is another git repository that can be used to provide source for certain parts of yet another git repository without being part of that repository.
In other words, instead of having to take all the source from this other git repository and add it to my existing blog git repository, my repository instead holds a reference to the other repository and will pull down that code so that I can use it during my build, both locally and on the build machine. And the crazy thing is that it makes it really easy for me to keep up with the latest changes, because I don't have to manage pulling the latest from this other repository through the submodule.

I started from my local git repository, and because I wanted this library in my themes folder, I navigated to that folder, as this is where Hexo expects to find themes. Then, using posh-git (a PowerShell module for working with git), I entered the following command.

git submodule add

This created the folder hexo-theme-bootstrap-blog, downloaded the entire git repository into my local workspace, and added a file called .gitmodules at the root of my blog git repository. Looking inside the file, it contains the following:

[submodule "themes/bootstrap-blog"]
path = themes/bootstrap-blog
url =

When I added these changes to my staging area by using the add command:

git add .

It only added the .gitmodules file, and of course the push likewise only added that file to my remote git repository in TFS. Looking at the code of this blog repository in TFS, there is no evidence that the theme has been added to the repository, because it has not. Instead there is this file that tells the build machine, and any other local git repository, where to find the theme and how to get it. The only thing left was to change my _config.yml file to tell it to use the bootstrap-blog theme and run my builds. Everything works like a charm.
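The whole flow can be sketched end-to-end with throwaway local repositories (stand-ins for the real theme repository on GitHub, whose URL is omitted above):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# A stand-in "theme" repository (in real life this is the remote GitHub repo)
git init -q theme
git -C theme -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "theme v1"

# The blog repository that will consume the theme as a submodule
git init -q blog && cd blog
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"

# Add the theme as a submodule under themes/ (newer git requires the file
# protocol to be explicitly allowed for local paths like this)
git -c protocol.file.allow=always submodule add -q "$tmp/theme" themes/bootstrap-blog

# Only the small .gitmodules reference is staged, not the theme's own history
cat .gitmodules
```

On a fresh clone of the consuming repository, `git submodule update --init` (or the "Checkout submodules" option on the build definition) is what actually pulls the theme's files down.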

I really don't think there is any way you can do something like this using centralized version control. Hmm, makes me wonder: where else can I use git submodules?

Some MSDeploy Tricks I've Learned

In an earlier post I talked about Hexo, the tool I use for this blog. In that post I mentioned how delighted I was with the process, except for one thing that bothered me: the deployment to the Azure website. For this I was using FTP to push the files from the public folder to Azure. Instead I was hoping for an MSDeploy solution, but that is harder than it sounds, especially when you are not using a Visual Studio project and MSBuild to create the application.

In this post I will take you on my journey to find a working solution that enables me to deploy my blog as an MSDeploy package to the Azure website.

What is in the Public Folder

First off, I guess we should talk about what is in this folder I call public. As I mentioned in my Hexo post, the Hexo generate command takes all my posts, written in simple markup, and creates the output that is my website, placing it in a folder called public.

It is the output of this folder that I wish to create the MSDeploy package from. This is quite straightforward, as I already knew that you can use MSDeploy not only to deploy a package but also to create one. It just requires knowing how to call MSDeploy from the command line.

Calling MSDeploy directly via Command Line

The basic syntax for creating a package with MSDeploy is to call the program MSDeploy.exe with the parameter -verb (the verb choice is pretty much always sync), then the parameter -source, which says where the source is, and finally -dest, which tells it where to place the package (or, if the source is a package, where to deploy it to).
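As a hedged sketch of that syntax (the folder and package paths here are hypothetical), packaging a content folder from PowerShell would look like this:

```powershell
# Sync a content folder into a package (.zip); paths are hypothetical
& "C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" `
    -verb:sync `
    -source:contentPath="C:\3WInc-Agent\_work\8\s\public" `
    -dest:package="C:\drops\blog.zip"
```

Swapping the -source and -dest values (package as the source, contentPath as the destination) runs the same sync in the deploy direction.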

Using Manifest files

MSDeploy is very powerful, with many options and things you can do with it. I have found it difficult to learn because, as far as I can tell, there is no good book or course that takes you into any real depth with this tool. I did come across a blog, DotNet Catch, that covers MSDeploy quite often. It was there that I learned about creating and deploying MSDeploy packages using manifest files.

In this scenario I have a small XML file that says where the content is found, and for that I write out the path to the public folder on my build machine. I call this file manifest.source.xml:

<?xml version="1.0" encoding="utf-8"?>
<sitemanifest>
  <contentPath path="C:\3WInc-Agent\_work\8\s\public" />
  <createApp path="" />
</sitemanifest>

With the source manifest, and an existing application that I want to package up sitting in the public folder at the disclosed location, I just have to call the following command to generate an MSDeploy package. If you are calling this from the command line on your machine, then this should all be on one line.

"C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" 

If you are calling this from TFS, you would use the Command Line task; in the first field, called Tool, you would put the path to the msdeploy.exe program. The other two lines would be combined into one line and entered into the Arguments box.

Now, in order for that to work, I need a similar XML file used as the destination manifest to tell MSDeploy that this package is a sync to the particular website. This file I called manifest.dest.xml:

<?xml version="1.0" encoding="utf-8"?>
<sitemanifest>
  <contentPath path="Default Web Site" />
  <createApp path="Default Web Site" />
</sitemanifest>

The syntax to call this package and the destination manifest file is:

"C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe"

This works great, except that I cannot use the xml files when deploying to my Azure websites, as I do not have that kind of control over them. It is not a virtual machine that I can log onto, or run a remote PowerShell script against, and this package won't deploy onto that environment without that. I need another approach to get this to work the way I need it to.
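For reference, the two manifest-driven calls described above would presumably take roughly this shape (hedged; the package path is hypothetical): the first creates the package from the source manifest, and the second syncs that package onto the destination manifest.

```powershell
# Create the package from the source manifest (package path hypothetical)
& "C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" `
    -verb:sync `
    -source:manifest="manifest.source.xml" `
    -dest:package="C:\drops\blog.zip"

# Deploy that package onto the site described by the destination manifest
& "C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" `
    -verb:sync `
    -source:package="C:\drops\blog.zip" `
    -dest:manifest="manifest.dest.xml"
```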

Deploy to the Build Machine's IIS to Create a New MSDeploy Package

This next idea I came up with is a little strange, and I had to get over the fact that I was configuring a web server on my build machine, but that is exactly what I did. My build machine is a Windows Server 2012 R2 virtual machine, so I turned on the Web Server role from Roles and Features. Then, using the above set of commands called from a Command Line task, just like the ones I used to create the package from the public folder, I deployed it to the build machine.

At this point I could even log into the build machine and confirm that I do indeed have a working web site with all my latest posts in it. I then called MSDeploy once more and created a new package from the web site.

"C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe"
-source:iisApp="Default Web Site"

The resulting package was easily deployed to my Azure website without any issue whatsoever. As you may have noticed, I gave the file the exact same name and location as the old one. There was no need to keep the first package, as it was only used to deploy to the build machine so that we could create the one we really want. To make sure that went smoothly, I deleted the old one before calling this last command, which is also a Command Line task in the build definition.

Success on Azure Website

In my release definition for the Azure web site deployment, I just needed to use the built-in, out-of-the-box task called “Azure Web App Deployment”, point it to where it could find the file, and tell it the name of my Azure web site; it took care of the rest.

How I Use Chocolatey in my Releases

I have been using Chocolatey for a while as an ultra-easy way to install software. It has become the preferred way to install tools and utilities from the open source community. Recently I started to explore this technology in more depth, just to learn more about Chocolatey, and found some really great uses for it that I did not expect. This post is about that adventure, and how and what I use Chocolatey for.

Built on NuGet

First off, I guess we should talk about what Chocolatey is. It is another package technology based on NuGet; in fact it is NuGet with some more features and elements added to it. If you have been around me over the last couple of years, you will know I have declared that NuGet is probably one of the greatest advancements in the dot net community in the last 10 years. Initially introduced back in 2010, it was a package tool to help resolve the dependencies in open source software. Even back then I could see that this technology had legs, and indeed it has, as it has proven to resolve so many hard development problems that we have worked on for years: being able to share code among multiple projects without interfering with the development of the underlying projects that depend on it. I will delve into this subject in a later post, as right now I want to focus on Chocolatey.

While NuGet is really about installing and resolving dependencies at the source code level, as in a Visual Studio project, Chocolatey takes the same package structure and focuses on the operating system. In other words, I can create NuGet-like packages (they have the very same extension as NuGet, *.nupkg) that I can install, uninstall, or upgrade in Windows. I have a couple of desktop utility programs that I use to support my applications. These utilities are never distributed as part of the application I publish through ClickOnce, but I need up-to-date versions of them on my test lab machines. It has always been a problem finding a way to get these installed and kept up to date on those machines. With Chocolatey, this is now an easy solution and a problem I no longer have.

Install Chocolatey

Let's start with how to install Chocolatey itself. If you go to the web site, there are about 3 ways listed to download and install the package, all of them using PowerShell.
The first one assumes nothing, as it bypasses the ExecutionPolicy and has the best chance of installing on your system.

@powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((new-object net.webclient).DownloadString(''))" && SET PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin

This next one assumes that you are an administrator on your machine and have set the execution policy to at least RemoteSigned.

iex ((new-object net.webclient).DownloadString(''))

This last script assumes that you are an administrator, have the execution policy set to at least RemoteSigned, and have PowerShell v3 or higher.

iwr -UseBasicParsing | iex

Not sure what version of PowerShell you have? The easiest way to tell is to bring up the PowerShell console (you will want to run with administrator elevated rights) and enter the following:
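The command itself is missing above; presumably it is the standard query against the built-in $PSVersionTable variable:

```powershell
# Shows the Major/Minor/Build/Revision of the running PowerShell host
$PSVersionTable.PSVersion
```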


Making my own Package

Okay, so I have Chocolatey installed and a product I want to install; how do I get this package created? Good question, so let's tackle that next. Using File Explorer, I go to my project and create a new folder. In my case I was working with a utility program called AgpAdmin, so at the sibling level of that project I made a folder called AgpAdminInstall, and this is where I am going to build my package.

Now I bring up PowerShell running as administrator, navigate over to that new folder I just created, and enter the following Chocolatey command.

choco new AGPAdmin

This creates the nuspec file, with the same name I entered in the new command, as well as a tools folder containing two PowerShell scripts. There are a couple of ways you can build this package, as the final bits don't even need to be in the package: they can be referenced at other locations from which they can be downloaded and installed. There is a lot of documentation and there are plenty of examples for doing it that way; I would say most of the Chocolatey packages found in the public gallery are done like that. The documentation mentions that the assemblies can be embedded, but I never found an example, and that is how I wanted to package this, so that is the guidance I am going to show you here.

Let's start with the nuspec file. This is the file that contains the metadata and says where all the pieces can be found. If you are familiar with creating a typical NuGet spec, this should all look pretty familiar, but there are a couple of things you must be aware of. In the Chocolatey version of this spec file you must have a projectUrl (in my case I pointed to my VSTS dashboard page), a packageSourceUrl (in my case I pointed to the url of my git repository), and a licenseUrl, which needs to point to a page that describes your license. I never needed these when building a NuGet package, but they are required to get the Chocolatey package built. One more thing we need for the nuspec file to be complete is the files section, where we tell it what files to include in the package.

There will be one entry there already, which includes all the items found in the tools folder and places them within the nuget package structure under tools. We want to add one more file entry with a relative path from where we are to the setup file being constructed: up one folder, then down 3 folders through the AGPAdminSetup tree, with the target again being tools within the nuget package structure. This line is what embeds my setup program into the Chocolatey package.

<?xml version="1.0" encoding="utf-8"?>
<!-- Do not remove this test for UTF-8: if “Ω” doesn’t appear as greek uppercase omega letter enclosed in quotation marks, you should use an editor that supports UTF-8, not this one. -->
<package xmlns="">
  <metadata>
    <!-- Read this before publishing packages to -->
    <id>agpadmin</id>
    <version>1.0.0</version> <!-- placeholder version -->
    <title>AGPAdmin (Install)</title>
    <authors>Donald L. Schulz</authors>
    <owners>The Web We Weave, Inc.</owners>
    <summary>Admin tool to help support AGP-Maker</summary>
    <description>Setup and Install of the AGP-Admin program</description>
    <tags>agpadmin admin</tags>
    <copyright>2016 The Web We Weave, Inc.</copyright>
  </metadata>
  <files>
    <file src="..\AGPAdminSetup\bin\Release\AGPAdminSetup.exe" target="tools" />
    <file src="tools\**" target="tools" />
  </files>
</package>

Before we move on to the automated steps (so that we don't even have to think about building this package every time), we will need to make a couple of changes to the PowerShell scripts found in the tools folder. When you open the install script it is well commented, and the variable names used are pretty clear in describing what they are for. You will notice that out of the box it expects you to provide a url where it can get your program to install. I want to use the embedded solution, so un-comment the first $fileLocation line and replace 'NAME_OF_EMBEDDED_INSTALLER_FILE' with the name of the file you want to run; I will also assume that you have it in this same tools folder (in the compiled nupkg file). In my package I created an install program using the WiX toolset, which also gives it the capability to uninstall itself automatically. Next I commented out the default silentArgs and the validExitCodes found right under the #MSI comment. There is a long string of commented lines that all start with #silentArgs; what I did was un-comment the last one, set the value to '/quiet', and un-comment the validExitCodes line right below it, so the lines look like this:

silentArgs = '/quiet'
validExitCodes= @(0)

That is really all there is to it. The rest of this script file should just work. There are a number of different cmdlets that you can call, and they are all shown, commented fairly well, in the chocolateyinstall.ps1 file that appeared when you ran the choco new command. I was creating a Chocolatey wrapper around an install program, so I chose the cmdlet "Install-ChocolateyInstallPackage". To summarize, ignoring the commented lines, the finished PowerShell script looks a lot like this:

$ErrorActionPreference = 'Stop';

$packageName= 'MyAdminProg' # arbitrary name for the package, used in messages
$toolsDir = "$(Split-Path -parent $MyInvocation.MyCommand.Definition)"
$fileLocation = Join-Path $toolsDir 'MyAdminProgSetup.exe'

$packageArgs = @{
packageName = $packageName
unzipLocation = $toolsDir
fileType = 'EXE' #only one of these: exe, msi, msu
url = $url
url64bit = $url64
file = $fileLocation
silentArgs = '/quiet'
softwareName = 'MyAdminProg*' #part or all of the Display Name as you see it in Programs and Features. It should be enough to be unique
checksum = ''
checksumType = 'md5' #default is md5, can also be sha1
checksum64 = ''
checksumType64= 'md5' #default is checksumType
}

Install-ChocolateyInstallPackage @packageArgs

One thing that we did not cover in all this is the fileType value. This is going to be exe, msi or msu depending on how you created your setup file. I took the extra step in my WiX install program of creating a bootstrapper, which takes the initial msi, checks the prerequisites such as the correct version of the .NET framework, and turns that into an exe. You will need to set this value to match the type of the install program you want to run.

Another advantage to using an install package is that it knows how to uninstall itself. That means I did not need the other PowerShell script that was in the tools directory, the chocolateyuninstall.ps1 file. I deleted mine so that it would use the automatic uninstaller that is managed and controlled by windows (msi). If this file exists in your package then Chocolatey is going to run that script, and if you have not set it up properly it will give you issues when you run the choco uninstall command for the package.

Automating the Build in TFS 2015

We want to make sure that we place the tools folder and the nuspec file into source control. Besides giving us a place where we can repeat this process and keep track of any changes between versions, it means we will be able to automate the entire operation. Our goal here is that a check-in of a code change to the actual utility program will kick off a build, create the package, and publish it to our private Chocolatey feed.

To automate the building of the Chocolatey package I started with a Build Definition that I already had that was building all these pieces. It built the program, created an AGPAdminPackage.msi file, and then turned that into a bootstrapper, giving me the AGPAdminSetup.exe file. Our nuspec file has indicated where to find the finished AGPAdminSetup.exe file so that it will be embedded into the finished .nupkg file. Just after the steps that compile the code and run the tests, I add a PowerShell step, switch it to run inline, and write the following script:

# You can write your powershell scripts inline here. 
# You can also pass predefined and custom variables to this scripts using arguments

choco pack <path to your .nuspec file>

This choco pack command will find the .nuspec file and create the .nupkg in the same folder as the nuspec file. From there I copy the pieces I am interested in having in the drop into the staging work space $(Agent.BuildDirectory)\b, and then for the Copy Publish Artifacts step I just push everything I have in staging.

Private Feed

Because Chocolatey is based on NuGet technology it works on exactly the same principle of distribution, which is a feed (though it could also be a network file share). I have chosen a private feed, as I need a feed that I can access from home, the cloud, and when I am on the road. You might be in the same or a similar situation, so how do you set up a Chocolatey server? With Chocolatey, of course.

choco install chocolatey.server -y

On the machine that you run this command on, it will create a chocolatey.server folder inside of a tools folder off of the root drive. Just point IIS to this folder and you have a Chocolatey feed ready for your packages. The packages actually go into the App_Data\packages folder that you will find in this ready-to-go chocolatey.server. However, I will make another assumption that this server is not right next to you but on a different machine or even in the cloud, so you will want to publish your packages. To do that you will need to make sure that you give the app pool modify permissions to the App_Data folder. Then in the build definition, after the Copy Publish Artifact step, add another PowerShell script to run inline, and this time call the following command:

# You can write your powershell scripts inline here. 
# You can also pass predefined and custom variables to this scripts using arguments

choco push --source="https://<your server name here>/choco/" --api-key="<your api key here>" --force

That is really it: you now have a package in a feed that can be installed and upgraded with just a simple Chocolatey command.

Make it even Better

I went one step further to make this even easier, and that was to modify the Chocolatey configuration file so that it looks in my private repository first, before the public one that is set up by default. This way I can install and upgrade my private packages just as if they were published and exposed to the whole world, but they are not. You find the chocolatey.config file in the C:\ProgramData\Chocolatey\config folder. When you open the file you will see an area called sources, and probably one source listed. Just add an additional source entry, give it an id (I called mine Choco), set the value to where your Chocolatey feed can be found, and set the priority to 1. That is it, but you need to do this on all the machines that are going to be getting your program and all the latest updates. Now whenever a build is about to run tests on a virtual machine, a simple PowerShell script can do the install for you.

choco install agpadmin -y
Write-Output "AGPAdmin Installed"

choco upgrade agpadmin -y
Write-Output "AGPAdmin Upgraded"

Start-Sleep 120

The program I am installing is called agpadmin and I pass the -y so that it skips the confirmation, as this is almost always part of a build. I call both the install and then the upgrade because a single command does not seem to do both: install is simply ignored if the program is already installed, and upgrade will then bring it up to the newest version if there is one.
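For reference, after the source change described earlier, the sources section of my chocolatey.config ends up looking something like this. I can't promise this matches every Chocolatey version, and the id and feed URL here are just examples:

```xml
<sources>
    <!-- my private feed; priority 1 makes it checked ahead of sources
         left at the default priority of 0 (example id and url) -->
    <source id="Choco" value="https://myserver/choco/" priority="1" />
    <!-- the default public source that was already in the file -->
    <source id="chocolatey" value="https://chocolatey.org/api/v2/" priority="0" />
</sources>
```

Keep whatever default community source was already listed; you only need to add your own entry above it.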

Hope you enjoy Chocolatey as much as I do.

Who left the Developers in the Design Room

This post is all about something that has been starting to bug me and it has been bugging me for quite a while. I have been quiet about this and have started the conversation with different people at random and now it is finally time I just say my piece. Yes this is soap box time and so I am just going to unload here. If you don’t like this kind of post, I promise to be more joyful and uplifting next month but this month I am going to lay it out there and it just might sound a bit harsh.

Developers are bad Designers

I come from the development community, with over 25 years spent on the craft, and originally I got there because I was tired of the bad workflows and interfaces built by people who thought they understood how accounting should work but did not. I implemented a system that changed my workload from 12 hour days plus some weekends to getting everything done in 10 normal days. Needless to say I worked my way out of a job, but that was okay because it led me to opportunities that really allowed me to be creative. You would think that with a track record like that I should be able to design very usable software and be a developer, right?

Turns out that being a developer has given me developer characteristics, one of which is that we are a bit geeky. As a geeky person, you tend to like having massive control and clicking lots of buttons, but this might not be the best experience for a user who is just trying to get their job done. I once made the mistake of asking my wife, who was the Product Owner of a little product that we were building, what the message should be when they confirm that they want to Save a Student. Her remarks threw me off guard for a moment when she asked, "Why do I need a save button? I made the change, so just save it; don't have a button at all."

Where’s the Beef

Okay, so far all I have enlightened you with is that I am not always the best designer, and that is why I have gatekeepers like my wife who remind me every so often that I am not thinking about the customer. However, I have noticed that many businesses have been revamping their websites with what looks like a focus on mobile. I get that, but the end result is that it is harder for me to figure out how to use their sites, and some things that I was able to do before are just not possible anymore. You can tell right away that the changes were not based on how a customer might interact with the site; I don't think the customer was even considered.

One rule that I always try to follow, and this is especially true for an eCommerce site, is that you need to make it easy for the customer if you want them to buy. Some of the experiences that I have had lately almost left me convinced that they don't want to sell their products or do business with me. For some of these I have sought out different vendors because the frustration level is just too high.

Who Tests this Stuff?

That leads right into my second peeve: no one seems to test this stuff. Sure, the developer probably tested their work for proper functionality, and there might have even been a product owner who understood the steps to take after talking to the developer and proved to him or herself that the feature was working properly. That is not testing, my friend; both of these groups of people test applications the very same way, and it's called the Happy Path. No one is thinking about all the ways that a customer may expect to interact with the new site. Especially when you have gone from an older design to a new one: no one thought of that, and now your sales numbers are falling because no one knows how to buy from you.

Testers have a special gene in their DNA that gives them the ability to think about all the ways that a user may interact with the application, and even to attempt to do evil things with it. You want these kinds of people on your side; it is better to find a problem while the application is still under development than to have a customer find it, or worse yet to get hacked, which could really cost you financially as well as in trust.

In my previous post "Let the Test Plan tell the Story" I laid out the purpose of the test plan. This is the report that we can always go back to and see what was tested, how much of it was tested, and so on. I feel that the rush to get a new design out the door is hurting the future of many of these companies, because they are taking shortcuts: not designing these sites with the customer in mind and eliminating much of the much-needed testing. At least that is how it seems to me; my opinion.

Let the Test Plan Tell the Story

This post is the result of some discussions I have had lately while trying to determine the work flow for a client. This topic has come up often with others in the past, but what I had never used as an argument before was the role of the test plan in all this. Besides being an eye opener and an aha moment for the client and myself, I thought I would explore this thought a little more, as others might also find it helpful in understanding and getting better control of their flows.

What is this flow?

There is a flow in the way that software is developed and tested no matter how you manage your projects. Things typically start from some sort of requirement type of work item that describes the business problem and what the client desires to do, and it should include some benefit that the client would receive if this was implemented. Yes, I just described the basics of a user story, which is where we should all be by now when it comes to software development. The developers, testers, and whoever else might be contributing to making this work item a reality start breaking down this requirement type into tasks that they are going to work on to make it happen.

The developers get to work writing the code and completing their tasks, while the testers start writing the test cases that they will use to prove whether the new requirement is working as planned or simply is not working. These test cases all go into a test plan that represents the current release that you are working on. As the developers complete their coding the testers will start testing, and any test cases that are not passing are going to go back to the developers for re-work. How this is managed is going to depend on how the teams are structured. Typically in a scrum team, where you have developers and testers on the same team, this would be a conversation, and the developer might just add more tasks because this is work that got missed. In situations where the flow between developers and testers is still a separate hand-off, a hold-over from the waterfall days, a bug might be issued that goes back to the developers, and you follow that through to completion.

As the work items move from the business to the developers they become Active. When the developers are code complete the work items should become resolved and as the testers confirm that the code is working properly they become closed. Any time that the work item is not really resolved (developer wishful thinking) the state would move back to Active. In TFS (Team Foundation Server) there is an out of the box report called Reactivations which keeps track of the work items that moved from resolved or closed back to active. This is the first sign that there are some serious communication problems going on between development and test.

With all the Requirements and Bugs Closed How will I know what to test?

This is where I find many teams start to get a little weird and over-complicate their work flows. I have seen far too many clients take the approach of having additional states that say where the bug is by including the environment that they are testing it in. For instance they might have something that says Ready for System Testing or Ready for UAT and so on. Initially this might sound sensible and the right thing to do. However, I am here to tell you that this is not beneficial; it loses the purpose of the states, and this work flow is going to drown you in the amount of work that it takes to manage it. Let me tell you why.

Think of the state as a control on how developed that requirement or bug is. For instance it would start off as New or Proposed, depending on your process template, and from there we approve it by changing the state to Approved or Active. Teams that use Active in their work flow don't start working on an item until it is moved into the current iteration. Teams whose process uses Approved also move the item into the current iteration to start working on it, but they then move the state to Committed when work begins. At code completion the Active items go to Resolved, where the testers will then begin their testing and, if satisfied, will close the work item. In the Committed group the developers always work very closely with the testers, who have been testing all along, so when the test cases are passing the work item moves to Done. The work on these work items is done, so what happens next is that we take the build that represents all the completed work and move it through the release pipeline. Are you with me so far?

This is where I typically hear confusion, as the next question is usually something like this: if all the requirement and bug types have been closed, how do we know what to test? The test plan of course; this should be the report that tells you what state these builds are in. It should be from this one report, the results of the test plan, that we base our approvals for the build to move on to the next environment and eventually to production. Let the Test Plan Tell the Story. From the test plan we can not only see how the current functionality is working and matches our expectations, but there should also be a certain amount of regression testing going on to make sure features that have worked in the past are still working. We get all that information from this one single report, the test plan.

The Test Impact Report

As we test the various builds throughout the current iteration, as new requirements are completed and bugs fixed, the testers are running those test cases to verify that this work truly is completed. If you have been using the Microsoft Test Manager (MTM), this is a dot net application, and you have turned on the test impact instrumentation through the test settings, you get the added benefit of the Test Impact Report. In MTM, as you update the build that you are testing, it does a comparison to the previous build and what has been tested before. When it detects that some code has changed near code that you previously tested, and probably passed, it is going to include those test cases in the test impact report as tests that you might want to rerun, just to make sure that the changes that were made do not affect your passed tests.

The end result is that we have a test plan that tells the story on the quality of the code written in this iteration and specifically lists the build that we might want to consider to push into production.

Living on a Vegan Diet

In all my blog posts that I have written over the years I have never talked about health or a healthy lifestyle. This will be a first and you as a technology person might be wondering what has living a Vegan Lifestyle have anything to do with software. After all the blog title is “Donald on Software”.

For years I would go through these decade birthdays and just remark how turning thirty was just like turning twenty except I had all the extra knowledge called life. Going from thirty to forty, same thing, but things took a turn when I moved into my fifties. I have had doctors notice that my blood pressure was a bit elevated. I took longer to recover from physical activities. I felt aches I never noticed before, and I had promised my wife that I would live a long, long time, which wasn't feeling all that convincing anymore. I didn't have the same get up and go that I had known before.

A Bit About My Family

My wife and step daughter have been vegetarian/vegans for many years. I was open to other types of food like plant based meals and would eat them on occasion when we were at a vegan restaurant or that was what was being cooked at home. However, I travel a lot so most of my food would be from a restaurant where I could eat anything I wanted. This went on for several years, I was taking a mild blood pressure pill every day. This was keeping my blood pressure under control but there were other things that it appeared to be affecting as well in a negative way.

The Turning Point for Me

During Thanksgiving weekend in November 2014, Mary (my wife) and I watched a documentary on Netflix called “Forks over Knives”, and at the end of that I vowed never to eat meat again and start moving towards a Vegan lifestyle.
The documentary is about two doctors, one who came from the medical field and one from the science side of things, and their adventure in unravelling the truth about how the food that we eat is related to health. One of the biggest studies that has ever been done is called "The China Study", a 20-year study that examines the relationship between the consumption of animal products (including dairy) and chronic illnesses such as coronary heart disease, diabetes, breast cancer, prostate cancer and bowel cancer.

The message was that a plant-based diet would not only reduce these numbers but, with the toxic animal products out of our system, our bodies would start to repair some of the damage that we have always been told could never be repaired naturally.

Getting over the big Lie

Yes, there is a very large lie that we have all believed to be the truth, because we assumed that it came from the medical field and was sanctioned by the government: the daily nutritional guide. This is the guide that told us to eat large amounts of meat and dairy products to give us energy and strong bones, but this did not come from any medical study; it came from the agriculture and dairy industries to sell more products.

Our bodies reject most of the animal protein that we take in; there is only a very small amount that they actually use. Now common sense would tell me that if my body is rejecting all this animal based protein it is working extra hard, and something is going to break down in the form of disease and other difficulties, especially as we get older. Oh wait, they now make a pill for that, so we can continue to live the way we always have. So now we are not only supporting an industry that never had that big of a market before, but we are also spending billions of dollars every year with pharmaceutical companies in order to correct the mistakes we make with the things we eat. One thing that I did learn in physics is that every action creates an equal and opposite reaction, so this is not solving anything either, just making it worse, and now health care costs are through the roof for bodies that normally know how to heal themselves.

Now for the Good News

I know I got you all depressed and disappointed as I just dissed your favorite food and called it bad and toxic but there is a happy ending here. I felt like you are right now for about five minutes and then decided to say “NO to Meat”. If you get a chance I would encourage you to look up that documentary “Forks over Knives” as one other thing that disturbed me was the way they were harvesting these animals and called it ethical or within the approved guidelines. These animals were under stress and that stress goes into the meat and you wonder why everyone seems so stressed, I know there is a relationship here.

Anyway, the good news is my latest checkup with my doctor. I am currently on no medication whatsoever, and my blood pressure numbers are very normal and very impressive for a guy my age. I did a stress test and was able to reach my ideal heart rate easily and effortlessly, and I feel great. If I had any plaque buildup it is certainly repairing itself. Still can't seem to lose the 15 pounds I have been working on for the last couple of years, but I know I will accomplish that soon enough. I am done with meat and all animal proteins such as milk, eggs, and honey, and I am going to live a long, long time and feel great. Won't you join me?

Migrate from TFVC to Git in TFS with Full History

Over the last year or so I have been experimenting with and learning about git. The more I learned about this distributed version control the more I liked it, and finally, about 6 months ago, I moved all my existing code into git repositories. They are still hosted on TFS, which is the best ALM tool on the market by a very, very, very long mile. Did I mention how much I love TFS and where this product is going? Anyway, back to my git road map, as this road is not as simple as it sounds, because many of the concepts are so different and at first even seemed a bit weird to me. After getting my head around the concepts and the true power of this tool there was no turning back. Just to be clear, I am not saying that the old centralized version control known as TFVC is dead; by no means. There are some things that I will continue to use it for and probably always will, like my PowerPoint slides and much of my training material.

Starting with Git

One thing about git is that there is just an enormous amount of support, and its availability on practically every coding IDE for every platform is just remarkable. What really made the migration simple for me was an open source project on CodePlex called Git-TF. In fact, the way I originally used this tool was that I made a separate TFS Project with a git repository. I would work on that new repository, with some CI builds to make sure things kept working, and when I finished a feature I would push it back to TFVC as a single changeset. Because I always link my commits to a work item in the TFVC project, this had a side effect that I was not expecting: if you opened the work item you would see some commits listed in the links section. Clicking on the commit link would open up the code in compare mode against the previous commit so you could see what changes were made. Of course this only works if you are looking at work items from web access.

Git-TF also has some other uses, and one of those is the ability to take a folder from TFVC and convert it into a git repository with full history. That is what I am going to cover in this post. There are some rules to this that I would like to lay down here as best practices, as you don't want to just take a whole TFVC repository and turn it into one big git repository; that just is not going to work. One of the things to get your head around with git is that repositories need to be small, and should be small: remember that you are not getting latest when you clone a repository, you are getting the whole thing, which includes all the history.

Install Git-TF

One of the easiest ways to install Git-TF on a windows machine is via Chocolatey since it will automatically wire up the PATH for you.

choco install git-tf -y

No Chocolatey, or you just don't want to use this package management tool? You can follow the manual instructions on CodePlex.

Clean up your Branches

If you have been a client of mine or ever heard me talk about TFS, you will certainly have heard me recommending one collection and one TFS Project. You would also have heard me talk about minimizing the use of branches to the cases where you need them. If you have branches going all over the place and code that has never found its way back to main, you are going to want to clean this up, as we are only going to clone main for one of these solutions into a git repository. One of the things that is very different about the git-enhanced TFS is that a single TFS project can contain many git repositories. In fact, starting from TFS 2015 Update 1 you can have centralized version control (TFVC) and multiple git repositories in the same TFS project, which totally eliminates the need to create a new TFS project just to hold the git repositories. We could move the code with full history into a git repo of the same project we are pulling from.

In the examples here we are pulling into the git repository from the solution level, as that is what most people using Visual Studio have been doing for decades. However, the ideal git view of this would be to go even smaller, to a single project per repository, and stitch the dependencies for all the other projects together through package management tools like NuGet. That is out of scope for this posting, but I will delve into it in a future post.


Now that we have a nice clean branch from which to create your git repository, it is time to run the clone command from the git-tf tool. From the command line, make a nice clean directory and then be in that directory, as this is where the clone will appear. Note: if you don't use the --deep switch you will just get the latest tip and not the full history.

mkdir C:\git\MySolutionName
cd c:\git\MySolutionName
git-tf clone $/MyBigProject/MyMainBranch --deep

You will then be prompted for your credentials (alternate credentials if you are using the hosted service). Once accepted, the download will begin and could take some time depending on the length of your changeset history or the size of your repository.

Prep and Cleanup

Now that you have an exact replica of your team project branch as a local git repository, it's time to clean up some files and add some others to make things a bit more git friendly.

  • Remove the TFS source control bindings from the solution. You could have done this from within Visual Studio, but it's just as easy to do it manually. Simply remove all the *.vssscc files and make a small edit to your .sln file, removing the GlobalSection(TeamFoundationVersionControl) ... EndGlobalSection block in your favorite text editor.
  • Add a .gitignore file. It's likely your Visual Studio project or solution will have some files you won't want in your repository (packages, obj, etc.) once your solution is built. A near complete way to start is by copying everything from the standard VisualStudio.gitignore file into your own repository. This will ensure that all the build-generated files, packages, and even your ReSharper cache folder will not be committed into your new repo. As you can imagine, if all you used was Visual Studio to sling your code, that would be that. However, with so much of our work now moving into more hybrid models, where we might use several different tools for different parts of the application, trying to manage this gitignore file could get pretty complicated. Recently I came across an online tool where you pick the OS, IDEs or Programming Language and it will generate the gitignore file for you.
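The binding cleanup in the first bullet can be scripted. Here is a rough sketch; the solution name is a placeholder, and a tiny sample .sln is created first so the sketch is self-contained (in a real migration you would run only the find and sed lines from your repo root):

```shell
# For illustration only: fabricate a minimal solution with TFS bindings.
mkdir -p /tmp/migrate-demo && cd /tmp/migrate-demo
printf 'Global\n  GlobalSection(TeamFoundationVersionControl) = preSolution\n    SccNumberOfProjects = 1\n  EndGlobalSection\n  GlobalSection(SolutionProperties) = preSolution\n  EndGlobalSection\nEndGlobal\n' > MySolution.sln
touch MySolution.vssscc

# 1. delete every *.vssscc binding file under the repo root
find . -name '*.vssscc' -type f -delete

# 2. drop the TeamFoundationVersionControl section, from its GlobalSection
#    line through the matching EndGlobalSection (GNU sed in-place edit)
sed -i '/GlobalSection(TeamFoundationVersionControl)/,/EndGlobalSection/d' MySolution.sln
```

The sed range address deletes only up to the first EndGlobalSection it meets, so later sections such as SolutionProperties are left intact.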

Commit and Push

Now that we have a local git repository, it is time to commit the files, add the remote (back to TFS), and push the new branch (master) back to TFS so the rest of my team can clone this and continue to contribute to the source, which will have full history of every check-in that was done before we converted it to git. From the root, add and commit any new files, as there may have been some changes from the previous Prep and Cleanup step.

git add .
git commit -a -m "initial commit after conversion"

We need a git repository on TFS that we can push this repository to. So from TFS, in the project where you want this new repository:

  1. Click on the Code tab
  2. Click on the repository dropdown
  3. Click on the big “+” sign to create a New Repository
  4. Make sure the type is Git
  5. Give it a Name
  6. Click on the Create button.

The result page gives you all the information that you need to finish off your migration process.

  1. This command adds the remote address to your local repository so that it knows where to put it.
  2. This command will push your local repository to the new remote one.
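Those two commands look roughly like the following. This is a hedged sketch that uses a local bare repository to stand in for the new TFS repo; in practice you would paste the remote URL shown on the TFS result page instead of the placeholder path:

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/remote.git"           # stand-in for the new TFS Git repo
git init -q "$tmp/work" && cd "$tmp/work"
git symbolic-ref HEAD refs/heads/master        # make sure the branch is named master
git -c user.name=me -c user.email=me@example.com \
    commit -q --allow-empty -m "initial commit after conversion"
# 1. Add the remote address so the local repository knows where to push
git remote add origin "$tmp/remote.git"
# 2. Push the local repository (master, with full history) to the new remote
git push -q -u origin master
git ls-remote --heads origin                   # lists refs/heads/master on the remote
```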

That’s it! Project published with all history intact.

A New Start on an Old Blog

It has been quite a while since I posted my last blog, so today I thought I would bring you up to speed on what I have been doing with this site. The last time I did a post like this was back in June of 2008. Back then I talked about the transition that I made going from CityDesk to Microsoft Content Management System, which eventually was merged into SharePoint, and from there we changed the blog into DotNetNuke.

Since that time we have not created any new content, but we did move that material to BlogEngine.NET. It really is a great tool, but not the way I wanted to work. I really do not want a content management system for my blog; I don’t want pages that are rendered dynamically with the content pulled from a database. What I really wanted were static pages, with the content for those pages stored and built the same way that I build all my software: stored in version control.

Just before I move on and tell you more about my new blog workflow, I thought I would share a picture from my backyard. The tree on the other side of the fence is usually green and does not change colors every fall, but this year the weather has been cooler than usual. So yes, we sometimes do get fall colors in California, and here is the proof.


Hexo is a static page generator that takes simple markdown and turns it into static HTML pages. This means I can deploy it anywhere from a build, just like a regular ALM build, because all the pieces are in source control. It fully embraces Git and is an open-source project on GitHub. I thought that moving my blog to Hexo would help me in two ways: besides giving me the output that I am really looking for, I can also use it as a teaching tool on how the new build system that is part of TFS 2015 fully embraces technologies outside of .NET and the Visual Studio family. From here I check my new posts into source control, which triggers a build that puts the output into a drop folder, which is then deployed to my web site hosted on Azure.

As of this post I am using FTP in a PowerShell script to deploy the web site, which is not ideal. I am working on creating an MSDeploy package that can be deployed directly onto the Azure website that is hosting this blog.

The Work Flow

The process begins when I want to start a new blog post. Because my Git repositories are available to me from almost any computer that I work with, I go to the local workspace of my blog Git repository, check out the dev branch, and at the command line enter the following command:

hexo new post "A New Start on an Old Blog"

This will place a new .md file in the _posts folder with the same name as the title but with the spaces replaced by hyphens (“-”). After that I like to open the folder at the root of my blog workspace using Visual Studio Code. The thing that I like about using Visual Studio Code as my editor is that it understands simple markdown and will give me a pretty good preview as I am working. If my screen is wide enough, I can even have one half of the screen to type the raw markdown and the other half to see what it looks like.

The other thing that I like about this editor is that it understands and talks Git. I can edit my files and save them, and Visual Studio Code will inform me that I have uncommitted changes so I can stage them, commit them to my local repository, and push them to my remote Git repository. You may have noticed that before I began this process I checked out the dev branch, which means that I do not write my new posts in the master branch. The reason is that I have a continuous integration trigger on the build server that is looking for anything checked into master on the remote Git repository. Because I might start a post on one machine and finish it on another, I need some way to keep everything in sync, and that is what I use the dev branch for. Once I am happy with the post, I merge the changes from dev into master, and this begins the build process.
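The dev-to-master flow described above looks roughly like this. It is a sketch run in a throwaway repository so it is safe to execute anywhere; the post file name follows Hexo’s hyphenated naming, and the push to the remote is omitted since the sketch has no remote:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
g() { git -c user.name=me -c user.email=me@example.com "$@"; }  # commit identity for the sketch
git init -q . && git symbolic-ref HEAD refs/heads/master
g commit -q --allow-empty -m "init blog repo"
git checkout -q -b dev                         # posts are drafted on dev
mkdir -p source/_posts
echo "title: A New Start on an Old Blog" > source/_posts/A-New-Start-on-an-Old-Blog.md
git add source/_posts
g commit -q -m "draft: a new start on an old blog"
# (in the real workflow, push dev here to keep multiple machines in sync)
git checkout -q master
g merge -q dev                                 # merging into master triggers the CI build
git log -1 --format=%s                         # prints: draft: a new start on an old blog
```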

Publishing the Post

Once I am happy with my post, all I need to do is merge the dev branch into master, and this starts the build process, which is really just another Hexo command run against my source. It generates all the static pages, JavaScript, images, and so on, and puts them into a public folder.

hexo generate

It is the content of this folder that becomes my drop artifacts. Because Release Manager also has a CI trigger, after the build has been successful it begins a release pipeline to get this drop onto my web site. My goal is to get this wrapped up into an MSDeploy package that can be deployed directly onto my Azure web site. I am still working on that and will provide a more detailed post on what I needed to do to make that happen. In the meantime, I need to make sure that my test virtual machine is up and running in Azure, as one of the first things this release pipeline does is copy the contents of the drop onto that machine. Then it calls a CodedUI test, which really is not testing anything: it runs my PowerShell script that FTPs the pages to my Azure web site. This needs to run as a user, and the easiest way to do that without doing it manually is to have the CodedUI test run it to completion.


So there you have it: my blog is in source control, so I have no dependency on a database, and all the code to generate the web site, along with my content pages, is in source control. That makes it really easy if I ever need to move to a different site or location, or rebuild from a really bad crash. As an ALM guy I really like this approach. What would be even better is a pre-production staging site where I could review everything and give a last and final approval before it goes live to the public.

Database Unit Testing from the Beginning

The concept of unit testing for a database (really, for a database project) still seems like a wild idea. Of course, I am still surprised how many development shops use their production database as their source of truth. It shouldn’t be, but that’s because they do not have their database in source control. In order to take you down the road of exploring some of the benefits of being able to run unit tests on your database, I need to get us all caught up on how to create a database project, as this is where the magic happens.

Creating a Database Project

You need Visual Studio 2010 Premium or higher to create a database project. One of the options available to us is to reverse engineer an existing database, and that is what we are going to do in these first steps. I have installed the sample database called AdventureWorks, which is available as a free download from the CodePlex site.

From Visual Studio you will want to create a new project and select the SQL Server 2008 Wizard, which can be found under the SQL Server node under the Database category. Give it a name (I called mine AdventureWorks) and a location on your hard drive where you want the project to live.

A wizard will pop up and take you through a number of pages; just accept the defaults until you get to the Import Database Schema page, as importing the AdventureWorks database is something we do want to do.

Make sure you check Import existing schema, and then you will likely want to click on the New Connection button; unless you have made a previous connection to the database, the connection string won’t be found in the dropdown.

If you have connected to databases in the past, this dialog box should be very familiar to you. Basically we need to say where the SQL Server is. In this case it is on my local machine and is the default instance. Another common server name is localhost\SQLExpress, as that is the named instance that SQL Express creates when it is installed. After you complete the server instance, the dropdown of database names will be populated, and from there you should be able to find the AdventureWorks database. I also like to click on the Test Connection button just to confirm that there aren’t any connectivity issues. Click OK and we are ready to move on.

Click Next and continue through the wizard pages, accepting the defaults. On the last page, click Finish. This is where the wizard really does its work, as it creates the project and does a complete reverse engineering of the database. The end result is a Visual Studio SQL database project that represents the database in code: suitable for checking into source control, capable of deploying changes made to the project, able to compare changes between versions, and much, much more.

Let’s get to Unit Testing

When you have the database project selected (as in, I have physically clicked on it so that it has focus), you will see a number of toolbar buttons appear. We want to click on the one called Schema View.

This brings up another little window, in the same area as the Solution Explorer and Team Explorer windows of Visual Studio, called the Schema View.

From this view you will want to expand Schemas, then HumanResources, then Programmability, then Stored Procedures, and finally right-click on uspUpdateEmployeePersonalInfo and choose Create Unit Tests…

If you don’t already have a test project, the next step will let you create a skeleton unit test for this stored procedure, with the option to create a new test project in the language of your choice.

You will find that when this window opens you can choose more than just the one stored procedure that we chose in the previous step, but ours is the only one that is checked. If you did want to have more than one stored procedure in the same class file, you could pick them here as well. Then set the project name, or select an existing test project, and give it a decent class name; I named mine HumanResourceUnitTests.cs. After you click OK it will build all the pieces: the test project and a default unit test file that we don’t need. Everything starts to look like a typical unit test until the following dialog pops up.

Now, in order to run unit tests against the database, you need a connection to the database. In the first part of this dialog you should be able to find the original connection that you used to create the database project. You will notice that this dialog also has an optional secondary data connection to validate unit tests. In this sample we will not need it, but in a real-world application you may, so let me explain that scenario. When an application is built with a database connection, typically that connection string has just enough rights to run the stored procedures and nothing else. In those cases you will want to use that connection string when running the stored procedure you are testing, but that same connection string would not have the rights to inspect the database to see if the results are valid, especially in a scenario where you want to check whether the right values got inserted or deleted. That is where the secondary data connection comes in: it is a data connection with higher rights that can look at those values directly in the tables.

After you have clicked the OK button Visual Studio will display a skeleton of a unit test to test this stored procedure.

In theory we have a unit test that we could run, but the results would be inconclusive: although the stored procedure is being run, the test is really just exercising it, not truly testing it by giving it some values to insert and checking whether those values come back.

We are going to replace the unit test calls here with the following code snippet. I have it all in one piece here for you to easily grab, and following this I will break the code down so you can see what is going on. It is very similar to the skeleton provided for us, but we give it some distinct values.

-- Prepare the test data (expected results)
DECLARE @EmployeeId int

SELECT TOP 1 @EmployeeId = EmployeeId
FROM HumanResources.Employee

-- Wrap it in a transaction to return us to a clean state after the test
BEGIN TRANSACTION

-- Call the target code
EXEC HumanResources.uspUpdateEmployeePersonalInfo
@EmployeeId, '987654321', '3/2/1987', 'S', 'F'

-- Select the results to verify
SELECT NationalIdNumber, BirthDate, MaritalStatus, Gender
FROM HumanResources.Employee
WHERE EmployeeId = @EmployeeId

-- Undo the changes so the test can be run repeatedly
ROLLBACK TRANSACTION

The first part of this code captures the EmployeeId that we want to update; that is what the first DECLARE statement does. In the next statement we capture an existing EmployeeId from the Employee table, and because we really don’t care which one we get, we use the TOP 1 clause. At this point our declared variable @EmployeeId has this value.

Note: I have found that there can be a breaking change here depending on which version of the AdventureWorks database you have, as some will have the column named EmployeeId and others will have it named BusinessEntityID. To find which one you have, go back to the Schema View of the project and expand Schemas, HumanResources, and Tables. Find the Employee table and expand the Columns; the column in question is the first one.

Because the stored procedure will make changes to the data in the table, and we do not want to actually commit those changes (we just want to test them), we surround the next pieces with a transaction, and after we have collected our validation values we roll it back.

After beginning the transaction, we call the update stored procedure and pass in some specific data. Next we call a select statement to get those values from the table using the EmployeeId that we passed in. Finally we roll the whole transaction back so that we do not actually make any changes to the database, and we can run this test over and over.

Before we can actually run this test we need to make some changes to the Test Conditions portion of the unit test. First you will want to remove the existing entry that is shown there by clicking on the Delete Test button.

After you have removed the existing test condition, we can add one or more new ones to verify the results. Select Scalar Value from the dropdown control and click on the “+” button.

On the scalarValueCondition1 line that this action creates, right-click and choose Properties, which displays the Properties window. Update the following information:

  • Name: VerifyNationalId
  • Column number: 1
  • Enabled: True
  • Expected value: 987654321
  • Null expected: False
  • ResultSet: 1
  • Row number: 1

What is really happening here is that we are going to look at that first column and see if it matches the NationalId that we sent to the stored procedure. NationalId is the first column that is returned in the select statement.

We are now ready to run the unit test and see it pass. Typically in a unit test you could right-click anywhere in the method and one of the context choices would be Run Tests. However, what we have been working on so far has been the design surface of the database unit test, which is why we were able to write SQL statements for our tests. To see or get to the actual code page, go back to the HumanResourceUnitTests.cs file, right-click on it, and choose View Code.

As an alternative, you could select the file in Solution Explorer and press the F7 key; either way you will then be looking at the actual test, and if you right-click anywhere within that method you will see that one of your choices is Run Tests. Do that now, and you should see the test result go from Pending to, hopefully, Passed. If you do get a failure with an exception, you will want to check the column names in this table. Some of the names changed, and even the way they are spelled; it appears to be case sensitive as well. As I mentioned before, there seems to be more than one version of this sample database out there, and they did make some changes.

Now that we have a working test, I always like to prove that it is working by making it fail. To make it fail, change the Expected value to 9876543210000; I basically just added four zeros to the end of the expected result. Re-run the test and it should fail, and if we look at the Test Result Details we can see that the expected results did not match, which is exactly what we expected.

Take out the padded zeros and run the test once more so that it passes again. This is just a step to keep our tests correct.

Associate the Unit Test to a Test Case

The following section requires TFS 2010 in order to complete this part of the walkthrough, and it is even better if you have Lab Management set up to complete the development, build, deploy, test cycle on these database unit tests.

Right now, the unit test that we created can be run from Visual Studio just like we have done in this walkthrough. You can also make it part of an automated build: if this test project were included in the solution for an automated build in Team Foundation Server (TFS), it would automatically run and be part of the build report. However, this would not update the Test Plan / Test Suite / Test Case that the QA people use to manage their tests. But it can.

In Visual Studio, create a new work item of type Test Case and call it “uspUpdateEmployeePersonalInfo Stored Procedure Test”. We won’t fill anything in the Steps section, as we are going straight to automation with this test case. Click on the Associated Automation tab and click on the ellipsis (“…”) button.

This will bring up the Choose Test dialog box and because we have just this one test open in Visual Studio we will see the exact test that we want associated with this test case. Click on the OK button.

We now have a test case that can be used to test the stored procedure in automation. When this test case is run in automation it will update the test results and will be reported to the Test Plan and Suite that this test case is a part of.

Database Schema Compare where Visual Studio goes that extra mile

There are a number of good database tools out there for doing database schema comparisons. I have used different ones over the years, initially to help me write SQL differencing scripts that I could use when deploying database changes. Your background may be like mine: I was mainly a Visual Basic or C# developer and could get by with SQL as long as I could work directly against the database, and there were challenges in being able to script everything out using just SQL. Today that is not really an issue for me; I can do quite a bit with scripting and could build those scripts by hand, but why would I?

WHAT… Visual Studio for database development?

Over the years I have tried to promote SQL development being done in Visual Studio. I made a great case: SQL is code just as much as VB, C#, F#, or whatever your favorite language of choice happens to be, and it should be protected in source control. Makes sense, but it is a really hard sell. Productivity goes downhill and errors begin to happen, because this is not how SQL teams are used to working on databases. It was an easier sell for me because I loved working in Visual Studio and found the SQL tools less intuitive. I have never been able to figure out how to step through a stored procedure in Query Analyzer or Management Studio, but I have always been able to do this with stored procedures that I wrote from within Visual Studio, long before the data editions of Visual Studio.

Ever since the release of Data Dude (its official name back then was Visual Studio Team Edition for Database Professionals), this was what I did, and I tried to convince others that this is what we should be doing. It was never an easy sell: yes, the schema comparison was nice, but our SQL professionals already had all kinds of comparison tools, and it would be too hard for them to work this way. They wanted to be able to make changes in a database and see the results of those changes, not have to deploy them somewhere first.

So, a quick summary of what we have figured out so far: schema comparison from one database to another is nothing new; your SQL department probably has a number of these tools and uses them to generate change scripts. How is Visual Studio schema comparison better than what you already have? How is it going to go the extra mile? That, my friend, starts with the database project, which does a reverse engineering of sorts of what you have in the database and scripts the whole thing out into source files that you can check into source control and compare just like any other source code.

Now, once you have a database project, you can do a schema comparison not just between two databases, but also between a database and the project. The extra mile is that you can even deploy the differences to your test and production databases. It gets even better, but before I tell you the best part, let’s go through the actual steps to create this initial database project.

Create the Database Project

I am going to walk you through the very simple steps that it takes to build a database project for the AdventureWorks database. For this you will need Visual Studio 2010 Premium edition or higher.

We start by creating a new project and selecting the “SQL Server 2008 Database Project” template under the Database - SQL Server project types. Give it a name and set the location. I called mine AdventureWorks because I am going to work with the sample AdventureWorks database. Click OK.

Visual Studio will build a default database project for you, but it is not connected to anything, so there is no actual database scripted out yet. We are going to do that now. Right-click on the database project, and a context menu will pop up with Import Database Objects and Settings…; click on that now.

This opens the Import Database Wizard dialog box. If you have already connected to this database from Visual Studio, you will find an entry in the Source database connection dropdown. If not, you will create a new connection by clicking on the New Connection… button.

So if you have a ready-made connection in the dropdown, choose it and skip the next screen and step, as I am going to build my new connection.

Because my AdventureWorks database is on my local machine I went with that, but this could be a database anywhere on your network; this will all just work, provided you have the necessary permissions to connect to it. Clicking OK takes us back to the previous screen with the Source database connection filled in.

Now click Start, which will bring up the following screen and start to import and script out the database. When it is all done, click the Finish button. Congratulations, you have built a database project.

You can expand the solution under Schema Objects and Schemas; I am showing the dbo schema, which has three table scripts. All the objects of this database are scripted out here, and you can look at these files right here in Visual Studio.

However, you might want to use the Schema View tool for looking at the objects, which gives you a more Management Studio-like view.

Just click on the icon in the Solution Explorer that has the popup caption that says Database Schema Viewer.

Updating the Visual Studio Project from the database

In the past, these were the steps I would demonstrate to get a database project scripted out; and now that it is code, it is really easy to get into version control because of the tight integration in Visual Studio. My thought after that was that this is the tool you should be working in to evolve the database: work in Visual Studio and deploy the changes to the database.

Light Bulb Moment

Just recently I discovered that SQL developers do not really need to leave their favorite tool for working on the database, Management Studio. That’s right: the new workflow is to continue to make your changes in your local or isolated databases so that you can see firsthand how the database changes are going to work. When you are ready to get those changes into version control, you use Visual Studio and the database schema comparison.

So here we see what I always thought was the normal workflow, with the project on the left and the database that we are going to deploy to on the right. If instead we are working on the database and we want to push those changes to the project, we just switch the source and target around.

Now when you click the OK button you will get a schema comparison just like you always did, but when deployed it will check out the project and update the source files. This gives you complete history, and the files will move through the system from branch to branch with a perfect snapshot of what the database looked like for a specific build.

  1. Click this button to get the party started.
  2. This comment will disappear in the project source file.
  3. The source will be checked out during the update.

The Recap of what we have just seen

This totally changes my opinion on how to go forward with this great tool. The ability to update the project source from the database was probably always there, but if I missed the fact that it was possible, I am sure many others have missed it as well. It makes SQL development smooth and safe (all schema scripts under version control) and ready for the next step: smooth, automated deployment.

The Two Opposite IT Agendas

The Problem

I have been in the Information Technology (IT) field for a long time, and most of that time has been spent in the development space. Each environment was different from the previous one, and in some cases there were huge gaps in the level of technology used and made available in each location. Why this was has stumped me for a long time. You go to a user group meeting, and whenever the speaker talked about a current technology and conducted a quick survey around the room of how many were using it, the results would be very mixed. There would even be lots of attendees at these meetings who were still using technologies that were over 10 years old, with no sign of moving forward.

Why is this happening?

Good question, and after thinking about this for a long, long time, I think I have the answer. It really depends on which aspect of the IT spectrum is controlling the development side. It has become quite acceptable to break the whole IT space into two major groups: the IT professionals and the software developers. When I first moved to California, I worked for a company that did software development for its clients on a time-and-materials basis. There was no question which wing of IT influenced the company with regards to technology and hardware. The developers in this case were golden: if you needed special tools, you got them. Need a faster computer, more monitors, special machines to run beta versions of the latest OS and development tools? You got it. You were on the bleeding edge, and the IT professionals were there to help you slow the bleeding when it got out of control. This company was always current and got the best jobs, and in a lot of cases, when we deployed our final product to a client’s production systems, that would be the point at which their IT department would be forced to update their systems and move to the new round of technology.

Welcome to the Other Side

What happens when the shoe is on the other foot and the IT professionals have the influence? They have a different agenda, as their number one goals are stability, security, and easy deployment. However, this comes with a cost, especially when the company relies heavily on technology to push its products. I have heard this from many different companies in this same situation: they say they are not a technology company; technology is just one of the tools to deliver their products. When this group controls the budget and the overall technical agenda, things like this happen: moving forward will be very, very slow, and the focus will be purely on deployment issues and keeping those costs under control, not on the cost of development, which can get very expensive as technology changes and you are unable to take advantage of those opportunities. Over time, the customers that receive your products will start to question your future, seeing you as unable to move fast enough for them; they are going to expect you to be out there, having fully tested these waters before they move in, and if you’re not, it is not going to look favorable in their eyes. This is especially true if you have competition in your space that is adopting new technologies faster than your company is.

There is another side to this that I have witnessed which bothers me even more. The decision to move all enterprise applications to the web never came from the development side of IT; it came from the IT professionals. Remember, one of their big agendas is easy deployment, and as a result they have made software development so expensive that we have been forced to move as much as we can to offshore development companies. In most cases this hasn’t even come close to a cost savings, as you never seem to get what you thought you were designing, and it is not always the fault of the offshore companies; they are giving you exactly what you asked for. In more cases it is the wrong technology for the solution. Most high-volume enterprise applications were desktop applications with a lot of state (the data that you are working with). The web is stateless, and over the years many things have been developed to make the web appear stateful, but it is not. I have seen projects spend 100 times more time and money implementing features on the web to make it appear and feel like a desktop application. Now, to be clear, this argument started when deployment of desktop applications was hard: in the early days there was no easy way to keep all the desktops up to date except to physically go around and update them as patches and newer versions became available. However, in the last five years or more that has totally changed; with things like ClickOnce technology you can implement full automatic updates and license enforcement just as easily as with web sites, and maybe even better. We all know there are new tricks every day to spoof a web site with some new security trick.

What’s the Answer

I don’t really have the answer, but I do have a few thoughts that I have been mulling over, and I would love to hear other ideas you might have. My thought is that you should separate the IT push down two paths, and this advice is for companies currently being held back by the stabilizing IT professionals. I would even go so far as to keep the developers on a separate network from the rest of the employees; this keeps the bleeding on the other side of the fence so it does not affect your sales and support staff, who are there to sell and support products that are stable and need to stay that way. This will allow development to improve and expand their technical expertise and provide better and more relevant solutions for your customers, internal and external.

Goal Tracking

Since about the beginning of the year I have been thinking about goal tracking. I compiled a long list of technologies that I wanted to learn, experiment with, and maybe even use to build some projects from these newly learned skills. Nothing quite like turning something new into something useful. I find that this technique gives me the best understanding of how and why a technology would be used in one scenario over another. My goals for this year are a long list, and some depend on a previous goal being completed before I even begin, like reading the book before I start the project based on that technology.

However, I suffer from an illness of getting bored, needing a break from a certain goal, and then forgetting to get back to it at the appropriate time. It's like I need something to help me track what my goals are, with an easy-to-see, KPI-like indicator to show me which goals need my attention right now before I miss my target date altogether. Before I go much further I should define KPI:

KPIs are Key Performance Indicators, which help organizations achieve organizational goals through the definition and measurement of progress. The key indicators are agreed upon by an organization and are indicators which can be measured that will reflect success factors. The KPIs selected must reflect the organization's goals, they must be key to its success, and they must be measurable. Key performance indicators usually are long-term considerations for an organization.
This is what I need for my goals: some way to track my progress. So I went to work on it. Storing the goals was easy: give each one a name, a target date for completing the goal, and some exit criteria. Okay, I had to think a little bit about that last one, but I needed something that would tell me when the goal was completed. So I started with an easy one, reading a book. I know I have completed that goal when my current page equals the total number of pages in the book. Sorry, I just jumped into the kind of logic a computer program could use to determine whether the goal was complete. In the case of tracking progress on my book reading goals, I could record what page I was on each day and how long I spent reading. That last piece helps in figuring out how fast I am reading the book, checked against how much time I have set aside to work on my goals.

Okay, from that information I could recalculate my goal target date by calculating, from the rate at which I am going, when I should actually reach my goal. If the new projected date is earlier than I had planned, the KPI should show me a green light. If it is later than that, it should show me a yellow (warning) light when I am slipping but still have time in my allocated time frame to meet the goal. Of course, the KPI would show a red light if there was no way I could meet the goal. This one is harder to determine, as it is an indicator that certainly comes up once I have gone past the target date; determining that I have run out of time before that date is hard to calculate, especially if I have a lot of goals. There are things the tracker cannot really know, like my sacrificing one goal so that I can put all my effort toward another. So: if I am behind it will show the warning light, and if I have missed the goal it will show the red light... but at least I have something that I can track for my goals.
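The recalculation I am describing is really just a linear projection from the reading rate so far. A minimal sketch of the idea (the function names and the seven-day slack window here are my own illustrative choices, not code from my actual goal tracker):

```python
from datetime import date, timedelta

def projected_finish(pages_read, total_pages, start, today):
    """Estimate the completion date from the average reading rate so far."""
    days_elapsed = max((today - start).days, 1)
    rate = pages_read / days_elapsed            # pages per day so far
    remaining = total_pages - pages_read
    return today + timedelta(days=remaining / rate)

def kpi_light(projected, target, slack_days=7):
    """Green when ahead of plan, yellow when slipping a little, red when late."""
    if projected <= target:
        return "green"
    if projected <= target + timedelta(days=slack_days):
        return "yellow"
    return "red"

# 120 pages into a 300 page book after a month of reading at 4 pages a day
finish = projected_finish(120, 300, date(2008, 1, 1), date(2008, 1, 31))
```

With a target date comfortably past the projected finish the light is green; as the target closes in on the projection it turns yellow, then red once the projection lands past the target plus the slack.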

There were a couple of other types of goals that I thought of tracking. The projects that I build are not based on any page count, but I thought I would set a goal for the amount of time I would spend on the goal by a certain target date and track it that way. This also works quite well, and I can easily see when I am on and off track, but again the red light can only be shown once I have already missed the mark. Then, just to throw something different into the goal tracking mix, I thought about setting up some goals for my weight. This one is really different in that there is no time element at all. Instead we track the weight on a regular basis and let the goal tracker estimate, from the rate at which I am losing or gaining weight, when I should be able to reach my ideal weight. I think the KPIs are going to start showing me problem indicators when I am moving in the opposite direction from the one I was planning. Whether this is going to work or not I am not sure; for instance, for the past week I have had no change in any direction and the goal tracker is still saying I will reach my ideal weight by the date I have targeted... time will tell.
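The weight goal can use the same extrapolation idea without any time budget: take the trend between weigh-ins and project it forward, flagging the cases where the trend is flat or going the wrong way. Again, this is only a sketch with hypothetical names, not the code in my tracker:

```python
from datetime import date, timedelta

def weight_goal_date(weigh_ins, goal):
    """Project when the goal weight is reached from the average daily change.

    weigh_ins is a list of (date, weight) pairs, oldest first. Returns the
    estimated date, or None when the trend is flat or moving away from the
    goal (the cases where the KPI should show a problem indicator).
    """
    (d0, w0), (d1, w1) = weigh_ins[0], weigh_ins[-1]
    days = max((d1 - d0).days, 1)
    rate = (w1 - w0) / days              # pounds per day, negative when losing
    to_go = goal - w1                    # negative when losing toward the goal
    if rate == 0 or to_go / rate < 0:    # flat, or trending the wrong way
        return None
    return d1 + timedelta(days=to_go / rate)
```

A real tracker would probably fit a trend line over all the weigh-ins rather than just the endpoints, but even this crude version captures the "no change in any direction" case I ran into this week.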

Anyway, as you can probably tell by now I have actually started to put together a goal tracking program. It is still rough and most certainly is a beta product.

Good luck with your goals. I am finding that I am a lot more focused on my goals and staying on track better than when I wasn't tracking them, so I think it is working.

Starting from the Beginning

I have been a Visual Basic developer for over ten years now. It was not the first language that got me excited about programming, though. No, that would have been Clipper. I fell into Clipper accidentally, much the same way that Visual Basic started as an experiment for me.

It was sometime around 1987 or 1988 when I was working as the accountant and network administrator for my family's car dealership. I was running into limitations in the accounting software we were using and I knew that a change would be needed. However, I had great difficulty finding software with the functionality and flexibility that was required. I started looking into some source code solutions and found one that was originally written in dBase III and was Clipper ready. This was my start in serious programming, as the source code did not work and it took me two months to work through it, learning Clipper as I went along.

Clipper 4, which was a very procedural language, worked well for me. There were no surprises: the user could only process data in the exact steps designed into the software. When Clipper 5 was released I upgraded, which exposed me to some new and unfamiliar aspects of programming. Clipper introduced three "built in objects," and soon several third party vendors started coming out with add-ons for Clipper that allowed the creation of your own objects and classes. You should realize that by this time I was becoming quite the Clipper programmer. I was designing new features for our accounting software and building complementary add-ons. I was experimenting with Windows but was never able to roll it out at the dealership until Windows 3.1 was released; even then, we were running our accounting software in a DOS box through Windows. Nothing special here, but it worked. Nantucket, the company that owned Clipper, made a lot of promises that a Windows version of Clipper would be coming out soon.

In the meantime, I read an article by a fellow Clipper guru suggesting a look into Visual Basic to get a better handle on working with objects. So I got a copy of Visual Basic 1.0 for exactly that purpose: to get a better understanding of how objects worked and to actually be able to write a Windows application. I was still thinking that Clipper was going to be my main programming language, with Visual Basic as a tool to learn about objects. This was similar to the original purpose of Pascal, which was to teach structured programming. Anyway, I was having a great time with Visual Basic, reading a book or two on it and building some really simple programs.

On a flight to Comdex one year, I was sitting next to two guys from Sequiter Software. They noticed that I was reading one of my Visual Basic books and asked me how I liked Visual Basic. I told them I was enjoying it very much, but that I was really a Clipper programmer and it would be great if Visual Basic could access a database. Well, what followed was some very interesting information: Sequiter makes a product called CodeBase, a very fast engine that reads and writes dBase, Fox, and Clipper data, and I could use it from Visual Basic by declaring its API procedures within Visual Basic. That was it: from then on, Visual Basic became a very important tool in what would later become my programming career. Remember, I was an accountant who couldn't find good software that worked the way I needed it to.

So I guess you could say that I have been building database applications in Visual Basic since version 1.0. Just so that you are clear on who I really am, I did not take any shortcuts to change careers. I took a number of correspondence courses where I studied Assembler, Basic, Pascal, C, and COBOL, and then went to college, where I graduated with distinction. I am constantly learning, doing the best I can to keep up with the latest technology, and am always interested in creating better software products.

My Take on VB.NET vs. C#

C# (pronounced "C sharp") is the language that was designed for Dot Net and was used to build much of the Framework. It shares the same first name as its big brother C++, but it doesn't really feel like any C++ that I have ever taken out for a spin. C# really is a cross between the best of Visual Basic, the best of C++, and some elements of Java, which makes it the perfect language for Dot Net.

Microsoft announced that the new version of Visual Basic would finally be corrected to match the standards of other languages. There have always been inconsistencies between VB and other languages when it came to the number of elements created in an array and the value of true. Hey, somewhere along the way this now very powerful language got itself all screwed up, but VB.NET is the first version where the forms are not handled by the Ruby engine; the operating system is actually in charge of what we see on the screen. Anyway, at the time of this announcement I was all excited about the future of VB, and what the heck was C# anyway? It did not even seem too important to me.

Then sometime in the early part of 2002, Microsoft made the announcement that I think surprised almost every serious VB programmer: they were going to re-tool Visual Basic .NET to make it more compatible with Visual Basic 6. Well, this was enough for me to consider looking into C# more seriously.

Before we go on, I thought I would take a moment to talk about the bad rap that VB (as a language) has been given. VB has been attacked over the years by the programming community for not being a very serious language. The language started off as a very simple tool to build desktop applications for Windows. Over the years VB has become a very powerful programming language, and probably its curse has been the ease with which you can build Windows applications. I say a curse, because anyone who has worked with this language has been able to build a working program quickly and easily. On the other hand, you would not typically take a language like C++ straight out of the box and start writing a program without at least taking a number of courses and reading a few books on the subject. You might give up and leave that sort of programming to the professionals. But none of that is involved when programming in VB, since it is such a forgiving language. However, even with VB's simplicity there is a lot more to writing a good solid VB program than just a piece of code that works. There is choosing good programming practices and constantly refining your skills so that you write the most efficient and easily maintainable code that you can possibly write. The bad rap really belongs to the VB programmers who have picked up some very bad habits along the way and have failed to refine their skills to build elegant and well managed code. There are bad C++ programmers out there too, just not as many.

I have spent a fair amount of time in a variety of C and C++ environments and have found that it was just too much work to build a Windows desktop application. Visual Basic makes a lot of sense, since that is what it was designed for. C, on the other hand, has its roots in building drivers and operating systems, and I do not typically engage in those kinds of projects.

Making the Change

I am leaning towards programming in C# instead of VB, but not just because I am upset with the decisions that were made about the future of VB: I need something much more powerful than that to justify my reasons.

One of the things I really like in C# is the new inline XML style comments, which are not available in VB.NET. With these I am able to produce some very clear comments in my code, right where things happen, and from them produce a technical document. Many times in the past I have had to duplicate some of this effort and then update my documents when the code went through some changes. Not anymore: it is all in one place, and as I make changes my documents are also updated.

Secondly, Dot Net is built around the new concept of namespaces, which is the way Microsoft is getting around the DLL hell issues that have plagued us for years. I have some interesting stories to tell on that subject, but they will need to wait for another time. In C#, the namespace is exposed right out there, directly in your face. You can adjust the namespace in VB.NET, but you need to do it through the IDE, and it is just not in your face. I have done some work with multiple projects that support other projects, and I just think it is a lot cleaner when I have total control over the namespace.

Thirdly, there is the learning curve. VB.NET is not just an upgrade from Visual Basic. It is a new lifestyle, and you really need to get into that lifestyle if you want to take advantage of the Framework and go beyond what we have coded and designed in the past. This Framework is wonderful, and I am almost tempted to say that the only limitation is the lack of our imagination. Since I started getting into C#, I have had to take each step with a fresh new approach. When I was playing with the early betas I found that I was constantly making comparisons with the previous version of VB. I think my return on investment with C# is a whole lot better than if I had gone the VB.NET route. Something to keep in mind: Dot Net programming is for new programs, not for porting over a working application. Microsoft has made sure that the Dot Net Framework supports our old code, so why touch something that works fine? Instead, it is for creating new applications and for rebuilds of programs that lacked functionality which was difficult or impossible to implement in the past.

It is true that all languages compile to a common Intermediate Language (IL) file that is used by the Common Language Runtime (CLR), but there are some advantages to using C# over VB.


In a survey of 633 development managers conducted by BZ Research during June of 2002, the results show that VB.NET is being used in 21.8% of the current projects being developed while C# is being used in 18.8%. Over the next 12 months these same development managers are planning future development where VB.NET will be used in 43.4% of the projects and C# will rise to 36.7%. Pretty close.

These numbers would support what I have heard through the grapevine, that many of the VB shops are making plans to go the VB.NET route: I think in part it is because of the upgrade path they have followed in the past. They are not taking into consideration that VB.NET is not the same VB they have worked with over the past decade or so. I am sure that many of these shops will eventually start to move towards C#, since it is the language of the Framework and clearly the best way to start over. I think training their developers in C# could be more cost effective and less expensive than attempts to retrain them with VB.NET.

My prediction is that the growth of C# will be even greater than what is being portrayed in this survey, which shows them pretty close to a draw. As for me and my house, we are going to skip dabbling in VB.NET and go straight to the future of good programming: C#.

No no he's not dead, he's, he's restin'!

Just in case you did not get the Monty Python reference, here is a cartoon courtesy of Blaugh which gets right to the point. I have been away from writing anything for my web site for a very, very long time. Where have I been? Where do I begin? I have been quite busy developing software for a number of clients that I cannot name because of non-disclosure clauses in my contracts. I never did understand how disclosing who the actual client is in a public forum would be such a big deal, but I can only describe what I have been doing over the last four years as having worked in the hospitality, mortgage, back to hospitality, and now property cost industries. While all the work that these projects generate has been keeping me busy, Mary and I have continued to develop and support our AGP Maker program.

What has brought my attention suddenly back to this site, and to providing more articles and input on what I have been working on? I guess it is the change in where and how I am hosting the site, and a change in the content program used to update it. This is the third time that I have changed the content management system for this site. I started out using CityDesk because of an article that I came across. I don't remember which magazine, but the article was about content management systems. The article quoted a couple of paragraphs from Joel Spolsky talking about CityDesk. That article took me to Joel's web site, Joel on Software, which was the original inspiration for starting my site. I liked Joel's style and how he looked at things. This was the very first blog that I followed faithfully. Even today, when Joel writes something, I just want to find the time to sit down and read it. I guess part of it is that Joel does not write every day or even every week. When he has something he wants to say and share, he does, and that has always been my goal: speak when I have something to say, not just to generate content.

Next I switched the content management system over to Microsoft Content Management System (MCMS). This was a great learning experience, and I was able to leverage my dot net skills. It provided me with the ability to edit the pages from wherever I was, at home or on the road, which was a problem I had with CityDesk, since I had to make changes within the CityDesk program and then push out all the files to their final location. The future of MCMS is uncertain, as Microsoft is moving that technology into the latest release of Microsoft Office SharePoint Server (MOSS). That was not the reason I left the platform, though, as it is a really great product; it was just impossible to host these sites anywhere but on my own publicly exposed web server. I really want to move all the public web sites to be hosted outside our office so that they can be expanded and extended in a much more stable environment. Our office is not set up for hosting, and right now our hosting needs are not all that great, but things may very well change over time.

This brings us to the third content management system, the one I am switching to now: I am moving all our content over to DotNetNuke. Once again this leverages my skills as a dot net developer, with the extra benefit that GoDaddy supports it natively in their free hosting program. It continues to give me the flexibility to update the pages wherever I am, gives me a better opportunity to get my pages indexed by the search engines, and allows readers to link directly to pages and articles. When I had this site hosted in my office you could not link directly to an article unless you knew the name of the page, which was all hidden from view. This may even lead to some articles about working with DotNetNuke.

Over the next couple of weeks and months I want to take on some technical issues, like authentication and how I have taken advantage of Windows authentication while using it in a way that gives some of the greater flexibility of forms authentication. The way that in-house internal programs are built and consumed at other companies I have worked for just bugs me to death. There is no reason why I need to log onto every single tool that I use if I have already logged onto the computer I am using. There is no need for this, and I have developed some techniques, which I will share, on how I make this work the way it should.

I would also like to cover some topics that I have never covered before. These would just be opinion pieces, so take them with a grain of salt, but I do want to cover some political and economic issues that have been bugging me. If nothing else they will make you think, because I am sure my views are going to be a little different from what you might have been expecting. I do at times have a unique view of the way I see things working and how I think they should be working. Keep in mind these are opinions, not necessarily based on a lot of facts.

I would also like to talk about my ongoing conversion of all my web sites to DotNetNuke. It is a great content management tool that gives me a lot of flexibility: the skins are easy to create (now if only I were better at graphics I could really do something with this tool), but overall the experience is quite pleasant. Modules that are not provided in the DotNetNuke installation package I can create quite easily; I am, after all, a software developer. Plus, GoDaddy, which has been my domain name registrar for years, is now providing some free hosting (for the price of a domain registration), and they fully support DotNetNuke as a hosting package.

The Power of Time Tracking

I love to keep track of time. It could be related to my love of data and all the information I can extract from it: how much fuel does my car use, how much time do I spend on stuff each week, how many hours am I away from my family?

Actually, my attraction to time tracking goes much deeper than that. I never planned to start my own business when I made the career change into software development so many years ago. I had seen how self-employment worked as I grew up in a family owned business, where my father was in charge of the service department of an auto / agriculture dealership. One of the things I noticed was that when the billing was discussed from the details on the back of the work order, customers would request a discount because they did not see enough detail explaining why the job took as long as it did. This was not something I was looking forward to experiencing myself when self employed.

I do not like to negotiate. I am not a good negotiator. Instead I would like my work to speak for itself, and it has for many, many years. So when I did end up self employed and presented my clients with an invoice, I also had the opportunity to present them with a detailed accounting of what that invoice represented. Sometimes it read just like a book, but I never had to explain my work. There was never any negotiation about the amount of the invoice I was presenting, and I always got paid on time. Mission accomplished.

That was my original motivation to really get into time tracking. I have built various pieces of internal software (after all, I am a software developer) that have helped me maintain my goals. Since then I have discovered other benefits to keeping track of time, and for the rest of this article I want to detail these benefits.


When working on a long project, many times the client would only pay on some sort of a deliverable. We can all agree that we don't want to pay for something we have not yet received. I discovered that I could use my detailed time tracking entries as a deliverable, since the client received something from me that they could use to justify approval of payment.

When I was in college, an instructor said that we should be paid for each stage of the software life cycle. At first I had trouble with the concept, because at that time I only pictured the deliverable as the final finished product. However, my perception covered only a small part of the overall process of developing software for a client. I soon discovered that sometimes all I ended up doing for a client was research and some feasibility studies, or working on the specifications, without ever getting a chance to work on the actual software. I have also worked on projects where the specification phase went on for almost a year, collecting rules and processes and writing about the software that would be built. I needed regular pay periods, and the only way to justify that demand was to provide a deliverable, which ended up being the details from my time tracking efforts.

Resolve Disputes

Anytime you need to resolve a dispute, details play a very important role. I worked on a project quite some time ago that did lead to some legal confrontation. My recorded project details were used to justify the amount of time that was spent on the project and why a deposit should not be returned. It is important to keep track of how and where you spend your time.

I know that this is also a good rule when dealing with tax situations. Revenue Canada and the IRS want details, and many legal actions have been taken against individuals simply because they could not produce enough of them. I can hardly remember what I did a few hours ago, let alone days, months, or years, but if I have details in front of me, it sure helps jog my memory.

How Much Does it Cost?

We all have ideas as to how long we work on a task or even a project. Sometimes I can hardly believe that a certain task took me as long as it did. It felt like five minutes but in reality it took me four hours. When you start tracking details of your day with real time, you get very clear evidence of the time that was really consumed.

In my own anal way, I not only record the time and detail that I can bill my clients for, I also keep track of how long I spend on the road, reading my technical books and papers, and working on internal projects. Altogether this tells me how much time is being spent away from my family. It helps me keep my life in perspective and allows me to make better decisions. If I have only allowed for a few hours of time to spend with the family this week, maybe it is time to go and have that game of handball with my stepdaughter, or go for a nice leisurely stroll with my wife, or give my stepson a call just to see how he is doing.

This is a very monetary world, and time costs us money. Sometimes this is good, since it helps us provide for our families, and sometimes the cost is great, because it takes us away from them. However, if you don't track it, how are you going to know how well you are balancing your life? What is the cost?

This raises another thought from another life. When I was in the financial world (okay, I was an accountant for the family business), I would have people asking me for advice on how to construct a budget. My advice is always the same: first you need to be extremely anal about tracking your spending, because before you can start budgeting you need to know where your money is going. That is how I see time tracking.

The Plug

Over the years I have built several applications that tracked time: first with an Access database, then a VB front end with a SQL backend. The problem with both of these was synchronization to a central data store. For the last three and a half years I worked for a company that built a time tracking system on the web. I thought I would continue working for them until I decided to move on or retired, so I stopped thinking about building a better time tracker. Their software allowed me to keep track of all the things I had become accustomed to tracking.

I regretted this decision when the company was sold to a medium sized corporation that was acquiring software companies across the country. The head office insisted that we use a multi-million dollar time tracking system which was, in my eyes, worthless. I could not maintain the level of detail to which I had grown accustomed. None of us could see the point, since it did not produce detailed invoices for our clients, and that raised red flags with the clients, who all loved to know the details of what we were doing for them. Anyway, the company closed its doors and I found myself again an independent software developer needing some form of time tracking system, so I built Time Tracker. The product is still evolving and I may release a commercial version some time in the future.

If you would like to know more about my Time Tracker program and/or are interested in finding out how you can implement Time Tracker in your facility, send email to:

My name is Donald L. Schulz and I like to keep track of my time.

Who's the Boss

For most of our lives we struggle constantly to be the boss of ourselves. Does it ever happen? I am sure you have memories similar to mine of growing up as a child, where at some point you were struggling to gain control of your own life. You could not wait to move out of the house and get out on your own, so that you could be the boss of you. How's that going for you? Are you the boss of you yet?

It is not long after you move out that you find a whole bunch of new people have stepped in to take over the boss position. You have to pay rent, so you answer to your landlord; he becomes a boss of sorts, and when you can't pay the rent, he fires you by way of eviction. Then, in order to make some money to pay the rent, you have to find a job, and that usually leads to a boss, and you might even have a complete entourage of bosses. You know what I mean: there is your manager, the assistant manager, then the shift manager, and none of them are shy about giving you orders and commands. Come to think of it, maybe living at home wasn't so bad after all.

Self Employed

Then one day you wake up with this fantastic idea: if you start your own company you could become your own boss. Then you would truly have reached your goal of being the boss of yourself, and as the company grows you could end up being the boss of lots of other people. Yeah, this is what you are going to do to be the boss of you. Well, it is never quite like that, because if you want to remain in business you will need to listen to your customers. You need to provide them with a service that they will value and will want to pay you for. One of the very reasons a small company has a good chance of competing against a larger competitor is the ability to deliver better quality customer service. Wait a minute! If I have to listen to my customers and do what they want me to do, then they are my new boss? That's right, and as your business grows and you attract more and more customers, and you want to continue to be successful, the number of people you need to listen to increases as well. You could just ignore the requests of your customers, and we all know how that is going to affect your newly formed company. Remember the last time you were fed up with a business that was ignoring your needs? Why, you found a new place of business that was more willing to listen to your needs and even provide you with the service you were looking for.

Going Public

Okay, let's take the self employed business a step farther. Let's say that you do make a real honest effort in your new business, listen to your customers, and follow through on many of their suggestions to improve the products and services that you provide. You make improvements in your goods and services for the benefit of your customers. The company grows and grows, you are the boss of hundreds, maybe even thousands, of employees, your customers love your products, so you decide to take the business to the next level and go public. You know, trade shares of your company on the stock market. This was of course in an effort to reach more customers, to expand to other geographical areas, to expand your horizons and get your products and services into the hands of new deserving customers. This changes things. All of a sudden you are hearing from a new group of people who want your attention, and they keep talking about steady growth, making more profit, and driving the share price up. These are your investors, and it sounds like a new set of bosses to me. They don't seem to share the passion you had for pleasing your customers; in fact, they don't seem to care about them other than to make them pay more money, anything to show growth and make the stock price go up. This can be a problem: if you grow too fast and the profits are a little slow at coming in, you are going to be under pressure to increase profits somewhere and cut expenses in other areas. Both of these decisions could greatly affect the fine customer service that you have been able to provide in the past.


Let us talk about one more area in this topic of bosses, and that is politics. I think that politicians sometimes forget that their positions are a role reversal of sorts. Politicians work for the people; I think the correct term is servant of the people. Yes, even the highest-ranked position in the country, that of the president, is really a servant of the people, and we expect them to serve the needs of the citizens and make decisions that are for the good of the people, not for themselves and the many friends they made to get to this fine position of servanthood.


I think that having a boss and having to answer to someone is a fact of life. You can even get to be the president of the United States, only to answer to the people, who are your bosses. So, in conclusion, be the best boss that you can be to the people who look to you for leadership, and treat those in a boss position over you with respect. If they do not deserve your respect, then maybe it is time to leave and find a new and better boss. There are a number of them out there; I know, I have worked for a few of them myself.

What is The Web We Weave, Inc?

A little more than a year ago, Mary and I had a discussion about the many projects that we both have had in the backs of our minds and would like to make a reality. We thought a corporation would be good in that it could provide us with the legal entity and a single structure in which we could register our copyrights and trademarks. It might also provide us with some tax relief and if things went well, could very well represent a major part of our future.

Well, these things are all well and good until you find that our vast array of projects are just that: vast, varied, and hard to describe in a simple way. What is our company, “The Web We Weave, Inc.”, all about? I guess the best way to present this is to go through our current list of projects and how we came about them, as they all lead to a very interesting story, at least to Mary and me.

Fuel Consumption

I have been interested in fuel consumption and fuel consumption tracking since as far back as 1990 or 1991. Around that time, I was spending a lot of my time traveling between Canada and the USA. I would often travel with my laptop and liked to keep an eye on my fuel consumption. There were a few calculator-type programs that could do this in Canada, and we used them at the family-owned dealership to verify fuel consumption for our customers. The problem I had was when I traveled to the US: I had to do all these extra conversions from US gallons into liters in order for those programs to do the calculation. I figured there had to be a better way, one that would take the various measurements and do the conversions on the fly in the background.

As a result of this, I built an application that I later distributed as shareware called “win-Fuel”, written as a desktop application in Visual Basic 1.0. It did exactly what I had in mind: radio buttons that switched between miles and kilometers, and a second set of radio buttons to switch between liters, Imperial gallons, and US gallons. The results were displayed in four formats: miles per US gallon, miles per Imperial gallon, miles per liter, and liters per 100 kilometers (the official metric calculation). I did have some success with the application, but more important to me than the success was the experience of building a simple application and taking it through all the stages, from concept to development to distribution. I also built context-sensitive help and a setup program to complete the project.
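The conversions behind a tool like this boil down to a handful of constants. Here is a minimal sketch of the idea (in Python rather than the original Visual Basic; the function and constant names are illustrative, not taken from win-Fuel):

```python
# Exact unit-conversion constants.
LITERS_PER_US_GALLON = 3.785411784
LITERS_PER_IMPERIAL_GALLON = 4.54609
KM_PER_MILE = 1.609344

def consumption(distance_miles: float, fuel_liters: float) -> dict:
    """Express one fill-up in several common fuel-consumption formats."""
    return {
        "miles_per_us_gallon": distance_miles / (fuel_liters / LITERS_PER_US_GALLON),
        "miles_per_imperial_gallon": distance_miles / (fuel_liters / LITERS_PER_IMPERIAL_GALLON),
        "miles_per_liter": distance_miles / fuel_liters,
        "liters_per_100_km": fuel_liters / (distance_miles * KM_PER_MILE) * 100,
    }

# Example: 300 miles driven on 40 liters of fuel.
stats = consumption(distance_miles=300, fuel_liters=40)
print(round(stats["miles_per_us_gallon"], 1))  # about 28.4 MPG (US)
```

With all inputs normalized to miles and liters internally, switching the radio buttons only changes which conversion is applied on the way in and which format is shown on the way out.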

I had always planned to go back to this little application and add the ability to store the information in a database of some sort and use that data to compare past trips with the current calculation. However, that never did happen. But now, with the availability of the Internet and the ease with which what was once a desktop application can be turned into a web application feeding an even larger data source, this is a possibility. It is my belief that the collected data could be quite valuable to governments, environmental groups, and car manufacturers, as well as individuals. Up to the present, fuel consumption has always been measured under laboratory conditions, and real-world data has never been taken into consideration.

I see this site as a free service to the users of the application, which will allow me to bring to market the collective value of the data. I also see it as a central place where fuel consumption can be discussed in forums and discussions, along with articles on fuel consumption such as tips on getting better gas mileage and regular vs. premium grade gasoline.

Stock Market Analysis

Mary has always had an interest in the stock market and has been very good at doing the research and analysis necessary to make good stock market picks. From this interest, she has wanted to share some of this research with others in the rather unique way of only looking at and grading stocks that can be bought directly from companies, through what are officially termed DRIPs (Dividend Reinvestment Plans).

One of the many interesting things I learned as Mary explained this to me is that stock bought through a stockbroker is not in your name but is instead held for you under the broker’s name. Buying directly from companies enables you to buy stock in your own name, have the right to vote at shareholders’ meetings, avoid stockbroker fees, and enjoy a whole lot of other benefits that Mary can tell you about on her new site. Sounds pretty good, doesn’t it?

Anyway, the model for this site is either a monthly or yearly subscription or a per-use basis. The idea is that during the period the client has chosen, they can go into the site and pick out the various stocks, which are graded for their safety and growth potential, with links (where available) to the web sites of the companies that do indeed sell their stock directly to you. The data itself comes from a variety of sources, and you can be assured that Mary has already verified that all the stocks listed on her site are available as DRIPs.

Graduation and Awards Program

Then, in the last year that Mary was working in the Activities Department of a high school, she came across an interesting opportunity. It seems that this high school would type up a number of lists for Awards Night and the Graduation Ceremony. One list contained all the awards, the presenters, and the students who received each award; this was used during the Awards Night presentation. Then, for Graduation, there was a program that listed all the students and the awards they had received on Awards Night. Besides being a lot of repetitive work, it was easy for errors to creep in, and a lot of time was spent proofreading the lists.

Mary knew there had to be a better solution and brought the problem home, where we designed an Access application in which students, presenters, and awards were entered only once, with links to each other. The result was two Access reports that could either be exported into Word for some further formatting adjustments or printed right from Access and given to the printer to produce the two programs. This proved to be quite successful, and the next year the high school had Mary come back to provide some instruction and make some minor tweaks to the application.

It did not take long for Mary and me to realize that if this high school had such a large task in front of it at the end of the school year, so would every other high school and middle school in the country. However, the Access application has a bit of a quick-and-dirty feel to it and would need to be re-engineered into a more commercial product, especially if we were going to support it. Plans are in motion to build a complete application, even though a final name has not been decided yet. Our plan is to build and then market this application, starting with all the schools here in Southern California.

Custom Software Development

Several months ago, Mary was questioning me about the future of our little corporation. We had the company in place, although we had not opened any bank accounts or done anything further than shelling out the legal fees involved in getting ourselves set up. There was no point in going too fast, as all the projects we had been talking about would cost us money and time to develop, and we really were not sitting on a surplus of either. Still wanting to do all these projects, we would just go about them more slowly. There was also a need to upgrade our entire network, as the servers and workstations were old and having great difficulty keeping up with the technology we were using for development and storage.

Then, near the end of January 2002, everything changed. The company that I had been working for over the last 3.5 years closed its doors. Now I was officially unemployed, and The Web We Weave, Inc. was not able to support us at that time, or so we thought. The last client I had worked for through my employer was attempting to put a number of the team members on their project back together, offering them short-term contracts.

I was one of the lucky ones, and as it turns out it worked out quite well: we put the contract in the name of The Web We Weave, Inc., and now we are doing custom software development.

Beyond Technical

Besides all of these more or less technical projects, Mary and I have interests in writing. Mary has a number of ideas for books that she would like to write and I still have a desire to do things with music. We both would like to write articles for various publications as we both feel we have something to say and would like to share it with the rest of the world.

What is “The Web We Weave, Inc.”?

Well, we are back to this question: what is “The Web We Weave, Inc.”? You are probably as confused about this as we are. We made a list of words that we thought described the nature of our company, but still no simple mission statement.

  • web-FUEL
  • Nothing but Direct
  • Education
  • Fuel Consumption
  • Software Development
  • Web Development
  • AGP Maker
  • Research
  • Business Intelligence
  • E-commerce
  • Consulting
  • Organizing
  • Tracking
  • Money Making
  • Profitable
  • Service Provider
  • Hardware
  • Relaxed
  • Confident
  • Professional
  • Cutting Edge Technology

We will keep working on it, to find that perfect mission statement and motto that clears up exactly what “The Web We Weave, Inc.” is all about. It just shows that I was wrong; you need more than just a cool name.