Donald On Software

Just my thoughts, my interests, my opinions!

Some MSDeploy Tricks I've Learned

In an earlier post I talked about Hexo, the tool I use for this blog. In that post I mentioned how delighted I was with the whole process except for one thing that bothered me: the deployment to the Azure website. For that step I was using FTP to push the files from the public folder up to Azure. What I really wanted was an MSDeploy solution, but that is harder than it sounds, especially when you are not using a Visual Studio project and MSBuild to create the application.

In this post I will take you on my journey to a working solution that lets me deploy my blog as an MSDeploy package to the Azure website.

What is in the Public Folder

First off, we should talk about what is in this folder that I call public. As I mentioned in my Hexo post, the hexo generate command takes all my posts written in simple markup, creates the output that is my website, and places it in a folder called public.

It is the output in this folder that I want to create the MSDeploy package from. This is quite straightforward, as I already knew that MSDeploy can not only deploy a package but also create one. It just requires knowing how to call MSDeploy from the command line.

Calling MSDeploy directly via Command Line

The basic syntax for creating a package is to call MSDeploy.exe with the parameter -verb (the verb choice is pretty much always sync), then the parameter -source, which says where the source is, and finally -dest, which says where to place the package, or where to deploy to if the source is a package.
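In outline, and hedging a little because the provider names vary by scenario, every call in this post has the same shape:

msdeploy.exe -verb:sync -source:<provider>=<value> -dest:<provider>=<value>

The concrete examples below fill in manifest, package and iisApp as the providers.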

Using Manifest files

MSDeploy is very powerful, with many options and things you can do with it. I have found it difficult to learn because, as far as I can tell, there is no good book or course that takes you into any real depth with this tool. I did come across a blog, DotNet Catch, that covers MSDeploy quite often. It was there that I learned about creating and deploying MSDeploy packages using manifest files.

In this scenario I have a small XML file that says where the content is found, and for that I write out the path to the public folder on my build machine. I call this file manifest.source.xml:

<?xml version="1.0" encoding="utf-8"?>
<sitemanifest>
  <contentPath path="C:\3WInc-Agent\_work\8\s\public" />
  <createApp path="" />
</sitemanifest>

With the source manifest in place and the site content sitting in the public folder at that location, I just have to call the following command to generate an MSDeploy package. If you are calling this from the command line on your machine, this should all be on one line.

"C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" 
-verb:sync
-source:manifest="C:\3WInc-Agent\_work\8\s\msdeploy\manifest.source.xml"
-dest:package=C:\3WInc-Agent\_work\8\s\msdeploy\blog.zip

If you are calling this from TFS, you would use the Command Line task: the first box, called Tool, gets the path to the msdeploy.exe program, and the remaining arguments are combined into one line and entered into the Arguments box.
Build Task to Create Package from Manifest file
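For reference, here is a sketch of how that same command splits across the task fields, using the values from the command above:

Tool:      C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe
Arguments: -verb:sync -source:manifest="C:\3WInc-Agent\_work\8\s\msdeploy\manifest.source.xml" -dest:package=C:\3WInc-Agent\_work\8\s\msdeploy\blog.zip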

Now, in order for that to work I need a similar XML file for the destination, telling MSDeploy that this package is a sync to a particular website. I called this file manifest.dest.xml:

<?xml version="1.0" encoding="utf-8"?>
<sitemanifest>
  <contentPath path="Default Web Site" />
  <createApp path="Default Web Site" />
</sitemanifest>

The syntax to deploy this blog.zip package using the destination manifest file is:

"C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe"
-verb:sync
-source:package="C:\3WInc-Agent\_work\8\s\msdeploy\blog.zip"
-dest:manifest="C:\3WInc-Agent\_work\8\s\msdeploy\manifest.dest.xml"

This works great, except that I cannot use the XML manifest files when deploying to my Azure website, because I do not have that kind of control there. It is not a virtual machine that I can log onto or run a remote PowerShell script against, and this package won't deploy onto that environment without it. I needed another approach to get this to work the way I want.

Deploy to the Build Machine's IIS to Create a New MSDeploy Package

This next idea is a little strange, and I had to get over the fact that I was configuring a web server on my build machine, but that is exactly what I did. My build machine is a Windows Server 2012 R2 virtual machine, so I turned on the Web Server role from Roles and Features. Then, using the set of commands above in a Command Line task, just like the one I used to create the package from the public folder, I deployed the package to the build machine.
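If you prefer scripting the role installation over clicking through Server Manager, I believe this one-liner does the same thing on Server 2012 R2 (run from an elevated PowerShell prompt):

Install-WindowsFeature -Name Web-Server -IncludeManagementTools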

At this point I could even log into the build machine and confirm that I did indeed have a working website with all my latest posts in it. I then called MSDeploy once more and created a new blog.zip package from that website.

"C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe"
-verb:sync
-source:iisApp="Default Web Site"
-dest:package="C:\3WInc-Agent\_work\8\s\msdeploy\blog.zip"

The resulting blog.zip deployed to my Azure website without any issue whatsoever. You may have noticed that this blog.zip has the exact same name and location as the old one. There was no need to keep the first package, as it was only used to get the site deployed to the build machine so that we could create the one we really want. To make sure that went smoothly, I deleted the old package before calling this last command, which is also a command line task in the build definition.
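That cleanup step is trivial to script; a minimal sketch using the same path as above (I am assuming an inline PowerShell step here) would be:

Remove-Item "C:\3WInc-Agent\_work\8\s\msdeploy\blog.zip" -ErrorAction SilentlyContinue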

Success on Azure Website

In my release definition for the Azure website deployment, I just needed to use the built-in, out-of-the-box task called “Azure Web App Deployment”, point it to where it could find the blog.zip file, and tell it the name of my Azure web app; it took care of the rest.

Deploy the zip package to Azure

How I Use Chocolatey in my Releases

I have been using Chocolatey for a while as an ultra-easy way to install software. It has become the preferred way to install tools and utilities from the open source community. Recently I started to explore this technology in more depth, just to learn more about Chocolatey, and I found some really great uses for it that I did not expect. This post is about that adventure and how and what I use Chocolatey for.

Built on NuGet

First off, we should talk about what Chocolatey is. It is another package technology based on NuGet; in fact it is NuGet with some more features and elements added to it. If you have been around me over the last couple of years, you have heard me declare that NuGet is probably one of the greatest advancements in the .NET community in the last 10 years. Initially introduced back in 2010, it was a package tool to help resolve the dependencies in open source software. Even back then I could see that this technology had legs, and indeed it has, as it has solved a hard development problem we worked on for years: having shared code across multiple projects without interfering with the development of the projects that depend on it. I will delve into this subject in a later post, as right now I want to focus on Chocolatey.

While NuGet is really about installing and resolving dependencies at the source code level, as in a Visual Studio project, Chocolatey takes that same package structure and focuses on the operating system. In other words, I can create NuGet-like packages (they have the very same extension as NuGet, *.nupkg) that I can install, uninstall or upgrade in Windows. I have a couple of utility programs that run on the desktop that I use to support my applications. These utilities are never distributed as part of the application I publish through ClickOnce, but I need up-to-date versions of them on my test lab machines. Finding a way to keep them installed and current on those machines has always been a problem. With Chocolatey, that problem is now easy to solve and one I no longer have.

Install Chocolatey

Let's start with how we would go about installing Chocolatey. If you go to the Chocolatey.org website there are three ways listed to download and install the package, all of them using PowerShell.
This first one assumes nothing, as it bypasses the execution policy and has the best chance of installing on your system.

@powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))" && SET PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin

This next one assumes that you are an administrator on your machine and have set the execution policy to at least RemoteSigned.

iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))

Then this last script is going to assume that you are an administrator, have the Execution Policy set to at least RemoteSigned and have PowerShell v3 or higher.

iwr https://chocolatey.org/install.ps1 -UseBasicParsing | iex

Not sure what version of PowerShell you have? Well the easiest way to tell is to bring up the PowerShell console (you will want to run with Administrator elevated rights) and enter the following:

$PSVersionTable

Making my own Package

Okay, so I have Chocolatey installed and I have a product that I want to install, so how do I get this package created? Good question, so let's tackle that next. I start in File Explorer, go to the project, and create a new folder. In my case I was working with a utility program that I call AgpAdmin, so at the sibling level of that project I made a folder called AgpAdminInstall, and this is where I am going to build my package.

The file structure

Now I would bring up PowerShell running as an administrator and navigate over to that new folder that I just created and enter the following Chocolatey command.

choco new AGPAdmin

This creates a nuspec file with the same name I entered in the new command, as well as a tools folder containing two PowerShell scripts. There are a couple of ways you can build this package, as the final bits don't even need to be inside it; they could be referenced at other locations from which they can be downloaded and installed. There is plenty of documentation and there are plenty of examples for doing it that way, and I would say most of the packages on Chocolatey.org are built like that. The documentation mentions that the assemblies can be embedded, but I never found an example, and that is how I wanted to package this, so that is the guidance I am going to show you here.
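Roughly, and allowing for differences between Chocolatey versions in the exact names and casing, the generated layout looks something like this:

AGPAdmin\
    agpadmin.nuspec
    tools\
        chocolateyinstall.ps1
        chocolateyuninstall.ps1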

Let's start with the nuspec file. This is the file that contains the metadata and says where all the pieces can be found. If you are familiar with creating a typical NuGet spec, this should all look pretty familiar, but there are a couple of things you must be aware of. In the Chocolatey version of this spec file you must have a projectUrl (in my case I pointed to my VSTS dashboard page), a packageSourceUrl (in my case I pointed to the URL of my git repository), and a licenseUrl, which needs to point to a page that describes your license. I never needed these when building a NuGet package, but they are required in order to get the Chocolatey package built. One more thing we need for the nuspec file to be complete is the files section, where we tell it which files to include in the package.

There will be one entry there already, which includes everything found in the tools folder and places it within the tools folder of the NuGet package structure. We want to add one more file entry with a relative path to the setup file being constructed: up one folder and then down three folders through the AGPAdminSetup tree, with the target again being the tools folder of the package. This line is what embeds my setup program into the Chocolatey package.

<?xml version="1.0" encoding="utf-8"?>
<!-- Do not remove this test for UTF-8: if “Ω” doesn’t appear as greek uppercase omega letter enclosed in quotation marks, you should use an editor that supports UTF-8, not this one. -->
<package xmlns="http://schemas.microsoft.com/packaging/2015/06/nuspec.xsd">
  <metadata>
    <!-- Read this before publishing packages to chocolatey.org: https://github.com/chocolatey/chocolatey/wiki/CreatePackages -->
    <id>agpadmin</id>
    <title>AGPAdmin (Install)</title>
    <version>2.2.0.2</version>
    <authors>Donald L. Schulz</authors>
    <owners>The Web We Weave, Inc.</owners>
    <summary>Admin tool to help support AGP-Maker</summary>
    <description>Setup and Install of the AGP-Admin program</description>
    <projectUrl>https://donald.visualstudio.com/3WInc/AGP-Admin/_dashboards</projectUrl>
    <packageSourceUrl>https://donald.visualstudio.com/DefaultCollection/3WInc/_git/AGPMaker-Admin</packageSourceUrl>
    <tags>agpadmin admin</tags>
    <copyright>2016 The Web We Weave, Inc.</copyright>
    <licenseUrl>http://www.agpmaker.com/AGPMaker.Install/index.htm</licenseUrl>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
  </metadata>
  <files>
    <file src="..\AGPAdminSetup\bin\Release\AGPAdminSetup.exe" target="tools" />
    <file src="tools\**" target="tools" />
  </files>
</package>

Before we move on to the automated steps that let us build this package without even thinking about it, we need to make a couple of changes to the PowerShell scripts found in the tools folder. When you open the install script it is well commented, and the variable names used are pretty clear about what they are for. You will notice that out of the box it is set up for you to provide a URL where it can download the program to install. I want to use the embedded approach instead, so un-comment the first $fileLocation line and replace 'NAME_OF_EMBEDDED_INSTALLER_FILE' with the name of the file you want to run; I will also assume that you have it in this same tools folder (in the compiled nupkg file). In my package I created an install program using the WiX toolset, which also gives it the capability to uninstall itself automatically.

Next I commented out the default silentArgs and validExitCodes found right under the #MSI comment. There is a long run of commented lines that all start with #silentArgs; what I did was un-comment the last one and set the value to '/quiet', then un-comment the validExitCodes line right below it, so the lines look like this:

silentArgs = '/quiet'
validExitCodes= @(0)

That is really all there is to it; the rest of the script should just work. There are a number of different cmdlets you can call, and they are all shown, fairly well commented, in the chocolateyinstall.ps1 file that appeared when you ran the choco new command. I was creating a Chocolatey wrapper around an install program, so I chose the cmdlet Install-ChocolateyInstallPackage. To summarize, ignoring the commented lines, the finished PowerShell script looks a lot like this:

$ErrorActionPreference = 'Stop';

$packageName  = 'MyAdminProg' # arbitrary name for the package, used in messages
$toolsDir     = "$(Split-Path -parent $MyInvocation.MyCommand.Definition)"
$fileLocation = Join-Path $toolsDir 'MyAdminProgSetup.exe'

$packageArgs = @{
  packageName    = $packageName
  unzipLocation  = $toolsDir
  fileType       = 'EXE' # only one of these: exe, msi, msu
  url            = $url
  url64bit       = $url64
  file           = $fileLocation
  silentArgs     = '/quiet'
  softwareName   = 'MyAdminProg*' # part or all of the Display Name as you see it in Programs and Features; it should be enough to be unique
  checksum       = ''
  checksumType   = 'md5' # default is md5, can also be sha1
  checksum64     = ''
  checksumType64 = 'md5' # default is checksumType
}

Install-ChocolateyInstallPackage @packageArgs

One thing we did not cover in all this is the fileType value. This will be exe, msi or msu, depending on how you created your setup file. I took the extra step in my WiX install program of creating a bootstrapper that takes the initial MSI, checks prerequisites such as the correct version of the .NET Framework, and turns it into an exe. You will need to set this value to match the type of install program you want to run.

Another advantage of using an install package is that it knows how to uninstall itself. That means I did not need the other PowerShell script in the tools directory, chocolateyuninstall.ps1. I deleted mine so that it would use the automatic uninstaller that is managed and controlled by Windows (MSI). If this file exists in your package, then Chocolatey will run that script, and if you have not set it up properly it will give you issues when you run the choco uninstall command for the package.

Automating the Build in TFS 2015

We want to make sure that we place the tools folder and the nuspec file into source control. Besides giving us a place where we can repeat this process and keep track of any changes between versions, it lets us automate the entire operation. Our goal here is that checking in a code change to the actual utility program will kick off a build, create the package, and publish it to our private Chocolatey feed.

To automate building the Chocolatey package I started with a build definition I already had that was building all these pieces: it built the program, created the AGPAdminPackage.msi file, and then turned that into the bootstrapper, giving me the AGPAdminSetup.exe file. Our nuspec file indicates where to find the finished AGPAdminSetup.exe so that it will be embedded into the finished .nupkg file. Just after the steps that compile the code and run the tests, I add a PowerShell script step, switch it to run inline, and write the following script:

# You can write your powershell scripts inline here. 
# You can also pass predefined and custom variables to this scripts using arguments

cpack

This command finds the .nuspec file and creates the .nupkg in the same folder as the nuspec file. From there I copy the pieces I want in the drop into the staging workspace $(Agent.BuildDirectory)\b, and in the Copy and Publish Build Artifacts step I just push everything I have in staging.

Private Feed

Because Chocolatey is based on NuGet technology, it works on exactly the same principle of distribution, which is a feed (though it could also be a network file share). I have chosen a private feed, as I need a feed that I can access from home, from the cloud, and when I am on the road. You might be in the same or a similar situation, so how do you set up a Chocolatey server? With Chocolatey, of course.

choco install chocolatey.server -y

On the machine where you run this command, it creates a chocolatey.server folder inside a folder called tools off the root of the drive. Just point IIS at this folder and you have a Chocolatey feed ready for your packages. The packages actually go into the App_Data\packages folder that you will find in this ready-to-go chocolatey.server site. However, I will make another assumption that this server is not right next to you but on a different machine or even in the cloud, so you will want to push your packages to it. To do that you will need to make sure that you give the app pool modify permissions on the App_Data folder. Then, in the build definition, after the Copy and Publish Build Artifacts step, add another inline PowerShell script and this time call the following command:

# You can write your powershell scripts inline here. 
# You can also pass predefined and custom variables to this scripts using arguments

choco push --source="https://<your server name here>/choco/" --api-key="<your api key here>" --force
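The App_Data permission mentioned above can also be granted from a script. A hedged example, assuming the tools folder landed on the C: drive; the app pool identity here is a guess, so substitute whatever app pool you assigned to the site:

icacls "C:\tools\chocolatey.server\App_Data" /grant "IIS AppPool\chocolatey.server":(OI)(CI)M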

That is really it: you now have a package in a feed that can be installed and upgraded with a simple Chocolatey command.

Make it even Better

I went one step further to make this even easier: I modified the Chocolatey configuration file so that it looks in my private repository before the public one that is set up by default. This way I can install and upgrade my private packages just as if they were published and exposed to the whole world, even though they are not. You will find the chocolatey.config file in the C:\ProgramData\chocolatey\config folder. When you open the file you will see an area called sources, probably with one source listed. Just add an additional source entry, give it an id (I called mine Choco), set the value to where your Chocolatey feed can be found, and set the priority to 1. That is it, but you need to do this on every machine that will be getting your program and all the latest updates. Now, whenever a build is about to run tests on a virtual machine, a simple PowerShell script can do the install for you.
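If you would rather not edit the XML by hand, I believe the same source entry can be registered with a command along these lines (same placeholder feed URL as before):

choco source add --name="Choco" --source="https://<your server name here>/choco/" --priority=1

Either way, once the source is registered, the script below just works.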

choco install agpadmin -y
Write-Output "AGPAdmin Installed"

choco upgrade agpadmin -y
Write-Output "AGPAdmin Upgraded"

Start-Sleep 120

The program I am installing is called agpadmin, and I pass -y so that it skips the confirmation, as this almost always runs as part of a build. I call both the install and then the upgrade because a single command does not seem to do both: the install is ignored if the package is already installed, and the upgrade then picks up any newer version that is out there.

Hope you enjoy Chocolatey as much as I do.

Who left the Developers in the Design Room

This post is all about something that has been bugging me, and it has been bugging me for quite a while. I have been quiet about it, starting the conversation with different people at random, and now it is finally time I just say my piece. Yes, this is soapbox time, so I am just going to unload here. If you don't like this kind of post, I promise to be more joyful and uplifting next month, but this month I am going to lay it out there, and it just might sound a bit harsh.

Developers are bad Designers

I come from the development community, with over 25 years spent on the craft, and I originally got there because I was tired of the bad workflows and interfaces built by people who thought they understood how accounting should work but did not. I implemented a system that changed my workload from 12-hour days plus some weekends to getting everything done in ten normal days. Needless to say, I worked my way out of a job, but that was okay because it led me to opportunities that really allowed me to be creative. You would think that with a track record like that I should be able to design very usable software and be a developer, right?

It turns out that being a developer has given me developer characteristics, and one of those is that we are a bit geeky. As a geeky person, you tend to like having massive control and clicking lots of buttons, but that might not be the best experience for a user who is just trying to get their job done. I once made the mistake of asking my wife, who was the Product Owner of a little product we were building, what the message should be when the user confirms that they want to save a student. Her remark threw me off guard for a moment when she asked: why do I need a save button? I made the change, so just save it; don't have a button at all.

Where’s the Beef

Okay, so far all I have enlightened you with is that I am not always the best designer, and that is why I have gatekeepers like my wife who remind me every so often that I am not thinking about the customer. However, I have noticed that many businesses have been revamping their websites with what looks like a focus on mobile. I get that, but the end result is that it is harder for me to figure out how to use their sites, and some things I was able to do before are just not possible anymore. You can tell right away that the changes were not based on how a customer might interact with the site; I don't think the customer was even considered.

One rule that I always try to follow, and this is especially true for an eCommerce site, is that you need to make it easy for the customer if you want them to buy. Some of the experiences I have had lately almost leave me convinced that these companies don't want to sell their products or do business with me. For some of them I have sought out different vendors because the frustration level is just too high.

Who Tests this Stuff?

That leads right into my second peeve: no one seems to test this stuff. Sure, the developer probably tested their work for proper functionality, and there might even have been a product owner who understood the steps to take after talking to the developer and proved to him or herself that the feature was working properly. That is not testing, my friend; both of these groups test applications the very same way, and it's called the happy path. No one is thinking about all the ways that a customer may expect to interact with the new site, especially when you have gone from an older design to a new one. No one thought of that, and now your sales numbers are falling because no one knows how to buy from you.

Testers have a special gene in their DNA that gives them the ability to think about all the ways a user may interact with the application, and even to attempt to do evil things with it. You want these kinds of people on your side; it is better to find a problem while the product is still under development than to have a customer find it, or worse yet to get hacked, which could really cost you financially as well as in trust.

In my previous post, “Let the Test Plan Tell the Story”, I laid out the purpose of the test plan. This is the report we can always go back to in order to see what was tested, how much of it was tested, and so on. I feel that the rush to get a new design out the door is hurting the future of many of these companies, because they are taking the shortcut of not designing these sites with the customer in mind and eliminating much of the much-needed testing. At least that is how it seems to me; my opinion.

Let the Test Plan Tell the Story

This post is the result of some discussions I have had lately while trying to determine the workflow for a client. The topic often comes up with others as well, but what I had never used as an argument before was the role of the test plan in all this. Besides being an eye opener and an aha moment for the client and myself, I thought I would explore the thought a little more, as others might also find it helpful in understanding and getting better control of their flows.

What is this flow?

There is a flow in the way that software is developed and tested, no matter how you manage your projects. Things typically start from some sort of requirement work item that describes the business problem and what the client wants to do, and it should include some benefit the client would receive if it were implemented. Yes, I just described the basics of a user story, which is where we should all be by now when it comes to software development. The developers, testers, and whoever else might be contributing to making this work item a reality then start breaking the requirement down into tasks that they will work on to make it happen.

The developers get to work writing the code and completing their tasks, while the testers start writing the test cases they will use to prove either that the new requirement is working as planned or that it simply is not working. These test cases all go into a test plan that represents the current release you are working on. As the developers complete their coding, the testers start testing, and any test cases that are not passing go back to the developers for rework. How this is managed depends on how the teams are structured. Typically, in a scrum team where you have developers and testers on the same team, this is a conversation, and the developer might just add more tasks because this is work that got missed. In situations where the flow between developers and testers is still a separate hand-off, a holdover from the waterfall days, a bug might be filed that goes back to the developers, and you follow that through to completion.

As the work items move from the business to the developers they become Active. When the developers are code complete, the work items should become Resolved, and as the testers confirm that the code is working properly they become Closed. Any time a work item is not really resolved (developer wishful thinking), the state moves back to Active. In TFS (Team Foundation Server) there is an out-of-the-box report called Reactivations, which keeps track of the work items that moved from Resolved or Closed back to Active. This is the first sign that there are some serious communication problems going on between development and test.

With all the Requirements and Bugs Closed How will I know what to test?

This is where I find many teams start to get a little weird and over-complicate their workflows. I have seen far too many clients take the approach of adding extra states that say where the bug is by including the environment it is being tested in. For instance, they might have something that says Ready for System Testing or Ready for UAT and so on. Initially this might sound sensible and like the right thing to do. However, I am here to tell you that it is not beneficial, it loses the purpose of the states, and this workflow will drown you in the amount of work it takes to manage. Let me tell you why.

Think of the state as a measure of how far along that requirement or bug is. It starts off as New or Proposed, depending on your process template, and from there we approve it by changing the state to Approved or Active. Teams that use Active in their workflow don't start working on an item until it is moved into the current iteration. The process that uses Approved also moves the item into the current iteration to start work, but those teams then move the state to Committed when they start working on it. At code completion the Active ones go to Resolved, where the testers begin their testing and, if satisfied, close the work item. In the Committed group, the developers always work very closely with the testers, who have been testing all along, so when the test cases pass the work item moves to Done. The work on these items is finished, so what happens next is that we take the build representing all the completed work and move it through the release pipeline. Are you with me so far?

This is where I typically hear confusion, as the next question is usually something like this: if all the requirement and bug work items have been closed, how do we know what to test? The test plan, of course; this should be the report that tells you what state these builds are in. It should be from this one report, the results of the test plan, that we base our approvals for the build to move on to the next environment and eventually to production. Let the test plan tell the story. From the test plan we can not only see how the current functionality works and matches our expectations, but there should also be a certain amount of regression testing going on to make sure features that worked in the past are still working. We get all that information from this one single report, the test plan.

Test Plan Results

The Test Impact Report

As we test the various builds throughout the current iteration, as new requirements are completed and bugs fixed, the testers run those test cases to verify that the work truly is completed. If you have been using Microsoft Test Manager (MTM), this is a .NET application, and you have turned on the test impact instrumentation through the test settings, you get the added benefit of the Test Impact report. In MTM, as you update the build that you are testing, it compares it to the previous build and to what has been tested before. When it detects that code has changed near code that you previously tested, and probably passed, it includes those test cases in the Test Impact report as tests you might want to rerun, just to make sure that the changes do not affect your passed tests.

Test Impact Results

The end result is that we have a test plan that tells the story of the quality of the code written in this iteration and specifically identifies the build we might want to push into production.

Living on a Vegan Diet

In all the blog posts I have written over the years I have never talked about health or a healthy lifestyle. This will be a first, and as a technology person you might be wondering what living a vegan lifestyle has to do with software. After all, the blog title is “Donald on Software”.

For years I would go through these decade birthdays and just remark how turning thirty was just like turning twenty, except I had all the extra knowledge called life. Going from thirty to forty, same thing, but things took a turn when I moved into my fifties. Doctors noticed that my blood pressure was a bit elevated. I took longer to recover from physical activities. I felt aches I never noticed before, and I had promised my wife that I would live a long, long time, which wasn't feeling all that convincing. I didn't have the same get-up-and-go that I had known before.

A Bit About My Family

My wife and stepdaughter have been vegetarian/vegan for many years. I was open to other types of food, like plant-based meals, and would eat them on occasion when we were at a vegan restaurant or when that was what was being cooked at home. However, I travel a lot, so most of my food came from restaurants where I could eat anything I wanted. This went on for several years, and I was taking a mild blood pressure pill every day. It was keeping my blood pressure under control, but there were other things it appeared to be affecting in a negative way.

The Turning Point for Me

During Thanksgiving weekend in November 2014, Mary (my wife) and I watched a documentary on Netflix called “Forks over Knives”, and at the end of it I vowed never to eat meat again and to start moving towards a vegan lifestyle.
The documentary is about two doctors, one from the medical field and one from the science side, and their adventure in unravelling the truth about how the food we eat is related to our health. One of the biggest studies ever done is called “The China Study”, a 20-year study that examines the relationship between the consumption of animal products (including dairy) and chronic illnesses such as coronary heart disease, diabetes, breast cancer, prostate cancer and bowel cancer.

Not only would these numbers come down but, with the toxic animal products out of our system, our bodies would start to repair some of the damage that we have always been told could never be repaired naturally.

Getting over the big Lie

Yes, there is a very large lie that we have all believed to be the truth, because we assumed it came from the medical field and was sanctioned by the government: the daily nutritional guide. This is the guide that told us to eat large amounts of meat and dairy products to give us energy and strong bones, but it did not come from any medical study; it came from the agriculture and dairy industries to sell more products.

Our bodies reject most of the animal protein we take in; only a very small amount is actually used. Common sense tells me that if my body is rejecting all this animal-based protein, it is working extra hard, and something is going to break down in the form of disease and other difficulties, especially as we get older. Oh wait, they now make a pill for that, so we can continue to live the way we always have. So now we are not only supporting an industry that never had that big a market before, we are also spending billions of dollars every year with pharmaceutical companies to correct the mistakes we make with the things we eat. One thing I did learn in physics is that every action creates an equal and opposite reaction, so this is not solving anything either; it just keeps making things worse, and now health care costs are through the roof for bodies that normally know how to heal themselves.

Now for the Good News

I know I have you all depressed and disappointed, as I just dissed your favorite food and called it bad and toxic, but there is a happy ending here. I felt like you do right now for about five minutes and then decided to say “no to meat”. If you get a chance, I would encourage you to look up that documentary, “Forks over Knives”, as one other thing that disturbed me was the way these animals were harvested while it was called ethical or within the approved guidelines. These animals were under stress, and that stress goes into the meat; you wonder why everyone seems so stressed, and I know there is a relationship here.

Anyway, the good news is my latest checkup with my doctor. I am currently on no medication whatsoever, and my blood pressure numbers are very normal and very impressive for a guy my age. I did a stress test and was able to reach my ideal heart rate easily and effortlessly, and I feel great. If I had any plaque buildup, it is certainly repairing itself. I still can't seem to lose the 15 pounds I have been working on for the last couple of years, but I know I will accomplish that soon enough. I am done with meat and all animal proteins, as in milk, eggs and honey, and I am going to live a long, long time and feel great. Won't you join me?

Migrate from TFVC to Git in TFS with Full History

Over the last year or so I have been experimenting with and learning about git. The more I learned about this distributed version control system, the more I liked it, and finally, about six months ago, I moved all my existing code into git repositories. They are still hosted on TFS, which is the best ALM tool on the market by a very, very, very long mile. Did I mention how much I love TFS and where this product is going? Anyway, back to my git road map, because this road is not as simple as it sounds: many of the concepts are very different, and at first I even thought them a bit weird. After getting my head around the concepts and the true power of this tool, there was no turning back. Just to be clear, I am not saying that the old centralized version control known as TFVC is dead; by no means. There are some things that I will continue to use it for and probably always will, like my PowerPoint slides and much of my training material.

Starting with Git

One thing about git is that there is just an enormous amount of support for it, and its availability in practically every coding IDE on every platform is remarkable. What really made the migration simple for me was an open source project on CodePlex called Git-TF. In fact, the way I originally used this tool was that I made a separate TFS project with a git repository. I would work in that new repository, with some CI builds to make sure things kept working, and when I finished a feature I would push it back to TFVC as a single changeset. Because I always link my commits to a work item in the TFVC project, this had a side effect I was not expecting: if you opened the work item you would see the commits listed in the links section. Clicking on a commit link would open the code in compare mode against the previous commit so you could see what changes were made. Of course, this only works if you are looking at work items from web access.

Git-TF also has some other uses, and one of those is the ability to take a folder from TFVC and convert it into a git repository with full history. That is what I am going to cover in this post. There are some rules around this that I would like to lay down as best practices, because you don't want to just take a whole TFVC repository and turn it into one big git repository; that simply is not going to work. One of the things to get your head around with git is that repositories need to be small, and should stay small: remember that you are not getting latest when you clone a repository, you are getting the whole thing, including all the history.

Install Git-TF

One of the easiest ways to install Git-TF on a windows machine is via Chocolatey since it will automatically wire up the PATH for you.

choco install git-tf -y

If you don't have Chocolatey, or you just don't want to use this package management tool, you can follow the manual instructions on CodePlex: https://gittf.codeplex.com/

Clean up your Branches

If you have been a client of mine, or have ever heard me talk about TFS, you will certainly have heard me recommend one collection and one TFS project. You will also have heard me talk about minimizing the use of branches to only when you need them. If you have branches going all over the place, and code that has never found its way back to main, you are going to want to clean this up, as we are only going to clone main for one of these solutions into a git repository. One of the things that is very different about git-enhanced TFS is that a single TFS project can contain many git repositories. In fact, starting with TFS 2015 Update 1 you can have centralized version control (TFVC) and multiple git repositories in the same TFS project, which totally eliminates the need to create a new TFS project just to hold the git repositories. We could move the code with full history into a git repo in the same project we are pulling from.

In these examples we are pulling into the git repository at the solution level, as that is what most people using Visual Studio have been doing for decades. The git-ideal view, however, would be to go even smaller, to a single project per repository, and stitch the dependencies on all the other projects together through package management with tools like NuGet. That is out of scope for this post, but I will delve into it in a future one.

Clone

Now that we have a nice clean branch from which to create your git repository, it is time to run the clone command from the git-tf tool. From the command line, make a nice clean directory and change into it, as this is where the clone will appear. Note: if you don't use the --deep switch you will just get the latest tip and not the full history.

mkdir C:\git\MySolutionName
cd c:\git\MySolutionName
git-tf clone https://myaccount.visualstudio.com/DefaultCollection $/MyBigProject/MyMainBranch --deep

You will then be prompted for your credentials (alternate credentials if using visualstudio.com). Once accepted, the download will begin; it could take some time depending on the length of your changeset history and the size of your repository.

Prep and Cleanup

Now that you have an exact replica of your team project branch as a local git repository, it's time to clean up some files and add some others to make things a bit more git friendly.

  • Remove the TFS source control bindings from the solution. You could do this from within Visual Studio, but it's just as easy to do it manually. Simply remove all the *.vssscc files and make a small edit to your .sln file, removing the GlobalSection(TeamFoundationVersionControl) ... EndGlobalSection block in your favorite text editor.
  • Add a .gitignore file. It's likely your Visual Studio project or solution will have some files you won't want in your repository (packages, obj, etc.) once your solution is built. A near-complete way to start is by copying everything from the standard VisualStudio.gitignore file into your own repository (a minimal excerpt is sketched below). This will ensure the build-generated files, packages, and even your ReSharper cache folder will not be committed into your new repo. As you can imagine, if all you used to sling your code was Visual Studio, that would be that. However, with so much of our work now moving to more hybrid models, where we might use several different tools for different parts of the application, trying to manage this .gitignore file by hand can get pretty complicated. Recently I came across an online tool at https://www.gitignore.io/ where you pick the OS, IDEs or programming languages and it will generate the .gitignore file for you.
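A minimal excerpt of what such a .gitignore might contain for a Visual Studio solution (the real VisualStudio.gitignore is much longer):

bin/
obj/
packages/
.vs/
_ReSharper*/
*.user
*.suo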

Commit and Push

Now that we have a local git repository, it is time to commit the files, add the remote (back to TFS), and push the new branch (master) back to TFS so the rest of the team can clone it and continue to contribute to the source, which will have the full history of every check-in done before we converted it to git. From the root, add and commit any new files, as there may have been some changes from the previous Prep and Cleanup step.

git add .
git commit -a -m "initial commit after conversion"

We need a git repository on TFS to push this local repository to. So, from TFS, in the project where you want this new repository:

Create a new Repository
  1. Click on the Code tab
  2. Click on the repository dropdown
  3. Click on the New Repository big “+” sign.
Name your Repository
  1. Make sure the type is Git
  2. Give it a Name
  3. Click on the Create button.
Useful Git Information

The result page gives you all the information that you need to finish off your migration process.

  1. This command adds the remote address to your local repository so that it knows where to push it.
  2. This command pushes your local repository to the new remote one (both commands are sketched below).
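The exact URL comes from that results page, so treat this as a sketch with a placeholder remote:

git remote add origin https://myaccount.visualstudio.com/DefaultCollection/_git/MyNewRepo
git push -u origin --all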

That’s it! Project published with all history intact.

A New Start on an Old Blog

It has been quite a while since my last post, so today I thought I would bring you up to speed on what I have been doing with this site. The last time I did a post like this was back in June of 2008. Back then I talked about the transition I made from CityDesk to Microsoft Content Management Server, which eventually was merged into SharePoint, and from there we changed the blog over to DotNetNuke.

Since that time we have not created any new content, but we did move the material to BlogEngine.NET, which really is a great tool, just not the way I wanted to work. I really do not want a content management system for my blog; I don't want pages that are rendered dynamically with the content pulled from a database. What I really want are static pages, with the content for those pages stored and built the same way that I build all my software: in version control.

Before I move on and tell you more about my new blog workflow, I thought I would share a picture from my backyard. That tree on the other side of the fence is usually green; it does not change colors every fall, but this year the weather has been cooler than usual. So yes, we sometimes do get fall colors in California, and here is the proof.

Hexo

Hexo is a static page generator that takes simple markup and turns it into static HTML pages. This means I can deploy the result anywhere from a build, generated just like a regular ALM build, because all the pieces are in source control. It fully embraces git and is a GitHub open source project. I thought that moving my blog to Hexo would help me in two ways: besides giving me the output I am really looking for, it also works as a teaching tool for how the new build system that is part of TFS 2015 fully embraces technologies outside of .NET and the Visual Studio family. From here, I check my new posts into source control, and that triggers a build which puts the output into a drop folder, which is then deployed to my website hosted on Azure.

As of this post I am using FTP from a PowerShell script to deploy the website, which is not ideal. I am working on creating an MSDeploy package that can be deployed directly onto the Azure website that hosts this blog.

The Work Flow

The process begins when I want to start a new post. Because my git repositories are available to me from almost any computer I work with, I go to the local workspace of my blog git repository, check out the dev branch, and at the command line enter the following command:

hexo new Post "A New Start on an Old Blog"

This places a new .md file in the _posts folder, with the same name as the title but the spaces replaced by hyphens ("-"). After that I like to open the folder at the root of my blog workspace in Visual Studio Code. The thing I like about using Visual Studio Code as my editor is that it understands simple markdown and gives me a pretty good preview as I am working; if my screen is wide enough I can even have one half of the screen for typing the raw markdown and the other half for seeing what it looks like.

The other thing I like about this editor is that it understands and talks git, which means I can edit my files and save them, and Visual Studio Code will tell me that I have uncommitted changes so I can stage them, commit them to my local repository, and push them to my remote git repository. You may have noticed above that before I began this process I checked out the dev branch, which means I do not write my new posts in the master branch. The reason is that I have a continuous integration trigger on the build server watching for anything checked into master on the remote git repository. Because I might start a post on one machine and finish it on another, I need some way to keep everything in sync, and that is what I use the dev branch for. Once I am happy with the post, I merge the changes from dev into master and that begins the build process.
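That merge is nothing fancy; from the blog workspace it is roughly:

git checkout master
git merge dev
git push origin master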

Publishing the Post

Once I am happy with my post, all I need to do is merge the dev branch into master, and that starts the build process, which is really just another Hexo command run against my source. It generates all the static pages, JavaScript, images and so on, and puts them into a public folder.

hexo generate

It is the content of this folder that then becomes my drop artifacts. Because Release Manager also has a CI trigger, after the build has been successful it begins a release pipeline to get this drop onto my website. My goal is to get this wrapped up into an MSDeploy package that can be deployed directly onto my Azure website; I am still working on that and will provide a more detailed post on what I needed to do to make it happen. In the meantime, I need to make sure that my test virtual machine is up and running in Azure, as one of the first things this release pipeline does is copy the contents of the drop onto that machine. Then it calls a CodedUI test, which really is not testing anything: it runs my PowerShell script that FTPs the pages to my Azure website. It needs to do this as a user, and the easiest way, without me doing it manually, is to have the CodedUI test run it to completion.
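I have not shown my actual FTP script, but a minimal sketch of the idea, with made-up endpoint, credentials and paths, looks something like this:

$ftpRoot   = "ftp://waws-prod-xx.ftp.azurewebsites.windows.net/site/wwwroot"  # hypothetical Azure FTP endpoint
$user      = "mysite\deploy"                                                  # hypothetical deployment credentials
$password  = "SECRET"
$localRoot = "C:\drop\public"

$client = New-Object System.Net.WebClient
$client.Credentials = New-Object System.Net.NetworkCredential($user, $password)

Get-ChildItem $localRoot -Recurse -File | ForEach-Object {
    # Build the relative path and upload; assumes the target folders already exist on the server.
    $relative = $_.FullName.Substring($localRoot.Length).Replace('\', '/')
    $client.UploadFile("$ftpRoot$relative", $_.FullName)
}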

Summary

So there you have it: my blog is in source control, so I have no dependency on a database, and all the code to generate the website, along with my content pages, is in source control, which makes it really easy if I ever need to move to a different site or location, or rebuild after a really bad crash. As an ALM guy I really like this approach; what would make it even better is a pre-production staging site where I could go over everything and give a last and final approval before it goes live to the public.

Database Unit Testing from the Beginning

The concept of unit testing for a database, and really this means a database project, still seems like a wild idea. Of course, I am still surprised how many development shops use their production database as their source of truth, which it shouldn't be, but that's because they do not have their database in source control. To take you down the road of exploring the benefits of being able to run unit tests on your database, I first need to get us all caught up on how to create a database project, as this is where the magic happens.

Creating a Database Project

You need Visual Studio 2010 Premium or higher to create a database project. One of the options available to us is to reverse engineer an existing database, and that is what we are going to do in these first steps. I have installed the sample database called AdventureWorks, which is available as a free download from the CodePlex site.

Create a Project

From Visual Studio you will want to create a new project and select the SQL Server 2008 Wizard, which can be found under the SQL Server node in the Database category. Give it a name (I called mine AdventureWorks) and a location on your hard drive where you want the project to live.

A wizard will pop up and take you through a number of pages; just accept the defaults until you get to the Import Database Schema page, as importing the AdventureWorks database is something we do want to do.

New Project Wizard

Make sure you check Import existing schema, and then you will likely want to click on the New Connection button; unless you have made a previous connection to the database, the connection string won't be found in the dropdown.

Connection Properties

If you have connected to databases in the past, this dialog box should be very familiar to you. Basically, we need to say where the SQL Server is; in this case it is on my local machine and is the default instance. Another common server name is localhost\SQLExpress, as that is the named instance that SQL Express creates when it is installed. After you complete the server instance, the dropdown of database names will be populated, and from there you should be able to find the AdventureWorks database. I also like to click on the Test Connection button just to confirm that there aren't any connectivity issues. Click OK and we are ready to move on.

Click Next and continue through the wizard pages, accepting the defaults. On the last page click Finish. This is where the Visual Studio wizard really does its work, as it creates the project and does a complete reverse engineering of the database. The end result is a Visual Studio SQL database project that represents the database in code, suitable for checking into source control, capable of deploying changes made to the project, able to compare changes between versions, and much, much more.

Let's get to Unit Testing

When the database project has focus (as in, I have physically clicked on it), you will see a number of toolbar buttons appear. We want to click on the one called Schema View.

Solution Explorer

This brings up another little window, in the same area of Visual Studio as Solution Explorer and Team Explorer, called Schema View.

Schema View

From this view, expand Schemas, then HumanResources, then Programmability, then Stored Procedures, and finally right click on uspUpdateEmployeePersonalInfo and choose Create Unit Tests…

If you don't already have a test project, the next step lets you create a skeleton unit test for this stored procedure along with a new test project in the language of your choice.

Create Unit Tests

You will find that when this window opens you can choose more than just the one stored procedure we picked in the previous step, but yours is the only one that is checked. If you wanted more than one stored procedure in the same class file, you could pick them here as well. Then set the project name, or select an existing test project, and give it a decent class name; I named mine HumanResourceUnitTests.cs. After you click OK it builds all the pieces, the test project and a default UnitTest.cs file that we don't need, and everything starts to look like a typical unit test until the following dialog pops up.

Project DatabaseUnitTests Configuration

Now, in order to run unit tests against the database, you need a connection to the database. In the first part of this dialog you should be able to select the same connection that you used to create the database project. You will notice that the dialog also offers an optional secondary data connection to validate unit tests. In this sample we will not need it, but in a real-world application you may, so let me explain that scenario. When an application is built with a database connection, typically that connection string has just enough rights to run the stored procedures and nothing else. In those cases you will want to use that connection string when running the stored procedure you are testing, but that same connection string would not have the rights to inspect the database to see whether the results are valid, especially in a scenario where you want to check that the right values got inserted or deleted. That is where the secondary data connection comes in: it is a connection with higher rights that can look at those values directly in the tables.

After you have clicked the OK button Visual Studio will display a skeleton of a unit test to test this stored procedure.

Testing Stored Procedure

In theory we have a unit test that we could run, but the result would be inconclusive because, although the stored procedure does get executed, we are really just exercising it rather than testing it; we are not giving it specific values to write and then checking whether those values come back.

We are going to replace the generated unit test body with the following code snippet. I have it all in one piece here so you can easily grab it, and after the listing I will break the code down so you can see what is going on. It is very similar to the skeleton that was provided for us, but we give it some distinct values.

-- Prepare the test data (expected results)
DECLARE @EmployeeId int

SELECT TOP 1 @EmployeeId = EmployeeId
FROM HumanResources.Employee

-- Wrap it into a transaction to return us into a clean state after the test
BEGIN TRANSACTION

-- Call the target code
EXEC HumanResources.uspUpdateEmployeePersonalInfo
@EmployeeId, '987654321', '3/2/1987', 'S', 'F'

-- Select the results to verify
SELECT NationalIdNumber, BirthDate, MaritalStatus, Gender
FROM HumanResources.Employee
WHERE EmployeeId = @EmployeeId

ROLLBACK TRANSACTION

The first part of this code captures the EmployeeId that we want to update, which is what the first DECLARE statement is for. In the next statement we just want to grab an existing EmployeeId from the Employee table, and because we really don't care which one it is and only want one, we use the TOP 1 clause. At this point our declared variable @EmployeeId holds that value.

Note: I have found that there could be a breaking change here depending on which version of the AdventureWorks database you have, as some versions name this column EmployeeId and others name it BusinessEntityID. To find out which one you have, go back to the Schema View of the project and expand Schemas, HumanResources and Tables. Find the Employee table and expand Columns; the column in question is the first one listed.

Schema View
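
If you prefer a query window over the Schema View, an optional way to check which name your copy of the table uses is to ask the INFORMATION_SCHEMA views directly:

-- Returns whichever of the two column names your version of the table actually has
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'HumanResources'
  AND TABLE_NAME = 'Employee'
  AND COLUMN_NAME IN ('EmployeeId', 'BusinessEntityID');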

Because the stored procedure will make changes to the data in the table, and we do not actually want to commit those changes (we only want to test them), we wrap the next pieces in a transaction, and after we have collected our validation values we can roll it back.

Inside the transaction we call the update stored procedure and pass in some specific data. Next we run a select statement to read those values back for the EmployeeId we just passed in. Finally we roll the whole transaction back so that no changes are actually made to the database, which means we can run this test over and over.
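
For reference, the stored procedure under test looks roughly like the sketch below. This is simplified from memory (the real procedure also includes error handling, and newer versions of AdventureWorks use @BusinessEntityID instead of @EmployeeID), but it shows why the test passes an id, a national id number, a birth date, a marital status and a gender:

-- Simplified sketch of the procedure under test; check your own copy for the exact shape.
CREATE PROCEDURE HumanResources.uspUpdateEmployeePersonalInfo
    @EmployeeID        int,
    @NationalIDNumber  nvarchar(15),
    @BirthDate         datetime,
    @MaritalStatus     nchar(1),
    @Gender            nchar(1)
AS
BEGIN
    UPDATE HumanResources.Employee
    SET NationalIDNumber = @NationalIDNumber,
        BirthDate        = @BirthDate,
        MaritalStatus    = @MaritalStatus,
        Gender           = @Gender
    WHERE EmployeeID = @EmployeeID;
END;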

Before we can actually run this test we need to make some changes to the Test Conditions portion of the unit test. First you will want to remove the existing entry that is shown there by clicking on the Delete Test button.

Test Conditions: Data Checksum

After you have removed the existing Test Condition we can add one or more new ones to verify the results. Select Scalar Value from the dropdown control and click on the "+" button.

Test Conditions: Scalar Value

Right click on the scalarValueCondition1 line that this action creates and choose Properties, which will display the Properties window. Update the following information:

  • Name: VerifyNationalId
  • Column number: 1
  • Enabled: True
  • Expected value: 987654321
  • Null expected: False
  • ResultSet: 1
  • Row number: 1
Properties

What is really happening here is that we are looking at the first column of the result set and checking that it matches the NationalId that we sent to the stored procedure; NationalId is the first column returned by the select statement.

We are now ready to run the unit test and see it pass. Typically in a unit test you can right click anywhere in the test method and one of the context menu choices is to run the test. However, what we have been working on so far is the design surface of the database unit test, which is why we were able to write SQL statements for our tests. To see or get to the actual code page you need to go back to the HumanResourceUnitTests.cs file in Solution Explorer, right click on it and choose View Code.

Solution Explorer / View Code

As an alternative you could select the file in the solution and press the F7 key; either way you will then be looking at the actual test, and if you right click anywhere within that method you will see that one of your choices is Run Tests. Do that now and you should see the test result go from Pending to, hopefully, Passed. If you do get a failure with an exception you will want to check the column names in this table. Some of the names changed and even the way they are spelled, and it appears to be case sensitive as well. As I mentioned before, there seems to be more than one version of this sample database out there, and they did make some changes.

Test Results

Now that we have a working test, I always like to prove that it is really working by making it fail. To make it fail, change the Expected value to 9876543210000; I basically just added four zeros to the end of the expected result. Re-run the test and it should fail, and if we look at the Test Result Details we can see that the expected result did not match, which is exactly what we expected.

Take out the padded zeros and run the test once more so that we get a passing test again. This is just a step to keep our tests correct.

Associate the Unit Test to a Test Case

The following section requires TFS 2010 in order to complete this part of the walkthrough, and it is even better if you have Lab Management set up to complete the development, build, deploy, test cycle on these database unit tests.

Right now, the unit test that we created can be run from Visual Studio just like we have done in this walkthrough. You can also make it part of an automated build: if this test project is included in a solution that Team Foundation Server (TFS) builds automatically, the test will run and be part of the build report. However, that alone would not update the Test Plan / Test Suite / Test Case that the QA people are using to manage their tests, although it can.

In Visual Studio, create a new Work Item of type Test Case and call it "uspUpdateEmployeePersonalInfo Stored Procedure Test". We won't fill anything in the Steps section as we are going straight to automation with this Test Case. Click on the Associated Automation tab and click on the ellipsis "…" button.

Choose Test

This will bring up the Choose Test dialog box and because we have just this one test open in Visual Studio we will see the exact test that we want associated with this test case. Click on the OK button.

We now have a test case that can be used to test the stored procedure in automation. When this test case is run in automation it will update the test results and will be reported to the Test Plan and Suite that this test case is a part of.

Database Schema Compare where Visual Studio goes that extra mile

There are a number of good database tools out there for doing database schema comparisons. I have used different ones over the years, initially to help me write SQL differencing scripts that I could use when deploying database changes. If your background is anything like mine, where you were mainly a Visual Basic or C# developer and could get by with SQL as long as you could work directly against the database, there were challenges in being able to script everything out using just SQL. Today that is not nearly the issue it once was; I can do quite a bit with scripting and could build those scripts by hand, but why would I?

WHAT… Visual Studio for database development?

Over the years I have tried to promote doing SQL development in Visual Studio. I made a great case: SQL is code just as much as VB, C#, F#, or whatever your favorite language happens to be, and it should be protected in source control. That makes sense, but it is a really hard sell. Productivity goes downhill and errors begin to happen, because this is not how SQL teams are used to working on databases. It was an easier sell for me because I loved working in Visual Studio and found the SQL tools less intuitive. I have never been able to figure out how to step through a stored procedure in Query Analyzer or Management Studio, but I have always been able to do this with stored procedures I wrote from within Visual Studio, and that was long before the data editions of Visual Studio.

Ever since the release of the Data Dude, or its official name back then, Visual Studio Team Edition for Database Professionals, this is what I did, and I tried to convince others that this is what we should be doing. It was never an easy sell: yes, the schema comparison was nice, but our SQL professionals already had all kinds of comparison tools for SQL and it would be too hard for them to work this way. They wanted to be able to make changes in a database and see the results of those changes, not have to deploy them somewhere first.

So, as a quick summary of what we have figured out so far: schema comparison from one database to another is nothing new; your SQL department probably has a number of these tools and uses them to generate their change scripts. How is the Visual Studio schema comparison better than what I already have, and how is it going to go the extra mile? That, my friend, starts with the database project, which does a reverse engineering of sorts of what you have in the database and scripts the whole thing out into source files that you can check into source control and compare for changes just like any other source code.

Once you have a database project you are able to do a schema comparison not just between two databases but also between a database and this project. The extra mile is that I can even go so far as to deploy the differences to your test and production databases. It gets even better, but before I tell you the best part let's go through the actual steps you would take to create this initial database project.
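
To make that concrete, here is a hypothetical example (the column is made up, it is not part of this walkthrough) of the kind of incremental change script a schema comparison can generate when the project defines something the target database does not yet have:

-- Hypothetical change script produced by a schema comparison when the project
-- contains a PreferredName column that the target database is missing.
ALTER TABLE [HumanResources].[Employee]
    ADD [PreferredName] nvarchar(50) NULL;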

Create the Database Project

I am going to walk you through the very simple steps that it takes to build a database project for the AdventureWorks database. For this you will need Visual Studio 2010 Premium edition or higher.

We start by creating a new project and select the "SQL Server 2008 Database Project" template from under the Database - SQL Server project types. Give it a name and set the location. I called mine AdventureWorks because I am going to work with the sample AdventureWorks database. Click OK.
Create a Project
Visual Studio will build a default database project for you, but it is not connected to anything, so there is no actual database scripted out yet. We are going to do that now. Right click on the database project and a context sensitive menu will pop up with Import Database Objects and Settings…; click on that now.
Import Objects
This opens the Import Database Wizard dialog box. If you have already connected to this database from Visual Studio then you will find an entry in the dropdown control Source database connection. If not then you will create a new connection by clicking on the New Connection… button.
Import Wizard
So if you have a ready made connection in the dropdown, choose it and skip the next screen and step as I am going to build my new connection.
New Connection
Because my AdventureWorks database is on my local machine I went with that, but this database could be anywhere on your network; this will all just work provided you have the necessary permissions to connect to it in this way. Clicking OK takes us back to the previous screen with the Source database connection filled in.

Now everyone can click Start, which will bring up the following screen and start to import and script out the database. When it is all done, click the Finish button. Congratulations, you have built a Database Project.
Import Wizard Finishing
You can expand the solution under Schema Objects, Schemas; here I am showing the dbo schema, which has 3 table scripts. All the objects of this database are scripted out here, and you can look at these files right here in Visual Studio.
Solution Explorer
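
For a sense of what lives in those scripted files, here is roughly what one of the dbo table scripts looks like; this is simplified for illustration (the real file also carries defaults, constraints and extended properties):

-- Simplified example of a scripted table file in the dbo schema.
CREATE TABLE [dbo].[ErrorLog]
(
    [ErrorLogID]   int IDENTITY (1, 1) NOT NULL,
    [ErrorTime]    datetime            NOT NULL,
    [UserName]     sysname             NOT NULL,
    [ErrorNumber]  int                 NOT NULL,
    [ErrorMessage] nvarchar(4000)      NOT NULL
);
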
However you might want to use the Schema View tool for looking at the objects which gives you a more Management Studio type of view.
Toolbar
Just click on the icon in the Solution Explorer that has the popup caption that says Database Schema Viewer.
Schema View

Updating the Visual Studio Project from the database

In the past these were the steps I would show and demonstrate for getting a database project scripted out, and now that the database is code it is really easy to get it into version control because of the tight integration with Visual Studio. My thinking after that was that this is the tool you should be working in to evolve the database: work in Visual Studio and deploy the changes to the database.

Light Bulb Moment

Just recently I discovered that the SQL developer does not really need to leave their favorite tool for working on the database, Management Studio. That's right, the new workflow is to continue to make your changes in your local or isolated databases so that you can see first hand how the database changes are going to work. When you are ready to get those changes into version control, you use Visual Studio and the database schema comparison.
Switch Control
So here we see what I always thought was the normal workflow, with the project on the left and the database that we are going to deploy to on the right. If instead we are working on the database and we want to push those changes to the project, we simply switch the source and target around.
Options
Now when you click the OK button you will get a schema comparison just like you always did, but when it is deployed it will check out the project and update the source files. This gives you complete history, and the files move through the system from branch to branch with a perfect snapshot of what the database looked like for a specific build.
Options

  1. Click this button to get the party started.
  2. This comment will disappear in the project source file.
  3. The source will be checked out during the update.

The Recap of what we have just seen

This totally changes my opinion on how to go forward with this great tool. The fact that we can update the project source from the database was probably always there, but if I missed that this was possible then I am sure many others might have missed it as well. It makes SQL development smooth and safe (all schema scripts under version control) and ready for the next step: smooth and automated deployment.

The Two Opposite IT Agendas

The Problem

I have been in the Information Technology (IT) field for a long time, and most of that time has been spent in the development space. Each environment was different from the previous one, and in some cases there were huge gaps in the level of technology that was used and made available at each location. For a long time I was stumped as to why this was. You would go to a user group meeting, and whenever the speaker talked about a current technology and conducted a quick survey around the room of how many were using it, the results would be very mixed. There would even be lots of people at these meetings still using technologies that were over 10 years old, with no sign of moving forward.

Why is this happening?

Good question, and after thinking about this for a long, long time I think I have the answer. It really depends on which aspect of the IT spectrum is controlling the development side. I think it has become quite acceptable to break up the whole IT space into two major groups: the IT Professionals and the Software Developers. When I first moved to California I worked for a company that did software development for its clients on a time and materials basis. There was no question as to which wing of IT influenced the company with regard to technology and hardware. The developers in this case were golden: if you needed special tools, you got them. Need a faster computer, more monitors, special machines to run beta versions of the latest OS and development tools? You got it. You were on the bleeding edge, and the IT Professionals were there to help you slow down the bleeding when it got out of control. However, this company was always current, got the best jobs, and in a lot of cases, when we deployed our final product to a client's production systems, that would be the point at which their IT department would be forced to update their systems and move to the new round of technology.

Welcome to the Other Side

What happens when the shoe is on the other foot and the IT Professionals have the influence? They have a different agenda, as their number one goal is stability, security, and easy deployment. However, this does come with a cost, especially when the company relies heavily on technology to push its products. I have heard this from many different companies in this same situation: they are not a technology company, the technology is just a tool, or one of the tools, to deliver their products. When this group controls the budget and the overall technical agenda, moving forward will be very, very slow, and the focus will be purely on deployment issues and keeping those costs under control, not on the cost of development, which can get very expensive as the technology changes and you are not able to take advantage of those opportunities. Over time, the customers that receive your products will start to see your future as not being able to move fast enough for them, because they expect you to already be out there having fully tested these waters before they move there, and if you are not, it is not going to look favorable in their eyes. This is especially true if you have competition in your space that is adopting new technologies faster than your company is.

There is another side to this that I have witnessed which bothers me even more. The decision to move all enterprise applications to the web never came from the development side of IT; it came from the IT Professionals. Remember, one of their big agendas is easy deployment, and as a result they have made software development so expensive that we have been forced to move as much of it as we can to offshore development companies. In most cases this hasn't even come close to a cost savings, as you never seem to get what you thought you were designing, and it is not always the fault of the offshore companies; they are giving you exactly what you asked for. In many cases it is simply the wrong technology for the solution. Most high volume enterprise applications were desktop applications with a lot of state (the data that you are working with). The web is stateless, and over the years many things have been developed to make the web appear stateful, but it is not. I have seen projects spend 100 times more time and money implementing features on the web just to make them appear and feel like a desktop application. Now, to be clear, this argument started when deployment of desktop applications was hard; in the early days there was no easy way to keep all the desktops up to date except to physically go around and update them as patches and newer versions became available. However, in the last 5 years or more that has totally changed: with things like ClickOnce technology you can implement fully automatic updates and license enforcement just as easily as with web sites, and maybe even better. We all know there are new tricks every day to spoof a web site with some new security exploit.

What’s the Answer

I don't really have the answer, but I do have a few thoughts that I have been mulling over, and I would love to hear about other ideas that you might have. My thought is that you should separate the IT push down two paths, and this advice is for the companies that are currently being held back by the stabilizing IT Professionals. I would even go so far as to keep the developers on a separate network from the rest of the employees; this keeps the bleeding on the other side of the fence and does not affect your sales and support staff, who are there to sell and support products that are stable and need to stay that way. This will allow the developers to improve and expand their technical expertise and provide better and more relevant solutions for your customers, internal and external.