Donald On Software

Just my thoughts, my interests, my opinions!

Who left the Developers in the Design Room

This post is about something that has been bugging me for quite a while. I have kept quiet about it, raising it only in random conversations with different people, and it is finally time to say my piece. Yes, this is soapbox time, so I am just going to unload here. If you don't like this kind of post, I promise to be more joyful and uplifting next month, but this month I am going to lay it out there, and it just might sound a bit harsh.

Developers are bad Designers

I come from the development community, with over 25 years spent on the craft, and I originally got there because I was tired of bad workflows and interfaces built by people who thought they understood how accounting should work but did not. I implemented a system that changed my workload from 12-hour days plus some weekends to getting everything done in 10 normal days. Needless to say, I worked my way out of a job, but that was okay because it led me to opportunities that really allowed me to be creative. You would think that with a track record like that I should be able to design very usable software and be a developer, right?

It turns out that being a developer has given me developer characteristics, and chief among them is that we are a bit geeky. As a geeky person, you tend to like having massive control and clicking lots of buttons, but that might not be the best experience for a user who is just trying to get their job done. I once made the mistake of asking my wife, who was the Product Owner of a little product we were building, what the message should be when the user confirms that they want to save a student. Her response threw me off guard for a moment: why do I need a Save button? I made the change, so just save it; don't have a button at all.

Where’s the Beef

Okay, so far all I have established is that I am not always the best designer, which is why I have gatekeepers like my wife who remind me every so often that I am not thinking about the customer. However, I have noticed that many businesses have been revamping their websites with what looks like a focus on mobile. I get that, but the end result is that it is harder for me to figure out how to use their sites, and some things I was able to do before are just not possible anymore. You can tell right away that the changes were not based on how a customer might interact with the site; I don't think the customer was even considered.

One rule I always try to follow, and this is especially true for an eCommerce site, is that you need to make it easy for the customer if you want them to buy. Some of the experiences I have had lately almost leave me convinced that these companies don't want to sell their products or do business with me. For some of them I have sought out different vendors because the frustration level is just too high.

Who Tests this Stuff?

That leads right into my second peeve: no one seems to test this stuff. Sure, the developer probably tested their work for proper functionality, and there might even have been a product owner who, after talking to the developer, understood the steps to take and proved to him or herself that the feature was working properly. That is not testing, my friend; both of these groups test applications the very same way, and it's called the happy path. No one is thinking about all the ways a customer may expect to interact with the new site, especially when you have gone from an older design to a new one. No one thought of that, and now your sales numbers are falling because no one knows how to buy from you.

Testers have a special gene in their DNA that gives them the ability to think about all the ways a user may interact with the application, and they will even attempt to do evil things with it. You want these kinds of people on your side; it is better to find a problem while the application is still under development than to have a customer find it, or worse yet, to get hacked, which could really cost you financially as well as in trust.

In my previous post, "Let the Test Plan Tell the Story," I laid out the purpose of the test plan. It is the report we can always go back to in order to see what was tested, how much of it was tested, and so on. I feel that the rush to get a new design out the door is hurting the future of many of these companies, because they are taking shortcuts: not designing these sites with the customer in mind and eliminating much of the much-needed testing. At least that is how it seems to me; it is my opinion.

Let the Test Plan Tell the Story

This post is the result of some discussions I have had lately while trying to determine the workflow for a client. The topic has come up with others in the past, but what I had never used as an argument before was the role of the test plan in all this. Besides being an eye-opener and an aha moment for the client and myself, I thought I would explore this idea a little more, as others might also find it helpful in understanding and getting better control of their flows.

What is this flow?

There is a flow to the way software is developed and tested, no matter how you manage your projects. Things typically start from some sort of requirement work item that describes the business problem, what the client wants to do, and the benefit the client would receive if it were implemented. Yes, I just described the basics of a user story, which is where we should all be by now when it comes to software development. The developers, testers, and whoever else might be contributing to making this work item a reality then break the requirement down into the tasks they will work on to make it happen.

The developers get to work writing the code and completing their tasks, while the testers start writing the test cases they will use to prove that the new requirement either is working as planned or simply is not. These test cases all go into a test plan that represents the current release you are working on. As the developers complete their coding, the testers start testing, and any test cases that are not passing go back to the developers for rework. How this is managed depends on how the teams are structured. Typically, in a scrum team where developers and testers are on the same team, this would be a conversation, and the developer might just add more tasks because this is work that got missed. In situations where the flow between developers and testers is still a separate hand-off, a holdout from the waterfall days, a bug might be filed that goes back to the developers, and you follow that through to completion.

As the work items move from the business to the developers they become Active. When the developers are code complete, the work items should become Resolved, and as the testers confirm that the code is working properly they become Closed. Any time a work item is not really resolved (developer wishful thinking), the state moves back to Active. In TFS (Team Foundation Server) there is an out-of-the-box report called Reactivations which tracks the work items that moved from Resolved or Closed back to Active. This is the first sign that there are serious communication problems going on between development and test.

With all the Requirements and Bugs Closed How will I know what to test?

This is where I find many teams start to get a little weird and overcomplicate their workflows. I have seen far too many clients take the approach of adding extra states that say where the bug is by including the environment it is being tested in. For instance, they might have states like Ready for System Testing or Ready for UAT. Initially this might sound sensible and like the right thing to do. However, I am here to tell you that it is not beneficial: it loses the purpose of the states, and this workflow will drown you in the amount of work it takes to manage. Let me tell you why.

Think of the state as a measure of how far along that requirement or bug is. It starts off as New or Proposed, depending on your process template, and from there we approve it by changing the state to Active or Approved. Teams that use Active in their workflow don't start working on an item until it is moved into the current iteration. The process that uses Approved also moves the item into the current iteration to start work, but then changes the state to Committed when work begins. At code completion the Active ones go to Resolved, where the testers begin their testing and, if satisfied, close the work item. In the Committed group, the team works very closely with the testers, who have been testing all along, so when the test cases are passing the work item moves to Done. The work on these work items is now complete, so what happens next is that we take the build that represents all the completed work and move it through the release pipeline. Are you with me so far?

This is where I typically hear confusion, as the next question is usually something like this: if all the requirement and bug work items have been closed, how do we know what to test? The test plan, of course; it should be the report that tells you what state these builds are in. It should be from this one report, the results of the test plan, that we base our approvals for the build to move on to the next environment and eventually to production. Let the Test Plan Tell the Story. From the test plan we can not only see how the current functionality is working and whether it matches our expectations, but there should also be a certain amount of regression testing going on to make sure features that worked in the past are still working. We get all of that information from this one single report: the test plan.

Test Plan Results

The Test Impact Report

As we test the various builds throughout the current iteration, with new requirements being completed and bugs fixed, the testers run those test cases to verify that the work truly is complete. If you are using Microsoft Test Manager (MTM), this is a .NET application, and you have turned on test impact instrumentation through the test settings, you get the added benefit of the Test Impact Report. In MTM, as you update the build you are testing, it does a comparison against the previous build and what has been tested before. When it detects that code has changed near code you previously tested, and that probably passed, it includes those test cases in the test impact report as tests you might want to rerun, just to make sure the changes do not affect your passing tests.

Test Impact Results

The end result is a test plan that tells the story of the quality of the code written in this iteration and specifically identifies the build we might want to consider pushing into production.

Living on a Vegan Diet

In all the blog posts I have written over the years I have never talked about health or a healthy lifestyle. This will be a first, and you as a technology person might be wondering what living a vegan lifestyle has to do with software. After all, the blog title is "Donald on Software".

For years I would go through these decade birthdays and just remark how turning thirty was just like turning twenty, except I had all the extra knowledge called life. Going from thirty to forty, same thing, but things took a turn when I moved into my fifties. Doctors noticed that my blood pressure was a bit elevated. I took longer to recover from physical activities. I felt aches I had never noticed before, and the promise I made my wife that I would live a long, long time wasn't feeling all that convincing. I didn't have the same get-up-and-go that I had known before.

A Bit About My Family

My wife and stepdaughter have been vegetarian/vegan for many years. I was open to other types of food, like plant-based meals, and would eat them on occasion when we were at a vegan restaurant or when that was what was being cooked at home. However, I travel a lot, so most of my food came from restaurants where I could eat anything I wanted. This went on for several years while I was taking a mild blood pressure pill every day. It was keeping my blood pressure under control, but there were other things it appeared to be affecting in a negative way.

The Turning Point for Me

During Thanksgiving weekend in November 2014, Mary (my wife) and I watched a documentary on Netflix called "Forks over Knives", and at the end of it I vowed never to eat meat again and to start moving towards a vegan lifestyle.
The documentary is about two doctors, one from the medical field and one from the science side of things, and their journey to unravel the truth about how the food we eat is related to health. One of the biggest studies ever done is called "The China Study," a 20-year study that examines the relationship between the consumption of animal products (including dairy) and chronic illnesses such as coronary heart disease, diabetes, breast cancer, prostate cancer, and bowel cancer.

Not only would these numbers come down, but with the toxic animal products out of our system, our bodies would start to repair some of the damage we have always been told could never be repaired naturally.

Getting over the big Lie

Yes, there is a very large lie that we have all believed to be the truth because we assumed it came from the medical field and was sanctioned by the government: the daily nutritional guide. This is the guide that told us to eat large amounts of meat and dairy products to give us energy and strong bones, but it did not come from any medical study; it came from the agriculture and dairy industries to sell more products.

Our bodies reject most of the animal protein we take in; only a very small amount is actually used. Common sense tells me that if my body is rejecting all this animal-based protein, it is working extra hard, and something is going to break down in the form of disease and other difficulties, especially as we get older. Oh wait, they now make a pill for that so we can continue to live the way we always have. So now we are not only supporting an industry that never had that big a market before, we are also spending billions of dollars every year on pharmaceutical companies to correct the mistakes we make with the things we eat. One thing I did learn in physics is that every action creates an equal and opposite reaction, so this is not solving anything either; it just keeps making things worse, and now health care costs are through the roof for bodies that normally know how to heal themselves.

Now for the Good News

I know I have you all depressed and disappointed, as I just dissed your favorite food and called it bad and toxic, but there is a happy ending here. I felt like you do right now for about five minutes and then decided to say "no to meat." If you get a chance, I encourage you to look up that documentary, "Forks over Knives." One other thing that disturbed me was the way these animals were being harvested while it was called ethical, or within the approved guidelines. These animals were under stress, and that stress goes into the meat; you wonder why everyone seems so stressed. I know there is a relationship here.

Anyway, the good news is my latest checkup with my doctor. I am currently on no medication whatsoever, and my blood pressure numbers are very normal and very impressive for a guy my age. I did a stress test and was able to reach my ideal heart rate easily and effortlessly, and I feel great. If I had any plaque buildup, it is certainly repairing itself. I still can't seem to lose the 15 pounds I have been working on for the last couple of years, but I know I will accomplish that soon enough. I am done with meat and all animal products, including milk, eggs, and honey, and I am going to live a long, long time and feel great. Won't you join me?

Migrate from TFVC to Git in TFS with Full History

Over the last year or so I have been experimenting with and learning about git. The more I learned about this distributed version control system, the more I liked it, and finally, about six months ago, I moved all my existing code into git repositories. They are still hosted on TFS, which is the best ALM tool on the market by a very, very, very long mile. Did I mention how much I love TFS and where this product is going? Anyway, back to my git road map. This road is not as simple as it sounds, because many of the concepts are so different, and at first I even thought them a bit weird. After getting my head around the concepts and the true power of this tool, there was no turning back. Just to be clear, I am not saying that the old centralized version control known as TFVC is dead; by no means. There are some things I will continue to use it for and probably always will, like my PowerPoint slides and much of my training material.

Starting with Git

One thing about git is that there is an enormous amount of support for it, and its availability in practically every coding IDE on every platform is just remarkable. What really made the migration simple for me was an open source project on CodePlex called Git-TF. The way I originally used this tool was to create a separate TFS project with a git repository. I would work in that new repository, with some CI builds to make sure things kept working, and when I finished a feature I would push it back to TFVC as a single changeset. However, because I always link my commits to a work item in the TFVC project, it had a side effect I was not expecting: if you opened the work item, you would see the commits listed in the links section. Clicking on a commit link would open the code in compare mode against the previous commit so you could see what changes were made. Of course, this only works if you are looking at work items from web access.

Git-TF also has some other uses, and one of those is the ability to take a folder from TFVC and convert it into a git repository with full history. That is what I am going to cover in this post. There are some rules to this that I would like to lay down here as best practices, because you don't want to just take a whole TFVC repository and turn it into one big git repository; that is simply not going to work. One of the things to get your head around with git is that repositories need to be small and should stay small. Remember that you are not getting latest when you clone a repository; you are getting the whole thing, which includes all the history.

Install Git-TF

One of the easiest ways to install Git-TF on a Windows machine is via Chocolatey, since it will automatically wire up the PATH for you.

choco install git-tf -y

If you don't have Chocolatey, or you just don't want to use this package management tool, you can follow the manual instructions on CodePlex: https://gittf.codeplex.com/

Clean up your Branches

If you have been a client of mine, or have ever heard me talk about TFS, you will certainly have heard me recommend one collection and one TFS project. You will also have heard me talk about minimizing the use of branches to only when you need them. If you have branches going all over the place and code that has never found its way back to main, you are going to want to clean this up, because we are only going to clone main for one of these solutions into a git repository. One of the things that is very different about the git-enhanced TFS is that a single TFS project can contain many git repositories. In fact, starting with TFS 2015 Update 1, you can have centralized version control (TFVC) and multiple git repositories in the same TFS project, which totally eliminates the need to create a new TFS project just to hold the git repositories. We can move the code with full history into a git repo in the same project we are pulling from.

In our examples we are pulling into the git repository at the solution level, because that is how most people using Visual Studio have worked for decades. However, the ideal git view would be to go even smaller, to a single project per repository, and stitch the dependencies to all the other projects together through package management with tools like NuGet. That is out of scope for this post, but I will delve into it in a future one.

Clone

Now that we have a nice clean branch from which to create your git repository, it is time to run the clone command from the git-tf tool. From the command line, make a nice clean directory and change into it, as this is where the clone will appear. Note: if you don't use the --deep switch you will just get the latest tip and not the full history.

mkdir C:\git\MySolutionName
cd c:\git\MySolutionName
git-tf clone https://myaccount.visualstudio.com/DefaultCollection $/MyBigProject/MyMainBranch --deep

You will then be prompted for your credentials (Alt credentials if using visualstudio.com). Once accepted, the download will begin and could take some time depending on the length of your changeset history or size of your repository.

Prep and Cleanup

Now that you have an exact replica of your team project branch as a local git repository, it's time to clean up some files and add some others to make things a bit more git friendly.

  • Remove the TFS source control bindings from the solution. You could do this from within Visual Studio, but it's just as easy to do it manually. Simply remove all the *.vssscc files and make a small edit to your .sln file in your favorite text editor, removing the GlobalSection(TeamFoundationVersionControl) ... EndGlobalSection block.
  • Add a .gitignore file. It's likely your Visual Studio project or solution will have some files you won't want in your repository (packages, obj, etc.) once your solution is built. A nearly complete way to start is by copying everything from the standard VisualStudio.gitignore file into your own repository. This will ensure that the build-generated files, packages, and even your ReSharper cache folder will not be committed into your new repo. As you can imagine, if all you used was Visual Studio to sling your code, that would be that. However, with so much of our work now moving to more hybrid models where we might use several different tools for different parts of the application, trying to manage this .gitignore file can get pretty complicated. Recently I came across an online tool at https://www.gitignore.io/ where you pick the OS, IDEs, or programming language and it will generate the .gitignore file for you. A minimal example follows this list.
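
To give you an idea of what ends up in such a file, here is a minimal sketch of a Visual Studio oriented .gitignore; the real VisualStudio.gitignore template is far more complete, so treat this only as an illustration:

# build output
[Bb]in/
[Oo]bj/
# NuGet packages restored at build time
packages/
# per-user settings and ReSharper cache
*.suo
*.user
_ReSharper*/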

Commit and Push

Now that we have a local git repository, it is time to commit the files, add the remote (back on TFS), and push the new branch (master) back to TFS so the rest of the team can clone it and continue to contribute to the source, which will have the full history of every check-in done before we converted it to git. From the root, add and commit any new files, as there may have been some changes from the previous Prep and Cleanup step.
git add .
git commit -a -m "initial commit after conversion"

We need a git repository on TFS to push this local repository to. So, from TFS, in the project where you want this new repository:

Create a new Repository
  1. Click on the Code tab
  2. Click on the repository dropdown
  3. Click on the big "+" sign to create a new repository.
Name your Repository
  1. Make sure the type is Git
  2. Give it a Name
  3. Click on the Create button.
Useful Git Information

The result page gives you all the information that you need to finish off your migration process.

  1. The first command adds the remote address to your local repository so that it knows where to push.
  2. The second command pushes your local repository, with its full history, to the new remote one. A sketch of both commands follows.
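
The result page shows the exact commands for your repository, but as a rough sketch they look like the following; the URL below is a made-up example following the pattern used earlier in this post, so use the one TFS gives you:

git remote add origin https://myaccount.visualstudio.com/DefaultCollection/MyBigProject/_git/MySolutionName
git push -u origin master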

That’s it! Project published with all history intact.

A New Start on an Old Blog

It has been quite a while since I posted my last blog entry, so today I thought I would bring you up to speed on what I have been doing with this site. The last time I did a post like this was back in June of 2008. Back then I talked about the transition I made from City Desk to Microsoft Content Management System, which eventually was merged into SharePoint, and from there we moved the blog to DotNetNuke.

Since that time we have not created any new content but have moved the existing material to BlogEngine.NET, which really is a great tool, but not the way I wanted to work. I do not want a content management system for my blog; I don't want pages that are rendered dynamically with the content pulled from a database. What I really wanted were static pages, with the content for those pages stored and built the same way I build all my software: stored in version control.

Before I move on and tell you more about my new blog workflow, I thought I would share a picture from my backyard. That tree on the other side of the fence is usually green and does not change colors every fall, but this year the weather has been cooler than usual. So yes, we sometimes do get fall colors in California, and here is the proof.

Hexo

Hexo is a static site generator that takes simple markdown and turns it into static HTML pages. This means I can deploy it anywhere from a build, generated just like a regular ALM build, because all the pieces are in source control. It fully embraces git and is an open source project on GitHub. I thought that moving my blog to Hexo would help me in two ways: besides giving me the output I am really looking for, it also serves as a teaching tool on how the new build system that is part of TFS 2015 fully embraces technologies outside of .NET and the Visual Studio family. From here I check my new posts into source control, which triggers a build that puts the generated site into a drop folder, which is then deployed to my web site hosted on Azure.

As of this post I am using FTP from a PowerShell script to deploy the web site, which is not ideal. I am working on creating an MSDeploy package that can be deployed directly onto the Azure website that is hosting this blog.

The Work Flow

The process begins when I want to start a new post. Because my git repositories are available to me from almost any computer I work on, I go to the local workspace of my blog's git repository, check out the dev branch, and at the command line enter the following command:

hexo new Post "A New Start on an Old Blog"

This places a new .md file in the _posts folder with the same name as the title, but with the spaces replaced by hyphens ("-"). After that I like to open the folder at the root of my blog workspace using Visual Studio Code. The thing I like about Visual Studio Code as my editor is that it understands markdown and gives me a pretty good preview as I work; if my screen is wide enough I can even have one half of the screen for typing the raw markdown and the other half for seeing what it looks like.

The other thing I like about this editor is that it understands and talks git, which means I can edit my files and save them, and Visual Studio Code will inform me that I have uncommitted changes so I can stage them, commit them to my local repository, and push them to my remote git repository. You may have noticed above that before I began this process I checked out the dev branch. I do not write my new posts in the master branch, because I have a continuous integration trigger on the build server that is watching for anything checked into master on the remote git repository. Because I might start a post on one machine and finish it on another, I need some way to keep everything in sync, and that is what I use the dev branch for. Once I am happy with the post, I merge the changes from dev into master, and that begins the build process.
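
As a rough sketch, and assuming the branch names from my setup plus the default Hexo layout where posts live under source/_posts, a typical editing session looks something like this:

# work on the draft in the dev branch
git checkout dev
git add source/_posts
git commit -m "Draft: A New Start on an Old Blog"
git push origin dev
# when the post is ready, merge into master to kick off the CI build
git checkout master
git merge dev
git push origin master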

Publishing the Post

Once I am happy with my post, all I need to do is merge the dev branch into master, and this starts the build process, which is really just another Hexo command run against my source. It generates all the static pages, JavaScript, images, and so on, and puts them into a public folder.

hexo generate

It is the content of this folder that becomes my drop artifacts. Because Release Manager also has a CI trigger, after a successful build it begins a release pipeline to get this drop onto my web site. My goal is to get this wrapped up into an MSDeploy package that can be deployed directly onto my Azure web site. I am still working on that and will provide a more detailed post on what I needed to do to make it happen. In the meantime, I need to make sure that my test virtual machine is up and running in Azure, as one of the first things this release pipeline does is copy the contents of the drop onto that machine. Then it calls a CodedUI test, which really is not testing anything; it runs my PowerShell script that FTPs the pages to my Azure web site. It needs to do this as a logged-on user, and the easiest way to do that without doing it manually is to have the CodedUI test run it to completion.
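
For the curious, the FTP portion of that PowerShell script is roughly like the sketch below; the host name, credentials, and paths are placeholders rather than my real values, and it assumes the target folders already exist on the site:

# upload every generated file in the public folder to the Azure web site over FTP
$localRoot = "C:\git\Blog\public"
$ftpRoot   = "ftp://waws-prod-xx-000.ftp.azurewebsites.windows.net/site/wwwroot"

$client = New-Object System.Net.WebClient
$client.Credentials = New-Object System.Net.NetworkCredential("publishUser", "publishPassword")

Get-ChildItem -Path $localRoot -Recurse -File | ForEach-Object {
    # build the matching remote path from the local relative path
    $relative = $_.FullName.Substring($localRoot.Length).Replace("\", "/")
    $client.UploadFile("$ftpRoot$relative", $_.FullName)
}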

Summary

So there you have it: my blog is in source control, so I have no dependency on a database, and all the code to generate the web site as well as my content pages are in version control, which makes it really easy if I ever need to move to a different site or location, or rebuild after a really bad crash. As an ALM guy I really like this approach, and what would be even better is a pre-production staging site where I could go over the site and give it a last and final approval before it goes live to the public.

Database Unit Testing from the Beginning

The concept of unit testing for a database, and really this means a database project, still seems like a wild idea. Of course, I am still surprised how many development shops use their production database as their source of truth. It shouldn't be, but that's because they do not have their database in source control. In order to take you down the road of exploring some of the benefits available to you when you can run unit tests on your database, I need to get us all caught up on how to create a database project, as this is where the magic happens.

Creating a Database Project

You need Visual Studio 2010 Premium or higher to create a database project. One of the options available to us is to reverse engineer an existing database, and that is what we are going to do in these first steps. I have installed the sample database called AdventureWorks, which is available as a free download from the CodePlex site.

Create a Project

From Visual Studio, create a new project and select the SQL Server 2008 Wizard, which can be found under the SQL Server node under the Database category. Give it a name (I called mine AdventureWorks) and a location on your hard drive where you want the project to live.

A wizard will pop up and take you through a number of pages; just accept the defaults until you get to the Import Database Schema page, because importing the AdventureWorks database is something we do want to do.

New Project Wizard

Make sure you check Import existing schema, and then you will likely want to click on the New Connection button; unless you have made a previous connection to the database, the connection string won't be found in the dropdown.

Connection Properties

If you have connected to databases in the past, this dialog box should be very familiar to you. Basically, we need to say where the SQL Server is. In this case it is on my local machine and is the default instance. Another common server name is localhost\SQLExpress, as that is the named instance that SQL Server Express creates when it is installed. After you fill in the server instance, the dropdown of database names will be populated, and from there you should be able to find the AdventureWorks database. I also like to click on the Test Connection button just to confirm that there aren't any connectivity issues. Click OK and we are ready to move on.
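
For reference, the connection string this dialog ends up building for a setup like mine looks roughly like the following; the server and database names are just my local example:

Data Source=localhost;Initial Catalog=AdventureWorks;Integrated Security=True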

Click Next and continue through the wizard pages, accepting the defaults. On the last page, click Finish. This is where the Visual Studio wizard really does its work, as it creates the project and does a complete reverse engineering of the database. The end result is a Visual Studio SQL database project that represents the database in code, which is suitable for checking into source control, capable of deploying changes made to the project, able to compare changes between versions, and much, much more.

Let's get to Unit Testing

When you are on the database project, as in you have physically clicked on it so that it has focus, you will see a number of toolbar buttons appear. We want to click on the one called Schema View.

Solution Explorer

This brings up another little window, called Schema View, in the same area of Visual Studio as Solution Explorer and Team Explorer.

Schema View

From this view, expand Schemas, then HumanResources, then Programmability, then Stored Procedures, and finally right-click on uspUpdateEmployeePersonalInfo and choose Create Unit Tests…

The next step lets you create a skeleton unit test for this stored procedure and, if you don't already have a test project, gives you the option to create a new one in the language of your choice.

Create Unit Tests

You will find that when this window opens you can choose more than just the one stored procedure we selected in the previous step, but it is the only one that is checked. If you want to have more than one stored procedure in the same class file, you can pick them as well. Then set the project name, or select an existing test project, and give it a decent class name. I named mine HumanResourceUnitTests.cs. After you click OK it builds all the pieces: the test project and a default unit test file that we don't need, and everything starts to look like a typical unit test until the following dialog pops up.

Project DatabaseUnitTests Configuration

Now, in order to run unit tests against the database, you need a connection to the database. In the first part of this dialog you should be able to find the original connection you used to create the database project. You will notice that this dialog also has an optional entry it calls a secondary data connection to validate unit tests. In this sample we will not need it, but in a real-world application you may, so let me explain that scenario. When an application is built with a database connection, typically that application and its connection string have just enough rights to run the stored procedures and nothing else. In those cases you will want to use that connection string when running the stored procedure you are testing, but that same connection string would not have the rights to inspect the database, especially in a scenario where you want to check whether the right values got inserted or deleted. That is where the secondary data connection comes in: it is a data connection with higher rights that can look at those values directly in the tables.

After you have clicked the OK button Visual Studio will display a skeleton of a unit test to test this stored procedure.

Testing Stored Procedure

In theory we have a unit test we could run, but the results would come back inconclusive, because although the stored procedure is being run, the test is really just exercising it and not testing it, as in giving it some values to insert and checking whether those values come back.

We are going to replace the unit test body with the following code snippet. I have it all in one piece here so you can easily grab it, but afterwards I will break the code down so you can see what is going on. It is very similar to what the skeleton provided us, but we give it some distinct values.

-- Prepare the test data (expected results)
DECLARE @EmployeeId int
SELECT TOP 1 @EmployeeId = EmployeeId
FROM HumanResources.Employee
-- Wrap it into a transaction to return us into a clean state after the test
BEGIN TRANSACTION
-- Call the target code
EXEC HumanResources.uspUpdateEmployeePersonalInfo
@EmployeeId, '987654321', '3/2/1987', 'S', 'F'
-- Select the results to verify
SELECT NationalIDNumber, BirthDate, MaritalStatus, Gender
FROM HumanResources.Employee
WHERE EmployeeId = @EmployeeId
ROLLBACK TRANSACTION

The first part of this code captures the EmployeeId that we want to update; that is what the DECLARE statement is for. In the next statement we just want to grab an existing EmployeeId from the Employee table, and because we really don't care which one, but we only want one, we use the TOP 1 clause. At this point our declared variable @EmployeeId holds that value.

Note: I have found that there can be a breaking change here depending on which version of the AdventureWorks database you have, as some versions name this column EmployeeId and others name it BusinessEntityID. To find out which one you have, go back to the Schema View of the project and expand Schemas, HumanResources, and Tables. Find the Employee table and expand Columns; the column in question will be the first one right there.

Schema View

Because the stored procedure will make changes to the data in the table, and we do not want to actually commit those changes, only test them, we wrap the next pieces in a transaction, and after we have collected our validation values we can roll it back.

After starting the transaction we call the update stored procedure and pass in some specific data. Next we run a SELECT statement to get those values back from the table for the EmployeeId we passed in the previous steps. Finally, we roll the whole transaction back so that we do not actually change the database and can run this test over and over.

Before we can actually run this test we need to make some changes to the Test Conditions portion of the unit test. First you will want to remove the existing entry that is shown there by clicking on the Delete Test button.

Test Conditions: Data Checksum

After you have removed the existing test condition, we can add one or more new ones to verify the results. Select Scalar Value from the dropdown control and click on the "+" button.

Test Conditions: Scalar Value

Right-click the scalarValueCondition1 line that this action creates and choose Properties, which will display the Properties window. Update the following information:

  • Name: VerifyNationalId
  • Column number: 1
  • Enabled: True
  • Expected value: 987654321
  • Null expected: False
  • ResultSet: 1
  • Row number: 1
Properties

What is really happening here is that we are going to look at that first column and see if it matches the NationalId that we sent to the stored procedure. NationalId is the first column that is returned in the select statement.

We are now ready to run the unit test, see that it works, and pass the test. Typically, in a unit test, you could right-click anywhere in the test method and one of the context choices would be Run Tests. However, what we have been working on so far has been the design surface of the database unit test, which is why we were able to write SQL statements for our tests. To see or get to the actual code page, go back to the HumanResourceUnitTests.cs file, right-click on it, and choose View Code.

Solution Explorer / View Code

As an alternative, you could select the file in the solution and press the F7 key. Either way, you will then be looking at the actual test, and if you right-click anywhere within that method you will see that one of your choices is Run Tests. Do that now and you will see the test result go from Pending to, hopefully, Passed. If you do get a failure with an exception, you will want to check the column names in this table. Some of the names have changed, and even the way they are spelled; it appears to be case sensitive as well. As I mentioned before, there seems to be more than one version of this sample database out there, and they did make some changes.

Test Results

Now that we have a working test, I always like to prove it is working by making it fail. To make it fail, change the expected value to 9876543210000; I basically just added four zeros to the end of the expected result. Re-run the test and it should fail, and if we look at the Test Result Details we can see that the expected results did not match, which is exactly what we expected.

Take out the padded zeros and run the test again so that we get a passing test once more. This is just a step to keep our tests correct.

Associate the Unit Test to a Test Case

The following section needs TFS 2010 in order to complete this part of the walkthrough, and it is even better if you have Lab Management set up to complete the development, build, deploy, test cycle for these database unit tests.

Right now, the unit test we created can be run from Visual Studio, just as we have done in this walkthrough. You can also make these tests part of an automated build: if the test project is included in the solution for an automated build in Team Foundation Server (TFS), it will automatically run and be part of the build report. However, that alone would not update the Test Plan / Test Suite / Test Case that the QA people are using to manage their tests, but it can.

In Visual Studio, create a new work item of type Test Case and call it "uspUpdateEmployeePersonalInfo Stored Procedure Test". We won't fill in anything in the Steps section, as we are going straight to automation with this test case. Click on the Associated Automation tab and then click on the ellipsis "…" button.

Choose Test

This will bring up the Choose Test dialog box and because we have just this one test open in Visual Studio we will see the exact test that we want associated with this test case. Click on the OK button.

We now have a test case that can be used to test the stored procedure in automation. When this test case is run in automation it will update the test results and will be reported to the Test Plan and Suite that this test case is a part of.

Database Schema Compare where Visual Studio goes that extra mile

There are a number of good database tools out there for doing database schema comparisons. I have used different ones over the years, initially to help me write SQL differencing scripts that I could use when deploying database changes. Maybe your background is like mine: mainly a Visual Basic or C# developer who could get by working with SQL as long as I could write directly against the database. There were challenges in being able to script everything out using just SQL. Today that is not nearly the issue for me; I can do quite a bit with scripting and could build those scripts by hand, but why would I?

WHAT… Visual Studio for database development?

Over the years I have tried to promote doing SQL development in Visual Studio. I made a great case: SQL is code just as much as my VB, C#, F#, or whatever your favorite language happens to be, and should be protected in source control. It makes sense, but it is a really hard sell. Productivity goes downhill and errors begin to happen, because this is not how SQL teams are used to working on databases. It was an easier sell for me because I loved working in Visual Studio and did not find the SQL tools as intuitive. I have never been able to figure out how to step through a stored procedure in Query Analyzer or Management Studio, but I have always been able to do this with stored procedures I wrote from within Visual Studio, and that was long before the data editions of Visual Studio.

Ever since the release of the Data Dude, or by its official name back then, Visual Studio Team Edition for Database Professionals, this is what I did, and I tried to convince others that this is what we should be doing. It was never an easy sell: yes, the schema comparison was nice, but our SQL professionals already had all kinds of comparison tools for SQL, and it would be too hard for them to work this way. They wanted to be able to make changes in a database and see the results of those changes, not have to deploy them somewhere first.

So, as a quick summary of what we have covered so far: schema comparison from one database to another is nothing new; your SQL department probably has a number of these tools and uses them to generate their change scripts. How is the Visual Studio schema comparison better than what I already have, and how is it going to go the extra mile? That, my friend, starts with the database project, which does a reverse engineering of sorts of what you have in the database and scripts the whole thing out into source files that you can check into source control and compare changes in, just like you do with any other source code.

Once you have a database project, you can not only do a schema comparison between two databases, you can also compare a database to this project. The extra mile is that you can even go so far as to deploy the differences to your test and production databases. It gets even better, but before I tell you the best part, let's go through the actual steps you would take to create this initial database project.

Create the Database Project

I am going to walk you through the very simple steps that it takes to build a database project for the AdventureWorks database. For this you will need Visual Studio 2010 Premium edition or higher.

We start by creating a new project and selecting the "SQL Server 2008 Database Project" template from under the Database - SQL Server project types. Give it a name and set the location; I called mine AdventureWorks because I am going to work with the sample AdventureWorks database. Click OK.
Create a Project
Visual Studio will build a default database project for you, but it is not connected to anything, so there is no actual database scripted out here. We are going to do that now. Right-click on the database project and a context menu will pop up with Import Database Objects and Settings…; click on that now.
Import Objects
This opens the Import Database Wizard dialog box. If you have already connected to this database from Visual Studio, you will find an entry in the Source database connection dropdown. If not, create a new connection by clicking on the New Connection… button.
Import Wizard
If you have a ready-made connection in the dropdown, choose it and skip the next screen and step, as I am going to build a new connection.
New Connection
Because my AdventureWorks database is on my local machine, I went with that, but it could be a database anywhere on your network; this will all just work provided you have the necessary permissions to connect to it in this way. Clicking OK takes us back to the previous screen with the Source database connection filled in.

Now everyone, click Start, which will bring up the following screen and start to import and script out the database. When it is all done, click the Finish button. Congratulations, you have built a database project.
Import Wizard Finishing
You can expand the solution under Schema Objects, then Schemas; here I am showing the dbo schema, which has three table scripts. All the objects of this database are scripted out here. You can look at these files right here in Visual Studio.
Solution Explorer
However you might want to use the Schema View tool for looking at the objects which gives you a more Management Studio type of view.
Toolbar
Just click on the icon in the Solution Explorer that has the popup caption that says Database Schema Viewer.
Schema View

Updating the Visual Studio Project from the database

In the past, these were the steps I would show and demonstrate for getting a database project scripted out, and now that it is code, it is really easy to get into version control because of the tight integration with Visual Studio. My thinking after that was that this is the tool you should be working in to evolve the database: work in Visual Studio and deploy the changes to the database.

Light Bulb Moment

Just recently I discovered that the SQL developer does not really need to leave their favorite tool for working on the database, Management Studio. That's right: the new workflow is to continue to make your changes in your local or isolated database so that you can see firsthand how the database changes are going to work. When you are ready to get those changes into version control, you use Visual Studio and the database schema comparison.
Switch Control
Here we see what I always thought was the normal workflow, with the project on the left and the database we are going to deploy to on the right. If instead we are working in the database and we want to push those changes to the project, then switch the source and target around.
Options
Now when you click the OK button you will get a schema comparison just like you always did, but when the changes are deployed it will check out the project and update the source files. This gives you complete history, and the files will move through the system from branch to branch with a perfect snapshot of what the database looked like for a specific build.
Options

  1. Click this button to get the party started.
  2. This comment will disappear in the project source file.
  3. The source will be checked out during the update.

The Recap of what we have just seen

This totally changes my opinion on how to go forward with this great tool. The ability to update the project source from the database was probably always there, but if I missed the fact that it was possible, I am sure many others have missed it as well. It makes SQL development smooth and safe (all schema scripts under version control) and ready for the next step: smooth, automated deployment.

The Two Opposite IT Agendas

The Problem

I have been in the Information Technology (IT) field for a long time, and most of that time has been spent in the development space. Each environment was different from the previous one, and in some cases there were huge gaps in the level of technology that was used and made available in each location. Why this was has stumped me for a long time. You go to a user group meeting, and whenever the speaker talked about a current technology and conducted a quick survey around the room of how many were using it, the results would be very mixed. There would even be lots of attendees at these meetings who were still using technologies that were over 10 years old, with no sign of moving forward.

Why is this happening?

Good question, and after thinking about this for a long, long time, I think I have the answer. It really depends on which side of the IT spectrum is controlling the development side. I think it has become quite acceptable to break the whole IT space into two major groups: the IT professionals and the software developers. When I first moved to California, I worked for a company that did software development for their clients on a time and materials basis. There was no question as to which wing of IT influenced the company with regard to technology and hardware. The developers in this case were golden: if you needed special tools, you got them. Need a faster computer, more monitors, special machines to run beta versions of the latest OS and development tools? You got it. You were on the bleeding edge, and the IT professionals were there to help you slow down the bleeding when it got out of control. However, this company was always current and got the best jobs, and in a lot of cases, when we deployed our final product to a client's production systems, that would be the point at which their IT department would be forced to update their systems and move to the new round of technology.

Welcome to the Other Side

What happens when the shoe is on the other foot and the influence lies with the IT professionals? They have a different agenda, as their number one goal is stability, security, and easy deployment. However, this comes with a cost, especially when the company relies heavily on technology to push its products. I have heard this from many different companies in this same situation: they are not a technology company; the technology is just one of the tools used to deliver their products. When this group controls the budget and the overall technical agenda, things like this happen. Moving forward is very, very slow, and the focus is purely on deployment issues and keeping those costs under control, not on the cost of development, which can get very expensive as technology changes and you are not able to take advantage of those opportunities. Over time, the customers who receive your products will start to question your future, seeing you as unable to move fast enough for them, because they expect you to be out there having fully tested these waters before they move there, and if you're not, it is not going to look favorable in their eyes. This is especially true if you have competition in your space that is adopting new technologies faster than your company is.

There is another side to this that I have witnessed which bothers me even more. The decision to move all enterprise applications to the web never came from the development side of IT; it came from the IT professionals. Remember, one of their big agendas is easy, easy deployment, and as a result they have made software development so expensive that we have been forced to move as much as we can to offshore development companies. In most cases this hasn't even come close to a cost savings for the applications, as you never seem to get what you thought you were designing, and it is not always the fault of the offshore companies; they are giving you exactly what you asked for. In more cases it is the wrong technology for the solution. Most high-volume enterprise applications were desktop applications with a lot of state (the data you are working with). The web is stateless, and over the years many things have been developed to make the web appear stateful, but it is not. I have seen projects spend 100 times more time and money implementing a feature on the web to make it appear and feel like a desktop application. Now, to be clear, this argument started when deployment of desktop applications was hard: in the early days there was no easy way to keep all the desktops up to date except to physically go around and update them as patches and newer versions became available. However, in the last five years or more that has totally changed; with things like ClickOnce technology you can implement full automatic updates and license enforcement just as easily as with web sites, and maybe even better. We all know there are new tricks every day to spoof a web site and get around its security.

What’s the Answer

I don't really have the answer, but I do have a few thoughts I have been mulling over, and I would love to hear other ideas you might have. My thought is that you should separate the IT push down two paths, and this advice is for the companies currently being held back by the stabilizing IT professionals. I would even go so far as to keep the developers on a separate network from the rest of the employees; this will keep the bleeding on the other side of the fence and not affect your sales and support staff, who are there to sell and support products that are stable and need to stay that way. This will allow the developers to improve and expand their technical expertise and provide better and more relevant solutions for your customers, internal and external.

Goal Tracking

Since about the beginning of the year I have been thinking about goal tracking. I compiled a long list of technologies that I wanted to learn, experiment with, and maybe even use to build some projects with these newly learned skills. There is nothing quite like turning something new into something useful. I find that this technique gives me the best understanding of how and why a technology would be used in one scenario over another. My goals for this year are a long list, and some depend on a previous goal being completed before I even begin, like reading the book before I start my project based on the technology.

However, I suffer from the illness of getting bored, needing a break from a certain goal, and then forgetting to get back to it at the appropriate time. It's like I need something to help me track what my goals are, with an easy-to-see, KPI-like indicator showing which goals I need to pay attention to right now or I might miss my target date altogether. Before I go much further I should define KPI:

KPIs are Key Performance Indicators, which help organizations achieve their goals through the definition and measurement of progress. The key indicators are agreed upon by an organization and are measurable indicators that reflect success factors. The KPIs selected must reflect the organization’s goals, they must be key to its success, and they must be measurable. Key performance indicators are usually long-term considerations for an organization.
This is what I need for my goals: some way to track my progress. I went to work on it, and storing the goals was easy. Give the goal a name, a target date for completing it, and some exit criteria. Okay, I had to think a little bit about that last one, but I needed something that would tell me when the goal was completed. So I started with an easy one, reading a book. I know I have completed that goal when my current page equals the total number of pages in the book. Sorry, I just jumped straight into the kind of logic a computer program could use to determine whether the goal is complete. So, in the case of my book-reading goals, I can track which page I am on each day and how long I spent reading. That last piece helps in figuring out how fast I am reading the book, checked against how much time I have set aside to work on my goals.
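To make that a little more concrete, here is a minimal sketch in Python of how a stored goal and its daily log might look. The names ReadingGoal, ProgressEntry, and the fields are invented for this illustration; they are not the actual code from my tracker.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ProgressEntry:
    day: date            # when the reading session happened
    current_page: int    # page reached by the end of the session
    minutes_spent: int   # time logged that day

@dataclass
class ReadingGoal:
    name: str
    target_date: date
    total_pages: int     # exit criteria: current page equals total pages
    log: list[ProgressEntry] = field(default_factory=list)

    def is_complete(self) -> bool:
        # The goal is done once the last logged page reaches the page count.
        return bool(self.log) and self.log[-1].current_page >= self.total_pages
```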

Okay, from that information I can recalculate the goal's target date by working out the rate at which I am going and projecting when I should actually reach the goal. If the new target date is earlier than I had planned, the KPI should show me a green light. If it is later than that, it should show me a yellow (warning) light when I am only slipping and still have time in my allocated time frame to meet the goal. Of course, the KPI should be a red light when there is no way I can meet the goal. That one is harder to determine: it obviously comes up once I have gone past the target date, but figuring out whether I have run out of time before that date is hard to calculate, especially when I have a lot of goals. There are things I cannot really know, like whether I will sacrifice one goal so I can put all my effort toward another. So if I am behind I will show the warning light, and if I missed the goal I will show the red light... but at least I have something I can track for my goals.
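Here is a rough sketch of how that traffic-light logic could work, building on the ReadingGoal sketch above. Again, this is just an illustration of the idea; the kpi_colour function and the exact rules are my assumptions for this example, not the code from my program.

```python
from datetime import date, timedelta

def kpi_colour(goal: ReadingGoal, today: date) -> str:
    """Project a finish date from the reading rate so far and map it to a light."""
    if goal.is_complete():
        return "green"
    if len(goal.log) < 2:
        # Not enough data to estimate a rate; fall back to the calendar alone.
        return "yellow" if today <= goal.target_date else "red"

    first, last = goal.log[0], goal.log[-1]
    pages_read = last.current_page - first.current_page
    days_elapsed = max((last.day - first.day).days, 1)
    if pages_read <= 0:
        # No measurable progress between the first and last entries.
        return "yellow" if today <= goal.target_date else "red"

    pages_per_day = pages_read / days_elapsed
    pages_left = goal.total_pages - last.current_page
    projected_finish = today + timedelta(days=round(pages_left / pages_per_day))

    if projected_finish <= goal.target_date:
        return "green"   # on pace to land on or before the planned date
    if today <= goal.target_date:
        return "yellow"  # slipping, but the target date has not passed yet
    return "red"         # the target date has already been missed
```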

There were a couple of other types of goals I thought about tracking. The projects that I build are not based on any page count, but I thought I could set a goal for the amount of time I would spend on the project by a certain target date and track it that way. This should also work quite well, and I can easily see when I am on or off track, but again the red light can only be shown once I have already missed the mark. Then, just to throw something different into the goal-tracking mix, I thought about setting up some goals for my weight. This one is really different in that there is no time-spent element at all. Instead, I record my weight on a regular basis and let the goal tracker estimate, from the rate at which I am losing or gaining weight, when I should reach my ideal weight. I think the KPIs are going to start showing me problem indicators when I am moving in the opposite direction from what I was planning. Whether this is going to work or not I am not sure; for instance, for the past week I have had no change in any direction and the goal tracker still says I will reach my ideal weight by the date I have targeted... time will tell.
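For the weight goal, the projection could look something like the sketch below. The function name projected_weight_date and the shape of the weigh-in data are made up for this example; the point is just to show the trend in the log being extrapolated, with the resulting date then compared against the target date to pick the light, the same way as for the reading goal.

```python
from datetime import date, timedelta
from typing import Optional

def projected_weight_date(weigh_ins: list[tuple[date, float]],
                          ideal_weight: float) -> Optional[date]:
    """Extrapolate the current trend to guess when the ideal weight is reached.

    weigh_ins is a list of (day, weight) pairs in date order. Returns None
    when the trend is flat or heading away from the target, which is when
    the tracker should start showing a problem indicator.
    """
    if len(weigh_ins) < 2:
        return None
    (first_day, first_w), (last_day, last_w) = weigh_ins[0], weigh_ins[-1]
    days = max((last_day - first_day).days, 1)
    rate_per_day = (last_w - first_w) / days      # negative while losing weight
    remaining = ideal_weight - last_w             # negative when weight must drop
    if rate_per_day == 0 or (remaining < 0) != (rate_per_day < 0):
        return None                               # flat, or moving the wrong way
    return last_day + timedelta(days=round(remaining / rate_per_day))
```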
KPI of a few goals
Anyway, as you can probably tell by now, I have actually started to put together a goal-tracking program. It is still rough and most certainly a beta product.

Good luck with your goals. I am finding that I am a lot more focused and staying on track better than when I wasn't tracking my goals, so I think it is working.

Who's the Boss

For most of our lives we struggle to be the boss of ourselves. Does it ever happen? Growing up, I am sure you have memories similar to mine, where at some point you were struggling to gain control of your own life. You could not wait to move out of the house and get out on your own, so that you could be the boss of you. How's that going for you? Are you the boss of you yet?

It is not long after you move out that you find a whole bunch of new people have stepped in to take over the boss position. You have to pay rent, so you answer to your landlord; he becomes a boss of sorts, and when you can't pay the rent, he fires you by way of eviction. Then, in order to make some money to pay the rent, you have to find a job, and that usually comes with a boss and might even come with a complete entourage of bosses. You know what I mean: there is your manager, the assistant manager, then there is the shift manager, and none of them are shy about giving you orders and commands. Come to think of it, maybe living at home wasn't so bad after all.

Self Employed

Then one day you wake up with this fantastic idea. If you start your own company, you could become your own boss. Then you would truly have reached your goal of being the boss of yourself. Then, as the company grows, you could end up being the boss of lots of other people. Yes, this is what you are going to do to be the boss of you. Well, it is never quite like that, because if you want to remain in business you will need to listen to your customers. You need to provide them with a service that they value and will want to pay you for. One of the very reasons a small company has a good chance of competing against a larger competitor is its ability to deliver better customer service. Wait a minute! If I have to listen to my customers and do what they want me to do, then they are my new boss? That's right, and as your business grows and you attract more and more customers, if you want to continue to be successful, the number of people you need to listen to increases as well. You could just ignore the requests of your customers, and we all know how that would affect your newly formed company. Remember the last time you were fed up with a business that was ignoring your needs? Why, you found a new place of business that was more willing to listen to you and provide the service you were looking for.

Going Public

Okay, let’s take the self-employed business a step farther. Let's say you make a real, honest effort in your new business, listen to your customers, and follow through on many of their suggestions to improve the products and services that you provide. You make improvements in your goods and services for the benefit of your customers. The company grows and grows, you are the boss of hundreds, maybe even thousands, of employees, and your customers love your products, so you decide to take the business to the next level and go public. You know, trade shares of your company on the stock market. This was, of course, an effort to reach more customers, expand into other geographical areas, broaden your horizons, and get your products and services into the hands of new, deserving customers. This changes things. All of a sudden you are hearing from a new group of people who want your attention, and they keep talking about steady growth, making more profit, and driving the share price up. These are your investors, and they sound like a new set of bosses to me. They don't seem to share the passion you had for pleasing your customers; in fact, they don't seem to care about the customers at all, other than to make them pay more money, do anything that shows growth, and make the stock price go up. This can be a problem: if you grow too fast and the profits are a little slow coming in, you are going to be under pressure to increase profits somewhere and cut expenses in other areas. Both of those decisions could greatly affect the fine customer service you have been able to provide in the past.

Politics

Let us talk about one more area in this topic of bosses, and that is politics. I think that sometimes politicians forget that their positions are a role reversal of sorts. Politicians work for the people; I think the correct term is servant of the people. Yes, even the highest-ranked position in the country, that of the president, is really a servant of the people, and we expect them to serve the needs of the citizens and make decisions that are for the good of the people, not for themselves and the many friends they have made to get to this fine position of servanthood.

Conclusion

I think that having a boss and having to answer to someone is a fact of life. You can even get to be the president of the United States, only to answer to the people, who are your bosses. So, in conclusion, be the best boss that you can be to the people who look to you for leadership, and treat those in a boss position over you with respect. If they do not deserve your respect, then maybe it is time to leave and find a new and better boss. There are a number of good ones out there; I know, I have worked for a few of them myself.