Donald On Software

Just my thoughts, my interests, my opinions!

When Should we Move the Work Items to DONE?

This is a very common question that I get asked by different software development teams as I make my rounds helping clients with their ALM practices. There is a common pattern associated with this question, and I know it is in play when I see a lot of columns on their Kanban boards or, worse yet, a lot of states being tracked on the work items.

Fewer States (keep it down to no more than 4 or 5)

When I see more than the 4 or 5 states that come out of the box in a TFS process template, it tells me that the team is trying to micromanage the work items. They are adding more work to their plates than they need to. It gets hard to manage the work when the tracking goes beyond doing the work, because then the questions become: who is responsible for moving the work item to Done, and when is it Done?

The goal behind the work items, and here I am specifically referring to the Product Backlog Items (the requirement type) and the Bugs, is to track the work needed to complete what is described. This conflicts with the pattern of thinking I spoke of earlier, where the thought is that we need to track the work item through all the environments as we test and deploy. I am telling you that you do not. Initially, during the development cycle, we work closely with the testing team, and as soon as we have something ready to test they can test it right away, because we have a proper CI/CD pipeline in place and can approve completed work so that they can have a go at it and confirm that the new functionality or fix works as expected.

If the functionality is correct and the initial tests are passing, then we can push the code to the parent branch (which could be master or develop, depending on the process you are following). That starts the code review, and a new round of testing can begin, because the push should trigger yet another CI/CD pipeline; this time we are testing against the rest of the code as well and making sure that everything in the build works nicely together.

The Wrong Assumptions

An incorrect assumption comes up when some of those very same test cases that passed during the first round of functional testing now fail: that these are the same bugs. Or are they? In the first round of testing you were in an almost isolated environment alongside the development team, but now we are working from a merged branch such as master or develop. There is a good chance the failures are related, but it could also be some other piece of code acting badly that we just happened to catch using the test case we used to verify the new bit of functionality.
Not making that assumption, and instead creating a new bug, gives us a cleaner slate from which to analyze the incorrect behaviour. Remember that test cases live on until they are no longer useful for the purposes of testing the application. Bugs, PBIs and Stories do not; they end once the work has been completed. They can come back, as there are times when we might have missed something, but do not assume that is what happened.

When Does the State for the Work Item switch to DONE?

The simple answer to that question is: when the work is done. The work is done when the coding and testing have been completed, but the testing we are talking about here is functional testing, done from the active branch that was created for the development of this work. We have developers and testers working side by side, and in a CI/CD environment this is a very natural flow. Work gets checked into source control, the build kicks off and deploys to the development environment (not your laptop), where the developer can give it a quick smoke test. From there they can approve the build to move on into a QA environment. If the testing in QA is successful, this could be a good place to initiate a Pull Request.

The Pull Request does a couple of things: it provides a great opportunity to force a code review, to squash the multiple commits into one nice clean commit, and to automatically close the work item (set it to DONE).
That Pull Request will then start another build, which deploys to Dev and then QA (this time for functional and regression testing), as this could be a potential candidate for production.
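For illustration only, the squash that the Pull Request performs is roughly what you would get doing it by hand on the command line (the branch name here is just an example):

git checkout master
git merge --squash feature/3660_NewFunctionality
git commit -m "PBI 3660 - new functionality"
git push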

Work Item is DONE but the testing continues

In a previous post, Let the Test Plan Tell the Story, I explain how the test plan is the real tool that tells us whether the build we are testing is ready for a release into Production. It is the tool we use to verify, through test cases, that the new changes as well as the older features are working as expected. We are not testing the Stories and Bugs directly; those are DONE when the work is done.

Master Only in Production, an Improvement

Some time ago I wrote a blog post about My New 3 Rules for Releases, and one of those rules was to only release into production code that was built from the master branch. In that solution I wrote a PowerShell script that would run first thing in the deployment and only go forward if the build came from master; otherwise it would fail the deployment. This gave me a guarantee that builds that did not come from master would never get deployed into Production.

This solution worked very well and guaranteed that builds that did not come from master would never get into Production; it was my safety net. It still is, and I will probably continue to use it, but there has been an improvement in the process that makes this even cleaner. In my solution, the script was there as a safety net to make sure that one day, when I was clicking on things too fast and maybe doing more than one thing at a time, I did not cause this kind of error.

Artifact Condition

The new improvement is what is called an artifact condition, and it can be specific to each environment that you are deploying to. In this case I have selected my Production environment and said to only trigger a deployment there when the Dev deployment succeeded and the branch is master. Of course it still includes all the approval and acceptance gates, but the key point is that if those first two conditions are not met, it is not even going to trigger a deployment to Production. In the past, when code from a non-master branch was successful in Dev or QA, I would have to fail it somewhere along the way to stop the pipeline; now the pipeline just nicely ends. Much, much cleaner.

How do you set it up?

This is kind of tricky because in VSTS Microsoft has just deployed a new Release editor that seems to be missing this piece for now. Not to worry: the new Release editor is still in preview and you can easily switch back and forth. Go to Releases and click on the Edit link; if the screen looks like the following, click on the Edit (Old Editor) link to switch back to the old-style Release editor.
The New Release Editor is missing this functionality

Next, select your Production environment, click on the ellipsis button and select the Deployment conditions link.

Selecting the Deployment conditions

Finally the Configuration Screen

Now we are finally on the configuration page where all the real magic happens. I have listed 5 simple steps to follow to set up a deployment that will only trigger when the build came from the master branch and the previous deployment was successful.
Configure for master branch only

  1. First, make sure that the option is set to trigger a deployment into Production after a successful deployment to the previous environment.
  2. Next, click on the new checkbox to check it, as this applies conditions to the new deployment.
  3. Click the Add artifact condition big green plus button.
  4. Set the repository to only include the master branch as that condition.
  5. Finally, click the OK button to save all your adjustments.

Now, you won’t even be given the opportunity to promote the build into Production if it was not built from the master branch.

One Build Definition to Support Multiple Branches

Before I moved to git, I had the same situation that many of you have had when it comes to managing build definitions. I had a build definition for each branch, and for a single product that could be several, all doing the same thing. Yeah, sure, they were clones of each other and all I really needed to do was change the path to the source in each case. Then, in order to keep track of what each of these builds was for and what might have triggered it, I would develop some sort of naming convention so that I could sort of tell without having to open it up. This really felt dirty and raised a red flag for me, because once again we were introducing something into our environment that was not the same, but sort of the same. Wouldn’t it be better to have one build definition that we can use for all these various types of builds and different branches?

Builds with Git

When you really look at git, you learn that a branch is nothing more than a pointer to a commit. Compare this to any of the centralized source control systems out there, including TFVC, where a branch points to a copy of the source in a different location. With that said, I should be able to create one build definition and, with a wildcard, have a Continuous Integration (CI) build trigger when I push code, using the appropriate branch. That is absolutely true, and for the remainder of this post we will go over the simple steps to make that happen.
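As a quick aside, you can see this for yourself in any local clone; asking git what a branch is returns nothing more than a commit ID (the IDs shown here are made up):

git rev-parse master
# 3f8a2c417d9b...   the commit that master currently points to
git rev-parse feature/3660_NewFunctionality
# a91c7e02b4f6...   a different commit, same cheap kind of pointer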

Same Build Definition for All Branches

I will assume that you have a build that is working and that the source code for this build is a git repository on TFS or VSTS. Because it is a git repo, you can specify path filters to reduce the set of files that trigger a build. According to the documentation, if you don’t set path filters, the root folder of the repo is implicitly included by default. When you add an explicit path filter, the implicit include of the root folder is removed.

This is exactly what we want to do, except we want to include a couple of different branches. So let’s start by going to the build definition and clicking on the Triggers sub-menu. Make sure that the Continuous Integration switch is turned on, and next pay attention to the Branch filters. In my branching schema I use three (3) kinds of paths: master, of course, as this is where all the finished and releasable code ends up; and features for any new items I am implementing, where I usually include the work item number in the branch name as well as a short description. So an example of a feature branch for me would look something like:

feature/3660_NewFunctionality

With that said, I have a similar path for bugs, which are things that have incorrect behavior or something that needs to be fixed. In my branch filters I include 3 paths, and the feature and bug ones include the wildcard so that everything that is part of a feature or bug branch is included, as shown below.
CI Branch Filters
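In text form, the three branch filters from that screenshot amount to something like the following (the bug/ prefix is simply whatever prefix your bug branches use):

master
feature/*
bug/*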
With this in place, a commit pushed to the remote repository will kick off a new build for any new feature or bug that I have been working on. Even better, the very same build definition kicks off whenever I complete a pull request into master. Not a clone or a copy, but exactly the same build. There is never a question about what happened to the build, but rather what code change or merge we introduced that caused this problem.

Before I discovered this, I was happily flipping the branch name between my features and bugs, since the definition defaulted to master. Because of that I wasn’t even bothering with CI for the development branches, and the trick was to always remember to build from the correct branch. Now I don’t even have to think about that, because the branch that triggered the build is the branch that gets built. Just another thing that I could have easily screwed up is out of the picture. I don’t even have to think about kicking off a build and deployment, as this just happens every time I commit my code and push those commits up to the remote git repository.

Sending an Email to the Developer when the Build Failed

Over the many versions of TFS there have been workarounds that allowed us to send an email to the developer who queued a build when that build failed. Although these workarounds did work, I always felt that this should have been handled by the alert system within TFS. What was lacking was some sort of condition saying that if the build failed, the alert should go to the developer who queued it.

More recently I was tasked to find or build another workaround that would work with the vNext version of the build engine. Well, I started down this quest collecting APIs that I could call, when I thought I would have one more look at the TFS alerts; maybe there had been some updates to that part of the tool.

New Notification Engine

What do you know, there were a lot of changes made to this engine, but not for TFS 2015; these updates show up in TFS 2017 Update 1. One more reason to update to TFS 2017 for all those still using an on-premises version of TFS, as this has been in VSTS for a while now. In the remainder of this post I will walk you through the steps to implement this big improvement in notifications and how to solve the problem of sending an email only to the developer who caused the build to fail.

If you are on VSTS or TFS 2017 Update 1, the steps are exactly the same, which is nice, as in my line of work I always hate having to remember two different ways of doing the same thing.

New Name

Click or hover over the settings icon

First off, the Alerts name has been changed to Notifications, and you get to them by hovering over the gear icon and selecting Notifications.
Click on Notifications
However, there is a difference in where you select this gear. Make sure that you are in a TFS Project; if the drop-down on the left says Projects, that indicates you are a level too high, and in that case click on Projects and select one. After the page loads you should see a big blue button called “+ New”; click this button. The page changes to allow you to select “Build” under Category and “A build fails” under Template. After you have done this, click on the “Next” button.
Make your selections and click the Next button

This opens up a very different-looking screen, but the conditions to make this work are all there. First we select “Specific team members” for the Deliver to choice, and in the Roles choice select “Requested by”. This is the portion of the notification that selects only the team member who queued up the build, in other words requested it.

Although we had to select a project before we could get to this Notification area, in the next section, the Filter, we can select “Any team project”, which applies this notification to all the TFS Projects. The filter criteria should be correct and not require any changes, as this basically fires when the build has failed. Just click on the “Finish” button and the notification is ready for testing.
Complete the details and click the Finish button

What did the Notification area above the Project do?

Well, just before I let you go setting up your notifications using the proper tool, I thought I would let you know what would have happened if we had not selected a project first before going to the Notification screen. If you do this you will notice that some of the criteria we used to narrow the notification down to the developer who requested the build are not there. These notifications are the subscription-only ones that have been in TFS since the beginning of the product. This does feel a bit strange to me, almost like these two concepts should be the opposite of what they currently are. It is what it is, but at long last we can use the notification engine to better suit our needs.

When is Waterfall a Good Choice?

In my work as an ALM consultant I am often told that a team can’t go and practice agile, that they have to do waterfall. I think they are looking at this in the wrong way. One of the things to think about with waterfall versus agile is what these two methodologies are really all about. Is waterfall really all that bad? The answer to that question is: no, waterfall is actually a great methodology and a pattern that has worked for some projects. Not a lot of them in the software field, simply because in waterfall you need to know all the requirements up front and work towards completing that plan. In other words, you are working the plan, and the schedule becomes king rather than the actual priority or benefit that you can provide to the end user.

Inspection and Adaptation

In an agile setting, we know up front that we will not know everything there is to know about the solution we are designing and coding until we start. We recognize and acknowledge that as we start to develop and get early feedback, we may have to go back to work we have done and make changes. This is part of the inspection and adaptation that goes on in agile regularly. It is totally missing in waterfall. Before you start attacking me with your arrows and spears: yes, I know you can make changes in waterfall; heaven knows how many times I have heard the excuse that projects were not completed on time, or at all, because the scope kept changing. However, let’s explore that thought process for a minute. Let’s go through the steps it takes to make a change in a waterfall project.

First off, someone has to raise a change request in order to make that change. This is likely coming from the development team, because they ran into a roadblock and would not be able to complete the project the way the requirements were written. The change request would almost never come from the end user, because they won’t be able to provide any feedback until the application has moved into testing, which is always done near the end of the project. Next there has to be an impact analysis on how this change may affect the rest of the project. What I find interesting here is that we are still bound to theories. The requirements were developed on theories of how things should work in the mind of a Business Analyst, and now we have an impact analysis which is based on the same thin air: how we think it should work. One of the great things about any agile project is that it is based on living, breathing code. If something isn’t working the way we need it to, we can make changes and continue to get feedback until everyone is happy.

Big Design Up Front

With waterfall, things need to happen in a very precise set of steps. After the requirements are gathered and everyone is in agreement on what should go into the application, it goes to the architects, who come up with the design. Many of the choices made during this step are based on the specific known aspects of the requirements documentation. The problem here is that the organization may not know whether these are all the requirements, and there is a high probability that they aren’t. The Scrum community calls this out as a myth in their training material: the belief that the longer you spend studying and analyzing the problem, the more precise your solution will be.

The worst part of “Big Design Up Front” is that there might have been weeks, maybe even months, of work to create this design. Which might be okay if the project came together in exactly that way. Chances are, a change request is going to come along that breaks the design, sending the architects back to the drawing board. In any agile environment, we expect that the application and the design will probably change many times as we continue to inspect and adapt during development. The big difference is that agile does not spend a lot of time up front on the design, but continues to design and redesign as the project moves forward.

But We Need those Big Requirement Documents

Oh really? I have challenged many of my clients to prove me wrong on this. No one likes to read these documents because of the amount of boilerplate material in them. Way back when I was leading teams on new projects I would often get these 60 to 70 page documents. I would go through them with my highlighter and find about a page and a half of things that we actually needed to do; the rest was filler. I am not alone in this thinking, as I have seen lots of teams doing something similar to my highlighter exercise, except they may put the results into tools like Team Foundation Server. These teams would love to have the BAs enter the requirements directly into TFS, but they struggle with the perceived need for these big documents.

Again I ask you: who are you writing these documents for? Many hours are spent putting these documents together only for them to end up in an archive somewhere. Developers don’t want them; they want the actual requirements or stories pulled out so they can track the things they need to build. Testers tend to follow what is going on in development more closely than the big document, simply because there is so much boilerplate in these documents that they are hard to work with. Now, you might be getting upset with me again because there is important stuff in that boilerplate text. True, but I don’t think it belongs in this document, which is just a document. The problem with a document is that unless it has some way of being enforced, it is just ink on a piece of paper. Everyone who has spent any time with your company knows which of these requirements have to be in every product, so wouldn’t it make more sense to have them in a regression test that gets run at least once before we release into Production? How about having the really important ones in a series of smoke tests which must be run before the testers will even look at that build? This way those boilerplate requirements will be enforced.

Conclusion

Okay, I will admit I used the title of this blog post to attract development teams that are determined to work in a waterfall methodology, and you probably thought this post would give you some much-needed ammunition to fire back at the agile folks. Waterfall works great if everything is known about the project and you have done this same kind of thing many, many times. In those circumstances it is going to work great, as you will know exactly how long it is going to take and when the end user can have it in their hands. However, I have to say that very little of that kind of development is done in the United States or Canada. Those are the kinds of projects that can easily be done offshore for a lot less money, because they become just a labor exercise. Real development involves going where no one has gone before and doing things you were not even sure were possible. That is why you need to adhere to an agile approach: take a couple of steps forward, expect to take a step back, adjust and move forward again. Development involves trying things and retrying things until you get the results that the end users are expecting.

An Argument against the Date Based Version Number

In the past I have followed two types of version numbers for the products that I build and support on the side. Products that were customer facing all followed semantic versioning. If there was a big but non-breaking change, the minor number was incremented. If the change could potentially be breaking, the major number was incremented. The third digit, the build number, was incremented every time the code changed. We ignored the fourth number, the revision, as that was just there to keep the build ID, made up of the major, minor, build and revision numbers, unique. If I have revisions 1 through 18 all for the same build number, it means that nothing in the code has changed since revision 1; we were working on changes to the build definition itself, and these are just builds of the same code.

Projects that are not Customer facing

For other types of products, things that I used internally, I used a different format, because at the time I didn’t think it mattered and my only goal was to be able to look at the properties of an assembly and know which build it came from. For that I used a format that would change automatically for each build, so that I would never have to change any of the version numbers myself. This format followed a pattern like YY.MM.DD.RR, the RR representing the revision number that I let TFS create automatically to keep the build number unique. So for a build that was run on, say, February 23, 2017, the version number could look like:

17.02.23.01

I would use a PowerShell script to write this to the assembly just prior to compile time and this would work as my version number.

I have used this format for years, and there have been many blog posts on how to do this automatically in TFS ever since the 2010 release. Back then we were building activities to be used in the XAML builds; since then the ALM Rangers have converted this into a PowerShell script as part of the Build Extensions, and many others are available in the TFS Marketplace as build tasks. The basic idea is to have this format as part of the build number format, and most of these tools will extract the version-like number out of the build name, and that becomes the version number.

Build number format: MyBuildName_v$(Year:yy).$(Month).$(DayOfMonth)$(Rev:.rr)
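Once the build name carries that version-like number, a small script can pull it back out and stamp it into the assemblies. The following is only a trimmed-down sketch of that idea, in the spirit of the ALM Rangers’ ApplyVersionToAssemblies script rather than the exact task:

# Pull the version-like portion (e.g. 17.02.23.01) out of the build number
$buildNumber = $Env:BUILD_BUILDNUMBER
if ($buildNumber -match '\d+\.\d+\.\d+\.\d+') {
    $version = $Matches[0]

    # Stamp it into every AssemblyInfo.cs before the compile step runs
    Get-ChildItem -Path $Env:BUILD_SOURCESDIRECTORY -Recurse -Filter AssemblyInfo.cs |
        ForEach-Object {
            (Get-Content $_.FullName) `
                -replace 'AssemblyVersion\("[^"]*"\)', "AssemblyVersion(""$version"")" `
                -replace 'AssemblyFileVersion\("[^"]*"\)', "AssemblyFileVersion(""$version"")" |
                Set-Content $_.FullName
        }
}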

But I Have Changed My Mind

Since moving to more of a DevOps mindset, if you please, I began to see that I was losing some valuable information about my internal builds. I had no way of knowing when an actual code change occurred, because if I built the product on Feb 23 and then built it again on Feb 24 because I wanted to try something on the build machine, there was no way to tell from the build number or the version of the assembly whether anything had changed. This is important stuff, but I also did not want to have to manually tweak the build number every time I pushed something new into production, and looking back at my old post My New 3 Rules for Releases, the tools and solution to accomplish this were right at my fingertips.

But this is done on the Releases

Yes, they are, and guess what? I did not have a formal release pipeline for some of these internal products. Hey, some of them were just packaged up as NuGet packages with a Chocolatey wrapper. You will want to check out my post on How I Use Chocolatey in my Releases to really understand what I am talking about here.

After thinking about this for a while, and having similar discussions with clients, I came up with the idea of having at a minimum a Dev and a Prod environment. The Dev environment would do what I have pretty much always done: deploy the application and maybe even run some tests to verify that the build was successful. Sometimes I find issues here, return to the source, fix it up and send out another build.

When I am happy with the results I promote it to Production. The promotion does not do anything to any environment or machine, but it does lock the build, increment the build number and, my newest addition, create a Release work item.

Why Create a Release Work item

I will talk about this feature in more detail, with some code samples, in a future post. Briefly, the whole reason for creating a Release work item when I deploy to Production is to keep track of how many releases I have done in the last quarter. I love good metrics, and this is one that lets me know I am pushing code out into production and not just tweaking it to death. Remember, you can’t get good feedback if you don’t get it out there.
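The full details and code will come in that future post, but the rough shape of it is a small script in the Prod environment calling the work item REST API. The sketch below makes a few assumptions: the account and project names are placeholders, the release has “Allow scripts to access the OAuth token” enabled, and the exact route and api-version should be checked against the Work Items REST reference for your version of TFS/VSTS.

# Sketch only: create a "Release" work item from the release pipeline
$uri  = "https://myaccount.visualstudio.com/MyProject/_apis/wit/workitems/`$Release?api-version=4.1"
$body = '[ { "op": "add", "path": "/fields/System.Title", "value": "Released ' + $Env:BUILD_BUILDNUMBER + '" } ]'

Invoke-RestMethod -Uri $uri -Method Post -Body $body `
    -ContentType "application/json-patch+json" `
    -Headers @{ Authorization = "Bearer $Env:SYSTEM_ACCESSTOKEN" }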

In Conclusion

So there you have it: for all my products, internal or customer facing, I have much more clarity as to when a build has new code in it. I could have gone through source control, found the latest changeset from the code history and seen the first time it was used in a build, but that is a lot of work for something I can now see at a glance without having to look anywhere else.

Security Configuration for Teams

Typically, if it does not matter whether team members can view the work of other teams, or they even work across teams, which is usually the case, then having Contributor access at the TFS Project level is all that is needed and desired. However, there may be situations where you need to guard data so that other teams cannot see a team’s source or work items, and yet stay within the same TFS Project so that we can get good cross-team reporting that makes sense.

This post will take you through the steps that you will need to take in order to put that level of security in place.

Creating Teams

You create the teams and administer the security from the TFS admin page. You need to be a Project Administrator to create teams, and you would have to be a Collection Administrator to create TFS Projects. Assuming that you have the appropriate permission, we start from the normal TFS web access page and click on the gear icon at the very far right of the page.

~

Then just click on the New team button to create a new Team.

~

When creating a team it is important not to put it into any of the built-in TFS security groups. These groups are set up at the TFS Project level and their rights and permissions filter all the way down to include all the teams. The end result is that you add a member to one team and they can still see the work and source of all the other teams, because they got their permissions from the TFS Project level.

~

When you create the team, make sure that you set the permissions to (Do not add to a security group). Although the dialog does not say so, this team also gets its own TFS security group with that name. This means that anyone we add to this team (provided they did not get higher permissions by being a member of some other team that does belong to an elevated security group) will only have access to the things that we have given this team permission to.

Before we move on to set the actual security, we have to set up the security for this team from the perspective of the TFS Project. There are a few things we have to set here, otherwise the team members would not even be able to see their team. You do this by starting from the root team (the one matching the name of the TFS Project) in the admin page. While still in the page where you created the team, click on the Security tab.

~

Here you want to select your new team and then allow the permissions at the TFS Project level. You might be tempted to not set View project-level information, but that would not allow them to even see the project, let alone get to their team. Things you definitely don’t want to allow are the ability to Delete team project or to edit project-level information; that sort of thing should be reserved for the Project Administrators.

~

Area Path

The next thing that we need to tackle is the area path. Starting from TFS 2012, the area path is what represents the team. Work items in the team’s area path are what we use to keep work items visible only to the appropriate team.

~

When this security screen first pops up you can see all the security groups from the Project level. It is important to note that if you want to restrict any users, you must make sure they do not fall into any of these groups; otherwise it will leave you wondering why they are able to access things you did not give them permission to.

~

The first thing you will want to do is to add the team security group to the area.

~

Find your team security group (it will exist from the creation of the team) and click the Save changes button.

~

With the new TFS group selected you will see on the right that nothing is set by default. Click on all the permissions that you want to grant to the users of this group and then click on Save changes.

~

Version Control Security

Version control security works in a similar way to what we did with the areas. To start, the security is placed on a folder, and then the permissions are set on each of the folders for the team that should have access to that folder and everything below it (recursive).

~

The first step is to right-click on the folder where you want to apply the security, go down to Advanced in the context menu that pops up, and finally click on the Security tab.

~

When this folder’s security opens up for the first time, the group for the team will not be in the list of roles that have permissions. The first thing you need to do on this screen is click on the Add button and choose the menu option “Add TFS group”.

~

Next you will need to select the team group, add the permissions that you want this new group to have, and finally click on Save changes.

~

That is really all it takes to set up security at the team level. The thing to keep in mind is that the members should not be members of any of the default roles; as you can see from the image above, all these roles have some sort of permission, at a minimum read (the Readers role). If you follow this pattern where the members are only members of their team, then they will only see source that their team group can see. It would be as if the other source were not even there.

Shared Folders Security

For each of the teams to be able to show query tiles on their home page, those queries must exist in the Shared Queries area. Because each team will have different needs, and reports on items that differ from other teams, they should have their own folder that only their team can see. One way we can manage this is to create a query folder for each of the teams under the Shared Queries folder and then add security specific to each team.

Start in the Shared Queries folder; you can do this in either Web Access or Visual Studio. Web Access is shown here, as everyone will have access to this tool, but the steps in Visual Studio are very similar. Here we start from the home page and click on the View queries link.

~

Expand the Shared Queries folder to expose all the folders and out-of-the-box queries. Then right-click on the Shared Queries folder and select “New query folder”.

~

Enter the name of the team for this query folder. After it has been created, right-click on the team folder and select Security…

~

Click on the Add drop-down control and the “Add TFS group” selection. This will open another dialog box so that we can add the Donald Team group to this folder.

~

Find or enter the name of the Team and then click on the Save changes button.

~

With the team security group selected you can select the permissions that they are allowed to have. Typically this would be the Contribute and Read permissions. Then click on the Save changes button.

~

Now, going back to that Shared Queries view, you want to look at what this looks like for a member who belongs only to this team. They can only see their team folder under Shared Queries; even the defaults are not visible.

~

Active Directory Groups

One final discussion in this area of security is how Active Directory groups play into this whole thing. The TFS groups are used to manage the permissions, but instead of adding any individuals to the group, you add the AD group instead.

It pretty much has to be done this way because TFS automatically creates a TFS group at the time the new team is created. Another way this could have been done is by using a TFS group and giving it the permissions directly, but given the way TFS works, this is the cleaner way to go, because the TFS group is going to be created regardless.

Start from the home page of the team and make sure that you are in the team to which you want to add the Active Directory groups. Next, click on the Manage all members link, which will open a new window.

~

In this window click on the Add… drop-down and choose “Add Windows user or group”. This is where you add the Active Directory (AD) group that will be used to manage the actual users. From this point on, as you add or remove people from the AD groups, they will gain or lose the rights that were assigned to the appropriate team.

~

My New 3 Rules for Releases

Every one of my products has an automated build and a properly managed release pipeline. At the time I just thought of this as business as usual, as I was always on my way to having a well-performing DevOps operation in my personal development efforts. Well, something happened in the way I started approaching things that you don’t really plan; things just start to happen when you get into a situation where everything is automated, or at least should be, and that is what this post is about.

I don’t have to wait

One of the first things I noticed was that I no longer felt like I needed to wait for some big plan of mine before doing a release. In the past I was using the Epic work item to plan out the features I would need to complete to get the next release out. I noticed, even before I had all these steps automated, that plans would change quite often. The priorities and the well-meaning releases would take a turn and become something different, like finding a critical bug that affects current customers. I would want to release that fix as quickly as possible.

Before everything was automated, these things bothered me, but there wasn’t an easy way to just get the release out there; there were still enough manual steps that you wanted to limit releases. Now, however, there is nothing stopping me from taking a build that has a completed bug fix or feature, pushing it down the pipeline and getting it released into production. But if this rush to production is suddenly available to me, isn’t there the possibility that something that wasn’t quite ready gets into production by accident? That is why I came up with these 3 new rules that I set for myself, which need to be followed before a build can be pushed into production.

My New 3 Rules for Releases

  1. Don’t allow any builds that came from any branch other than Master (git) or Main (tfvc) into production. If it is not Master then it should just be rejected in the deployment steps.
  2. A build that is released with success into Production, will be locked indefinitely with regards to the retention policy.
  3. The build number must incremented any time that we successfully released into production.

What follows are the ways I automated these 3 rules and made them part of my operation. Now there is never a fear that something might get deployed into production that really should not be. I can push things into production when it is important to do so, and sometimes I might delay a release because there is no big benefit, saving customers from having to download and install a release that could be packaged up with a later one. The point is that a release can be made any time it needs to be, and no more of this long-range planning which never happens the way you expected anyway.

No Builds into Production that did not come from Master

As you may have gathered from some of my earlier posts, my personal projects have pretty much all ended up in git repositories that are remotely hosted in Visual Studio Team Services, Microsoft’s cloud implementation of TFS. With that, I am following a very typical git workflow. Every Product Backlog Item or Bug starts with a new feature or bug branch. This is really nice, as it gives me a good level of isolation and the knowledge that my work will not affect the working code. It also gives me the flexibility to fix an important bug or PBI that changed in priority, knowing that the code I tore apart will not affect the outcome.

This also gives me the opportunity to test this code, confirm that it is complete and give it one more look-through, as the only way code from a branch can get into master is through a pull request. The pull request has a number of options with it as well, such as squashing all the commits into a single commit (so I get a very clean and clear way of answering the question: how did you add this feature?) and deleting the branch after the merge.

Master is always the branch that represents production, or ready for production. I wanted the code to come only from master because this is where all the branches come back to. Having a rule like this makes sure that the merge will always happen and that nothing gets left out. I have seen some very complicated branching structures when working with clients, and something I have seen quite often is that branches did not always get merged back to where they should be. There would be complicated discussions about where the code that goes to production should really come from. Here I have eliminated all that complexity with a rule that says you can’t push a build that did not come from master into Production.

Now, how do you enforce this automatically? I could not find a task that would help me with this, but I did know how I could do it with a simple PowerShell script.

$branch = "$Env:BUILD_SOURCEBRANCHNAME"

if ($branch -ne "master") {
    Write-Host "Cannot deploy builds from the $branch branch into Production" 
    Write-Error ("Can only deploy builds from the master branch into Production")
    exit 1
}
else {
    Write-Host "Carry on, this build qualifies for a deployment into Production"
}
Implementing the Master Branch Only Rule

I use a PowerShell task, with this as an inline script, at the top of the release for the Production environment to implement this rule. If for some reason I push a build that came from some other branch, this task fails and the release goes no further. In my world I typically have one build definition that by default points to the master branch, but I override that when I am working on one of my feature branches to get feedback on how the code is building and deploying. Which I really like, because I am using the very same build and deployment scripts that I would use when going into production. So you can see how a build from one of these branches could accidentally get into production if I did not have this very elegant rule enforcement.

Locking A Released Build

During development, several builds and deployments are happening all the time. Most of these I don’t really care about, as their only real value is the feedback that the application is still able to build and deploy as it always has. So one thing I never want to do is lock down a build that came from anything other than the master branch. I used to have a task on the build definition that would lock down any build created from the master branch. However, this is not always a good rule to live by either, as there have been times when the deployment of a master build failed while going through the release pipeline, and other times when it did not fail but there was a conscious decision to hold off on a release that had been merged into master so it could ship with a few more features.

What I needed was a task that would put an infinite lock on the build whenever it was successfully deployed into Production. I found one in the Microsoft Marketplace that does exactly that. This task is part of a small collection of build tasks written by Richard Fennell, who is a long-time ALM MVP. In the Marketplace it is called “Build Updating Tasks”, and if you search for that, “Richard Fennell” or “Black Marble”, I am sure you will find it.

I have this task near the end of my Prod deployment, set the Build selection mode to “On primary build artifact”, and done. Works like a charm: when I deploy to production and it is successful, it finds that build and sets its retention to keep forever. I no longer have to think about making sure I don’t lose the builds that are in Production.

Increment the Build number

This rule has really allowed me to move freely into my new DevOps approach and no longer have this dependency on the long-planned release, which as I explained earlier never got released the way I thought it would. Things and priorities change; that is life. In my build definition I have a set of variables: one called MajorMinorNumber and the other BuildNumber. These, combined with the TFS revision number on the end, give me the version number of my release. So in the build definition, under the General sub-tab, my Build number format looks similar to:

Product-v$(MajorMinorNumber).$(BuildNumber)$(rev:.r)

Now let’s break this down a little. The MajorMinorNumber changes rarely, as it represents big changes in the application. This follows something close to semantic versioning: if there is going to be a breaking change I update the major number; if there is going to be a big change that remains backwards compatible, the minor number is incremented. In the case where I am just adding some new features that are additive to the application, or fixing some bugs, the build number is incremented. The 4th number, the revision, is left for TFS to guarantee that we always have a unique build number.
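As a made-up example, with MajorMinorNumber set to 2.3 and BuildNumber sitting at 17, a day’s builds would come out looking something like this:

Product-v2.3.17.1    first build of this code
Product-v2.3.17.2    same code again, only build definition tweaks
Product-v2.3.18.1    after a successful Production deployment bumped BuildNumber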

In the past I have been known to use a date-like version number for applications where I didn’t think it would really matter. However, I have noticed that even with them some very important information gets lost. If I had a daily build going, the day part of the version number would increment every day even though I might still be working on the same PBI or Bug. Instead I want a new build number after a successful deployment into Production. This means that I have customers out there who may have upgraded to a newer version, and with that I can even produce release notes for what was part of that release. But I did not want to go and increment the build number in the build definition every time this happened; I wanted this to be automatic as well.

The solution is another task that is part of the extension we just installed. There is a task called “Update Build Variable”, and I have it as the very last task of the deployment into my Prod environment. It is very simple to set up: the Build selection mode is “Only primary build artifact”, the Variable to update is “BuildNumber”, and the Update mode is “Autoincrement”.

Now, after a successful deployment into Production, my build number is incremented and ready to go for either my next long-planned set of features or getting out that really quick, important fix or special feature that I just needed to get out there.

My Experience with Git Sub-modules

I just replaced my phone with a new Microsoft Lumia 950 XL, which is a great phone. In my usual fashion of checking out the new features of my phone, I wanted to see how my web sites looked.

The operating system of this phone is the mobile version of Windows 10, and of course it uses the new browser called Edge. Well, it seems that my blog did not look good at all on this new platform and was in fact not even close to being workable. Even though I had the fonts set to the smallest setting, what was displayed were huge letters, so hardly any words fit on a line and it just looked crazy. However, I noticed that other web sites looked just fine, especially the ones that I recognized as truly being built around the Bootstrap framework.

I was also surprised at how many other web sites look bad in this browser, with the same problems that I had. I may address some of that in a later post, but right now what I wanted to find out is whether changing the style of this blog would solve my problem. If I just changed the theme or something, could my site look great again? This was all very surprising to me, as I had tested the responsiveness of this site and it always looked good; I just don’t know why my new phone made it look so bad.

New Theme, Based on Bootstrap

Finding different themes for Hexo was not a problem; there are many of them, and most are even free. I am really loving the work I have done with the Bootstrap framework, so when I found a Hexo theme built around Bootstrap, you know I just had to try it. This theme looked great, a much simpler-looking theme than the one I was using, which was really the default theme with a few customizations. The new theme was also open source, in another GitHub repository. The instructions said to use some sub-module mumbo jumbo to pull the source into the build. Well, now I was curious, as there was something I had seen on the build definition when working with git repositories: a simple check box that says include sub-modules. Looks like it is time to find out what git sub-modules are all about.

Welcome to another git light bulb moment.

What is a git sub-module?

The concept of a git sub-module was completely new to me as a developer who has used, for the most part, a centralized version control system of one sort or another for most of my career. I looked up the help files for git sub-modules and read a few blog posts, and it can get quite complicated, but rather than going through everything it can do, let me explain how it worked for me to quickly update the theme for my blog. In short, a git sub-module is another git repository that can be used to provide source for certain parts of yet another git repository without being part of that repository.
In other words, instead of having to take all the source from this other git repository and add it to my existing Blog git repository, my repository holds a reference to it and will pull down that code so that I can use it during my build, both locally and on the build machine. And the crazy thing is that it makes it really easy for me to keep up with the latest changes, because I don’t have to manage how it pulls the latest from this other repository through the sub-module.

I started in my local git repository, and because I wanted this library in my themes folder I navigated to that folder, as this is where Hexo expects to see themes. Then, using posh-git (a PowerShell module for working with git), I entered the following command.

git submodule add https://github.com/cgmartin/hexo-theme-bootstrap-blog.git

This created the folder hexo-theme-bootstrap-blog, downloaded that git repository into my local workspace and added a file called .gitmodules at the root of my Blog git repository. Looking inside the file, it contains the following:

[submodule "themes/bootstrap-blog"]
    path = themes/bootstrap-blog
    url = https://github.com/cgmartin/hexo-theme-bootstrap-blog.git

When I added these changes to my staging area by using the add command:

git add .

It only added the .gitmodules file, and of course the push only added that file to my remote git repository in TFS as well. Looking at the code of this Blog repository in TFS, there is no evidence that this theme has been added to the repository, because it has not. Instead there is this file that tells the build machine, and any other local clone, where to find this theme and how to get it. The only thing left was to change my _config.yml file to tell it to use the bootstrap-blog theme and run my builds. Everything works like a charm.
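One thing worth knowing if you clone the repository fresh somewhere else (a new dev box, or a build agent without the include sub-modules option checked): the sub-module content is not pulled down automatically. The usual commands for that, and for picking up newer theme commits later, are:

git submodule update --init --recursive    # populate the sub-module after a fresh clone
git submodule update --remote              # pull the latest commits from the theme repository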

I really don’t think there is any way you can do something like this using centralized version control. Hmm, makes me wonder: where else can I use git sub-modules?

Some MSDeploy Tricks I've Learned

In an earlier post I talked about Hexo, the tool I use for this blog. In that post I talked about how delighted I was with the process, except for one thing that did bother me, and that was the deployment to the Azure website. For this I was using FTP to push the files from the public folder to Azure. Instead, I was hoping for an MSDeploy solution, but that is harder than it sounds, especially when you are not really using a Visual Studio project and MSBuild to create the application.

In this post I will take you on my journey to find a working solution that enables me to deploy my blog as an MSDeploy package to the Azure website.

What is in the Public Folder

First off, I guess we should talk about what is in this folder that I call public. As I mentioned in my Hexo post, the Hexo generate command takes all my posts, written in simple markup, and creates the output that is my website, placing it in a folder called public.

It is the output of this folder that I wish to create the MSDeploy package from. This is quite straightforward, as I already knew that you can use MSDeploy not only to deploy a package but also to create one. It does require knowing how to call MSDeploy from the command line.

Calling MSDeploy directly via Command Line

The basic syntax to create a package with MSDeploy is to call MSDeploy.exe with the parameter -verb, where the verb choice is pretty much always sync. Then you pass in the parameter -source, which says where the source is, and finally -dest, which tells it where to place the package, or where to deploy the package to if the source is a package.
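Put together, the smallest useful example of that syntax packages a folder straight into a zip file. The paths here are just placeholders, and as with the later examples this would all go on one line at a real prompt:

"C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe"
-verb:sync
-source:contentPath="C:\MySite\public"
-dest:package="C:\MySite\public.zip"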

Using Manifest files

MSDeploy is very powerful, with many options and things you can do with it. I have found it difficult to learn because, as far as I can tell, there is no good book or course that takes you into any real depth with this tool. I did come across a blog, DotNet Catch, that covers MSDeploy quite often. It was there that I learned about creating and deploying MSDeploy packages using manifest files.

In this scenario I have a small XML file that says where the content is found, and for that I write out a path to where the public folder is on my build machine. I call this file manifest.source.xml.

<?xml version="1.0" encoding="utf-8"?>
<sitemanifest>
  <contentPath path="C:\3WInc-Agent\_work\8\s\public" />
  <createApp path="" />
</sitemanifest>

With the source manifest, and an existing application that I want to package up sitting in the public folder at the disclosed location, I just have to call the following command to generate an MSDeploy package. If you are calling this from the command line on your machine, this should all be on one line.

"C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe"
-verb:sync
-source:manifest="C:\3WInc-Agent\_work\8\s\msdeploy\manifest.source.xml"
-dest:package=C:\3WInc-Agent\_work\8\s\msdeploy\blog.zip

If you are calling this from TFS you would use the Command Line task; in the first box, called Tool, you put the path to the msdeploy.exe program, and the other lines would go, as a single line, into the Arguments box.
Build Task to Create Package from Manifest file

Now, in order for that to work I need a similar XML file, used as the destination file, to tell MSDeploy that this package is a sync to the particular website. This file I called manifest.dest.xml.

<?xml version="1.0" encoding="utf-8"?>
<sitemanifest>
  <contentPath path="Default Web Site" />
  <createApp path="Default Web Site" />
</sitemanifest>

The syntax to call this blog.zip package and the destination manifest file is:

"C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe"
-verb:sync
-source:package="C:\3WInc-Agent\_work\8\s\msdeploy\blog.zip"
-dest:manifest="C:\3WInc-Agent\_work\8\s\msdeploy\manifest.dest.xml"

This works great, except that I cannot use the XML files when deploying to my Azure websites, as I do not have that kind of control over them. It is not a virtual machine that I can log onto, or run a remote PowerShell script against to do my bidding, and this package won’t deploy onto that environment without that. I needed another approach to get this to work the way I need it to.

Deploy to Build IIS to create a new MSDeploy package

This next idea I came up with is a little strange, and I had to get over the fact that I was configuring a web server on my build machine, but that is exactly what I did. My build machine is a Windows Server 2012 R2 virtual machine, so I turned on the Web Server role from Roles and Features. Then, using the same kind of command that I called from a Command Line task when I created the package from the public folder, I deployed the package to the build machine.
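For what it is worth, on Windows Server 2012 R2 the same role can be switched on from an elevated PowerShell prompt instead of clicking through the wizard, roughly like this:

Install-WindowsFeature -Name Web-Server -IncludeManagementTools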

At this point I could even log into the build machine and confirm that I do indeed have a working web site with all my latest posts in it. I then called MSDeploy once more and created a new Blog.zip package from the web site.

"C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe"
-verb:sync
-source:iisApp="Default Web Site"
-dest:package="C:\3WInc-Agent\_work\8\s\msdeploy\blog.zip"

The resulting blog.zip was easily deployed to my Azure website without any issue whatsoever. As you may have noticed, the new blog.zip has the exact same name and location as the old one. There was no need to keep the first package, as it was only used to get the site deployed onto the build machine so that we could create the one we really want. To make sure that went smoothly, I deleted the old one before calling this last command, which is also a Command Line task in the build definition.
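
The delete itself is just one more inline PowerShell step ahead of that last Command Line task; a minimal sketch, assuming the same agent path used above:

# Remove the intermediate package so the rebuilt blog.zip can take its place
Remove-Item "C:\3WInc-Agent\_work\8\s\msdeploy\blog.zip" -ErrorAction SilentlyContinue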

Success on Azure Website

In my release definition for the Azure web site deployment I just needed to use the built-in, out of the box task called "Azure Web App Deployment", point it to where it could find the blog.zip file, and tell it the name of my Azure web site; it took care of the rest.

Deploy the zip package to Azure

How I Use Chocolatey in my Releases

I have been using Chocolatey for a while as an ultra easy way to install software. It has become the preferred way to install tools and utilities from the open source community. Recently I started to explore this technology in more depth just to learn more about Chocolatey, and I found some really great uses for it that I did not expect to find. This post is about that adventure and how and what I use Chocolatey for.

Built on NuGet

First off, I guess we should talk about what Chocolatey is. It is another package technology based on NuGet; in fact it is NuGet with some more features and elements added to it. If you have been around me over the last couple of years, you have heard me declare that NuGet is probably one of the greatest advancements the .NET community has seen in the last 10 years. Initially introduced back in 2010, it was a package tool to help resolve the dependencies in open source software. Even back then I could see that this technology had legs, and indeed it has proven to solve a hard development problem we had worked on for years: being able to share code across multiple projects without interfering with the development of the projects that depend on it. I will delve into this subject in a later post, as right now I want to focus on Chocolatey.

While NuGet is really about installing and resolving dependencies at the source code level, as in a Visual Studio project, Chocolatey takes the same package structure and focuses on the operating system. In other words, I can create NuGet-like packages (they have the very same extension as NuGet, *.nupkg) that I can install, uninstall or upgrade in Windows. I have a couple of utility programs that run on the desktop that I use to support my applications. These utilities are never distributed as part of the application I ship through ClickOnce, but I need up-to-date versions of them on my test lab machines. It had always been a problem finding some way to keep them installed and current on those machines; with Chocolatey that problem is now easily solved.

Install Chocolatey

Let's start with how we would go about installing Chocolatey. If you go to the Chocolatey.org website there are about three ways listed to download and install the package, all of them using PowerShell.
This first one assumes nothing, as it will bypass the ExecutionPolicy and has the best chance of installing on your system.

@powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))" && SET PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin

This next one assumes that you are an administrator on your machine and that you have set the execution policy to at least RemoteSigned.

iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))

Then this last script assumes that you are an administrator, have the execution policy set to at least RemoteSigned, and have PowerShell v3 or higher.

iwr https://chocolatey.org/install.ps1 -UseBasicParsing | iex

Not sure what version of PowerShell you have? The easiest way to tell is to bring up the PowerShell console (you will want to run with elevated Administrator rights) and enter the following:

$PSVersionTable
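
If all you care about is the version number itself, you can narrow that down to just the PSVersion property:

$PSVersionTable.PSVersion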

Making my own Package

Okay, so I have Chocolatey installed and I have a product that I want to install, so how do I get this package created? Good question, so let's tackle that next. I start in File Explorer, go to my project and create a new folder. In my case I was working with a utility program called AgpAdmin, so at the sibling level of that project I made a folder called AgpAdminInstall, and this is where I am going to build my package.

The file structure

Now I bring up PowerShell running as an administrator, navigate over to that new folder I just created and enter the following Chocolatey command.

choco new AGPAdmin

This will create a nuspec file with the same name that I entered in the new command, as well as a tools folder containing two PowerShell scripts. There are a couple of ways you can build this package, since the final bits don't even need to be inside it; they could be referenced at other locations from which they can be downloaded and installed. There is a lot of documentation and plenty of examples for doing it that way, and I would say most of the packages found on Chocolatey.org are built like that. The docs mention that the assemblies can be embedded instead, but I never found an example, and that is how I wanted to package this, so that is the guidance I am going to show you here.
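
For reference, the layout I ended up with looked roughly like this (depending on your Chocolatey version the files may be generated inside an extra AGPAdmin subfolder, so treat this as a sketch rather than an exact listing):

AgpAdminInstall\
    agpadmin.nuspec
    tools\
        chocolateyinstall.ps1
        chocolateyuninstall.ps1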

Let's start with the nuspec file. This is the file that contains the metadata and says where all the pieces can be found. If you are familiar with creating a typical NuGet spec this should all look pretty familiar, but there are a couple of things you must be aware of. In the Chocolatey version of this spec file you must have a projectUrl (in my case I pointed to my VSTS dashboard page), a packageSourceUrl (in my case I pointed to my git repository), and a licenseUrl, which needs to point to a page that describes your license. I never needed these when building a NuGet package, but they are required in order to get the Chocolatey package built. One more thing we need for the nuspec file to be complete is the files section, where we tell it what files to include in the package.

There will be one entry there already, which includes all the items found in the tools folder and places them within the NuGet package structure under tools. We want to add one more file entry with a relative path to the setup file being constructed: up one folder and then down three folders through the AGPAdminSetup tree, with the target again being tools inside the package structure. This line is what embeds my setup program into the Chocolatey package.

<?xml version="1.0" encoding="utf-8"?>
<!-- Do not remove this test for UTF-8: if “Ω” doesn’t appear as greek uppercase omega letter enclosed in quotation marks, you should use an editor that supports UTF-8, not this one. -->
<package xmlns="http://schemas.microsoft.com/packaging/2015/06/nuspec.xsd">
  <metadata>
    <!-- Read this before publishing packages to chocolatey.org: https://github.com/chocolatey/chocolatey/wiki/CreatePackages -->
    <id>agpadmin</id>
    <title>AGPAdmin (Install)</title>
    <version>2.2.0.2</version>
    <authors>Donald L. Schulz</authors>
    <owners>The Web We Weave, Inc.</owners>
    <summary>Admin tool to help support AGP-Maker</summary>
    <description>Setup and Install of the AGP-Admin program</description>
    <projectUrl>https://donald.visualstudio.com/3WInc/AGP-Admin/_dashboards</projectUrl>
    <packageSourceUrl>https://donald.visualstudio.com/DefaultCollection/3WInc/_git/AGPMaker-Admin</packageSourceUrl>
    <tags>agpadmin admin</tags>
    <copyright>2016 The Web We Weave, Inc.</copyright>
    <licenseUrl>http://www.agpmaker.com/AGPMaker.Install/index.htm</licenseUrl>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
  </metadata>
  <files>
    <file src="..\AGPAdminSetup\bin\Release\AGPAdminSetup.exe" target="tools" />
    <file src="tools\**" target="tools" />
  </files>
</package>

Before we move on to the automated steps (so that we don't even have to think about building this package every time), we need to make a couple of changes to the PowerShell script found in the tools folder. When you open this script it is well commented, and the variable names used are pretty clear about what they are for. You will notice that out of the box it seems geared toward having you provide a URL from which it can download the program to install. I want to use the embedded approach, so un-comment the first $fileLocation line and replace 'NAME_OF_EMBEDDED_INSTALLER_FILE' with the name of the file you want to run; I will also assume that you have it in this same tools folder (inside the compiled nupkg file). In my package I created an install program using the WiX toolset, which also gives it the capability to uninstall itself automatically. Next I commented out the default silentArgs and validExitCodes found right under the #MSI comment. There is a long run of commented lines that all start with #silentArgs; what I did was un-comment the last one and set the value to '/quiet', and un-comment the validExitCodes line right below it, so the lines look like this:

silentArgs = '/quiet'
validExitCodes= @(0)

That is really all there is to it; the rest of this script file should just work. There are a number of different cmdlets you can call, and they are all shown, fairly well commented, in the chocolateyinstall.ps1 file that appeared when you ran the choco new command. Since I was creating a Chocolatey wrapper around an install program, I chose the cmdlet Install-ChocolateyInstallPackage. So, to summarize, ignoring the commented lines, the finished PowerShell script looks a lot like this:

$ErrorActionPreference = 'Stop';
$packageName  = 'MyAdminProg' # arbitrary name for the package, used in messages
$toolsDir     = "$(Split-Path -parent $MyInvocation.MyCommand.Definition)"
$fileLocation = Join-Path $toolsDir 'MyAdminProgSetup.exe'
$packageArgs = @{
  packageName    = $packageName
  unzipLocation  = $toolsDir
  fileType       = 'EXE' #only one of these: exe, msi, msu
  url            = $url   # left from the template; not needed for an embedded installer
  url64bit       = $url64 # left from the template; not needed for an embedded installer
  file           = $fileLocation
  silentArgs     = '/quiet'
  softwareName   = 'MyAdminProg*' #part or all of the Display Name as you see it in Programs and Features. It should be enough to be unique
  checksum       = ''
  checksumType   = 'md5' #default is md5, can also be sha1
  checksum64     = ''
  checksumType64 = 'md5' #default is checksumType
}
Install-ChocolateyInstallPackage @packageArgs

One thing that we did not cover in all this is the fileType value. This is going to be exe, msi or msu, depending on how you created your setup file. I took the extra step in my WiX install program of creating a bootstrapper, which takes the initial MSI, checks the prerequisites (such as the correct version of the .NET Framework) and turns it into an EXE. You will need to set this value to match the install program you want to run.
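
For comparison, if your installer had stayed a plain MSI rather than a bootstrapped EXE, those same lines would typically look more like the following (the /qn /norestart switches and the 3010 "reboot required" exit code are the usual MSI conventions, so double-check them against your own installer):

fileType       = 'MSI'
silentArgs     = '/qn /norestart'
validExitCodes = @(0, 3010)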

Another advantage of using an install package is that it knows how to uninstall itself. That means I did not need the other PowerShell script in the tools directory, chocolateyuninstall.ps1. I deleted mine so that the package would use the automatic uninstaller that is managed and controlled by Windows (MSI). If this file exists in your package then Chocolatey is going to run that script, and if you have not set it up properly it will give you issues when you run the choco uninstall command for the package.

Automating the Build in TFS 2015

We want to make sure that we place the tools folder and the nuspec file into source control. Besides giving us a place where we can repeat this process and keep track of any changes between versions, it lets us automate the entire operation. Our goal here is that checking in a code change to the actual utility program will kick off a build, create the package and publish it to our private Chocolatey feed.

To automate building the Chocolatey package I started with a build definition I already had that was building all these pieces. It built the program, created the AGPAdminPackage.msi file and then turned that into a bootstrapper, giving me the AGPAdminSetup.exe file. Our nuspec file has already indicated where to find the finished AGPAdminSetup.exe so that it will be embedded into the finished .nupkg file. Just after the steps that compile the code and run the tests, I add a PowerShell task, switch it to run inline, and write the following script:

# You can write your powershell scripts inline here.
# You can also pass predefined and custom variables to this scripts using arguments
cpack

This command will find the .nuspec file and create the .nupkg in the same folder as the nuspec file. From there I copy the pieces I am interested in having in the drop into the staging workspace $(Agent.BuildDirectory)\b, and in the Copy and Publish Build Artifacts step I just push everything I have in staging.
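
As a rough sketch of that copy step, an inline PowerShell task along these lines would do it (the staging path and the use of the AGENT_BUILDDIRECTORY environment variable mirror my own build and are assumptions here; a Copy Files task works just as well):

# Make sure the staging folder exists, then copy the freshly packed .nupkg into it
New-Item -ItemType Directory -Force -Path "$env:AGENT_BUILDDIRECTORY\b" | Out-Null
Copy-Item -Path "*.nupkg" -Destination "$env:AGENT_BUILDDIRECTORY\b" -Force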

Private Feed

Because Chocolatey is based on NuGet technology, it works on exactly the same principle of distribution, which is a feed (though it could also be a network file share). I have chosen a private feed, as I need one that I can access from home, from the cloud, and when I am on the road. You might be in a similar situation, so how do you set up a Chocolatey server? With Chocolatey, of course.

choco install chocolatey.server -y

On the machine where you run this command, it will create a chocolatey.server folder inside a tools folder off the root of the drive. Just point IIS at this folder and you have a Chocolatey feed ready for your packages. The packages actually go into the App_Data\packages folder inside this ready-to-go chocolatey.server site. However, I will make another assumption that this server may not be right next to you but on a different machine or even in the cloud, so you will want to push your packages to it. To do that you will need to make sure you give the app pool modify permissions on the App_Data folder. Then, in the build definition, after the Copy and Publish Build Artifacts step, add another inline PowerShell script, and this time call the following command:
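
Granting that modify permission can be scripted as well; a sketch from an elevated command prompt, assuming the default C:\tools\chocolatey.server location and an application pool named Chocolatey (both are assumptions, so substitute your own path and pool name):

icacls "C:\tools\chocolatey.server\App_Data" /grant "IIS AppPool\Chocolatey":(OI)(CI)M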

# You can write your powershell scripts inline here.
# You can also pass predefined and custom variables to this scripts using arguments
choco push --source="https://<your server name here>/choco/" --api-key="<your api key here>" --force

That is really it; you now have a package in a feed that can be installed and upgraded with a simple Chocolatey command.

Make it even Better

I went one step further to make this even easier by modifying the Chocolatey configuration file so that it looks in my private repository first, before the public one that is set up by default. This way I can install and upgrade my private packages just as if they were published and exposed to the whole world, even though they are not. You find the chocolatey.config file in the C:\ProgramData\Chocolatey\config folder. When you open the file you will see an area called sources, with probably one source listed. Just add an additional source entry, give it an id (I called mine Choco), set the value to where your Chocolatey feed can be found, and set the priority to 1. That is it, but you need to do this on every machine that is going to be getting your program and all the latest updates.
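
If you would rather script that than hand-edit chocolatey.config on every machine, the same source entry can be registered from the command line; a sketch, using the placeholder feed URL from earlier:

choco source add -n=Choco -s="https://<your server name here>/choco/" --priority=1

Either way, once the feed is registered, whenever a build is about to run tests on a virtual machine a simple PowerShell script can install and upgrade the package for you.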

choco install agpadmin -y
Write-Output "AGPAdmin Installed"
choco upgrade agpadmin -y
Write-Output "AGPAdmin Upgraded"
Start-Sleep 120

The program I am installing is called agpadmin, and I pass -y so that it skips the confirmation prompt, as this is almost always running as part of a build. I call both the install and then the upgrade because one command does not seem to do both: the install is simply ignored if the package is already installed, and the upgrade then picks up any newer version that is out there.

Hope you enjoy Chocolatey as much as I do.

Who left the Developers in the Design Room

This post is all about something that has been bugging me for quite a while. I have kept quiet about it, started the conversation with different people at random, and now it is finally time I just say my piece. Yes, this is soap box time, so I am just going to unload here. If you don't like this kind of post, I promise to be more joyful and uplifting next month, but this month I am going to lay it out there and it just might sound a bit harsh.

Developers are bad Designers

I come from the development community, with over 25 years spent on the craft, and I originally got there because I was tired of bad workflows and interfaces built by people who thought they understood how accounting should work but just did not. I implemented a system that changed my workload from 12-hour days plus some weekends to getting everything done in ten normal days. Needless to say I worked my way out of a job, but that was okay because it led me to opportunities that really allowed me to be creative. You would think that with a track record like that I should be able to design very usable software and be a developer, right?

Turns out that being a developer has given me developer characteristics, and that is that we are a bit geeky. As a geeky person, you tend to like having massive control and clicking lots of buttons, but this might not be the best experience for a user who is just trying to get their job done. I once made the mistake of asking my wife, who was the Product Owner of a little product we were building, what the message should be when they confirm that they want to Save a Student. Her remark threw me off guard for a moment: why do I need a save button? I made the change, so just save it; don't have a button at all.

Where’s the Beef

Okay, so far all I have enlightened you with is that I am not always the best designer, and that is why I have gatekeepers like my wife who remind me every so often that I am not thinking about the customer. However, I have noticed that many businesses have been revamping their websites with what looks like a focus on mobile. I get that, but the end result is that it is harder for me to figure out how to use their site, and some things that I was able to do before are just not possible anymore. You can tell right away that the changes were not based on how a customer might interact with the site; I don't think the customer was even considered.

One rule that I always try to follow, and this is especially true for an eCommerce site, is that you need to make it easy for the customer if you want them to buy. Some of the experiences that I have had lately almost left me convinced that they don't want to sell their products or do business with me. For some of these I have sought out different vendors because the frustration level is just too high.

Who Tests this Stuff?

That leads right into my second peeve: no one seems to test this stuff. Sure, the developer probably tested their work for proper functionality, and there might even have been a product owner who understood the steps to take after talking to the developer and proved to him or herself that the feature was working properly. That is not testing, my friend; both of these groups test applications the very same way, and it's called the Happy Path. No one is thinking about all the ways that a customer may expect to interact with the new site, especially when you have gone from an older design to a new one. No one thought of that, and now your sales numbers are falling because no one knows how to buy from you.

Testers have a special gene in their DNA that gives them the ability to think about all the ways a user may interact with the application, and even to attempt to do evil things with it. You want these kinds of people on your side; it is better to find a problem while the software is still under development than to have a customer find it, or worse yet to get hacked, which could really cost you financially as well as in trust.

In my previous post "Let the Test Plan tell the Story" I laid out the purpose of the test plan. This is the report that we can always go back to and see what was tested, how much of it was tested, and so on. I feel that the rush to get a new design out the door is hurting the future of many of these companies, because they are taking the shortcuts of not designing these sites with the customer in mind and eliminating much of the much-needed testing. At least that is how it seems to me; my opinion.

Let the Test Plan Tell the Story

This post is the result of some discussions I have had lately while trying to determine the workflow for a client. The topic often comes up with others as well, but what I had never used as an argument before was the role of the test plan in all this. Besides being an eye opener and an aha moment for the client and myself, I thought I would explore the thought a little more, as others might also find it helpful in understanding and getting better control of their flows.

What is this flow?

There is a flow in the way that software is developed and tested, no matter how you manage your projects. Things typically start from some sort of requirement work item that describes the business problem, what the client wants to do, and some benefit the client would receive if it were implemented. Yes, I just described the basics of a user story, which is where we should all be by now when it comes to software development. The developers, testers, and whoever else might be contributing to making this work item a reality start breaking the requirement down into the tasks they are going to work on to make it happen.

The developers get to work writing the code and completing their tasks, while the testers start writing the test cases they will use to prove whether the new requirement is working as planned or simply is not working. These test cases all go into a test plan that represents the current release you are working on. As the developers complete their coding the testers start testing, and any test cases that are not passing go back to the developers for re-work. How this is managed depends on how the teams are structured. Typically in a scrum team, where you have developers and testers on the same team, this would be a conversation, and the developer might just add more tasks because this is work that got missed. In situations where the flow between developers and testers is still a separate hand-off, a holdover from the waterfall days, a bug might be issued that goes back to the developers, and you follow that through to completion.

As the work items move from the business to the developers they become Active. When the developers are code complete the work items should become resolved and as the testers confirm that the code is working properly they become closed. Any time that the work item is not really resolved (developer wishful thinking) the state would move back to Active. In TFS (Team Foundation Server) there is an out of the box report called Reactivations which keeps track of the work items that moved from resolved or closed back to active. This is the first sign that there are some serious communication problems going on between development and test.

With all the Requirements and Bugs Closed How will I know what to test?

This is where I find many teams start to get a little weird and over-complicate their workflows. I have seen far too many clients take the approach of adding extra states that say where the bug is by including the environment they are testing it in. For instance, they might have something that says Ready for System Testing or Ready for UAT, and so on. Initially this might sound sensible and the right thing to do. However, I am here to tell you that it is not beneficial, it loses the purpose of the states, and this workflow is going to drown you in the amount of work it takes to manage it. Let me tell you why.

Think of the state as a control on how far along that requirement or bug is. For instance, it starts off as New or Proposed, depending on your process template, and from there we approve it by changing the state to Approved or Active. Teams that use Active in their workflow don't start working on an item until it is moved into the current iteration. The process that uses Approved also moves the item into the current iteration, but then moves the state to Committed when work actually begins. At code completion the Active ones go to Resolved, where the testers begin their testing and, if satisfied, close the work item. In the Committed group the developers always work very closely with the testers, who have been testing all along, so when the test cases are passing the work item moves to Done. The work on these work items is done, so what happens next is that we start moving the build that represents all the completed work through the release pipeline. Are you with me so far?

This is where I typically hear confusion, as the next question is usually something like this: if all the requirement and bug types have been closed, how do we know what to test? The test plan, of course; this should be the report that tells you what state these builds are in. It should be from this one report, the results of the test plan, that we base our approvals for the build to move on to the next environment and eventually to production. Let the test plan tell the story. From the test plan we can not only see how the current functionality works and matches our expectations, but there should also be a certain amount of regression testing going on to make sure features that have worked in the past are still working. We get all that information from this one single report, the test plan.

Test Plan Results

The Test Impact Report

As we test the various builds throughout the current iteration, as new requirements are completed and bugs fixed, the testers are running those test cases to verify that the work truly is complete. If you have been using Microsoft Test Manager (MTM), this is a .NET application, and you have turned on the test impact instrumentation through the test settings, you get the added benefit of the Test Impact report. In MTM, as you update the build that you are testing, it does a comparison against the previous build and what has been tested before. When it detects that code has changed near code you previously tested (and probably passed), it includes those test cases in the Test Impact report as tests you might want to re-run, just to make sure the changes do not affect your passed tests.

Test Impact Results

The end result is that we have a test plan that tells the story on the quality of the code written in this iteration and specifically identifies the build that we might want to consider pushing into production.

Living on a Vegan Diet

In all the blog posts that I have written over the years I have never talked about health or a healthy lifestyle. This will be a first, and you as a technology person might be wondering what living a vegan lifestyle has to do with software. After all, the blog title is "Donald on Software".

For years I would go through these decade birthdays and just remark how turning thirty was just like turning twenty, except I had all the extra knowledge called life. Going from thirty to forty, same thing, but things took a turn when I moved into my fifties. Doctors noticed that my blood pressure was a bit elevated. I took longer to recover from physical activities. I felt aches I had never noticed before, and I had promised my wife that I would live a long, long time, which wasn't feeling all that convincing. I didn't have the same get up and go that I had known before.

A Bit About My Family

My wife and stepdaughter have been vegetarian/vegan for many years. I was open to other types of food, like plant-based meals, and would eat them on occasion when we were at a vegan restaurant or that was what was being cooked at home. However, I travel a lot, so most of my food came from restaurants where I could eat anything I wanted. This went on for several years, and I was taking a mild blood pressure pill every day. It kept my blood pressure under control, but there were other things it appeared to be affecting as well, in a negative way.

The Turning Point for Me

During Thanksgiving weekend in November 2014, Mary (my wife) and I watched a documentary on Netflix called "Forks over Knives", and at the end of it I vowed never to eat meat again and to start moving towards a vegan lifestyle.
The documentary is about two doctors, one from the medical field and one from the science side of things, and their adventure in unravelling the truth about how the food we eat is related to health. One of the biggest studies ever done is called "The China Study", a 20-year study that examines the relationship between the consumption of animal products (including dairy) and chronic illnesses such as coronary heart disease, diabetes, breast cancer, prostate cancer and bowel cancer.

Not only would those numbers come down, but once the toxic animal products were out of our system, our bodies would start to repair some of the damage that we had always been told could never be repaired naturally.

Getting over the big Lie

Yes, there is a very large lie that we have all believed to be the truth because we assumed it came from the medical field and was sanctioned by the government: the daily nutritional guide. This is the guide that told us to eat large amounts of meat and dairy products to give us energy and strong bones, but it did not come from any medical study; it came from the agriculture and dairy industries to sell more products.

Our bodies reject most of the animal protein we take in; only a small amount is actually used. Common sense tells me that if my body is rejecting all this animal-based protein it is working extra hard, and something is going to break down in the form of disease and other difficulties, especially as we get older. Oh wait, they now make a pill for that, so we can continue to live the way we always have. So now we are not only supporting an industry that never had that big a market before, but we are also spending billions of dollars every year with pharmaceutical companies to correct the mistakes we made with the things we eat. One thing I did learn in physics is that every action creates an equal and opposite reaction, so this is not solving anything either; it just keeps making things worse, and now health care costs are through the roof for bodies that normally know how to heal themselves.

Now for the Good News

I know I have you all depressed and disappointed, as I just dissed your favorite food and called it bad and toxic, but there is a happy ending here. I felt like you do right now for about five minutes and then decided to say "NO to meat". If you get a chance I would encourage you to look up that documentary, "Forks over Knives", as one other thing that disturbed me was the way these animals were harvested while calling it ethical or within the approved guidelines. The animals were under stress, and that stress goes into the meat, and you wonder why everyone seems so stressed; I know there is a relationship here.

Anyway, the good news is my latest checkup with my doctor. I am currently on no medication whatsoever, and my blood pressure numbers are very normal and very impressive for a guy my age. I did a stress test and was able to reach my ideal heart rate easily and effortlessly, and I feel great. If I had any plaque buildup it is certainly repairing itself. I still can't seem to lose the 15 pounds I have been working on for the last couple of years, but I know I will accomplish that soon enough. I am done with meat and all animal proteins, as in milk, eggs and honey, and I am going to live a long, long time and feel great. Won't you join me?

Migrate from TFVC to Git in TFS with Full History

Over the last year or so I have been experimenting with and learning about git. The more I learned about this distributed version control system, the more I liked it, and finally, about six months ago, I moved all my existing code into git repositories. They are still hosted on TFS, which is the best ALM tool on the market by a very, very, very long mile. Did I mention how much I love TFS and where this product is going? Anyway, back to my git road map, as this road is not as simple as it sounds, because many of the concepts are so different and at first even seemed a bit weird to me. After getting my head around the concepts and the true power of this tool, there was no turning back. Just to be clear, I am not saying that the old centralized version control known as TFVC is dead; by no means. There are some things that I will continue to use it for and probably always will, like my PowerPoint slides and much of my training material.

Starting with Git

One thing about git is that there is just an enormous amount of support for it, and its availability on practically every coding IDE for every platform is remarkable. What really made the migration simple for me was an open source project on CodePlex called Git-TF. In fact, the way I originally used this tool was that I made a separate TFS project with a git repository. I would work in that new repository, with some CI builds to make sure things kept working, and when I finished a feature I would push it back to TFVC as a single changeset. Because I always link my commits to a work item in the TFVC project, this had a side effect I was not expecting: if you opened the work item you would see the commits listed in the links section. Clicking on a commit link would open the code in compare mode against the previous commit so you could see what changes were made. Of course, this only works if you are looking at work items from web access.

Git-TF also has some other uses, and one of those is the ability to take a folder from TFVC and convert it into a git repository with full history. That is what I am going to cover in this post. There are some rules to this that I would like to lay down here as best practises, as you don't want to just take a whole TFVC repository and turn it into one big git repository; that is just not going to work. One of the things to get your head around with git is that repositories need to be small and should stay small; remember that you are not getting latest when you clone a repository, you are getting the whole thing, which includes all the history.

Install Git-TF

One of the easiest ways to install Git-TF on a windows machine is via Chocolatey since it will automatically wire up the PATH for you.

choco install git-tf -y

No Chocolatey, or you just don't want to use this package management tool? You can follow the manual instructions on CodePlex: https://gittf.codeplex.com/

Clean up your Branches

If you have been a client of mine or ever heard me talk about TFS, you will certainly have heard me recommending one collection and one TFS project. You would also have heard me talk about minimizing the use of branches to only when you need them. If you have branches going all over the place and code that has never found its way back to main, you are going to want to clean this up, as we are only going to clone main for one of these solutions into a git repository. One of the things that is very different about the git-enhanced TFS is that a single TFS project can contain many git repositories. In fact, starting from TFS 2015 Update 1 you can have centralized version control (TFVC) and multiple git repositories in the same TFS project, which totally eliminates the need to create a new TFS project just to hold the git repositories. We can move the code with full history into a git repo in the same project we are pulling from.

In the example we are pulling into the git repository we are doing this at the solution level, as that is what most people using Visual Studio have been doing for decades. The git ideal, however, would be to go even smaller, to a single project per repository, and stitch the dependencies for all the other projects together through package management tools like NuGet. That is out of scope for this posting, but I will delve into it in a future post.

Clone

Now that we have a nice clean branch from which to create your git repository, it is time to run the clone command from the git-tf tool. From the command line, make a nice clean directory and then change into it, as this is where the clone will appear. Note: if you don't use the --deep switch you will just get the latest tip and not the full history.

mkdir C:\git\MySolutionName
cd c:\git\MySolutionName
git-tf clone https://myaccount.visualstudio.com/DefaultCollection $/MyBigProject/MyMainBranch --deep

You will then be prompted for your credentials (Alt credentials if using visualstudio.com). Once accepted, the download will begin and could take some time depending on the length of your changeset history or size of your repository.

Prep and Cleanup

Now that you have an exact replica of your team project branch as a local git repository, it's time to clean up some files and add some others to make things a bit more git friendly.

  • Remove the TFS source control bindings from the solution (a quick cleanup script follows this list). You could have done this from within Visual Studio, but it's just as easy to do it manually. Simply remove all the *.vssscc files and make a small edit to your .sln file, removing the GlobalSection(TeamFoundationVersionControl) ... EndGlobalSection block in your favorite text editor.
  • Add a .gitignore file. It's likely your Visual Studio project or solution will have some files you won't want in your repository (packages, obj, etc.) once your solution is built. A nearly complete way to start is by copying everything from the standard VisualStudio.gitignore file into your own repository. This will ensure the build-generated files, packages, and even your ReSharper cache folder will not be committed into your new repo. As you can imagine, if all you used to sling your code was Visual Studio, that would be that. However, with so much of our work now moving to more hybrid models, where we might use several different tools for different parts of the application, trying to manage this .gitignore file by hand can get pretty complicated. Recently I came across an online tool at https://www.gitignore.io/ where you pick the OS, IDEs or programming language and it will generate the .gitignore file for you.
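
For the *.vssscc cleanup mentioned in the first bullet, a minimal PowerShell sketch run from the root of the cloned folder looks like this (the .sln edit still has to be done by hand):

# Delete every TFS source control binding file in the repository
Get-ChildItem -Recurse -Filter *.vssscc | Remove-Item -Force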

Commit and Push

Now that we have a local git repository, it is time to commit the files, add the remote (back to TFS), and push the new branch (master) up to TFS so the rest of my team can clone it and continue to contribute to source that has the full history of every check-in made before we converted it to git. From the root, add and commit any new files, as there may have been some changes from the previous Prep and Cleanup step.

git add .
git commit -a -m "initial commit after conversion"

We need a git repository on TFS to push this local repository to. So, from TFS, in the project where you want this new repository:

Create a new Repository
  1. Click on the Code tab
  2. Click on the repository dropdown
  3. Click on the New Repository big "+" sign
Name your Repository
  1. Make sure the type is Git
  2. Give it a name
  3. Click on the Create button
Useful Git Information

The resulting page gives you all the information that you need to finish off the migration process.

  1. The first command adds the remote address to your local repository so that it knows where to push to.
  2. The second command pushes your local repository to the new remote one (see the sketch below).
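
For reference, those two commands generally look like the following, where the remote URL is just a placeholder for whatever your new repository page actually shows:

git remote add origin https://myaccount.visualstudio.com/DefaultCollection/MyBigProject/_git/MyNewRepo
git push -u origin master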

That’s it! Project published with all history intact.

A New Start on an Old Blog

It has been quite a while since I posted my last blog, so today I thought I would bring you up to speed on what I have been doing with this site. The last time I did a post like this was back in June of 2008. Back then I talked about the transition from CityDesk to Microsoft Content Management System, which eventually was merged into SharePoint, and from there we changed the blog into DotNetNuke.

Since that time we had not created any new content, but we did move the material to BlogEngine.NET, which really is a great tool but not the way I wanted to work. I really do not want a content management system for my blog; I don't want pages that are rendered dynamically with the content pulled from a database. What I really want are static pages, with the content for those pages stored and built the same way that I build all my software: in version control.

Before I move on and tell you more about my new blog workflow, I thought I would share a picture from my backyard. The tree on the other side of the fence is usually green; it does not change colors every fall, but this year the weather has been cooler than usual. So yes, we sometimes do get fall colors in California, and here is the proof.

Hexo

Hexo is a static page generator that takes simple markdown and turns it into static HTML pages. This means I can deploy the output anywhere from a build, generated just like a regular ALM build, because all the pieces are in source control. It fully embraces git and is a GitHub open source project. I thought that moving my blog to Hexo would help me in two ways: besides giving me the output that I am really looking for, it also works as a teaching tool on how the new build system that is part of TFS 2015 fully embraces technologies outside of .NET and the Visual Studio family. From here I check my new posts into source control, which triggers a build that puts the output into a drop folder, which is then deployed to my web site hosted on Azure.

As of this post I am using FTP in a PowerShell script which is used to deploy the web site which is not ideal. I am working on creating an MSDeploy package that can then be deployed directly onto the Azure website that is hosting this blog.

The Work Flow

The process begins when I want to start a new blog post. Because my git repositories are available to me from almost any computer that I am working with, I go to the local workspace of my blog git repository, check out the dev branch, and at the command line enter the following command:

hexo new Post "A New Start on an Old Blog"

This will place a new .md file in the _posts folder with the same name as the title, but with the spaces replaced by hyphens ("-"). After that I like to open the folder at the root of my blog workspace in Visual Studio Code. The thing that I like about using Visual Studio Code as my editor is that it understands simple markdown and gives me a pretty good preview as I am working, and if my screen is wide enough I can even have one half of the screen for typing the raw markdown and the other half to see what it looks like.

The other thing that I like about this editor is that it understands and talks git, which means I can edit my files and save them, and Visual Studio Code will inform me that I have uncommitted changes so I can stage them, commit them to my local repository, and push them to my remote git repository. Above, you may have noticed that before I began this process I checked out the dev branch. I do not write my new posts in the master branch, because I have a continuous integration trigger on the build server that is watching for anything checked into master on the remote git repository. Because I might start a post on one machine and finish it on another, I need some way to keep everything in sync, and that is what I use the dev branch for. Once I am happy with the post I merge the changes from dev into master, and that begins the build process.
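
The merge itself is nothing fancy; from the blog workspace it is usually just the following (the branch names match the ones described above):

git checkout master
git merge dev
git push origin master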

Publishing the Post

As mentioned, publishing is just a matter of merging the dev branch into master, which starts the build process. The build is really just another Hexo command run against my source; it generates all the static pages, JavaScript, images and so on and puts them into a public folder.

hexo generate

It is the content of this folder that becomes my drop artifacts. Because Release Manager also has a CI trigger, after the build has been successful it begins a release pipeline to get this drop onto my web site. My goal is to get this wrapped up into an MSDeploy package that can be deployed directly onto my Azure web site; I am still working on that and will provide a more detailed post on what I needed to do to make it happen. In the meantime, I need to make sure that my test virtual machine is up and running in Azure, as one of the first things this release pipeline does is copy the contents of the drop onto that machine. Then it calls a Coded UI test, which really is not testing anything; it runs my PowerShell script that FTPs the pages to my Azure web site. The script needs to run as a user, and the easiest way to do that without doing it manually is to have the Coded UI test run it to completion.

Summary

So there you have it: I have my blog in source control, so I have no dependency on a database, and all the code to generate the web site along with my content pages is in source control, which makes it really easy if I ever need to move to a different site or location, or rebuild after a really bad crash. As an ALM guy I really like this approach. What would be even better is a pre-production staging site where I could go over the site and give it a last and final approval before it goes live to the public.

Database Unit Testing from the Beginning

The concept of unit testing for a database, and really this means a database project, still seems like a wild idea. Of course, I am still surprised how many development shops use their production database as their source of truth, which it shouldn't be, but that's because they do not have their database in source control. In order to take you down the road and explore some of the benefits of being able to run unit tests on your database, I need to get us all caught up on how to create a database project, as this is where the magic happens.

Creating a Database Project

You need Visual Studio 2010 Premium or higher to create a database project. One of the options available to us is to reverse engineer an existing database, and that is what we are going to do in these first steps. I have installed the sample database called AdventureWorks, which is available as a free download from the CodePlex site.

Create a Project

From Visual Studio you will want to create a new project and select the SQL Server 2008 Wizard, which can be found under the SQL Server node in the Database category. Give it a name (I called mine AdventureWorks) and a location on your hard drive where you want the project to live.

A wizard will pop up and take you through a number of pages; just accept the defaults until you get to the Import Database Schema page, as importing the AdventureWorks database is something we do want to do.

New Project Wizard

Make sure you check Import existing schema, and then you will likely want to click on the New Connection button; unless you have made a previous connection to the database, the connection string won't be found in the dropdown.

Connection Properties

If you have connected to databases in the past this dialog box should be very familiar to you. Basically, we need to say where the SQL Server is. In this case it is on my local machine and is the default instance. Another common server name is localhost\SQLExpress, as that is the named instance SQL Express creates when it is installed. After you fill in the server instance, the dropdown of database names will be populated, and from there you should be able to find the AdventureWorks database. I also like to click on the Test Connection button just to confirm that there aren't any connectivity issues. Click OK and we are ready to move on.

Click Next and continue through the wizard pages, accepting the defaults. On the last page click Finish. This is where this Visual Studio wizard really does its work, as it creates the project and does a complete reverse engineering of the database. The end result is a Visual Studio SQL database project that represents the database in code, which is suitable for checking into source control, capable of deploying changes made to the project, able to compare changes between versions, and much, much more.

Let's get to Unit Testing

When you have the database project selected (as in, I have physically clicked on it so that it has focus), you will see a number of toolbar buttons appear. We want to click on the one called Schema View.

Solution Explorer

This brings up another little window in the same area as the Solution and Team Explorer area of Visual Studio called the Schema View.

Schema View

From this view you will want to expand Schemas, then HumanResources, then Programmability, then Stored Procedures, and finally right click on uspUpdateEmployeePersonalInfo and choose Create Unit Tests…

The next step will let you create a skeleton unit test for this stored procedure and, if you don't already have a test project, give you the option to create one in the language of your choice.

Create Unit Tests

You will find that when this window opens you can choose more than just the one stored procedure selected in the previous step, but yours is the only one that is checked. If you did want to have more than one stored procedure in the same class file you could pick them here as well. Then set the project name, or select an existing test project, and give it a decent class name; I named mine HumanResourceUnitTests.cs. After you click OK it will build all the pieces: the test project and a default UnitTest.cs file that we don't need, and everything starts to look like a typical unit test until the following dialog pops up.

Project DatabaseUnitTests Configuration

Now, in order to run unit tests against the database you need a connection to the database. In the first part of this dialog you should be able to select a connection to the same database you used to create the database project. You will notice that the dialog also has an optional secondary data connection to validate unit tests. In this sample we will not need it, but in a real world application you may, so let me explain that scenario. When an application is built with a database connection, that connection string typically has just enough rights to run the stored procedures and nothing else. In those cases you will want to use that connection string when running the stored procedure you are testing, but that same connection string will not have the rights to inspect the database to see whether the results are valid, especially in a scenario where you want to check whether the right values got inserted or deleted. That is where the secondary data connection comes in: it is a connection with higher rights that can look at those values directly in the tables.

After you have clicked the OK button Visual Studio will display a skeleton of a unit test to test this stored procedure.

Testing Stored Procedure

In theory we have a unit test that we could run, but the results would be inconclusive, because although the stored procedure is being run, the test is really just exercising it rather than truly testing it, as in giving it some values to insert and checking whether those values come back.

We are going to replace the unit test body here with the following code snippet. I have it all in one piece so you can easily grab it, but afterwards I will break the code down so you can see what is going on. It is very similar to the skeleton we were given, but we give it some distinct values.

-- Prepare the test data (expected results)
DECLARE @EmployeeId int
SELECT TOP 1 @EmployeeId = EmployeeId
FROM HumanResources.Employee
-- Wrap it in a transaction to return us to a clean state after the test
BEGIN TRANSACTION
-- Call the target code
EXEC HumanResources.uspUpdateEmployeePersonalInfo
    @EmployeeId, '987654321', '3/2/1987', 'S', 'F'
-- Select the results to verify
SELECT NationalIdNumber, BirthDate, MaritalStatus, Gender
FROM HumanResources.Employee
WHERE EmployeeId = @EmployeeId
ROLLBACK TRANSACTION

The first part of this code captures the EmployeeId that we want to update, which is what the DECLARE statement is for. In the next statement we just want to grab an existing EmployeeId from the Employee table, and because we really don't care which one it gives us, we use the TOP 1 clause. At this point our declared variable @EmployeeId holds that value.

Note: I have found that there can be a breaking change here depending on which version of the AdventureWorks database you have, as some will have the column named EmployeeId and others will have it named BusinessEntityID. To find out which one you have, go back to the Schema View of the project and expand Schemas, HumanResources and Tables. Find the Employee table and expand Columns; the column in question is the first one right there.

Schema View

Because the stored procedure will make changes to the data in the table, and we may not want to actually commit those changes (we just want to test them), we surround the next pieces with a transaction, and after we have collected our validation values we can roll it back.

After beginning the transaction we call the update stored procedure and pass in some specific data. Next we call a SELECT statement to get those values from the table for the EmployeeId we just passed in. Finally, we roll the whole transaction back so that we do not actually change the database and can run this test over and over.

Before we can actually run this test we need to make some changes to the Test Conditions portion of the unit test. First you will want to remove the existing entry shown there by clicking on the Delete Test button.

Test Conditions: Data Checksum

After you have removed the existing test condition, we can add one or more new ones to verify the results. Select Scalar Value from the dropdown control and click on the "+" button.

Test Conditions: Scalar Value

On the scalarValueCondition1 line that this action creates, right click on this line and choose Properties, which will display the properties window. Update the following information:

  • Name: VerifyNationalId
  • Column number: 1
  • Enabled: True
  • Expected value: 987654321
  • Null expected: False
  • ResultSet: 1
  • Row number: 1
Properties

What is really happening here is that we are going to look at the first column of the result set and see if it matches the NationalId that we sent to the stored procedure; NationalIdNumber is the first column returned in the SELECT statement.

We are now ready to run the unit test and see it pass. Typically in a unit test you could right click anywhere in the test method and one of the context choices would be to run the test. However, what we have been working on so far is the design surface of the database unit test, which is why we were able to write SQL statements for our tests. To see or get to the actual code page you need to go back to the HumanResourceUnitTests.cs file, right click on it and choose View Code.

Solution Explorer / View Code

As an alternative you could select the file in the solution and press the F7 key; either way you will then be looking at the actual test, and if you right click anywhere within that method you will see that one of your choices is Run Tests. Do that now and you will see the test result go from Pending to, hopefully, Passed. If you do get a failure with an exception you will want to check the column names from this table. Some of the names changed and even the way they are spelled, and it appears to be case sensitive as well. Like I mentioned before, there seems to be more than one version of this sample database out there and they did make some changes.

Test Results

Now that we have a working test, I always like to make a change to prove that it is working by making it fail. So to make it fail, change the Expected value to 9876543210000. I basically just added 4 zeros to the end of the expected result. Re-run the test and it should fail, and if we look at the Test Result Details we can see that the expected result did not match, which is exactly what we expected.

Take out the padded zeros and run the test again so that it passes once more. This is just a step to keep our tests correct.

Associate the Unit Test to a Test Case

The following section is going to need TFS 2010 in order to complete this part of the walkthrough, and it is even better if you have Lab Management set up to complete the development, build, deploy, test cycle on these database unit tests.

Right now, the unit test that we created can be run from Visual Studio just like we have done in this walkthrough. You can also make these tests part of an automated build: if this test project is included in the solution for an automated build in Team Foundation Server (TFS), it will automatically run and be part of the build report. However, on its own this does not update the Test Plan / Test Suite / Test Case that the QA people are using to manage their tests, but it can.

In Visual Studio, create a new Work Item of type Test Case and call it “uspUpdateEmployeePersonalInfo Stored Procedure Test”. We won’t fill anything in the steps section as we are going to go straight to automation with this Test Case. Click on the Associated Automation tab and click on the ellipsis “…” button.

Choose Test

This will bring up the Choose Test dialog box and because we have just this one test open in Visual Studio we will see the exact test that we want associated with this test case. Click on the OK button.

We now have a test case that can be used to test the stored procedure in automation. When this test case is run in automation it will update the test results, which are reported to the Test Plan and Suite that this test case is part of.

Database Schema Compare where Visual Studio goes that extra mile

There are a number of good database tools out there for doing database schema comparisons. I have used different ones over the years, initially to help me write SQL differencing scripts that I could use when deploying database changes. If your background is anything like mine, where you were mainly a Visual Basic or C# developer, you could get by with working on SQL as long as you could write directly to the database, but there were challenges with being able to script everything out using just SQL. Today that is not nearly as much of an issue for me and I can do quite a bit with scripting and could build those scripts by hand, but why?

WHAT… Visual Studio for database development?

Over the years I have tried to promote SQL development being done in Visual Studio. I made a great case: SQL is code just as much as my VB, C#, F# or whatever your favorite language of choice happens to be, and it should be protected in source control. Makes sense, but it is a really hard sell. Productivity goes downhill and errors begin to happen because this is not how the SQL teams are used to working on databases. It was an easier sell for me because I loved working in Visual Studio and found the SQL tools not to be as intuitive to me. I have never been able to figure out how I could step through a stored procedure in Query Analyzer or Management Studio, but I have always been able to do this with stored procedures that I wrote from within Visual Studio, and that was long before the data editions of Visual Studio.

Ever since the release of the Data Dude, or its official name back then, Visual Studio Team Edition for Database Professionals, this was what I did and I tried to convince others that this is what we should be doing. It was never an easy sell: yes, the schema comparison was nice, but our SQL professionals already had all kinds of comparison tools for SQL and it would be too hard for them to work this way. They wanted to be able to make changes in a database and see the results of those changes, not have to deploy it somewhere first.

So here is a quick summary of what we have figured out so far. Schema comparison from one database to another: nothing new, your SQL department probably has a number of these tools and uses them to generate their change scripts. How is Visual Studio schema comparison better than what I already have, and how is it going to go the extra mile? That, my friend, starts with the database project, which does a reverse engineering of sorts of what you have in the database and scripts the whole thing out into source files that you can check into source control and compare the changes just like you do with any other source code.

Now once you have a database project you are able to do a schema comparison not just between two databases but also between a database and this project. The extra mile is that I can even go so far as to deploy the differences to your test and production databases. It gets even better, but before I tell you the best part let's go through the actual steps that you would take to create this initial database project.

Create the Database Project

I am going to walk you through the very simple steps that it takes to build a database project for the AdventureWorks database. For this you will need Visual Studio 2010 Premium edition or higher.

We start by creating a new project and select the “SQL Server 2008 Database Project” template from under the Database - SQL Server project types. Give it a name and set the location. I called mine AdventureWorks because I am going to work with the sample AdventureWorks database. Click OK.
Create a Project
Visual Studio will build a default database project for you, but it is not connected to anything so there is no actual database scripted out here. We are going to do that now. Right click on the database project and a context sensitive menu will pop up with Import Database Objects and Settings…; click on that now.
Import Objects
This opens the Import Database Wizard dialog box. If you have already connected to this database from Visual Studio then you will find an entry in the dropdown control Source database connection. If not then you will create a new connection by clicking on the New Connection… button.
Import Wizard
So if you have a ready made connection in the dropdown, choose it and skip the next screen and step as I am going to build my new connection.
New Connection
Because my AdventureWorks database is on my local machine I went with that, but this could be a database that is anywhere on your network; this will all just work provided you have the necessary permissions to connect to it in this way. Clicking on OK takes us back to the previous screen with the Source database connection filled in.

Everyone, click Start, which will bring up the following screen and start to import and script out the database. When it is all done click the Finish button. Congratulations, you have built a Database Project.
Import Wizard Finishing
You can expand the solution under Schema Objects, Schemas; I am showing the dbo schema and it has 3 table scripts. All the objects of this database are scripted out here. You can look at these files right here in Visual Studio.
Solution Explorer
However you might want to use the Schema View tool for looking at the objects which gives you a more Management Studio type of view.
Toolbar
Just click on the icon in the Solution Explorer that has the popup caption that says Database Schema Viewer.
Schema View

Updating the Visual Studio Project from the database

In the past these were the steps that I would show to demonstrate how to get a database project scripted out, and now that it is code it is really easy to get into version control because of the really tight integration with Visual Studio. My thought after that was that this is the tool you should be working in to evolve the database. Work in Visual Studio and deploy the changes to the database.

Light Bulb Moment

Just recently I discovered that the SQL developer does not really need to leave their favorite tool for working on the database, Management Studio. That’s right, the new workflow is to continue to make your changes in your local or isolated databases so that you can see firsthand how the database changes are going to work. When you are ready to get those changes into version control you use Visual Studio and the Database Schema comparison.
Switch Control
So here we see what I always thought was the normal workflow, with the project on the left and the database that we are going to deploy to on the right. If instead we are working on the database and we want to push those changes to the project, then switch the source and target around.
Options
Now when you click the OK button you will get a schema comparison just like you always did, but when you deploy it will check out the project and update the source files. This will then give you complete history, and the files will move through the system from branch to branch with a perfect snapshot of what the database looked like for a specific build.
Options

  1. Click this button to get the party started.
  2. This comment will disappear in the project source file.
  3. The source will be checked out during the update.

The Recap of what we have just seen.

This totally changes my opinion on how to go forward with this great tool. The fact that we can update the project source from the database was probably always there, but if I missed the fact that this was possible then I am sure many others might have missed it as well. It makes SQL development smooth and safe (all schema scripts under version control) and ready for the next step: smooth and automated deployment.

The Two Opposite IT Agendas

The Problem

I have been in the Information Technology (IT) field for a long time and most of that time has been spent in the development space. Each environment was different from the previous one, and in some cases there were huge gaps in the level of technology that was used and made available in each location. Why this was has stumped me for a long time. You go to a user group meeting, and whenever the speaker was talking about a current technology and conducted a quick survey around the room of how many were using it, the results would be very mixed. There would even be lots of users at these meetings who were still using technologies that were over 10 years old, with no sign of moving forward.

Why is this happening?

Good question, and after thinking about this for a long, long time I think I have the answer. It really depends on which aspect of the IT spectrum is controlling the development side. I think it has become quite acceptable to break up the whole IT space into two major groups, the IT Professionals and the Software Developers. When I first moved to California I worked for a company that was a software developer and they did software development for their clients on a time and materials basis. There was no question as to which wing of IT influenced the company with regards to technology and hardware. The developers in this case were golden: if you needed special tools, you got them. Need a faster computer, more monitors, special machines to run beta versions of the latest OS and development tools? You got it. You were on the bleeding edge and the IT Professionals were there to help you slow down the bleeding when it got out of control. However, this company was always current, got the best jobs, and in a lot of cases when we deployed our final product to the client's production systems, that would be the point at which their IT department would be forced to update their systems and move to the new round of technology.

Welcome to the Other Side

What happens when the shoe is on the other foot and the IT Professionals have the influence? They have a different agenda, as their number one goal is stability, security, and easy deployment. However, this does come with a cost, especially when the company is heavily relying on technology to push its products. I have heard this from many different companies all within this same situation: that they are not a technology company, the technology is just the tool or one of the tools to deliver their products. When this group controls the budget and the overall technical agenda, things like this will happen. Moving forward will be very, very slow and the focus will be purely on deployment issues and keeping those costs under control, not on the cost of development, which can get very expensive as the technology changes and you are not able to take advantage of those opportunities. Over time, the customers that receive your products will start to question your future because you are not able to move fast enough for them; they are going to expect you to be out there and to have fully tested these waters before they move there, and if you're not, it is not going to look favorable in their eyes. This is especially true if you have some competition in your space that is adopting the new technologies faster than your company is.

There is another side to this that I have witnessed which bothers me even more. The decision to move all enterprise applications to the web was never from the development side of IT but came from the IT Professionals. Remember, one of their big agendas is easy deployment, and as a result they have made software development so expensive that we have been forced to move as much as we can to offshore development companies. In most cases this hasn’t even come close to a cost savings for the applications, as you never seem to get what you thought you were designing, and it is not always the fault of the offshore companies; they are giving you exactly what you asked for. In many cases it is the wrong technology for the solution. Most high volume enterprise applications were desktop applications with a lot of state (the data that you are working with). The web is stateless, and over the years many things have been developed to make the web appear stateful, but it is not. I have seen projects spend 100 times more time and money implementing features on the web to make it appear and feel like a desktop application. Now to be clear, this argument started when deployment of desktop applications was hard, as in the early days there was no easy way to keep all the desktops up to date except to physically go around and update them as patches and newer versions became available. However, in the last 5 years or more that has totally changed: with things like ClickOnce technology you can implement full automatic updates and license enforcement just as easily as with web sites, and maybe even better. We all know there are new tricks every day to spoof a web site and get around whatever new security measure it has.

What’s the Answer

I don’t really have the answer, but I do have a few thoughts that I have been mulling over, and I would love to hear about some other ideas that you might have. My thought is that you should separate the IT push down two paths, and this advice is for the companies that are currently being held back by the stabilizing IT Professionals. I would even go so far as to keep the developers on a separate network from the rest of the employees; this will keep the bleeding on the other side of the fence and not affect your sales and support staff, who are there to sell and support products that are stable and need to keep them that way. This will allow the developers to improve and expand their technical expertise and provide better and more relevant solutions for your customers, internal and external.

Goal Tracking

Since about the beginning of the year I have been thinking about goal tracking. I compiled a long list of technologies that I wanted to learn, experiment with and maybe even build some projects with using some of these newly learned skills. Nothing quite like turning something new into something useful. I find that this technique provides me with the best understanding of how and why a technology would be used in one scenario over another. My list of goals for this year is long, and some have a dependency on a previous goal being completed before I even begin, like reading the book before I begin my project based on the technology.

However, I suffer from the illness of getting bored, needing a break from a certain goal, and then forgetting to get back to it at the appropriate time. It’s like I need something to help me track what my goals are and an easy-to-see, KPI-like indicator to show me which goals I need to pay attention to right now or I might miss my target date altogether. Before I go much farther I should define KPI:

KPI’s are Key Performance Indicators which help organizations achieve organizational goals through the definition and measurement of progress. The key indicators are agreed upon by an organization and are indicators which can be measured that will reflect success factors. The KPIs selected must reflect the organization’s goals, they must be key to its success, and they must be measurable. Key performance indicators usually are long-term considerations for an organization.
This is what I need for my goals, some way to track my progress. So I went to work on it. Storing the goals was easy: give it a name, a target date for completing the goal, and some exit criteria. Okay, so I had to think a little bit about that last one, but I needed something that would tell me when the goal was completed. So I started with an easy one, reading a book. I know I have completed that goal when my current page is equal to the total number of pages in the book. Sorry, I just jumped into some logic that a computer program could use to determine if it was completed. So in the case of tracking the progress for my book reading goals I could keep track of what page I was on each day and how long I spent reading. The last one is going to help in figuring out how fast I am reading this book, and I can check this against how much time I have set aside to work on my goals.

Okay, then from that information I could recalculate my goal target date by looking at the rate at which I am going and working out when I should actually reach my goal. If the new target date is earlier than I had planned then the KPI should show me a green light. If it is later than this, it should show me a yellow (warning) light if I am just slipping but still have time in my allocated time frame to meet this goal. Of course the KPI would be a red light if there was no way that I could meet this goal. This one is harder to determine: it certainly comes up once I have gone past the target date, but knowing that I have run out of time before this date is hard to calculate, especially if I have a lot of goals. There are things that I cannot really know, like sacrificing one goal so that I can put all my effort toward another goal. If you are behind I will show the warning light, and if we missed the goal I will show the red light…but at least I have something that I can track for my goals.
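To make that light logic a little more concrete, here is a rough sketch of the projection for a book reading goal. All of the class and property names here are made up for the example; the real program tracks more than this, but the idea is simply to project a finish date from the pace so far and compare it to the target date.

// Illustrative sketch of the KPI projection for a reading goal.
// Every name here is invented for the example; the point is the projection:
// estimate a finish date from the pace so far and compare it to the target.
using System;

enum GoalLight { Green, Yellow, Red }

class ReadingGoal
{
    public int TotalPages;
    public int CurrentPage;
    public DateTime Started;
    public DateTime TargetDate;

    public GoalLight Evaluate(DateTime today)
    {
        // Past the target date and not finished: the goal has been missed.
        if (today > TargetDate && CurrentPage < TotalPages)
            return GoalLight.Red;

        double daysSoFar = Math.Max((today - Started).TotalDays, 1);
        double pagesPerDay = CurrentPage / daysSoFar;          // pace so far
        if (pagesPerDay <= 0)
            return GoalLight.Yellow;                           // no progress yet, warn

        double daysRemaining = (TotalPages - CurrentPage) / pagesPerDay;
        DateTime projectedFinish = today.AddDays(daysRemaining);

        // Projected to finish by the target date: green. Slipping past it: warn.
        return projectedFinish <= TargetDate ? GoalLight.Green : GoalLight.Yellow;
    }
}

class Program
{
    static void Main()
    {
        var book = new ReadingGoal
        {
            TotalPages = 400,
            CurrentPage = 120,
            Started = new DateTime(2010, 1, 1),
            TargetDate = new DateTime(2010, 3, 1)
        };
        Console.WriteLine(book.Evaluate(new DateTime(2010, 1, 25)));   // Yellow at this pace
    }
}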

There were a couple of other types of goals that I thought of tracking. The projects that I build are not based on any page count, but I thought I would set a goal for the amount of time I would spend on the goal by a certain target date and track it that way. This also should work quite well, and I can easily see when I am on and off track, but the red can again only be shown if I have already missed the mark. Then just to throw something different into the goal tracking mix, I thought about setting up some goals for my weight. This one is really different in that there is no time element here at all. Instead we are tracking the weight on a regular basis and letting the goal tracker estimate, from the rate at which I am losing or gaining weight, when I should be able to reach my ideal weight. I think that the KPIs are going to start showing me problem indicators when I am moving in the opposite direction to what I was planning. Whether this is going to work or not I am not sure; for instance, for the past week I have had no change in any direction and the goal tracker is still saying I will reach my ideal weight within the date I have targeted….time will tell.
KPI of a few goals
Anyway, as you can probably tell by now I have actually started to put together a goal tracking program. It is still rough and most certainly is a beta product.

Good luck with your goals. I am finding that I am a lot more focused on my goals and staying on track than when I wasn’t tracking my goals, so I think it is working.

Who's the Boss

For most of our lives we have a constant struggle to try to be the boss of ourselves. Does it ever happen? When you grew up as a child I am sure you have memories similar to mine where at some point in your life you were struggling to gain control of your own life. Could not wait to move out of the house and get out on your own, so that you could be the boss of you. How’s that going for you? Are you the boss of you yet?

It is not long after you move out that you find you have a whole bunch of new people that have stepped in to take over the boss position. You have to pay rent so you have to answer to your landlord as he becomes a certain boss and when you can’t pay the rent, he fires you by way of eviction. Then in order to make some money to pay the rent you have to find a job and that usually leads to a boss and might even have a complete entourage of bosses. You know what I mean, there is your manager, the assistant manager, then there is the shift manager and none of them are shy at giving you orders and commands. Come to think of it, maybe living at home wasn’t so bad after all.

Self Employed

Then one day you wake up with this fantastic idea. If you start your own company you could become your own boss. Then you would truly have reached your goal of being the boss of yourself. Then as the company grows you could end up being the boss of lots of other people. Yeah, this is what you are going to do to be the boss of you. Well, it is never quite like that, because if you want to remain in business you will need to listen to your customers. You need to provide them with a service that they will value and will want to pay you for. One of the very reasons why a small company has a good chance of competing against a larger competitor is the ability to deliver better quality customer service. Wait a minute! If I have to listen to my customers and do what they want me to do, then they are my new boss? That’s right, and as your business grows and you attract more and more customers and you want to continue to be successful, the number of people you need to listen to increases as well. You could just ignore the requests of your customers, and we all know how that is going to affect your newly formed company. Remember the last time you were fed up with a business that was ignoring your needs? Why, you found a new place of business that was more willing to listen to your needs and even provide you with that service you were looking for.

Going Public

Okay, let’s take the self employed business a step farther. Let's say that you do make a real honest effort in your new business, listen to your customers and follow through on many of their suggestions to improve the products and services that you provide. You make improvements in your goods and services for the benefit of your customers. The company grows and grows, you are the boss of hundreds maybe even thousands of employees, your customers love your products, so you decide to take the business to the next level and go public. You know, trade shares of your company on the stock market. This was of course in an effort to reach more customers and to expand to other geographical areas, expand your horizons and get your products and services to your new deserving customers. This changes things. All of a sudden you are hearing from a new group of people that want your attention, and they keep talking about steady growth, making more profit and driving the share price up. These are your investors, and they sound like a new set of bosses to me. They don’t seem to share the same passion that you had for pleasing your customers; in fact they don’t seem to care about them other than to make them pay more money, and anything to show growth and make the stock price go up. This can be a problem: if you grow too fast and the profits are a little slow at coming in, you are going to be under pressure to increase profits somewhere and cut expenses in other areas. Both of these decisions could greatly affect the fine customer service that you have been able to provide in the past.

Politics

Let us talk about one more area in this topic of bosses, and that is in the area of politics. I think that sometimes politicians forget that their positions are a role reversal of sorts. Politicians work for the people; I think the correct term is servant of the people. Yes, even the highest ranked position in the country, that of the president, is really a servant of the people, and we expect them to serve the needs of the citizens and make decisions that are for the good of the people, not themselves and the many friends that they have made to get to this fine position of servanthood.

Conclusion

I think that having a boss and having to answer to someone is a fact of life. You can even get to be the president of the United States only to answer to the people, who are your bosses. So, in conclusion, be the best boss that you can be to the people who look to you for leadership, and treat those in a boss position over you with respect. If they do not deserve your respect, then maybe it is time to leave and find a new and better boss. There are a number of them out there; I know, I have worked for a few of them myself.

C#.net or VB.net

Starting from the Beginning

I have been a Visual Basic developer for over ten years now. It was not the first language that got me excited about programming, though. No, that would have been Clipper. I accidentally fell into Clipper much the same way that Visual Basic started as an experiment for me.

It was sometime around 1987 or 1988 when I was working as the accountant and network administrator for my family’s car dealership. I was running into limitations in the current accounting software we were using and I knew that a change would be needed. However, I had great difficulty in finding software that had the functionality and flexibility that was required. I started looking into some source code solutions and found one that was originally written in dBase III and was Clipper ready. This was my start to serious programming as this source code did not work and it took me two months to work through the code learning to work with Clipper as I went along.

Clipper 4, which was a very procedural language, worked well for me. There were no surprises: the user could only process data in the exact steps designed in the software. When Clipper 5 was released I upgraded, which exposed me to some new and unfamiliar aspects of programming. Clipper introduced three “built in objects” and soon several “third party vendors” started coming out with add-ons for Clipper that allowed the creation of your own objects and classes. You should realize that by this time I was becoming quite the Clipper programmer. I was designing new features for our accounting software and building complementary add-ons. I was experimenting with Windows but was never able to implement it at the dealership until Windows 3.1 was released. However, we were running our accounting software in a DOS box through Windows. Nothing special here, but it worked. Nantucket, the company that owned Clipper, made a lot of promises that there would be a Windows version of Clipper coming out soon.

In the meantime, I read an article by a fellow Clipper guru that suggested looking into Visual Basic to get a better handle on working with objects. So I got a copy of Visual Basic 1.0 for exactly that purpose: to get a better understanding of how objects worked and to be able to actually write a Windows application. I was still thinking that Clipper was going to be my main programming language, with Visual Basic as a tool to learn about objects. This was similar to the original purpose of Pascal, which was to teach structured programming. Anyway, I was having a great time with Visual Basic, reading a book or two on it and building some really simple programs.

On a flight to Comdex one year, I was sitting next to two guys from Sequiter Software. They noticed that I was reading one of my Visual Basic books and asked me how I liked Visual Basic. I told them I was enjoying it very much but I was really a Clipper programmer and it would be great if Visual Basic could access a database. Well, what followed was some very interesting information that they shared with me: Sequiter makes a product called CodeBase, which is a very fast engine that reads and writes dBase, Fox, and Clipper data, and I could use it in Visual Basic by declaring the procedures of its API from within Visual Basic. Well, that was it: from then on Visual Basic became a very important tool in what would later become my programming career. Remember, I was an accountant that couldn’t find good software that worked the way I needed it to work for me.

So I guess you could say that I have been building database applications in Visual Basic since version 1.0. Just so that you are clear on who I really am, I did not take any shortcuts to change careers. I did take a number of correspondence courses where I studied Assembler, Basic, Pascal, C, and COBOL and then went to college, where I graduated with distinction. I am constantly learning and doing the best I can to keep up on the latest technology and am always interested in creating better software products.

My Take on C#.net vs. VB.net

C# (sharp) is the language that was designed for Dot Net and was used to build much of the Framework. It shares the same first name as its big brother C++, but it doesn’t really feel like any C++ that I have ever taken out for a spin. C# really is a cross between the best of Visual Basic, the best of C++, and some elements of Java which make it the perfect language for Dot Net.

Microsoft made an announcement that the new version of Visual Basic would finally be corrected to match the standards of other languages. There have always been inconsistencies between VB and other languages when it came to the number of elements that were created in an array and the value of true. Hey, somewhere along the way this now very powerful language got itself all screwed up, but then VB.net is also the first version where the forms are not being handled by the Ruby engine; the operating system will actually be in charge of what we see on the screen. Anyway, at the time of this announcement I was all excited about the future of VB, and as for C#, what the heck was that anyway? It did not even seem too important to me.

Then sometime in the early part of 2002, Microsoft made the announcement that I think surprised almost every serious VB programmer. They were going to re-tool Visual Basic.net to make it more compatible with Visual Basic 6. Well, this was enough for me to consider looking into C# more seriously.

Before we go on I thought I would take a moment to talk about the bad rap that VB (as a language) has been given. VB has been attacked over the years by the programming community for not being a very serious language. The language started off as a very simple tool to build desktop applications for Windows. Over the years VB has become a very powerful programming language, and probably its curse has been the ease with which you can build Windows applications. I say a curse, because anyone who has worked with this language has been able to build a working program quickly and easily. On the other hand, you would not typically take a language like C++ straight out of the box and start writing a program without at least taking a number of courses and reading a few books on the subject. You might give up and leave that sort of programming to the professionals. But none of that is involved when programming in VB since it is such a forgiving language. However, even with VB’s simplicity there is a lot more to writing a good solid VB program than just a piece of code that works. There is choosing good programming practices and constantly refining your skill so that you write the most efficient and easily maintainable code that you can possibly write. The bad rap really belongs to the VB programmers that have picked up some very bad habits along the way and have failed to refine their skill to build elegant and well managed code. There are bad C++ programmers out there too, just not as many.

I have spent a fair amount of time in a variety of C and C++ environments and have found that it was just too much work to build a Windows desktop application. Visual Basic makes a lot of sense since that is what it was designed for. C, on the other hand, has its roots in building drivers and operating systems, and I do not typically engage in those kinds of projects.

Making the Change

I am leaning towards programming in C# instead of VB, but not just because I am upset with the decisions that were made on the future of VB: I need something much more powerful than that to justify my reasons.

One of the things I really like in C# is the new inline XML style comments, which are not available in VB.net. With these I am able to produce some very clear comments in my code, right where things happen, and from them produce a technical document. Many times in the past I have had to duplicate some of this effort and then update my documents when the code went through some changes. Not anymore: it is all in one place, and as I make changes my documents are also updated.
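To show what I mean, here is a tiny example. The class, method and numbers are just placeholders invented for the illustration; compiling with the /doc switch turns these comments into an XML file that documentation can be generated from.

// A small example of the inline XML comments. The names and the method body
// are placeholders invented for this illustration.
public class Invoicing
{
    /// <summary>
    /// Calculates the invoice total for a client, including tax.
    /// </summary>
    /// <param name="subTotal">The sum of all billable line items.</param>
    /// <param name="taxRate">The tax rate to apply, for example 0.0775 for 7.75 percent.</param>
    /// <returns>The sub total with the tax added.</returns>
    public decimal CalculateTotal(decimal subTotal, decimal taxRate)
    {
        return subTotal * (1 + taxRate);
    }
}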

Secondly, Dot Net is built around this new concept of Namespaces, which is the way that Microsoft is getting around the DLL hell issues that have plagued us for years. I have some interesting stories to tell on that subject, but they will need to wait for another time. In C#, the Namespace is exposed right out there directly in your face. You can adjust the Namespace in VB.net, but you need to do this through the IDE and it is just not in your face. I have done some work with multiple projects that support other projects and I just think it is a lot cleaner when I have total control over the Namespace.
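For example, the namespace sits right at the top of every C# source file, so splitting a solution across several supporting projects while keeping the naming consistent is completely under my control. The names below are made up just for the illustration.

// The namespace is declared right in the source file, in your face,
// rather than being a project property you adjust through the IDE.
// These names are invented for the example.
namespace TheWebWeWeave.TimeTracker.Data
{
    public class TimeEntry
    {
        public System.DateTime Start;
        public System.DateTime Stop;
        public string Description;
    }
}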

Thirdly, there is the learning curve. VB.net is not just an upgrade from Visual Basic. It is a new lifestyle, and you really need to get into that lifestyle if you want to take advantage of the Framework and go beyond what we have coded and designed in the past. This Framework is wonderful and I am almost tempted to say that the only limitation is the lack of our imagination. Since I started getting into C#, I have had to take each step with a fresh new approach. When I was playing with the early betas I found that I was constantly doing a comparison with the previous version of VB. I think my return on investment with C# is a whole lot better than if I had gone the VB.net route. Something to keep in mind: Dot Net programming is for new programs, not for porting over a working application. Microsoft has made sure that the Dot Net Framework supports our old code, so why touch something that works fine. Instead it is for creating new applications and rebuilds of programs that lacked functionality that was difficult or impossible to implement in the past.

It is true that all languages compile to a common Intermediate Language (IL) file that is used by the Common Language Runtime (CLR), but there are some advantages to using C# over VB.

Conclusions

In a survey of 633 development managers conducted by BZ Research during June of 2002 the results show that VB.net is being used in 21.8% of the current projects being developed while C#.net is being used in 18.8%. Over the next 12 months these same development managers are planning future development, where VB.net will be used in 43.4% of the projects and C#.net will rise to 36.7%. Pretty close.

These numbers would support what I have heard through the grapevine, that many of the VB shops are making plans to go the VB.net route: I think in part it is because of the upgrade path that has been followed in the past. They are not taking into consideration that VB.net is not the same VB that they have worked with over the past decade or so. I am sure that many of these shops will eventually start to move towards C#, since this is the language of the Framework and clearly the best way to start over. I think the training could be more cost effective than attempts to retrain them with VB.net.

My prediction is that the growth of C#.net will be even greater than what is being portrayed in this survey, which shows them pretty close to a draw. As for me and my house, we are going to skip dabbling in VB.net and go straight to the future of good programming, C#.

The Power of Time Tracking

I love to keep track of time. It could be related to my love of data and all the information that I can extract from it: how much fuel does my car use, how much time do I spend on stuff each week, how many hours am I away from my family.

Actually my attraction to time tracking goes much deeper than that. I never planned to start my own business when I made the career change into software development so many years ago. I had seen how it worked being self employed as I grew up in a family owned business and my father was in charge of the service department of an auto / agriculture dealership. One of the things I noticed was that as the billing was being discussed from the details obtained from the back of the work order, the customers would be requesting a discount because they did not see enough detail that explained why the job took as long as it did. This was not something I was looking forward to experiencing myself being self employed.

I do not like to negotiate. I am not a good negotiator. Instead I would like to have my work speak for itself, and it has for many, many years. So when I did end up being self employed and presented my client with an invoice, I also had the opportunity to present them with a detailed accounting of what that invoice represented. Sometimes it read just like a book, but I never had to explain my work. There was never any negotiation about the amount of the invoice that I was presenting and I always got paid on time. Mission accomplished.

That was my original motivation to really get into time tracking, and I have built various pieces of internal software (after all, I am a software developer) that have helped me to maintain my goals. Since then I have discovered other benefits to keeping track of time, and for the rest of this article I want to detail these benefits.

Deliverable

When working on a long project many times the client would only pay on some sort of a deliverable. We can all agree that we don’t want to pay for something we have not yet received. I did discover that I could use my detailed time tracking entries as a deliverable since the client was able to receive something from me that they could use to justify approval of payment.

When I was in college an instructor said that we should be paid for each stage of the software life cycle. At first I had trouble with the concept because at that time I only pictured the deliverable as the final finished product. However, my perception covered only a small part of the overall picture of developing software for a client. I soon discovered that sometimes all that I ended up doing for the client was research and some feasibility studies, or working on the specifications, and I never got a chance to work on the actual software. Also, I have worked on projects where the specification phase went on for almost a year, collecting rules and processes and writing about the software that would be built. I needed regular pay periods. The only way to do this and justify my demands was to provide a deliverable that ended up being the details from my time tracking efforts.

Resolve Disputes

Anytime you need to resolve a dispute, details play a very important role. I worked on a project quite some time ago that did lead to some legal confrontation. My recorded project details were used to justify the amount of time that was spent on the project and why a deposit should not be returned. It is important to keep track of how and where you spend your time.

I know that this is also a good rule when dealing with tax situations. Revenue Canada and the IRS want details, and many legal actions have been taken against individuals simply because they could not produce enough details. I can hardly remember what I did a few hours ago, let alone days, months or years ago, but if I have details in front of me, it sure helps jog my memory.

How Much Does it Cost?

We all have ideas as to how long we work on a task or even a project. Sometimes I can hardly believe that a certain task took me as long as it did. It felt like five minutes but in reality it took me four hours. When you start tracking details of your day with real time, you get very clear evidence of the time that was really consumed.

In my own anal way I not only record the time and detail that I can bill my clients, I also keep track of how long I spend on the road, reading my technical books and papers, and the internal projects. This altogether tells me how much time this is costing me away from my family. It helps me keep my life in perspective and allows me to make better decisions. If I have only allowed for a few hours of time to spend with the family this week, maybe it is time to go and have that game of handball with my stepdaughter or go for a nice leisurely stroll with my wife or give my stepson a call just to see how he is doing.

This is a very monetary world and time costs us money. Sometimes this is good since it helps us to provide for our families, and sometimes the cost is great because it takes us away from them. However, if you don’t track it, how are you going to know how well you are balancing your life? What is the cost?

This raises another thought from another life. When I was in the financial world (okay, I was an accountant for the family business), I would have people asking me for advice on how they could construct a budget. My advice is always the same: you first need to be extremely anal about tracking your spending, because before you can start budgeting you need to know where your money is going. That is how I see time tracking.

The Plug

Over the years I have built several applications that tracked time, first with an Access database, then a VB front end and SQL backend. The problem with both of these was the synchronization to a central data store. For the last three-and-a-half years I worked for a company that built a time tracking system on the web. I thought that I would continue working for them until I decided to move on or retired, so I stopped thinking about building a better time tracker. Their software allowed me to keep track of all the things I had become accustomed to tracking.

I regretted this decision when the company got sold to a medium-sized corporation that was acquiring software companies across the country. The head office insisted that we use a multi-million dollar time tracking system which was, in my eyes, worthless. I could not maintain the level of detail to which I had grown accustomed. None of us could see the point of this since it did not produce detailed invoices for our clients. Now red flags were flying for our clients; they all loved to know the details of what we were doing for them. Anyway, the company closed its doors and I found myself again being an independent software developer and needing some form of time tracking system, so I built Time Tracker. The product is still evolving and I may release a commercial version of the product some time in the future.
Time I spent away from my family in September
If you would like to know more about my Time Tracker program and/or are interested in finding out how you can implement Time Tracker in your facility, send email to: TimeTracker@TheWebWeWeave.net

My name is Donald L. Schulz and I like to keep track of my time.

No no he's not dead, he's, he's restin'!

The blog grim reaper

Just in case you did not get the Monty Python reference, here is a cartoon courtesy of Blaugh which gets right to the point. I have been away from writing anything for my web site for a very, very long time. Where have I been? Where do I begin? I have been quite busy developing software for a number of clients that I cannot name because of non-disclosure clauses in my contracts. I never did understand how disclosing who the actual client is in a public forum would be such a big deal, but I can only describe what I have been doing over the last four years as having worked in the hospitality, mortgage, back to hospitality and now property cost industry. While that has been keeping me busy with all the work that these projects generate, Mary and I have continued to develop and support our AGP Maker program.

What has brought my sudden attention back to this site and providing more articles and input on what I have been working on? I guess it is the change in where and how I am hosting the site and a change in the content program used to update the site. This is the third time that I have changed the content management system for this site. I started out using City Desk because of an article that I came across. I don’t remember which magazine, but the article was about content management systems. The article quoted a couple of paragraphs from Joel Spolsky and he was talking about City Desk. That article took me to Joel’s web site, Joel on Software, which was the original inspiration for starting my site. I liked Joel’s style and how he looked at things. This was the very first blog that I followed faithfully. Even today, when Joel writes something, I just want to find the time to sit down and read it. I guess part of it is that Joel does not write every day or even every week. When he has something he wants to say and share he does, and that has always been my goal. Speak when I have something to say, not just to generate content.

Next I switched the content management system over to Microsoft Content Management System (MCMS). This was a great learning experience and I was able to leverage my dot net skills. It provided me with the ability to edit the pages from wherever I was, at home or on the road, which was a problem that I had with City Desk, as I had to make changes within the City Desk program and then push out all the files to their final location. The future of MCMS is uncertain as Microsoft is moving that technology into the latest release of Microsoft Office SharePoint Server (MOSS). That was not the reason I am leaving this platform though, as it is a really great product; it was just impossible to host these sites anywhere but on my publicly exposed web server. I really want to move all the public web sites to be hosted outside our office so that they can be expanded and extended and provide a much more stable environment. Our office is not set up for hosting, and right now our hosting needs are not all that great, but things may very well change over time.

This brings us to the third content management system that I am switching to. I am moving all our content over to DotNetNuke. Once again that leverages my skills as a dot net developer, with the extra benefit that GoDaddy supports this natively in their free hosting program. This continues to give me the flexibility to update the pages wherever I am, gives me a better opportunity to get my pages indexed by the search engines, and allows readers to link to direct pages and articles. When I had this site hosted in my office you could not link directly to an article unless you knew the name of the page, which was all hidden from view. This may even lead to some articles that I might do about working with DotNetNuke.

Over the next couple of weeks and months I want to take on some technical issues like authentication and how I have taken advantage of windows authentication but used it in a way that gives some of the greater flexibility that forms authentication provides. The way that in house internal programs are built and consumed in other companies that I have worked at just bugs me to death. There is no reason why I need to log onto every single tool that I use if I have already logged onto the computer that I am using. There is no need for this and I have developed some techniques that I will share on how I make this work the way it should.

I would also like to cover some topics that I have never covered before. These would just be opinion pieces, so take them with a grain of salt, but I do want to cover some political and economic issues that have been bugging me. If nothing else they will make you think, because I am sure my views are going to be a little different than what you might have been expecting. I do at times have a unique view on the way I see things working and how I think that they should be working. Keep in mind these are opinions, not necessarily based on a lot of facts.

I would like to talk about my conversion, and ongoing conversion, of all my web sites going the way of DotNetNuke. This is a great content management tool that gives me a lot of flexibility, as the skins are easy to create; now if only I was better at graphics I could really do something with this tool, but overall the experience is quite pleasant. Modules that are not provided in the DotNetNuke installation package I can create quite easily; I am, after all, a software developer. Plus the fact that GoDaddy, which has been my domain name registrar for years, is now providing some free hosting (for the price of a domain registration) and they fully support DotNetNuke as a hosting package.

What is The Web We Weave, Inc?

A little more than a year ago, Mary and I had a discussion about the many projects that we both have had in the backs of our minds and would like to make a reality. We thought a corporation would be good in that it could provide us with the legal entity and a single structure in which we could register our copyrights and trademarks. It might also provide us with some tax relief and if things went well, could very well represent a major part of our future.

Well, these things are all well and good until you find that our vast array of projects are just that, vast and varied, and it is hard to find a simple way to describe what our company “The Web We Weave, Inc.” is all about. I guess the best way to present this is to go through our current list and how we came about these projects, as they all lead to a very interesting story, at least to Mary and me.

Fuel Consumption

I have been interested in fuel consumption and fuel consumption tracking since about as far back as 1990 or 1991. Somewhere around this time, I was spending a lot of my time traveling between Canada and the USA. I would often travel with my laptop and liked to keep an eye on my fuel consumption. There were a few calculator type programs that could do this when in Canada and we used them at the family-owned dealership to verify fuel consumption for our customers. The problem that I had was when I traveled to the US; I had to do all these extra conversions from a US gallon into liters in order for this other program to do the calculation. I figured that there had to be a better way to do this that would take the various measurements and do the conversions on the fly in the background.

As a result of this, I built an application that I later distributed as shareware called “win-Fuel”, and it was written as a desktop application in Visual Basic 1.0. It did exactly what I had in mind: radio buttons that switched between miles and kilometers, and a second set of radio buttons to switch between Liters, Imperial Gallons, and US Gallons. The display of the results showed the consumption calculations in four formats: Miles per US Gallon, Miles per Imperial Gallon, Miles per Liter, and Liters per 100 Kilometers (the official metric calculation). I did have some success with the application, but more important to me than the success was that it provided me with the experience of building a simple application and taking it through all the stages, from concept, to development, to distribution. I also built a context sensitive “help” and a “setup” program to complete the project.
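For anyone curious about the arithmetic behind those radio buttons, here is a rough sketch of the conversions. The conversion constants are the standard ones; everything else, including the sample trip numbers, is made up for the example.

// Rough sketch of the unit conversions behind a fuel consumption calculator.
// The constants are standard conversion factors; the sample trip is invented.
using System;

class FuelMath
{
    const double LitersPerUsGallon = 3.785411784;
    const double LitersPerImperialGallon = 4.54609;
    const double KilometersPerMile = 1.609344;

    static void Main()
    {
        double distanceKm = 500;   // distance driven on this tank
        double fuelLiters = 38;    // fuel used to cover it

        double miles = distanceKm / KilometersPerMile;
        double usGallons = fuelLiters / LitersPerUsGallon;
        double imperialGallons = fuelLiters / LitersPerImperialGallon;

        Console.WriteLine("Miles per US gallon:       {0:F1}", miles / usGallons);
        Console.WriteLine("Miles per Imperial gallon: {0:F1}", miles / imperialGallons);
        Console.WriteLine("Miles per liter:           {0:F1}", miles / fuelLiters);
        Console.WriteLine("Liters per 100 km:         {0:F1}", fuelLiters / distanceKm * 100);
    }
}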

I had always planned to go back to this little application and add the ability to store the information in a database of some sort and use that data to compare past trips with the current calculation. However, that never did happen. But now, with the availability of the Internet and the ease with which what was once deemed a desktop application can be turned into a web application backed by an even larger data source, this is a possibility. It is my belief that the collected data could be quite valuable to governments, environmental groups and car manufacturers, as well as individuals. Up to the present, fuel consumption has always been measured under laboratory conditions and real data has never been taken into consideration.

I see this site as a free service to the users of the application, which will allow me to offer the collective value of the data to a market. I also see this as a central place where fuel consumption can be discussed in the form of forums and discussions, as well as articles on fuel consumption such as tips on getting better gas mileage and regular vs. premium grade gasoline.

Stock Market Analysis

Mary has always had an interest in the Stock Market and has been very good at doing the research and analysis necessary to make good stock market picks. From this interest she has wanted to be able to share some of this research with others in the rather unique way of only looking at and grading stocks that can be bought directly from companies; the official term is DRIPs (Dividend Reinvestment Plans).

One of the many interesting things that I learned about this as Mary has been explaining it to me is that stock bought through a stock broker is not held in your name but instead is held for you under the broker’s name. Buying directly from companies enables you to buy stock in your own name, have the right to vote at shareholders meetings, avoid stock broker fees and a whole lot of other benefits that Mary can tell you about on her new site. Sounds pretty good, doesn’t it?

Anyway, the model for this site is a choice of either a monthly or yearly subscription or a per use basis. The idea is that during the valid period that the client has chosen, they can go into the site, pick out the various stocks that are graded for their safety and growth potential, and follow links, if available, to the web sites of the various companies that do indeed sell their stock directly to you. The data itself would come from a variety of sources, and you can be assured that Mary has already validated that all the stocks listed on her site are available as DRIPs.

Graduation and Awards Program

Then in the last year that Mary was working in the Activities Department of a High School, she came across an interesting opportunity. It seems that this High School would type up a number of lists for the Awards Night and the Graduation Ceremony. One list would be a list of all the Awards, the Presenters and the students that received that award. This would be used during their Awards Night Presentation. Then on Graduation there would be a program that listed all the students and the awards that they received during the Awards Night. Besides this being a lot of repetitive work, it was always easy to have many errors and a lot of time was spent proofreading the lists.

Mary knew there had to be a better solution to this and brought the problem home, where we designed an Access application in which Students, Presenters, and Awards were entered only once, with links to each other, and the result being two Access reports that could either be exported into Word for some further formatting adjustments or printed right from Access to be given to the printer to produce these two programs. This proved to be quite successful, and the next year this High School had Mary come back to provide some instruction and make some minor tweaks to the application.

It did not take long for Mary and me to realize that if this High School had such a large task in front of them when it came to the end of the school year, so would every other High School and Middle School in the country. However, the Access application has a bit of a quick and dirty feel to it and would need to be re-engineered into a more commercial product, especially if we were going to support it. Plans are in motion to build a complete application even though a decision on the final name has not been reached yet. Our plan is to build and then market this application starting with all the schools here in Southern California.

Custom Software Development

Several months ago, Mary was questioning me about the future of our little corporation. We had the company in place, although we had not opened any bank accounts or done anything further than shelling out the legal fees involved in getting ourselves set up. There was no point in going too fast, as all the projects that we had been talking about so far would cost us money and time to develop, and we really were not sitting on top of any real surpluses of either. Still wanting to do all these projects, we would just go about it more slowly. There was a need to upgrade our entire network, as the servers and workstations were old and having great difficulty keeping up with the technology that we were using for development and storage.

Then near the end of January 2002, everything changed. The company that I had been working for over the last 3.5 years closed its doors. Now I was officially unemployed, and The Web We Weave, Inc. was not able to support us at this time, or so we thought. The last client that I worked for through my employer was attempting to put a number of the team members on their project back together, offering them short term contracts.

I was one of the lucky ones, and as it turns out it worked out quite well in that we put the contract in the name of The Web We Weave, Inc. and now we are doing custom software development.

Beyond Technical

Besides all of these more or less technical projects, Mary and I have interests in writing. Mary has a number of ideas for books that she would like to write and I still have a desire to do things with music. We both would like to write articles for various publications as we both feel we have something to say and would like to share it with the rest of the world.

What is “The Web We Weave, Inc.”?

Well, we are back to this question, what is “The Web We Weave, Inc.”? And you are probably as confused about this as we are. We made a list of words that we thought described the nature of our company, but still no simple mission statement.
  • Internet

  • web-FUEL
  • Nothing but Direct
  • Education
  • Fuel Consumption
  • Software Development
  • Web Development
  • AGP Maker
  • Research
  • Business Intelligence
  • E-commerce
  • Consulting
  • Organizing
  • Tracking
  • Money Making
  • Profitable
  • Service Provider
  • Hardware
  • Relaxed
  • Confident
  • Professional
  • Cutting Edge Technology

We will keep working on it, to find that perfect mission statement and motto that clears up exactly what “The Web We Weave, Inc.” is all about. It just shows that I was wrong; you need more than just a cool name.