So I’m used to a certain behavior for the Home and End keys on my Logitech Edge keyboard. Jumping to the end or the beginning of the page is not the behavior I expect, so after being stung by this sudden jump of the entire screen a few times, I finally decided to do something about it and found this lovely blog post describing what to do.

I already had the remapping software installed; the trouble was that it was tough figuring out which setting to set.

My girlfriend has had a lot of discussions at work lately regarding metrics. So today, when I used Graph Your Inbox, I noticed something funny.

I graphed the communications with my girlfriend available in my inbox, and I noticed that since August the monthly sum of our communications has halved. How can this be interpreted?

Well, it could be interpreted as something good: we’re talking more offline than online. It could be interpreted as something bad: we’re not communicating anymore.

Given just this metric as input we can make no confident assumptions about our communications. Even if we can calculate this metric it has no value, because even accounting for the trend we can still say nothing about how well we are communicating.

Now there is this idea that metrics are a good way to drive a company, and for some tasks that may even be true. But in software development metrics can often say whatever you want them to! Or they focus on the wrong results.

So let’s look at examples of when measuring can be good and when it can be bad.

At RemoteX we are measuring things as well. Among other things we’re using EQATEC to collect information about our applications. By collecting information about application crashes, we indirectly measure how often our applications crash. More importantly, we also collect the information required to correct the crashes. As of the next release we’re measuring the time it takes for our offline-capable applications to synchronize their data. And in the future we plan to measure how long it takes users, on average, to finish certain predefined tasks.

What we are measuring is the quality of the product; we’re not measuring the process. The software is the end goal, not how many test cases we’re running, and not how many bugs we missed in the previous release. What is important is to collect information in such a way that you can answer the following questions:

  • How is the quality of our software right now?
  • If something happens that lowers the quality of our product, will our collected information assist us to quickly fix the problem?
  • Are the metrics collected measuring the product?

It is important to focus on the product. The product is what you want to improve; the process is secondary. If you measure the process you will “improve” the process, but the result of the process, the product, will suffer because of it. The resulting product is the business value you want to measure, not the process.

Since I wrote this I’ve started using Parallels and stopped keeping my PowerShell scripts on a file share, but the information is useful nonetheless.

I set up my Mac to run all my .NET development in a VirtualBox environment. I share my source-code directory with the guest OS using VirtualBox’s own file shares.

This way I can have all my source-code in one location.

Working with code on a file share has its own challenges. Recently I had issues with running PowerShell scripts from the directory. It turns out I had to add vboxsrv (the name of the host OS in VirtualBox) to Internet Explorer’s Trusted sites.

I followed these instructions and now at least PowerShell works on the file share.
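For reference, adding a host to a zone this way boils down to a registry entry under IE’s ZoneMap. The fragment below is my understanding of what those instructions do (zone 2 is Trusted sites); verify against the instructions before importing it:

```
Windows Registry Editor Version 5.00

; Map file:// access to the vboxsrv host into the Trusted sites zone (2)
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\vboxsrv]
"file"=dword:00000002
```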

So I did two talks at the web5 conference in March, and I’ve done the Continuous Delivery talk at DevSum as well, and at GTUG. Also, at Internetdagarna I held the offline-capable HTML5 web app talk.

But the slides haven’t been published, so here they are!

Naturally I had to remove the animation slides and some slides which didn’t make sense without the animations, but the rest are there.

The iPad is a nice tool for surfing the web from your sofa. However, if you’re like me and sometimes want to check out how certain things are done, it’s missing a vital feature: a view-source functionality.

So I googled for it and found the following gist, which allows you to view the source of any webpage you’d like.

There are also some more sophisticated ones, like Snoopy, which comes with complete installation instructions.

At RemoteX we’ve been working for a while with different controls that handle links between objects. We’ve called it the HyperlinkTextbox, and it’s one of the most expensive pieces of code in RemoteX Applications. Why?

It was really expensive to implement well in WinForms for various reasons, and the lookup scenario for the links has been littered with various obscure bugs.

Recently, when we started to move the major parts of our Windows client to WPF, we wrote a new one.

The new one mimics the autocomplete box commonly found on webpages and in various apps. Since we knew that this control might become an issue, it received a bit more thought and care this time around.

One of the obstacles for the AutoCompleteBox was its behaviour regarding keyboard navigation.

We wanted the user to start typing, see a dropdown of choices and use the arrow keys to select the choice that interested them. Doing this using events turned out to become a big hairy ball of mud to handle. So we sat down and described the different states of the control using a state machine.

It looks something like this:

State machine of the autocomplete flow

Next we implemented the state machine using the state pattern.

Now, instead of handling the behaviour of the autocomplete box in the event handlers, the event handlers issue actions on our state machine, which in turn reacts appropriately. Using this approach we get a structured way to handle the behaviour of the control while it’s used.

Some of the states are transitional in nature: as soon as we enter the state it immediately moves on to the next one, changing some aspect of the control in the process. One such example is the set_value state, which makes sure that the control hides the different popups and sets the references accordingly, landing in the has_value state, where it waits for user input.
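To make the idea concrete, here is a minimal sketch of the state pattern described above, in Python for brevity (the real control is WPF/C#). The state names set_value and has_value come from the post; the Empty and DropdownOpen states, and all the helper methods, are assumptions for the sake of the example:

```python
class State:
    """Base state: ignore every action by default."""
    def type_text(self, box, text): return self
    def press_down(self, box): return self
    def press_enter(self, box): return self

class Empty(State):
    def type_text(self, box, text):
        box.query = text
        box.show_popup()
        return DropdownOpen()

class DropdownOpen(State):
    def press_down(self, box):
        box.highlight_next()
        return self
    def press_enter(self, box):
        # set_value is transitional: it does its work and moves on immediately.
        return SetValue().enter(box)

class SetValue(State):
    def enter(self, box):
        box.hide_popup()                    # hide the popups ...
        box.value = box.highlighted_item()  # ... and set the reference
        return HasValue()                   # land in has_value

class HasValue(State):
    """Waits for further user input."""

class AutoCompleteBox:
    def __init__(self, items):
        self.items, self.query, self.value = items, "", None
        self.popup_visible, self._index = False, 0
        self.state = Empty()
    # The event handlers just issue actions on the state machine:
    def on_text(self, text):  self.state = self.state.type_text(self, text)
    def on_down(self):        self.state = self.state.press_down(self)
    def on_enter(self):       self.state = self.state.press_enter(self)
    # Helpers the states call:
    def show_popup(self):     self.popup_visible = True
    def hide_popup(self):     self.popup_visible = False
    def highlight_next(self): self._index += 1
    def highlighted_item(self):
        matches = [i for i in self.items if self.query in i]
        return matches[self._index % len(matches)]
```

Typing "ap", pressing down and then enter on `AutoCompleteBox(["apple", "apricot"])` walks Empty → DropdownOpen → set_value → has_value and leaves "apricot" as the value, without a single `if` about control state in the event handlers.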

I was asked this question during a talk on Continuous Delivery I held at GTUG Stockholm (which apparently is changing its name): “Why not use Puppet for installing the application software?”

I was a bit perplexed, because we’re not installing our software using Puppet, and to me, Puppet is for bootstrapping servers. But after thinking it through later that night, I finally came up with the answer I’m most comfortable with.

You can use whatever tool you like to automate your installation process, as long as it is deterministic. If you want to use Puppet, do so. In our homogeneous Windows environment we could do all we needed using PowerShell. It has its limits, and I am certain that should the need arise to address those limits, we will do so.

The core takeaway is this: stick to the principles. You can use any tool you’d like as long as the following principles hold for the tool:

  • Its output is deterministic
  • You can version control the configurations

I’m sure there are more principles, but these were just off the top of my head.

We’ve been using MSTest for quite some time, but now it’s time for a change. First off, the reasons for changing unit testing framework.

We’re now building with Hudson/Jenkins, which NUnit integrates better with. According to rumours, NUnit is also faster than MSTest.

In the long run hopefully we’ll benefit from being able to use inheritance in testing as well.

In order to speed up the switch to NUnit I made the following bash script, which changes most code attributes, project references and so on.

I uploaded it as a gist so everyone can use it, should they happen to need it.

Switch to NUnit bash script
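The gist itself is a bash script; as a rough illustration of what such a conversion does, here is the same idea sketched in Python. The attribute mappings are the standard MSTest-to-NUnit (2.x era) ones; the exact set of replacements in the real script may differ:

```python
import os
import re

# MSTest attribute / using directive -> NUnit equivalent
REPLACEMENTS = [
    (r"using Microsoft\.VisualStudio\.TestTools\.UnitTesting;",
     "using NUnit.Framework;"),
    (r"\[TestClass\]",      "[TestFixture]"),
    (r"\[TestMethod\]",     "[Test]"),
    (r"\[TestInitialize\]", "[SetUp]"),
    (r"\[TestCleanup\]",    "[TearDown]"),
]

def convert_source(text):
    """Apply all MSTest -> NUnit replacements to one file's contents."""
    for pattern, replacement in REPLACEMENTS:
        text = re.sub(pattern, replacement, text)
    return text

def convert_tree(root):
    """Convert every .cs file below root, in place."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".cs"):
                path = os.path.join(dirpath, name)
                with open(path) as f:
                    source = f.read()
                with open(path, "w") as f:
                    f.write(convert_source(source))
```

The project-reference part of the switch (swapping the MSTest assembly reference for nunit.framework in the .csproj files) works the same way, just with a different pattern against the project files.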

Ever asked your devs to troubleshoot the system? Say login doesn’t work; where does a dev start looking? Ever considered reducing the cost of such a troubleshooting scenario?

When a system is down, the cost can be quite high: customers can’t work, or testing can’t proceed, and devs get interrupted.

So why not have the developer list all the steps it takes to troubleshoot the system, and then create a script that checks those steps? Now troubleshooting is quicker and helps you narrow down issues fast.

Also, you now have a smoke test that can run on newly set-up machines.

Any incident that occurs in production should be recorded as a troubleshooting script. “Can the webserver be accessed?” “Can the webserver access the database?” “Are there current errors in the logs?” Each of these questions is quite easy to turn into a program or script that checks whether the system is available.
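Such a script can stay very small. Here is a sketch in Python of the three checks above; the hostnames, ports and log path are made-up examples, and each new production incident would add another entry to the list:

```python
import socket

def can_connect(host, port, timeout=2.0):
    """Is anything listening on host:port? (webserver, database, ...)"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def recent_errors(log_path, marker="ERROR"):
    """Does the log contain the error marker?"""
    try:
        with open(log_path) as f:
            return any(marker in line for line in f)
    except FileNotFoundError:
        return False

def run_checks(checks):
    """Run each named check and collect a pass/fail report."""
    return {name: check() for name, check in checks}

# Example wiring (hypothetical hosts, ports and log path):
CHECKS = [
    ("webserver reachable", lambda: can_connect("web01", 80)),
    ("database reachable",  lambda: can_connect("db01", 1433)),
    ("no errors in log",    lambda: not recent_errors("/var/log/app.log")),
]
```

Running `run_checks(CHECKS)` gives you the pass/fail report in one shot, which is exactly what you want a build server to do for you.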

Next, load these into Hudson; now, with the click of a button, a Hudson server can run the smoke tests and, if issues are detected, send a report to the appropriate ops, dev or devops person. Or even better, take some steps to try to solve the problem.

Why don’t people invest in reducing their troubleshooting time?