Role-based thinking – an experiment

12 November, 2014

I have previously written about the importance of thinking in terms of functional behavior rather than merely in terms of capability. That is, why we are doing something is at least as important as how we are doing it. Now, maybe I’ve been overdoing the whole Scrum thing lately (not likely), but I’ve started thinking more in terms of roles when I formulate tasks for my team. My old way of thinking (which I still frequently fall back into) goes something like this:

We have a lot of technical debt. We need to remove unused dependencies from our build. We need to improve our test coverage. We need to improve our release process. We need to be able to onboard new team members more easily. We need a more robust continuous integration environment. We need to make troubleshooting easier for production support. We need to fully externalize our application configuration. We need to minimize the impact of poor change management on the part of our upstream dependencies….

There’s nothing wrong with working through a task list such as this, but it does have shortcomings. First, it (often) leads us to prioritize personal preference over team productivity. When each team member pursues these types of activities in isolation, the outcome is not always generalizable and so the impact is dampened.

Second, it is difficult to justify these activities from a business perspective. The way I formulated the descriptions, there are no real quantifiable outcomes. While the objectives all sound good, we have to ask, what is the opportunity cost here? What are the tangible benefits to the team members who aren’t engineers? What are the benefits to the stakeholder the team is trying to serve?

Finally, these sorts of to-do lists are often very difficult to estimate against. How do we know when a task is done? Which parts of the task have the highest priority? The answer, typically, is when the person who completes the task is happy with the result. That’s not very well structured, and in fact it smells like an anti-pattern. (Like when engineers write acceptance tests. But I digress.)

So my experiment (which is ongoing, so I don’t have outcomes to report) is to take the concept of a user story and extend it to the various functional roles within our team. These will be “role stories,” but since they can be formulated as typical user stories, I will refer to them as user stories throughout. This concept is actually not unusual, or shouldn’t be, but I’ve only seen hints of this thinking on teams that I’ve worked with. I’m describing it here for my own benefit and because I assume many other teams out there fail to take a structured approach to self-organizing. (Certainly true of most of the teams I have worked on.)

So the first user stories from my example would belong to the build manager: as a build manager, I need to remove unused dependencies from our modules… and so on. By formulating the tasks in this way, we not only improve the quality of the product (which benefits the stakeholder), we also give team members input into the tasks that they are assigned. It is a mechanism for addressing stressors and productivity inhibitors. This improves morale and, done correctly, improves competence.

The next story would be for the quality assurance role. It would go something like this: as the quality assurance owner, I want to improve our test coverage so that defects raised by testers are reduced by at least 50%. Acceptance criteria would be branch coverage of 80% in the following problem modules: … Of course, if the testers continue to raise defects, the issue would need to be revisited. Presumably the team would agree that this story makes sense and would be able to break it down into small tasks (something that should always be done anyway).

The next problem, facilitating team growth, is one that has been a struggle for most teams I’ve worked on. For that reason, I tend to think that it’s a difficult problem to solve. It can often be decomposed into small tasks and the burden shared among multiple members of the team, which I see as an upside. As a team member, I want to assist new team members by reducing the time it takes them to begin being productive without impacting the delivery of the sprint backlog. I’ll know this is possible when we have an accurate deployment diagram, a project overview guide, and a workstation setup guide.

Where this thought experiment starts to get interesting is when we think about maintaining the “product” of these user stories. (Something the product owner really should care about.) So when a backlog shows a story like, as a product user I need to be able to search for widgets, we see that there are implied user stories (role stories!) that exist in parallel. That is, as an engineer I need to create a Maven project that exposes an HTTP-based search API; as a build manager I need to add new projects into our continuous integration environment; and of course, as a test designer, I need to write functional tests against all HTTP-based services using SoapUI. (You do test your code, right?) So the interesting property of this thought experiment is the natural decomposability of user stories into tasks, and those tasks are themselves a sort of user story (role story, in my terminology). Add stories for a technical writer, a deployment manager, and a production support team and you’ve avoided a lot of technical debt.

The other point that must be mentioned, though it is obvious, is that the tasks associated with these stories may be completed by the same person. One person may perform multiple functional roles. For example, a tech lead may often function as an engineer, team growth facilitator, build manager, documentation maintainer, etc., possibly all within the same day.

Functional decomposition, which is really all I’m talking about, helps to narrow the scope of a task in order to make it manageable. Whether we’re writing use cases, user stories, software, or documentation, functional decomposition is an important tool that is often ignored yet very valuable.

Categories: Idea, XP

On Not Being Ignorant

6 November, 2014

One of the more surreal events that have happened to me in my many programming jobs was the time I was contacted about a support issue for a web service that I knew almost nothing about. I don’t remember what that issue was, but I do remember how the situation came about.

I had been working on a web application that displayed search results from a service that was not provided by the application. The search went through another service that had a fairly large database of information, some of which was directly useful to our end users. During development and testing, no performance problems were noted, and even in production things seemed fine when we tested them. As it turned out, performance was terrible under load, which was exactly when our users needed the service.

We opened a support ticket with the team that owned the service, but were not getting quick responses. So I did some research. I found their source code, got a connection into their test database, and was able to determine the kinds of queries that were needed to support the part of the service that we were using. Within about 15 minutes I was able to determine the cause and propose a solution, and I did so on the support ticket that my team had opened.

Somehow, the fact that my name was attached to a viable solution on that ticket led someone to believe that I knew something (about that service anyway). In a sense, I did. In the short time I spent examining the internals of that service, I learned several things. First, that I was not the first person to try to track down a performance problem in that system. Second, I would certainly not be the last person to do so. Third, I realized that the service could never do what it was intended to do without being rewritten. Fourth, I realized that the application ownership and support model in use by this particular corporation was pretty dysfunctional. (Well, I knew that before…)

The reason I started writing about this debacle is that it gave me a way to frame a few thoughts on “not being ignorant.” Because everyone is ignorant about most things (really), it’s worth pointing out that not being ignorant is a very selective thing.

What really bothered me about this dysfunctional situation was an acquiescence to ignorance. Let me cover some examples that are specific to this situation, but keep in mind that they reflect an underlying pattern.

I don’t know why this SQL statement is slow, but I bet if I keep adding indexes eventually I’ll find one that works (and hopefully remember to delete the indexes that don’t have any value).

I want to search this column in a case-insensitive way, so I’ll convert the search value and the column value to the same case in my SELECT statement and assume it will perform well.

I know I need to limit the number of results I return to my users, but all I can think to do is have my ORM give me a list and then send back a slice; that should be good enough.

Want to guess what the pattern is? It’s a pain-minimization strategy. It’s like saying, “being ignorant is painful, so I’ll gain just enough knowledge to make the pain go away (temporarily).” Unfortunately, this strategy tends to produce a lot of repetitive variations of the same situation. Hence, I consider it to be a suboptimal lifestyle choice.
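For contrast, here is a minimal sketch, using JPA, of what the less painful version of the last two examples might look like. The Widget entity, field names, and query are hypothetical rather than taken from the service in question; the point is simply that the case-insensitive match and the result limit both belong in the query, not in application memory.

    import java.util.List;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;
    import javax.persistence.TypedQuery;

    @Entity
    class Widget {
        @Id Long id;
        String name;
    }

    public class WidgetRepository {

        private final EntityManager em;

        public WidgetRepository(EntityManager em) {
            this.em = em;
        }

        // Case-insensitive search with the limit pushed down to the database,
        // instead of fetching everything and slicing the list in memory.
        // To keep the lower(...) comparison fast, the database needs a
        // function-based index on lower(name) or a case-insensitive collation,
        // rather than an ever-growing pile of guessed indexes.
        public List<Widget> findByName(String name, int maxResults) {
            TypedQuery<Widget> query = em.createQuery(
                    "select w from Widget w where lower(w.name) = lower(:name)",
                    Widget.class);
            query.setParameter("name", name);
            query.setMaxResults(maxResults); // becomes LIMIT/ROWNUM in the SQL
            return query.getResultList();
        }
    }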

What were the effects of this strategy on the application I’ve been talking about? Performance issues, obviously: exponential response times under moderate load can never be good. Maintenance costs were another: every performance issue had to be addressed as it was discovered. SLA adherence couldn’t be taken seriously, which in some situations can be a pretty high-stakes risk to take. And as the application evolved, proactive risk management became impossible, which is a big clue that a rewrite is in order.

The opposite of ignorance is knowledge. Finding a better strategy for problem-solving is a valuable piece of knowledge. Don’t underestimate it.

Badgile

27 November, 2013

Badgile, n.

A condition where the product owner continually badgers an agile team for failing to deliver unspecified acceptance criteria. Typically experienced during end-of-sprint demos and retrospectives.

Categories: Rant, XP

Unit tests as proofs

3 April, 2013

Lately I’ve found myself writing what I would consider to be odd unit tests. Odd because they are not written to test my code, or really to test any code that is used in my current project. Rather, these tests have served as proofs of the behavioral characteristics of target deployment environments.

One test I wrote was to demonstrate that a particular filesystem location was not writable from within an app server. This had been preceded by a proof that the user credentials used by the app server process did, when assumed directly, have write access to that location. In this situation, quite strangely, the app server had failed to acquire write permissions. Since the application failures we were seeing appeared to be due to this filesystem permissions issue, no further troubleshooting effort was required on my part, because I had proven that the failure was not in the application layer.
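The test itself was nothing exotic. Here is a sketch of its general shape (JUnit 4, with a hypothetical target directory standing in for the real location): run it from a plain JVM under the app server’s credentials and it proves the permissions themselves are fine; run it deployed inside the app server and it proves whether that process can actually write.

    import static org.junit.Assert.assertTrue;

    import java.io.File;
    import java.io.IOException;
    import org.junit.Test;

    public class FilesystemPermissionsTest {

        // Hypothetical path standing in for the real target location.
        private static final String TARGET_DIR = "/var/app/uploads";

        @Test
        public void targetDirectoryIsWritable() throws IOException {
            File dir = new File(TARGET_DIR);
            assertTrue("missing directory: " + dir, dir.isDirectory());

            // The only convincing proof is an actual write; File.canWrite()
            // does not account for things like security managers or SELinux.
            File probe = File.createTempFile("write-probe", ".tmp", dir);
            assertTrue("could not clean up probe file", probe.delete());
        }
    }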

Another test I wrote demonstrated that certain HTTP headers were not being passed to a Java Servlet container. Additionally, the test showed that the container responded appropriately once those headers were enabled. Why did this test matter? By demonstrating that the application was not defective, I was able to avoid modifying application code to work around an improperly configured environment.
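A heavily simplified sketch of that kind of header test follows. The endpoint and the header name are hypothetical, and it assumes a small diagnostic page that echoes back the headers the container actually received.

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import org.junit.Test;

    public class ForwardedHeaderTest {

        // Hypothetical diagnostic endpoint that echoes received headers.
        private static final String ENDPOINT =
                "http://test-host:8080/app/diagnostics/headers";

        @Test
        public void customHeaderSurvivesTheFrontEnd() throws Exception {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(ENDPOINT).openConnection();
            conn.setRequestProperty("X-Forwarded-User", "probe-user");

            assertEquals(200, conn.getResponseCode());

            StringBuilder body = new StringBuilder();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()));
            try {
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line).append('\n');
                }
            } finally {
                in.close();
            }

            // If the front end (or the container configuration) drops the
            // header, this fails without any application code being involved.
            assertTrue("header did not reach the container",
                    body.toString().contains("X-Forwarded-User"));
        }
    }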

Finally, I wrote a test to prove that certain response characteristics of an OAuth-protected resource were incorrect. This would have been extremely difficult to accomplish solely through runtime testing (e.g., navigation in a browser) because the OAuth calls were not under the control of the browser. By using automated tests, I not only shortened my debugging time but was also able to send my test case to another engineer for verification.
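I can’t reproduce the exact assertions here, but the flavor was something like the following sketch: hit the protected resource without credentials and assert on the challenge the server is supposed to send back. The URL is hypothetical.

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertNotNull;

    import java.net.HttpURLConnection;
    import java.net.URL;
    import org.junit.Test;

    public class ProtectedResourceTest {

        // Hypothetical protected endpoint.
        private static final String RESOURCE = "http://test-host:8080/api/widgets";

        @Test
        public void unauthenticatedRequestGetsAProperChallenge() throws Exception {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(RESOURCE).openConnection();

            // A bearer-token-protected resource is expected to answer an
            // unauthenticated request with 401 and a WWW-Authenticate header;
            // this is the sort of response characteristic a browser alone
            // cannot easily verify.
            assertEquals(401, conn.getResponseCode());
            assertNotNull(conn.getHeaderField("WWW-Authenticate"));
        }
    }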

By writing tests at the start of problem investigations, I am able to see the impact of changes. (It is often the case that an attempt to fix a problem will create a new problem elsewhere.) I am able to repeat my test many times with minimal effort. I am also able to share my tests with others, one benefit being that any mistakes I make are more easily spotted by someone else.

Because of all this, I have lately been thinking of unit tests more along the lines of mathematical proofs, rather than just a way to exercise a set of CPU instructions. Properly constructed, a unit test can serve as a very practical sort of proof, and in my case these proofs spared me from needless, time-wasting activities. By rigorously asserting the conditions required for success or failure, I was able to repeatedly and consistently isolate the causes of the various problems I had encountered. In all of these cases, I would have saved even more time had I started my troubleshooting efforts by writing the test first. Test-driven troubleshooting isn’t always possible, but I do recommend that you consider it, especially if you value your time.

Categories: Uncategorized

Windows 2003’s command interpreter and the Consolas font

19 September, 2012

Windows 2003 does not come with the Consolas font enabled for use by cmd.exe. This is of course very unfortunate, because the only 2 fonts available (a raster font and Lucida Console) are both substandard options. In my previous post I included shortcut files that assume Consolas is available. If you’re stuck with an older version of Windows, the information that follows may be helpful to you.

Copy and paste the following two lines into a command prompt. You will be logged out. When you have logged back in, your command windows will be able to use Consolas.

(reg add "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Console\TrueTypeFont" /v 00 /d Consolas
logoff)

Now you can take full advantage of the shortcuts you downloaded from my previous post.

(This information is based on a blog post by Bill Hill – http://blogs.msdn.com/b/ie/archive/2008/04/22/give-your-eyes-a-treat.aspx.)

Categories: Uncategorized

Beefing up the standard Windows command interpreter

8 August, 2012

While out-of-the-box features for Windows’ command interpreter (cmd.exe) have mildly improved over the years (command completion turned on by default comes to mind), the defaults are still boring. Really, really boring. And kinda ugly, too.

Beyond the boring and the ugly, however, stands the dysfunctional. A tiny scroll buffer means you will quickly and easily lose data. Mouse support is turned off by default. The window is too small by default. The font could be better…

So, to spare you, the reader, the time needed to fix all those settings, I created some shortcuts (.lnk files) that will present you with reasonable defaults. You can download them all in a zipfile, or individually select the ones you’re interested in.

Here’s a screenshot with four examples:
[Screenshot: four examples of colorized cmd.exe windows]


Of course, if you have improvements/suggestions for working with cmd.exe, I’d be very interested in hearing them.

Categories: Microsoft

Hudson+Maven unable to locate javac

7 August, 2012

I recently broke pretty much all of the builds on one of the Hudson servers at work. (Oops!) This server happened to have a few dozen fairly homogeneous Maven+Java builds all building with JDK 5. This is all hosted on a Tomcat instance running on a standalone JRE 6.

All I did was add a JDK. Really. Adding a second JDK to the Hudson configuration broke every single Maven+Java build. This is the (initially very confusing) error message I got:

[ERROR] Unable to locate the Javac Compiler in:
  C:\Java\jre6\..\lib\tools.jar

The reason I added a second JDK to Hudson is because I needed to run builds with JDK 6. This seems pretty mundane. As it turns out, Hudson gets really confused when changing from “one installed JDK” to “more than one installed JDK” and starts looking for the default JDK in JAVA_HOME (aka Java System property java.home). Naturally, this means that Hudson has no obvious way to specify a default JDK. It also means that my builds could not run, because Hudson gets its java.home value from Tomcat, which is running under a JRE, not a JDK.
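To see the distinction Hudson was tripping over, the little program below (not Hudson’s code, just an illustration) prints whether a compiler is reachable from the running JVM. Run it under Tomcat’s standalone JRE and there is no compiler to find; run it under a JDK’s bundled JRE and there is.

    import javax.tools.JavaCompiler;
    import javax.tools.ToolProvider;

    public class WhichJdk {

        public static void main(String[] args) {
            // java.home points at the JRE the current process is running on;
            // for a JDK installation it is typically <jdk>\jre.
            System.out.println("java.home = " + System.getProperty("java.home"));

            // Returns null on a standalone JRE, because the compiler lives in
            // the JDK's tools.jar, which a plain JRE does not ship.
            JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
            System.out.println("compiler available: " + (compiler != null));
        }
    }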

This is an example of poor software design, for three reasons. First, because it leads to unexpected consequences. (All builds now default to a new, and in this case non-existent, JDK.) Second, because builds can be configured to run under a JDK called “(Default)” yet there is no obvious mechanism to specify what that default JDK actually is. Third, and most importantly, because it leads to harmful outcomes (i.e. failed builds).

While I suspect the correct course of action would be to modify Hudson to directly support a “default JDK” feature, there is in fact a workaround. On the configuration page there is a “Global Properties” section with a checkbox entitled “Environment variables.” If you select this, you can override java.home with your own JRE, one that is part of a JDK installation. Here is an example:

C:\Java\jdk-1.5.0_22\jre

So far, this override only seems to work when pointing to a JRE that is inside a JDK. Yes, that trailing “jre” really matters. I went ahead with this setting because I didn’t want to edit every one of the failed builds; I just hope it isn’t a bad idea that will come back to bite me later.

Good luck with all your continuous integration efforts!

[Screenshot: example of Hudson configuration screen]

Categories: Java