Simple is Good

There’s nothing like having to tell your story over and over to make you work on sharpening your rhetoric. Although the events that have led to this point are suboptimal, the opportunity — erm, necessity — to do some close editing of the tales I spin has been appreciated.

I’m a folk mathematician at heart. A lot of what I tend to think about is how systems fit together. Sometimes these systems are (rather) well-defined mathematical structures; more often they’re (still somewhat well-defined) software components or music or organizations or people. It’s a reductionist exercise at its core, though not one that either disregards or demeans the inherent complexity of things as they are. The attempt is to suss out an approximate axiomatization in order to be able to reason about the system in some cogent manner, all the while keeping in mind that such axioms are merely a proxy for — and not a substitute for — the underlying reality.

Much of my recent rumination has been quite specifically about how to approach the building of software. It’s a process I’ve observed from both inside and outside for an absurd (at least to me) number of years. Fashions change; the time of the requirements document with its consistent series of musts, shoulds and must nots, printed and bound, has more or less passed (except in a few corners). “Waterfall” is said with a certain derisive tone, at least by most. It remains the domain of systems that are either exceedingly well understood or that exist in spaces where failure could be exceedingly costly or dangerous. [Note: There have been, unfortunately, more than a few systems that really should have been better defined up front, where the kinds of dangers listed above have been needlessly realized.]

These days, though, it seems everyone is espousing some kind of agility, some kind of approach that traces itself back to turn-of-the-century (sorry; I love getting to use that phrase) notions of extreme programming and further back to industrial engineering notions of post-WWII Japan, particularly at Toyota. The short version includes teams having autonomy, having an incremental approach focusing on business value and, most of all, always having working software. It’s a solid idea. Who could be against an incremental approach to things? I mean, developing in small pieces gives you the opportunity to monitor progress — like all the time.

I must back up a moment. That last little bit was an example of what I’ll call toxic agility, the thing that happens when people in an organization look at a project and think: “each of these tasks is small; as far as I can see, they’re all the same size, so this will be easy, like stacking shelves. And, like stacking shelves, if we need to go faster — which we always do — we can just be more insistent and yell louder when necessary. OK, maybe not yell, but at least look supremely disappointed.”

Still, this shelf-stacking metaphor does tell us something. It assumes that everything is known at the beginning of the process, so that if execution falls down, it must be a failure of production. And that assumption is typically false. The conception of a product is rather different. A plan, on its own, never has to actually work. Internal inconsistencies are not revealed until there’s the beginning of a living, breathing product. That’s where the learning comes in. The learning that goes on during actual development is often disparaged, blamed on inadequate preparation, on mistakes made. That’s exactly wrong.

Software development is all about learning. It’s about learning the tools and learning the domain. But you can’t just learn everything in isolation. There’s too much. The process of implementation, though, serves to restrict this space. In a totally idealized situation we’d learn only what was absolutely needed. Perhaps being able to approximate that is a “stretch goal” (an aspirational one, one that you don’t really expect to reach but a point toward which to aim). You may not get there, but as experience is acquired a team will learn to recognize rabbit holes and avoid them (unless, of course, one is looking for an actual rabbit; that, however, is an entirely different discussion).

Once we accept that learning is an inherent part of the process, we can move towards optimizing that learning. When the knowledge already exists within an organization we find ways to bring that to an individual team. When the knowledge does not already exist in an organization we figure out how to bring it close. Most important, though, is dealing with domain questions. In whatever setup a team has there always needs to be someone who can play the client/user role. This could be a product owner embedded on a given team or a product manager assigned to the team from a dedicated product organization or just about anyone else charged with that duty. That domain expert absolutely has to have product vision and absolutely has to be available. Those learning loops have to be kept tight. That is where a bulk of the potential for optimization resides.

One more thing I’d like to mention: knowledge transfer is most effective at one of two specific times: the time of discovery and the time of need. Sharing at the time of discovery is effective because code developed in a pair/team context becomes instantly well known by anyone in the room involved in its crafting. Sharing at the time of need is effective because that’s the moment when there is the most direct context for receiving that information.

Always value learning. But learn the right things. Mostly.

Is that all there is? Perhaps. But a whole lot of the rest is just commentary at best.
