Simple is Good

There’s nothing like having to tell your story over and over to make you work on sharpening your rhetoric. Although the events that have led to this point are suboptimal, the opportunity — erm, necessity — to do some close editing of the tales I spin has been appreciated.

I’m a folk mathematician at heart. A lot of what I tend to think about is how systems fit together. Sometimes these systems are (rather) well-defined mathematical structures; more often they’re (still somewhat well-defined) software components or music or organizations or people. It’s a reductionist exercise at its core, though not one that either disregards or demeans the inherent complexity of things as they are. The attempt is to suss out an approximate axiomatization in order to reason about the system in some cogent manner, all the while keeping in mind that such axioms are merely a proxy for — and not a substitute for — the underlying reality.

Much of my recent rumination has been quite specifically about how to approach the building of software. It’s a process I’ve observed from both inside and outside for an absurd (at least to me) number of years. Fashions change; the time of the requirements document with its consistent series of musts, shoulds and must nots, printed and bound, has more or less passed (except in a few corners). Waterfall is said with a certain derisive tone, at least by most. It is now the domain of systems that are either exceedingly well-understood or that exist in spaces where failure could be extraordinarily costly or dangerous. [Note: Unfortunately, there have been more than a few systems that really should have been better defined up front, where the kinds of dangers listed above have been needlessly realized.]

These days, though, it seems everyone is espousing some kind of agility, some kind of approach that traces itself back to turn-of-the-century (sorry; I love getting to use that phrase) notions of extreme programming and further back to the industrial engineering of post-WWII Japan, particularly at Toyota. The short version: teams have autonomy, work incrementally with a focus on business value and, most of all, always have working software. It’s a solid idea. Who could be against an incremental approach to things? I mean, developing in small pieces gives you the opportunity to monitor progress — like all the time.

I must back up a moment. That last little bit was an example of what I’ll call toxic agility, the thing that happens when people in an organization look at the work and think “each of these tasks is small, and as far as I can see they’re all the same size, so this will be easy, like stacking shelves. And like stacking shelves, if we need to go faster — which we always do — we can just be more insistent and yell louder when necessary. OK, maybe not yell, but at least look supremely disappointed.”

Still, this shelf-stacking metaphor does tell us something. It assumes that everything is known at the beginning of the process, so that if execution falls down, it must be a failure of production. And that assumption is typically false. The conception of a product is rather different. A plan does not have to work; its internal inconsistencies are not revealed until there’s the beginning of a living, breathing product. That’s where the learning comes in. The learning that goes on during actual development is often disparaged, blamed on inadequate preparation, on mistakes made. That’s exactly wrong.

Software development is all about learning. It’s about learning the tools and learning the domain. But you can’t just learn everything in isolation; there’s too much. The process of implementation, though, serves to restrict this space. In a totally idealized situation we’d learn only what was absolutely needed. Perhaps being able to approximate that is a “stretch goal” (an aspirational one, one you don’t really expect to reach but a point toward which to aim). You may not get there, but as experience is acquired a team will learn to recognize rabbit holes and avoid them (unless, of course, one is looking for an actual rabbit; that, however, is an entirely different discussion).

Once we accept that learning is an inherent part of the process, we can move toward optimizing that learning. When the knowledge already exists within an organization, we find ways to bring it to an individual team. When it does not, we figure out how to bring it close. Most important, though, is dealing with domain questions. In whatever setup a team has, there always needs to be someone who can play the client/user role. This could be a product owner embedded on a given team, a product manager assigned to the team from a dedicated product organization, or just about anyone else charged with that duty. That domain expert absolutely has to have product vision and absolutely has to be available. Those learning loops have to be kept tight. That is where the bulk of the potential for optimization resides.

One more thing I’d like to mention: knowledge transfer is most effective at two specific times, the time of discovery and the time of need. Sharing at the time of discovery is effective because code developed in a pair/team context becomes instantly well known by anyone in the room involved in its crafting. Sharing at the time of need is effective because that’s the moment when there is the most direct context for receiving that information.

Always value learning. But learn the right things. Mostly.

Is that all there is? Perhaps. But a whole lot of the rest is just commentary at best.

Collaboration

Have you ever watched little kids at a park? The babies, being babies, barely take notice of one another; after all, they’re new at all this, they’re just figuring out how any of this works. The pre-toddlers, those crawling or just starting to walk, start to interact more. Soon they’re playing with toys on the playground, digging, rolling and such. They play serially. They may be sharing toys, but they play next to each other more than with each other. Only a bit later, as mobility and language skills develop, do they start to play in a cooperative way.

As software developers, it often seems that we never get past that “playing next to each other” stage. Sure, we define what the edges of things look like, we discuss design, but we rarely collaborate on an implementation as it’s actually taking shape, at that moment when so much about architecture and approach is decided. And I would argue that this is precisely the moment when multiple sets of eyes produce the greatest direct benefit.

There’s an assumption that there’s a benefit to be had from parallelizing the process — “while I do this, you do that, and she can do this other thing” — that we can move faster. Sometimes that may actually be the case, but primarily when the domain is already well understood, the code is well tuned and well expressed, and the team involved is deeply in sync. We can all point to those moments. They’re always notable. They’re relatively rare.

Real collaboration, working with each other, two, three, four or even more developers in front of a single display, avoids dark alleys, shares knowledge, forces minimization of code (but only down to the point of maximal legibility), forces writing for testability (you are writing tests as you go, aren’t you?) and makes for healthier team dynamics and a healthier code base.

Code DEVO

Bad code starts out as good code that grows in bad ways.

Let’s think about that one. When I originally wrote the line, I wrote “typically starts out…”. But I’m backing off from that; I will take the more optimistic approach. Code starts out good. Only after the addition of capabilities and features and whatever else does the pressure of complexity foul it up. (Even going by personal experience I’m being kind; better to err in that direction.)

But when do I refactor?

Easy one, huh? Refactoring is one of those activities that’s consistently needed, possibly somewhat risky (often in horribly subtle ways) and nearly always something you wish you’d done yesterday, if not the day before.

There are two approaches here. The one that serves best is to refactor constantly. Every damn time. If you touch code at all, do a little cleaning, do a little segregation of responsibilities, do a little semantic naming and make that naming consistent.
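To make that concrete, here’s a minimal sketch of the kind of in-passing cleanup I mean. The function and its names are hypothetical, and Python is an arbitrary choice; any language would do.

```python
# Before: one little function quietly doing three jobs at once.
def fmt(u):
    n = (u.get("first", "") + " " + u.get("last", "")).strip()
    if n == "":
        n = "anonymous"
    return n.title()


# After: the same behavior, but each responsibility has a name,
# and the names tell the next reader what is going on.
def full_name(user: dict) -> str:
    """Join first and last names, tolerating missing parts."""
    return (user.get("first", "") + " " + user.get("last", "")).strip()


def display_name(user: dict) -> str:
    """Human-facing name, with a fallback for empty records."""
    name = full_name(user)
    return name.title() if name else "Anonymous"
```

Nothing dramatic, and no behavior changed; the point is just that every touch leaves the code a bit more legible than you found it.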

But, as we know, that’s not always possible. Sometimes you’re faced with a chunk of legacy code that you think works, and you have to add something. And the code clearly needs at least a little bit of refactoring. But where to start? How do you avoid introducing errors, whether they be big, bad ones that make it all fall over or, worse, nice subtle ones that corrupt the universe just a little bit at a time until some point in the future when someone looks at things and says “This can’t be right!!! How long has this been going on?!?”

And this is often the place where we get stuck. The little angel of optimism on one shoulder says, “with a series of well-understood transformations, we can produce knowably equivalent code that’s clearer, and then make the changes we need”. Of course, the angel of caution and pragmatism, sitting on the other shoulder and speaking just as eloquently, says “we really have no clue as to what’s going on here, but we can add the little bit of functionality we need in as isolated a way as possible and live to code another day! After all, we won’t know any less about how any of this works”.

Because in each individual case the second option appears locally more prudent (and ‘locally’ is typically all we have), it’s what we do. And we accept it because the increase in uncertainty at each juncture is bounded, lulling us into a (perhaps) false sense of security. We have limited the blast radius (we think). And while it may not all be OK, if it is worse, it will only be by a little.

Ah. Prudent.

And this is not a criticism of the behavior. But it does point up a fault in the process: the lack of adequate tests, particularly tests that can spot regressions. (Are we talking unit tests or integration tests? Well, yes, we are. Ideally, unit tests being intrinsically simpler, we’d like to capture possible regressions at that level, but it’s not always possible. And, in any event, that’s a subject for a different post.)

The important thing, though, is that without adequate testing in place, you’re flying blind. But with adequate testing in place you can refactor. Mercilessly!
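And if you’re starting from nothing, one relatively cheap way to get that safety net is a handful of characterization tests: pin down what the legacy code does today, however odd, so that any change in behavior fails loudly. Here’s a minimal sketch in pytest style; legacy_price is a hypothetical stand-in for the inherited tangle.

```python
# Characterization tests: record what the legacy code does *today*,
# before any refactoring, so that regressions announce themselves.
import pytest


def legacy_price(quantity: int, unit_price: float) -> float:
    # A hypothetical stand-in for the code we dare not touch yet.
    total = quantity * unit_price
    if quantity >= 10:
        total = total * 0.9  # undocumented bulk discount, found by probing
    return round(total, 2)


@pytest.mark.parametrize(
    "quantity, unit_price, expected",
    [
        (1, 5.00, 5.00),
        (9, 5.00, 45.00),
        (10, 5.00, 45.00),  # the discount kicks in here; now it's pinned
        (20, 1.25, 22.50),
    ],
)
def test_legacy_price_behavior_is_unchanged(quantity, unit_price, expected):
    assert legacy_price(quantity, unit_price) == expected
```

The tests don’t claim the behavior is right, only that it’s current; once they’re green, the well-understood transformations the optimistic angel was selling become a great deal safer.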

And that keeps good code from going bad. Don’t DEVO.

Teams and All That

“The construction of software is an essentially social activity typically performed by anti-social people.”

I don’t know who said that first. I doubt it was me, but I’ll take credit for it if no one else will. Or maybe I’ll take the blame. After all, the notion that we who code are anti-social by nature is a convenient construct, just as the neck-beard and the particularly pernicious “you don’t look like a software engineer” have served to pigeonhole — and thus devalue — those who practice our craft.

Stereotypes are rarely cut from whole cloth, though; those who create them are just not that good at what they do. Consider the usual view you get of coders coding: a line of people staring at one or more screens (often artfully placed), locked in, headphones (actual or virtual) strapped on, spouting a notable series of oaths out into the cosmos at irregular intervals, punctuated by brief smiles of self-satisfaction. Yes, we do that. Yes, we do that more than a little.

But there’s so much more going on. First of all, little software these days is the product of a single mind. Problems tend to be broad, requiring input from a series of people grounded in different disciplines. Solutions tend to be no less broad, requiring the use of multiple tools from multiple cognitive toolsets. But the mind that can focus equally well on data collection and frontend design, distributed processing concerns and architecture at multiple levels is both exceedingly rare and, if you listen to most management rhetoric, seemingly in constant danger of either winning a massive lottery prize or being crushed by rogue, on-road public transportation.

So we build teams — or we try to. We try to build stable, self-regulating teams with a wide array of complementary talents that can build useful software within a palatable time frame. And I say “try to” because it’s damn hard to do. It’s difficult enough to find a minimal group that can cover all the conceptual needs; making sure the human dimensions are covered as well makes the process an order of magnitude more difficult.

But what does a team look like? Well, there’s the “two pizza” rule: no more active members than can be fed with two pizzas. That sets a limit of eight or so. It’s a pretty good limit; any more than that and you start needing internal management (just on a human level), and standups and such quickly become interminable. You also need able representatives of the various stakeholder groups within an organization, where “able” means empowered to make all but the most major calls; otherwise the call chains get unacceptably long.

It’s a non-trivial exercise to be sure.

But, actually, it’s worse than that. Not only do you need to make sure all needs are accounted for under optimal circumstances, you also need to make sure that you at least get close under suboptimal circumstances — which is, frankly, just about always. People get sick. Take vacations. Are unavailable due to some special project. Or a conference. In fact, as soon as you get to the point of having enough people to make a team truly viable, you reach the point where it is more likely than not that some member won’t be available at any given time.
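Some quick back-of-the-envelope arithmetic makes the point; the numbers here are assumed for illustration, not measured from any team.

```python
# Back-of-the-envelope team availability, with assumed numbers:
# 8 members, each independently present 90% of the time.
team_size = 8
per_person_availability = 0.90

p_everyone_present = per_person_availability ** team_size
print(f"P(full team on a given day) = {p_everyone_present:.2f}")  # ~0.43
```

Even with those fairly generous assumptions, the full team is together less than half the time.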

So, OK. It’s impossible.

But just because it’s impossible, doesn’t mean it’s not worth trying. It’s actually essential. If Conway’s Law holds at all, that is.

And when you have a team, or part of a team, or even the germ of a team, by all means nurture it.