Have you ever watched little kids at a park? The babies, being babies, barely take notice of one another; after all, they’re new at all this, they’re just figuring out how any of this works. The pre-toddlers, those crawling or just starting to walk, start to interact more. Soon they’re playing with toys on the playground, digging, rolling and such. They play serially. They may be sharing toys, but they more play next to each other than with each other. Only a bit later, as mobility skills and language skills develop, do they start to play in a cooperative way.

As software developers, it often seems that we never get past that “playing next to each other” stage. Sure, we define what the edges of things look like, we discuss design, but we rarely truly collaborate on an implementation as it’s actually taking shape, at the moment when so much about architecture and approach is decided. And I would argue that that is exactly the moment when multiple sets of eyes produce the greatest direct benefit.

There’s an assumption that there’s a benefit to be had from parallelizing the process — “while I do this, you do that, and she can do this other thing” — and that we can move faster. Sometimes that may actually be the case, but primarily when the domain is already well understood, the code is well tuned and well expressed, and the team involved is deeply in sync. We can all point to those moments. They’re always notable. They’re relatively rare.

Real collaboration, working with each other, two, three, four or even more developers in front of a single display, avoids dark alleys, shares knowledge, forces minimization of code (but only down to the point of maximal legibility), forces writing for testability (you are writing tests as you go, aren’t you?) and makes for healthier team dynamics and a healthier code base.



Bad code starts out as good code that grows in bad ways.

Let’s think about that one. When I originally wrote the line, I wrote “typically starts out…”. But I’m backing off from that. I will take the more optimistic approach. Code starts out good. Only after the addition of capabilities and features and whatever else does the pressure of complexity foul it up. (Even by personal experience I’m being kind; better to err in that direction.)

But when do I refactor?

Easy one, huh? Refactoring is one of those activities that’s consistently needed, possibly somewhat risky (often in horribly subtle ways) and nearly always something you wish you’d done yesterday, if not the day before.

There are two approaches here. The one that really serves the best is to refactor constantly. Every damn time. If you touch code at all, do a little cleaning, do a little segregation of responsibilities, do a little bit of semantic naming and make that naming consistent.
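To make that concrete, here’s a minimal, hypothetical sketch of a “touch it, clean it” pass. The names (`proc`, `settled_total`, the order dictionaries) are invented for illustration, not taken from any real codebase; the point is the shape of the change: separate responsibilities, name things semantically, keep behavior identical.

```python
# Before: one opaque function mixes filtering and totaling,
# with names that tell the reader nothing.
def proc(d):
    t = 0
    for x in d:
        if x["st"] == "ok":
            t += x["amt"]
    return t

# After: each responsibility gets its own, semantically named function.
def is_settled(order):
    """A settled order is one whose status is 'ok'."""
    return order["st"] == "ok"

def settled_total(orders):
    """Sum the amounts of all settled orders."""
    return sum(order["amt"] for order in orders if is_settled(order))

# The refactor must preserve behavior exactly.
orders = [{"st": "ok", "amt": 10}, {"st": "failed", "amt": 5}, {"st": "ok", "amt": 7}]
assert proc(orders) == settled_total(orders) == 17
```

The assertion at the end is the whole discipline in miniature: the cleanup changes the reading experience, not the result.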

But, as we know, it’s not always possible. Sometimes you’re faced with a chunk of legacy code that you think works and you have to add something. And the code clearly needs at least a little bit of refactoring. But where to start? How do you avoid introducing errors, whether they be big, bad ones that make it all fall over or, worse, nice subtle ones that corrupt the universe just a little bit at a time until some time in the future when someone looks at things and says “This can’t be right!!! How long has this been going on?!?”

And this is often the place where we get stuck. The little angel of optimism on one shoulder says, “with a series of well understood transformations, we can produce knowably equivalent code that’s clearer and then make the changes we need”. Of course, the angel of caution and pragmatism sits on the other shoulder just as eloquently, saying “we really have no clue as to what’s going on here, but we can add the little bit of functionality we need in as isolated a way as possible and live to code another day! After all, we won’t know any less about how any of this works”.

Because in each individual case the second option appears locally more prudent (and ‘locally’ is typically all we have), it’s what we do. And we accept it because the increase in uncertainty at each juncture is bounded, lulling us into a (perhaps) false sense of security. We have limited the blast radius (we think). And while it may not all be OK, if it is worse, it will only be by a little.

Ah. Prudent.

And this is not a criticism of behavior. But it does point up a fault in the process: the lack of adequate tests, particularly tests that can spot regressions. (Are we talking unit tests or integration tests? Well, yes. Ideally, unit tests being intrinsically simpler, we’d like to capture possible regressions at that level, but it’s not always possible. And, in any event, it’s a subject for a different post.)

The important thing, though, is that without adequate testing in place, you’re flying blind. But with adequate testing in place you can refactor. Mercilessly!
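When the legacy code has no tests at all, one common way to get that safety net is a characterization test: before touching anything, pin down what the code does today, right or wrong, so the refactor can’t silently change it. The sketch below is hypothetical; `legacy_discount` is a stand-in for whatever tangled legacy logic you’ve inherited.

```python
def legacy_discount(total, code):
    # Stand-in for tangled legacy logic we dare not rewrite blindly.
    if code == "VIP":
        return total * 0.8
    if total > 100:
        return total - 10
    return total

def test_characterization():
    # We assert what the code *does* do today, not what it *should* do.
    # If the 190 below looks wrong, that's a bug to fix later, on purpose,
    # not something to change accidentally mid-refactor.
    cases = [
        ((200, "VIP"), 160.0),
        ((200, ""), 190),
        ((50, ""), 50),
    ]
    for args, expected in cases:
        assert legacy_discount(*args) == expected

test_characterization()
```

With those pins in place, each small transformation can be checked immediately, and “refactor mercilessly” stops being an act of faith.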

And that keeps good code from going bad. Don’t DEVO.


Teams and All That

“The construction of software is an essentially social activity typically performed by anti-social people.”

I don’t know who said that first. I doubt it was me, but I’ll take credit for it if no one else will. Or maybe I’ll take the blame. After all, the notion that we who code are anti-social by nature is a convenient construct, just as the neck-beard and the particularly pernicious “you don’t look like a software engineer” have served to pigeonhole — and thus devalue — those who practice our craft.

Stereotypes are rarely cut from whole cloth though. Those who create stereotypes are just not that good at what they do. Consider the normal view you get of coders coding: A line of people staring at one or more screens (often artfully placed), locked-in, headphones (actual or virtual) strapped on, spouting a notable series of oaths out into the cosmos at irregular intervals, punctuated by brief smiles of self-satisfaction. Yes, we do that. Yes, we do that more than a little.

But there’s so much more going on. First of all, little software these days is the product of a single mind. Problems tend to be broad, requiring input from a series of people grounded in different disciplines. Solutions tend to be no less broad, requiring the use of multiple tools from multiple cognitive toolsets. But the mind that can focus equally well on data collection and frontend design, distributed processing concerns and architecture at multiple levels is both exceedingly rare and, seemingly, if you listen to most management rhetoric, in constant danger of either winning massive lottery prizes or being crushed by rogue, on-road, public transportation.

So we build teams — or we try to. We try to build stable, self-regulating teams with a wide array of complementary talents that can build useful software within a palatable time frame. And I say “try to” because it’s damn hard to do. It’s difficult enough to find a minimal group that can cover all the conceptual needs; making sure the human dimensions are covered as well makes the process an order of magnitude more difficult.

But what does a team look like? Well, there’s the “two pizza” rule: no more active members than can be fed with two pizzas. That sets a limit of eight or so. It’s a pretty good limit; any more than that and you start needing internal management (just on a human level), and standups and such quickly become interminable. You also need able representatives of the various stakeholder groups within an organization, where ‘able’ refers to being empowered to make all but the most major calls; otherwise the call chains get unacceptably long.

It’s a non-trivial exercise to be sure.

But, actually, it’s worse than that. Not only do you need to make sure all needs are accounted for under optimal circumstances, you also need to make sure that you at least get close under suboptimal circumstances — which is, frankly, just about always. People get sick. Take vacations. Are unavailable due to some special project. Or a conference. In fact, as soon as you get to the point of having enough people to make a team truly viable, you reach the point where it is more likely than not that some member won’t be available at any given time.

So, OK. It’s impossible.

But just because it’s impossible, doesn’t mean it’s not worth trying. It’s actually essential. If Conway’s Law holds at all, that is.

And when you have a team, or part of a team, or even the germ of a team, by all means nurture it.


Finding the Sweet Spot

We’re developers. We love code.

We’re developers. We hate code. “No code is better than no code”.

We have developers. They are a great resource.

We have developers. They are a terrible expense.

Each developer is unique.

Developers are a commodity.

A long time ago, when I was a “programmer/analyst” for a major advertising firm, we ran into a bit of a conflict with someone from account management. I don’t remember the details, but it had something to do with scope changes and ill-defined requirements. And no matter how off the wall their requests were, how self-contradictory, how internally inconsistent, we had to go with them. It was disturbing and would sometimes get a little heated.

Finally a senior executive — silver-haired and well-suited as you might reasonably imagine in those immediately post-Mad Men days — took me aside, saying, “One thing you have to remember: They are a profit center. You are a cost center. The sooner you incorporate that into your being, the less discomfort you’ll feel.”

Whether you call us programmers or software engineers or developers, we’re a pretty fortunate bunch. With the exception of just a couple of downturns over the last number of decades, we’ve been generally in demand. We’ve been generally pretty well-compensated. And, perhaps because of that last item, there’s always been an attempt to commoditize us. Whether it’s the not-so-virtuous cycle of technology adoption (“we’ll use X because there are a lot of software engineers who know X, so a lot of people learn X because it’s popular, and some of them shouldn’t be writing code at all, so the quality of those who write X goes down, so…”) or off-shoring or whatever, there’s a desire to make things at least predictable but mostly, well, cheaper.

And that’s a shame. Admittedly looking from the inside, developers can often have a rather broad view of a company and deep concerns for its success. And we tend to be a reasonably clever bunch as well. We do, in general, blanch at the way much of business is done, with its reliance on hierarchy and on deeply abstracted metrics (often preferring, it seems, the state of the chosen metrics over the reality of the situation), but we tend not to limit our view of things to the bits and bytes we push around.

This is not to say, for a moment, that we don’t enjoy the pure delight of techie nerd-wrestling; finding a more efficient way to move data or a more effective algorithm to find a small bit of truth is always a delight. But we care about the bottom line. We understand that if the organization doesn’t succeed, neither do we.

But finding the sweet spot is a non-trivial exercise. Engineers need to be engaged and involved as well as being left alone to do their jobs. They need to be appreciated as humans contributing as well as being appreciated as a resource. And they need to be respected as people who are often dealing with the unknown. Developers spend a good deal of their time finding needles in haystacks, discovering ways to model a not-completely-consistently-defined reality.

Given what we cost, finding that sweet spot is well worth it.