Teamwork is lesser work

The whole is greater than the sum of its parts—that’s not true. Not universally, at least. Science has known since the 19th century that a team working together will often deliver less, and of lesser quality, than the same individuals working on their own. It’s high time we questioned the holy grail of “teamwork” to see exactly where it works and where it fails.

I had an eye-opening moment reading Professor Richard Wiseman’s beautifully practical book 59 Seconds: Think a Little, Change a Lot:

In the late 1880s, a French agricultural engineer named Max Ringelmann became obsessed with trying to make workers as efficient as possible. (…) One of Ringelmann’s studies involved asking people to pull on a rope to lift increasingly heavy weights. Perhaps not unreasonably, Ringelmann expected people in groups to work harder than those on their own. But the results revealed the opposite pattern. When working alone, individuals lifted around 185 pounds, but they managed only an average of 140 pounds per person when working as a group. [emphasis mine]

How is that possible? It’s due to diffusion of responsibility:

When people work on their own, their success or failure is entirely the result of their own abilities and hard work. If they do well, the glory is theirs. If they fail, they alone are accountable. However, add other people to the situation, and suddenly everyone stops trying so hard, safe in the knowledge that though individuals will not receive personal praise if the group does well, they can always blame others if it performs badly. [emphasis mine]

Lest you think this applies only to physical work—not at all. It’s the same for blue- and white-collar jobs, physical and creative alike, and across cultures:

Ask people to make as much noise as possible, and they make more on their own than in a group. Ask them to add rows of numbers, and the more people involved, the lower the work rate. Ask them to come up with ideas, and people are more creative away from the crowd. It is a universal phenomenon, emerging in studies conducted around the world, including in America, India, Thailand, and Japan.

Our experience with teamwork

My manager once said in a casual conversation: “Remember how we were thirty people total? I’ve a feeling we shipped much more work back then than we do now with 200 people.” Can we test his hypothesis? Can we measure it? Hardly, since measuring it correctly would require lots of metrics which we have never collected and probably never will.

My personal, best experience with teamwork was in preparing several conferences for Toastmasters in Poland, from the country-scale Toastmasters Leadership Institute to the half-continent scale of the biannual District 95 Conference. All of those teams truly rocked.

  • everyone had their own individual area of responsibility. I did the website and all of IT—if I blew it, I had myself to blame.
  • we all respected boundaries. I had the final say on the website format and content, but I might’ve merely expressed my opinions on marketing.
  • we worked individually, collaborated on a case-by-case basis as needed, and met every few weeks for two hours total to synchronize.
  • we were all volunteers with no formal ties, so the two or three persons who couldn’t keep pace with the rest were promptly removed and replaced.

Contrast that with the reality of Scrum or any other Agile teams in a typical company—large and small:

  • you’re supposed to share responsibility for the work; all code is everyone’s code.
  • every sprint you’re asked to spend a half-day to a day in meetings, drilling into everyone’s performance (also called a “retrospective”) and collectively brainstorming the work scope (known as “sprint planning”).
  • everyone is on some contract, and even in countries where it’s legally easy to fire someone, it’s still psychologically and organizationally taxing.

I did way too many “sprints” like these, and I’ve been in every role: developer, product owner, scrum master, team leader. Every single time people were dying to get out of the meetings and get back to doing productive work on their own. This never worked the way it was supposed to.

Agile is simple, but it’s not easy

The traditional response to “Agile doesn’t work” is usually that, on the contrary, it does; you’re just not “agile enough” or you have diverged from Scrum. That indeed it’s hard to do agile right, and it requires practicing all the great virtues of discipline, courage, and so on.

Just because some people can finish an Ironman race in 8 hours doesn’t mean everyone can; in fact, very few people will ever be able to complete the distance at all. It’s a very personal matter of talent, character and choices, just like work style.

Fair Exam

Why then do we continue to try and swim upstream? Work against human nature?

The solution to the Western world’s obesity epidemic is not to ask people to be more disciplined about their diets, but to find out what people’s natural eating habits are and to create healthy food that accounts for them. Starting with less sugar and less processing would already fix the most glaring issues.

At work, instead of asking people to “try harder”, study them. Find out what turns them on or off, and change the work environment to account for it. Are they not empathetic enough? Perhaps that’s because we’re not built for empathy in large groups. Try making the organization smaller.

I look back at some thinking I did years ago on teams and realize now I got it only half-right. Yes, teams need to share goals, otherwise there’s no point in calling them “teams”. But “sharing work” is not about collectively executing tasks—that’s counterproductive. It’s about agreeing on outcomes where many individual streams of responsibility come together and yes, we can and should confront anyone who doesn’t deliver their portion.

Developer turned Product Owner

The backlog is a mess. The order is random. There’s no grouping or coherence, let alone a vision of any kind. Stories are unclear, if at all described, and sprint planning takes forever while developers try to figure out what the product owner wants. I heard these complaints over and over again from teams, and then I had a chance to become a product owner myself. Turns out it’s not much different from a developer’s work.

TDD

Taking an agile approach means making small steps and testing how they influence the product. Just like TDD in programming:

  1. Think what you want to do.
  2. Decide how you will measure success, like you code a test.
  3. Have the team implement your idea, usually by writing just enough code to make the test pass.
  4. Evaluate the idea against your KPIs, like you would run your test.
  5. With the findings, go back to 1 to adjust your hypothesis and approach.
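The programming side of the analogy can be sketched in Python with the standard library’s `unittest` (the `slugify` function and its test are hypothetical examples, not from the original text):

```python
import unittest

# Step 2 of the loop: encode the measure of success as a test,
# written before any implementation exists (the "red" step).
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_joins_words_with_dashes(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Step 3: write just enough code to make the test pass (the "green" step).
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# Step 4: run the tests to evaluate (e.g. `python -m unittest`);
# step 5: with the result in hand, adjust and loop back to step 1.
```

The point of the analogy is the order of operations: the definition of success exists before the work starts, in product management just as in code.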

Clean Code

The backlog items you’re preparing—usually stories—will be read and relied on by the team to build a product increment. The clearer the language, the easier it’ll be to understand and follow.

  • clear, unambiguous, expressive naming,
  • precise vocabulary,
  • short paragraphs,
  • good English (or any other language you use),

all of these are qualities that’ll help the team deliver faster and with fewer misinterpretations. All the same in code, where good naming for variables, functions, objects matters, and short, focused functions and components are preferred.
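The code-side equivalent can be shown in a tiny hypothetical Python sketch: the second version needs no explanation because the names carry the story, just as a well-written backlog item would.

```python
# Unclear: terse names force the reader to reverse-engineer the intent.
def calc(d, r):
    return d - d * r

# Clear: descriptive names and a short, focused function read like prose.
def discounted_price(price: float, discount_rate: float) -> float:
    """Apply a fractional discount (e.g. 0.2 for 20%) to a price."""
    return price - price * discount_rate
```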

Abstraction

A good backlog is well organized, with related items grouped into epics and aligned along a vision for the product. This in turn provides direction and focus for both the team and stakeholders, guiding conversations on how the product should be shaped and deciding what fits in and, more importantly, what doesn’t. It allows setting release and sprint goals that the team can measure their progress against.

All this is exactly like abstraction in code, where similar logic gets grouped into objects, similar objects grouped into packages, named after the domain they cover. From an orderly, well-named code base one should easily be able to identify the purpose of the system, decide where to put new code and whether it belongs in there at all.

I know what the team’s doing

My programming experience is helping me tremendously in shaping the backlog and working with the team. Requirements are crisp, or quickly refactored to become so, order is clear and it’s easy to see what fits into our current vision and what should be deferred.

On top of that, I know the work the team does. When they come back reporting issues with implementing certain items, I understand those and we can easily adjust plans to reality, with the new information we received. Everyone’s work becomes easier and we’re steadily moving forward.

Obviously not every developer would make a good product owner. There’s a lot of touchy-feely people work involved, and many would likely prefer to deal with the clear, binary logic of computers. But for those who have always liked working on the edge between technology and people, this might just be the right career move. One that’s surprisingly easy to make.

The state of the craft at CraftConf 2015

CraftConf in sunny Budapest aims to be the conference in Central Europe where developers share the state of the art in building software. Joining the 2015 edition was an easy choice after the first edition a year earlier received rave reviews from attendees. Three days, 16 talks and a workshop later I emerged with 6,062 words’ worth of notes, distilled into a few broad trends that seem to represent the edge of the industry right now.

CraftConf 2015 Stage

Microservices entering the mainstream

At SoCraTes 2014, less than a year ago, a lot of people were still asking basic questions about microservices—what they are, how to build them, how they differ from any other SOA approach. Not anymore. CraftConf saw a lot of companies sharing battlefield experiences with these architectures—what works, what doesn’t, whether to consider the approach at all and how much it will cost.

  • Microservices are a means to scale the organization—allow many people to work on the same system, reduce dependencies, and thus enable rapid development without having to coordinate huge releases or continuously breaking one another’s code. They’re routinely described as “easier to reason about” and built to “minimize the amount of program one needs to think about” by the very virtue of being small and focused on doing one task well.
  • Microservice ecosystems at established companies routinely consist of 50+ and even 100+ services, usually built in a whole range of technologies. If a single service is small enough, say “2,000 lines of code” or “takes 6 weeks to build from scratch by a newcomer” (both actual definitions), then building one in a wholly new technology is easily sold as an inexpensive POC.
  • All companies successfully implementing microservices started out with monolith applications and afterwards split out the services one at a time. This approach was broadly recommended, even for greenfield projects, because monoliths are easier to refactor while the organization works to research and understand its product better.
  • The supporting tooling is already there and mostly quite mature. Early adopters, like Netflix, had to build their own tools, which they later shared and improved with the open source community. Meanwhile, commercial tools popped up, offered by specialized vendors, with support contracts and other conveniences that pointy-haired bosses like so much.

But most importantly:

To benefit from microservices without getting killed, you need to:

  • automate everything—builds, testing, deployment, provisioning of virtual machines and container instances. It’s the only way to manage hundreds of running service instances.
  • monitor everything—each service must have its own health checks, exposing whatever information is most relevant: typically numbers of transactions, timings, latency and hardware utilization, but often also business KPIs. You’ll also need a means to track requests flowing through the system, e.g. by IDs passed along in service calls. This information must be available to development teams in real time.
  • build for failure—crashes, disconnections, load spikes, bugs, all of which will occur more often than with monoliths. Make sure failures are reported, tracked, and the system is self-healing, via techniques like circuit-breakers, bulkheads or resource pools. Work with business representatives to determine for each use case whether consistency or availability are more important, because you cannot always offer both.

Not everyone was pleased with such a high degree of saturation with microservice themes at CraftConf. But that only proves that this architecture is well past the stage of early adoption and entering the mainstream.

Everybody has an API

The consequence of adopting microservices is that APIs become the norm, starting out internally, and often “sooner than you think” being made available to the outside world, either to support different devices or integrate with 3rd parties. Their costs also became more evident:

  • An API is for life. Once it becomes public, it’s very difficult to change, so early design mistakes become much more difficult to fix.
    • Use techniques that allow you to add new fields and methods to an API without disturbing clients.
    • Semantic versioning, coupled with decent deprecation procedures, helps manage the process of making backwards-incompatible changes.
    • You’ll still have customers who “missed the memo”, to whom you may have to offer commercial legacy support for an additional fee.
  • Create good documentation, generated and published automatically.
  • Assign people who will be monitoring community sites, like StackOverflow, for questions regarding your API.
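One common technique for adding fields without disturbing clients is the “tolerant reader”: clients ignore unknown fields and supply defaults for missing ones. A hypothetical Python sketch (the `parse_user` function and field names are illustrative, not from any real API):

```python
def parse_user(payload: dict) -> dict:
    """Read only the fields we know, tolerating additions and omissions."""
    return {
        "id": payload["id"],                    # required since v1
        "name": payload.get("name", ""),        # optional; default keeps old clients working
        "locale": payload.get("locale", "en"),  # added later, safely defaulted
    }

# A newer response carrying a brand-new "avatar_url" field still parses cleanly:
user = parse_user({"id": 7, "name": "Ada", "avatar_url": "https://example.com/a.png"})
```

Because the reader never chokes on extra fields, the server can evolve the payload without a breaking version bump.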

@ade_oshineye (of Apprenticeship Patterns fame) was spot-on summarizing the process of deciding whether to create an API by showing an analogy to puppies—everybody wants one, but not everyone is ready.

Moving towards Domain-Driven Design

Frameworks these days tend to dictate the design of applications by suggesting organizing code by layers into packages like models, controllers or views. The consequence is that:

  • modifications to a single business scenario often require changing code in many packages,
  • it’s all too easy for components to reach into layers outside their hierarchy, too deep down for their own good, while they should only be calling public APIs,
  • there’s very little information presented by the package structure on what the application does.

Several speakers postulated that this should stop and that code should instead be packaged by business units, with relevant models and controllers sharing the same packages. Encapsulation could further be improved by changing method and component access to package-private instead of the common public. It’s an old theme that finally seems to be getting traction, with past support from developer celebrities like Uncle Bob:

What do you see? If I were to show you this and I did not tell you that it was a Rails app, would you be able to recognize that it is, in fact, a Rails app? Yeah, looks like a Rails app.

Why should the top-level directory structure communicate that information to you? What does this application do? There’s no clue here!

Robert Martin, Architecture, the Lost Years

Well worth watching.
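The contrast can be sketched as two hypothetical directory layouts; only the second one hints at what the application actually does:

```
# Package by layer: the structure shouts "framework", not "purpose".
app/
  models/       order.py  invoice.py  customer.py
  controllers/  order.py  invoice.py  customer.py
  views/        order.py  invoice.py  customer.py

# Package by business unit: the structure shouts what the application does.
app/
  orders/       model.py  controller.py  views.py
  invoicing/    model.py  controller.py  views.py
  customers/    model.py  controller.py  views.py
```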

TDD and Agile now taken for granted

Two themes notably absent from the vast majority of CraftConf talks were TDD and Agile. They implicitly became accepted as defaults—the baseline, cost of entry and survival in the game of software development.

  • Microservices will only reach their full potential when their owning teams can work and make decisions independently from central authority, including frequent deployments to production.
  • Frequent, decentralized deployments require comprehensive, automated test coverage.
  • TDD drives the design of code, meaning every line exists only to make a failing test pass. “Every new line of code immediately becomes legacy code,” so code as little as possible.
  • Google continues to build new tools to satisfy business problems they are facing—tools that often get adopted company-wide, but never enforced. They call it an evolutionary approach, where the best ideas will be adopted simply on merit, while the others will die in obscurity.
  • @cagan argued how the whole top-down oriented product design cycle is broken by being grossly inaccurate and too heavy, and companies need to adopt bottom-up idea spawning and rapid testing instead.
  • Transparency is key, so that everybody can make better decisions based on as much information as possible.

Oh, and “doocracy” is a word now.

A vibrant community

It’s been a blast to mingle with some of the 1,300 energetic attendees, meeting friends, old and new. @MrowcaKasia and @kubawalinski turned from Twitter handles into real, live, and engaging persons. The slightly grumpy but ever competent @maniserowicz is always a pleasure to meet, and then there’s the whole SoCraTes gang of @leiderleider, @c089, @Singsalad, @egga_de and @rubstrauber, whose passion for community and craftsmanship continues to inspire.

The 2015 CraftConf was only the second edition, but already organized with such painstaking attention to detail that I can easily call it the best conference I’ve ever been to. The team’s already fishing out and working to improve what didn’t quite work this year, so next year’s event is bound to be even more polished.

Evolving towards Kanban

Kanban is the last stage of software delivery evolution. Right now, at least, as evolution never really stops. The path is usually the same, starting with a disorderly “let’s just code”, moving gradually towards a waterfall system, then painfully stumbling towards a formal Agile framework, say Scrum, and once a Scrum team truly matures, they realize they’re really doing Kanban with a bit of unnecessary paperwork.

The jump from chaos to waterfall is rather obvious. So, usually, is the next step to Scrum (or any other framework you like). We love all the Agile principles: user stories help focus on business outcomes, and expecting change makes Product Owners happy and programmers less stressed. Estimates made in story points help management see when things will be done and ready, and make decisions based on that.

In Scrum for a new project you define a whole bag of user stories, estimate them, order by business value and call it a “Release”. That Release gets split into Iterations, each of 1 or 2 weeks in length. When you’re done with all the Iterations, you’re done with the Release and ready to unleash your product onto the general audience.

We’re often working, however, with software that’s relatively inexpensive to release. Like web applications, where you upload code to a server and all your users instantly get the latest version. There’s obviously the question of stability, quality and testing, but Scrum says strictly that the definition of “done” is “designed, developed and tested.” So, by that definition, you should have a release-quality product after every Iteration.

Add the two things together and you soon have a Product Owner asking whether it’s possible to do an interim release, before the whole scope is done. Sure, it is. It takes a while to get the discipline and processes right, so that the quality really is good, but once you’re there, you can release right after the Iteration summary meeting. And the Product Owner has a point: why wait another X weeks for the whole Release to be ready when we can have something live now? Competition never sleeps, and some of these things could make users seriously happy.

Here’s where Kanban comes in: once you’re ready to release after every Iteration, then the answer to “when will this be ready?” becomes “right after the Iteration you put it in.” Long-term estimations become irrelevant, as they always were anyway. Just because you have a user story planned for 3 Iterations from now doesn’t mean it’ll really be done there and then. Things shift all the time.

Why bother with the “Release” planning then anyway? Switch to a higher-level roadmap of changes, so that the team continues to follow a vision for the product, but plan actual work only for the next Iteration. Then release it immediately. Now you’re doing Kanban.

Agile is the business approach to software engineering

Try, fail, adjust, go back to step one. So it goes on until failure turns into success or you get bored and leave to do other stuff. Such is business. It’s extremely rare that an original idea becomes immediately successful. Most ideas go through an iterative process of trial and error and the end success comes from a sometimes completely different idea than the initial one.

Take the Confected application that we are building. We set out with certain expectations, built and tried it out during this year’s TEDxWarsaw. Some of the expectations were correct, some were utter failures. We collected all the lessons and went back to our workshops to adjust the project. Meanwhile, while we’re talking to other conferences about setting up the application for them, we often stumble upon ideas for the application that are significantly different from the direction we’re pursuing. And our success relies upon whether and how we’ll take those offshoots into account, because as they say, climbing a ladder faster won’t help you if it’s placed against the wrong wall.

So when you consider any Agile methodology for software development, irrespective of whether that’s Scrum, Kanban or any other funny name, they’re nothing else than a business approach to engineering. You correctly assume that you’re unlikely to get it right the first time, so you try, fail, and adjust. You collect feedback and put it right into production, often completely changing the original designs.

It’s a way of working that’s quite familiar to business people but often strange to programmers. “But the specs said nothing about this!” Indeed they didn’t, but our knowledge of the world changes (or the world itself does), and refusing to adjust is asking to become evolution’s prey.