Time in a box

I often cannot say how long it’ll take me to code some feature, or whether I’ll succeed at all, given the requirements. This might be because I’ve never attempted anything like it before, or because the technical constraints around it make a solution seem barely plausible. In times like these, I set myself a box of time.

Timeboxing is a practice well known to Agile practitioners. A sprint or iteration is a timebox, during which we work in pursuit of a milestone for a fixed amount of time, then stop to inspect:

  • how far we’ve come,
  • whether we’re on track towards the overall goal (project milestone, KPI), and
  • whether, knowing what we’ve learned, the overall goal is still feasible.

The approach helps to reduce the risk in uncertain endeavors, preventing us from going too far working on the wrong thing.

The decision to invest additional resources in a losing account, when better investments are available, is known as the sunk-cost fallacy, a costly mistake that is observed in decisions large and small.

(…)

The sunk-cost fallacy keeps people for too long in poor jobs, unhappy marriages, and unpromising research projects. [emphasis mine]

Thinking, Fast and Slow, Daniel Kahneman

Set yourself a fixed amount of time upfront to pursue a task or project, then stop to reflect. Knowing the cost you’ve already sunk into it, the currently expected return on investment, and the likely chance of success, does it still make sense to continue?

Always decide this by weighing the cost still ahead of you against the expected benefits. Stop yourself if you realize you’re thinking “I already invested so much into this. I can’t afford not to finish.” That’s why you have to literally stop and detach yourself from the work to reflect, so as not to fall victim to the sunk-cost fallacy.

Timeboxing isn’t useful just for big endeavors, but for daily work too. All programmers have memories of problems they struggled with for hours or days, before someone else looked at them and pointed out a solution within minutes. Putting a time limit on that kind of work helps manage uncertainty: “try solving this for two more hours, and if you haven’t made significant progress by then, we’ll find help.”

Finally, by imposing scarcity on time, timeboxing helps sharpen focus and defeat procrastination:

  • the Pomodoro technique lets you commit to working for 25 minutes at a time, without letting yourself be disturbed or distracted,
  • I often approach tasks that I cannot complete in one work session (like a pile of clothes waiting to be ironed) by setting up a timebox, i.e. working for a maximum of one hour, starting with the top-priority portion and leaving the rest for the next time I’m available.

Procrastination usually strikes when smart people face large and complex undertakings. Tackling them one timebox at a time makes them much more manageable. And once you start moving, however slowly, resistance fades away.

If you’re like me, you’ll look for tools to help you track the timeboxes. Don’t overcomplicate it. It’s a very simple technique that will work just fine with a simple kitchen timer (though you might want to use something more discreet, like a cell phone timer, since these things tick and ring like crazy):

[Photo: a cat-shaped kitchen timer]

and for timeboxes spanning multiple days, just mark the cut-off on a calendar that you’re sure to see. Teams working with Scrum will likely want to highlight the end of their timebox on a Scrum Board, where everyone can see it.

Don’t block yourself thinking you need to find the right tool first. Just start, and see your work flow.

 

Thanks to Jarek Piotrowski for motivating me to write this post. Photo by Darren W.

Just enough logging

Debugging is a lot like police forensics. You’re chasing the villain (the bug) by analyzing eye-witness accounts (users’ reports), inspecting the crime scene (source code), and combing through what is often the most helpful resource: CCTV recordings (application logs), if only their quality allows.

I got upset lately, looking for the needle in a stack of log spam, where everything was logged and only mildly structured, making tracking application flow a nightmare. Clearly, just dumping data into a text file doesn’t make debugging easier.

The two most common faults with logging are:

  1. logging to the wrong level, e.g. everything is INFO, forcing someone to dig through tens of thousands of lines of text in megabyte- to gigabyte-sized files,
  2. collecting the wrong logs per environment, e.g. logging DEBUG in production, which slows down stressed systems and breaks them with “out of disk space” or similar errors.

The bigger your organization, the more important it becomes to get logging right. You can’t ignore the time wasted searching overblown logs, nor throw more hardware at the problem. BBC’s preparations for the London 2012 Olympics, for instance, revealed incorrect logging as one of the top performance killers:

  • Monitoring Thresholds
  • Verbose logging, everywhere
  • Timeouts
  • No data
  • Volumetrics
  • Unfair load balancing

The BBC’s experience of the London 2012 Olympics, Andrew Brockhurst

Most organizations’ findings would likely be identical. Clearly, we can do better.

Log the right things, right

With five standard levels of logging, it’s not always easy to choose the right one. There are good rules of thumb though, and one of the best write-ups I’ve seen comes from Stack Overflow:

  • error: the system is in distress, customers are probably being affected (or will soon be) and the fix probably requires human intervention. The “2AM rule” applies here- if you’re on call, do you want to be woken up at 2AM if this condition happens? If yes, then log it as “error”.

  • warn: an unexpected technical or business event happened, customers may be affected, but probably no immediate human intervention is required. On call people won’t be called immediately, but support personnel will want to review these issues asap to understand what the impact is. Basically any issue that needs to be tracked but may not require immediate intervention.

  • info: things we want to see at high volume in case we need to forensically analyze an issue. System lifecycle events (system start, stop) go here. “Session” lifecycle events (login, logout, etc.) go here. Significant boundary events should be considered as well (e.g. database calls, remote API calls). Typical business exceptions can go here (e.g. login failed due to bad credentials). Any other event you think you’ll need to see in production at high volume goes here.

  • debug: just about everything that doesn’t make the “info” cut… any message that is helpful in tracking the flow through the system and isolating issues, especially during the development and QA phases. We use “debug” level logs for entry/exit of most non-trivial methods and marking interesting events and decision points inside methods.

  • trace: we don’t use this often, but this would be for extremely detailed and potentially high volume logs that you don’t typically want enabled even during normal development. Examples include dumping a full object hierarchy, logging some state during every iteration of a large loop, etc.

Logging levels – Logback – rule-of-thumb to assign log levels, ecodan

While you’re still coding:

  • make sure you don’t flatten stack traces – let them spill into logs in their usual, multi-line form, which creates a visual pattern that’s easy to scan for (see the sketch after this list),
  • avoid concatenating log messages – the (computationally expensive) concatenation would execute even when the given level is not logged. Your logging library will often help you out; for instance, SLF4J will let you replace:
    log.debug("Loaded User: " + user);
    

    with

    log.debug("Loaded User: {}", user);
    

    and automatically compose the message only when the given level should be logged.
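
As a quick sketch tying both points together (the OrderService class and loadOrder method are made up for illustration), SLF4J also accepts an exception as the last argument, so the message stays parameterized and the stack trace spills into the log in its usual multi-line form:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderService {

    private static final Logger log = LoggerFactory.getLogger(OrderService.class);

    public void loadOrder(long orderId) {
        try {
            // ... fetch the order from storage ...
        } catch (Exception e) {
            // Passing the exception as the last argument keeps the multi-line
            // stack trace intact, and the message is only composed when ERROR
            // is actually logged.
            log.error("Failed to load order {}", orderId, e);
        }
    }
}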

Collect the right logs

Different application environments have their own logging needs. The general rules to follow are:

  1. the more traffic, the less logging, but longer persistence,
  2. the more active development, the more logging, but shorter persistence.

This translates into the following setup:

  • development/sandbox: DEBUG or even TRACE, with logs deleted within 24-48 hours.
  • testing: DEBUG most of the time, to efficiently follow up on bug reports, with logs deleted after 2-3 days.
  • staging: INFO, to match production closely, with the occasional fallback to DEBUG if you need to trace problems; logs kept for as long as the staging phase lasts.
  • production: INFO, or even WARN if you have a very high-traffic system, with logs stored for at least a week, often longer.
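
If you prefer configuring Logback in code rather than XML, a minimal sketch of that setup could look like this (the APP_ENV variable and the LogLevelByEnvironment class are my own assumptions, not a standard convention):

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import org.slf4j.LoggerFactory;

public class LogLevelByEnvironment {

    public static void apply() {
        // Assumed convention: each deployment sets APP_ENV to its environment name.
        String env = System.getenv().getOrDefault("APP_ENV", "production");

        Level level;
        switch (env) {
            case "development":
            case "sandbox":
            case "testing":
                level = Level.DEBUG;
                break;
            case "staging":
            case "production":
            default:
                level = Level.INFO;
                break;
        }

        // The cast only works when Logback is the SLF4J backend.
        Logger root = (Logger) LoggerFactory.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME);
        root.setLevel(level);
    }
}

Most teams will express the same thing in logback.xml instead; the point is that the level should follow the environment rather than be hardcoded.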

The environment should also configure the correct appender:

  • use text files wherever possible. They’re open, easy to parse and can be saved to disk without relying on intermediaries, like queues or database connections, to behave correctly (logs stored in databases won’t help much if it’s the database failing);
  • log asynchronously. You really don’t want the application to wait until each and every log message is written.
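
For illustration, here’s a minimal sketch of both points using Logback’s programmatic API, with a plain text file appender wrapped in an AsyncAppender. The file path, the pattern and the AsyncFileLogging class name are arbitrary choices of mine; most setups would express the same thing in logback.xml:

import ch.qos.logback.classic.AsyncAppender;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.encoder.PatternLayoutEncoder;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.FileAppender;
import org.slf4j.LoggerFactory;

public class AsyncFileLogging {

    public static void configure() {
        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

        // Plain text file appender: no queue or database in the critical path.
        PatternLayoutEncoder encoder = new PatternLayoutEncoder();
        encoder.setContext(context);
        encoder.setPattern("%d %-5level [%thread] %logger{36} - %msg%n");
        encoder.start();

        FileAppender<ILoggingEvent> file = new FileAppender<>();
        file.setContext(context);
        file.setFile("logs/application.log");
        file.setEncoder(encoder);
        file.start();

        // Wrap it in an async appender so the application never waits for disk I/O.
        AsyncAppender async = new AsyncAppender();
        async.setContext(context);
        async.addAppender(file);
        async.start();

        context.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME).addAppender(async);
    }
}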

Bonus: Decide explicitly, or have it decided for you

Make sure you have explicit logging configuration everywhere, even if some module doesn’t use logging at all. Otherwise a newly pulled-in dependency might come with its own setup, which your logging library or other dependencies will happily accept. Just last week we spent a few hours tracking down why our integration test suite had suddenly started blasting out DEBUG-level logging from all libraries, slowing it to a crawl. It was a new mocking library we had added to the Maven dependency list, which brought its own logging configuration, overwriting ours.

 

Thanks to Tim Barnett for motivating me to write this post.

The title is all there is

I was invited once to speak at a Toastmasters Leadership Institute, about the Leadership Track of the organization’s educational program. I crafted a short title:

The Leader In You

designed to arouse curiosity, while a short companion abstract explained what the actual content was. Happy with the result, I went over to the event only to realize that the nifty abstract had been stripped, and the only thing printed in the agendas was the title.

Of course the title alone was meaningless. It was never meant to work without the abstract. It wouldn’t have attracted even me. Luckily, lots of people did show up for my session, but the lesson stuck.

For a later appearance, at Tech Open Air Berlin, I still delivered a nice abstract, but this time made sure the title was sufficient:

Weapon of Mass Attraction: Public Speaking

Predictably, during the event, the only piece of information attendees saw was the title. But that was all I needed.

The crazy state of structured data markup

You want to look good for Google. You want it to understand your website, so that it comes up in results often and stands out. You also want Facebook and Twitter to display your links prominently in their crowded timelines. Because you want all that, you’ll likely turn to structured data markup, like I did with the new design of Michał’s Bites. And then you’ll shake your head in disbelief.

Designing a fresh look for Michał’s Bites nudged me to look into Schema.org – a co-product of Bing, Google, Yahoo! and Yandex, meant to help them make sense of the contents of a page. In the words of its publishers:

On-page markup enables search engines to understand the information on web pages and provide richer search results in order to make it easier for users to find relevant information on the web.

And while choosing the right entity for your particular page element isn’t always straightforward, it’s easy to mark it up:

<article itemscope itemtype="http://schema.org/BlogPosting">
  <h1 itemprop="name headline">The crazy state of structured data markup</h1>
  <section itemprop="articleBody">
    <p>...</p>
  </section>
  <p>Written by <span itemprop="author" itemscope itemtype="http://schema.org/Person"><span itemprop="name">Michał Paluchowski</span></span></p>
</article>

It’s consistent. Most entities have a way to mark up a name, url or description. Some have unique attributes, like the articleBody of the BlogPosting above. And it’s readable for both computers and humans.

But wait, there’s more.

Since I’m using WordPress, some of its widgets output markup of the microformats brand. These serve a similar purpose to Schema.org, but with a much smaller dictionary of entities. Little did I know that Google scanned it too and started complaining via Webmaster Tools that my implementation was incorrect:

[Screenshot: Google Webmaster Tools microformats error]

I complied, included the missing markup, and the code became:

<article class="h-entry" itemscope…
  <h1 class="p-name" itemprop=...

There’s still more. Facebook developed its Open Graph protocol, and Twitter has its Cards, both of which – with some extra markup – let me control and improve the way content appears in the services’ respective timelines. Otherwise Facebook may display a random snippet of text with a link, starting with something as “meaningful” as “Comments closed”.

These meant adding some more markup to my code:

<meta property="og:type" content="article">
<meta property="og:title" content="The crazy state of structured data markup">
<meta name="twitter:card" content="summary">
<meta name="twitter:title" content="The crazy state of structured data markup">

Now my content was nicely highlighted on Twitter (though the card doesn’t always show up).

As a consequence, I have the same data three or four times in the page:

  1. reader-visible HTML, with the overhead of Schema.org and Microformats markup,
  2. META markup for Facebook,
  3. META markup for Twitter.

It’s like adding extra CSS for some older versions of Internet Explorer with <!--[if lt IE 9]>. Overhead and waste.

There is, perhaps, an end to this in sight. The W3C, just a few months ago, published a draft specification of Microdata, which is essentially Schema.org as part of HTML5.

I like the Schema.org specification best, because it’s rich, consistent and impossible to confuse with other markup. Using the class attribute for structured data is logically sound (<article class="blog-post"> says a lot about the type of the article), but in any slightly complex web layout maintained by many people, it’s easy to mix up with styling classes. That means it’s likely to be accidentally removed or changed.

At the same time, Schema.org markup gets added right onto the content markup, without duplicating data in the <head> or elsewhere – duplication that is, again, a maintenance nightmare waiting to happen.

Both Facebook and Twitter are certainly powerful enough to enforce their own solutions (think Facebook’s latest announcement of Hack), but if Google and Bing were able to come together and agree on one standard, I’m sure other big names can join the party too. The fewer standards on the market, the broader the adoption, the easier the parsing, and ultimately the better the content served to end users.

Not everyone knows what they’re talking about

I can’t watch presentations anymore. Having been an active Toastmaster for some time, I notice all the speakers’ mistakes. The “uhm” sounds, the non-purposeful fidgeting, the illogical sequence of ideas, the obsolete information. I can barely hear what the speaker is saying.

A layperson watching the same presentation might notice that something’s wrong with it. It’s boring or just not resonating. But they can’t really put their finger on it. Once I explain: “he mixed up presenting an abstract concept with giving concrete advice” they go “ah!”, and confirm “you nailed it!”

Switch to a company doing an employee satisfaction survey. Its results are miserable, so the participants are asked to work out concrete recommendations for what should change. Ask that question separately to line employees and to managers and you’ll get distinctly different answers. Worlds apart.

The employees will tell you all kinds of things that could be jointly labeled as: more money. Not necessarily in monetary form. The managers will come back with flawed processes, working conditions, credibility, vision, communication flaws, and lastly perhaps compensation, but only as one of many factors.

It’s not that the first group is dumb or greedy. It’s that they cannot really put their finger on what they feel is a problem. Perception takes training, and training takes time and appetite for learning. Not everybody wants to think broadly.

When you set out to improve morale, by all means do include all of the affected employees in the conversation, but:

  • drill down past the first answers (symptoms) to the root cause of issues, don’t just take requests and throw some money at them,
  • explain how the results fit into the broader picture of work, taking the time to educate the people you work with.

It’s democracy with a healthy dose of moderation.

A side effect of such surveys can be the discovery of talent. An employee who speaks with an understanding of the broader context is a good candidate to develop and promote. A manager who doesn’t may, in turn, be in the wrong position.