The unconference experience of SoCraTes 2014

Flip-flops, sandals, the occasional pair of Vibram FiveFingers, and some people walking barefoot – that was the definition of the laid-back spirit at SoCraTes 2014, an unconference made for and by software craftsmen, which I joined last week. It was my first ever unconference experience, and while I found the format unsuited to my personality, the energizing power of that community was exceptional.

SoCraTes logo

SoCraTes stands for Software Craftsmanship and Testing – an all-in-one statement embodying the aspiration to become better at creating software tomorrow than we were today. That’s all that was available before the event to describe its contents: no call for papers, no formal agenda, no speaker profiles, nor anything else one would expect from a regular conference. All sessions for the day were announced each morning, often sparked by the previous night’s conversations. One literally arrives without knowing what to expect.

Pitching sessions

For two days, first thing after breakfast, all 150 of us met to fill in the agenda by pitching sessions and sticking them somewhere onto the grid of available times and spaces – a morning conference-room lecture on micro-services, or perhaps an afternoon debate on freelancing strategies in a garden setting? Anything was possible, because in Open Space Technology:

  • Whoever comes is the right people.
  • Whenever it starts is the right time.
  • Wherever it is, is the right place.
  • Whatever happens is the only thing that could have.
  • When it’s over, it’s over.

and perhaps most importantly:

Law of two feet
If at any time during our time together you find yourself in any situation where you are neither learning nor contributing, use your two feet, go someplace else.

SoCraTes attendees spent days strolling between sessions, then evenings and nights playing board games, learning to juggle, explaining complex software architectures with beer glasses, or mob-programming in COBOL.

So, how was it for me? Mixed.

There are consequences to participating in an unconference. On the positive side, there’s the flourishing creativity. Every single person I met at SoCraTes was passionate about their work, to the point of making me ashamed that I didn’t know enough about the field and probably didn’t work hard enough to learn. I actually had to get some work done in between, just to improve my morale – like finishing a blog post and publishing a microscopic framework for content websites.

On the negative side, however, was a certain level of disorganization and a common lack of substance. With no information on sessions published beforehand, there was no way to make a reasonably informed decision about which ones to attend. I had to decide on the spot, at 9 o’clock that morning, based on titles and 30-second pitches. A few people came with prepared sessions, but many just improvised them.

The result was that most sessions ended up being discussions. The session owner would introduce the topic and then some of the people present would talk through selected areas of it. While that’s a useful form for a meeting of peers trying to fill gaps in their knowledge, it doesn’t work well for the less experienced, who would like to learn in a more structured fashion.

In theory, the law of two feet removes the burden of having to sit through a bad session, but in practice leaving one session means arriving at another one midway, which often makes it difficult to join in and follow the flow of information.

After the two days I didn’t feel I had taken away much, despite the amazing variety of sessions and the astonishing scope of knowledge of their attendees.

In Philip Zimbardo’s classification of time perspectives, I’m clearly a future-positive-oriented type. I like to prepare ahead and anticipate what’s coming. I also like concise conference sessions where presenters share the experience of their past work, like describing a specific project, a piece of software they built, and the lessons they took away from it. Concrete, credible, and structured.

Unconferences are great because of the people who attend them – competent, and passionate to the point of craziness. That’s what keeps those who enjoyed SoCraTes coming back: they seek renewed inspiration and energy from others like them. It’s perhaps the most social gathering of geeks.

In the future I will likely stick to more traditional conference formats, with curated agendas. SoCraTes 2014 was a very well organized event, and for anyone who feels comfortable with the format, the upcoming editions will be thoroughly enjoyable.

What goes UP must come down

Back on April 26 I reviewed my first year of wearing the Jawbone UP, with all its benefits and deficiencies. Two months later, on June 21, my band gave up the ghost as its battery life suddenly dropped to a mere few minutes. Today I’m wearing a replacement – courtesy of Jawbone – and have a few more thoughts to share from the experience.

Considering how long the band worked well for me, I was one of the lucky ones:

14 months of nearly flawless operation sounds like an eternity when some people had as many as five consecutive devices replaced in the course of a few months. If that statement sounds sarcastic, it should. I expect a €100+ device to easily last a few years, accepting only a gradual reduction in battery life.

As many UP users have commented, the design and features make it a winner among similar bands. It’s fun and easy to use, and delivers lots of information that helps me steer my habits in a healthier direction. But while the “designed in San Francisco” portion of the product works out really well, the “assembled in China” part clearly needs a retrospective.

The warranty for the Jawbone UP is offered for only 12 months. Pretty short, as electronics regularly come with a minimum of 24 months of coverage. I thought my Jawbone adventure was over, because I didn’t see the point in buying a replacement. Nonetheless, since there’s no harm in asking, I reached out to Jawbone directly and quickly got a response:

Two reset attempts and a phone conversation later, I was offered a replacement. How? “As an exception”, I was told, because Jawbone “wanted to provide the best possible customer experience”. Fair enough, and I’m happy to have received that kind of attention – it certainly says a lot about Jawbone’s approach towards its users. Still, I cannot quite understand why an exception was made for me in particular. Why make exceptions in the first place? Why not extend the warranty to 24 months for everyone?

After my first contact with @JawboneSupport on June 24, the replacement band arrived a few weeks later, on July 17. Most of this time was shipping. Jawbone knew exactly what type, size and color I had, and I was pleasantly surprised that I didn’t need to provide those details.

The new band differs in a few externally visible construction details. The button feels different and it rattles the same way it must have for Zach Epstein, whose article I linked to above. He had his band replaced due to the rattling; for me it’s not an issue.

Replacement Jawbone UP

I’m hoping the changes go beyond the visible and that something was done to improve the band’s longevity.

During the month of waiting for a replacement, one additional issue became clear. All this data that my body is producing and sending off for Jawbone to make a profit from – there’s no way to extract it in case I’d like to move away from the device. No export feature, no official policy on how to grab it. I’m not even sure whether the data can legally be considered mine. It was certainly my own movements that produced it, but since it was processed by the band and application, can I claim ownership?

I’m not so much worried about the possibility of Jawbone selling off my data, as long as it’s anonymized, aggregated and properly controlled. But I sure would like to receive it when I ask for it. These questions will be coming up more often as more devices join the market and users begin ditching them and switching.

For now I’m hoping my new band will accompany me longer than the previous one. And for soon-to-be users of the device, I hope Jawbone will work up enough confidence in their product to start offering two years or more of coverage, with replacements available without exceptions, just in case.

Deciding to decide when the time is right

Never before in history have so many people been required to make decisions so often and with such far-reaching consequences. Where only a handful of wise men used to choose for everybody else, today we’re all decision makers. From the CEO to the junior programmer, everyone’s decisions can send ripples across the globe. We have to learn when and how to decide.

See just how significantly mentions of “decision” have increased since the outset of World War II.

The times when our days were predefined – when we knew how to crank a widget and how many we were expected to crank to make a day’s pay – are over. Many of us in IT and elsewhere work with systems used by millions, where one bad decision could start mayhem. Welcome to the twenty-first century – the age of empowerment and decisions.

Decision road sign

decision
noun
A conclusion or resolution reached after consideration.

Oxford Dictionary

I especially like the bit saying “after consideration”. Without proper consideration a decision is a gamble taken at the mercy of mere chance. Consideration requires data – facts, figures, the knowns and the unknowns – and enough of it to make a decent judgment. Nobody said this better than Sherlock Holmes:

It is a capital mistake to theorise before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.

Sherlock Holmes, The Adventures of Sherlock Holmes

But we can’t sit there all day collecting facts while the world moves on. Our competitors will surely come up with something that’ll pull the rug from under our feet, and before we know it we’ll be out of business!

Not necessarily.

Nowhere are timely decisions as important as in the military. Making one too early gives your enemy ample time to prepare. Making it too late will have you overrun by their forces. In either case lives will be lost and destruction will ensue. General Colin Powell, former United States Secretary of State and Chairman of the Joint Chiefs of Staff, says decisions should be made as late as possible, but not later.

Don’t rush into decisions – make them timely and correct.

Time management is an essential feature of decision-making. One of the first questions a commander considers when faced with a mission on the battlefield is “How much time do I have before I execute?” Take a third of that time to analyze and decide. Leave two-thirds of the time for subordinates to do their analysis and make their plans. Use all the time you have. Don’t make a snap decision. Think about it, do your analysis, let your staff do their analysis. Gather all the information you can. When you enter the range of 40 to 70 percent of all available information, think about making your decision. Above all, never wait too long, never run out of time.

In the Army we had an expression, OBE—overtaken by events. In bureaucratic terms being OBE is a felonious offense. You blew it. If you took too much time to study the issue, to staff it, or to think about it, you became OBE. The issue has moved on or an autopilot decision has been made. No one cares what you think anymore – the train has left the station. [emphasis mine]

Colin Powell, It Worked for Me

Further down, Powell shares his steps for collecting data:

  • tell me what you know
  • tell me what you don’t know
  • tell me what you think
  • always distinguish which from which

Colin Powell, It Worked for Me

These should be printed out on large sheets, posted next to every manager’s desk, facing anybody coming in with a report or request.

Back in the world of software, bad decision-making may not kill anybody, but it can “merely” derail companies, sending hordes of people into unemployment. Any sort of Agile methodology preaches making important decisions at the right time:

  • what is the next most valuable thing we should build?
  • whom are we building this for? what problem are we solving?
  • how will we measure success?
  • what are we not going to build?
  • when should we release this to users?

The worst thing one can do is what, unfortunately, comes most naturally to programmers: jump straight into coding. Programmers hate discussing, debating and researching – they want to code already! That leads to messy, unwanted products, delivered too late to the wrong people.

Take time to ask questions, research answers and make decisions when they need to be made.

  • Order your stories, use cases or requests into a backlog, find out which of them are the most important and take the time to decide how to build those correctly. Postpone any decisions on the remaining items until the first ones get done. Delivering them will change the value of all the other requests, and time spent debating anything other than the immediate work will likely go to waste.
  • Be very open about what you don’t know. Not sure what a business process looks like? How to work with some technology? State it explicitly, refuse to accept the story into the iteration and create a spike instead. This allows you to spend a time-boxed amount of work on researching the missing pieces. In our current process we even created a specific type of sub-task representing Questions and Concerns, just to have them prominently visible.
  • Prototype with proofs of concept. Building any serious piece of software is a test of how suitable the chosen technology is. Start out by building a minimal, full-flow, disposable Proof of Concept that tests the critical portions. Put some pressure on it to see how it scales, so that if it fails, it does so early. And by the way: make sure everybody is aware that it’s disposable and will be built again correctly, right after disposal.

Keep asking yourself whether you have enough data, and keep working to verify that what you think you know is true. Learn and practice how to make decisions. If you avoid them, somebody else will step in, and their decisions may not be to your liking.

API thinking vs. client thinking

Have an API? No? So obscure. Everybody has one these days, as APIs were the foundation of online success in the last decade. But building a good API is hard. In fact, the mindset required is peculiar enough to consider separating the people who will build it from those who will use it.

APIs, along with AJAX, were all the rage that kicked off Web 2.0. By allowing others to tap into the features and data of your application, you could spark a whole community of clients and mash-ups, making you the platform. Twitter is a well-known child of this era, where an API was built first, and then Twitter’s own clients as well as all the independent ones on top of it.

APIs, APIs everywhere

This obviously takes control of the application’s future away from its creators and puts it into the hands of a broader community. Twitter is again the example: features such as retweets were only added to the platform once they became widely used in independent clients. At some point Twitter decided to reclaim control of its brand and user experience, which had started to diverge between applications. Certain requirements were imposed on how tweets may be displayed and what functions should be available. Break those and you may be kicked off the API completely.

For system architects, APIs are the panacea in a multi-device world. With the variety of client applications being demanded – web, native, embedded, large-screen, tiny-screen and so on – we want to keep complexity low by reusing as much code as possible. A properly written API can be shared between all clients and even allows for gracefully dropping support for a legacy generation, like a browser that’s becoming obsolete.

Trello makes excellent use of this graceful degradation pattern:

[T]he website is just a face that chats with the Trello API and that the iOS and Android apps are also just faces and you can make your own face.
(…)
[T]here’s a special face out there for people using Internet Explorer 9.

There’s the API shared by all official and unofficial clients, each one called a “face”, and there’s a special, older version of the web face that’s left to support the remaining users of Internet Explorer 9. Brilliant.

  • Yes! I want to build an API. How do I go about it?
  • With foresight and planning.

APIs are a special case of Separation of Concerns and here’s where I’m starting to think that APIs and clients should be built by different people:

  • clients are focused on their immediate needs: “I’m building feature X and need data A, B and C formatted this way”;
  • APIs cater to many clients and their different, often incompatible needs.

If the same person writes the client and the API, or even if they’re separate people on one tightly knit team, they are much more likely to reconcile the conflict by leaning towards the immediate need of the client, away from the broader needs of the ecosystem. Every subsequent client that comes in with its needs will receive its own special endpoints. Soon you’ll have an explosion of similar, oddly named methods for very specific use cases, with little reusability, where a simple change may require modifications to hundreds of lines of code. In other words, you’ll have built a monolith where “API” is merely a different name for the application’s model layer – and since that model is separate from the rest of the application, the complexity becomes even worse.

Then, once it’s in production, you’re dead in the water, because:

Public APIs are forever.

Joshua Bloch, How to Design a Good API

Anyone can use a public API and you’ll have to maintain backwards compatibility for a long, long time.

However, if you task different people with building the APIs and the clients, you’ll get a lot of conversations, and often conflicts, which are essential for getting the best result for the broadest range of use cases.

Make sure the API team consists of people with as wide a perspective as possible. They should think well beyond the immediate requests they receive, weighing them against all the similar requests from the past and looking forward into the future. What else could be required from this method later? What else might someone want to extract from this particular data set? Will it need filtering, sorting, paging?
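To make this concrete, here’s a minimal sketch in Python (the data and function names below are made up for illustration, not any real API). The first function is what client thinking produces – one endpoint per immediate need – while the second is what API thinking aims for: covering the same need plus filtering, sorting and paging for clients that don’t exist yet.

INVOICES = [
    {"customer": 1, "paid": False, "date": "2014-07-01"},
    {"customer": 1, "paid": True,  "date": "2014-06-15"},
    {"customer": 2, "paid": False, "date": "2014-07-10"},
]

# Client thinking: one endpoint per immediate need.
def get_unpaid_invoices_for_dashboard(customer_id):
    return [i for i in INVOICES
            if i["customer"] == customer_id and not i["paid"]]

# API thinking: one broader method that covers this need and many future ones.
def list_invoices(filters=None, sort_by="date", page=1, page_size=50):
    items = [i for i in INVOICES
             if all(i.get(k) == v for k, v in (filters or {}).items())]
    items.sort(key=lambda i: i[sort_by])
    start = (page - 1) * page_size
    return items[start:start + page_size]

# The dashboard's need becomes just one call among many possible ones:
# list_invoices(filters={"customer": 1, "paid": False}, sort_by="date")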

Building a good API requires following guidelines that are not the ones usually proposed for client design:

  • violate YAGNI – think of what might be useful in the future, but leave out things that are easy to add, because removing anything is much harder;
  • write a broader-than-usual set of features for each method, weighing the possible performance penalties against power;
  • displease everyone equally – clients will oftentimes need to curb their requirements to allow for broader reusability;
  • document extensively – your documentation will become the guide to understanding the contract of each method: what it expects and what it returns. Without it, you’ll be swamped by questions and complaints.

Joshua Bloch, creator of, among others, the Java Collections API, shares a number of excellent recommendations for building APIs in a Tech Talk he gave years ago at Google. It’s well worth the hour to watch it:

If the API is done right, it’s an investment that pays back many times the effort put into it: a multitude of clients can use it, and it gives you the flexibility to rapidly build features that weren’t previously thought of. For any regular software company it’s possibly the most complex task it will handle, and you should put your best, brightest people on it. And make sure they spark conflicts with all the developers building clients, because that means they’re having real conversations about how to build the best solution for everyone.

Coding is cheating

Programming is perhaps the only job where lying, cheating and deceiving will not only get you paid but also praised for being innovative and creative. That’s because computers are severely limited. We’re literally fitting square pegs (the real world) into round holes (0s and 1s).

Assuming you already know that everything in the computer world is represented in binary – combinations of 0s and 1s – consider the simplest of examples: trying to store the value 0.2:

0.2 decimal = 0.00110011... binary

There is no precise binary representation of the decimal 0.2. Instead there’s a repeating pattern of 0011 after the radix point. To make matters worse, a computer will only store a limited number of digits, say 32 bits for a fraction like the one above. And due to the specifics of number storage, the actual representation will only keep 26 digits of the recurring pattern 00110011…. The rest is cut off and gone as if it never existed. So the number the computer will actually store is:

0.199999988079071044921875

Close enough to round it off to 0.2, but still, what a cheat!
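You can watch the same cheat happen from inside a program. A quick check in Python (which uses 64-bit doubles, so the error hides a few more digits down than in the 32-bit example above):

from decimal import Decimal

print(Decimal(0.2))      # 0.200000000000000011102230246251565404236316680908203125
print(0.1 + 0.2 == 0.3)  # False – the tiny rounding errors add up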

We continue to work around constraints, this time of memory. All computer memory is limited and we shouldn’t waste it unnecessarily. So when we want to refer to a value in a program, we often won’t store another copy of it, but a pointer instead:

value = "banana"
valuePointer = &value

The exact code will differ depending on the language used, but essentially it says:

  1. save the value banana under the name value, then
  2. assign to the name valuePointer the memory address of value (it points to where the original value is stored).

Pointer illustration

As a consequence we’re using much less memory, because the banana is stored only once. But as a side effect (sometimes desired), if we later change value = "kiwi", then valuePointer will also suddenly return kiwi.
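The code above is pseudocode; as a rough Python analogue (values made up for illustration), two names bound to the same list behave exactly like that – the fruit is stored once, and a change through one name is visible through the other:

value = ["banana"]
value_pointer = value    # no copy is made, just another reference to the same list
value[0] = "kiwi"
print(value_pointer[0])  # prints "kiwi" – both names see the same object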

Let’s look at something more tangible – a sphere.

Sphere

You know what a sphere looks like; you can recognize one when you see it. But a computer is inherently incapable of producing a real sphere (though that’ll change once ray tracing goes mainstream – thanks, Marek). For reasons that require a university course to explain, 3D spheres are drawn with… triangles.

Sphere

There are just so many of them, and they are so tiny, that you are fooled into seeing a smooth surface. In the first 3D games that surfaced in the ’90s you could actually see the edgy surfaces. Nowadays computers have enough horsepower to draw millions of triangles without much sweat.

Another funny concept is lazy loading. We usually store data in some kind of database, which makes it expensive to retrieve: there’s the time needed for a network call, the database engine reading files from disk, and so on. It all adds up, hence we want to make as few database calls as possible, so that users won’t have to constantly stare at a screen saying “loading”.

Let’s say you want to open up a contract. We’ll represent it in code as an object that includes all relevant information – ID, date of signing, ship-to and bill-to companies etc. We’ll also tell you it has line items, which we may conveniently program as:

getLineItems()

where calling the above function returns the list of line items. However… we don’t really have those line items ready to display, because we purposefully didn’t ask the database for them just yet. You might not need them at all – you might just want to check some basic details of the contract. So only at the moment you ask for the line items explicitly, and getLineItems() is called, do we make the query to the database (and let you wait for it), then return and display the list.
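Here’s a minimal sketch of that pattern in Python (the Contract class and fetch_line_items_from_db are made up for illustration; a real implementation would talk to an actual database). The expensive query runs only on the first call to get_line_items(), and the result is cached for later calls:

def fetch_line_items_from_db(contract_id):
    # stand-in for the real, expensive database query
    print("querying the database for contract", contract_id)
    return ["10x widget", "2x gadget"]

class Contract:
    def __init__(self, contract_id):
        self.contract_id = contract_id
        self._line_items = None           # nothing loaded yet

    def get_line_items(self):
        if self._line_items is None:      # first call: go to the database
            self._line_items = fetch_line_items_from_db(self.contract_id)
        return self._line_items           # later calls: reuse the cached list

Creating Contract(42) is cheap; the waiting only happens if and when get_line_items() is first called.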

Finally, some problems in computing are very hard and extremely expensive to calculate at scale, even for the beastly modern machines we have available. If you’re using any sort of map application, you’re seeing one such problem: calculating the best route between two points.

In order to perfectly calculate the best route – be that the shortest or the quickest one, whatever the criterion – the computer would have to calculate the distances and routes between every pair of points in the database. The number of calculations to perform grows with the square of the number of points. Warsaw alone has thousands of addresses; think how many points there would be on a route between, say, Warsaw and Berlin.

The trick we use in these hard cases is heuristics, which boils down to using extra information we may have and allowing for suboptimal results, provided the ones we deliver are good enough. For finding the best route on a map, we already know the locations (latitude and longitude) of all the points. We can use that information to limit the area in which we’ll calculate the routes, often to a shape resembling an ellipse:

Shortest path calculation area

We won’t consider points and roads outside of this area at all. That’s why, when trying to cross Warsaw from north (say Marymont) to south (Ursynów), the GPS might offer you a route straight through the city center, while a quicker and more convenient route may lead along the city bypass. But the calculation is much faster.
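A toy version of that ellipse trick in Python (straight-line distances and made-up coordinates – nothing like a real routing engine): keep only the points whose detour through them is at most, say, 30% longer than the direct line between start and end, which geometrically is an ellipse with the two endpoints as its foci.

import math

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def candidate_points(points, start, end, slack=1.3):
    # Keep only points p for which going start -> p -> end is at most
    # `slack` times the straight-line distance from start to end.
    direct = distance(start, end)
    return [p for p in points
            if distance(start, p) + distance(p, end) <= slack * direct]

Routes are then only searched among these candidates, trading a guaranteed optimum for a much smaller problem.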

It’s all cheating. Bending and stretching the material we work with – computers – in order to deliver bigger, better and more vibrant experiences to users. We’re not sorry, not at all. It’s like solving elaborate puzzles every single day, while getting paid and praised for it. The joy of programming.