Unexciting. That’s the one word I would use to describe this year’s edition of GOTO Berlin. And I suspect it’s a sign of advancing maturity. Just not sure whether it’s the industry’s or my own.
This year was my second time at GOTO Berlin, after I attended in December 2015. I came back because the first time I was really satisfied with the content. Lots of fresh, practical, accessible ideas, well-presented (unlike, say, QCon London 2016). Plus, the keynotes were tremendous fun.
Keynotes are always tricky for the organizers, because you want them to be somewhat abstracted from the low-level themes of sessions, but not too far away, so that the audience can still relate. Plus, they need really engaging speakers. GOTO Berlin pulled this off perfectly. The 2015 edition had space-themed keynotes about the Apollo and Space Shuttle programs, mighty interesting. This year featured a variety: Mars exploration, data science, autonomous systems and data security. Really, really good content.
This year’s sessions, however, surprised me with how little new material there was. Nothing really bleeding-edge. The same topics I’ve been hearing about for the last two or three years: microservices, DDD, containers (and agile, of course, but I stayed away from “soft” themes). Even serverless, which I thought would be this year’s buzzword, seems ubiquitous and long since tamed.
Not that any of these are boring—on the contrary, they’ve now reached mainstream adoption and presenters have a lot of real-world experience to share (as opposed to theories and hopeful evangelization). It’s just… what happened to the breakneck speed of “innovation” in this industry?
Granted, there was a lot of talk about AI, which seems to be the bleeding-edge stuff these days. But I stayed away from those sessions too. We still need to prove we can deliver well-structured “dumb” software before we can think of constructing anything “smart”.
So, what did I see?
Rust and WebAssembly were the only truly new things for me. Both are emerging languages and platforms, and the former may help us finally make proper use of concurrency and multi-core processors:
Rust has explicit mutability and data ownership. “Enables taking better risks.” @argorak #GOTOBer
The rest of the sessions I attended were all about: how do you deal with the real world? Not some fresh, hot framework or cloud service, but the day-to-day grind of working in a business environment, modeling a highly irregular world in code, maintaining multi-year- and decade-old legacy code. All of this in a time of cheap networking, favoring distributed systems, where failure is the rule, not the exception.
It may be that the industry has changed. Reflected on itself and found that technological novelties are fun, but often leave real-world problems unsolved. Came back to Earth and asked itself “what are we really trying to achieve?” Or… it may be that I’m the one who changed. I still enjoy new toys, but there’s work to do down here. An approach that informed my choice of sessions to attend, and so influenced my impressions.
Git hard or go home
I am in love with Git. It’s the neatness junkie’s dream come true. Multiple people can code away safely, independently, then every commit can be written and rewritten at any time to end up with a tidy, coherent product. As long as you know what you’re doing, that is.
Over the years I’ve worked out a few strategies for taming Git, making both my own and my collaborators’ work easier. Perhaps these’ll save you some headaches down the road.
Understand the model
A prerequisite to understanding most of Git’s concepts is learning how the data model works. It’s so important that before I let developers do anything with Git, I force them to read through the “Git Basics” chapter of Pro Git. The rest they can learn as they go. In essence:
all commits are nodes in a directed graph, where
when multiple commits point to the same parent, they’re all branches,
when one commit has multiple parents, it’s a merge,
there’s no such thing as “branch” or “tag” in itself, they’re just labels given to specific commits—friendlier names to refer to than 63949df, so:
deleting a branch or tag merely deletes the label. The commit 63949df is still there and you can reference it. At least until Git decides to do garbage collection, when all orphaned commits are permanently deleted.
a “fast forward” merge moves a label to a different commit, as long as the branch you’re merging into didn’t have any commits of its own. Since there’s no separate merge commit for it, there will be no trace of this merge later in the graph, so make sure that’s what you want. Otherwise do git merge --no-ff.
Take your time to study the model, the charts and perhaps the commit graph in some existing repository. Working with branches will suddenly make much more sense for you, and more importantly you’ll grasp the concept of rebasing much, much quicker.
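A quick way to make the “labels on a graph” model tangible is to poke around an existing repository yourself. Here’s a minimal sketch, assuming a checkout with a master branch:
git rev-parse master
git log --oneline --graph --all --decorate
The first command resolves the master label to the commit hash it points at (something like 63949df); the second draws the whole graph with every branch and tag label attached to its commit.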
Rebasing is the wonder broom that keeps the commit graph tidy. It’s what you do to get rid of commits saying Merge branch 'master' of https://github.com/your-name/your-repo.git which make the commit graph look like a Guitar Hero script—completely unreadable. A few ways to put history rewriting to work:
amend broken commits, if you:
mistyped the last commit’s message, do git commit --amend and edit the message,
forgot to stage some changes or add a file, do
git add <missing changes>
git commit --amend -C HEAD
squash obsolete commits, once you realize a commit other than your last one could use amending, do:
git add <missing changes>
git commit --fixup=<hash of commit to amend>
git rebase -i --autosquash <hash of commit to amend>^
rebase instead of merge on pull, pretty please. This makes sure master (or any other branch) is a single, straight line that’s easy to follow. It will also cause fewer conflicts and make the remaining ones easier to resolve. Use either git fetch && git rebase origin/<your branch> or git pull --rebase.
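If remembering the flag is a chore, you can make rebasing the default behavior of git pull with a one-time configuration change (optional, but convenient):
git config --global pull.rebase true
From then on a plain git pull will rebase your local commits on top of the fetched ones instead of creating a merge commit.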
There’s a caveat to rebasing—it changes history, discarding some commits and replacing them with other ones. That’s not an issue, as long as all of the commits you’re replacing are only local to your sandbox, but once you pushed these commits out into the world, you might be peeing into the pool.
Start juggling too many [branches] at once, and you’re bound to drop a few. In most source control systems, you can create hundreds of branches with no performance issues whatsoever; it’s the mental overhead of keeping track of all those branches that you really need to worry about. Your developers’ brains can’t exactly be upgraded the same way your source control server can (…)
To stay on top of your repository:
keep branches short and small, branch off, make your changes, create a pull request and merge as soon as possible. I used to advocate the gitflow model in the past, but nowadays I consider its long-lived develop and release branches an anti-pattern, and favor the simpler GitHub Flow. Fewer branches mean fewer (ideally just one) sources of truth.
delete remote branches, once merged, usually right after a pull request is closed (the commands are sketched right after this list). If you ever need to resurrect a branch and add something to it, you can always push it anew. Otherwise it’s hard to see which branches are still active.
don’t git cherry-pick or commit the same patch of changes to multiple branches. It’s confusing and will likely give you headaches during merging. If you stick to a single, long-lived master branch with short-lived feature branches, you shouldn’t ever need cherry-picking. Otherwise, use rebasing to move a commit from one branch to another.
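Put into commands, the routine for a single short-lived branch might look like this (the branch name is hypothetical):
git checkout -b fix-login-typo
git push -u origin fix-login-typo
Commit your changes, push, open a pull request, and once it’s merged clean up both copies of the branch:
git checkout master
git branch -d fix-login-typo
git push origin --delete fix-login-typo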
Once your changes are out there, on the remote(s), you might as well assume everybody else has already fetched them. GUI clients like SourceTree have auto-fetching enabled by default. If you now realize you’d still like to do some amendments—say fix a silly typo or rebase the master branch—you’re screwed. You’ll have to ask everybody to git reset their local branches and cause a lot of grief.
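For the record, “asking everybody to git reset” means each collaborator has to run something along these lines—a sketch that discards whatever local commits they had on the rewritten branch:
git fetch origin
git checkout master
git reset --hard origin/master
That alone should be motivation enough to keep history rewrites local.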
Take your time before you push anything, and never, ever mark the “Push changes immediately to remote” checkbox in your Git client. It’s like disabling the Undo feature. Commit your changes locally, leave them baking for an hour or a day, then review them—check for missing or obsolete changes and files, and whether the commit messages are correct—and only then run git push.
Fewer and smaller merge conflicts, actually useful commit graphs, an immediately obvious location of the most current code—all benefits I’ve drawn from applying these practices. Call it Git etiquette or call it a way to improve the ratio of pleasure to pain in using Git. And do share your own tips in the comments, please.
Antique inspirations for software architects
Around the year 15 BC, the Roman architect Vitruvius completed his impressive, ten-volume treatise on architecture—De Architectura. It covers every possible type of man-made structure ever needed by the people of his time, but starts out wisely by prescribing basic qualities that good architecture must embody:
All… must be built with due reference to durability, [utility], and beauty. Durability will be assured when foundations are carried down to the solid ground and materials wisely and liberally selected; [utility], when the arrangement of the apartments is faultless and presents no hindrance to use, and when each class of building is assigned to its suitable and appropriate exposure; and beauty, when the appearance of the work is pleasing and in good taste, and when its members are in due proportion according to correct principles of symmetry. [emphasis mine, and I prefer ‘utility’ over ‘convenience’ in the original quote]
Sound familiar? It certainly did to me when I was brainstorming the set of values that our newly minted team of software architects was to adopt. Durability, Utility and Beauty described perfectly how I thought about good software architecture.
Durability means that software must stand the test of time. Work as long as necessary without crashing or requiring mid-way reboots. And it shouldn’t need to be rewritten from scratch every few years, but instead adapt well to changing needs. This principle guides the selection of languages, platforms, frameworks and tools.
Utility means the software must fulfill the requirements it was given, and must do so well. If it’s a user-facing application, it must be easy to use and supportive; if it’s meant to handle high load, it needs to scale well. If it exposes an API for others to connect to, that API has to be comprehensive and flexible. We need to build software with its intended purpose always in mind.
Beauty means the software must be pleasing to the eye. A clean, simple UI, laid-out and colored for readability. Inside, a logical layout of components, packages and modules. Good, clear but concise naming, plenty of whitespace, short functions, variables of unambiguous purpose. Computers will read anything. We need to code for people. This principle underlies front-end and coding style guides.
Bridges ideally embody durability, some serving the public for centuries, like the Charles Bridge in Prague, opened in 1402. Getting people efficiently and safely across rivers, valleys and canyons is as clear an illustration of utility as we could hope for. And while beauty is in the eye of the beholder, to us the Millau Viaduct truly is beautiful, with its slender structure suspended over the Tarn valley like a delicate web of strings.
We’re using these values to ask better questions. Is it durable? Will this fancy framework you have spotted at some conference be around in two years? Are you sure we need the user to click three times to submit the order? Can we make it easier? Can you read your own code? Can your new team colleague read and understand it? Old lessons that continue to hold true in an age technically so much more sophisticated than when they were put in writing.
The state of the craft at CraftConf 2015
CraftConf in sunny Budapest aims to be the conference in Central Europe where developers share what’s state of the art in building software. Joining the 2015 edition was an easy choice, after the first edition a year earlier received rave reviews from attendees. Three days, 16 talks and a workshop later I emerged with 6,062 words worth of notes, distilled into a few broad trends that seem to represent the edge of the industry right now.
Microservices entering the mainstream
At SoCraTes 2014, less than a year ago, a lot of people were still asking basic questions about microservices—what they are, how to build them, how they’re different from any other SOA approach. Not anymore. CraftConf saw a lot of companies sharing battlefield experiences with these architectures—what works, what doesn’t, whether to consider the approach at all and how much it will cost.
Microservices are a means to scale the organization—allow many people to work on the same system, reduce dependencies, and thus enable rapid development without having to coordinate huge releases or continuously break one another’s code. They’re routinely described as “easier to reason about” and built to “minimize the amount of program one needs to think about” by the very virtue of being small and focused on doing one task well.
Microservice ecosystems at established companies routinely consist of 50+ and even 100+ services, usually built in a whole range of technologies. If a single service is small enough, say “2,000 lines of code”, or “takes 6 weeks to build from scratch by a newcomer” (both actual definitions), then building one in a wholly new technology is easily sold as an inexpensive POC.
All companies successfully implementing microservices started out with monolithic applications and split out the services one at a time afterwards. This approach was broadly recommended, even for greenfield projects, because monoliths are easier to refactor while the organization works to research and understand its product better.
The supporting tooling is already there and mostly quite mature. Early adopters, like Netflix, had to build their own tools, which they later shared and improved with the open source community. Meanwhile, commercial tools popped up, offered by specialized vendors, with support contracts and other conveniences that pointy-haired bosses like so much.
To benefit from microservices without getting killed, you need to:
automate everything—builds, testing, deployment, provisioning of virtual machines and container instances. It’s the only way to manage hundreds of running service instances.
monitor everything—each service must have its own health checks, showing whatever information is most relevant. Typically numbers of transactions, timings, latency, hardware utilization, but often also business KPIs. You’ll also need a means to track requests flowing through the system, e.g. by IDs passed along in service calls. This information must be available to development teams in real time.
build for failure—crashes, disconnections, load spikes, bugs, all of which will occur more often than with monoliths. Make sure failures are reported and tracked, and that the system is self-healing, via techniques like circuit breakers, bulkheads or resource pools. Work with business representatives to determine, for each use case, whether consistency or availability is more important, because you cannot always offer both.
Not everyone was pleased with such a high degree of saturation with microservice themes at CraftConf:
#craftconf so are you all enjoying MicroServiceConf? ;)
But that only proves that this architecture is well past the stage of early adoption and entering into the mainstream.
Everybody has an API
The consequence of adopting microservices is that APIs become the norm, starting out internally and often, “sooner than you think”, being made available to the outside world, either to support different devices or to integrate with 3rd parties. Their costs have also become more evident:
An API is for life. Once it becomes public, it’s very difficult to change, so early design mistakes become much more difficult to fix.
Use techniques that allow you to add new fields and methods to an API without disturbing clients.
Semantic versioning, coupled with decent deprecation procedures, helps smooth the process of making backwards-incompatible changes.
You’ll still have customers who “missed the memo”, whom you may have to offer commercial legacy support for an additional fee.
Create good documentation, generated and published automatically.
Assign people who will be monitoring community sites, like StackOverflow, for questions regarding your API.
@ade_oshineye (of Apprenticeship Patterns fame) was spot-on summarizing the process of deciding whether to create an API by showing an analogy to puppies—everybody wants one, but not everyone is ready.
Frameworks these days tend to dictate the design of applications by suggesting organizing code by layers into packages like models, controllers or views. The consequence is that:
modifications to a single business scenario often require changing code in many packages,
it’s all too easy for components to reach into layers outside their hierarchy, too deep down for their own good, while they should only be calling public APIs,
the package structure tells you very little about what the application does.
Several speakers postulated that this should stop and code should instead be packaged by business units, with relevant models and controllers sharing the same packages. Encapsulation could be improved further by changing method and component access to package-private instead of the usual public. It’s an old theme that finally seems to be gaining traction, with past support from developer celebrities like Uncle Bob:
What do you see? If I were to show you this and I did not tell you that it was a Rails app, would you be able to recognize that it is, in fact, a Rails app? Yeah, looks like a Rails app.
Why should the top-level directory structure communicate that information to you? What does this application do? There’s no clue here!
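To make the contrast concrete, here’s a rough sketch of the two layouts for a hypothetical ordering feature (the paths are invented for illustration):
app/controllers/orders_controller, app/models/order, app/views/orders/ — packaged by layer, the framework default
app/orders/controller, app/orders/model, app/orders/views/ — packaged by business unit
Same files, but only the second structure answers “what does this application do?” at a glance.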
Two themes notably absent from the vast majority of CraftConf talks were TDD and Agile. They implicitly became accepted as defaults—the baseline, cost of entry and survival in the game of software development.
You basically have no chance if you are still at a company doing waterfall intentionally @cagan #CraftConf
Microservices will only reach their full potential when their owning teams can work and make decisions independently of central authority, including frequent deployments to production.
Frequent, decentralized deployments require comprehensive, automated test coverage.
TDD drives the design of code, meaning every line exists only to make a failing test pass. “Every new line of code immediately becomes legacy code”, so code as little as possible.
Google continues to build new tools to solve business problems they are facing—tools that often get adopted company-wide, but are never enforced. They call it an evolutionary approach, where the best ideas get adopted simply on merit, while the others die in obscurity.
@cagan argued that the whole top-down product design cycle is broken, being grossly inaccurate and too heavy, and that companies need to adopt bottom-up idea generation and rapid testing instead.
It’s been a blast to mingle with some of the 1,300 energetic attendees, meeting friends, old and new. @MrowcaKasia and @kubawalinski turned from Twitter handles into real, live, and engaging persons. The slightly grumpy but ever competent @maniserowicz is always a pleasure to meet, and then there’s the whole SoCraTes gang of @leiderleider, @c089, @Singsalad, @egga_de and @rubstrauber, whose passion for community and craftsmanship continues to inspire.
The 2015 CraftConf was only the second edition, but it was already organized with such painstaking attention to detail that I can easily call it the best conference I’ve ever been to. The team is already fishing out and working to improve what didn’t quite work this year, so next year’s event is bound to be even more polished.
What goes UP must come down
Back on April 26 I reviewed my first year of wearing the Jawbone UP, with all its benefits and deficiencies. Two months later, on June 21, my band gave up the ghost as battery life suddenly dropped to a mere few minutes. Today I’m wearing a replacement, courtesy of Jawbone, and have a few more thoughts to share from the experience.
Considering how long the band worked well for me, I was one of the lucky ones:
Just got an email, my second @Jawbone Up band is dispatched (two previous died). Hoping this one will work.
14 months of nearly flawless operation sounds like an eternity when some people had as many as five consecutive devices replaced in the course of a few months. If that statement sounds sarcastic, it should. I expect a €100+ device to easily last a few years, accepting only a gradual reduction in battery life.
Like many UP users have commented, the design and features make it a winner among similar bands. It’s fun and easy to use, and delivers lots of information that helps me steer my habits in a healthier direction. But while the “designed in San Francisco” portion of the product works out really well, the “assembled in China” part clearly needs a retrospective.
The warranty for the Jawbone UP is offered for only 12 months. Pretty short, as electronics regularly come with a minimum of 24 months of coverage. I thought my Jawbone adventure was over, because I didn’t see the point in buying a replacement. Nonetheless, since there’s no harm in asking, I reached out to Jawbone directly and quickly got a response:
@mpaluchowski DM us with your email address! We'll get in touch to help out.
Two reset attempts and a phone conversation later I was offered a replacement. How? “As an exception”, I was told, because Jawbone “wanted to provide the best possible customer experience”. Fair enough, and I’m happy to have received that kind of attention; it certainly says a lot about Jawbone’s approach to its users. Still, I cannot quite understand why an exception was made for me in particular. Why make exceptions in the first place? Why not extend the warranty to 24 months for everyone?
Counting from my first contact with @JawboneSupport on June 24, the replacement band arrived a few weeks later, on July 17. Most of this time was shipping. Jawbone knew exactly what type, size and color I had, and I was pleasantly surprised that I didn’t need to provide those details.
The new band is distinct in a few externally visible construction details. The button feels different and rattles the same way it must have for Zach Epstein, whose article I linked to above. He had his band replaced due to the rattling; for me it’s not an issue.
I’m hoping changes go beyond the visible and something was done to improve the band’s longevity.
During this month of waiting for the replacement, one additional issue became clear. All this data that my body is producing and sending off for Jawbone to make a profit from: there’s no way to extract it in case I’d like to move away from the device. No export feature, no official policy on how to grab it. I’m not even sure whether the data can legally be considered mine. I’m sure it’s my own movements that produced it, but since it was processed by the band and application, can I claim ownership?
I’m not so much worried about the possibility of Jawbone selling off my data, as long as it’s anonymized, aggregated and properly controlled. But I sure would like to receive it when I ask for it. These questions will be coming up more often as more devices join the market and users begin ditching them and switching.
For now I’m hoping my new band will accompany me longer than the previous one. And for soon-to-be users of the device, I hope Jawbone will work up enough confidence in their product to start offering two years or more of coverage, with replacements available without exceptions, just in case.