Reviews

This blog

This blog is something of a vanity project. It has a readership of approximately zero, excluding myself. Sometimes I wonder what the point is of having a blog with no readers. I have no plans to abandon it, however.

You might think that the stereotypical Yorkshireman attitude of “I say what I like, and I like what I bloody well say” is exaggerated for comic effect, but it’s true, I read the blog and I really do like what I bloody well say.

A Yorkshireman

And by read it, I mean, I proof-read it. A lot. And by a lot, I mean, really a lot. I do this because I am not anything like as good at writing as I think I should be. Sure, I like what I bloody well say, but I often don’t like how I’ve bloody well said it. Writing down even simple concepts, like why Foden doesn’t do well for England, takes me a great deal of work to get right, from polishing the phrasing to shepherding my thoughts into a coherent structure. My father is a published author and cartoonist; I clearly did not get the drawing genes, but I surely still got the writing ones, right?

Dispersion of writing gene discussion

Prior to starting the blog, I had read quite a few Tim Harford books, smugly thinking to myself that Tim has such a formulaic writing style. Thanks to this blog, I am now aware that I’m not fit to click save on his Word documents.

In theory, writing this blog is good practice and I’ll show the kind of improvement my entire readership is praying for. And there’s more: writing about how to lose at Dominion has helped me improve as a player. For example, I no longer underestimate alt-VP strategies quite as much as I used to.
There are benefits to writing here, but it is also a vanity project that ironically holds a mirror up to some of my shortcomings. In the spirit of trying to improve the benefits-to-revealed-flaws ratio, I could use it to do what I should have done in the past: annotate, comment on and highlight valuable parts of books I have read as a way of remembering what was in them.

Football Hackers

I had this thought approximately a year ago when I was reading the book “Football Hackers” by Christoph Biermann. Unfortunately, there is little in the book to write home about. It is, however, very well written. I read it all the way to the end quite happily despite realising early on that the book was not going to share any notable insights on the titular subject. I can more or less spoil the book by telling you that the owners of Brentford and Brighton made it big by betting. As for any interesting detail on that front, well, they use Singaporean bookmakers who are not spooked by large bets, and bet on matches in over/under markets, which reduce the three possible outcomes of a football match to two, thereby simplifying calculations.

Insights on the football front are even more elusive. Reading between the lines, it sounds like Brentford’s success at corners is due to a focus on making sure they have players where the ball tends to go after the first contact is made. I always found the simplistic “more players in the box” explanation of Brentford’s set-piece success very unsatisfying. If that were the reason, it should be easy for other teams to achieve Brentford’s level of set-piece success, and furthermore, Brentford should suffer a lot of devastating counter attacks. I don’t see either of these things. Likewise, my son’s exasperation at how Brentford consistently stab home the messy rebounds and second balls from corners is explained by what is implied in the book.

How To Win The Premier League

Asides aside, this brings me on to what I wanted to write about, which is the book “How to win the Premier League” by Ian Graham. It is the opposite of “Football Hackers” in that the author was employed by elite football clubs rather than being a journalist on the outside trying to peek in. The result is a book that is less well written but does actually share some insight on data and processes used by the football industry.

The content speaks to me on a couple of levels: I’m interested to know how clubs use data to analyse games, and I can also remember the players he recommended at Spurs and Liverpool. While he had mixed success and infrequent influence at Spurs, he was later involved in the infamous transfer committee at Liverpool, a committee notorious mainly because of its record of mediocre signings.

Explanations and excuses for the early failure at Liverpool feature in some detail, but the book has the same annoying reluctance to tell you what really went on as every other football book I’ve ever read. I’m going to give the author a pass on this front since he was only associated with Spurs and was a hybrid worker (translation: he worked from home a lot) during his time at Liverpool. Don’t read the book if you want to know what Klopp is really like, similarly don’t read any football related book if you want to know what goes on or what anyone is really like.

The book explains xG and describes current, state-of-the-art xG models and all the extra considerations that xG models have beyond shot locations. I think xG numbers are basically lies, but at least the lies are better now.

One thing that is apparent from reading the book is that his system for rating players is very, very good. The list of successes is extremely impressive: Bale, Salah, Firmino, Mane, Alisson, Van Dijk, Coutinho, Robertson, Matip, Fabinho etc. were all recommended by the author to Spurs and Liverpool. There is some good fortune in that Klopp rated many of the players he suggested highly anyway. In the case of Robertson – recommended as an attacking full back who couldn’t defend – Klopp’s view was that Robertson’s defending ability mattered far less than his attacking ability, and he was prepared to ensure Robertson had cover when defending. Plus there is considerable fortune in having the talents of Klopp as a manager to get the best out of the signings. All the same, Liverpool have been incredibly successful in the transfer market.

The system Graham uses to evaluate players is basically Karun Singh’s expected threat “possession value” model, where the values are calibrated to Premier League level by adjusting them based on estimates of how good other leagues are, as judged by the Dixon/Coles model. I never had a need to worry about the latter, but I did try to use the expected threat (henceforth “xT”) model to evaluate my junior football teams.

Graphic of xT values
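
To make the mechanics concrete, here is a toy sketch in JavaScript of how an action gets scored under xT. The zone values below are invented for illustration; a real model, like Karun Singh’s published grid, uses a much finer 12x8 grid of values learned from data.

// Toy xT sketch. Each zone of the pitch has a value: roughly the
// probability that possession in that zone eventually becomes a goal.
// An action is credited with the change in zone value it produces.
// These zone values are made up for illustration.
const xT = [
  [0.01, 0.02, 0.05], // row of zones nearer our own goal
  [0.01, 0.03, 0.08], // row of zones nearer the opposition goal
];

const actionValue = (from, to) => xT[to.row][to.col] - xT[from.row][from.col];

// A successful pass from midfield into the most advanced zone:
console.log(actionValue({ row: 0, col: 1 }, { row: 1, col: 2 }).toFixed(2)); // 0.06

Sum those credits over a match and you have an xT score for each player, which is exactly what I tried to do with my teams.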

Considering the aphorism that all models are wrong but some are useful, I have to say I didn’t find xT very useful. The ability level of the kids in my junior football team was such that they got microscopic scores. Similarly, the performance levels of my boys were inconsistent, and the numbers I got out of xT analysis might as well have been random. We would win comfortably, look at the xT scores, and be told that, in aggregate, my players made almost no positive contribution whatsoever. It’s very depressing to be told that all the good stuff that resulted in goals and shots was offset by misplaced passes and failed dribbles.

The book attributes the transfer success to their xT model. I have therefore massively raised my opinion of xT models; clearly they have considerable value.

I liked the book a great deal. Unlike many football books it does actually reveal some of the internal machinations of football clubs. Graham combines an ability to deal with numbers and algorithms with an ability to think critically about the world, a rare combination indeed. He is someone who is able to marshal data and stats and draw from them logical and rational conclusions. I could not say the same about Chris Anderson and David Sally, authors of “The Numbers Game”, a book which contradicts Graham’s findings and thoughts.
Graham devotes a small chapter to explaining why, actually, clean sheets are not more valuable than goals, why Darren Bent wasn’t the best forward in the Premier League at the time (2013), and why corners aren’t worthless. It might be a slightly snippy chapter for Graham to add to his book, but it underscores for me his clarity of thought and his desire to have a theoretical basis for the stats and the data.

Foden's England Failings

There’s a lot of competition, but none of England’s players at the recent Euro 2024 tournament disappointed quite as much as Phil Foden. He came into the tournament following a sensational season with Manchester City: Foden scored 19 goals and provided 8 assists in the Premier League, helping City to yet another Premier League title to add to the UEFA Super Cup and Club World Cup titles captured earlier in the season. The level of his performances was such that he was voted FWA Footballer of the Year, Premier League Player of the Season, Manchester City Player of the Year and PFA Players’ Player of the Year. Nobody came into Euro 2024 hotter, but his form was not apparent in an England shirt, so much so that Foden had more children (1) during the tournament than he did goals or assists.

I have found it frustrating to watch, because even a lowly Level One football coach like myself can see fundamental problems with the way Foden was utilised by England. Foden was used as the left inside attacker in a 4-2-3-1, with a right-footed left back, Trippier, providing the width. From my experience coaching a junior football team I am well acquainted with the compromises of playing right-footed players at left back. Maybe that’s a meatier topic for another day; the problem with Foden seems so fundamental that I think I can explain it by working from first principles.


Shielding the ball in football

How do you prevent an opponent stealing the ball away from you? There’s very little you can do to stop the opponent taking possession. Essentially, all you can do is position your body between your opponent and the ball, thereby physically preventing the opposing player from reaching it. Inserting yourself between the ball and your opponent and maintaining that situation is called “shielding” the ball, and it is a fundamental football skill. Stick with me and I will return to Foden in due course.

Perez shielding the ball

In the image above, a Leicester City player (Perez) shields the ball from a Liverpool player. The Liverpool player is at Perez’s back and the ball is in front: an ideal scenario for shielding the ball.

Not all of football can be played with your opponents at the rear and the ball in front of you. Far from it. In open play, a more regular occurrence is that the ball is in front of you and also in front of the defender. The defender will take up a position between you and the goal, or at least one that stops you making effective progress up the pitch. Now what?

Kudus against Alexander-Arnold

The defender wants to position himself between you and the goal; if he fails to do this, then avoiding the defender will take you towards the goal. The dynamics of this constraint mean that, if you are an attacker on your team’s left, the defender for this part of the pitch (called a Right Back) will be in front of you but firmly to your right-hand side. If you look at the above image, a West Ham player (Kudus) is in that exact situation against Liverpool. Kudus is pushing the ball with his left foot away to his left side. In a way, this also shields the ball – the defender, Alexander-Arnold, can only touch the ball by reaching all the way across Kudus’s body. Possibly the only way he can get any kind of touch on it is by diving in with his feet, a very high-risk thing to do, which in the best case will probably only succeed in giving West Ham a throw-in.

This technique exhibited by Kudus is less ideal than the clearer case of shielding the ball demonstrated by Perez above, but it is also more frequently useful.


Foden’s dribbling technique

Foden in full flow

With the idea of shielding the ball in mind, consider the above image of Foden dribbling in full flow. Look at how Foden dribbles by manipulating the ball with the outside of his left foot; the ball is slightly to his left side rather than directly in front of him. This is Foden’s dribbling technique. There’s absolutely nothing wrong with it and he is a very good dribbler. There are implications arising from his preferred technique, however. Think about how defenders could approach Foden when he is dribbling: if they approach from his left, the ball will be closer to the defender than Foden’s body, so the ball will be easy to access; if they approach from the right, they will have to reach all the way across Foden’s body to get the ball and it will be very hard to access, just like in the picture of Alexander-Arnold and Kudus above.

Foden’s contributions

You’ve probably worked out where I am going with this. Having discussed the principles of shielding the ball and Foden’s dribbling technique, I can now illustrate how things played out. I’ve got pictures of the same sequence of events, at four different points in the first half of the game against Slovenia. I’ve only included one picture sequence here to keep things brief.

The sequence is essentially this: Foden receives the ball on the left side of the pitch. He wants to turn infield – of course he does, almost all of the pitch, the vast majority of his team-mates and the goal are infield. There is some pressure behind him, which he evades with a turn to his left and a burst of pace.

Foden turns

The initial burst of pace buys Foden some time, but as he travels with the ball he slows slightly, allowing the opponent applying pressure to get closer, and another opponent approaches as he moves infield.

Foden travels in-field

The pressure comes from his left side, so in order to shield the ball from it, Foden moves the ball to the inside of his left foot, further from his opponents and where he is better able to protect it. To evade the pressure in a low-risk way, Foden looks for a pass, but the position of the ball relative to his strong foot means the most natural pass for him to play is to drop the ball back to the central defender, Stones.

Foden passes

To sum up, the contribution from Foden to this sequence of possession is to receive the ball from a defender on his team, drive infield, encounter pressure, and retain the ball by passing backwards to a centre back. Obviously, when Foden is picked to play for England, everyone imagines him doing the kind of things he did for Manchester City. This is absolutely not what anyone had in mind, and yet it’s what happened, as a consequence of both his dribbling technique and his position.

Foden would be a much better player for England if he were played centrally or on the right, where his dribbling technique would naturally shield the ball from pressure as he travels infield. It seems so painfully obvious.

To be completely fair, I should consider the one successful progressive sequence alongside the four underwhelming sequences.

A successful dribble

Here the opponents are sufficiently disorientated that Foden is able to turn without encountering pressure. He dribbles towards the box using his favoured technique; in fact, the first picture I used of his dribbling technique is this same dribble seen from a different angle.

Foden is able to progress the ball and avoid encountering pressure by dribbling very quickly; this is part of his skill set. The downside to dribbling quickly is that playing a gentle pass while running at speed is very hard, and sure enough Foden over-hits the pass attempting to slide Kane in. To labour the point: even when the opposition is sufficiently disorganised that Foden succeeds in advancing the ball with a dribble from his starting point on the left, he will be travelling very quickly with the ball, and any pass into the box will have to be perfectly timed and very carefully weighted.

Foden’s dribbling technique is ideal for playing very wide on the left like an old-school left-winger, running down the line and naturally keeping the ball far across his body, away from the opposition right back, using the same technique that Kudus is using in the earlier image. That’s not really Foden’s game, though. He’s great at receiving the ball and turning in congested areas. If he picks the ball up on the right and drives infield, his natural dribbling technique will keep the ball away from the defenders applying pressure on his right-hand side.

Playing Foden on the left means his dribbling technique is a weakness when it should be a strength. This is why Foden was such a disappointment for England at the Euro 2024 tournament.

How To Lose At Dominion

Dominion Box

A vague theme running through what I write on here is the off-piste thoughts I have about things that are taken for granted by most people. Until now I have stuck to a rigid and eccentric mix of football and programming. I have long thought I should write book reviews, for no reason other than to, in effect, make notes for my own future reference. Anyway, prompted by a few recent, momentous events – the bill for the DNS registration of second-system.blog and the passing of another year – I thought I might actually write a book review. Sadly, I don’t have any obvious candidates on the book front at the moment. In the meantime, I could write about one of my favourite board games.

Dominion is a very special game. It’s sufficiently deep and challenging to be worthy of making notes for myself, and because it’s not something beginners play very well, writing about it also fits the general blog theme of doing things in a different way.

Dominion is both strategic and tactical, the number of set-ups you can have is astronomical, and it’s a very high-skill game. It has a framework of simple rules that can be grasped very quickly, and a game can be played in 15-25 minutes. I really like all these aspects of Dominion, but the one thing that really stands out is the counter-intuitiveness of what you need to do to play it well. The vast majority of people who start out playing Dominion will do so in a particular way because it obviously seems like the way to go, and they are wrong! I was wrong! It’s so tempting to go about it in a mistaken way, and so easy to overlook what you should try to do and slip back into terrible ways of playing.

Very few games I have played are like this. I would call out Blitzkrieg!, The King Is Dead (2nd Ed.) and maybe Heat as possibly the only other games I have played where what you need to do to win is different to how you initially go about it. What you need to do to win at Blitzkrieg! is play in a way that is just a little counter-intuitive to the theme, but I think the designer deliberately planned it like that. Heat I need to play more of, and I’m not particularly great at it, but it’s easy to think that the game is all about getting your car around the track – and it is – but managing your deck is a significant consideration too. Finally, The King Is Dead is again deliberately designed such that what you do on your turn cuts firmly against itself; I hear good players simply pass for much of the game! Dominion is, I think, not designed to be like this, and yet it is much more counter-intuitive than all of these games.

I’ve played a lot of Dominion, I mean really a lot. I’m decent enough at it now, but I still sometimes forget what I need to do and drift back to the intuitive and instinctive ways of the novice. Needless to say, I’ve lost a lot of games of Dominion.

How to win at Dominion

You simply play 30 cards every turn. Of course, this is one of my dry jokes, but it’s based in truth. If you can assemble a deck that plays 30 cards every turn then the world is definitely your oyster and Bob is your elderly male relative.

Dry jokes out of the way, I will now proceed to discuss how to lose at Dominion.

How to lose

Lose? Well, writing a how-to-win guide for Dominion is an epic undertaking because winning requires a lot of correct play for which there are a lot of considerations. In contrast, when I lose it’s usually because I have fallen into one of a few traps - so telling you how to lose is easier and simpler (for me) than telling you how to win.

Way 1: Not distinguishing between cost and value

Dominion is called a “deck-builder”. This term melted my brain before I learned how to play. What on earth is a “deck-builder”? I had played the Pokemon TCG with my son, where you would assemble or choose a deck prior to playing, and the playing of the game used this deck. How could the assembling-the-deck part of this process be the game itself? Well, what happens is that taking your turn with the deck culminates in the gaining of additional cards which are added to your deck (OK, OK, as a general rule the composition of your deck changes only when you shuffle). The playing out of your deck and the assembly of it occur at the same time. This double layering of what’s happening means there is tension in deciding between buying victory cards that you will need to win the game, cards that will help near the end of the game, and cards that will improve your deck right now.

The best card in Dominion

Sometimes what you need is a card that costs 2$, 3$ or 4$; the powerful 5$ cards available might be – all things being equal – better cards, but what your deck needs now might be the cheaper one. The 3$ card that you need is better than the powerful 5$ card that you don’t. It seems like a terrible waste to spend 8$ on a 3$ card, but if that’s what you need then you have to be able to disregard what each card costs in your decision. The human mind instinctively finds the impulse to buy higher-cost cards hard to resist. Gold is particularly enticing to the beginner because it costs 6$, more than almost all of the action cards, and it helps you buy victory cards. The beginner who has 7$ to spend on a card doesn’t want to “waste it” and reckons buying a Gold is less wasteful than buying a cheaper card.

One of the most counter-intuitive parts of Dominion is that the best card costs just 2$. How can a 2$ card be better for you than a 5$, 6$ or even 7$ card? It stems from back when Dominion was being conceived: nobody knew how to play it well, and Chapel was clearly intended as a card that a player drowning in Curses buys in order to trash the Curses and remove them from their deck. Used like that it’s an OK card, costing you a whole turn to clean out a few Curses, and sure enough that’s easily worth 2$ – actually more than that, but the designer didn’t want people to get stuck unable to buy it, so he made it 2$ rather than the 4/5/6$ it should really cost. Since getting stuck with a terrible deck is absolutely no fun, and because early Dominion players didn’t really know what they were doing, the best card ended up as one of the cheapest cards. This is not what anyone expects when they first come across Dominion. For 2$, Chapel is surprisingly cheap relative to its power – but why is this card so good?

Way 2: Not trashing

A terrible card

Chapel is good because it allows you to remove bad cards from your deck, henceforth called trashing. It trashes cards, and it’s hard to over-emphasise just how important trashing is to your deck. Every card you trash is one bad card you won’t see in any of your future shuffles, and every card you buy is now one card closer to showing up in your hand, every shuffle. It’s one less card you have to draw. I can’t really put into words how good this is, because you probably read that explanation and weren’t blown away by it. It’s always better than I imagine it is. Often I will lose because I have a choice between trashing two Coppers and buying a great 5$ card, and I get so excited by the great 5$ card that I neglect to remove two bad cards from the deck. As a yardstick for trashing, consider that trashing two Coppers (not even Estates) is generally better than gaining a 5$ card – and not just slightly better, much better.

Returning to the counter-intuitive nature of Dominion, the newbie sees the Estates and the Coppers and notices, correctly, that the Coppers are in fact helping you get better cards. The newbie may or may not realise that the Estates are a major handicap. Regardless, it’s true that Coppers help, but the point is that they help you a little bit – it’s not that they don’t help you, it’s that other cards are going to help much, much more – and having Coppers in your deck literally gets in the way of you playing the good cards. These cards are going to get in the way in every shuffle you carry out. There is a huge opportunity cost here, and it’s hard to get your head around that, let alone the magnitude of it.

If it helps you be a bit bolder with your trashing, consider that you don’t often need lots of purchasing power early in the game. Usually, at the outset you have only a single buy and you want to add a 5$ card to your deck, so holding off trashing is often a mistake, and piling up Golds or Silvers doesn’t help very much, if at all.
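
To put some rough numbers on the deck-thinning effect (my own back-of-envelope sums, nothing official):

// The chance a particular card is in your next five-card hand is
// 5 / deckSize for a freshly shuffled deck.
const chanceInHand = deckSize => 5 / deckSize;

// Buy a 5$ card with your 10-card starting deck and the new card
// shows up in under half your hands:
console.log(chanceInHand(11).toFixed(2)); // 0.45

// Trash two Coppers instead and every card you buy afterwards will
// show up in nearly two thirds of your hands:
console.log(chanceInHand(8).toFixed(2)); // 0.63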

Way 3: No draw

I rarely fall into this trap, but if you’re going to win by playing 30 cards a turn then you need to move those cards from your deck into your hand. There is a very real limit to what you can do with five cards; indeed, the five-card turn is often a very sad turn. You need draw cards in your deck, to fill your hand and to cycle your deck so the good cards you gained show up in your hand sooner rather than later. When I do fall into this trap, it’s usually because there are good cards available that benefit from having few cards in your hand when you play them.

Way 4: No payload

It’s easy to fall into this trap by trashing your bad cards: if you are not careful, you will end up with no purchasing power in your buy phase. You absolutely need to bin all your Coppers, but if there is no purchasing power on your action cards then you will need some treasures after all. Silver is a very unexciting card, but it will help you work your way up to buying more expensive cards.

Village

Way 5: Being a village idiot

If you are going to play 30 cards, a lot of them will have to be action cards. And there’s more: if you are going to play a lot of action cards, you need enabling cards called Villages, which do very little apart from allow you to play more action cards. Sometimes there will be a single pile of Villages and the fight will be very much on to get the lion’s share of them. Again there is an opportunity cost here: if you spend all your turns adding Villages then your deck isn’t really getting better; it’s gaining the potential to be better, but it’s not improving in any other way. It’s very easy to find yourself in a situation where your opponent has bought cards that improved their deck while you have been building up the potential power of yours, and you have fallen so far behind that the game is clearly already lost.

Way 6: No control over the end of the game

The end of the game in Dominion is triggered by the emptying of card piles. If you don’t have any way of forcing the game to end when you are ahead, then it will be a case of pure luck who gets the last Province; sometimes it will be you and sometimes it will be your opponent. Playing a game and having the winner come down to a coin-toss is very unsatisfying – well, it is if it isn’t your lucky day. To go from coin-toss finishes to being in control of the end of the game, you need cards that gain cards (or allow you to buy extra cards) so you can stop the game when you are in front of your opponent. You simply must have these cards in your deck when the end is near, but when do you pick them up? You can’t leave it to the last minute, because you will be fighting over the victory cards at that point, so you have to get them early on even though they won’t be the best cards for your deck early in the game.

Farmers Market

Way 7: Ignoring alternative ways of getting victory points

The flip side of trashing being better than the brain can comprehend is that victory cards are worse for your deck than you imagine. If there is another way of getting victory points that doesn’t hurt your deck quite so much, then it is usually well worth pursuing, because green victory cards are incredibly injurious to your deck and your turns. The terrible, unhelpful cards you should be desperate to remove from your deck at the start of the game will be back once you start buying victory cards, and if you are like me and under-estimate trashing, then you will also under-estimate the effects of adding victory cards to your deck. This is Dominion’s catch-up mechanism: your opponent may have more points than you, but he or she may have poisoned their deck, allowing you to have powerful turns while they trip up on their victory cards. If you can score points without weakening your deck and opening the door for your opponent, then you should try to incorporate that into your strategy.

Recap

Of my seven comprehensively tried and tested ways to lose at Dominion, three are about how what you need to do is not what your human brain expects: disregarding the cost of cards, trashing your starting cards, and getting points without poisoning your deck. The other four are not in themselves surprising discoveries, but pitfalls for the unwary who are at least not making beginner mistakes. I’m not a beginner, but I still listen to my naive instincts too much, and I still sometimes forget what I am doing and where I am going. Dominion is not an easy game to play well, but at least I can aim to avoid playing it badly.

The Most Important Thing

My son recently changed football clubs. He moved from the one I coached, and reluctantly folded, to one where a lot of his friends play. In one of his first few games at his new club, his team were losing at half time. A father of one of my son’s team-mates – someone I had talked to at various birthday parties, who knew I had coaching experience and plenty of opinions – asked me what I thought we should do to turn the game around.

A Wrong Question

Well, I said, we should really try to actually pass the ball to each other. Sadly, the number of successful passes in the first half was pitifully small. In fact, they did pass the ball better in the second half and lost anyway.

It wasn’t a good answer, although I prefer it to the answers you hear repeated by football managers: “Didn’t win the 50/50s”, “Didn’t win our individual battles”, “Didn’t win the second balls”, and the classic “Didn’t take our chances”. For reasons of brevity I answered as if the question was “What’s the most important thing we can do?”, which was a mistake, because that’s what Lemony Snicket would call a Wrong Question.

The Most Important Thing Is…

According to exotic animals is...

To paraphrase Malcolm Gladwell’s views on spaghetti sauces: it’s a wrong question because there’s no Most Important Thing, there are only Important Things.

What are these things? Well, just for starters, I dislike the slowness of throw-ins and free kicks, and the lack of a plan for throws, corners, goal-kicks and set pieces. The full backs are positionally static and don’t join in with attacks, yet the centre backs are positionally dynamic, racing out of position to engage the opponents. There is a lack of positional discipline in the midfield generally: the centre midfielders are everywhere but where you would expect them to be, and the wide midfielders shirk a lot of defensive responsibility and play as ersatz forwards – sometimes one of them does and the team ends up in a lop-sided 4-3-3, and sometimes both of them do and it ends up with four attackers. What should be the attack is two strikers flat in a line, neatly arranged so that it’s easy for the centre backs to mark them. In spite of the quantity of forwards, there’s scarcely any pressing from them. There’s no depth to things, just flat lines, little control of the space and of counter attacks. Individually, one player does everything with slide tackles, one player is entirely motivated by doing what are basically stunts, such as back-heeling the ball “blind”, and one player has a dribbling technique which is OK for playing on the left but causes him to be easily dispossessed when the coaches switch him over to the right side. Almost the entire squad takes very little care over their passing, chipping the ball around at knee height. I have further beef with individual techniques and playing styles, but some of it overlaps with ability and it’s not really the purpose of this blog for me to highlight it here, other than to say that team dynamics are one thing, but beneath them there is a whole load more of those important things at the individual and technical level.

Which is not to say that you can’t prefer some things over others. I would always want to know why you rank one item above another – is it based on some objective measure? Perhaps some items are low-hanging fruit, more easily addressed than others. Prioritising coaching individuals over team factors seems reasonable too; developing kids to be good all-round footballers who enjoy playing as adults strikes me as your primary responsibility and much more important than winning.

The idea of a Most Important Thing is mainly a distraction from reality, an imaginary and comforting label. There’s no silver bullet, there never is. In so far as there is a Most Important Thing it’s just paying attention to lots and lots of tiny little details.

How to win The Times' chess competition

I have won The Times’s chess competitions twice: the Saturday competition and the Sunday competition once each.

How can you win, bank the cheque and get to read your name in the paper? There are two ways: the straightforward, simple way, or my way.

First you need to know how the competitions work. Every Saturday and Sunday a chess puzzle is published. The challenge begins with locating the puzzle, which may be hiding next to things nobody wants to read, like the obituaries or yesterday’s closing prices for FTSE 100 shares. Once located, work out the answer. The format of the competition requires you to send in only the first move – The Times doesn’t really want to wade through lines and lines of chess notation for every entry. This works well, since you can’t really guess the first move without knowing the full sequence of moves, and it saves them a lot of time deciding whether an entry is correct. Having worked it out, send your answer in to The Times, who will randomly choose one winner from everyone who submitted a correct answer. Metaphorically, they draw entries from a hat; I say metaphorically because I don’t think there really is a hat.

The simple way to win

Let’s assume that the Saturday paper gets 150 entries and the Sunday paper, which has almost twice the readership, gets 300. At the current ratio of weekends to years, my naive expectation is that if you send in the correct answer every Saturday for three years, you’ll probably win at some point. And for the Sunday Times puzzle, simply send in the correct answer every Sunday for six years, which should be enough to get a win. Probably.
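
For anyone who wants to check my working, the arithmetic behind “probably” looks like this, assuming the entry counts guessed above and that every answer you send in is correct:

// Each week is a one-in-`entries` draw, so over `weeks` correct
// entries the chance of at least one win is 1 - (1 - 1/entries)^weeks.
const pAtLeastOneWin = (entries, weeks) => 1 - (1 - 1 / entries) ** weeks;

console.log(pAtLeastOneWin(150, 3 * 52).toFixed(2)); // 0.65 - Saturdays for three years
console.log(pAtLeastOneWin(300, 6 * 52).toFixed(2)); // 0.65 - Sundays for six years

Either way, your chances come out at about two in three. “Probably”, as promised.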

My way to win

The other way is not to be relentless but to be lucky. I certainly was.

My first win

The Saturday Times Puzzle

The first time I won, it was the Saturday competition.

The puzzle was unusual in that the solution was not a sequence of forcing moves: there were multiple responses white could play to the correct move from black. This is quite rare; usually there is only a single path to checkmate or to an overwhelmingly winning position. Here, if black played the right move, he could force checkmate whatever white played.

Less unusual was that the checkmate was substantially helped by a bishop on the long a7-g1 diagonal. It’s easy to miss the effect of a bishop because it moves at an angle to the other pieces, and it’s even easier to miss the effect of a bishop over a long distance, tucked far away from the action in a corner of the board.

Finally, I should mention the date: this puzzle appeared on the last Saturday before Christmas. The pre-Christmas panic was at its peak, fewer people than normal would have bought the paper that day, and of those, I reckon a smaller percentage would have had time to look at it.

My second win

My second win was the Sunday Times’ competition. This one is much harder to win because it tends to be a more difficult puzzle and the Sunday Times’ readership is very large.

I don’t remember this one as vividly, other than that it took me most of the week to work out. Take a quick look at it and see if you can find the move.

Spot the Move 823

I was extremely confused by the puzzle because it didn’t appear to make sense. In exasperation, I wondered if there had been a mistake: the puzzle said white to move but, in fact, I could see a move for black that would win if it were black’s turn. The description implied it was black’s turn, yet it clearly stated “white to move”. On the assumption that it was black’s turn, I sent that move in.

A cheque from the Sunday Times

Luck

Each win was the result of considerable good fortune. For the Saturday competition I was lucky that most people have a blind-spot for distant bishops, that the puzzle was untypical in having two paths to force checkmate, and that the readership was likely both reduced and distracted by Christmas approaching. For the Sunday competition I was lucky that a mistake led to the puzzle being incorrectly labelled, doubtless throwing off many readers.

Luck is a label we attach to things we can’t control. But if that is our definition of luck, everyone who bought the paper on those days was lucky too, and that doesn’t seem right. Luck is not just things that happen out of our control – that’s probably better described as chance. Luck requires some level of competence, the ability to take advantage of whatever ups and downs come your way. If you want to follow in my footsteps, you need to be able to actually solve the chess puzzles. You may not be quite so enthused by this particular challenge, but for anything else that matters to you, the chaos of life will inevitably throw things your way. Are you going to be able to take advantage? If you can, you too can be lucky.

Const Incorrectness

Recently, one of my work colleagues experienced a bug. So far so boring, but it was in such simple code that it just didn’t seem possible that there could be a bug lurking in it. A rough approximation of the code looked like this:

const subscribers = [];

const subscribe = observer => subscribers.push(observer);

const unsubscribe = observer => {
  const idx = subscribers.indexOf(observer);
  if (idx > -1) {
    subscribers.splice(idx, 1); // mutates the array in place
  }
};

const notifySubscribers = data => {
  subscribers.forEach(subscriber => subscriber.notify(data));
};

I really do doff my cap to anyone who can see what the bug is. Can you see it? Have a think and if you need a hint, read on below. If you have worked out the answer, congratulations, your prize is you get to read a few paragraphs of text telling you what you already know.

A hint

Imagine how your code would look if you subscribed to information about an object/some data and that object/data had been deleted or removed.

A heavy hint and the answer

In the case that the information you wanted to track has been deleted or removed, you would immediately unsubscribe from any further notifications. That would trigger the removal of your subscription mid-way through the notifySubscribers function’s iteration over the subscribers. In the code below I write a “one shot” class that will unsubscribe when it is first notified. The code is just to illustrate the bug, so it’s somewhat unrealistic in that it doesn’t do much beyond that.

class OneShot {
  constructor(name) {
    this.name = name;
  }
  notify(data) {
    console.log(this.name + ' notified of: ' + data);
    unsubscribe(this); // modifies the subscribers array mid-iteration
  }
}

const oneshot1 = new OneShot('One shot obj 1');
const oneshot2 = new OneShot('One shot obj 2');

subscribe(oneshot1);
subscribe(oneshot2);

notifySubscribers('data deleted');

Output:

> One shot obj 1 notified of: data deleted

So the bug is that you have two subscribers but only one gets notified because the subscribers list is being modified as it is being iterated over.

Taking a step back

The problem here is that the list of subscribers is being modified while other parts of the code are still using it.

To fix this you either want the data to be immutable so it can’t be modified or you want to use code that does not modify data – which is what you would have to do anyway if the data was immutable.

JavaScript doesn’t provide any built-in immutable data structures. You can freeze objects, or use things like seamless-immutable that prevent modification of objects, but do you really want sweat beading on your brow every time you write code that modifies some data that may or may not have been frozen? I tend to think you don’t, and while some of my colleagues apparently do, the madness that ensues is probably worth covering in its own future post.
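
To illustrate the brow-sweating with a contrived example of my own: frozen data doesn’t always fail loudly when you mutate it.

'use strict';

// In strict mode this assignment throws a TypeError; outside strict
// mode it is silently ignored. Either way, you need to know at every
// mutation site whether the data might be frozen.
const frozen = Object.freeze(['a']);

try {
  frozen[1] = 'b';
} catch (e) {
  console.log(e instanceof TypeError); // true
}
console.log(frozen); // ['a']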

That leaves us with using code that does not modify the data. How can we have useful programs that don’t modify data? Rather than modify the subscribers array, we can change what subscribers points to, pointing it at new data which is different to the old data.

This simple change of approach means that you can safely unsubscribe from further notifications when you are notified, because doing so won’t affect the list of subscribers that is being iterated over. The data being enumerated is still intact; subscribers has just been changed to point at a different array.
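
Here is a sketch of that approach in plain JavaScript, using the built-in non-mutating array methods (my own illustration, before we get to the library version):

let subscribers = [];

const subscribe = observer => {
  subscribers = subscribers.concat([observer]); // new array, old one untouched
};

const unsubscribe = observer => {
  subscribers = subscribers.filter(s => s !== observer); // new array again
};

const notifySubscribers = data => {
  // Any subscribe/unsubscribe during notification re-points subscribers
  // at a new array; the array being iterated here is never modified.
  subscribers.forEach(subscriber => subscriber.notify(data));
};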

The fixed code would look like this if we used Ramda (which we don’t – I’ll probably complain about that in detail some other time):

const { append, reject, equals, map } = require('ramda');

let subscribers = [];

const subscribe = observer => {
  subscribers = append(observer, subscribers); // returns a new array
};

const unsubscribe = observer => {
  subscribers = reject(equals(observer), subscribers); // returns a new array
};

const notifySubscribers = data => {
  map(subscriber => subscriber.notify(data), subscribers);
};

I’m not going to evangelise for this style of code here; I’ve done that plenty in the past. The relevant point is that I had to change the definition of subscribers to use let instead of const. I think this is better code, but this style of coding is incompatible with const! In fact, const seems to encourage us to use mutable data, as it prevents you from re-assigning your variables to point at different data.

As ever, I find myself convinced that the typical mid-wit opinion on things is much less true than you would expect. You can see some such opinions presented as answers to this question on Stack Exchange: “const all the things!”, “Using the const keyword gives us more safety, which will surely lead to more robust applications.” and “I know that the majority of bugs that I see involve unexpected state changes. You won’t get rid of all of these bugs by liberally using const, but you will get rid of a lot of them!”. Note that this bug is the result of an unexpected state change, so I am not sure that const is really helping.

Opinions on how const is used from across the intelligence spectrum

In summary, I think the way to avoid bugs is to avoid mutating data structures. Ideally your language would give you immutable data structures, but JavaScript does not, and it seems fine to just treat the data as if it were immutable – immutable data by convention, if you will. Fortunately, libraries like Ramda exist that do exactly that, and all you need to do is use Ramda consistently.

The dot operator

One of my biases in life is to consider the value of processes in terms of the outcome. For me, a process that produces good outcomes is a good process, but lots of people dislike aspects of the processes themselves and will decry processes that produce good results. If I am using the term correctly, I consider myself a consequentialist.

I am prepared to entertain the idea that there are some fundamental, categorical reasons why some things might be true. There are people who argue that objects and methods are bad due to the mixing of two wildly different things. It’s true that you can think of data and processes upon data as different things, but it doesn’t seem obvious to me that this is a reason not to mix them.

Chronic disagreements in philosophy

To start, I want to define what is good and what is bad in terms of programming. If you can accept that definition then read on, and I will try to show why I think mixing data and processes in the same components pushes you away from doing desirable things and towards doing less desirable things. And if you can accept that certain outcomes are superior and some are inferior, then you just need to come around to a consequentialist way of thinking to agree with me – although I’m going to skim over the sales pitch for that here; while I would be happy to settle chronic disagreements in philosophy, for reasons of space I’ll do it another day.

Do not repeat yourself

Do not repeat yourself (henceforth DRY) is widely considered to be good advice for the programmer. Repeated code is harder to maintain and more prone to errors. We can argue about readability another time. In a long-standing project I have worked on, there was a common thing that coders wanted to do, but it wasn’t obvious or easy, so a “recipe” for doing it was followed and multiplied around the codebase. Years later it dawned upon us that the popular “recipe” was in fact flawed, but by this point many different variations of the “recipe” lurked in dark corners, and sorting this out was not trivial or quick. This is the risk you run if you do not follow DRY. Applying a consequentialist assessment leads me to think that DRY is good advice. It’s also widely considered to be good advice, which probably means less, but it’s not a controversial opinion at all and one I expect you as a reader to comfortably agree with.

Recently, I wanted to sort some data for presenting to users. Users don’t want to stumble over unsorted data, and computers can sort data very quickly. The items of data had a number of levels to them, and as the user clicked through the “tree” I wanted to put the folders at the top and the files at the bottom, while keeping the folders and the files in order relative to others of their kind.

Conceptual friction

I was using an unapologetically Object-Oriented language. So convinced of the wholesomeness of Object-Orientation were the designers of this language that it was not possible to create anything that didn’t live in a class. This was annoying, because all I wanted to do was create a function that compared thing A with thing B. The all-seeing eye of the language designer has seen all possible uses of the language and decided that I need to enclose everything in a class, even such simple and fundamental tasks as comparing two items of data. I probably called the class ThingComparer, as a nod to the fact that it was purely procedural and contained no data. If you’ve written any code in a large OOP codebase, there will be lots of names running along similar lines, giving off a slight smell of conceptual friction. The method within the class looked like this:

class ThingComparer : IComparer<Thing>
{
    public int Compare(Thing a, Thing b)
    {
        bool aIsFolder = a.Type == Thing.FOLDER;
        bool bIsFolder = b.Type == Thing.FOLDER;

        return aIsFolder && !bIsFolder ? -1 :
               !aIsFolder && bIsFolder ? 1 :
               a.Value.Project.Account.Name.CompareTo(b.Value.Project.Account.Name);
    }
}

I have exaggerated what I needed to write for dramatic effect, but do you see the repetition here? Getting the Name property is done multiple times, and it’s a common process that I shouldn’t want or need to repeat. The problem is that there is no way to separate the data in a from the process of accessing it. Really, I want to be able to specify this process without referencing a particular instance of data; the process is rather obviously not exclusively related to the specific instances of a or b. However, the result of mixing data and process leaves me in a situation where I have repeated myself. I want to move the .Value.Project.Account.Name process away from particular instances of data, but I am unable to do it without fighting against the language.

Async aside

Aside: asynchronous programming in this language demonstrates the point just as clearly. For boring reasons that need not detain us here, to get your asynchronous code to perform better you need to configure its behaviour away from the default chosen by the all-seeing language designers. This is an example copied off the net:

var customer = await GetCustomerAsync(id).ConfigureAwait(false);

Again the language stops you separating the .ConfigureAwait(false) process from particular instances of the data. I recently looked through a very small codebase at my work and found that in one small project this code fragment was repeated over 300 times! Who knows how many times the programmers on this project forgot to include this magic suffix? If the API changes, or another, superior library for asynchronous programming appears, then adopting it will require changes in 300 places rather than one.

This is why, from my consequentialist vantage point, I think that mixing data and processes into the same components, as dictated by OOP, is a mistake. If you accept that DRY is a fundamental principle of programming, then it follows from this example that OOP pushes you away from what you should be doing.

Leaving the realm of the dot

You can do what I did and write an anonymous function that takes a Thing and returns the .Value.Project.Account.Name of it, which is effectively to write in a functional style. Writing functions allows me to stick to the principle of DRY: writing functions is good because it produces good outcomes (DRY), while OOP pushes me to repeat myself.
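
For illustration, here is the shape of that fix, sketched in JavaScript rather than the OOP language in question, reusing the exaggerated names from above (FOLDER and the property chain are stand-ins):

const FOLDER = 'folder'; // stand-in for Thing.FOLDER

// The access path exists once, independently of any particular instance.
const accountName = thing => thing.Value.Project.Account.Name;

const compareThings = (a, b) => {
  const aIsFolder = a.Type === FOLDER;
  const bIsFolder = b.Type === FOLDER;
  return aIsFolder && !bIsFolder ? -1 :
         !aIsFolder && bIsFolder ? 1 :
         accountName(a).localeCompare(accountName(b));
};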

Opinions on OOP vs FP from across the intelligence spectrum

Symmetrical game plans

Years ago, whilst watching my son play football, I was talking with another father who told me he was planning to take over the running of his son’s team. From his pocket he produced almost literally the back of an envelope with his thoughts on how he would like the team to play. I read it and it was all perfectly agreeable stuff, the kind of thing coaching courses suggest you produce and use as a basis for your training sessions and matches. The one thing that struck me about it was an apparent asymmetry between how he thought his team should attack and how they should defend – he wanted to “attack with width” and “defend narrow”. Logically speaking, if you think that, say, crossing the ball is how you win matches, then you really ought to be attempting to reduce the number of crosses you have to defend. Your approach should be symmetrical in the sense that whatever you think your team should be doing is surely also what you want to prevent the other team from doing to you.
In this case, if my friend thought “attack with width” was the right way to go, then “defending narrow” surely invites the opponents to “attack with width”. It’s possible that he thought his young lads weren’t going to carve their opponents open in the middle, and that “attack with width” was actually just a more realistic description of how things were going to pan out. Let’s put that generous thought aside so that I may move seamlessly from bad-mouthing grassroots football coaches to spearing larger prey.

A short history of defending wide free kicks in the final third

Professional teams in England have evolved how they defend against wide free kicks close to their goal. My memory is slightly hazy – little did I know I might be writing a blog post about it in 2021, so I haven’t taken any notes – but as I recall, teams once would pack the box and defend in much the same way as they do for corners, with similar results: most of the time nothing would come of it, but occasionally the ball would arrive perfectly for a header and they would concede.

My recollection is that Arsenal changed from this plan to one where the team would hold an offside line outside the box, so the penalty area would be empty enough that Cech could come out and punch the ball clear without anyone getting in his way. Over a period of time Cech gradually stopped coming out to punch, possibly because him running forwards with his fists out while defenders run back with their eyes on the ball is a car crash waiting to happen. That’s how teams defend at the time of writing: with an offside line outside the box and the keeper on his line.

Wolves free kick

In the image above we can see Pep Guardiola’s Manchester City defending in exactly this manner. What is surprising to me about this is, firstly, that it seems like an objectively terrible way to defend. A common rule of thumb is not to try to play the offside trap when you have no pressure on the ball; here it’s a free kick, so there is zero pressure on the ball. This way of defending doesn’t agree with that maxim for a start. City’s offside line is static, and if the Wolves players time their runs they will have a dynamic advantage that will allow them to get in behind the defence. That means City have little control over the very dangerous space in front of the goal. Even if City’s defenders are able to keep up with the on-rushing attackers, they are faced with the problem that, as they are running towards their own goal, any touch on the ball will direct it towards goal rather than away. As it happens, Wolves’s Coady gets to the ball first and heads it in.

Everything I know about football tells me this is a terrible way to defend, and yet lots of Premier League teams insist on it. What boils my mind is that Pep, of all coaches, knows about the importance of getting in behind teams and forcing defenders to defend facing, or running back towards, their own goal. Indeed, City’s first goal in this game came from them deliberately doing exactly that. Incredibly, Pep has the same asymmetry in his principles of play as the father I was talking to.

Data Transformations

A lot of programming can be seen as data transformations.

To illustrate the point, consider this interview question:

Interview question tweet

Typical solutions look like this one by David Beazley.

David Beazley's answer

The reason this question is slightly tricky is that you have to manage two things simultaneously as you loop through the data: you have to keep track of the current character and of the number of continuous characters you have seen as you go through. There are also some decisions to make as to how best to store the data.

If we break the problem down, we can actually perform the task in a different way that sidesteps the trickiness. Let’s treat this problem as, yes, a series of data transformations, where each transformation is much simpler in itself than the one larger solution ably provided by David Beazley.

To prove how much we gain by doing this, I will re-pose the original question as a series of questions, each challenging us to perform one component part of the problem. After each question, I will write my answer in my favourite programming language – co-incidentally, the one I created – FFS Script.

Fake interview tweet 1

Interview question 1 fake tweet

This “interview question” is so simple as to be hilarious: 8 minutes to turn a string into a list?

const strToList = split('');

Fake interview tweet 2

Interview question 2 fake tweet

This is more difficult, the meat of the problem.

const lastChar = do([last, or(''), first]);
const gatherOnce = (l, c) =>
  c == lastChar(l) ? append(last(l) + c, slice(0, -1, l))
  /* otherwise */  : append(c, l);
const gatherSame = reduce(gatherOnce, []);

Even so, we break the problem down into smaller, easier problems. Here we can treat the problem as a single operation (gatherOnce) and allow reduce to worry about the iteration.

Fake interview tweet 3

Interview question 3 fake tweet

Again, the question is laughably easy.

const countLengths = map(juxt([first, length]));

Finally, to solve the interview question we just need to sequence our three data transformations in order. That’s also very easy to do.

const rle = do([
  strToList,
  gatherSame,
  countLengths,
]);
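
For anyone without an FFS Script interpreter to hand, here is my best approximation of the same pipeline in JavaScript with Ramda – pipe standing in for do (a reserved word in JavaScript) and head for first:

const { split, reduce, append, slice, last, head, map, juxt, length, pipe } = require('ramda');

const strToList = split('');

const lastChar = l => head(last(l) || '') || '';
const gatherOnce = (l, c) =>
  c === lastChar(l) ? append(last(l) + c, slice(0, -1, l))
  /* otherwise */   : append(c, l);
const gatherSame = reduce(gatherOnce, []);

const countLengths = map(juxt([head, length]));

const rle = pipe(strToList, gatherSame, countLengths);

console.log(rle('AAABBC')); // [['A', 3], ['B', 2], ['C', 1]]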

Not only have we solved a tricky problem with ease, something else quite odd has happened.

Imagine that our interviewer then asks us what we would do to prove that this solution works and will continue to work in a larger piece of software. The answer is to write unit tests.

Unit testing part 1

Let’s return to my solution to the first re-posed fake question:

const strToList = split('');

The thing is that split is provided to us by the language. We trust that it works; no sensible person unit tests library code that comes with the language. All we are doing is passing it some data, an empty string. We can fire up an interactive prompt and check that it works, but that’s all we ever need to do. Nothing in the rest of the code can change in a way that breaks this – it’s not possible to re-define split and it doesn’t rely on any other code in any way at all. This does not need a unit test; more than that, it would be silly to write one. All my career I have believed that code needs to be tested, and yet this does not.
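For instance, the equivalent check for the Javascript analogue is a one-liner at a Node prompt (a hypothetical session):

> 'AAB'.split('')
[ 'A', 'A', 'B' ]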

Unit testing part 2

The solution to the second re-posed fake question is more complex:

const lastChar = do([last, or(''), first]);
const gatherOnce = (l, c) =>
  c == lastChar(l) ? append(last(l) + c, slice(0, -1, l))
  /* otherwise */  : append(c, l);
const gatherSame = reduce(gatherOnce, []);

There are three things here that we could write tests for. The first, lastChar, uses four things, but each of them is provided by the language: do, last, or and first. This is the same situation as in the previous solution, so no tests are appropriate. The second, gatherOnce, contains the meat of our solution and bears unit testing, since it contains some actual code.

assertEqual('Zero data state', ["A"], gatherOnce([], "A"));
assertEqual('Same character', ["AA"], gatherOnce(["A"], "A"));
assertEqual('New character', ["AA", "B"], gatherOnce(["AA"], "B"));

The third part, gatherSame, uses the code we just unit tested plus reduce, which is, again, provided to us by the language. Once more, we can call up an interactive prompt to make sure gatherOnce works with reduce, but that’s all we need to do. No unit tests are appropriate.
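Using the Javascript rendering of gatherOnce from earlier, that interactive check might look like this:

> ['A', 'A', 'B', 'B', 'A'].reduce(gatherOnce, [])
[ 'AA', 'BB', 'A' ]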

Unit testing part 3

Returning to the solution for the final re-posed interview question:

const countLengths = map(juxt([first, length]));

Again, we use four things – map, juxt, first and length – all of which are provided to us by the language. Once we have used an interactive prompt to run data through it, there’s nothing more to do. It doesn’t make sense to write persistent unit tests for this.

Unit testing part 4

The code to join these transformations together is this:

const rle = do([
  strToList,
  gatherSame,
  countLengths,
]);

We trust do as it is provided to us by the language. We know the three data transformations work and we know do works; therefore we can reason that rle works. Again, it would be silly to write a unit test. We may have made a typo, an omission or an ordering mistake, but those can be caught by calling it in an interactive session.

Possible objections

You could object that countLengths might change in the future, and that rle might therefore stop working. On this basis you should test rle to make sure that it is not broken by a change to something it uses. A possible scenario: some other code finds countLengths useful and uses it; subsequently it becomes apparent that countLengths doesn’t quite meet the demands of the new code, and it is changed so that it does – breaking rle.

I don’t think this is reasonable in practice, because countLengths does essentially just one thing, and if that one thing is not what you want then you should write something that does do what you want and use that instead.

Data transformations and where you end up

To recap, by treating a problem as a series of data transformations, I managed to break the problem down into smaller and easier parts. Not just a bit easier – so much easier that it is almost laughable to consider each part any kind of problem at all.

Breaking problems down is widely accepted as an essential principle of programming, so viewing problems as data transformations helps you to do the right thing.

Each part is so small as to be almost atomic, impossible to break or affect from code elsewhere. Some component parts are so small that they consist of nothing but things provided by the language. Of course, you should try things out to check that they work, but these two properties are why it doesn’t make sense to write unit tests for much of it.

All things being equal, you should surely prefer to follow the widely accepted principle of breaking your code down into simpler parts. The typical answer to the original interview question often does not, because people think (incorrectly, in my opinion) that the solution is not complex enough to require it. Similarly, you should surely prefer to write code that 1) does not require testing, 2) re-uses existing code and 3) is immune to being broken by changes elsewhere.

Maybe the amazing thing is that my solution is not the typical one.

Ask For What You Want

When talking with like-minded colleagues at work, I have occasionally said “ask for what you want”, expecting a knowing smile, nod or wink in return. I only ever got blank looks back. Sadly, I’ve checked online and “ask for what you want” really is not a popular heuristic in the world of programming, nor even in the much smaller world of functional programming. It’s not all bad news: it does mean I can claim this one for myself.

My First Rule of Programming, Life and Everything

One reason that it’s not a popular phrase is that I phrase it in a general way rather than just in programming terms. I do so because it is an approach that may even apply to life as a whole – in keeping with the theme of the blog, I am still collecting data and ruminating on it – but the indications are good. In fact, so good that “ask for what you want” is my First Rule of Programming, Life and Everything. I call it my first rule to distinguish it from my other rules of Programming, Life and Everything, which don’t exist yet. It’s always a good idea to version your data, since you will almost certainly need it later, and even if you don’t, it costs next to nothing. I’ll throw that little tip in for free.

Programming: Ask For What You Want

By way of a concrete example to illustrate why the component parts of your programs should obey my rule, imagine we have a document-editing web app coded in Javascript, which uses a third-party system to record what the user is doing in the app. Let’s suppose the app lets you flick backwards and forwards between documents, saving your changes back to the server and also saving more temporary data about how you had each document configured while you were editing it – text zoom settings, cursor position and so on. These are things that don’t need to be saved on a server somewhere but should be remembered, so that if you immediately return to a document the software recalls the size of the text and where your cursor was positioned when you were last editing it.

Because the document editor code is carefully thought through, it doesn’t wait for the data to save on the server before starting to download the next document. Imagine that the document editor has code like this:

const switchDocument = (saveDoc, loadDocId) => {
  // save the outgoing document and load the next one concurrently
  const saveDataPromise = save(saveDoc);
  const loadDataPromise = load(loadDocId);

  return Promise.all([saveDataPromise, loadDataPromise])
    .then(([_, data]) => data);
};

The save and load functions are similar: they tell the third-party service that records user activity – manifest in the code below as the customerIntel object – what type of event is happening, and then orchestrate the work:

const save = doc => {
  customerIntel.setEventType('Saving');
  return Promise.all([
    saveUserData(doc),
    saveDocument(doc)
  ]);
};

const load = docId => {
  customerIntel.setEventType('Loading');
  return Promise.all([
    loadUserData(docId),
    loadDocument(docId)
  ]);
};

The saveUserData function sends its event first and does its actual work after that. It looks a bit like this (and the other functions follow the same procedure):

const saveUserData = doc => {
  // assuming the document object carries its own id
  customerIntel.event('user data', { docId: doc.id });
  // ...
};

Which is to say that to use the third-party recording service, you must tell it the type of the event and the name of the actual event. For your convenience it’s set up so that you call the setEventType method once to establish the type, and then call the event method with the data relating to each specific event.

This service does not follow my First Rule of Programming, Life and Everything, because the event method wants to know three things – the type of the event, the event name and some data – but it only asks for two. Instead of asking directly, it goes to the shelf where the setEventType method keeps the event type and takes it from there. The service has an implicit but unenforced ordering requirement: you need to call setEventType before you call event. In the simple case, an absent-minded programmer might just forget to call setEventType. That problem doesn’t exist in our example, but something worse and less obvious does. Saving and loading are happening at the same time, so there is no guarantee at all that when saveDocument runs the event type is still “Saving”. It’s likely that by the time saveDocument runs, the event type will have been set to “Loading”. It’s entirely possible that the data saved by the third-party service looks like this:

Type     Event      Data
Saving   User data  DocId: 1
Loading  User data  DocId: 2
Loading  Document   DocId: 1
Loading  Document   DocId: 2

It’s possible that it behaves this way on one user’s computer while behaving correctly, or differently again, on another user’s. It might even happen to work fine for the people who test the software for defects.
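To spell out how rows like these could arise, here is one possible interleaving (my assumed timings – the real ordering depends on when each function reaches its event call):

// switchDocument(doc1, 'doc2') kicks off, in one possible ordering:
//   save(doc1)    -> setEventType('Saving')
//   saveUserData  -> event('user data', { docId: 1 })  // recorded as Saving
//   load('doc2')  -> setEventType('Loading')           // silently replaces 'Saving'
//   loadUserData  -> event('user data', { docId: 2 })  // recorded as Loading
//   saveDocument  -> event('document', { docId: 1 })   // reaches its event call late,
//                                                      // so recorded as Loading!
//   loadDocument  -> event('document', { docId: 2 })   // recorded as Loading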

Imagine too that, in future, someone higher up in the organisation pulls down the data about how the software is being used. They may end up with a graph like this:

Imaginary chart of loading and saving events

They likely want to be data-driven, and may say things like “saving just isn’t that important for our users” and de-prioritise fixing saving bugs – or they may even change the direction of the product, sending the development team away for months to radically rework it for entirely misguided reasons.

The effect of a simple little flaw like this could be huge. It creates not so much a butterfly effect as a blast radius.

The good news is that there’s a simple fix that could be made to the customer activity tracking service, and you already know what it is. If the event method wants to know the type of the event, then it should ask for it – ask for what you want. This trivially simple change eradicates all possible crosstalk between different events and event types. An entire class of bugs is vapourised!
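A minimal sketch of the reworked call sites, assuming the service’s event method is changed (or wrapped) to take the type as its first argument – the exact signature is my invention:

// setEventType is gone; every event asks for the type it wants
const save = doc =>
  Promise.all([
    saveUserData(doc),
    saveDocument(doc)
  ]);

const saveUserData = doc => {
  customerIntel.event('Saving', 'user data', { docId: doc.id });
  // ...
};

Because the type travels with each call, no amount of interleaved saving and loading can mislabel an event.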

There’s much more to say about why this approach makes sense and what the benefits are. For the sake of brevity here, I’ll leave that for another time.

Ask For What You Want: Life And Everything

This part of my rule is harder to illustrate, so instead I’m going to try to sell it to you on a logical basis. Typically, asking is cheap, and what you want is, by definition, quite a big deal. All other things being equal, where the costs are negligible and the potential pay-off is large, that’s a deal you should take. Ask for what you want.