Category Archives: New Articles

Some of our thinking about training and the subjects we teach.


Memory Games

If you want to remember something important, what do you do?  If you want someone else to remember the brilliant insights you’ve just presented, what do you do?

If your go-to strategy is to reread what you just read, or repeat what you just said, I’ve got a different approach that takes no more time, gives you 50% better recall, and is a lot more interesting.

Testing Better Ways to Remember

In a 2006 experiment, 3 groups of people were asked to spend 5 minutes studying a short passage.  They then had 3 further 5-minute sessions in which they either studied (S) again or just self-tested (T) by writing down as much of the passage as they could remember. This is what each of the groups did:

The groups were asked how well they thought they’d remember the material in a test planned for a week later.  Here’s how they expected to perform:

The people who reread the material were the most confident about what they would remember, but here’s how they performed in the actual test.

 

So the group that studied once and self-tested 3 times performed more than 50% better than the one that just studied repeatedly.

The groups also rated how interesting they found the material.

So to sum things up, the group that self-tested more remembered the material better a week later and found it more interesting.  The self-testing group was also more realistic about how much it would remember.

Ways to Use This New Power

I won’t speculate about the mechanisms of how this works, but I will venture how to use it.

If you want to absorb something well enough that you’ll even remember more than half of it a week later, take a few minutes to actively recall what you can.  Try it.  I promise you’ll be humbled about what you can actually recall, and that you’ll actually remember a decent amount of it.

If you want your clients, boss, students, colleagues or minions to remember more than a fraction of your brilliant insights, then develop the wherewithal to ask them to recall whatever they can of what you just pontificated.  There are non-patronising ways to do this, like “What are the 4 or 5 big things that came out of this for you?” or “How does what we just covered affect your plans for the future?”  We do this in every training session we run.  It works, it’s engaging and it’s not hard at all.

Some of the answers to your questions could even be enlightening.  You never know.

 

So what 5 big things do you recall from this article?

 


How to Deal with Skeptics (Including Yourself)

Discomfort = Skepticism

You read a headline that smart TVs emit radiation that is perfectly safe.  What do you do?  Nothing, right?  It makes sense and there’s nothing to worry about.

What if the headline said that smart TVs emit radiation that’s deadly for anyone watching more than 15 minutes a day for 7 years?  Would you accept those findings just as easily, chucking out your TV as a minor sacrifice compared to your good health?  I reckon that maybe you’d actually read the report under the headline, understand what the research actually said, check how credible it is, and even find out whether any other experts agree or disagree, before visiting the recycling centre.

When we like what we’re hearing we’re accepting folk, happy with simple sense checks.  When we don’t like it, we morph into frigidly analytical arch-skeptics.

Fooling Students with Yellow Paper

In a famous study, participants were told to put their saliva on yellow litmus paper that would turn green within one minute if they had the (invented) condition of TAA deficiency.  The “litmus” paper was just yellow paper that would never change colour.

One group (the good news group) was told the paper would turn green if they had TAA deficiency.  The other (the bad news group) was told staying yellow meant they had the condition.  Here’s how the different groups behaved:

The good news group did obediently wait over a minute until they were happy the yellow paper wasn’t going to change colour.  The unfortunates who were told yellow paper meant TAA deficiency just waited and waited for that paper to turn green and give them better news.  More than half of that group with the fake bad news also tried a retest.

The group getting the bad news was also more dismissive of how serious TAA was as a condition, imagined it was more common, thought the test was less accurate, and in a follow-up could be prompted to come up with far more mitigating life irregularities that rendered the test less reliable.

This happy acceptance of things we want to hear, but motivated skepticism about what we don’t, is strongly influenced by knowledge and intelligence – in a bad way.  Research shows that the more intelligent, numerate, informed and expert we are, the better we are at finding reasons to support our desired worldview and reject what we don’t like.

So What?

Rather than bemoaning how bonkers and irrational people are, I think this information is pretty useful in helping us make better decisions and in persuading people to be a bit more open.  The place to start is in the mirror.

I’m not arguing that we mimic French philosophers, challenging each treasured belief in turn, pulverising our worldview and indulging in a self-motivated existential crisis.  What I am arguing is that when we’re faced with a big decision – like starting a venture, taking a job offer, or hiring a candidate – we switch the burden of proof around from its usual place of “convince me that I’m wrong.”  The burden of proof should now be squarely on your dearly held belief: for example, that your cat millinery hobby can be turned into a business, that the job with the hipster beard accessories start-up is right for you, or that you should give the job to the candidate who’s most buff and shares your love of televised sport.  Evidence shows that this mindset of prompting ourselves to be analytical actually does work.

Evidence also shows that people are much more objectively analytical in groups, which is a good reason to rope in that peer reviewer, devil’s advocate or challenging mate.  This is uncomfortable, but you’ll make a better decision by caring about the truth than by sheltering your beautiful worldview from harm.  And once you’ve made the decision, you’ll actually have more confidence from having challenged things hard.

Persuading others is a whole new bagful of monkeys.  But once we accept that people are motivated skeptics, we realise there’s wisdom in choosing who to persuade, and whether and how to persuade them.

  • You’re miles ahead if you can work with people who are predisposed to you and what you’re going to say. This is one reason why management consultancies sell to their alumni, and why football managers are all former players.
  • When you need to step out of your echo chamber of like-minded folk but still have a choice between persuading someone whose views are opposite to yours and someone who’s open-minded, go for the swing voter every time.
  • If your thankless task is to persuade someone who has a strong opposing dearly held belief, don’t fool yourself that a selection of facts and a cogent argument that goes head on with theirs will have any effect other than to entrench them. When people disbelieve they get analytical, so emotional appeals won’t help either.

With these folks, the data suggests a couple of things.  First, focus on insights they might not have considered and where they don’t have a ready-made foundation from which to undermine you.  For example, one purported reason there are fewer girls than boys in technical subjects is that girls are as good as boys at technical subjects but better than boys in the other school subjects; so boys focus on the technical stuff, and girls are spread across the full range of subjects.  This feels like a more productive place to have a discussion about why your computer course doesn’t get enough women on it than some potential misogyny spiral.

And the next time you’re questioning the quality and justice of the referee’s every decision against your team but not really giving that much thought to those calls that go your way, or muttering about the media’s right/left wing bias, give yourself a nod of recognition – that’s your own highly motivated skeptic hard at work as usual.


Leadership Lessons from the Weight Room

The weight room might seem a strange and sweaty place from which to draw leadership lessons; but if you’re interested in a place to observe people getting noticeably and reliably better, then weight rooms have got an awful lot going for them:

  • Weight rooms have been around forever, and are a testing ground where millions of motivated people, coached by thousands of personal trainers, have tried all kinds of simple and exotic ways to get fitter and stronger
  • Results can be clearly measured – the 60kg barbell either got above your head or it didn’t, and no narrative will make it heavier than it was or higher than you lifted it
  • They work. Beginner and intermediate athletes get rapidly, considerably, and reliably stronger from the simple approaches in the weight room.  Professional athletes couldn’t maintain their excellence without them

Lesson 1 – Setting Today’s Goals

An athlete may have a long term goal of lifting more over their head than they can currently even get off the ground.  But when they enter the weight room, their goal for that session is never, ever more than a mild stretch over what they did last week.

If they ever try more than that, they’ll either break themselves or struggle hopelessly, like someone who isn’t Thor trying to pick up his magic hammer.

Big hairy audacious goals are for the long term.  The only way to an athlete’s long term goal of being strong and buff is to do a little bit more today than they could last time they stepped in the door.

Lesson 2 – Reaching Today’s Goals – “Spotting”

A go-to tool that coaches use to help athletes do a little more than they currently can is called spotting.  For non-gym-rats, this is where a coach gives the athlete a tiny bit of assistance so they can complete the lift.  Here are some aspects of spotting whose relevance to everyday coaching should be all too clear.

The coach offers just enough help for the athlete to complete the lift, and no more.

The athlete is going all out and mustering every tiny, spare muscle fibre.  The coach is often hardly trying, typically using only a couple of fingers underneath the bar, but when you’re that athlete it feels like the coach has attached a giant helium balloon to it.

The coach only spots at the sticking point.

The coach doesn’t spot all the way up, only at the sticking point where the athlete is struggling, which is usually about halfway through the lift.  The athlete does everything else, before and after, all by themselves.

The coach always spots where the stakes are high.

The coach doesn’t spot all lifts, but always spots where the stakes are high and where failure would be bad.  In the weight room this is whenever the athlete’s body is under the bar, and they’re lifting close to their capability.

Here the coach positions themselves so they could take over if all went wrong.  Now the athlete can have full confidence and go for it.

 

Hopefully, these weight room observations have shown us some simple transferable lessons in leadership and coaching from a well proven field of endeavour:

  • Today’s goal is a mild stretch and no more
  • The coach offers the amount of help the athlete needs and no more
  • The coach offers help only at the sticking point
  • The coach is always there as a safety net whenever the stakes or cost of failure are high

You are the coach; the people who work for you or with you are athletes. You’re not doing anything for them; you’re helping them do more for themselves.  Good leadership is spotting.

 


You Think A Caused B, But How Can You Tell?

Here’s an obvious improvement, and a clear cause of that improvement.

The stimulus of the espresso really caused an improvement in the javelin throw. More evidence for the performance enhancing benefits of caffeine? Not so fast.

What Caused What?

If we see one thing happen (A) and another thing happen (B) we can conclude at least 5 different things:

1. A caused B. My singing (A) caused my singing teacher to wince (B).

2. Something else caused B. Here’s an analysis of UK fertility rates (B) before and after Game of Thrones (A) was released on HBO in 2011. Anyone arguing that GoT caused a decrease in birth rates, Khaleesi?

3. Something else caused both A and B. If shark attacks (B) go up at the same time as ice cream sales (A) on US and Australian beaches, do we conclude that those ice cream sellers are tempting in the sharks, or maybe that hot days cause more people to go to the beach and into the sea?

4. The change in B was random. Here’s one time we can feel sorry for football managers. Take a look at this excellent German research on football teams’ results (B) when they had a bad run of form and changed manager (A). Looks like changing the manager was a good idea.

Now have a look at the teams that had similar dips in form but didn’t change manager. Do you still think changing the manager made a difference? Or that teams just revert to their typical performances following a bit of bad luck?

5. B caused A. Did our change in manager (A) cause a change in performance (B), or did a run of bad performances (B) cause the change in manager (A)? Does veganism (A) cause better health (B), or are health-conscious people more likely to become vegan?

How to Tell if You Have Your Cause

In most complicated, real-life circumstances, the way to know you have your cause is to find yourself 2 almost identical situations: in one the cause is present; in the other it’s absent.
We can see if these 2 situations already happened in the real world, just like our German football teams that did change manager after a bad run compared to those that didn’t.
We can also create these 2 situations by doing our own trial, to see what happens when our speculated cause is present versus when it’s absent. Here again is my experiment of throwing the javelin, this time showing before and after a break that didn’t involve drinking espresso.

Looks like the espresso isn’t the cause of the improvement after all. Maybe it’s just that a good warm up and practice causes better performance.

Even if you think you have a cause, here are a few other tests that’ll raise your confidence that you really do:

  • Can the effect be obviously explained by anything else? Are ulcers obviously caused by stress, or could it be something else, say, H. pylori?
  • Does the effect happen every time the cause is applied? Does hiring Pep Guardiola cause your football team to tiki-taka its way to the Championship? Yes, well except when it doesn’t.
  • Is there a low chance of the cause-effect relationship being explained by randomness? Run lots of experiments on small enough samples and you’ll find lots of fascinating counter-intuitive cause-effect relationships that no one will ever replicate.
  • Are our sample groups similar? If we only give training to superstars on the promotion fast track, we can’t compare its effect against a sample of regular folk who aren’t on the fast track.
  • Is there a chance that the participant is wittingly or unwittingly fixing the outcome? Would you trust the findings of a sports drink company about the big race time improvements that come from drinking its product?
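The randomness check in the list above can be made concrete with a simple permutation test. Here’s a minimal sketch in Python using hypothetical toy numbers (the javelin improvements and group sizes are invented for illustration): it asks how often a random re-labelling of who drank the espresso produces a gap at least as large as the one we observed.

```python
import random

# Hypothetical toy numbers (invented for illustration): metres of improvement
# for four athletes who drank espresso (treated) and four who just rested.
treated = [5.0, 6.0, 7.0, 8.0]
control = [0.0, 1.0, 2.0, 1.0]

def permutation_p_value(a, b, permutations=10_000, seed=0):
    """One-sided p-value: how often a random re-labelling of the participants
    produces a difference in mean improvement at least as large as observed."""
    rng = random.Random(seed)
    observed = sum(a) / len(a) - sum(b) / len(b)
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(permutations):
        rng.shuffle(pooled)
        diff = sum(pooled[:len(a)]) / len(a) - sum(pooled[len(a):]) / len(b)
        if diff >= observed:
            hits += 1
    # The +1s stop us ever claiming a p-value of exactly zero.
    return (hits + 1) / (permutations + 1)
```

If the p-value is small, randomness is an unlikely explanation for the difference; if it’s large, you haven’t ruled out plain luck, however compelling the story.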

We will all sometimes jump to the conclusion that our management intervention caused our temporarily underperforming branch to improve, that people are successful because they say “no” more often and not the other way around, or that our team lost because we weren’t wearing our lucky jumper.  If it’s important, that’s the time it’s worth reflecting on whether we really do know what’s a cause and what, well, isn’t. If we don’t, we’ll be drinking too much espresso and not doing enough javelin practice.


How to Win at Anything

Serious organisations, from top sports teams to Silicon Valley tech firms, now employ egghead data crunchers to uncover those magical analysable sets of measures that help them win.  If you emulate just one part of this winning formula – tracking and analysing performance data – then you won’t win, because you’ll have missed the main point.

Look more closely at what game changing organisations have done and you see a more fundamental pattern than collecting and processing data.  They pay attention to the assumptions they’ve been making about how to win, find evidence to test and challenge them, and they change how they do things according to what the evidence tells them.

You don’t need Prozone tracking and banks of PhD quants to do this yourself.  You just need an assumption-challenging, evidence-based mindset to stack the deck in your favour and win more often.  The principle is relevant for everyone’s job.

To illustrate how simple it is to improve by using this mindset, here are some examples with everyday games: darts, tennis and Monopoly.

Darts

Watch any player on the pub darts oche, and you’ll see a semi-wayward amateur copying the pros and aiming for treble-20.  This is a great place to aim if you’re one of the 1% who’s accurate to within a centimetre from a distance of 8 feet.  20 is flanked by 1 and 5, so average players often end up with the “pub score” of 26 from their 3 darts.  There are much more forgiving places to aim for anyone who isn’t an expert.

The blue dot below shows the ideal place to aim to give the highest likely score, depending on your accuracy.

There’s also an important end-game to winning at darts. I’ll leave you to work out why you should practise hitting double 16 if you’re good, but double 1 if you’re not.
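The trade-off between your accuracy and where to aim is easy to explore for yourself. Here’s a rough Monte Carlo sketch in Python – the board measurements are the standard ones, but modelling your throw error as a simple symmetric Gaussian is an assumption – that estimates your average score for a given aiming point and accuracy.

```python
import math
import random

# Standard dartboard geometry: segment order clockwise from the top,
# and ring radii in millimetres.
SEGMENTS = [20, 1, 18, 4, 13, 6, 10, 15, 2, 17, 3, 19, 7, 16, 8, 11, 14, 9, 12, 5]
BULL, OUTER_BULL = 6.35, 15.9
TREBLE_IN, TREBLE_OUT = 99.0, 107.0
DOUBLE_IN, DOUBLE_OUT = 162.0, 170.0

def score(x, y):
    """Score of a dart landing at (x, y) mm, with (0, 0) the board centre."""
    r = math.hypot(x, y)
    if r <= BULL:
        return 50
    if r <= OUTER_BULL:
        return 25
    if r > DOUBLE_OUT:
        return 0
    # Angle measured clockwise from vertical; each segment spans 18 degrees.
    angle = math.degrees(math.atan2(x, y)) % 360
    base = SEGMENTS[int((angle + 9) / 18) % 20]
    if TREBLE_IN <= r <= TREBLE_OUT:
        return base * 3
    if DOUBLE_IN <= r <= DOUBLE_OUT:
        return base * 2
    return base

def expected_score(aim_x, aim_y, sigma, throws=20_000, seed=0):
    """Average score aiming at (aim_x, aim_y) with Gaussian error sigma mm."""
    rng = random.Random(seed)
    total = 0
    for _ in range(throws):
        total += score(aim_x + rng.gauss(0, sigma), aim_y + rng.gauss(0, sigma))
    return total / throws

# Treble 20 sits on the vertical axis, in the middle of the treble ring.
T20 = (0.0, (TREBLE_IN + TREBLE_OUT) / 2)
```

Plug in your own sigma – roughly, the typical distance in millimetres between where you aim and where you land – and compare a few aiming points for yourself.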

Tennis

The classic measure of success in tennis is the ratio (winners ÷ unforced errors).  For male pros a good ratio is around 1; for female pros, around 0.8.  For amateurs the ratio is much lower, because amateurs make so many more unforced errors.  So the key to winning in amateur tennis is not to go for those Hollywood passes and smashes; it’s to keep your unforced errors down, and your opponent’s up.  Here are some ways research shows you can do it:

  • Reduce lateral errors (i.e. the ball going out of the side lines) by hitting the ball back where it came from, i.e. return a cross court shot with a cross court shot, return a shot down the line with a shot down the line. This is because it’s difficult for amateurs to change the direction of the ball accurately, which requires the right combination of racket speed and racket inclination.  You’re much more likely to be accurate if you don’t change the angle of the ball on your racket, and just send it more or less back where it came from.  Obviously you need to change this up every now and again, unless another keystone of your strategy is to bore your opponent into submission.
  • Reduce depth errors (hitting the net or going beyond the baseline) by practising your top spin. Because top spin causes the ball to dip, it increases the height of the window above the net you can target while keeping the ball in play.  You could achieve the same by not hitting the ball as hard, but that doesn’t help you increase your opponent’s unforced errors.

An easy way to increase your opponent’s unforced errors is to target their weaker side, usually the backhand.  I’ll let you figure out how that’s entirely consistent with keeping your own unforced errors down.

 

Monopoly

Chances are you play Monopoly in a random, happy-go-lucky way, and maybe hanker after the ultimate of hotels on Park Lane and Mayfair.  You shouldn’t do that, if you want to win.  Here’s a handful of golden rules backed by a mathematical analysis of the board:

  • Buy as many stations as you can – people land on stations more often than any square except jail, buying an additional station increases the value of all the others, and the rent yield is excellent
  • Buy the orange streets – people land in jail 4 times more often than any other square, the orange streets are in the sweet spot of likely 2 dice totals after jail, and rent yield is excellent
  • Develop houses but not hotels – the yield on additional hotel investment is poor, especially late in the game when there are fewer turns left, and hoarding houses creates a housing shortage that stops your opponents developing
  • Don’t bother with the purples (Park Lane is the least landed square on the board because of the locations of Go To Jail, Chance and Community Chest), browns (rents are still at pre-gentrified East End levels), or utilities (rent irrelevant compared to developed spaces in the mid- and end-game)
  • Stay in jail at the end of the game when the rents are high
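You can sanity-check the landing-frequency claims behind those rules without any deep maths. Here’s a deliberately rough Monte Carlo sketch in Python: it rolls two dice around the 40-square UK board and sends “Go To Jail” landings to jail, but – as a simplifying assumption – it ignores the Chance and Community Chest cards and the jail-exit rules, which the full analyses do model.

```python
import random

# UK board indices: Go = 0, Jail = 10, Go To Jail = 30.
JAIL, GO_TO_JAIL = 10, 30
STATIONS = [5, 15, 25, 35]   # King's Cross, Marylebone, Fenchurch St, Liverpool St
ORANGES = [16, 18, 19]       # Bow Street, Marlborough Street, Vine Street

def landing_frequencies(rolls=200_000, seed=0):
    """Fraction of landings on each of the 40 squares for one token
    rolling two dice; 'Go To Jail' sends it straight to jail."""
    rng = random.Random(seed)
    counts = [0] * 40
    pos = 0
    for _ in range(rolls):
        pos = (pos + rng.randint(1, 6) + rng.randint(1, 6)) % 40
        if pos == GO_TO_JAIL:
            pos = JAIL
        counts[pos] += 1
    return [c / rolls for c in counts]
```

Even this crude model shows jail as the most-visited square and the orange streets landed on more often than the average square; the published analyses add the card movements and jail rules and arrive at the golden rules above.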

There are a few other ways to crush your opponent, but those tips will help you dominate next Christmas.  Oh, and it doesn’t matter whether you’re the top hat or the sports car.

 

So there’s a handful of games where you can win more often, without employing a single rocket scientist.  Those were just the first three I investigated.

The key point here is to check and challenge your assumptions about how you go about something, and then make better ones based on relevant evidence:

  • What assumptions are you making when you play a game or do your job, or what are you just doing without thinking about it?
  • Is there a source of data or evidence on how to perform better that’s relevant to you, and doesn’t just copy other people with more resources or experience? The evidence in this article took me about 3 hours to find, filter and process
  • If a source doesn’t exist, and if you can’t work it out, can you just design a simple DIY experiment? Do you get more power if you lower your bike seat by 2cm?  Does the Jenga block come out without pulling over the tower if you tap it lengthwise or pull it widthways?  Do you get more responses to the same email if it’s branded and fancy or in plain text?

The metrics and rules for winning naturally change as you get better.  When you start to climb up the pub darts ladder, you’ll need to start aiming at treble 20.  But the principle remains the same: recognise your assumptions, challenge those assumptions with evidence, and you’ll end up with a better strategy and you’ll win more often.

 

Kardelen trains critical thinking, which is about being creative and complete, reasoning clearly, and challenging often dearly held assumptions.

If you want to suggest a sport or game (that hasn’t already been analysed to death) where you’d like us to challenge the evidence about how to win, just let us know.

 


To be Right You Have to Risk Being Wrong

It’s OK if Your First Idea is Wrong

When you’re trying to solve a problem, there’s usually not a whole lot of difference between right and wrong answers.  More often than not, as long as you start somewhere, recognising that it’s probably wrong, and spend time tinkering, you’ll get to right.

Here’s an example.  I want to dominate next year’s amateur Tour de Snowdonia with its massive climbs and mountain top finishes.  So I figure I need to hit the weight room six times a week, and squat an extra quarter kilo each session to end up with stronger legs than The Mountain from Game of Thrones.

I can test whether my theory is right or wrong with some simple checks.

  • Completeness: Does my thinking cover everything important? Not here, because being lean is just as important as having super-powerful butt and thigh muscles – just look at those skinny devils who win the big Tour de France climbs
  • Assumptions: Have I made correct assumptions? No again.  If you’ve ever been to the weight room and tried your hardest every day for 6 days in a row, you know you just get weak and tired.  You need the odd day off to recover
  • Valid reasoning: Does my reasoning make sense? And no again.  I don’t need the strongest thighs (or leanest body) in the world, I just need to be better than my opponents.

And there’s an essential final check to my solution: I’ve got the ability to test whether I’m right or wrong, by climbing on my bike and seeing how quickly I get to the top.

My initial thinking was wrong, but that’s fine because I can now tinker and make it better – I can take a bit of weight off, build in some recovery, and work out how good I need to be at climbing to win.  I can check all this by riding up the hills.  I needed to go through wrong to get to right.

If You’re Not Even Wrong There’s Nowhere to Go

Where I’m really in trouble is when I’m not even wrong[1]: when my thinking is so far off the mark that I don’t know where to start or whether to even bother testing.  You can tell when someone else is not even wrong by your immediate reaction of “Huh?  Say again.”  Here’s how to tell if you’re guilty of it yourself.

  • Completeness: Your thinking is narrow so you miss the critical points: “We should reinforce the returning planes in the wings, where the bullet holes are.” (How about where the holes might have been in the planes that didn’t return?)
  • Assumptions: You’ve made some monumentally naive or crazy assumption, probably because you lack subject matter expertise: “Clearly the best song is going to win Eurovision.”
  • Valid reasoning: Your reasoning is so vague or twisted or odd that it’s hard to know what the reasoning is, or how to challenge it: “We’d be twice as democratic if we held a second referendum.”

If we’re not even wrong like this, we can’t build on our first answer and get to a better one.  There’s nothing to build on.  It’s best just to screw up the paper with our sketched solution, throw it in the bin, and start again.

Deliberate Not Even Wrong

The most heinous critical thinking crime of all is when someone is deliberately not even wrong, which happens a lot.  With deliberate not even wrong, your immediate reaction is, “I see what you did there, you sneaky little rascal.”

Leaving aside clever argument sophistry, there’s a whole list of everyday not even wrong tactics that we all employ.  Most of these are ploys to avoid being tested and, ironically, being wrong:

  • Excuses – “I would have been right about Kayley winning Bake Off if they’d given her the score she deserved for her 3-tier Waffle House.”
  • Hedging – “That 1-room Kensington apartment is a bargain, if prices keep rising by 10% a year.”
  • Vagueness – “She’ll get the results her performance deserves.”
  • Circular reasoning – “It’s a close call, but I predict the winners are going to be the team that best rises to the occasion.”
  • Inability to disprove – “His desire for control all comes from his unconscious feelings about his mother.”
  • Downward sloping shoulders – “I’m happy with it if you are big fella.”

Of course, we shouldn’t be naïve.  If lawyers might be circling, then we need a disclaimer.

But if we want to have a good chance of getting to a right answer, we need to start with something that passes all our checks: it attempts to be complete, has decent assumptions and valid reasoning, and we can test it.  If we start out by being not even wrong, deliberately or otherwise, then we’re in a cul-de-sac to nowhere.

 

[1] This doesn’t apply to matters of faith, taste, intuition etc.; it’s just where you’re attempting to be in some way rational or scientific.


How to Tell a Baptist from a Bootlegger

If you want to know if someone’s stated reasons for doing something are true, there’s no point asking them.  There are much better ways of finding out.

Bootleggers & Baptists

Bootleggers and baptists is a catchphrase to characterise how you can get one morally righteous group fronting something while another group quietly benefits.  The reference comes from US Prohibition, when baptists lobbied for restrictions on selling alcohol, while bootleggers profited nicely.

It’s all too easy for a bootlegger to dress himself up as a baptist when he’s trying to get his way.  So if we’re going to judge someone’s moral argument, we need to know if we’re dealing with a baptist or a bootlegger[1].  If we take them at face value or ask them, they’ll just answer as a baptist.  But if we think a bit more clearly, we’ll get to the bottom of it.

One baptist argument being spouted here in England comes from the RMT rail union.  The RMT doesn’t want a train company to install monitoring technology that enables drivers to open and close the doors.  This does away with one of the roles of the train conductor, and according to the RMT it threatens passenger safety.  The RMT called a strike this week to stop driver-operated doors and protect passenger safety.

So is the RMT a baptist (champion of passenger safety), a bootlegger[2] (champion of RMT members), or both?  Let’s have a look.

Is There a Bootlegger Benefit?

Faced with a baptist pronouncement, asking “who benefits?” can help us work out if there might also be a bootlegger.  The RMT calls a strike to stop driver only train operation.  Its members are train conductors, whose jobs might be threatened by the change.  They benefit if the driver only trains are stopped.  Could they be bootleggers?

Asking “who benefits?” doesn’t tell us the RMT is a bootlegger, but it tells us it might be.

Does a Neutral Expert Support the Argument?

There’s obviously no point asking a potential bootlegger about his expert opinion on the matter.  There’s also no point asking any old expert whether an argument is true.  It needs to be someone without a dog in the fight.

Do you believe the RMT’s claims that driver only operated trains are less safe?  Would you believe the train company’s expert opinion?  Or do you believe the Rail Safety & Standards Board, which says that there are plenty of driver only operated trains and they’re just as safe as conductor operated ones?

The least biased view seems to be saying that safety isn’t a big issue here.  The neutral expert rejects the baptist argument.  I sense a bootlegger.

When Do They Act?

Though we shouldn’t take baptists’ words at face value, we can learn a lot from how they’ve acted.  Have they ever acted in a way that would benefit a baptist but not a bootlegger?  Have they acted in a way that would help a bootlegger but not a baptist?  How about when neither would benefit?

 

Here’s how to tell what they are from how they behave:

Do They Act Table:

                                     Serves a baptist cause   Serves no baptist cause
  Serves the bootlegger’s interests  Could be either          Bootlegger
  Doesn’t serve their interests      Baptist                  No reason to act

Let’s look at our RMT strikers again and how they’ve acted in the past.  Have they ever gone on strike for an issue that was purely about job security or pay and conditions but not about safety or some other baptist cause? Yes, that’s the reason given for most of RMT’s strikes.  That’s squarely in our top right box.  They’re bootleggers.

Just as useful is to look at when someone doesn’t act.  Do they stay in the background when a baptist would act? Do they do nothing when a bootlegger would act?

Let’s have another look at our RMT strikers.  Have they gone on strike about issues that threatened safety but didn’t affect members’ jobs, for example the fairly well-accepted problem of trespassers on tracks?  Not that I can find.  They’re not in our bottom left box.  They’re not baptists.


Bootleggers Pretend to be Baptists Everywhere, and We’re Onto Them

We’ve used some simple clear thinking to call out the RMT on its pretence that the strike was all about safety.  This isn’t to attack unions.  Faced with a choice between a union and a monopoly, I’m on the union’s side every time.  I just want them to be frank about their motivations.

The bigger picture is that bootleggers everywhere either stay quiet or posture morally and pretend to be baptists.  The Chancellor raises alcohol tax to promote responsible drinking – is the Treasury a bootlegger?  The Ministry of Defence buys British – are its senior staff, who commonly move into the private sector to work for British Aerospace, bootleggers?

You won’t tell who’s who by listening to what they say about themselves or to their own experts’ declarations.  To find out what their real motivations are you need to think about who stands to gain, listen to people without a dog in the fight, and pay attention to how they do and don’t act.

 

[1] I want to be clear that I’m not using bootlegger as a pejorative term.  I can relate just as much to rascal (non-mobster) bootleggers as holy baptists.  My mission is to call out baptists when they’re baptists, bootleggers when they’re bootleggers, and both when they’re both.

[2] This isn’t a union-bashing or union-promoting article.  Unions at their simplest are labour cartels that, in most countries, are effective negotiating counterparties in industries with monopoly employers.  Judging by wages and employment conditions in those sectors, unions do a fine job of this.


If You Want to Argue Well, Be Charitable

A good way of becoming celibate is to make a point of correcting other people’s grammar.  A great way to dissuade people from your argument is to make petty criticisms of the opposing side.  That’s why, if you want to understand an argument, get to the heart of the matter, have a useful discussion, and even persuade people to your point of view, it’s important to be charitable.

Here’s what I mean by charitable: genuinely wanting to understand why the other person is taking this (seemingly ridiculous) point of view.  It means putting their argument in its best light, acknowledging their good points, giving their words a positive interpretation when they were vague or could have meant multiple things, and letting minor or irrelevant inaccuracies go unchallenged.

If you’ve had the wherewithal to do that, your version of the other person’s argument should look like a good one, or at least not the ramblings of a lunatic that your inner uncharitable argument buster was tempted to portray.

Being Fair to Unhappy Young Voters

Here’s an example argument that’s been doing the rounds following the UK’s seemingly never-ending EU referendum post mortem.  It’s an analysis by the YouGov polling organisation that argued young people would be hard done by in the event of a Leave vote.  It’s usually quoted by under-40s alongside comments about not giving up seats on buses.

Voting by age

Even leaving aside the clunky use of font sizes for graphical effect, logic busters tore this to pieces:

  • The analysis is wrong because YouGov should have used conditional probabilities for life expectancy (your life expectancy goes up with every year you survive)
  • The bands are deceptive because they aren’t the same size, especially the tiny but extreme 18-24 band
  • It ignores how many from each band cared enough to vote, and it turned out that many more older people cared and voted
  • The poll was from before the referendum, which YouGov called incorrectly, making them a poor source
  • It assumes people don’t become continually more pro-Leave as they age anyway

Instead of dismissing all arguments based on YouGov’s work as pseudo-analytical rubbish, here’s the case expressed generously:

  • Younger people strongly prefer Remain, and older people Leave (even if YouGov is 8% out as it was in the referendum this strong trend is still true)
  • Younger people have much longer to live with the decision than older people (even if you correctly use conditional probabilities)
  • We’ll grant that people may not become more EU resistant as they age
  • Therefore, when you vote you should consider the others that will have to live with your decision, long after you’ve joined Elvis

I’ve ignored the criticism about band sizes because the trend is clear.  The point about young people not turning out to vote may explain why Leave won, but is a red herring here and not relevant to this argument.

If I’ve done my job properly, the argument should now look reasonable, and we can understand the protagonist’s point well enough to get beyond Punch & Judy.

Charitable Understanding Doesn’t Mean You Agree, But It Might

You now have a chance of getting to the heart of the matter.

Do you just disagree about what’s most important (is it more important to do what you think is right, or consider the harm to others if you’re wrong)?

Are you making different assumptions (we’re still going to have freedom of movement, vs we’re going to close the borders and brick up the tunnel)?

Are you each afraid of different things (Eurocrats telling you what to do for the rest of your life, vs being free to work in Slough but not Paris)?

Or, God forbid, did you decide the other person has an excellent point that’s changed your mind (you know what, I’m going to trust my grandchildren on this one)?

You may still conclude that the other person is a closed-minded bigot or self-serving sociopath, but at least you learned that by not being one yourself.

The quality of your discussion just went from bickerfest to forcing yourselves to think about important nuances and beliefs.  You may even make a good decision or get to the truth.


Lucky Like a Fox?

In amongst the David vs Goliath life and business lessons being expounded on the back of Leicester’s magnificent Premier League victory, a big question bothers me.  It’s a question someone asked when I was pontificating about why some companies became market leaders.  He said, “How do you know they weren’t just lucky?”

It seems a pertinent question in football, where opposing teams often have goal scoring chances in the teens but most games are either drawn or decided by a single goal.  The best team doesn’t always win, but much of this good and bad fortune cancels itself out over a long season.

The puzzle to decipher with Leicester then is what was luck versus, say, what was work ethic, organisation and ability, in a game that has all of these in ways that are often difficult to separate.

What Does a Lucky Winner Look Like?

To help clear the muddy waters, here’s my definition of a lucky winner: someone who, to win, needs everything with a strong element of randomness to go their way.  That is, wherever things can easily go right or wrong, you only win if they all go right; there’s no room for redundancy.

We can apply this to facts about football performance in 4 steps:

  1. How much pressure the team puts on the opponent’s goal (shots and corners)
  2. How much of that pressure turns into genuine chances (shots on goal)
  3. How many of those chances you or your opponents convert (goals)
  4. The reward you get for those goals (points)

Let’s look at each of these steps, taking a view on how much randomness there is, how Leicester fared, and whether they were lucky winners.

Leicester Under a Little Pressure

It’s hard to argue that the pressure a team applies or experiences is random.  It seems to be about ability, organisation and work; and Leicester were about average on this measure.  Leicester suffered slightly more pressure than they applied.  In a league table of pressure, of shots and corners, Leicester are in the bottom half.  Tottenham, Leicester’s main rivals for the title for most of the season, are 2nd or 3rd.

Chart 1 - Shots & Corners Tables

Things are going to have to go pretty well to get Leicester to the top of the table from here.

Accurate Artillery

Things start to go well for Leicester in front of goal, beginning with turning those shots into shots on target.  Hitting the target seems to be about ability and not luck, and Leicester’s strikers are a tiny bit better than average.

Chart 2 - Shots on Target vs Shots

When it comes to turning those shots on target into goals, Leicester’s strikers are the best in the League, beating the trend by quite a long way.  It takes some skill to hit the target and beat the keeper in the split seconds available at the top level, but surely there’s an aspect of fortune in hitting the right spot during the maelstrom, defenders being there to block it, or the opposition keeper being well placed and reacting.  It seems harsh to judge this superior performance in front of goal as mainly down to luck, but Leicester’s centre forward is beating the trend, by quite a long way, compared to some world class goal scorers.

Chart 3 - Goals vs Shots on Target

Super Stopper

In defence, Leicester beat the trend, again by quite a long way, Leicester’s opponents being poor at turning chances into goals against Leicester.  Is Leicester’s keeper, Kasper Schmeichel, who Leeds sold to Leicester because of a poor goals against record, the third best in a League?  One that contains the national keepers of England, France, Spain, Belgium and Czech Republic?  Is this all superior skill, or was there a little bit of blessed good fortune in amongst the goalmouth maelstrom? 

Chart 4 - Goals Against vs Shots Against

Not All Goals Are Equal

If there were a Premier League for goals for and against, Leicester would have been 3rd.  So how come they won it with 2 games to spare?  This is because of a points system where you get 3 points for a win, whether you win by 1, 2 or 10 goals.  If you score 3 goals in 3 games, you could get three 1-0 wins, which is worth a lot more than one 3-0 win and two 0-0 draws.  Of Leicester’s 19 victories when they won the title, 14 were by a single goal.  Have a look at a table of average winning margin, comparing good teams and poorer teams.  The teams in green finished in the top half of the League, red in the bottom half.
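The arithmetic behind this can be made concrete.  Here’s a minimal sketch of the Premier League points system (function and scenario names are my own, for illustration):

```python
# Premier League scoring: 3 points for a win, 1 for a draw, 0 for a loss.
def points(results):
    """results: list of (goals_for, goals_against) tuples, one per game."""
    return sum(3 if gf > ga else 1 if gf == ga else 0 for gf, ga in results)

# The same 3 goals scored across 3 games, with very different returns:
efficient = [(1, 0), (1, 0), (1, 0)]   # three 1-0 wins  -> 9 points
wasteful  = [(3, 0), (0, 0), (0, 0)]   # one 3-0 win, two 0-0 draws -> 5 points
```

Three single-goal wins are worth 9 points; the 3-0 win plus two draws only 5.  That gap is how a team can finish 3rd on goals but 1st on points.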

Chart 5 - Points vs Goals

This is a common pattern in many sports: top teams have high average margins of victory, i.e. they often win easily; but weaker teams only just win when they do win, and so have smaller winning margins.  Top teams have redundancy in their victories, lesser teams have hardly any.  Leicester’s average goals per game is one of the lowest of any top division champions since the second world war.

So Was it Luck Then?

So how much of Leicester’s Premiership was luck?  I think it helped quite a bit.  Leicester were OK at putting opponents under pressure but no better, and so pretty much everything else had to go in their favour to win the league, which it did from conversion of chances to their large proportion of single goal victories.  When everything has to go right, with little room for redundancy, that’s my definition of a lucky winner.

If you don’t believe me, have a look at what the manager said when asked whether he could follow this up with another Premier League title:

“No. Next season we have to fight for 10th position.  We have to make sure we are safe then we look to something more.”

No one in his club accused him of talking down the team.

Of course, it wasn’t all about luck.  Leicester’s centre forward, Jamie Vardy, did not look out of place when called up to play for England; their winger, Riyad Mahrez, was voted by his fellow professionals as their player of the season.  You don’t create the chances to win the Premier League without outstanding ability.

 

I take a couple of lessons from all this.  First, thank goodness, fortune sometimes smiles on David enough for him to beat Goliath.  Second, before praising and picking up top tips from winners, or condemning and criticising losers, maybe take a moment to see if someone was in or out of luck that day.


The giant returns on getting a bit better

Bad Ideas Designed To Stop People Thriving: #1, The Learning Curve

Think about developing a skill, and you think of a learning curve. Early on you improve quickly, then progress gets slower; and it’s soon hard to tell if you’re improving at all. At some stage you plateau and stop improving, just like your handwriting, driving and party dancing did a few decades ago. If you’re super conscientious you practise for 10,000 hours, making tiny improvements to get really good, but this seems to have a high price.  Here’s how that learning curve looks:

Chart 1 - Learning Curve w Title

If we think like this, it’s no wonder that we eventually stop making an effort to get better and unconsciously start coasting, then maybe even start looking around for a new skill.

The good news is that this limiting picture of learning curves is, in the vernacular, “a crock of shit”:

  1. Learning curves for individuals don’t need to look like this – there’s nothing inevitable about plateauing
  2. The vertical axis measures the wrong thing, and fools us into thinking the wrong way

I’ll just look at this second issue here, and propose a better measure that will make you think differently and cheer you up.

What If We Looked at Reward Instead of Ability?

Let’s analyse the data lover’s paradise of baseball, and the abilities of folk who are all the way along baseball’s learning curve – starting pitchers for major league teams.

Here’s the pitching ability of the top 55 ranked pitchers using one common measure: walks and hits per innings pitched, or WHIP.  Lower is better.

Chart 2 - Pitching Ability w Title

I’ve only shown one season, so variability is higher than for career stats.  But if you look at the trend line, you can see the difference in ability between pitcher number 55, Wei-Yin Chen, and pitcher number 1, Clayton Kershaw.  It’s tiny: less than 1 hit or walk conceded in every 10 innings pitched.  To go up 1 place in these rankings, you need to improve your ability by about 0.2%.
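To make the measure concrete, here’s the WHIP formula with made-up season lines (the numbers are illustrative, not the real stats):

```python
# WHIP = (walks + hits) per innings pitched; lower is better.
def whip(walks, hits, innings_pitched):
    return (walks + hits) / innings_pitched

# Illustrative season lines (made-up numbers, not actual records):
number_1  = whip(42, 160, 230)   # ~0.88
number_55 = whip(50, 172, 230)   # ~0.97
# The whole top-55 gap here is under 0.1: less than one extra
# walk or hit conceded per 10 innings pitched.
```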

Let’s look again at our pitchers, but instead of comparing ability, we’ll compare how much they get paid.

Chart 3 - MLB Salary w Title

Don’t forget that these guys are on the far right diminishing returns part of the learning curve.  The difference in their abilities is tiny, but the reward for tiny improvement is huge. In fact the further along the learning curve you go, the more difference a tiny improvement makes.  Here’s a comparison of salaries that includes people from the full professional spectrum of pitching ability.

Chart 4 - All Baseball Salary w Title

Choose any field that rewards ability – from sales and project management to writing novels and betting on football matches – and you’ll find the same pattern.  Change your reward from money to whatever is important to you: glory, gold medals, cats rescued or souls saved from damnation, and again the pattern is the same.  The better you get, the bigger your reward for getting even better.  Looking only at increase in ability leads us to draw the wrong conclusions because it’s the rewards that really matter.  When we look at rewards, we end up seeing our learning curve in a new way:

Chart 5 - Reward Curve

What to Practise?

Moving from left to right on this chart is about smart, conscious practice. That leads us to 2 questions: what to practise and how to practise?

My takeaway from this article is about what to practise: take the few fundamental skills needed to be good at your job and practise those a lot, no matter how good you already are and no matter how little you think you can improve. That seems to be the best way to reap some giant returns on getting a bit better.


How a Simple Logic Tree Could Have Kept the Sony Walkman at #1

In 1989 Sony dominated the personal music player market with 50% US market share, and still led the market in 2000 as CDs took over from cassettes.   By 2008, mp3 players had become the market standard for portable music, and Apple had up to 86% share depending on which survey you looked at. Sony was nowhere.   Apple did this despite being late to this market and not having a better player: in third party reviews the iPod is typically rated below Sony’s and other suppliers’ products.

To understand what went wrong, in a way that anyone could have worked out at the time with no need for wisdom in retrospect, we can use a simple logic tree.

Here’s a simple logic tree that lays out the key things that need to be true for a company to dominate the portable music player market. It’s hard to argue that any of this wasn’t obvious, even before mp3 became a usable technology:

Chart 1

Logic trees tend to reveal useful and obvious things once you expand them, which is what happens if I expand the bottom left leg of the tree.

Chart 2

This bottom left hand box is where the changes happened with mp3. It was no longer just about the experience of using the player, where Sony excelled, it was also about getting your music onto it. People were no longer putting cassettes and CDs into players; they were uploading tracks from CDs, downloading them from websites, or sharing them using services like Napster. So the manufacturer’s player needed to work well with the systems used to download, store and organise music, systems like iTunes.
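The structure of a logic tree like this can be sketched in a few lines of code: a claim holds only if all of its sub-claims hold, so one false leaf sinks the whole tree.  The node labels below are my paraphrase of the article’s tree, and the truth values are for illustration:

```python
# A minimal logic-tree sketch: a branch holds only if every child holds.
class Node:
    def __init__(self, claim, children=None, holds=True):
        self.claim = claim
        self.children = children or []
        self._holds = holds  # only used for leaves

    def holds(self):
        # A leaf stands on its own evidence; a branch is an AND of its children.
        if not self.children:
            return self._holds
        return all(child.holds() for child in self.children)

tree = Node("We dominate portable music players", children=[
    Node("Customers prefer the experience of using our player", holds=True),
    # The bottom-left box: where the mp3 shift broke Sony's position.
    Node("Our player works well with how people now get their music", holds=False),
])
```

Here `tree.holds()` comes out false even though Sony’s player-experience leaf is true, which is exactly why expanding the tree points you at the box that matters.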

Apple realised the importance of the bottom left box and did something about it. It grew the iTunes library to 1.5m songs in 2 years after launch, and its devices were the only ones compatible with iTunes’ format. By 2009, iTunes consumers had downloaded 8 billion songs. These only generated ~$800m revenue for iTunes; but they made iTunes the default music management system, and that laid the ground for $22bn in sales of iPods.

It’s easy now to look back at how things turned out and be tricked into thinking that Apple must have had some overriding advantage. But Apple didn’t even have a music store until 2 years after it launched the iPod in 2001, when Sony was one of the biggest music companies in the world. So Sony had some outstanding advantages of its own. Apple just bothered to look at that bottom left hand box.

We’ve got to the heart of the matter in 2 simple steps that anyone could have taken. We haven’t created a paint by numbers solution, because those don’t exist; but we have brought our attention to the critical issue to investigate and solve. Shame for Sony, but I’m quite pleased that Apple solved it.

 

For a free introduction infographic about how to create a logic tree, sign up at www.kardelen.training/logic-tree/

 


Critical Thinking Gives You Better Predictive Power Than Any Expert

Difficult Decisions

The people of the UK will soon need to decide if they want to leave the EU.  To make this decision, each person needs to weigh up a tumble of possible consequences of staying or leaving: will they be better off, what will happen to immigration, what about security?  This is the kind of judgment call, having to predict what might happen in an uncertain future, that characterises all kinds of personal and business decisions.

Folks’ natural inclination in these circumstances is to consult an expert, either in person or in an editorial column, or to fall back on some firmly held belief about how the world works.  One of these approaches has a track record of predictive success roughly equal to a chimp throwing darts at a board showing the different scenarios; the other is a bit worse.  But there is a third way, with a much better track record than either of these common approaches.

Who Are the Best Forecasters?

Here is an analysis[1] that tested how accurately different groups predicted a series of important political events, such as the break up of the USSR and leadership changes in a post apartheid South Africa.

Chart 1

The first thing to notice is that the average well-informed human’s predictions are barely better than assuming nothing changes.  The second thing is that intelligent people who take time to inform themselves, or “dilettantes”, predict a bit better than experts do.  Experts are typically surer of themselves and tend to be overconfident on the extremes, being more sure that something will happen when it doesn’t, and being more sure something won’t happen when it does.

Using the same study to look deeper, at characteristics of people who make the best and worst forecasters, reveals a super useful insight.  Lots of factors make no difference: age, education, technical expertise beyond a fairly basic level, political leaning, optimism versus pessimism, and idealist/realist worldview.  But one trait that does distinguish good and bad forecasters is this: the willingness to consider a range of viewpoints and facts, formulating views based on that perspective and evidence rather than any pre-conceived beliefs.  People with this openness are termed foxes in the study; people with more determination to stick to their worldview are termed hedgehogs.   The foxes were much better forecasters than the hedgehogs.

Chart 2

Worryingly, those hedgehogs look like the politicians and expert ideologues filling the airwaves with elegant and consistent theories; the ones many people rely on when forming their own judgments to make important decisions.  They also look like visionaries and business gurus with timeless, universal success principles.   The median hedgehog is worse at judging outcomes of important events than a dart throwing chimp.

So How do Good Forecasters do it?

In an ongoing real life study[2], volunteer dilettantes, who work just like the foxes in the previous study, have outperformed every other competitor group in formal forecasting competitions, including university departments and government intelligence analysts with access to privileged information.  They don’t just beat every competitor, they beat them easily, every year.  These “superforecasters” have no specialist expertise; their only common factor is their mindset and a common approach:

  1. Clearly define the problem, being careful not to substitute it for an easier one
  2. Break this bigger problem down into components small enough to analyse clearly
  3. Use the best, most relevant external data, and internal reasoning, to test each component
  4. Share that reasoning and evidence with other people to test whether it is robust, whether you’ve missed or misunderstood anything, inviting challenges and improvements
  5. Adjust the answer as relevant new evidence emerges
  6. Once the event has come to pass, measure and get feedback about how accurate you have been, learn, and adjust for next time
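The steps above can be sketched numerically.  Here’s a minimal Python sketch with made-up probabilities: decompose a question into components (steps 2–3), revise the answer as evidence arrives (step 5) with Bayes’ rule, and score accuracy afterwards (step 6) with Brier scoring, the measure used in Tetlock’s forecasting tournaments:

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Step 5: revise a probability when a new piece of evidence arrives."""
    numer = prior * likelihood_if_true
    return numer / (numer + (1 - prior) * likelihood_if_false)

def brier(forecasts, outcomes):
    """Step 6: mean squared error of forecasts vs outcomes (0 is perfect)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Steps 2-3 (made-up numbers): "leader replaced this year" decomposed into
# "a credible challenger exists" AND "the party would back a challenger".
p = 0.6 * 0.5                    # 0.30
p = bayes_update(p, 0.8, 0.3)    # new evidence twice as likely if true; ~0.53
score = brier([p], [1])          # the event happened; ~0.22
```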

If this looks to you like good critical thinking that anyone can learn, that’s because it is.  It doesn’t require genius, years of specialist subject matter expertise, privileged inside information, or any secret formulas.  Good, robust critical thinking just needs a disposition to be open and challenge, some simple training and a bit of regular practice.

 

For a free introduction to critical thinking, sign up at www.kardelen.training/logic-tree/

 

[1] Expert Political Judgment, Philip Tetlock, 2005.  I have changed the term “Contemporary base rate” to “No change from current” to ease understanding.  I also excluded tested models (cautious case specific extrapolations, aggressive case specific extrapolations, and autoregressive distributed lag models), which perform better than humans where they are available.

[2] Superforecasting: The Art & Science of Prediction, Philip Tetlock & Dan Gardner, 2015

 


How a Logic Tree Helped Win the Tour de France

In 2008, British Cycling’s Performance Director wanted to create a professional road cycling team, and needed to pull together a plan to take to a potential backer for funding.

The idea was simple: a Pro Tour level cycling team needs a budget of €[x]m; if we add in our world leading performance expertise, we can win the Tour de France.

Applying some critical thinking before launching into a plan flipped this simple idea around, and made getting the budget right as important as any of management’s performance expertise.

Taking the premise “We will win the Tour de France,” a simple logic tree clarified what needed to be true for this great achievement to happen.  Here’s the tree:

Tree 1

Both legs of the logic tree need to be true to achieve the goal of winning the Tour de France.  The right leg is about faith in a management team that wins gold medals, and whether their expertise can be transferred into winning on French roads.  The left leg is where the new insight came so I’ve expanded it here:

Tree 2

 

That box at the bottom left is the insight that makes all the difference.  In professional sports teams, unlike national and Olympic teams, the players can move if the money isn’t good enough.  Looking across dozens of professional sports, including road cycling, there is solid evidence that to attract and retain the top talent, and to win the biggest prizes, the team needs to be one of those with the most money.  As many sports teams have learned, you can have the best youth academy and coaching methods in the world, but the top players or riders will leave if they aren’t rewarded like their world class peers.

So it was essential to investigate the budgets of the other top teams, including the salaries of all the riders; and check the costs needed to pay the best salaries, as well as to have the best facilities and support.  Adding everything up, and working out what was needed to be the best funded team, made the €[x]m funding requirement a lot bigger than it had been before anyone started sketching out logic trees.

So What?

You could look at the tree and say “that’s obvious.”  Well that’s how it should be.  Drawing out a clear logic tree reveals important things in an obvious way.  The main point – to win you need the most money – wasn’t obvious to anyone before some simple thinking brought it to everyone’s attention.

Laying out this clear reasoning and supplying the evidence also made things very obvious to Sky Sports, the company that was putting up the cash.  They wanted to have a good chance of winning the Tour de France; they agreed with the thinking, understood the evidence, and believed in management’s ability to make it all happen, so they made Team Sky the best funded team in the sport.

Obviously, a budget doesn’t ride a bicycle up giant French mountains.  The Tour de France was won by the riders and the team supporting them, not by a logic tree and the associated business plan.  The clear critical thinking using a simple logic tree just helped the team get enough money to pay for all the blood, sweat and tears; and it gave Sky some justified confidence that it was going to be money well spent.

 

For a free introduction infographic about how to create a logic tree, sign up at www.kardelen.training/logic-tree/

 


To Make a Good Case to People Who Care, Don’t Belittle the Alternatives

A common and misguided way people make a point or win an argument is to take their favoured option and contrast it with a shoddy alternative, or “straw man.”  For politicians, this straw man is typically their opponent’s position described in such an oversimplified and biased way that the opponent looks a bit ridiculous.  Business people use this technique too.  Here’s a straw man example, a picture doing the rounds on LinkedIn, that draws on the current meme “you should love everything about leadership unless you’re a mild psychopath.”

Boss vs Leader

This straw man technique is effective in persuading two main groups: (1) people who already agree with us, and (2) people who don’t really care that much and want to make a quick decision so they can stop thinking about it. If we want to engage someone who actually cares about the subject we’re raising, the evidence shows that a straw man will put them off. People who care about a subject naturally find holes in such naive comparisons, to the protagonist’s discredit. Here’s the effect of one candidate using a straw man, in a couple of experiments where people were asked to choose between 2 candidates.

Straw Man Chart

So if your objective is to get approval from people who already agree with you, or who don’t care that much, then use that straw man. If you want to challenge people’s thinking properly, engage, and even persuade them in a subject they care about, then present the alternatives fairly, and explain in a balanced way why you and they should prefer the case you’re making. And please put that straw man on the top of the bonfire where he belongs.


The Best Training is Just Like Learning to Drive

The return on investment you got from driving lessons is more than infinitely higher (yes I really said that) than that high profile £1,000+ crammed-full-of-speakers training event you went to.  The reason is this: skills can depreciate very quickly, so how well you’re using them 2, 5 or 10 years later is the difference between one of the best investments you ever make, and a costly waste of time.

Let’s look at a case example, based on a study by the excellent training researcher, Ann Bartel[1]. You can see that if a person retains the skill well a year later, returns[2] are outstanding (and if the person actually improves his or her skills every year, returns quickly get into triple digits).  But if they lose just a third of a new skill in a year, the training becomes a loss maker.
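The mechanics of why retention dominates the return can be sketched with hypothetical figures (the cost, uplift and retention numbers below are mine, not Bartel’s):

```python
# Hypothetical sketch: how skill retention drives training ROI.
def training_roi(cost, annual_uplift, retention, years=5):
    """Return on a one-off training cost, given the wage uplift it produces
    in year 1 and the fraction of the skill retained each subsequent year."""
    value = sum(annual_uplift * retention ** t for t in range(years))
    return value / cost - 1

# Full retention vs losing half the skill each year (made-up figures):
good_case = training_roi(1000, 400, 1.0)   # = 1.0, a 100% return
decayed   = training_roi(1000, 400, 0.5)   # = -0.225, a loss maker
```

Same course, same cost; the only thing that changed is how much of the skill survives each year.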

Training Retention Chart

So how do we get those fantastic paybacks by having skills we’ll still be using well in 5 or 10 years?

  • To start with, we learn skills that we know we’ll be using 5 or 10 years later, like goal setting or delegation or problem solving. Quickly debunked and forgotten management fads like Good to Great and Myers-Briggs are the Zumba Jazzercise of the training world and crash expensively at this first hurdle
  • We practise, practise, practise in training until the skill starts to feel natural, and engages our procedural memory (the memory engaged for riding a bike). Concepts and insights alone are like reading about how to ride a bike
  • We set things up to keep practising long after training is over (hopefully because we learned skills that make us better at our jobs and that we’ll enjoy using every day, but also maybe because we set up protocols that prompt us to practise)

If it all feels like learning to drive, or to ski or to do the breast stroke, then we’re onto a rewarding winner.

 

[1] Returns for 30% & 40% extrapolated from Bartel’s figures for 5%, 10% and 20%

[2] Calculated using wage uplift for the trainee; returns to the business are typically 5x this


Here’s The Thing: The Captain Doesn’t Matter

We’ve just experienced the usual fuss that surrounds the appointment of a new England rugby captain, together with theories about what makes a good captain and anecdotes about how various different captains made their teams great.  Is this all just jibber-jabber or is there any evidence to see if the team captain makes a difference to performance?  The evidence tells me that the captain hardly matters.

Even Where The Captain Has Lots of Power, He Barely Makes Any Difference

I looked at English county cricket results from 2006-15 to test the importance of the captain.  There’s a fixed number of counties over 2 divisions so we’ve got the same sample of teams throughout, and there’s a decent number of changes of captain to test our theory.  Cricket also seems to be a game where the captain should have a lot of influence (deciding whether to bat or bowl first, when to declare, field set up, bowler selection, etc) compared to other sports.  So my theory is that cricket should show the captain’s influence at its strongest.

Performance of teams varies year on year, even when you don’t change the captain.  So if the captain makes a difference, for better or worse, then you’d expect the performance to change more in the year that teams change their captains.  Here’s a comparison of the change in points when teams changed captain, compared to the years they didn’t change the captain.

Chart 1 Points

So changing the captain changes performance in a way that’s barely perceptible.

Don’t Be A Captain (or CEO) in a Bad Year

But what about the perception we have of our teams doing better after we’ve got rid of the old loser and brought in a fresh face?  Is that intuition wrong?  Well it is and it isn’t.  Here’s an analysis of whether teams had a performance increase or decline in the year before the new captain came in, the year he came in, and all subsequent years.  (The simple percentages are a coincidence; the sample in each group is 40.)

Chart 2 Team Performance

Here’s what seems to be happening:

  • When teams change their captains, it’s often when they’ve had a bad year
  • The new captain comes in and the team improves to the level it was before the bad year
  • After that the team carries on at the same level until another bad year and the cycle is repeated

What’s most likely going on here is something called reversion to the mean.  Something worse than normal happens, so you make a change and things get better.  We come away thinking our change has made things better, but all that’s happening is that things are reverting back to normal; we can tell this because things are about the same as before the bad thing happened.  The exact same pattern happens in studies of CEOs being replaced.
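Reversion to the mean is easy to demonstrate with a toy simulation.  In the sketch below (all numbers invented) a team’s underlying level never changes and the captain doesn’t appear in the model at all, yet “sacking him” after a bad year is reliably followed by an improvement:

```python
import random

random.seed(1)

LEVEL, NOISE = 100, 15   # a team's fixed underlying level, plus luck

improvements = []
for _ in range(10_000):
    bad_year = LEVEL + random.gauss(0, NOISE)
    if bad_year < LEVEL - NOISE:     # an unusually bad year: change captain
        # The next season is drawn from exactly the same distribution --
        # the change of captain has no effect whatsoever in this model.
        next_year = LEVEL + random.gauss(0, NOISE)
        improvements.append(next_year - bad_year)

# The team improves after the change on average, purely by reverting
# to its unchanged underlying level.
print(sum(improvements) / len(improvements))
```

The printed average is comfortably positive, which is exactly the pattern in the captain (and CEO) data.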

How To Choose a Captain

So back to our question, does the captain matter?  If his effect is indistinguishable from random fluctuation in a sport like cricket where the captain has such influence on tactics, then I find it hard to believe that the choice of captain really matters much at all most of the time.

What seems to me to be more important is that the team chooses its best players – there is outstanding evidence that the team with the best players wins.  So choose your best players, and make one of your very best the captain.  The debates I heard concluded that the most essential quality of great captains is that they lead by example – which sounds a lot like picking a great player and making them captain.


Dealing with Frightening Research Findings

All of us, quite often, find ourselves on the wrong end of troubling research findings, from analysts, subject matter experts, and marketeers.  Such findings can lead us to change something drastically or just worry a lot.  We’re commonly not in a position to challenge how well the research behind the findings was done, and whether the conclusions are actually correct.  In these situations, we can almost always change how we decide to act on those frightening findings, and usually lower our heart rates, if we ask a simple question:

“OK, if I assume these findings are true, what do they really mean for me?”

We can do this with zero subject matter expertise, and usually quite quickly.

Let’s do this with a recent news release from the World Health Organisation’s cancer agency, the IARC, which scared the bejesus out of meat eaters.

IARC Frightens Carnivores with Scary Headline

The IARC news release said that processed meat caused colorectal cancer and that red meat probably did too.  The release contained just 1 piece of data: every 50g of processed meat eaten daily increases the odds of contracting colorectal cancer by 18%.

Here are the conclusions behind the headline:

Table 1

The broader conclusions, as usual, are a bit less sensational than the headline, particularly for red meat given its limited evidence of harm.  Still cause for concern though: sounds like eating not very much increases our chances of dying young by quite a lot.

So let’s check what these findings mean for us.  To do this we need to put everything in context – what we mean by red and processed meat, how much is 50-100g, and what a 17-18% increased risk really means.  Then we need to ask what happens if we do and don’t follow the implied advice.

Sweeping Definitions that Cause Uncertainty

Let’s get clear about what the IARC means when it says red and processed meat because those groups sound a bit broad.  Its red meat research doesn’t distinguish between a slow-cooked organically raised local lamb stew, and a minced burger that’s fried in oil, comes between 2 slices of sugar enhanced bread, and is served with fries and a cola.  Its processed meat research doesn’t distinguish between traditionally cured Serrano ham and a scotch egg.

We’ve only looked at definitions, but intuitively we’re already probably a bit less worried if we’re home cooking foodies or paleo warriors.

But let’s be prudent in this case and assume the findings apply to all the processed and red meat that we eat with no exceptions, from steak tartare to kebabs.

Small Looking Numbers that are Really Large

Let’s look at that small sounding extra 50g of processed meat or 100g of red meat.

The average person who eats red meat consumes 50-100g per day, and heavy red meat eaters eat more than 200g per day.  Apparently there aren’t reliable numbers on processed meat.  So eating that small sounding extra 50-100g means doubling your intake if you’re an average Joe; and cutting down by 50-100g means cutting it out entirely.  So changing by 50-100g isn’t a small amount, it’s a lot unless you’re a major carnivore.

The critical thinking alarm bell here was the 50-100g number being presented as an average daily amount.  If it had been weekly (half a kilo) or monthly (1.5-3 kilos), then it would sound like the large amount it is.  No wonder those mobile phone salesmen always convert the insurance cost into a few pence per day.
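The daily-to-weekly-to-monthly conversion is trivial, but making it explicit shows why the framing matters (a quick sketch):

```python
# The 'small sounding' daily amounts, restated per week and per month
for grams_per_day in (50, 100):
    weekly = grams_per_day * 7
    monthly = grams_per_day * 30
    print(grams_per_day, weekly, monthly)  # e.g. 50g/day -> 350g/week, 1500g/month
```

Restated, the “extra 50-100g a day” is roughly half a kilo a week, or 1.5-3 kilos a month.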

Large Looking Percentages that Don’t Amount to Much

Now let’s look at the effect of doubling your intake, and that scary 18% uplift in your chances of getting colorectal cancer.

When faced with large percentage increases or decreases, we have another critical thinking alarm bell going off and a typically very useful question: “An increase from what to what?”

Currently, the odds of mortality from colorectal cancer are about 1 in 8,000 in developed countries.  An 18% uplift gives you an extra 1 in 45,000 risk of mortality, or 0.002%.  To make these numbers meaningful, here are some activities with an equivalent risk of mortality.

Chart 2

Just don’t compound your risk by driving 500 miles to a triathlon, and eating more pork pies on the way home to refuel.

What if I Follow the Advice, and What if I Don’t?

What if I take the advice and cut out processed meat?  I haven’t cleansed myself of the risk of death from colorectal cancer; I’ve just lengthened the odds from 1 in 8,000 to 1 in 9,000.
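The arithmetic behind these odds can be checked in a few lines.  This is a sketch: the 1 in 8,000 figure is the rough baseline quoted above, and the two ways of reading the 18% uplift give slightly different absolute numbers, both tiny:

```python
baseline = 1 / 8000   # rough current odds of dying from colorectal cancer
uplift = 0.18         # the IARC's 18% relative increase

# Reading 1: the 18% sits on top of today's odds.
extra = baseline * uplift             # absolute extra risk
print(round(1 / extra))               # 44444 -- the "extra 1 in 45,000"
print(extra)                          # about 0.0000225, i.e. roughly 0.002%

# Reading 2: today's odds already include the uplift, so cutting the
# meat out lengthens the odds rather than the uplift shortening them.
odds_without_meat = 8000 * (1 + uplift)
print(round(odds_without_meat))       # 9440 -- the "about 1 in 9,000"
```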

OK, what if I ignore the findings and keep eating those delicious salt beef sandwiches?  I might get colorectal cancer, but there’s a much, much higher risk that something else will get me first.  Here’s the risk of death by colorectal cancer versus all other causes for different age groups.

Chart 3

OK, So What?

So here’s my reinterpretation of the IARC news, having just checked what the findings actually said and put everything in context.  Eating a lot more processed meat would increase my risk of colorectal cancer by double digit percentages from not very much at all to still not very much at all; and I don’t seem to have strong reasons to believe that eating simply-cooked unprocessed beef or lamb will do me any material harm.  This is very different from the original message from a high profile and widely trusted organisation.

We didn’t need any subject matter expertise in oncology to superimpose some sanity and perspective onto this frightening headline; we just needed some basic critical thinking to ask what findings really mean if they’re true.  If we can use such simple clear thinking to add a much more balanced context to the work of rigorous scientists, think how we can use it to contextualise similarly startling insights from the less balanced world of business presentations, magazine articles, and pronouncements of gurus.

For me, I’m confident this article will increase my cumulative page views by at least 20%, will be my #1 health related post, and will retain my 100% publication track record.  I’m getting all that benefit from barely 10 minutes of research and writing per day.


England’s Exit from the Rugby World Cup
Damned Lies, England’s Exit from the Rugby World Cup, & Confusing “Should” with “Is”

Warning: this article will not appeal to anyone who doesn’t follow rugby union – anyone reading it who expects any other subject matter will feel ticked off.

Have a go at this calculation, rugby followers:

[Number of English newspapers] x [pundits plus readers] x [days since England’s first match against Fiji]

The answer you get is the number of opinions about why England under-performed at the rugby world cup and became the first former winner to be eliminated at the group stage.

I’ll summarise these several hundred thousand opinions into 3 themes:

  1. It’s the players
  2. It’s the coaching team
  3. It’s the RFU

Let’s have a look at the thinking and some facts, and see how well these theories stack up.

Is it the players?  Well yes, but not in the way you think it was

The “it’s the players” argument goes like this: England has the biggest player population of any nation and so should consistently have the best set of players to choose from, but the elite English players aren’t the best in the world.  Something is going badly wrong.

First, let’s see whether England has better or worse players than other nations.  Here’s the number of nominations for the world player of the year by nation since England won the world cup in 2003.  I’ve gone back to 2004 because there are top players playing in this world cup who were in the list back in 2004.

Table 1

England doesn’t seem to have produced many of the world’s top players since 2003.  In fact, it had the same number of world player of the year nominees in the single year it won the world cup in 2003, as it has had in total in the 12 years since.

How about England’s player quality in just the Home Nations?  The most solid data we have here for comparison is who was selected for the most recent British & Irish Lions tour, to Australia in 2013.  Looking at the starting XV across the 3 tests, the average number of players by nation was Wales 8, Ireland 4, England 3, and Scotland 0.[1]  So even in the British Isles, England ranks second bottom.

So England has had fewer top players in recent years than any top nation except Argentina, Italy and Scotland.  How can this be, given England’s massive player base?  Let’s look at that next.

Here’s a chart showing the IRB rating of every nation in the rugby world cup on 9th October 2015, versus the number of registered players in that country.

Chart 1

You can see that once a nation has more than about 30-40,000 registered players (all the countries named in the chart), there’s pretty much no relationship between player population and success.  If there were, then Japan wouldn’t be celebrating a great world cup.  Instead, its union would be holding a 360 degree inquiry into why Scotland, with half the playing population, beat Japan so convincingly and qualified ahead of the Brave Blossoms.  Italy would also be gnashing its teeth each time the smaller Wales player population finishes above it in the 6 Nations, which is every time.

I’m not saying player population is irrelevant, just that other things are likely much more relevant once the population is big enough.  This kind of makes sense given that a match day squad has 23 players.

Let’s look at this another way to see if anything about player population is useful.  Rugby is a minority sport in most countries.  Let’s look at how relatively popular it is by country, by looking at the number of rugby players there are per thousand of population, and see if that’s related to performance.  Here’s a chart showing that relationship.[2]

Chart 2

Now we’re getting somewhere.

It seems that the popularity of rugby union in the country is pretty important, i.e. do the most talented kids choose to play rugby instead of football or rowing or boxing, and do they have equally talented peers to keep them on their toes and prompt them to get better?

Looking at it this way, England mildly underperforms, but you wouldn’t expect it to top the performance rankings given that rugby union in England isn’t all that popular.[3]

Here’s what I conclude: England hasn’t had the greatest set of players to choose from since 2003, but given that rugby union isn’t that popular in England, you wouldn’t expect it to lead the world except when the stars align and a few once in a generation players come along at the same time.  The fact that England is a highly populated country is less important than the fact that rugby doesn’t draw that much of the top talent.

Is it the Coaching Team?  Doesn’t Look Like it to me

OK, so how about that coaching team?  There have been lots of specific criticisms of the coaching team from playing style to selection and nepotism.  These may be correct or incorrect, but the way I’ll measure the coaching team’s performance is whether they do better or worse than expected given the group of players at their disposal.

Before the World Cup, the England coaching team had trained England to second in the 6 Nations Championship four years in a row.  England has fewer top players than France, Wales and Ireland on our international player of the year measure, and fewer than both Wales and Ireland on our British Lions measure.  England has finished ahead of 2 of those countries every year for the last 4 years.

In the 2015 World Cup, England were beaten by Wales and Australia, the teams that came third and fourth in the previous World Cup with relatively young teams, and who both are ranked higher and have more top class players than England.

So the England coaches did a better job in the 6 Nations than you’d predict given player quality, and did as you should expect in the World Cup if you can take off your red rose tinted spectacles.  There may be better coaching teams available, but England’s coaches seem to be doing, on average, a slightly better job than their international quality 6 Nations peers.  England performing below expectations looks like an issue with the expectations, and these were maybe set back in the day, when England had more world class players and a disproportionate number of British & Irish Lions.

Surely the RFU Must Carry the Can?  I Don’t See How They Do

The “blame the RFU” argument goes something like this: the RFU is the richest union in world rugby, with the biggest playing population; so with that combination of financial and playing resources England should have the best managed and capable team on the planet.  Simple.

We’ve talked about how playing population isn’t important once it’s higher than about 30-40,000.  So that leaves the money – what can the RFU actually do with all that money?  It seems there are maybe 3 big things:

  1. Select and recruit a good set of coaches – we’ve covered this and I’d say the coaches are OK
  2. Give the elite players the best possible facilities – no one is arguing that these aren’t wonderful
  3. Provide grass roots coaching to develop the next 2 generations of talent – now we’re onto something, so let’s talk about this next

I never thought I’d say RFU and “chink of light” in the same sentence, but take a look at this information on junior players coming through:

World Junior Player of the Year Nominations by Country

Table 2

England now matches the best nations in the world for numbers of top young players.

Now have a look at this information on how England’s junior teams are now performing:

England Placings in Junior Championships

Table 3

England’s junior teams are now often more or less the best in Europe, and in amongst the best in the world.  People who’ve watched England in the World Cup and pundits who have rated England’s players have seen the start of this generation coming through.

I don’t know how much we can credit the RFU for this uplift, how much is clubs and schools, and how much is the nation’s talented 8 to 10 year olds watching England win the World Cup in 2003, and taking up rugby instead of football.  I think much of it is the 2003 factor, and that the relationship between team performance and rugby popularity actually works in both directions; but it seems churlish to give the RFU no credit at all for all the good players coming through.

So on balance it seems that the RFU isn’t doing a terrible job with the things it can actually control.

Who’s the Luckiest Man on the Planet?

I’ve no idea whether the England coaching team will keep their jobs.  Sacking them seems harsh given the job they’ve done without a world-leading, or even Britain-leading, group of players.  But with the talent coming through, the coaching team for the next 2 World Cups is going to have some genuinely good raw materials to work with and may be able to fulfil England’s inflated expectations.

And This Relates to Critical Thinking How?

This is a critical thinking blog, and all I’ve done is analyse the ass off a rugby team’s performance.  Here’s the critical thinking lesson I took from looking at this.  When something isn’t going as you think it should, it’s important to put the scatter gun back in the holster and have a colder, harder look at the facts and what matters, because you’ve probably misunderstood something basic, like whether more players equals better players.  There’s often a world of difference between “should” and “is.”  It’s especially important to make this distinction when the subject is as emotionally distressing as a few 80 minute games of rugby.


[1] There is another source people use to compare players: the European Player of the Year.  However, if you look at how players are chosen in this competition, you see it is really a showcase for the European Champions Cup and is biased strongly to the 30 players in the teams that reach the final.  Since annual selection started, 4 of the 5 European Players of the Year played in the team that won the final, the fifth played for the team that lost in the final.

[2] The size of the bubbles in the chart indicates the size of the playing population for each country. I’ve omitted the Pacific Islands from this chart because Tonga and Samoa have very small player populations, and Fiji has historically concentrated its top talent on 7-a-side rugby.  These nations also lose disproportionate numbers of their best players to other nations.

[3] France, Australia and South Africa have regional and demographic concentrations of players.  If we looked at registered players in just those regions or demographics, rugby would look much more popular there and their bubbles would move to the right on the chart.

 


A Settled Team is a Winning Team?

A Settled Team is a Winning Team – or is it the Other Way Around?


There’s a received wisdom that says you need a settled team to get good results.  Here’s Will Greenwood talking about the England team in the 2015 Rugby World Cup:

But the most important thing right now – and I am not inside the England camp so I can only speculate on the mood within it – is stability. I cannot emphasise that enough. Chopping and changing helps no one.

Google “a settled team is a winning team” and you get pages of similar appeals to find a settled line up, from all sorts of well meaning experts and supporters.

If we accept these philosophers’ statements as true, we’re accepting 2 things:

  1. It’s true – teams do better when they have a settled line up
  2. We’ve got cause and effect the right way around, i.e. cause=settled team, effect=better results

Let’s test it shall we?

Do Settled Teams Get Better Results?

We can’t do this test by looking at dozens of teams, and seeing whether the ones that have more settled line ups do better, because that doesn’t help us work out cause and effect.  To get to the bottom of whether a settled line up is a winning one, and to understand what causes what, we need to choose one team, and see whether the team performs better when its line up is settled compared to when it makes lots of changes.

For my team, I’ve chosen the Liverpool football team in the 2014-15 Premiership season.  I chose Liverpool because they’re better than average and so should be able to string a good run of games together if the manager can get that settled line up.  They’re also not so good that they never lose, so they should give us a good range of results to analyse.

I’ve looked at Liverpool’s starting team in every Premiership game and counted how many player changes the manager made from the previous game.  The number of player changes ranged from 0 to 4 over the season, with an average of 2.3 player changes from the previous game.  Not all player changes are voluntary because of things like suspensions and injuries, and players coming back from suspension and injury, but that’s all part of the hurly burly of achieving our target of a settled line up.

Now let’s look at the average number of points Liverpool got when they had a more settled line up than average (0, 1 or 2 player changes from the previous game), versus when they had a less settled line up (3 or 4 player changes)[1].
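The split can be sketched like this, with invented results standing in for the real 38-game season (the cut between “settled” and “unsettled” at 2 changes matches the 2.3 average):

```python
# Each game: (changes from the previous starting line-up,
#             points won: 3 for a win, 1 for a draw, 0 for a loss).
# The results below are invented for illustration.
games = [(0, 3), (1, 0), (2, 3), (4, 1), (3, 3), (1, 1), (4, 0), (2, 3)]

def avg_points(games, settled):
    """Average points per game; 'settled' means 2 changes or fewer."""
    points = [p for changes, p in games if (changes <= 2) == settled]
    return sum(points) / len(points)

print(avg_points(games, settled=True))    # more settled line-ups
print(avg_points(games, settled=False))   # less settled line-ups
```

In the real season the two averages came out essentially the same.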

Liverpool Team Changes vs Results

Yes, you read the table correctly.  Liverpool did no better or worse, whether they made a handful of changes, or if they changed a third of the team.  This doesn’t mean having a settled team is irrelevant, but it does mean that other things, say how good the opposition is, are likely much more relevant.  So just having a settled line up doesn’t, even for a better than average team like Liverpool, reliably give you better results.

What if we Got Our Cause and Effect the Wrong Way Around?

It looks like our cause and effect theory doesn’t hang together: the team stability doesn’t give us better results.  But what if the cause and effect worked the other way around?  What if better results caused a more settled team?  Let’s look at the data again, remembering that the manager made on average 2.3 player changes after each match.  Here’s a chart of how many changes the manager made, depending on whether the previous result was a win, draw or loss.

Liverpool Team Changes After Results

He’s making 50% more changes when he loses.  Given that some changes are enforced by injury or suspension, win lose or draw, then I’d bet he’s making most of his voluntary changes after defeats.

So it seems that good results cause team stability, and bad results cause the manager to tinker a bit more with his line up.  The classic stick while you’re ahead, roll the dice again if you’re behind.

Testing the Theory in a Completely Different Environment

Every good European knows that Britain and France are fond cousins with opposite personalities, so if the theory works in France too, then maybe we’re onto something.   I also want to test things away from the pressure cooker environment of football.  So let’s look at the France rugby team in the 2015 Six Nations competition.

France Changes vs Results

Can you see the pattern?  Let’s look first at how results affect team stability:

  • Average number of changes after a victory: 1.5
  • Average number of changes after a defeat: 6.5 – Sacre Bleu!

So good results are followed by team stability, bad ones by a petite Revolution.

Let’s now look at how team stability affects results:

  • Results after making hardly any player changes: won 1, lost 1
  • Results after lots of player changes: won 1, lost 1

So whether Philippe Saint-André made lots of changes, or hardly any changes, turned out to be immaterial.[2]

So What?

So how do we square all this with our instinct that the teams we all recall as winners are all pretty stable?  The Arsenal invincibles?  The great Barcelona teams?  The world beating Australian and West Indies cricket teams from over the years?  Here’s what I conclude.  Winners make fewer changes, but they don’t win because they make fewer changes, they make fewer changes because they’re winners.

Leaving aside some armchair philosophy about whether causality exists at all, it seems to me that it feels natural to think that people’s actions (changing the team) cause outcomes (bad results) and it’s not that natural to think the other way around, i.e. that outcomes prompt or cause people’s actions.  That natural way of thinking tricks us into getting our cause and effect the wrong way around.  But the direction of cause and effect is obvious when you look at the evidence and think about what’s going on.

Makes me wonder about other cause-effects we’ve been accustomed to hearing about.  Do great or shocking governments cause national economies to thrive or flounder?  Do world class universities produce outstanding graduates?  Do a few corrupt old men create a monopolistic, lucrative global sports governing body? Or does it all work the other way around?

 

[1] For non football fans, you get 3 points for a win, 1 for a draw and 0 for a defeat.

[2] What did affect whether France won or lost? They beat both teams that regularly finish below them in the table, and they lost to all 3 teams that regularly finish above them.



BBC a Fountainhead of TV Creativity?


With the BBC’s status apparently under threat from the Tories, various people connected with the BBC have naturally come out to explain what a unique and valuable institution they think it is.  One of these people is Steven Moffat, producer of the highly acclaimed Sherlock, and of the new series of Dr Who.  Here’s a couple of extracts from a Guardian article covering his speech:

“It’s fair to say that there’s only one broadcaster in the whole world that would have come up with and transmitted as good an idea as Doctor Who…” [said Moffat]

Along with the Great British Bake Off and everything David Attenborough has ever done, Who is a wonderful example of the corporation’s breadth, according to Moffat. “There is no other broadcaster so madly varied and so genuinely mad,” he said. “Can you imagine what the world would be like without that insane variety?”

There’s a few things that make it difficult to form a judgement from this article, and use it as a basis to give the thumbs up or down about the BBC being a good thing:

  1. He’s only talked about the benefits of the BBC; and we can’t conclude it’s a good idea without also thinking about its costs. My home town club Blackburn Rovers would really benefit from having the entire Real Madrid team turn out for them on a Saturday, if we didn’t have to worry about those cursed transfer fees and annoying player salaries
  2. His argument in the article is based on a few examples, which is a common way of arguing, and is good and memorable and easy to visualise, but he’s given us no solid data to back up his general claim of world leading excellence and variety. For we critical thinkers, this sets off a little alarm bell: “Is he just cherry picking the good stuff?”
  3. His point about variety sounds like a red herring unless we take a moment to think about why variety is important. Does it actually matter if one big production company has a lot of variety, if the same types of things are being produced by lots of other companies big or small?

There’s other things about the argument that would make a logic buster twitchy, but I’ll stop at the 3 above.

Just because he’s not making his points in a perfectly balanced way – he’s promoting a cause after all – it doesn’t mean he’s wrong.  So let’s look at his argument.  We’ll confine our investigation to testing what he claims in the article, leaving aside the costs, and any other arguments for or against the BBC.

To have something tangible to get our teeth into, we need to clear up what he’s claiming.  We need to do that in a fair way that’s generous to the spirit of what we think he meant, but also in a way we can test with evidence.  Here’s what I come up with if I do that:

  • The BBC produces some of the best TV programmes in the world, and more than its fair share of all high quality TV programmes
  • The BBC produces a broad range of different types of high quality programmes, including some types that all the other TV companies put together don’t cover anywhere near as well as the BBC does

Let’s now look at some facts to test this, starting with BBC’s quality.  I’ve used as my source the 100 top rated TV programmes in the IMDb database.  I’ve adjusted the data to make it fair.  First, I’ve removed double counting[1], which leaves 92 programmes.  Second, I’ve given any remakes to the producer of the original series, for example I’ve given the US Office and House of Cards to the BBC.

Here’s a table showing how many of the IMDb top 100 are made by each of the top production companies.

Table 1

Go BBC.  It produces more programmes in the top 100 than anyone else, and is also the top producer since 2000.  For interest, Dr Who is in there at number 73.

Things are a tad shakier at the very top end of the ratings.  BBC has 1 programme in the top 5 (Planet Earth), versus HBO’s 3 (Band of Brothers, Game of Thrones, and The Wire); and 4 of the BBC’s 5 programmes in the top 20 are David Attenborough documentaries, the other being Moffat’s Sherlock.

Things are also a little fragile if we look under the surface at what’s been made since 2000.  Of the BBC’s 14 top 100 shows since 2000, 5 were by David Attenborough, and 2 were US remakes of earlier BBC originals.  That’s half of the BBC’s best shows since 2000 being reliant on one brilliant 89 year old and two US studios.

Let’s now look at variety, and whether the BBC makes programmes in genres that would be very weak if the BBC didn’t exist.  The IMDb top 100 contains 22 genres, and the BBC is in 13 of them, compared to HBO’s 15, ITV’s 9, and Fox’s 9.[2]

Chart 1

So the BBC is there or thereabouts in having lots of variety, even if it’s not the stand out global number one.   But what about the more germane point of the BBC contributing heavily to areas that other production companies don’t cover?  Here’s the BBC’s number of shows compared to everyone else’s shows in each genre in the top 100.

Chart 2

The BBC’s biggest areas are drama and comedy, which are also the most popular areas across the board.   The BBC is the second biggest producer of high quality drama after HBO, and is the biggest in comedy.[3]  Though this puts the kibosh on the BBC championing poorly represented genres, it does say that the BBC is making what the punters like, which seems like noble work.

The genres where the BBC really moves the dial are documentaries, with 5 of the 11 most popular, and at a pinch romance, with 2 of the 4 biggest tear jerkers.  These genres would be noticeably poorer without the BBC.

So here’s what I conclude about the BBC’s quality and variety from all this:

  • It makes lots of good programmes, more than any other producer in the world
  • It’s good at making drama, and people like drama
  • It’s very good at working with David Attenborough to produce outstanding documentaries, and is very reliant on the old fella

We can’t conclude the BBC is a good idea because we haven’t looked at the costs, or at any alternatives, but the benefits look pretty good for drama and documentary lovers.

When I look back at Steven Moffat’s claims in the article, I see what I often see from passionate advocates: he concentrates on the good stuff, ignoring the negative side of the argument, and he exaggerates a bit to tell a good story.  These are important characteristics for an advocate and a believer; but we can’t take what he says as a basis to form any solid judgement and either get behind him or boo him off stage.

Our analysis also shows something else I often see in the arguments of advocates and champions: once we accept that they’re a bit biased and take off the rose tinted glasses, they often still make some excellent points.  We can only see all that with confidence because we ignored pre-judgements, cleared up the thinking, and looked at the evidence.  No sh*t Sherlock.


[1] Some TV series have multiple series counted separately in the database.

[2] The totals look high because each show is typically in 2-4 genres.  For example, Sherlock is in crime, drama and mystery.

[3] Comedy is a little deceptive because the BBC hasn’t got many comedies in the top 100 that it produced after 2000.