Algorithms – A Bluffer's Guide

A breakdown and simplification of some current tech speak

The word ‘algorithm’ is uttered with a degree of reverence these days. It is the black magic behind AI and Machine Learning, and a favourite thing to go rogue in a modern plot line: the actors merely blame the bad algorithm when a computer goes crazy in a dystopian sci-fi catastrophe.

The decision-making demands of the modern commercial world far exceed our capacity in many instances, because our brains evolved for a very different sort of world: a world of small groups where we rarely met anyone very different from ourselves. We had significantly shorter lives and our main priorities were sex and survival. These days there is hugely increased complexity and nuance, yet the evolved desire for rapid choice-making hasn’t left us. Faced with these pressures, we turn to computers for help.

Computers helping humans is now so pervasive that it permeates almost all aspects of life. Such rapid change has occurred within a single lifetime as computing capacity has grown exponentially. Your mobile telephone has vastly greater computing power than all the computers on the first Space Shuttle combined. Think about it for a moment: your phone possesses all the computing power required to fire you into space. This incredible capability is why people have been fascinated by the idea of turning a computer from a dumb machine into a thinking machine (thinking as we do) since the dawn of the first machine. However, computational power is one thing; how to make it work as an independent thinking machine is another thing altogether. To do that, one of the key things you need is an algorithm.

Algorithms: the rules needed for machine thinking. 

Just to clear this up: machines DO NOT think. Computers can process a huge volume of information really, really quickly because they are unbelievably fast calculators. The hardware is just a superfast counting machine with a screen.

Algorithms are not hard to conceive if you think of them like this: an algorithm is what you need to cook supper for your family. Few families eat the same thing for every meal of every day, so there are constraints and variables. Imagine there are four of you. One is a vegetarian, one is on a low-fat diet and the other two aren’t that fussy but do have preferences. You want to provide them with a nutritious and tasty meal that ensures everyone enjoys the experience, including you.

Let’s imagine that you are 45 and have cooked for the same people many times before (almost daily), and as a consequence you have learnt a lot about what works and what doesn’t. However, this week is different: you haven’t had time to shop and the other three did the shopping for you. You open the cupboard doors and have a peer in the fridge and freezer to get an idea of what is available to cook with. Within about 30 seconds of taking stock of the cupboard contents, the fridge contents, the available utensils, any time constraints, the dietary preferences and so on, you decide on a meal. You cook it, serve it and everyone eats. They get up from the table appropriately nourished, leaving the process to be repeated the next day. What allowed you to do this was an algorithm in your head. Call it the ‘cooking for family’ algorithm.

Pause for a moment, though, and think about how simple that can sound and how the thinking and actions required were actually so incredibly, amazingly, mind-blowingly complex and nuanced.
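To make the idea concrete, here is a minimal sketch of a toy ‘cooking for family’ algorithm in Python. Every meal, rule and ingredient here is invented purely for illustration; a real version would weigh preferences and learn from feedback rather than just filter.

```python
# A toy 'cooking for family' algorithm: filter candidate meals against
# the constraints (diets, time, what's actually in the cupboard).
from dataclasses import dataclass

@dataclass
class Meal:
    name: str
    vegetarian: bool
    low_fat: bool
    ingredients: set
    minutes: int

# The 'recipe book' learnt from years of cooking for the same people.
CANDIDATES = [
    Meal("vegetable curry", True, True, {"rice", "vegetables", "spices"}, 40),
    Meal("grilled chicken salad", False, True, {"chicken", "salad"}, 25),
    Meal("cheese omelette", True, False, {"eggs", "cheese"}, 15),
]

def choose_supper(pantry: set, minutes_available: int):
    """Return the first meal that satisfies every constraint, else None."""
    for meal in CANDIDATES:
        if not meal.vegetarian:               # one diner is vegetarian
            continue
        if not meal.low_fat:                  # another is on a low-fat diet
            continue
        if meal.minutes > minutes_available:  # time constraint
            continue
        if not meal.ingredients <= pantry:    # only cook what's in stock
            continue
        return meal
    return None

print(choose_supper({"rice", "vegetables", "spices", "eggs"}, 45))
```

Thirty lines of code, and it still captures only a sliver of what the cook does in those 30 seconds.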

 A quick note as to where this can go wrong

Simply put, computers are not people. Computers are superb for making decisions that do not require emotion, ethics, bias and the like. Eventually a computer beat a Chess Grandmaster, and it did it by sheer computational brute force. However, take the supper example: the cook knows the audience at a level a computer can’t match. For all the calculations of an algorithm, it can’t tell from someone’s face that they are the special kind of tired that a Wednesday can make them, and that putting any kind of pie down for dessert would mean the world to them. And the others would see that a pie was not only what was needed but a very thoughtful gesture, thereby elevating the cook in their eyes and making an intangible but felt contribution to them too.

The aim is to have algorithms teach themselves by learning from mistakes in order to achieve the outcome the programmer(s) desired. They try, but they are far from perfect, and because we expect perfection from computers, in a way that differs from our expectations of one another, mistakes are not easily forgiven.

Data Ethics For Business

We exist in an increasingly data driven world. More and more, we are encouraged or directed to ‘listen to the data’ above all else. After all, the data doesn’t lie. Does it?


Data Ethics in business is the name of the practice used to ensure that the data being used to make high-value commercial decisions is of the highest quality possible. However, there is a catch, and the catch is human beings. We have gut instinct, prejudices, experience, belief systems, conditioning, ego, expectation, deceit, vested interests and so on. These behavioural biases all stand to cloud the data story, and usually do.

A high-value commercial decision does not necessarily have immediate financial consequences, although, in commercial terms, a sub-optimal outcome is invariably linked with financial loss. In the first instance, the immediate effects of a high-value decision may be on organisational morale, or may have reputational consequences.


When a high-value decision is to be made there are invariably advocates and detractors. Both camps like to believe that they are acting in the service of a cause greater than themselves. Occasionally, some of the actors cloud the story because their self-interest is what really matters to them, and they try hard to mask it with the veneer of the greater good. Hence the term ‘data story’: behind the bare numbers and pretty graphics there is an entire story.

The concept of conducting a pre-mortem examination of the entire data story, to model what can go wrong, is becoming more important for senior decision makers. It is increasingly difficult to use the traditional, internally appointed devil’s advocate because, given the inherent complexity of understanding a data story, this function needs to be performed by subject matter experts. Although the responsibility for decision-making always falls on senior management, they want to exercise it with a full breakdown of the many facets of the data story.


 

In order to achieve this, individuals with a unique blend of talents, experience and inquisitiveness must be used: people with absolute objectivity and discretion, who don’t rely on inductive reasoning; who are robust enough to operate independently, diplomatically and discreetly; and who have executive backing to interrogate all the data sources, ask the difficult questions and highlight any gaps, inconsistencies and irregularities. From this they can provide a report for the Executive Sponsor(s), with questions to ask and inquiries to make, so a well-informed decision can be reached.

After all, when there is a lot at stake, no one wants to be remembered as the person who screwed up and then tried to blame the data.

Why is data dangerous?

In the words of @RorySutherland: “The data made me do it” is the 21st Century equivalent of “I was only obeying orders”. The growing power and influence of Data Science touches everyone’s lives. Sutherland also remarks: “Markets are complex and there can be more than one right answer. People in business prefer the pretence of ‘definitive’ because if you can show you’ve done the ‘only right thing’ you have covered yourself in event of failure”. These are all attempts at Plausible Deniability, and they are weak.

For the record, plain old data is not dangerous: you are unlikely to be hit by an errant Spearman’s Rho, or by a rogue control variable that has detached itself from an analysis. Data is just a record of the measurable values of something that happened in the past. Digital exhaust, if you will. Like speed in a car, it is the inappropriate use of it that causes issues.


Doing the right thing often sees people becoming enslaved to Type 1 and Type 2 data, because they are the easy parts. You can hire experts who can count well, use the software and understand how to tease knowledge out of the data points. What the majority cannot avoid doing, and some may even do intentionally, is manipulating the presentation, context and language used when presenting their findings. This is the Type 3 data I talk about, which isn’t traditional data as we know it.

Type 3 data is the really dangerous stuff. The reason for this is our complete fallibility as human beings. This is nothing to be ashamed of; it is how we are made and conditioned. It is, in fact, entirely, boringly, ordinarily normal. I was recently told by a lawyer – I mention her profession because she is pretty well-educated – that all statistics are a lie. She then cited the famous Mark Twain line (nicked from Disraeli), “There are lies, damn lies and statistics”, as if this were all the proof she required. Interestingly, when I challenged her on this and made a case for accurate uses of statistics, she refused even to acknowledge it. She was wedded to her belief, so I must be wrong. Case closed.


I think immersion in courtroom rhetoric may have been getting the better of her. However, this goes to show just how dangerous we humans can be. Imagine being a client with a lawyer whose dogmatism may cause them to overlook, or fail to question, relevant statistical evidence – all stemming from a strongly held view that all statistics are lies. Professor Bobby Duffy recently wrote an excellent book called Perils of Perception, and on p.100 he shows just how problematic this view can be.

My point is this: if a person who is well-educated and practising in a profession like law can hold such a position, then it is not beyond any of us to do so quite unwittingly – at least until we are more familiar with the behavioural biases we are all susceptible to, the way Type 1 and Type 2 data can be misrepresented (Type 3 data), and how that misrepresentation uses our in-built foibles to generate a reaction.

This is where someone who understands both of these areas, and can blend that knowledge into a useful expertise, can help you. When important decisions on strategy, direction and spending are conditional on interpreting data from others, you want to get it right first time. If not, you’ll be forced into “The data made me do it”, and that rarely ends well.



Another Meaningless Graphic: Another Meaningless ‘Fact’

Have you ever seen one of these? A classic example of an attempt to bamboozle you with utterly meaningless data.

This is from a website that, amongst many other things, promises to “outpace disruption”. Does anyone know what that means? Anyhow, here is the result of outpacing disruption.

A meaningless bar chart

This was all there was. There was no information giving context. Still, positive numbers must mean it is a wonderful investment. You can hardly fail to make a bundle!

Are you ready to part with your money yet? No? How about if you knew this dazzling fact: what if I were to tell you that this product increases checkout speed (e-commerce) by 24%? Impressed yet?

Or perhaps, after reading the first posts on The Problem With Data, you were asking things like: a 24% increase over what? How many? What period? Which currency? What language? How measured? Credit/debit card? PayPal? Amazon Pay? Stored customer details? First-time transactions? Repeat transactions? Fibre broadband or 5meg FTTC, TCP to the residence? And on and on.


Type 3 data in action. The Guardian is at it again.

The purpose of this blog is to get behind the data stories we encounter. Understandably, most commercial data is sensitive and remains unpublished. This means I have to rely on publicly available mangling of the data to illustrate the points.

The article of 11th October 2018 carries the snappy title “Profits slide at big six energy firms as 1.4m customers switch”. (The three types of data are explained here.)

I will stick to the problems with the data and not make this a critique of the article for its weaknesses alone; that would just be churlish. Read the following and imagine being presented with a document like this and having to critique its worth as something to base your decision-making on.

This article exemplifies Type 3 data so very well! It appears that the journalist started with an idea and then worked backwards, mangling what Type 1 data they had to fit the idea they wanted to transmit to the reader. To be clear: this post is not written as an opinion piece about the Guardian, but as a critique of an article purporting to use Type 1 data to support the ‘sliding profits’ hypothesis.

Before we go any further: the Golden Rule of data has been broken. You simply mustn’t decide the answer and then manipulate, mangle and torture the data to fit your conclusion. You must be led by the data, not the other way round. It is fine to start with a hypothesis and then test it against the data to see whether it holds. It is a major credibility red flag when the conclusion is actually the initially assumed answer.
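As a toy illustration of hypothesis-first working, here is a minimal Python sketch; the figures are invented, and any real analysis would need a properly designed test.

```python
# State the hypothesis BEFORE looking at the results, then let the
# data answer. All numbers below are hypothetical.
from scipy import stats

# Hypothesis, fixed in advance: mean profit margin has not changed
# between the two years.
margins_2017 = [4.1, 3.8, 4.5, 4.0, 3.9, 4.2]
margins_2018 = [3.6, 3.9, 3.5, 4.1, 3.4, 3.7]

t_stat, p_value = stats.ttest_ind(margins_2017, margins_2018)
print(f"p = {p_value:.3f}")

# Only now do we draw a conclusion -- and we accept whichever way it falls.
if p_value < 0.05:
    print("The data rejects 'no change'.")
else:
    print("The data does not support claiming a change.")
```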


For an apparently business-focused article, it is rather worrying that the journalist obviously doesn’t know the difference between profit margins and profit¹. These are two distinctly different ideas, yet they are used interchangeably in the piece. Red flag number two (if the first wasn’t enough). Paragraph five manages to combine the margins of two companies with the profits of another and then – completely randomly – plugs in (excuse the pun) an apparently random reference to a merger and the Competition Commission.

Terms like ‘the Big Six’ are used, but nowhere does the author bother to say who the Big Six are. Whilst it is a moderately common term, it cannot be assumed that everyone knows who they are. This is sloppy reportage and another red flag for the reader: sloppy here, sloppy elsewhere, who knows? This is back to the Type 3 issue of how the data is presented to you. In this case, so far, very poorly.

The energy market regulator, Ofgem, is cited as the source for the first graphic. The Y (vertical) axis is numbered with no qualification, and the date and document this is taken from aren’t mentioned: Type 1 data being mangled by Type 3 data. Overall, poor sourcing and not worth the bother. You can dismiss graphics like this, as you can reasonably assume each is a form of visual semiotic designed to elicit a feeling, not to communicate any reliable Type 1 data. (Note profits and profit margins even being conflated in the graphic title!)

Poor graphic designed to mislead – taken from the Guardian article.

 

The final critique is the one that speaks to the concept of Type 3 data. The language used is a blatant attempt to skew the article away from straightforward reportage about how the entry of challengers into the marketplace is affecting the profits, and profit margins, of the established players. I think the subsidiary point is that consumers aren’t switching suppliers as much as expected. I had to read the article several times to distil those as the most likely objectives of the piece.

Finally, if you re-read the article and just look at the tone and, more specifically, the adjectives used, you’ll be surprised. What I can’t work out is the author’s agenda. Simply reporting such a muddle of data is one thing, but most of the popular press has an agenda of some kind.

NB: I really hope the Guardian doesn’t keep gifting us such poorly written articles. I think I may look at the coconut oil debate next!


What is Type 3 data and why is it so important?

A simple enough sounding question, though something that is quite contested. I propose that we need to look at three distinct subsets of the concept of data. You’ll see in a moment why this article isn’t a technical explanation of data in statistics; for that (and it is necessary), this is a super post that explains it.

This article is intended as a guide to help you categorise the data that is presented to you in the course of a day.

Type 1 – This is ‘just’ the hard numbers.

By this I mean just what you imagine: the figures that get plugged into SPSS, Stata, R, SAS and the like. How these are analysed determines the output. It is necessary – and can be mind-numbingly boring, I know this as I’ve had to do it many times! – to check how any of the variables may have been re-coded, re-weighted and then analysed in the data-management components (.do files, syntax files etc.) of the popular stats packages. [Why isn’t Excel listed? I asked my ex-supervisor, a professor who specialises in this stuff. He politely guffawed and told me that it isn’t a ‘proper’ statistical analysis program. Once the heavy lifting has been done, the results may be exported to Excel, as that is what the majority of people are used to seeing.]
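As a minimal sketch of the kind of re-coding check described above (using Python and pandas rather than a .do or syntax file; the column names and bands are invented for illustration):

```python
import pandas as pd

raw = pd.DataFrame({"age": [19, 34, 52, 67, 41]})

# A typical data-management step: re-code a continuous variable into bands.
raw["age_band"] = pd.cut(raw["age"],
                         bins=[0, 24, 49, 120],
                         labels=["16-24", "25-49", "50+"])

# Inspect the recode before building any analysis on it: a wrong bin
# edge here silently distorts every downstream result.
print(raw.groupby("age_band", observed=True)["age"]
         .agg(["min", "max", "count"]))
```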


Type 2 – This type of data is the so-called softer numbers.

The first type of data is useful for analysing the patterns of turnout in an election, the way different materials on an aircraft fatigue, how people move through a supermarket, and so on. Type 1 relies on quantifiable and easily measurable variables (ones that can be converted into a numerical value for analysis): one step right, turn right, two steps at a 40-degree angle, over a nine-second period, and so on.

Type 2 data is an attempt to record and analyse human emotions and behaviour, and sometimes to capture the strength of intent to do or not do something. We have all been asked things like, “How did that make you feel? Please rate your reply from Very Unhappy, Unhappy, Neutral, Happy to Very Happy.” This is the classic Likert scale.

Stop, though. Have you considered whether Semantic Differential Scales were used instead? Perhaps a mixture of the two, or two different data sets derived using different assessment methodologies? These too can be plugged into the stats programs and analysed. The trickier thing here is the element of subjectivity: is my Very Unhappy the equivalent of your Very Unhappy? The way this effect is mitigated is by large-scale testing, which settles on a happy medium by drowning out the outliers. Hence, be very wary when a small sample size is used to generate an indication of feeling or intent.
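To see why small samples deserve that wariness, here is a minimal simulated sketch in Python; the 1–5 coding, weights and sample sizes are all invented for illustration.

```python
import random

random.seed(42)
SCALE = [1, 2, 3, 4, 5]               # Very Unhappy ... Very Happy
WEIGHTS = [0.1, 0.2, 0.4, 0.2, 0.1]   # assumed 'true' population mood

def mean_response(n: int) -> float:
    """Simulate n Likert responses and return their mean."""
    responses = random.choices(SCALE, weights=WEIGHTS, k=n)
    return sum(responses) / n

# A handful of respondents can swing wildly; a large sample settles
# down towards the true mean of 3.0.
for n in (10, 100, 10_000):
    print(f"n={n:>6}: mean = {mean_response(n):.2f}")
```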


Type 3 – And this is where it gets hazy and interesting!

Type 3 data is the way in which data is framed and presented to you. This may be in a newspaper, an internal report or perhaps a sales presentation; they are all trying to sell you something. The data and analysis may be wrapped in a manner that enhances the credibility and believability of the package, or you may be being steered away from robust data because it doesn’t fit with someone’s agenda. Either way, you are being encouraged to buy in to a point of view, and the ‘data’ is being used in an effort to burnish the idea.

Visual semiotics that speak to far deeper parts of our brain are often cleverly employed. You already know what these are: the graphs, symbols and pie charts, as well as the tangentially relevant accompanying images. See the recent post on the mangling of data by the Guardian newspaper – the image of the white police officer discharging a taser directly towards you – for an example. Creative ‘affect labelling’ (the process of putting feelings into words) of selected characteristics of the data, certainly the ones towards which your focus is being directed, is influential too. The latest research techniques have allowed scientists to show that this happens, however much you may think you can override such feelings.


Although Type 3 data is all about the way in which the data is framed, it isn’t numbers in the traditional sense. It is the third part of the package. Even when correctly produced and analysed, Type 1 data is completely susceptible to the influence of Type 3 data, as is Type 2 data.

Type 3 data is the processing, packaging and presentation of the digital exhaust that makes up Types 1 and 2. It matters because it mediates between the raw numbers and us unpredictable humans: slaves to our emotions, with all our psychological foibles and weaknesses hidden just below the surface. As such, Type 3 data should be afforded just as much significance when analysing any data that is presented to us.


Let’s break down today’s bad data usage – Yes Guardian newspaper, I mean you!

Overnight, The Guardian published a report entitled “Met police’s use of force jumps 79% in one year”. I saw the hysteria on Twitter – whipped up and added to by the usual suspects who revel in the dog-whistle approach to political discourse – about force being used disproportionately by the Metropolitan Police against black people.

“The Metropolitan police’s use of force has risen sharply in the last year, with black people far more likely to be subjected to such tactics than anyone else, the Guardian can reveal.”

Firstly: this is not an attempt to take sides. The police may be guilty of the accusation; without correct and fair analysis of the data it is impossible to tell. See a previous post about how to approach stories like this.

Secondly: the purpose of this article is to interrogate the findings of the Guardian’s reporting of this story. If this undermines the story then so be it. Do not conflate that with an endorsement of the police in London, for I do not know enough to comment about them. This is about the use of data.

The main thrust of the article is the “79% in a year” claim. It is what has been seized upon and retweeted with vigour. Nowhere does it appear that the people getting all worked up over this selective quote have actually looked into the data.

“On 39% of occasions in which force was used by Met officers in the first five months of the financial year, it was used on black people, who constitute approximately 13% of London’s population.”

The first thing that struck me about this piece was the language and imagery used. Whilst the language is not the data, the way it is used certainly alerts you to the possibility that details are being glossed over in the pursuit of shock value. The Guardian is (was?) a credible broadsheet with a left-of-centre bias. Nevertheless, now that they are giving away their content for free, they seem to be leaning towards a ‘clickbaity’ style of reportage, and that is a pity. Look at the graphic they have used. It is fairly emotive stuff: a white man pointing a weapon at you.


In the first paragraphs the article uses words and phrases like, “jumps, risen sharply, most likely, on average, approximately, raised alarm, receiving end, stark figures, police culture” and the like.

These are written efforts to engage the enraged-response part of our brains (metaphorically speaking) rather than the rational, analytical part: the System 1 reaction rather than System 2, as Daniel Kahneman calls them in his book Thinking, Fast and Slow.

Arguably, the alleged disproportionate use of force by police officers against black people is so serious an allegation that it warrants slowing down, taking a deep breath and analysing the data correctly.

Let’s break down the critical analysis a bit by asking some questions, and making some observations.

  • The only reference to the data used is “Guardian analysis of official figures“. This alone should set the loudest alarm bells ringing in your head and put you into sceptical-analysis mode. What figures, analysed by whom, with what expertise, with what controls, compared to what, and is there (perish the thought that a journalist is anything other than scrupulously impartial) an agenda on the part of the presenter of these figures? [I think I may have found the data being used; see the bottom of the article for links. It certainly isn’t acknowledged in the article. That might lead a cynic to wonder whether it is being taken out of context, and whether the journalists don’t want this easily checked for fear of undermining their credibility.]

 

  • Many people think of the word ‘black’ as interchangeable (perhaps incorrectly) with ‘people of colour’. It turns out that the Guardian even mentions that ‘Asian’ and ‘Other’ are not part of this classification.

 

  • Are the figures based on the ethnicity initially recorded by officers using the radio codes, or on the self-defined ethnicity codes chosen by the subjects (the 16+1 versus the 9 classifications), even if these differ from the officer’s assessment? It doesn’t say.

 

  • In the last paragraph of the piece the most convoluted attempt at figures appears: we witness the groups ‘Asian’ and ‘Other’ being rolled together to make a ‘52%’ claim sound more shocking. They need to decide how they portray things and stick to it. Earlier, the paper excluded ‘Asian’ and ‘Other’ from the ‘Black’ category, letting them sit outside along with ‘White’, in order to use the five-month, 79% figure on which the outrage is based.

 

  • There is no indication of whether these figures are split between reactive policing (responding to calls from the public) and proactive policing (officers seeing something they decide to investigate further). Proactive interventions are carefully considered by officers; they rarely steam in like you see in the movies. They weigh things like back-up availability, whether they are single-crewed (and far more vulnerable), competing priorities such as previous calls, outstanding paperwork (yes, really, there is a lot), their caseload and so on. Proactive policing is where a racist would shine, as they would be able to target black people if that were their aim. From there they would need to engage and at least claim a veneer of credibility for their choice to use force. That wouldn’t last long, as everyone would need to be in on it, and these days that is very difficult.

 

  • Debra Coles from the charity Inquest is reported as saying: “This also provides yet more evidence about the overpolicing and criminalisation of people from black and minority communities. It begs important questions about structural racism and how this is embedded in policing practices.” From other remarks in the article, made when the Metropolitan Police were approached for their view, it sounds as if, after losing some 20k officers, the police are rarely proactive and mostly reactive. If only they had the time and resources to ‘overpolice’ anywhere.

 

  • What if – and I am trying to steer away from political and social commentary here, for that is not my intention – the police respond to more incidents in places where there is a greater proportion of black people? If X amount of interactions involve use of force, then it stands to reason that the use of force against black people is more likely there. There is no doubt there is historical antipathy towards the police amongst much of the black community, especially in London; previous generations of the Met (and other forces) were not known for their even-handed approach towards the black community. Young men (for it is predominantly males) in groups often feel that their masculinity is being challenged if an authority figure like a police officer lawfully requires them to do something. What if this leads to more physical resistance, which in turn leads to force having to be used? What if white people, Asians and Others are more compliant when dealing with the police? What if, what if, what if? The fact is that these figures do not seem to be presented in a holistic manner – by which I mean controlling for variables such as age, gender, location, time, weather, changed police priorities, changed dynamics of interaction due to cuts in resources and so on.

 

  • The phrase ‘use of force’ is misused by the journalists and politicians. The police use a very specific definition, and it is not what the ordinary person thinks it is. A voluntary handcuffing is a use of force – you know, the kind where the officer says something like, “for my own protection I am going to handcuff you”, and the subject complies; perhaps a single-crewed female officer arresting a large male and having to drive him to custody herself. Merely drawing Captor (CS) spray must be recorded as a use of force, even if no one was sprayed and the situation calmed down; the same goes for drawing a baton. At the other end of the scale, force is also shooting someone dead. The definition of force is wide. Force, in police recording terms, does not have to mean taking a suspect to the ground in a violent bundle.

 

  • The whole method of recording has changed, a fact the paper skips neatly over – too complicated to explain, I imagine. The simple fact is that comparing new figures, generated and recorded one way, with a past in which they were not recorded the same way, if at all, is simply invalid. It is far too soon to tell.

 

  • The politician David Lammy MP, famous for trying to whip up stories like this to create indignation – I say this merely because he is a public figure who regularly tortures data, or chooses to use tortured data – betrays a lack of understanding when he talks about the criminal justice system and the police. The police in London are merely one small part of this national system. Claiming there is systemic racism at each stage of the system, in a piece targeted at the police in London, smacks of trying to score wider points and is not, in my opinion, worthy of inclusion; it weakens any point being made. It is good to have Lammy on board for a bit more clickbait appeal, though. He has a large Twitter following and retweeted the article almost immediately. Surely not because he is mentioned in it.

 

  • As Matt Twist of the Met Police said, the figures should not be compared with population demographics: “The collation of these figures is still in its early stages, and as this is new data, there are no previous benchmarks to compare it to. Therefore any conclusions drawn from them must be carefully looked at against this context, and should only be compared with those individuals who have had contact with officers, rather than the entire demographic of London.” You may think he is a police stooge, but that does not make his statement incorrect.

 

  • The paper even says it is comparing FY 2017/18 to FY 2018/19 – that is, 6th April 2017 to 5th April 2018, and similarly for 2018/19. This is important because the new recording system was introduced from April 2017. The data being quoted is April to August 2017, and it is being compared with April to August 2018. What happened to the seven months in between? Do they show a steady rise instead of a jump? Has anything else changed in this time? For example, the new system may not have started well and overlooked items, or officer engagement may not have been what it should be, resulting in pressure from Borough Commanders downwards to record more accurately – leading to an apparent jump in incidents when it is actually a rise in adherence. Just ignoring a seven-month gap is concerning. Why? An oversight, or intentional?
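To show how plausible that adherence effect is, here is a minimal Python simulation; every number in it is invented purely for illustration, not taken from the Met’s data.

```python
# If recording adherence rises while true incidents stay flat, matched
# April-August windows still show a large apparent 'jump'.
TRUE_INCIDENTS_PER_MONTH = 1000  # assume no real change at all

# Hypothetical adherence: 60% of incidents logged at launch (Apr 2017),
# climbing to 95% as the new system beds in.
adherence = [min(0.95, 0.60 + 0.025 * m) for m in range(17)]

recorded = [int(TRUE_INCIDENTS_PER_MONTH * a) for a in adherence]

apr_aug_2017 = sum(recorded[0:5])    # months 0-4   (Apr-Aug 2017)
apr_aug_2018 = sum(recorded[12:17])  # months 12-16 (Apr-Aug 2018)

rise = 100 * (apr_aug_2018 - apr_aug_2017) / apr_aug_2017
print(f"Apparent rise: {rise:.0f}% (true incident count unchanged)")
```

Under these invented numbers the ‘rise’ comes out at around 44% without a single extra incident; the point is not the exact figure but that improved recording alone can manufacture a headline jump.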

 

Images are worth a thousand words, and misleading images are far more impactful than poor descriptions. I reproduce this one because it is a howler of a poor and misleading graphic. The article is using the financial year for measurement, hence the mention of 2019 whilst we are in 2018. Laying out the same images of London side-by-side implies that a comparison is about to be demonstrated. However, the left-hand image uses Westminster and mentions five other boroughs, none of which are referenced in the right-hand image. This makes the image of questionable value, other than further devaluing the article’s credibility. The source attribution should say that this is where the data was sourced, not present it in this way: it rather implies that the graphic itself is from the Met Police. It isn’t.

I think this was rather twisted to produce the graphics: Met Police Use of Force information.


Interestingly, the Met Police data comes with a caveat, albeit buried on the third tab of their use-of-force stats page and not linkable by URL. It explains areas where the data may be misinterpreted. The journalists don’t bother to tell us whether they have taken this into account. We’ll never know.

So you can see that this article – like many stories on many topics – is riddled with inconsistencies. Personally, I just dismiss it, because it hasn’t got some of the basics right. It may be speaking a degree of truth, but that truth is devalued by the poor presentation.

Data is fine. Data is useful. Data is just digital exhaust. Data without context is just numbers and means nothing.

As I said previously, “Data only helps you take a problem apart and understand its pieces. It is not suited to put them back together again. The tool needed for that is the brain.”

Try to dissect stories quoting numbers, be they in the press or in a commercial claim made to influence your actions.

 

Here are some likely data sources for the story, and for you to use when reading these types of stories that lean on numbers to give credibility to their assertions:

UK Government crime statistics

Metropolitan Police data

Office for National Statistics

UK Data Service video

 

PS: Anecdotally, I have known many types of officer: from a 6’3″ Senegalese immigrant who started as a PCSO in London and is now policing rural Oxfordshire, to short, white, born-and-bred Reading natives who police their home town. They vary in their attitudes and actions because they are people. The huge majority want to make their communities better places. I have no doubt that amongst them there are a few racist thugs, albeit a tiny and ever-decreasing number – a bit like regular people, I suppose.

 

PPS: There may be typos. I try very hard to proofread. I am a terrible typist though.

 

Uninhibited By Experience

Being a “Data Something or Other” seems to be the new thing. These are the neophytes who can often be really smart but are uninhibited by experience. I have been away for five years in academia and, unsurprisingly, little has changed. This article is a discussion piece: not about the management of data in an organisation, a topic far beyond my expertise, but about the interpretation of data.

Having worked in the IT and consulting space since the early ’90s, I can reflect on some of the trends that were – in the breathy and slightly conspiratorial tones used by the advocates, as if they were sharing the code to Fort Knox with you and you alone – a ‘new paradigm’. Some of the many terms I recall are: Dotcom Boom, Knowledge Management, CRM, eBusiness, eCRM, ERP, Web 2.0, Network Computing, Distributed Computing, Big Data and the like. The challenges in business remain the same: the ever-delicate balance of growing profit whilst increasing operational efficiency.

It is a truism to say that everything changes but everything stays the same. Moore’s Law ensures that the power of computers and what we can do with them remains incredible, frightening to some and ever changing. Take data: this is a gnarly topic indeed. Data this, data that, chop it, model it, torture it, misuse it and then base critical choices on it. Critical choices that can make you or cost you money.


The key thing to remember is this:

Data science is not the arbiter of truth. We need to translate it in a much broader societal context, and when we do so, we start to understand that data only helps you take a problem apart and understand its pieces.

It is not suited to put them back together again. The tool needed for that is the brain.

I think of data as the digital exhaust (Hristova et al) of something – a company, a market, a society. It was always there but only relatively recently have we managed to combine capture and analysis with such powerful computing abilities. We are like kids with a new toy, and many people seem to be seduced by the clever tech. Yes, it is really cool. It isn’t the new reality though. It is a new way of putting a lens to the existing reality.


With a knowledge of what has happened, we want to know what will happen. Predicting the future IS indeed the Holy Grail, I get it! Whilst future prediction works well for closed systems, it isn’t so great when you put people into the mix. Understanding the people element is crucial. Tricia Wang talks eloquently about the human insights that are missing from Big Data. In fact, there are many excellent TED talks on data.

What troubles me is that they all tend to be siloed, or from a single perspective. Ethnographers look at everything through the lens of their ethnographic training; in commerce the same holds true for economists or psychologists. With rare exceptions, academia is the same. Practitioners in a single discipline view everything through the very particular lens of their area of expertise. I have seen experts ‘pooh-pooh’ any sort of interdisciplinary approach, mostly because it would mean stepping out of their narrow, but very knowledge-rich, comfort zones.

I was incredibly fortunate to have as supervisors, for my undergrad and postgrad work, two professors who didn’t start their academic lives in the fields they ended up specialising in. That meant they brought an incredible richness and diversity to their advice. Consequently I looked at the tasks I had through a much wider variety of lenses, as I do to this day.

Data needs contextualising. That richness, indeed ‘thickness’, that Wang refers to is crucial. Understanding how and why people behave the way they do should not be determined by a psychologist or an ethnographer alone, lest every issue be seen as a psychological or an ethnographic one. That is natural; it is what they are good at. But having a historical, sociological and political perspective too can only serve to make your analysis richer and thicker still.

‘The numbers’ are presented as the irrefutable everything; how they were arrived at, not so much. How the data is collected is absolutely crucial. The process of research that generates the data is so often skipped over, because it is a very hard thing to do well. Without sound research methodology, all the rest of the output falls apart, as it simply isn’t reliable.

Qualitative, quantitative, question style, face-to-face, postal, Internet, the recruiting pool, the profile of the respondents (age, gender, ethnicity, sexuality, religion, income etc.), the time period over which the research was conducted, the sample size, significance, p-values, Spearman’s Rho, dummy variables, independent and dependent variables, the dropouts, researcher bias/influence, the question design (or was it merely observation by the researcher?), consent, the original intent of the research, transparency, analysis methods, software used, hypotheses, the null hypothesis, incentives, the original data set, repeatability, and, and, and…


That is just the start of the researcher’s lot, but it ought to give you clues as to just how much data can be misinterpreted, misconstrued, misreported and misleadingly presented in the hands of non-experts.

If you are relying on the use of data to back up your experience, judgement and a willingness to take risks (as I am sure Apple did when it launched its first iPhone) then you may want to consider that data is more nuanced than many imagine.

If you want to get the most from your data and make the best decisions, you need a lot of different people with different skills, or fewer people with a wider range of skills. However, there are not many arch-generalists out there with specialist knowledge and experience. When you are deciding what to do with the latest bit of insight you have been presented with, you’d be well advised to seek them out.

With thanks to Rob Briner – this sums it up perfectly: the politician’s fallacy.


What to do when you hear figures being quoted

1 – Breathe deeply; do not allow your mind to buy into the catastrophising style of reportage.

2 – Remind yourself of the following:

When trying to make news by reporting a change in something, the media love a bit of added ‘wow’ factor. It is how they justify something as worthy of inclusion, hold your attention, and make themselves seem credible and in the know.

3 – Ask yourself three easy questions:

a – Over what time period is ‘this’ being measured? ‘Over the last year’ may just mean that last year was very low; looked at over, say, five or ten years, would the spike or plunge be a tiny blip in a consistent trend? (See the sketch after this list.)

b – How was the data collected, analysed and interpreted? You’d be amazed at how often the most fundamental flaws begin right back in that process. Most people do not know how to do research that ensures accurate, reliable, repeatable results.

c – Does the person or organisation who published this have an agenda? Most likely they do. Whilst in and of itself this is not a ‘bad thing’, it may help explain bias.

4 – Decide that if you really want to know more you will sit on this gem of reporting and look into it further. Ask me, or ask someone who knows how this can and does happen, and suspend your horror for the moment.
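Here is question (a) in code form: a minimal Python sketch with invented figures, showing how a dramatic year-on-year headline can sit inside an unremarkable ten-year trend.

```python
# Hypothetical yearly counts: flat for a decade, with one low year.
yearly_counts = [103, 98, 101, 105, 99, 102, 100, 97, 70, 98]

last_year, this_year = yearly_counts[-2], yearly_counts[-1]
yoy_change = 100 * (this_year - last_year) / last_year
decade_mean = sum(yearly_counts) / len(yearly_counts)

print(f"Year-on-year change: +{yoy_change:.0f}%")  # headline: "jumps 40%!"
print(f"Ten-year average:    {decade_mean:.0f}")   # reality: a normal year
```

The ‘40% jump’ is real arithmetic, yet entirely an artefact of the unusually low previous year.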


There are almost endless layers of detail. I am not a technical wizzo. I have used some analysis tools and understand (broadly) how they work. I have been trained in research methods and am very happy to ask the questions required to unpick these sorts of issues. More importantly, I have a network of experts to draw upon, for no person is an island.

I am assuming that you, reading this, are not one of those people on a hair-trigger to be offended or outraged, whether personally or on someone else’s behalf. Those people don’t actually want to know more than the headline shocker.


Data does not equal wisdom

It is natural to both fear the unknown and to feel the strong desire to allay that fear. After all, lack of insight and wisdom in both business and life can bring the best plans crashing down.

Big Data, Small Data, Thick Data, regression analyses, log analyses, control groups, p values, and significance – in this modern world of news, fake news, and endless statistics, we are constantly presented with numbers that are designed to give information an instant gloss of credibility. People often try to burnish their claims by saying things like, “scientifically proven”, “you can’t argue with the numbers”, “if you can’t measure it then it isn’t true”, and so on.

But the simple fact is that it is not that simple. There is something quasi-mystical in numbers, which makes them both instantly trustworthy and the perfect tool to bamboozle people. The trick is to look behind the numbers and understand what is being measured and how. Furthermore, some things, especially anything to do with human beings, are not easy to measure with ‘conventional’ statistics. For instance: how do you measure the strength and intensity of a feeling or an intention? It is not like calculating the re-entry criteria for a spacecraft, for physics doesn’t have feelings.

Data to Insight pyramid
Being at the top of the intellectual food chain can make us believe that we are best placed to see into this unknown, exploiting data to see what is really happening in the world around us. This belief is powerfully seductive. The solutions being sold to us prey not only on the fear of the unknown but also on the seduction of knowing. The mixture of loss aversion allied to the availability heuristic, marketed to a worried audience, often causes people to grab at the passing offerings in the belief that the silver bullet is in there somewhere.

As ‘mere’ beings we easily fall prey to the idea that we are masters of our universe; we use technology in the hope that it will allow us to control what we want to control. But the problems we face in exerting control don’t come from the technology. They come from us. We are blinded to our own fallibilities and mistake output for insight. We can get captured by the belief that the latest tech provides the truth, and is a legitimate insight into the future. The desire to believe this can often lead us to distort the data to fit our assumptions, and inevitably this also produces a distortion of reality. Famously, Nokia was warned by an ethnographer, using meticulously collected Thick Data, that the smartphone was coming. They insisted that this person was wrong because, they said, the information did not ‘fit the data’. We all know what happened to Nokia. Nokia who? (Credit to Tricia Wang for the Nokia story)

The analysis of data is a lagging indicator; it involves measuring the past, interpreting that past, and trying to predict the future, and that is a tough challenge. We conflate what we see with what we understand and how we think it should be. There is a reason that investment products carry the dire warnings about past success being no guarantee of future performance.

There is no doubt that we are much better than we were at incorporating such ‘soft’ characteristics into measurement metrics. However, it is not as easy as a pure data-science approach. Building effective tools (algorithms) requires a much more nuanced and wider understanding than that given by a blinkered approach which fails to incorporate Thick Data. This can only really be done by a multi-disciplinary team of people whose skills might include behavioural science, ethnography, sociology, political science and psychology, to name just a few. Mathematicians, statisticians, data analysts and programmers are certainly necessary, but it shouldn’t stop there.

It is often said that people are at the core of a business. Whether they are the customers or the staff, they are people, not machines. Knowing what people do is one thing; knowing why they do it is more important still. That requires much more information than merely what is being done and by whom. This is the context that only Thick Data brings.


Human beings do not act rationally and, famously, they will lie like mad to researchers! This has been shown in many studies about the difficulties of conducting studies on people. Additionally, people can rationalise their actions in a way that they are happy with. Knowing how social, societal and environmental factors influence the numbers is a step towards the sort of understanding that might have saved Nokia. For modern business leaders who rely on data to inform their decisions, it is critical to understand the context of actions and the intentions that underpin them.

If you take the blinkered approach offered by an IT package and believe that a magic software tool will allow you to predict the future, then I would suggest that you are falling prey to unconscious bias. When that happens, you find things like the following flipchart starting to seem credible. In fact, I’ll wager that something similar was seen in Nokia shortly before they were wiped off the commercial map.

Think Rhino

Wisdom is understanding the limitations of the numbers alone, however they are crunched. Wisdom in business is understanding that it is not weakness to embrace wider ideas. Wisdom is strength, and it does not come from data alone. Ultimately, wisdom comes from within, but the insight and context makers should be part of the mix.

If you are struggling with a business problem and you suspect that a deeper understanding of how data works would be valuable, then call me for a chat on Skype (domshadbolt) or click here to email me.