What Are You Really Worth?

I do mean really, can you really put a value on your being, your presence on this planet?  For starters, define value. I imagine that it is quite a different figure to that which you are paid, the increase you are going to carefully negotiate for, cometh the pay/performance review. What have you added to your company’s bottom line? Really, how can you express your actual hours of toil to yourself, your family, the shareholders and so on?

Instead, try this: what would a stranger pay for you?  This throws up all sorts of quite deep questions. Perhaps I mean the value of your life, a binary live/die scenario. What value does your life hold to a stranger? Why should they invest their money in your preservation? What is the bottom line for a stranger if they do not have an emotional investment in your continued existence? Perhaps that stranger is just a middle-man and your worth to them can only be expressed in what another third-party will pay for you, regardless of what the final owner of you does with/to you. The more I write the more it sounds like a people trafficking scenario being described in an article on a site devoted to understanding data.

You are worth nothing. Your personal data is worth everything. You are not the customer, you are the product.

Nevertheless, my friend Nick brought the following to my attention:

Future value of data

Image credit: PwC (a publication of some sort) 2019

It turns out that ‘experts’ have predicted the estimated (now there is a get out of jail free word when used in stats/studies) value, not of life per se, but more of a person’s worth. A worth that only some people/organisations will value.

Another shock: Headline grabbing bar charts with bold colours and zero bloody context around them. I get told off occasionally for worrying about trivia like this. My reply is that it isn’t trivia, it is e v e r y t h i n g.  This is clearly a graphic designed to show thought leadership of some description and therefore imbue the reader with a warm feeling that they are in the hands of ‘experts’ who ‘get’ this kind of stuff and that said experts are the ones to choose to help shape your organisational vision for the next millennium. You’ll be at the bleeding edge of thought and stand to leapfrog all your competitors in a trick where you simultaneously disappear in a puff of smoke and hit the ground running towards a new and lucrative market enjoying an unassailable lead. If, of course, you employ the genii at said group of thought leaders proffering such a compelling image of the future.

Wouldn’t it be interesting to know how these figures were arrived at? Why is a US citizen worth three times that of their European cousins? What was measured, what was controlled for, what was the working hypothesis (apart from baffle the punters with smoke, mirrors and a pretty chart?), when was the analysis conducted, what was excluded and why, how was it analysed, can we have the raw data ourselves please, what data, how is value computed, what markets will pay that, will some pay more or less? And so on…

Here is a little test you can run yourself. Call up a software/hardware firm or management consultancy and see if in a ten-minute chat they can refrain from using the words: Big Data, Blockchain (a new one getting traction), AI, Algorithm, paradigm (falling out of favour these days, I guess the era of New Paradigms has come and gone) or cloud. My guess is that at least four of the six will crop up. Just saying.

This raises the question: how can we harness this apparent worth and charge for it? Perhaps there could be some charitable models developed around this?


Red Flags & Sacred Cows

Here follows a cautionary tale. I name the culprit, not because I have an axe to grind, or because the case is unique, but because it suits the example being made.

To repeat other posts on here: when someone starts quoting facts and figures at you and citing studies, it is entirely reasonable – and very sensible – to ask some probing questions. The figures are usually being used to sell you something: an idea, credibility, services to fix a problem the provider of the figures has conveniently identified (at a price, naturally), or simply support for their existing position on a topic.

This entire topic is made much more challenging when very emotive subjects are being commented on. Race, Gender, Diversity and Inclusion are today’s Sacred Cows. These topics seem to make many people uncomfortable, even as they try to appear just fine with it all. They often deal with this by saying nothing, thereby keeping their heads below the parapet. An unintended consequence of that lack of enquiry is that statements about the Sacred Cow go unchallenged.


Twenty years ago there were few, if any, consultancies offering to help companies address issues that can arise as a result of various forms of discrimination. Many of today’s practitioners seem to think that positioning themselves as experts in the field puts them beyond reasonable criticism and examination. Please can someone help me understand why expertise should elevate anyone beyond reasonable scrutiny?

A big problem with Sacred Cow topics is that any criticism of anything to do with them – in this case, the use/misuse of data – is treated as tantamount to trying to undermine their very raison d’être. It isn’t at all; it is all about the data. Data doesn’t care about any of these issues. To conflate the two seems like a tactic to draw one’s eye away from the data and shame you into ceasing with the questions.

Where you should have a problem is when data is used to misrepresent issues. Whether intentional or not, the mishandling of data can make problems appear very different from what they actually are. A simple example is in the analysis of raw data: if certain variables are not measured during collection and then controlled for during the analysis, the findings can be skewed. Another is when data collected in a specific area produces results that are then remarked upon and treated as a general finding, with no qualifications added to them.

Back to the Red Flags though. The fact that it is a sensitive topic should not prevent you from asking about the provenance of the data. If someone clasps their hand to their mouth and asks how you could possibly question a respected pillar of the industry, sometimes an author etc., then remind them about speaking truth to power.

Recently, I saw a post on LinkedIn from one of the founders of Pearn Kandola LLP, which read:

“A third (32%) of people who have witnessed racism at work take no action, and a shocking two-fifths (39%) of those said that this was because they feared the consequences of doing so*. If our workplaces are to become genuine places of safety, it’s vital that the government acts quickly to curb the use of NDAs to hide instances of harassment, whether it be racist, sexist or otherwise. #RacismAtWork #UnconsciousBias”

*According to our own research at Pearn Kandola LLP

All well and good on the face of it. Nothing wrong with citing your own research, providing you can back it up. I was interested to learn more, so I asked whether the research was published, what the sample size was, and where and when it was collected. There has been no reply. Judging by many of the comments, the post has been accepted without criticism or interrogation, a worrying indication of a lack of critical thinking. Another red flag when data is being reported is the use of words like ‘shocking’. I can only imagine this is to try and increase click-through. It detracts from the data and sounds more like a Daily Express ‘weather armageddon’ type headline.

Sacred Cow

If the data is robust they ought to be delighted to publish it and open it up to examination. After all, if it is robust enough to underpin public claims that are made then there is no reason why it ought not to be open to examination by a third party.

To question data means that you are thinking. Whatever the topic, there should be no Sacred Cows, especially not the data.

Why Doesn’t Big Data Always = Good Data?

The Data Scientists out there will sigh, feeling they have heard this a thousand times before. However, it is human beings that are the issue. Numbers are just numbers; it is what we humans do with them that causes the trouble.

Very quickly then; this is the correlation and causation argument writ large.

But it must be true???

Can you see the issue? On the face of it, it makes sense. I prefer the elegance of expression of the original description: post hoc ergo propter hoc. Merely acquiring more and more data points, a bigger data set, better hardware, software and human expertise to manipulate the data does not equal better results from the data.
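To make the point concrete, here is a toy sketch (entirely invented for illustration, not from any real data set): two series that never influence each other, but both track a third, lurking variable – the season. The correlation is near perfect; the causation is non-existent.

```python
# Toy illustration of correlation without causation: two series driven by
# a lurking third variable (the season), not by each other.
def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no libraries needed."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

months = range(12)
ice_cream_sales = [20 + 5 * m for m in months]  # both rise with the weather...
drownings = [3 + 2 * m for m in months]         # ...not because of each other

r = pearson(ice_cream_sales, drownings)
print(round(r, 3))  # -> 1.0, a perfect correlation, yet zero causation
```

A correlation coefficient of 1.0 here proves nothing about ice cream causing drownings; it only reflects the shared driver hiding behind both columns.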

Big data is great and powerful when it is clean and accurate. But… pause and think: before plunging into the analysis and insight phase, the cleaning and tidying phase – the boring stuff that is so often skipped past – needs to be complete. The crazy outliers need to be identified, partial data from one source needs to be investigated, in the case of human surveys the ‘don’t know’ answers may need to be coded out, and so on.
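A minimal sketch of that boring-but-vital tidying step, using made-up survey data (real pipelines would reach for pandas or similar, but the principle is identical): code out the ‘don’t know’ answers, and junk the sentinel values that legacy systems love to hide in a numeric column.

```python
# Hypothetical raw survey responses on a 1-to-5 scale, as they might arrive:
# honest answers, 'don't know' entries, and a 999 sentinel for "missing".
RAW_RESPONSES = [4, 5, "don't know", 3, 999, 4, 2, "don't know", 5, 3]
SENTINEL = 999               # a common legacy convention for missing data
VALID = range(1, 6)          # the genuine 1-to-5 scale

def clean(responses):
    """Drop 'don't know' answers, sentinels and out-of-range outliers."""
    cleaned = []
    for r in responses:
        if not isinstance(r, int):   # code out the 'don't know' answers
            continue
        if r == SENTINEL or r not in VALID:  # identify the crazy outliers
            continue
        cleaned.append(r)
    return cleaned

tidy = clean(RAW_RESPONSES)
print(tidy)                   # -> [4, 5, 3, 4, 2, 5, 3]
print(sum(tidy) / len(tidy))  # a mean that is only meaningful after cleaning
```

Skip this step and the 999s alone would drag the ‘average satisfaction’ into fantasy land, which is exactly the junk-in, junk-out problem.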

There are a variety of ways to allow the Data Scientists to do this, but the heart of the matter is that if they are not given the time, tools and budget to do this then you are back to the junk in, junk out scenario that affects everything to do with computers.

As humans we are programmed in ways that really hamper us. This is especially true when we are operating outside of our field of expertise or are very out of date regarding a subject matter area. Our brains crave clarity and simplicity, and we avoid the unknown as that is where danger may lie. We want to make as smooth and risk-free a transit through life as possible. Because of this, the best and the brightest can suddenly become very credulous and succumb to deep-seated fear and prejudice. This propensity feeds the behaviour of some: they are told something, seize upon it and then happily transmit it to others as fact. The recipients believe it, often more so when it is passed to them by a person or source in whom additional credibility is invested.

I was struck yesterday when listening to an episode of The Infinite Monkey Cage – a science programme on BBC Radio 4 – where anthropologists and evolutionary biologists were tearing their hair out at the traction gained by an image we are all familiar with. The evolution of man from ape to upright-walking man is apparently a terribly inaccurate and misleading image. Apparently it first appeared in a French school textbook back in the Fifties, and it resonated so much (which shows the power of a credible source and a good image) that it stuck and has been reproduced millions of times over. I had no idea how inaccurate it was, and I like to think that I am not very credulous. It goes to show the power of something that has been ever-present, though. Few people except the experts challenge it, even now.

The iconic, contested and wholly inaccurate image

Bringing this to business: I feel for the person or team at Apple that had to brief Tim Cook and co that the earnings forecast had to be dramatically trimmed because the previous cash cow of the iPhone was no longer selling as quickly. I appreciate I have the benefit of hindsight here, but people were hanging onto their devices for longer and railing against the so-called planned obsolescence that many believed was being built in. Couple that with the belief that the latest OS was designed to overwhelm older devices – and that without the latest OS the functionality would be limited henceforth – and consumers were really upset. Combine that in turn with the increasing length of the service contracts we are all but forced to agree to by the network providers (here in the UK at any rate) in order to have the latest tech, subsidised by those growing contracts, and I suspect this wouldn’t be such new news.

We can see the clever PR operation swing into action. Great PR relies so heavily on gut feelings and relationships that people overlook how incredible humans are at computing very complex Big Data – still far ahead of any computer. To wit: the entire slowdown has been pinned almost completely on the Chinese market. Something I find hard to swallow. I have no doubt it is a large component, and it is very politically expedient given the way China is portrayed in the US these days. The messaging seems to play heavily on the deterioration of relations between the US and China. The PR teams are operating on very thick and contextual data, nothing more. The human brains are the computers here. Either way, it is, apparently, not the fault of Apple… *coughs politely*

blaming everyone else

On the other hand, perhaps they knew of this trend and the feelings underpinning it because they had excellent Big Data, had combined it with the Thick Data approach and the insights of anthropologists, sociologists and political scientists who specialise in these fields, and could synthesise the findings into usable data – in which case the real issue wasn’t knowing this, but deciding when to let the markets know. Sadly, few large companies manage to meld their data very effectively, and usually the larger they are, the greater the disconnect between the boardroom and the customer, and the inadequacies of the information providers aren’t spotted soon enough.

What about the person responsible, or is there one? Challenging assumptions is often uncomfortable and often seen in an organisation as disruptive and potentially unwanted behaviour. A Chief Data Officer (CDO) ought to have both the support and power to ask the ‘who, what, when, where and why’ questions relentlessly. In fact, if they aren’t querying the data they are to use for gaining insight and helping the other leaders to make the best informed decisions, they are probably falling short in their role.

How Do I Know…

…if I am getting the entire Data Story?

…if it was analysed properly?

…if I can trust the conclusions and recommendations?

Every executive who relies on decision-making data presented to them by other people shares these doubts. If you don’t know how to ask the correct questions, parse the information in the replies and follow up with the right requests for more information, you will forever be at the mercy of others. My experience is that people with responsibility do not enjoy that situation.

Without an impartial assessment of the Data Story they will not be able to satisfy themselves that the Data Story they are being told is the right one. Every big decision then ends up being made with a greater element of faith than was intended.


There are two basic elements to achieving an accurate Data Story. The first is the human, and the second is the technical.

  1. Human

Everything may be tickety-boo: the best, most loyal people are giving you a perfect Data Story. If you know this to be true then stop reading now. Life is great. On the other hand, if you ever wonder, then keep reading.

(Type 1, Type 2 and Type 3 data – a recap here – for clarity, I am writing about Type 2 and Type 3 data. Remember, Type 1 is the Mars Lander sort of stuff!)
  • “These results are from AI. It can do things we can’t.”

Whether the results are attributed to AI, which has spotted a very subtle pattern in a vast mass of data, or to a straight survey designed, run and analysed by humans, means nothing in and of itself.

Even if an AI tool uses the best and the brightest to program the algorithms it ‘thinks and learns’ with, the fact remains that people – with all their attendant beliefs, prejudices, biases, agendas etc – set the rules, at least to start. If the machine has indeed learned by trial and error, it was still programmed by people. Therein lies the weakness.

human AI blend

This weakness comes from the initial decision makers, precisely because they aren’t you or your Board. The Board is likely to have a much wider range of experience and carry more responsibility than the Data Science/IT/Marketing departments.

How often have you spent time with these people? Are they even in the same office as you? How old are they? What are their social and political biases? And so on. Unless you know this, how can you begin to understand anything about the initial algorithms that started the AI going? When were they written, what was the market like then, who wrote them, and in which country?

With all data collection and manipulation it is crucial to have the fuller story: the background and understanding of those setting the questions, writing the algorithms, tweaking the machine learning and analysing the data; their managers; the instructions they have been given; and the emphasis this Data Story has received in the rest of the organisation before you see it. It also includes insight into the marketplace provided by the sort of Thick Data that Tricia Wang and other ethnographers have popularised.

My message to you is that data is so much more than numbers. Numbers alone can misrepresent the story greatly. We are social animals, and as long as there are people involved in the production, analysis and presentation of data, it doesn’t matter a jot how incredibly intelligent and fast the tools are. We are the weakness.

complicated employees

If you still struggle to believe this concept, think about electronic espionage. It is rarely a failure in something mechanical that causes catastrophic breaches of security; it is the relative ease with which people can be compromised into sharing information. The people are the weak link. In the very first days of hacking, a chap called Kevin Mitnick in the US spoke of Social Engineering as the means to an end. We are all inherently flawed, and these flaws are shaped and amplified by our social and work environments, so why couldn’t that affect the Data Story you get?

  2. Technical

  • “The data we have used is robust.”

I’ve heard that line trotted out many times. Gosh, where to start? It may be. Nonetheless, a lot can and does happen to the data before you see the pretty graph. Here are just a few things to consider before just agreeing with that assertion:

What was/were the hypothesis/hypotheses being tested?


When was it collected?

By whom (in-house or bought in from a third-party)?

Qualitative, quantitative, or a blend?

What was the method of collection (face-to-face interviews, Internet, watching and ticking boxes, survey, correlational, experimental, ethnographic, narrative, phenomenological, case study – you get the idea, there are more…)?

How was the study designed?

Who designed it?

How large was the sample(s)?

How was the data edited before analysis (by whom, when, with what tools, any change logs, what questions were excluded and why)?

How was the data analysed (univariate, multivariate, logarithmic, what were the dummy variables and why, etc.)?

How is it being presented to me, and why this way (scales, chart types, colouring, size, accompanying text etc.)?

Research design

And so on. This is just a taste of the complexity behind the pretty pictures shown to you as part of the Data Story. From these manicured reports you are expected to make serious decisions that can have serious consequences.

You must ask yourself if you are happy knowing that the Data Story you get may be intentionally curated or unintentionally mangled. I started this site and the consultancy because I am an independent sceptic. In this age of data-driven decision-making you mustn’t forget: incorrect data can’t take responsibility for mistakes, but you will be held to account. This is not scaremongering, it is simply fact.

If you need a discreet, reliable and sceptical third party to ask these questions, then drop me an email. I compile the answers, or identify and highlight the gaps. You make the decisions, albeit far better informed and with the ability to show that you didn’t take the proffered Data Story at face value, but asked an expert to help you understand it.



AI, ML & DL – A Bluffer’s Guide

AI, ML and DL are our attempts to get machines to think and learn in the way that we can. Get that right and you take the power of the human, multiply it a million-fold, and have a breathtakingly capable machine. Probably our new robot overlords, but we’ll cover that later. Whilst I do not have any issue with these developments, and do believe the goal is both attainable and useful, we are not there yet.

To date we have incredibly fast calculators that are essentially linear and binary. These are our modern computers. There are boffins in labs developing non-linear and non-binary counting machines, but they are not here yet. This means we are left with the brute-force approach to problem solving. Run the right algorithm (at least to start with, it is provided by a human) and you can get the giant calculator to supply an answer – often the correct one, but if not then it can learn from its mistakes, rewrite the algorithm and try again. (By the way: that is ML/DL in a nutshell.)

Here is a definition of ML: machine learning is the study of algorithms and mathematical models that computer systems use to progressively improve their performance on a specific task. That’s it. It is a computer learning to improve and tweak its algorithm, based on trial and error. Just like we learn things. No difference.

Here is a definition of AI: artificial intelligence, sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. However, AI is where things can really come unstuck. The aim is to get machines to think as we do, in a non-linear way. Human beings deal exceptionally well with ambiguity, and we have an ability to match up apparently different words and images. Have you ever been transported back in time, in an instant, by a song clip or a smell? That is human; no one taught you to do that.
A computer could conceivably do that, but only if it had previously been instructed to do so. It can do it so very fast that you would be forgiven for thinking it was natural. It is not, though; it is programmed to do it. Sure, it might have learnt to improve its own algorithm (Machine Learning again) based on observations of human behaviour, but it is still just mimicking what it sees as the appropriate behaviour. There has never been that spontaneous connection you experienced, the one that transported you to another time and place, even fleetingly.

A recent high-profile example of AI and ML going a little bit awry and showing bias is in this article: “Amazon Reportedly Killed an AI Recruitment System Because It Couldn’t Stop the Tool from Discriminating Against Women”. Well worth listening to the video and understanding the unconscious bias exhibited by the builders of the algorithms. There are efforts to remove the human biases that the machines learn from and perpetuate.

But what is Deep Learning, I hear you cry? It can be differentiated from Machine Learning quite simply: it is when the need for a human being to categorise all the different data inputs is eliminated, and the machine (still only the really fast calculator) does the categorising itself. Think self-driving cars, drones and many more much duller things. Presently, we humans need to be involved in the categorisation. There is even a data-labelling factory in China using humans to ‘teach’ machines what it is that they are seeing.

Equitable, Just, Neutral and Fair are components of moral behaviour that reside in the interpretation of present societal norms, and not everyone agrees with them. Different cultures can have quite different views on a correct moral choice. Remember this when someone is trying to argue about the infallibility of computers. They can only be programmed with lagging data, and they will always reflect us and our biases. For better or worse.
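The trial-and-error loop described above can be sketched in a few lines. This is a bare-bones, invented illustration (the data, the rate and the loop are all made up for the example): the machine guesses a rule, measures its mistake against examples, and nudges the rule to shrink the error – which is Machine Learning in miniature.

```python
# A bare-bones taste of 'learning from mistakes': the machine holds a rule
# y = w * x, measures its error on examples, and repeatedly tweaks w.
examples = [(1, 3), (2, 6), (3, 9)]  # inputs x with targets y = 3x

w = 0.0      # the machine's starting rule, guessed badly on purpose
rate = 0.05  # how big a correction to make after each mistake

for _ in range(200):
    # the average slope of the squared error says which way to nudge w
    grad = sum(2 * (w * x - y) * x for x, y in examples) / len(examples)
    w -= rate * grad  # tweak the rule, i.e. rewrite the algorithm slightly

print(round(w, 4))  # -> 3.0: it has 'learned' the rule from its own errors
```

Note that a human still chose the examples, the form of the rule and the size of the nudge, which is exactly where the beliefs and biases of the programmers creep in.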

Algorithms – A Bluffer’s Guide

A breakdown and simplification of some current tech speak

The word ‘algorithm’ is uttered with a degree of reverence these days. It is the black magic behind AI and Machine Learning, and a favourite thing to go rogue in a modern plot line – the actors merely blame the bad algorithm when a computer goes crazy in a dystopian sci-fi catastrophe.

The decision-making requirements we face in the modern commercial world far exceed our capacity in many instances, because our brains evolved for a very different sort of world: a world of small groups where we rarely met anyone very different from ourselves. We had significantly shorter lives, and our main priorities were sex and survival. These days there is hugely increased complexity and nuance, yet the evolved desire for rapid choice-making hasn’t left us. Faced with these pressures, we turn to computers for help.

Computers helping humans is now so pervasive that it permeates almost all aspects of life. Such rapid change has occurred in the last lifetime as computing capacity has increased exponentially. Your mobile telephone has vastly greater computing power than all the computers on the first Space Shuttle combined. Think about that for a moment: your phone possesses all the computing power required to fire you into space. This incredible capability means that people have been fascinated with the idea of turning a computer from a dumb machine into a thinking machine (thinking as we do) since the dawn of the first machine. However, computational power is one thing; how to make it work as an independent thinking machine is another thing altogether. One of the key things you need is an algorithm.

Algorithms: the rules needed for machine thinking. 

Just to clear this up: machines DO NOT think. Computers can process a huge volume of information really, really quickly because they are unbelievably fast calculators. The hardware is just a superfast counting machine with a screen.

Algorithms are not hard to conceive if you think of them like this: an algorithm is what you need to cook supper for your family. Few families eat the same thing for every meal of every day, so there are constraints and variables. Imagine there are four of you. One is a vegetarian, one is on a low-fat diet and the other two aren’t that fussy but do have preferences. You want to provide a nutritious and tasty meal that ensures everyone enjoys the experience, including you.

Let’s imagine that you are 45 and have cooked for the same people many times before (almost daily), and as a consequence you have learnt a lot about what works and what doesn’t. However, this week is different: you haven’t had time to shop and the other three did the shopping for you. You open the cupboard doors and have a peer in the fridge and freezer to get an idea of what is available to cook with. Within about 30 seconds of taking stock of the cupboard contents, the fridge contents, the available utensils, any time constraints, the dietary preferences and so on, you decide on a meal. You cook it, serve it and everyone eats. They get up from the table appropriately nourished, leaving the process to be repeated the next day. What allowed you to do this was an algorithm in your head. Call it the ‘cooking for family’ algorithm.

Pause for a moment, though, and think about how simple that sounds and how incredibly, amazingly, mind-blowingly complex and nuanced the thinking and actions required actually were.
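The ‘cooking for family’ algorithm can even be sketched as code. Every recipe, ingredient and preference below is invented for illustration; the point is how many constraints even a ‘simple’ everyday decision quietly satisfies.

```python
# A toy 'cooking for family' algorithm: pick the first meal that the
# cupboard can support and that every diner's constraints allow.
PANTRY = {"pasta", "tomatoes", "lentils", "cheese", "cream"}

RECIPES = [
    {"name": "creamy carbonara", "needs": {"pasta", "cream", "cheese"},
     "vegetarian": True,  "low_fat": False},
    {"name": "lentil ragu",      "needs": {"pasta", "tomatoes", "lentils"},
     "vegetarian": True,  "low_fat": True},
    {"name": "steak and chips",  "needs": {"steak", "potatoes"},
     "vegetarian": False, "low_fat": False},
]

def choose_supper(recipes, pantry, need_vegetarian, need_low_fat):
    """Return the first meal satisfying the stock and every diner."""
    for r in recipes:
        if not r["needs"] <= pantry:   # can we actually cook it?
            continue
        if need_vegetarian and not r["vegetarian"]:
            continue
        if need_low_fat and not r["low_fat"]:
            continue
        return r["name"]
    return "beans on toast"            # the fallback every cook knows

print(choose_supper(RECIPES, PANTRY, need_vegetarian=True, need_low_fat=True))
# -> lentil ragu
```

Even this caricature has a stock check, two dietary filters and a fallback; the version in your head adds moods, memories, utensils and time, which is why the real thing is so much harder than it sounds.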

 A quick note as to where this can go wrong

Simply put, computers are not people. Computers are superb for making decisions that do not require any emotion, ethics, bias and the like. Eventually a computer beat a Chess Grandmaster, and it did it by sheer computational brute force. However, to take the supper example: the cook knows the audience at a level a computer can’t match. For all the calculations from an algorithm, it can’t know from someone’s face that they are the special kind of tired that a Wednesday can bring, and that putting any kind of pie down for dessert would mean the world to them. And the others would see that a pie was not only what was needed but a very thoughtful gesture, thereby elevating the cook in their eyes and making an intangible but felt contribution to them too.

The aim is to have algorithms teach themselves by learning from mistakes in order to achieve the desired outcome of the programmer(s). They try, but they are far from perfect, and because we expect perfection from computers, in a way that is different from our expectations of one another, mistakes are not easily forgiven.

Data Ethics For Business

We exist in an increasingly data driven world. More and more, we are encouraged or directed to ‘listen to the data’ above all else. After all, the data doesn’t lie. Does it?


Data Ethics in business is the name of the practice used to ensure that the data being used to make high-value commercial decisions is of the highest quality possible. However, there is a catch: human beings. We have gut instinct, prejudices, experience, belief systems, conditioning, ego, expectation, deceit, vested interests etc. These behavioural biases all stand to cloud the Data Story, and usually do.

A high-value commercial decision does not necessarily have immediate financial consequences. Although, in commercial terms, a sub-optimal outcome is invariably linked with financial loss. In the first instance, the immediate effects of a high-value decision can be on organisational morale or have reputational consequences.


When a high-value decision is to be made there are invariably advocates and detractors. Both camps like to believe that they are acting in the service of a cause greater than themselves. Occasionally, some of the actors cloud the story because their self-interest is what really matters to them, and they try hard to mask that with the veneer of the greater good. Hence the term ‘Data Story’, because behind the bare numbers and pretty graphics  there is an entire story.

The concept of conducting a pre-mortem examination of the entire data story to model what can go wrong is becoming more important for senior decision makers. It is getting increasingly difficult to use the traditional internally appointed devil’s advocate as, due to the inherent complexity of understanding a data story, this function needs to be performed by subject matter experts. Although the responsibility for decision-making always falls on the Senior Management, they want to do it with a full breakdown of the many facets of the data story.



In order to achieve this, individuals with a unique blend of talents, experience and inquisitiveness must be used. People with absolute objectivity and discretion, who don’t rely on inductive reasoning. Ones who are robust enough to operate independently, diplomatically and discreetly, and who have executive backing to interrogate all the data sources, ask the difficult questions and highlight any gaps, inconsistencies and irregularities. From this they can provide a report for the Executive Sponsor(s), with questions to ask and inquiries to make, so that a well-informed decision can be made.

After all, when there is a lot at stake, no one wants to be remembered as the person who screwed up and tried to blame the data.