Data – The Fog of Promises, and What To Do About It

Calculating the value of data is something I have been thinking about a lot. Data, any and all, seems to be relentlessly hoovered up whenever we use any form of connected device. Who had ever heard of data as a mainstream topic twenty years ago? Nowadays, we have seen Mark Zuckerberg answering to Congress in the States and countless articles based around what Google and Apple know about us. Some people are laissez-faire about it whilst others veer towards the downright paranoid.

[Image: Mark Zuckerberg testifying before Congress]

Organisations collect data, hoard data and (hopefully) guard the vast amounts they amass. Why? Because it is valuable. It is useful. Apparently. However, who in a company actually gets down to the nitty-gritty of this and can measure and express the Return on Data (RoD) that this feverish collection and hoarding actually brings to the organisation?

In 2015 Doug Laney of Gartner wrote about data in financial terms: how it can affect the value of a takeover target that holds a vast unexploited data store, for example. Were that store to be monetised, what would it be worth? Does it mean the buyer is getting a fantastic deal, or, when the target seems overvalued on traditional metrics, is the difference made up by the value of its data? Herein lies a real problem, because the difficulty in valuing data has several causes.

Firstly, there is no firm formula for doing so, because to some owners that data is just wasted storage and to others it is gold. A physical asset, such as a piece of land, is so mainstream that it is far easier to value. With data, the great big lump of bits and bytes only has value if the owner knows how to extract information and insight from it, and can use that effectively to become more competitive or to sell it to someone else in a finished and usable form. People have had a stab at it by trying to dress old maths up as new maths. I found the following on the Internet:

[Image: a Return on Data formula]
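
As best I can render it, it amounts to the classic Return on Investment formula dressed up for data (my paraphrase, not necessarily the image’s exact wording):

    RoD = (Gain from Data - Cost of Data) / Cost of Data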

Though this looks like an elegant formula, the Gain from Data metric is subject to so many other variables, primarily time, that it is almost impossible to calculate so simply, making the formula impossible to scale. It serves mainly to highlight how important the temporal aspect of data value is. Depending on what the data is, it may be very time-limited, useful only in a very brief window. Think of data like a paper currency that can burst into flames at any moment.

[Image: burning money]

One second it has the face value and the next it is ashes.

In contrast, a piece of land is just there. No more land is being created, whereas data creation is never-ending: limited only by our ability to get it and store it.
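
To make the temporal point concrete, here is a toy sketch in Python (entirely illustrative – the exponential shape and the 30-day half-life are assumptions invented for this example, not measured properties of any real dataset):

    def data_value(initial_value: float, age_days: float, half_life_days: float) -> float:
        """Toy model: the dataset's value halves every half_life_days.
        Both the decay shape and the half-life are invented for illustration."""
        return initial_value * 0.5 ** (age_days / half_life_days)

    # A dataset notionally worth 100,000 when fresh, halving in value every 30 days:
    for age_days in (0, 30, 90, 365):
        print(f"day {age_days:>3}: {data_value(100_000, age_days, 30):>12,.2f}")

On those invented numbers the data is worth a fraction of a percent of its face value within a year. Your decay curve will look different, but pretending it is flat makes any valuation a fiction.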

Secondly, the technical aspects are crucial. What form is it held in, on what type of database, where is it held (there are massive regulatory differences around the world), have the data owners consented to its use, by whom, how old is it, how consistent is it, and so on? If I can’t use it in my company for my purposes then it is just ones and zeros on a hard drive somewhere, merely cluttering up the ether. Utterly without value.

The fact remains that extraordinary amounts of data are being recorded about us, all of the time. I recently holidayed in Norway and in ten days I didn’t use one bit of hard currency. All card, all the time. I navigated around using Google Maps. I checked TripAdvisor and used Uber, as well as uploading countless photos to Facebook for family abroad to see. In doing so I must have left an enormous digital smear across the Norwegian landscape. Me and the thousands of other tourists on holiday at the same time. Can you imagine the quantity of data generated by me and the billions of other people using connected services every single day?

To achieve a RoD that makes all the effort and cost of collection and storage worthwhile, several things need to happen, and I can only really see them happening under the guidance and direction of a very senior – if not Board-level – individual who leads a team with specific responsibilities. Call them a Chief Data Officer (CDO).

Ideally the value of data is considered so important that the CDO is on the Board. The CDO would need close ties with the Marketing and Strategy functions to understand their goals, how they intend to use resources to achieve them, and whether existing data is useful or new data needs to be acquired. Additionally, they need to know how to shape and deliver that data to them in a worthwhile manner. Then there needs to be a real-time feedback loop – Sales? – in order to assess the efficacy of the deployed data, as well as a direct line between the CDO and the technical functions of the company: the sort of things the CIO deals with, especially storage and access. The CFO will face demands on their funds from the CDO, and needs to be able to understand the RoD and how it affects the bottom line, the share price, their partners and so on.

[Image: a businessman staring into fog]

Most important of all is someone who can see through the Fog of Promise that all this data is purported to hold. The RoD that could be achieved if only the data were used ‘properly’ is the sort of golden thread that is so often sold to Boards. Correlation does not equal causation. I’ll repeat that: correlation DOES NOT equal causation. Falling into the Feynman Trap is something that affects the best and the brightest (famously, Jim Collins did so in Good to Great), usually when they become mesmerised by their own belief in the infallibility of data.
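
If you want to feel how easily the trap springs, here is a toy sketch in Python (my own illustration, nothing to do with Collins): pairs of completely independent random walks routinely show strong-looking correlations, simply because trending series correlate with everything.

    import random

    def random_walk(steps: int) -> list[float]:
        """An independent random walk: the running sum of fair coin-flip steps."""
        position, walk = 0.0, []
        for _ in range(steps):
            position += random.choice((-1.0, 1.0))
            walk.append(position)
        return walk

    def pearson(xs: list[float], ys: list[float]) -> float:
        """Plain Pearson correlation coefficient."""
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    # Ten pairs of walks with no causal link between them whatsoever:
    correlations = [pearson(random_walk(500), random_walk(500)) for _ in range(10)]
    print("correlations:", ", ".join(f"{c:+.2f}" for c in correlations))
    print(f"largest spurious |r|: {max(abs(c) for c in correlations):.2f}")

Run it a few times: something in the batch will usually correlate impressively, and none of it means anything.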

The CDO not only ensures the data is valued correctly, they are responsible for preventing their company from being led down a rabbit-hole of promises of the jam-tomorrow variety. The sunk cost fallacy remains as relevant today as it ever was, and sometimes the emperor is indeed naked.

How Do I Know…

…if I am getting the entire Data Story?

…if it was analysed properly?

…if I can trust the conclusions and recommendations?

Every executive who relies on decision-making data presented to them by other people shares these doubts. If you don’t know how to ask the correct questions, parse the information in the replies correctly and follow up with the right requests for more information, you will forever be at the mercy of others. My experience is that people with responsibility do not enjoy that situation.

Without an impartial assessment of the Data Story, they cannot satisfy themselves that the Data Story they are being told is the right one. Every big decision is then made with a greater element of faith than was intended.

[Image: untrustworthy]

There are two basic elements to achieving an accurate Data Story. The first is the human, and the second is the technical.

1. Human

Everything may be tickety-boo: the best, most loyal people may be giving you a perfect Data Story. If you know this to be true then stop reading now. Life is great. On the other hand, if you ever wonder, keep reading.

(Type 1, Type 2 and Type 3 data – a recap here – for clarity, I am writing about Type 2 and Type 3 data. Remember, Type 1 is the Mars Lander sort of stuff!)
  • “These results are from AI. It can do things we can’t.”

Whether the results are attributed to an AI, which has spotted a very subtle pattern in a vast mass of data, or to a straight survey designed, run and analysed by people, means nothing in and of itself.

Even if an AI tool had the best and the brightest programming the algorithms it ‘thinks and learns’ with, the fact remains that people – with all their attendant beliefs, prejudices, biases, agendas and so on – set the rules, at least to start with. Even if the machine has since learned by trial and error, it was still programmed by people. Therein lies the weakness.

[Image: a human and AI blend]

This weakness comes from the initial decision makers, precisely because they aren’t you or your Board. The Board is likely to have a much wider range of experience and carry more responsibility than the Data Science/IT/Marketing departments.

How often have you spent time with these people? Are they even in the same office as you? How old are they? What are their social and political biases? And so on. Unless you know this, how can you begin to understand anything about the initial algorithms that set the AI going? When were they written, what was the market like then, by whom, in which country?

With all data collection and manipulation it is crucial to have the fuller story: the background and understanding of those setting the questions, writing the algorithms, tweaking the machine learning and analysing the data; their managers; the instructions they have been given; and the emphasis this Data Story has received in the rest of the organisation before you see it. It also means insight into the marketplace, of the sort provided by the Thick Data that Tricia Wang and other ethnographers have popularised.

My message to you is that data is so much more than numbers. Numbers alone can misrepresent the story hugely. We are social animals, and as long as there are people involved in the production, analysis and presentation of data, it doesn’t matter a jot how incredibly intelligent and fast the tools are. We are the weakness.

[Image: complicated employees]

If you still struggle to believe this, think about electronic espionage. It is rarely a failure of something mechanical that causes catastrophic breaches of security; it is the relative ease with which people can be compromised into sharing information. The people are the weak link. In the very first days of hacking, a chap called Kevin Mitnick in the US spoke of Social Engineering as the means to an end. We are all inherently flawed, and these flaws are shaped and amplified by our social and work environments, so why wouldn’t they affect the Data Story you get?

2. Technical

  • “The data we have used is robust.”

I’ve heard that line trotted out many times. Gosh, where to start? It may well be robust. Nonetheless, a lot can and does happen to the data before you see the pretty graph. Here are just a few things to consider before agreeing with that assertion (a sketch of how you might record the answers follows below):

What was/were the hypothesis/hypotheses being tested?

Why?

When was it collected?

By whom (in-house or bought in from a third-party)?

Qualitative, quantitative, or a blend?

What was the method of collection (face-to-face interviews, Internet, watching and ticking boxes, survey, correlational, experimental, ethnographic, narrative, phenomenological, case study – you get the idea, there are more…)?

How was the study designed?

Who designed it?

How large was the sample(s)?

How was the data edited before analysis (by whom, when, with what tools, any change logs, what questions were excluded and why)?

How was the data analysed (univariate, multivariate, logarithmic, what were the dummy variables and why, etc.)?

How is it being presented to me, and why this way (scales, chart types, colouring, size, accompanying text, etc.)?

[Image: research design]

And so on. This is just a taste of the complexity behind the pretty pictures shown to you as part of the Data Story. From these manicured reports you are expected to make serious decisions that can have serious consequences.
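
One way to keep those questions from evaporating in the meeting is to insist that every chart arrives with its provenance attached. Here is a minimal sketch in Python of what such a record might look like (the structure and field names are my own invention, not any standard):

    from dataclasses import dataclass, field

    @dataclass
    class DataStoryProvenance:
        """Illustrative provenance record to accompany a Data Story.
        The fields are invented for this sketch; adapt them to your organisation."""
        hypothesis: str                    # what was being tested, and why
        collected_when: str                # collection period
        collected_by: str                  # in-house team or third-party supplier
        method: str                        # survey, experiment, ethnography, etc.
        sample_size: int                   # how large the sample(s) were
        design_owner: str                  # who designed the study
        edits: list[str] = field(default_factory=list)  # change log: who, when, what, why
        analysis: str = ""                 # univariate, multivariate, dummy variables, etc.
        presentation_choices: str = ""     # scales, chart types, colouring, size

        def gaps(self) -> list[str]:
            """Names of the fields nobody has filled in – your first questions."""
            return [name for name, value in vars(self).items() if not value]

Anything gaps() returns is your first round of questions before the pretty graph earns any trust.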

You must ask yourself if you are happy knowing that the Data Story you get may be intentionally curated or unintentionally mangled. I started this site and the consultancy because I am an independent sceptic. In this age of data-driven decision-making you mustn’t forget: incorrect data can’t take responsibility for mistakes, but you will be held to account. This is not scaremongering, it is simply fact.

If you need a discreet, reliable and sceptical third-party to ask these questions then drop me an email. I compile the answers, or identify and highlight the gaps. You make the decisions, albeit far better informed and with the ability to show that you didn’t take the proffered Data Story at face value, but asked an expert to help you understand it.