AI, ML and DL are our attempts to get machines to think and learn the way that we do. Get that right and you take the power of a human, multiplied a million-fold, and have a breathtakingly capable machine. Probably our new robot overlords too, but we'll cover that later. Whilst I do not have any issue with these developments, and do believe they are both attainable and useful, we are not there yet.
To date we have these incredibly fast calculators that are essentially linear and binary. These are our modern computers. There are boffins in labs developing non-linear and non-binary counting machines, but they are not here yet. This means that we are left with the brute-force approach to problem solving. Run the right algorithm (at least to start with it is provided by a human) and you can get the giant calculator to supply an answer, often the correct one; but if not, it can learn from its mistakes, rewrite the algorithm and try again. (By the way: that is ML/DL in a nutshell.)
Here is a definition of ML: Machine learning is the study of algorithms and mathematical models that computer systems use to progressively improve their performance on a specific task. That's it. It is a computer learning to improve and tweak its algorithm, based on trial and error. Just like we learn things. No difference.
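That "trial and error" loop can be sketched in a few lines of code. This is a toy illustration, not any real system: the program's entire "knowledge" is a single number, which it repeatedly nudges to reduce its error at a task (here, guessing the multiplier that maps inputs to outputs). The data and the learning rate are invented for the example.

```python
# Toy "learning by trial and error": tweak a single number (the model)
# until the error on the task shrinks. The true rule here is y = 3x.
data = [(1, 3), (2, 6), (3, 9), (4, 12)]  # inputs and the answers we want

slope = 0.0           # the machine's initial guess
learning_rate = 0.01  # how big a tweak to make after each mistake

for step in range(1000):
    for x, target in data:
        prediction = slope * x
        error = prediction - target        # how wrong was the guess?
        slope -= learning_rate * error * x  # nudge the guess to reduce error

print(round(slope, 2))  # converges towards 3.0
```

No human ever told the program that the answer was 3; it arrived there purely by making guesses, measuring how wrong they were, and adjusting.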
Here is a definition for AI: Artificial intelligence, sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals.
However, AI is where things can really come unstuck. The aim is to get machines to think as we do, in a non-linear way. Human beings deal exceptionally well with ambiguity, and we have an ability to match up apparently unrelated things, like words and images.
Have you ever been transported back in time, in an instant, by a song clip or a smell? That is human; no one taught you to do that. A computer could conceivably do that, but only if it had previously been instructed to do so. It can do it so very fast you would be forgiven for thinking it was natural. It is not, though; it is programmed to do it. Sure, it might have learnt to improve its own algorithm (Machine Learning again) based on observations of human behaviour. But it is still just mimicking what it sees as the appropriate behaviour; there has never been that spontaneous connection that you experienced, the one that transported you to another time and place, even fleetingly.
A recent high-profile example of AI and ML going a little awry and showing bias is in this article: "Amazon Reportedly Killed an AI Recruitment System Because It Couldn't Stop the Tool from Discriminating Against Women". It is well worth watching the video and understanding the unconscious bias exhibited by the builders of the algorithms. There are efforts to remove the human biases that the machines learn from and perpetuate.
But what is Deep Learning, I hear you cry? It can be differentiated from Machine Learning quite simply: it is when the need for a human being to categorise all the different data inputs is eliminated. Now the machine (still only the really fast calculator) does the categorising itself. Think self-driving cars, drones and many more much duller things. Presently, we humans need to be involved in the categorisation. There is even a data-labelling factory in China that uses humans to 'teach' machines what it is that they are seeing.
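The distinction can be made concrete with a contrived little sketch. Everything in it is invented for illustration: in the classic Machine Learning style, a human first decides which categories describe the input; in the Deep Learning style, the raw data goes straight in and the machine works out its own internal categories during training (which is far too much code to show here).

```python
# Classic ML style: a human has already categorised the photo into
# hand-picked features, and a rule operates on those categories.
photo_features = {"has_wheels": True, "has_windows": True, "is_metal": True}

def looks_like_car(features):
    # a rule over human-labelled categories
    return features["has_wheels"] and features["has_windows"]

# Deep Learning style: no human categories at all. The raw pixels go in,
# and the network invents its own internal features while training.
raw_pixels = [[0.1, 0.8], [0.3, 0.5]]  # a tiny stand-in for an image

print(looks_like_car(photo_features))  # → True
```

The labelling factory exists precisely because, for now, someone still has to produce those human categories, whether they feed a hand-written rule or the training data for a network.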
Equitable, Just, Neutral and Fair are components of moral behaviour that reside in the interpretation of the present societal norms, and not everyone agrees with them. Different cultures can have quite different views on a correct moral choice.
Remember this when someone is trying to argue about the infallibility of computers. They can only be programmed with lagging data and they will always reflect us and our biases. For better or worse.
A breakdown and simplification of some current tech speak
The word ‘algorithm’ is uttered with a degree of reverence these days. It is the black magic behind AI and Machine Learning and is a favourite thing to go rogue in a modern plot line. The actors merely blame the bad algorithm when a computer goes crazy in a dystopian sci-fi catastrophe.
The decision making requirements that we are faced with in the modern commercial world far exceed our capacity in many instances because our brains evolved for a very different sort of world. A world of small groups where we rarely met anyone very different from ourselves. We had significantly shorter lives and our main priorities were sex and survival. These days there is hugely increased complexity and nuance yet the evolved desire for rapid choice-making hasn’t left us.
Faced with these pressures we turn to computers for help. Computers helping humans is so pervasive that it permeates almost all aspects of life. Such a rapid change has occurred within a single lifetime as computing capacity increases exponentially. Your mobile telephone has vastly greater computing power than the computers on the first Space Shuttle. Think about it for a moment. Your phone possesses all the computing power required to fire you into space. This incredible capability means that people have been fascinated with the idea of turning a computer from a dumb machine into a thinking machine (thinking as we do) since the dawn of computing.
However, computational power is one thing. How to make it work as an independent thinking machine is another thing altogether. One of the key things you need to do this is an algorithm.
Algorithms: the rules needed for machine thinking.
Just to clear this up: machines DO NOT think. Computers can process a huge volume of information really, really quickly because they are unbelievably fast calculators. The hardware is just a superfast counting machine with a screen.
Algorithms are not hard to conceive if you think of them like this: an algorithm is what you need to cook supper for your family.
Few families eat the same thing for every meal of every day, so there are constraints and variables. Imagine there are four of you. One is a vegetarian, one is on a low-fat diet and the other two aren't that fussy but do have preferences. You want to provide them with a nutritious and tasty meal that ensures everyone enjoys the experience, including you.
Let’s imagine that you are 45 and have cooked for the same people many times before (almost daily) and as a consequence you have learnt a lot about what works and what doesn’t. However, this week is different and you haven’t had time to shop and the other three did the shopping for you. You open the cupboard doors and have a peer in the fridge and freezer to get an idea of what is available for you to cook with.
Within about 30 seconds of taking stock of the cupboard contents, the fridge contents, the available utensils to cook with, any time constraints, the dietary preferences and so on you decide on a meal. You cook it, serve it and everyone eats. They get up from the table appropriately nourished leaving the process to be repeated the next day.
What allowed you to do this was an algorithm in your head. Call it the 'cooking for family' algorithm.
Pause for a moment, though, and think about how simple it can sound and how the thinking and actions required were actually so incredibly, amazingly, mind-blowingly complex and nuanced.
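A drastically simplified version of that 'cooking for family' algorithm can even be written down as code. Every recipe, ingredient and rule below is invented purely to show the shape of the decision: filter the candidate meals by the constraints (diets, what's in the cupboard, time), then pick the best of what remains.

```python
# A toy 'cooking for family' algorithm: constraints in, one meal out.
recipes = [
    {"name": "lentil curry", "vegetarian": True, "low_fat": True,
     "minutes": 40, "needs": {"lentils", "onion", "spices"}},
    {"name": "fried chicken", "vegetarian": False, "low_fat": False,
     "minutes": 30, "needs": {"chicken", "flour", "oil"}},
    {"name": "veggie stir fry", "vegetarian": True, "low_fat": True,
     "minutes": 20, "needs": {"rice", "peppers", "soy sauce"}},
]

pantry = {"lentils", "onion", "spices", "rice", "peppers", "soy sauce"}
time_available = 45  # minutes before everyone is hungry

def choose_meal(recipes, pantry, time_available):
    # keep only meals everyone can eat, made from what we actually have,
    # in the time we actually have
    candidates = [r for r in recipes
                  if r["vegetarian"] and r["low_fat"]
                  and r["needs"] <= pantry
                  and r["minutes"] <= time_available]
    if not candidates:
        return None
    # 'preference' here is crudely reduced to: the quickest acceptable meal
    return min(candidates, key=lambda r: r["minutes"])["name"]

print(choose_meal(recipes, pantry, time_available))  # → veggie stir fry
```

Notice how much the code leaves out compared with what your head does in those 30 seconds: it knows nothing about moods, Wednesdays, or what worked last week. That gap is precisely the point of the next section.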
A quick note as to where this can go wrong
Simply put, computers are not people. Computers are superb at making decisions that do not require any emotion, ethics, bias and the like. Eventually a computer beat a Chess Grandmaster, and it did it by sheer computational brute force. However, to take the supper example: the cook knows the audience at a level a computer can't match. For all the calculations an algorithm can run, it can't know from someone's face that they are the special kind of tired that a Wednesday can make them, and that putting any kind of pie down for dessert would mean the world to them. And the others would see that a pie was not only what was needed but a very thoughtful gesture, elevating the cook in their eyes and making an intangible but felt contribution to them too.
The aim is to have algorithms teach themselves by learning from mistakes in order to achieve the outcome the programmer(s) desired. They try, but they are far from perfect, and because we expect perfection from computers, in a way that differs from our expectations of one another, mistakes are not easily forgiven.