My friends at GRAKN.AI recently published an interesting article lamenting that “machines should be able to outperform humans in many more tasks than they currently can, or at least that they should be able to make truly smart predictions.”
The article makes the point that AI has cracked one of the key attributes of human intelligence — learning — but still has some way to go with logical reasoning over a representation of knowledge.
How do we help artificial intelligence to reason? It is so innate to us that we don’t even know we are doing it.
Take a simple example:
- If grass is not an animal.
- If vegetarians only eat things that are not animals.
- If sheep only eat grass.
It is possible to infer the following:
- Then sheep are vegetarians.
The ‘if’ statements can be seen as a set of premises. If all the premises are met, we infer through reasoning the new fact that sheep are vegetarians.
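The sheep example above can be sketched in a few lines of code. This is a minimal, hypothetical encoding of the premises (the fact and rule representations are my own illustration, not any particular reasoning engine), just to show how a new fact can be mechanically derived from existing ones:

```python
# Premises, encoded as simple data structures (illustrative only).
is_animal = {"grass": False}        # grass is not an animal
eats = {"sheep": {"grass"}}         # sheep only eat grass

def is_vegetarian(creature):
    """A creature is vegetarian if everything it eats is not an animal.
    Note the closed-world assumption: anything not listed in is_animal
    is treated as a non-animal."""
    return all(not is_animal.get(food, False)
               for food in eats.get(creature, set()))

# Inference: a new fact derived purely from the premises.
print(is_vegetarian("sheep"))  # True
```

The point is not the code itself but the shape of the process: nothing here was told that sheep are vegetarians; that fact emerges by combining the premises.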
Reasoning works on existing data to build new information, adding value in the process. It is fundamental to propelling AI to the next level. Reasoning relies on context, which is how items relate to each other in the real world. To use reasoning for a given data point, we need to know what type of data it is and how it relates to other data points. This forms the basis of knowledge representation and plays a key role in the creation of intelligent systems, enabling them to make sense of complexity.
The GRAKN.AI piece describes the area of graph learning, a new research area where some of the most promising models are Graph Convolutional Networks (GCN). In this article, I want to take a step back and look at another form of computer reasoning found in hybrid intelligence, where AI and humans collaborate.
In hybrid intelligence, machines learn to make decisions about how to perform tasks alongside humans. You can find out more in a paper from 2016, which reviews systems that use reasoning methods to optimize how and when computers “access” human intelligence for help.
We are all familiar with the idea of a semi-autonomous car: a self-driving system that has a human driver onboard to take over in emergencies. We also know this isn’t always a successful collaboration, as a fatal accident has illustrated. However, there is a belief that a successful hybrid system would find a way to offload certain computational tasks to humans where necessary, using reasoning capabilities to make effective decisions about when to ask human intelligence to step in.
In the business world, Cindicator is a startup combining human analysts with machine learning models to make investment decisions. As their white paper describes, Cindicator takes a number of diverse financial analysts and a set of machine-learning models and combines them to manage financial investments.
In scientific research, crowdsourcing is a good example of where hybrid intelligence can shine. To date, crowdsourcing typically involves a group of people working collectively on tasks such as image labeling, with their computers mostly involved passively by providing a platform within which they collaborate. However, for efficiency, it is possible for AI to “triage” tasks and make decisions about when to ask for a contribution from humans.
One such example is CrowdSynth, a large-scale crowdsourcing system for citizen science in the Galaxy Zoo project. In Galaxy Zoo, volunteers provide votes about the correct classifications of millions of galaxies that have been recorded in an automated sky survey. (Crowdsourcing provides a way for astronomers to reach a large group of workers around the world and collect millions of classifications, under the assumption that the consensus of many workers provides the correct classification of a galaxy from a choice of 6 possible classes: elliptical galaxy, clockwise spiral galaxy, anticlockwise spiral galaxy, other spiral galaxy, star, and merger).
CrowdSynth is a model that combines machine learning and decision-theoretic optimization techniques to pull together the complementary strengths of humans and machines. For each Galaxy Zoo task, it uses automated computer vision and supervised learning to infer how accurate its own classification is likely to be compared to that of human Galaxy Zoo workers. It trades off the value of acquiring an assessment from a human worker against the time and financial cost of involving one, so it limits reliance on human intelligence to edge cases where it is unsure of its analysis. CrowdSynth was found to achieve the maximum accuracy using just 47% of the original set of human workers involved in the analysis. When working under a fixed budget, the gains from using CrowdSynth allow scientists to use their human “resource” more intelligently and efficiently.
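The triage logic behind this kind of trade-off can be sketched as a value-of-information check: consult a human only when the expected gain in accuracy outweighs the cost of asking. The function below is a hypothetical simplification of such a decision, and all the numbers in it are illustrative placeholders, not values from CrowdSynth itself:

```python
# A hedged sketch of a value-of-information decision, in the spirit of
# CrowdSynth-style triage. Thresholds and costs are hypothetical.

def should_ask_human(machine_confidence: float,
                     value_of_correct: float = 1.0,
                     cost_of_human: float = 0.2) -> bool:
    # Expected gain from an (assumed reliable) human answer: the chance
    # the machine is wrong, times the value of getting the answer right.
    expected_gain = (1.0 - machine_confidence) * value_of_correct
    return expected_gain > cost_of_human

print(should_ask_human(0.95))  # False: model is confident, not worth the cost
print(should_ask_human(0.60))  # True: too uncertain, pay for a human vote
```

A real system would learn these quantities from data and reason sequentially about how many votes to collect, but the core idea is the same: human intelligence is a costly resource to be spent where it buys the most accuracy.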
An AI system grappling with the decision of accessing human help needs to have an understanding of the capabilities of its helper and the costs and constraints associated with asking for help. It can even be tweaked to make effective decisions about the best worker to hire (who’s best at spotting spiral galaxies?) and the best task to assign to workers as they become available.
Perhaps it will be a while before machine intelligence outperforms us, but the hybrid intelligence model provides a way for us to work together. Here’s to collaboration!