How can a discipline studying the way brains evolve, develop and work drive business growth? It certainly appears to be generating a lot of interest from the likes of serial entrepreneur and innovator Elon Musk, who founded the company Neuralink in 2017 with the aim of building implantable brain-computer interfaces. In the same year, Facebook set up its own neuroscience research lab dedicated to devising new marketing techniques.
We, as Artificial Intelligence engineers, are primarily interested in understanding the brain at a systems level – that is, its functional architecture and representations, and how it processes and stores information – in order to mimic these processes in algorithms.
Although neuroscience and artificial intelligence (AI) have diverged in recent times and become separate fields in their own right, understanding how biological brains (be they human or animal) work will be crucial for building the intelligent machines of the future. Neuroscience can inspire and guide new types of algorithms and architectures, independent of, but complementary to, the mathematical and logic-based techniques that have been the mainstay of AI until now. Indeed, if a new aspect of biological computation was found to be vital for a cognitive function, it might be exploited in an artificial system. The same applies to a known algorithm that was found to be naturally occurring in the brain.
Neuroscience at Mantu
Mantu’s Research & Development Lab is working on a diverse range of data science projects with applications in a multitude of industries, including finance, recruitment, marketing, analytics, management and IT systems. We focus on finding solutions to the Group’s needs, be they external (from our clients) or internal (coming from our teams).
When a client asks us to solve a problem, we sometimes already have an appropriate algorithm we can deploy; when we don’t have the relevant tools, or we are seeking better results, we can draw on neuroscience concepts to build custom solutions. Amongst other data science projects, we are working on neocortex-inspired prototypes, ranging from algorithms to supporting technical tools. For topics like anomaly detection, for instance, we have been focusing on neuroscience-inspired artificial intelligence, exploiting recent advances in the field to help our clients transform and enhance their businesses.
Anomaly detection
Anomaly detection (and in particular unsupervised anomaly detection) involves developing algorithms that capture the process by which a human would judge a given situation to be abnormal. A lot of research has been done in this field since the early 1970s, but it is becoming especially relevant now, as we see an enormous surge in the amount of streaming, time-series data becoming available, largely driven by the increase in connected real-time data sources and the Internet of Things (IoT). An anomaly can be broadly defined as a point in time at which the behaviour of a system is perceived as unusual and significantly different from previous, normal behaviour (although more complex definitions do exist).
Detecting anomalies in time series could have a far-reaching impact on a range of industries including finance, banking, IT, security, healthcare, energy, e-commerce, agriculture and social media. For example, a fluctuation in the turbine rotation frequency of a jet engine might indicate impending failure, while an anomaly in a heart monitoring data stream could be a sign of a heart attack. An anomaly doesn’t have to be negative though – an abnormally high number of web clicks on a new product page could simply imply stronger than usual demand.
In any case, anomalies in data flag abnormal behaviour that carries potentially useful information, and we need that information early enough to be able to act – for example, to prevent a system failure.
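Before turning to neuroscience-inspired methods, it is worth noting that the simplest unsupervised detectors work on purely statistical grounds. The sketch below flags points in a streaming series that deviate strongly from a rolling mean; the window size, threshold and injected spike are illustrative values chosen for this example, not settings from any Mantu project.

```python
# A minimal sketch of unsupervised anomaly detection on a streaming time
# series: a point is flagged when it lies more than `threshold` standard
# deviations away from the rolling mean of the previous `window` points.
# All parameter values here are illustrative, not taken from a real system.
from collections import deque
import math
import random


def streaming_zscore_anomalies(stream, window=50, threshold=3.0):
    """Yield (index, value, is_anomaly) for each point in the stream."""
    history = deque(maxlen=window)
    for i, value in enumerate(stream):
        if len(history) == window:
            mean = sum(history) / window
            std = math.sqrt(sum((x - mean) ** 2 for x in history) / window)
            # Flag the point if it deviates strongly from recent behaviour.
            is_anomaly = std > 0 and abs(value - mean) > threshold * std
        else:
            is_anomaly = False  # not enough history yet to judge
        history.append(value)
        yield i, value, is_anomaly


# Example: a noisy but stable signal with one obvious spike injected.
random.seed(0)
signal = [10 + random.gauss(0, 0.5) for _ in range(200)]
signal[120] += 8.0
flagged = [i for i, _, is_anomaly in streaming_zscore_anomalies(signal) if is_anomaly]
print(flagged)  # the injected spike at index 120 should be among the flagged points
```

Detectors like this break down when ‘normal’ behaviour itself changes over time or follows complex temporal patterns, which is precisely where sequence-learning approaches such as HTM, described below, become interesting.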
Hierarchical Temporal Memory
One of the theoretical neocortical frameworks for anomaly detection that Mantu’s R&D Lab has worked on is an online sequence memory algorithm called ‘Hierarchical Temporal Memory’ (HTM). This algorithm mimics how biological brains build up memories using representations of space and time, and it could be used to construct new kinds of Machine Learning algorithms. In the brain, memories are forged through pre- and post-synaptic stimulation of neurons when dealing with a familiar object. Unfamiliar, or familiar but unexpected, events can be associated with a mismatch, or with few synaptic connections, between two consecutive neuron assemblies.
Deep learning, a subset of machine learning within AI, is loosely based on this biological picture of interconnected neurons whose connections strengthen and weaken with experience.
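To make this concrete, HTM-based anomaly detection (as popularised by Numenta) scores each new input by the fraction of it that the sequence memory failed to predict from the previous time step. The sketch below shows only that final scoring step, applied to toy sparse representations written as sets of active column indices; a full HTM system also includes an encoder, a spatial pooler and an online-learning temporal memory, all omitted here.

```python
# A highly simplified sketch of the HTM-style anomaly score: the fraction of
# currently active columns that the sequence memory did NOT predict from the
# previous time step. The sparse representations below are toy sets of column
# indices; a real HTM system learns its predictions online.

def htm_anomaly_score(active_columns, predicted_columns):
    """Return 1.0 for a completely unexpected input, 0.0 for a fully predicted one."""
    if not active_columns:
        return 0.0
    unexpected = active_columns - predicted_columns
    return len(unexpected) / len(active_columns)


# A familiar, well-predicted input: every active column was anticipated.
print(htm_anomaly_score({1, 4, 9, 16, 25}, {1, 4, 9, 16, 25, 36}))  # 0.0

# A surprising input: only one of the five active columns was predicted.
print(htm_anomaly_score({2, 3, 5, 7, 11}, {11, 40, 41}))            # 0.8
```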
Artificial Intelligence mimicking cortical processes
In this context, one of our goals is to build systems that perceive the world as we represent it. One way to build AI algorithms is to have them solve problems the way humans do: mimicking humans might not be the best approach, but it is one we are familiar with, and the more faithfully we can model human behaviour, the better. An added advantage is that, thanks to these models, we can begin to better understand ourselves and how we make decisions.
Grid cells and spiking neural networks are also interesting in this context, because they could inspire frameworks in which information is processed in discrete events, helping to construct consistent representations of temporal relationships.
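To illustrate what processing information in discrete terms looks like for spiking neural networks, the sketch below simulates a single leaky integrate-and-fire neuron, one of the simplest spiking models: its membrane potential integrates the input, leaks back towards rest, and emits a discrete spike whenever it crosses a threshold. The parameter values are generic textbook defaults, not taken from any of our prototypes.

```python
# A minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# integrates input current, decays towards its resting value, and emits a
# discrete spike (then resets) whenever it crosses the firing threshold.
# All parameter values are illustrative defaults.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Return the list of time steps at which the neuron spikes."""
    v = v_rest
    spike_times = []
    for t, i_in in enumerate(input_current):
        # Leaky integration: decay towards rest, then add the input drive.
        v += dt / tau * (v_rest - v) + i_in
        if v >= v_threshold:
            spike_times.append(t)   # the neuron fires a discrete spike...
            v = v_reset             # ...and its potential is reset
    return spike_times


# A constant, weak input drives the neuron to fire at a regular rate.
print(simulate_lif([0.06] * 100))
```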
Another project we are working on is called Gazeline, which evaluates how well a candidate answers a given question. In practical terms, the candidate works through the problem while a camera tracks their eye movements. Detecting gaze targets is known to play a role in extrapolating people’s goals, and such innate structures could one day be replicated by constructing local cortical regions with specific initial neuronal connectivity, supplying inputs and error signals to particular targets.
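As a purely hypothetical illustration of the kind of low-level processing involved in such a project (and not Gazeline’s actual method), the sketch below maps raw gaze samples onto named areas of interest on the screen and accumulates the dwell time spent in each; the region names, coordinates and sampling period are invented for the example.

```python
# A hypothetical illustration of one basic step in gaze analysis: mapping
# (x, y) gaze samples onto named areas of interest (AOIs) and accumulating
# dwell time per area. The AOI layout below is invented for this example
# and does not describe Gazeline's actual screen layout or method.
from collections import defaultdict

# Each AOI is (x_min, y_min, x_max, y_max) in screen pixels.
AOIS = {
    "question_text": (0, 0, 1920, 300),
    "answer_area":   (0, 300, 1920, 900),
    "toolbar":       (0, 900, 1920, 1080),
}


def dwell_times(gaze_samples, sample_period_ms=10):
    """Return milliseconds of gaze spent in each AOI, given (x, y) samples."""
    totals = defaultdict(float)
    for x, y in gaze_samples:
        for name, (x0, y0, x1, y1) in AOIS.items():
            if x0 <= x < x1 and y0 <= y < y1:
                totals[name] += sample_period_ms
                break
    return dict(totals)


# 100 ms spent on the question text, then 200 ms in the answer area.
samples = [(500, 150)] * 10 + [(800, 600)] * 20
print(dwell_times(samples))  # {'question_text': 100.0, 'answer_area': 200.0}
```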
Machine learning: looking to the future
Our aim is to develop innovative solutions that address both short- and long-term use cases, drawing on neuroscience and frameworks like HTM where required. To this end, we are also starting collaborations with academic labs working in AI.
Mantu’s teams are currently working on an extensive range of projects, including: Natural Language Processing (for knowledge extraction and chatbots); clustering and classification; geographic human flow models for market analyses with Markov models; Eye Tracking; Automatic Machine Learning; and Augmented Reality. Deep Learning and Reinforcement Learning are also important topics of research.
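To give a flavour of the Markov-model approach to human flow mentioned above, the sketch below encodes movement between a few invented zones as a transition matrix and iterates it to obtain a long-run distribution of people across zones; the zone names and probabilities are made up for this example.

```python
# A minimal sketch of the Markov-chain idea behind geographic human-flow
# models: a transition matrix gives the probability of moving from one zone
# to another per time step, and iterating it yields the long-run share of
# people in each zone. Zone names and probabilities are invented examples.
import numpy as np

zones = ["residential", "business_district", "shopping_area"]

# transition[i][j] = probability of moving from zone i to zone j per step.
transition = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.3, 0.3, 0.4],
])

# Start with everyone in the residential zone and let the chain evolve.
distribution = np.array([1.0, 0.0, 0.0])
for _ in range(50):
    distribution = distribution @ transition

print(dict(zip(zones, distribution.round(3))))
```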
Ultimately, all these research projects will find applications in Market Intelligence, Online Reputation, Talent Management, Maintenance and Cybersecurity.
According to a recent report by the McKinsey Global Institute, by 2030 AI is expected to generate some $13 trillion of additional economic activity worldwide. Neuroscience and AI will thus play a key role in the transformation of our society, and our Group aims to play an active part in this revolution.
Future global challenges, such as efficient food production for a growing population, energy consumption and more complex societal structures will all require sophisticated modelling and an in-depth understanding of these models. Neuroscience and AI will help us face these issues and ultimately allow us to better explain our behaviour in a variety of situations and the way we react to the world around us.
References
Using neuroscience to develop artificial intelligence, Shimon Ullman, Science, Vol. 363, Issue 6428, pp. 692-693 (2019)
Infants’ ability to connect gaze and emotional expression to intentional action, Ann T. Phillips et al., Cognition, Vol. 85, pp. 53-78 (2002)
Notes from the AI Frontier: Modelling the Impact of AI on the World Economy, Jacques Bughin, Jeongmin Seong, James Manyika, Michael Chui and Raoul Joshi, McKinsey Global Institute (2018)