Akseli Ilmanen

Getting to know AI

Updated: Oct 19, 2022



‘Select all squares with traffic lights’ is a common online prompt to prove that you’re not a robot. It is also a typical task in machine learning – the biggest subfield of artificial intelligence (AI). More formally, it is a ‘binary classification problem’: an algorithm must determine whether an image contains a particular object – for example, does an image contain a cat or not? The architecture of such an algorithm might look like Figure 1, where the circles are known as ‘nodes’ or ‘artificial neurons’ and the arrows as ‘connections’. Fittingly, the architecture is called an artificial neural network (ANN) and the field is described as connectionism, although the term ‘deep learning’ is more commonly used these days.

How does this deep learning work? The content of a cat image can be represented in numbers by taking the RGB colour values of each image pixel. This numerical information is inserted into the nodes in the input layer, from where it is processed and passed forward through the hidden layers to the output layer. If the value in the output layer passes a certain threshold, the ANN classifies the image as a cat. Depending on how well or badly the ANN did, the ‘weights’ of the connections are adjusted – see how in Figure 1 some arrows are thicker than others. This process, known as backpropagation, is where the ‘learning’ happens; it is repeated with many thousands of images until the algorithm can reliably determine whether an image contains a cat.

Neuroscience has also inspired other forms of machine learning. In reinforcement learning, for example, agents learn by discovering for themselves how to maximize a ‘reward’, the reward being related to the purpose of the algorithm. These types of approaches are generating programs with remarkable ‘superhuman’ capabilities (see page 30 of the Bulletin).
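To make the cat classifier described above concrete, here is a minimal Python sketch of a tiny ANN with one hidden layer, trained by backpropagation. Everything in it – the fake ‘images’, the labels, the layer sizes, the learning rate – is invented for illustration; a real classifier would be far larger and trained on real labelled photos.

```python
import numpy as np

# Toy version of the Figure 1 network: one hidden layer, one output node,
# trained by backpropagation. All numbers here are invented for illustration.
rng = np.random.default_rng(0)

n_pixels = 64                               # pretend each "image" is 64 numbers
X = rng.random((200, n_pixels))             # 200 toy images
y = (X.mean(axis=1) > 0.5).astype(float)    # toy "cat" / "not cat" labels

W1 = rng.normal(scale=0.1, size=(n_pixels, 8))  # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(8, 1))         # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(500):
    # Forward pass: values flow input -> hidden -> output.
    h = sigmoid(X @ W1)
    p = sigmoid(h @ W2).ravel()            # "probability" each image is a cat

    # Backpropagation: push the error back through the network
    # and adjust each connection weight a little.
    delta_out = (p - y)[:, None]                 # output-layer error
    grad_W2 = h.T @ delta_out / len(X)
    delta_h = (delta_out @ W2.T) * h * (1 - h)   # hidden-layer error
    grad_W1 = X.T @ delta_h / len(X)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

# Classification: threshold the output value, as described in the text.
preds = sigmoid(sigmoid(X @ W1) @ W2).ravel() > 0.5
print("training accuracy:", (preds == y).mean())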


The AI renaissance came with (computational) power

Why has there been an AI renaissance during the last decade? One reason is that there is more data for AI to learn from – think Big Data. What is less obvious is the role of computational power. Indeed, the theoretical breakthroughs that underlie current machine learning algorithms are actually quite old and at times inspired by neuroscience. Discoveries made in the 1940s about the ‘all-or-none’ character of neuronal firing and Hebbian learning, often summarized as ‘cells that fire together wire together’, set the scene for ANNs. And backpropagation was already around in the 1970s and 1980s. It’s worth highlighting that researchers disagree on whether something similar to backpropagation occurs in the brain.

Why are these algorithms only successful now? Previously, computers did not have enough computational power to support large ANNs. This has been changing. The number of transistors in a computer circuit has been doubling every two years, a phenomenon known as Moore’s law, and the resulting increase in computing power allows data to be processed at astronomical speed. To further increase energy efficiency and computational power, researchers in ‘neuromorphic’ computing are creating hardware and software that mimic the human brain. For example, the SpiNNaker project in Manchester is a computer architecture that mimics the membrane potential of neurons: instead of transmitting information between every node all the time, information is only transmitted if it reaches a specific threshold.
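As a toy illustration of that threshold principle (this is not actual SpiNNaker code, and all constants are made up), the Python sketch below simulates a single leaky integrate-and-fire-style node: it accumulates input over time, stays silent on most timesteps, and only ‘transmits’ when its potential crosses the threshold.

```python
import numpy as np

# Toy threshold neuron in the spirit of neuromorphic hardware:
# most timesteps transmit nothing; a "spike" is sent only when
# the membrane potential crosses a threshold. Constants invented.
rng = np.random.default_rng(1)

threshold = 1.0   # fire when the potential reaches this value
leak = 0.9        # potential decays toward rest each timestep
potential = 0.0

inputs = rng.random(50) * 0.3    # random incoming current per timestep
spikes = []

for t, current in enumerate(inputs):
    potential = leak * potential + current
    if potential >= threshold:
        spikes.append(t)         # only now is information transmitted
        potential = 0.0          # reset after the spike

print(f"spiked on {len(spikes)} of {len(inputs)} timesteps: {spikes}")
```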


AI for data analysis

Central to this year’s BNA theme is how AI can be used to interrogate neuroscience data. We do not have to look far for examples. Rik Henson’s group (Cambridge) is investigating whether machine learning algorithms can be used for the early detection of Alzheimer’s disease. Training an algorithm on MEG data about brain connectivity could enable an AI system to detect early synaptic dysfunction, thereby facilitating early intervention.

Besides helping to analyse neural data, AI could also support other parts of the research process, such as literature searches. Launching our year of AI with the BNA Festive Symposium, we heard from Biorelate Ltd how its software can be used to detect cause-and-effect relationships in the neuroscience literature. We are excited to hear from you about how AI can inspire neuroscience and vice versa.
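Returning to the Alzheimer’s example above: as a hedged sketch of that general approach (not the Henson group’s actual pipeline or data), the Python code below trains a standard classifier on entirely synthetic ‘connectivity’ features to separate patients from controls. The feature construction, labels, and effect sizes are all invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Illustrative pipeline only: synthetic data standing in for
# per-subject connectivity features derived from MEG recordings.
rng = np.random.default_rng(2)

n_subjects, n_regions = 80, 20
n_edges = n_regions * (n_regions - 1) // 2   # upper-triangle connectivity values

# Synthetic features and diagnosis labels (0 = control, 1 = patient).
X = rng.normal(size=(n_subjects, n_edges))
y = rng.integers(0, 2, size=n_subjects)
X[y == 1, :10] += 0.8   # pretend a few connections differ in patients

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)    # cross-validated accuracy
print("mean CV accuracy:", scores.mean().round(2))
```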


I published this article in the BNA Bulletin for their year of AI (in 2022) and thought I’d share it here too.


