
Researchers Made Computers See in Higher Dimensions

Computers can now drive cars, beat world champions at board games such as chess, and even write prose. These advances in artificial intelligence stem largely from the power of one particular type of artificial neural network, whose design is inspired by the connected layers of neurons in the mammalian visual cortex.

These “convolutional neural networks” (CNNs) have proved surprisingly adept at learning patterns in two-dimensional data, especially in computer vision tasks such as recognizing handwritten words and objects in digital images.
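At the heart of a CNN is the convolution operation: a small filter slides across the image, and the same filter responds wherever its pattern appears, so a feature detected in one corner is detected anywhere. The minimal NumPy sketch below is an illustration, not the networks described here; `conv2d` and the edge-detecting kernel are hypothetical names chosen for this example.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation (the 'convolution' used in CNN layers)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny image with a vertical edge, and a filter that responds to it.
image = np.zeros((6, 6))
image[:, 3:] = 1.0                     # right half bright, left half dark
edge_kernel = np.array([[-1.0, 1.0]])  # fires on a dark-to-bright step

response = conv2d(image, edge_kernel)
print(response.max())                  # peak response sits at the edge
```

Because the same filter is applied at every position, shifting the edge in the input simply shifts the peak in the output; this built-in translation symmetry is what makes CNNs so effective on flat, grid-like data.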

But when applied to data sets with no built-in planar geometry, this powerful machine learning architecture does not work well. Around 2016, a new discipline called geometric deep learning emerged with the goal of lifting CNNs out of flatland.

Now, researchers have unveiled a new theoretical framework for building neural networks that can learn patterns on any kind of geometric surface. These ‘gauge-equivariant convolutional neural networks,’ or gauge CNNs, developed at the University of Amsterdam and Qualcomm AI Research by Taco Cohen, Maurice Weiler, Berkay Kicanaoglu, and Max Welling, can detect patterns not only on spheres but also on asymmetrically curved objects.

“This framework is a fairly definitive answer to this problem of deep learning on curved surfaces,” Welling said.

Escaping Flatland

Gauge CNNs have greatly outperformed their predecessors at learning patterns in simulated global climate data, which is naturally mapped onto a sphere. The algorithms may also prove useful for improving the vision of drones and autonomous vehicles that see objects in 3D, and for detecting patterns in data gathered from the irregularly curved surfaces of hearts, brains, and other organs.

The researchers’ solution to making deep learning work beyond flatland also has deep connections to physics. Physical theories that describe the world, such as Albert Einstein’s general theory of relativity and the Standard Model of particle physics, exhibit a property called ‘gauge equivariance.’

This means that quantities in the world and the relationships among them do not depend on arbitrary frames of reference, or ‘gauges’; they remain consistent whether an observer is moving or standing still, and no matter how far apart the numbers on a ruler are spaced. Calculations made in those different gauges must be convertible into one another in a way that preserves the underlying relationships between things.
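Gauge CNNs on curved surfaces are considerably more involved, but the equivariance idea can be checked in a flat-space analogue: convolution commutes with a 90-degree rotation, provided the filter is rotated along with the image. The sketch below assumes a simple valid-mode cross-correlation (the `conv2d` helper is defined here for illustration).

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, as used in CNN layers."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    return np.array([[np.sum(image[i:i + kh, j:j + kw] * kernel)
                      for j in range(ow)] for i in range(oh)])

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))
kernel = rng.standard_normal((3, 3))

# Equivariance check: rotating the input AND the filter by 90 degrees
# yields exactly the rotated version of the original output.
lhs = conv2d(np.rot90(image), np.rot90(kernel))
rhs = np.rot90(conv2d(image, kernel))
print(np.allclose(lhs, rhs))  # True
```

In other words, changing the arbitrary orientation of the grid does not change what the filter computes, only how the answer is laid out; gauge CNNs extend this kind of consistency to local frames on curved surfaces, where no single global orientation exists.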

“The same idea [from physics] that there’s no special orientation—they wanted to get that into neural networks,” said Kyle Cranmer, a physicist at New York University who applies machine learning to particle physics data. “And they figured out how to do it.”

