AI cuts, sinks and goes green – TechCrunch

Research in the field of machine learning and AI, now a key technology in virtually every industry and business, is far too voluminous for anyone to read it all. This column aims to collect some of the most relevant recent findings and papers – particularly in, but not limited to, artificial intelligence – and explain why they matter.

This week, AI found applications in several unexpected niches thanks to its ability to sort through large amounts of data or to make sensible predictions from limited evidence.

We’ve seen machine learning models take on large data sets in biotech and finance, but researchers at ETH Zurich and LMU Munich are applying similar techniques to the data generated by international development aid projects such as disaster relief and housing. The team trained its model on millions of projects (amounting to $2.8 trillion in funding) from the past 20 years, an enormous dataset that is too complex to analyze manually in detail.

“You can think of the process as an attempt to read an entire library and sort similar books into thematic shelves. Our algorithm considers 200 different dimensions to determine how similar these 3.2 million projects are to each other – an impossible workload for a human being,” said Malte Toetzke, author of the study.
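
To make the “thematic shelves” idea concrete, here is a minimal, hypothetical sketch of what such a pipeline could look like: project descriptions are embedded into a modest number of dimensions and then clustered so similar projects land on the same shelf. The embedding method, component count and cluster count are assumptions for illustration, not details from the ETH Zurich/LMU Munich study.

```python
# Hypothetical sketch: embed aid-project descriptions, then group similar
# projects into "thematic shelves" via clustering. Illustrative only; the
# actual ETH Zurich / LMU Munich pipeline is not described in this article
# beyond its use of 200 dimensions and 3.2 million projects.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

projects = [
    "Emergency shelter construction after flooding",
    "Solar microgrid for rural health clinics",
    "School meal program expansion",
    # ...millions more in the real dataset
]

# Turn free-text descriptions into sparse term vectors.
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(projects)

# Reduce to a dense low-dimensional representation (the real study used ~200
# dimensions; 2 is used here only because the toy corpus is tiny).
svd = TruncatedSVD(n_components=2, random_state=0)
X_dense = svd.fit_transform(X)

# Cluster similar projects; each cluster is one "shelf".
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_dense)
print(kmeans.labels_)
```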

The very high-level trends suggest that spending on inclusion and diversity has increased, while climate spending has surprisingly fallen in recent years. You can review the dataset and the trends they analyzed here.

Another area few people think about is the sheer volume of machine parts and components being produced by various industries at an enormous rate. Some can be reused, some recycled, and others must be disposed of responsibly – but there are too many for human specialists to go through. German R&D organization Fraunhofer has developed a machine learning model for identifying parts so they can be put to use instead of heading for the scrap heap.

A machine part sits on a table as part of a demonstration of an identification AI.

Image credits: Fraunhofer

The system relies on more than ordinary camera views, since parts may look similar but be very different, or be mechanically identical yet differ visually because of rust or wear. So each part is also weighed and scanned by 3D cameras, and metadata such as origin is included as well. The model then suggests what it thinks the part is, so the person inspecting it doesn’t have to start from scratch. The hope is that tens of thousands of parts will soon be saved, and the processing of millions accelerated, using this AI-assisted identification method.
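
As a rough illustration of how such multimodal identification might work – the actual Fraunhofer system is not detailed here – the sketch below fuses image-derived features, weight and 3D-scan features into one feature vector and has a classifier rank candidate part classes for a human to confirm. All shapes, class counts and data are hypothetical.

```python
# Hypothetical sketch of multimodal part identification: fuse image features,
# weight, and 3D-scan features, then rank likely part classes for a human to
# confirm. Not the actual Fraunhofer system; shapes and classes are made up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def fuse_features(image_feat, weight_grams, scan_feat):
    """Concatenate all modalities into a single feature vector."""
    return np.concatenate([image_feat, [weight_grams], scan_feat])

# Toy training data: 200 known parts, 64-d image features + weight + 16-d 3D features.
X = np.stack([
    fuse_features(rng.normal(size=64), rng.uniform(10, 5000), rng.normal(size=16))
    for _ in range(200)
])
y = rng.integers(0, 5, size=200)  # 5 hypothetical part classes

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A new, unknown part: suggest the top candidate classes rather than a single
# hard answer, so the inspector doesn't start from scratch.
query = fuse_features(rng.normal(size=64), 742.0, rng.normal(size=16)).reshape(1, -1)
probs = clf.predict_proba(query)[0]
top3 = np.argsort(probs)[::-1][:3]
print("Candidate part classes:", top3, "with probabilities", probs[top3])
```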

Physicists have found an interesting way to apply the qualities of ML to a centuries-old problem. Essentially, researchers are still looking for ways to show that the equations that govern fluid dynamics (some of which, like Euler’s, date from the 18th century) are incomplete – that they break down at certain extreme values. Using traditional computational techniques this is difficult, though not impossible, to do. But researchers from CIT and Hong Kong’s Hang Seng University are proposing a new deep learning method for isolating likely instances of fluid dynamics singularities, while others are applying the technique to the field in other ways. This Quanta article explains the development quite well.
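
For a sense of how a neural network can be pointed at a fluid equation at all, here is a minimal, hypothetical physics-informed sketch on a toy 1D equation (inviscid Burgers), where the network is trained to satisfy the PDE and an initial condition, and unusually steep learned gradients serve as a crude flag for incipient singular behavior. This illustrates the general approach, not the method from the paper.

```python
# Hypothetical physics-informed sketch on the 1D inviscid Burgers equation
# u_t + u * u_x = 0, whose solutions are known to steepen into shocks.
# A network u(x, t) is trained to satisfy the PDE plus an initial condition;
# very large learned gradients flag candidate singular behavior.
import math
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    # Random collocation points in space-time: columns are x in [0,1], t in [0,1].
    xt = torch.rand(256, 2, requires_grad=True)
    u = net(xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    pde_residual = ((u_t + u * u_x) ** 2).mean()

    # Initial condition u(x, 0) = sin(2*pi*x), which steepens over time.
    x0 = torch.rand(256, 1)
    ic_residual = ((net(torch.cat([x0, torch.zeros_like(x0)], dim=1))
                    - torch.sin(2 * math.pi * x0)) ** 2).mean()

    loss = pde_residual + ic_residual
    opt.zero_grad()
    loss.backward()
    opt.step()

# Inspect max |u_x| near the final time as a crude singularity indicator.
x = torch.linspace(0, 1, 200).unsqueeze(1).requires_grad_(True)
u_final = net(torch.cat([x, torch.full_like(x, 0.9)], dim=1))
u_x_final = torch.autograd.grad(u_final.sum(), x)[0]
print("max |u_x| at t=0.9:", u_x_final.abs().max().item())
```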

Another age-old concept getting an ML layer is kirigami, the art of paper cutting that many are familiar with in the context of creating paper snowflakes. The technique dates back centuries in Japan and China in particular, and can produce remarkably complex and flexible structures. Researchers at Argonne National Laboratory took inspiration from the concept to theorize a 2D material that can hold microscale electronics yet still bend easily.

The team had manually performed tens of thousands of experiments with 1 to 6 cuts and used that data to train the model. They then used a Department of Energy supercomputer to perform simulations down to the molecular level. Within seconds, it produced a 10-cut variation with 40% stretch, far beyond what the team had anticipated or even attempted on their own.
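
A rough, hypothetical sketch of the general workflow this suggests: train a surrogate model on known cut patterns and their measured stretchability, then search candidate patterns with more cuts for high predicted stretch. The features, data and model below are invented for illustration; Argonne’s actual pipeline paired ML with molecular-level simulations on a Department of Energy supercomputer.

```python
# Hypothetical surrogate-model sketch: learn stretchability from known cut
# patterns, then search candidate patterns with more cuts for high predicted
# stretch. Features and data are invented for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
MAX_CUTS = 10

def encode(pattern):
    """Encode a cut pattern as fixed-length features: (position, length) per cut, zero-padded."""
    feats = np.zeros(2 * MAX_CUTS)
    for i, (pos, length) in enumerate(pattern):
        feats[2 * i], feats[2 * i + 1] = pos, length
    return feats

def random_pattern(n_cuts):
    return [(rng.uniform(0, 1), rng.uniform(0.05, 0.3)) for _ in range(n_cuts)]

# Toy "experimental" data: patterns with 1-6 cuts and a fake stretchability score.
train_patterns = [random_pattern(rng.integers(1, 7)) for _ in range(5000)]
X = np.stack([encode(p) for p in train_patterns])
y = np.array([sum(l for _, l in p) + rng.normal(0, 0.02) for p in train_patterns])

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0).fit(X, y)

# Search 10-cut candidates for the highest predicted stretchability.
candidates = [random_pattern(10) for _ in range(2000)]
preds = model.predict(np.stack([encode(p) for p in candidates]))
best = candidates[int(np.argmax(preds))]
print("Best predicted 10-cut pattern:", best, "score:", preds.max())
```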

Simulation of molecules forming a 2D stretchable material.

Image credits: Argonne National Laboratory

“It understood things we never told it to understand. It learned something the way a human learns and used its knowledge to do something different,” said project lead Pankaj Rajak. The success has prompted the team to increase the complexity and scope of the simulation.

Another interesting extrapolation comes from a specially trained computer vision model that reconstructs color data from infrared inputs. Normally, a camera capturing infrared would know nothing about the color of an object in the visible spectrum. But this experiment found correlations between certain IR bands and visible ones, and produced a model for converting images of human faces captured in IR into ones that approximate the visible spectrum.
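
A minimal, hypothetical sketch of the underlying idea: an image-to-image network maps a stack of IR bands to three visible (RGB) channels and is trained against paired visible-light photos. The band count, architecture and training data below are assumptions, not details from the experiment.

```python
# Hypothetical sketch: map several infrared bands to approximate RGB with a
# small image-to-image convolutional network. Band count, architecture and
# data are placeholders, not details from the actual experiment.
import torch
from torch import nn

N_IR_BANDS = 4  # assumed number of captured IR bands

ir_to_rgb = nn.Sequential(
    nn.Conv2d(N_IR_BANDS, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=1), nn.Sigmoid(),  # RGB in [0, 1]
)
opt = torch.optim.Adam(ir_to_rgb.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

# Stand-in for paired training data: IR stacks and their visible-light photos.
ir_images = torch.rand(8, N_IR_BANDS, 64, 64)
rgb_targets = torch.rand(8, 3, 64, 64)

for epoch in range(10):
    pred = ir_to_rgb(ir_images)
    loss = loss_fn(pred, rgb_targets)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Inference: colorize a new IR capture.
with torch.no_grad():
    approx_rgb = ir_to_rgb(torch.rand(1, N_IR_BANDS, 64, 64))
print(approx_rgb.shape)  # torch.Size([1, 3, 64, 64])
```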

It’s still just a proof of concept, but such spectrum flexibility could be a useful tool in science and photography.

Meanwhile, new research co-authored by Google AI head Jeff Dean pushes back against the idea that AI is necessarily an environmentally costly endeavor because of its high computational demands. While some research has shown that training a large model like OpenAI’s GPT-3 can generate carbon dioxide emissions equivalent to those of a small neighborhood, the Google-affiliated study argues that “following best practices” can reduce machine learning’s carbon emissions by up to 1,000 times.

The practices in question concern the types of models used, the machines used to train them, “mechanization” (e.g. cloud computing versus local computers) and “map” (choosing data center locations with the cleanest energy). According to the co-authors, selecting “efficient” models alone can reduce computation by factors of 5 to 10, while using processors optimized for machine learning training, such as GPUs, can improve performance per watt by factors of 2 to 5.
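
To see how such savings could compound toward the headline figure, here is a small back-of-the-envelope calculation. Only the model (5-10x) and machine (2-5x) factors are stated in this article; the “mechanization” and “map” factors below are placeholders, included just to show how multiplicative reductions stack up.

```python
# Back-of-the-envelope: how multiplicative "4M" savings could stack up.
# Only the model and machine factors are stated in the article; the
# mechanization and map factors are placeholders for illustration.
model_factor = (5, 10)          # stated: choosing efficient models
machine_factor = (2, 5)         # stated: ML-optimized processors
mechanization_factor = (1, 2)   # placeholder: cloud vs. local compute
map_factor = (1, 10)            # placeholder: cleanest-energy data centers

low = model_factor[0] * machine_factor[0] * mechanization_factor[0] * map_factor[0]
high = model_factor[1] * machine_factor[1] * mechanization_factor[1] * map_factor[1]
print(f"Combined emissions reduction: {low}x to {high}x")  # 10x to 1000x
```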

Any line of research suggesting that the environmental impact of AI can be mitigated is cause for celebration. But it should be noted that Google is not a neutral party. Many of the company’s products, from Google Maps to Google Search, rely on models that require large amounts of energy to develop and run.

Mike Cook, a member of the Knives and Paintbrushes open research group, points out that – even if the study’s estimates are accurate – there simply isn’t a good reason for a company not to scale up in an energy-inefficient way if it benefits it. While academic groups might pay attention to metrics like carbon impact, companies aren’t incentivized in the same way – at least not currently.

“The reason we’re having this conversation to begin with is that companies like Google and OpenAI effectively had infinite funding and chose to leverage it to build models like GPT-3 and BERT at all costs, because they knew it gave them an edge,” Cook told TechCrunch via email. “Overall, I think the paper says some nice things, and that’s good if we’re thinking about efficiency, but the issue isn’t technical in my opinion – we know full well that these companies will go big when they need to, they won’t hold back, so saying this is now solved forever sounds like an empty line.”

The final topic for this week isn’t exactly about machine learning, but rather about what might be a more direct path toward simulating the brain. EPFL bioinformatics researchers have developed a mathematical model for generating huge numbers of unique yet accurate simulated neurons that could eventually be used to build digital twins of neuroanatomy.

“The findings are already enabling Blue Brain to build biologically detailed reconstructions and simulations of the mouse brain, by computationally reconstructing brain regions for simulations that replicate the anatomical properties of neuronal morphologies and include region-specific anatomy,” said researcher Lida Kanari.
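
As a loose illustration of the “many unique but statistically faithful neurons” idea – and not the EPFL/Blue Brain method itself – the sketch below grows random dendritic trees whose branch lengths and branching decisions are drawn from shared distributions, so every generated neuron is different while following the same overall statistics.

```python
# Loose illustration only: synthesize many distinct dendritic trees whose
# branch statistics come from shared distributions, so each neuron is unique
# but statistically similar. This is not the EPFL / Blue Brain algorithm.
import random

def grow_branch(depth, rng, max_depth=5, branch_prob=0.7):
    """Recursively grow a branch; returns a nested dict describing the subtree."""
    length = rng.gauss(mu=30.0, sigma=8.0)  # branch length in microns (assumed)
    children = []
    if depth < max_depth and rng.random() < branch_prob:
        children = [grow_branch(depth + 1, rng) for _ in range(2)]  # bifurcation
    return {"length": max(length, 1.0), "children": children}

def synthesize_neuron(seed):
    rng = random.Random(seed)
    n_dendrites = rng.randint(3, 6)  # a soma with a few primary dendrites
    return [grow_branch(0, rng) for _ in range(n_dendrites)]

def total_length(tree):
    return tree["length"] + sum(total_length(c) for c in tree["children"])

# Generate a population of unique neurons and summarize their dendritic length.
neurons = [synthesize_neuron(seed) for seed in range(1000)]
lengths = [sum(total_length(d) for d in n) for n in neurons]
print("mean total dendritic length:", sum(lengths) / len(lengths))
```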

Don’t expect simulated brains to make for better AIs – this work is really in pursuit of advances in neuroscience – but perhaps insights from simulated neuronal networks will lead to a fundamentally better understanding of the processes that AI seeks to imitate digitally.
