Automated techniques could make it easier to develop AI


“BERT takes months of computation and is very expensive, like a million dollars to generate this model and repeat these processes,” says Bahrami. “So if everyone wants to do the same thing, it’s expensive, it’s not energy efficient, it’s not good for the world.”

Although the field is promising, researchers are still looking for ways to make autoML techniques more computationally efficient. For example, methods such as neural architecture search currently build and test many different models to find the best fit, and the energy required to complete all these iterations can be significant.
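
In code, that brute-force approach looks roughly like the sketch below: every candidate architecture gets its own training run and evaluation before the best one is kept, so the cost grows with the number of candidates. The dataset, the candidate layouts, and the use of scikit-learn’s MLPClassifier are illustrative choices, not part of any particular autoML system.

```python
# Naive architecture search as a brute-force loop: each candidate network
# gets its own full training run, which is where the energy cost comes from.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Candidate "architectures": different hidden-layer layouts (invented for illustration).
candidates = [(32,), (64,), (32, 32), (64, 32), (128, 64, 32)]

best_score, best_arch = -1.0, None
for arch in candidates:
    model = MLPClassifier(hidden_layer_sizes=arch, max_iter=300, random_state=0)
    model.fit(X_train, y_train)        # a separate training run per candidate
    score = model.score(X_val, y_val)  # validation accuracy
    if score > best_score:
        best_score, best_arch = score, arch

print(f"best architecture {best_arch}, validation accuracy {best_score:.3f}")
```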

AutoML techniques can also be applied to machine learning algorithms that do not involve neural networks, such as creating random decision forests or support vector machines to classify data. Research in these areas is more advanced, with many coding libraries already available for people who want to incorporate autoML techniques into their projects.
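
Those libraries differ in their details, but the core idea can be approximated with plain scikit-learn: search jointly over model families, here a random forest and a support vector machine, and over their hyperparameters, then let cross-validation pick the winner. The sketch below is a simplified stand-in for dedicated autoML libraries, not the API of any particular one.

```python
# Simplified model-plus-hyperparameter search over non-neural learners,
# in the spirit of autoML tools for classical machine learning.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Each entry pairs a model family with the hyperparameter grid to explore.
search_space = [
    (RandomForestClassifier(random_state=0),
     {"n_estimators": [100, 300], "max_depth": [None, 8]}),
    (make_pipeline(StandardScaler(), SVC()),
     {"svc__C": [0.1, 1.0, 10.0], "svc__kernel": ["rbf", "linear"]}),
]

best_score, best_model = -1.0, None
for estimator, grid in search_space:
    search = GridSearchCV(estimator, grid, cv=5)
    search.fit(X, y)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print(f"selected {best_model} with cross-validated accuracy {best_score:.3f}")
```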

The next step is to use autoML to quantify uncertainty and address issues of reliability and fairness in algorithms, says Hutter, a conference organizer. In this view, standards for reliability and fairness would be treated like any other machine learning constraint, such as accuracy. And autoML could automatically catch and correct biases found in these algorithms before they are released.
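
One way to picture fairness as just another constraint is to fold it into model selection: score each candidate on accuracy, measure a fairness metric with respect to a sensitive attribute, and discard candidates whose gap is too large. The demographic-parity gap, the threshold, and the helper functions below are illustrative assumptions, not something prescribed by the conference.

```python
# Sketch: treating a fairness metric as a selection constraint alongside accuracy.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def select_fair_model(candidates, X_val, y_val, group, max_gap=0.1):
    """Return the most accurate fitted candidate whose fairness gap stays under max_gap."""
    best_model, best_acc = None, -1.0
    for model in candidates:
        y_pred = np.asarray(model.predict(X_val))
        acc = (y_pred == y_val).mean()
        gap = demographic_parity_gap(y_pred, group)
        if gap <= max_gap and acc > best_acc:
            best_model, best_acc = model, acc
    return best_model
```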

The search continues

But for something like deep learning, autoML still has a long way to go. The data used to train deep learning models, such as images, documents, and recorded speech, are often dense and complicated, and handling them takes immense computational power. The cost and time to train these models can be prohibitive for anyone other than researchers at deep-pocketed private companies.

One of the conference’s competitions asked participants to develop alternative, energy-efficient algorithms for neural architecture search. It is a tall order, because the technique is infamous for its computational demands: it automatically cycles through countless deep learning models to help researchers choose the right one for their application, but the process can take months and cost more than a million dollars.

The goal of these alternative algorithms, called zero-cost neural architecture search proxies, is to make neural architecture search more accessible and environmentally friendly by significantly reducing its computational appetite. A proxy takes just a few seconds to run, instead of the months a full search can take. These techniques are still in the early stages of development and often unreliable, but machine learning researchers predict they have the potential to make the model selection process much more efficient.
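
The idea behind a zero-cost proxy is to score a freshly initialized network without training it, then use that score to rank candidates. The sketch below uses one simple proxy from the research literature, the gradient norm at initialization on a single mini-batch, applied to a few toy PyTorch models; the architectures and the random stand-in data are purely illustrative.

```python
# Zero-cost proxy sketch: rank candidate networks by the gradient norm at
# initialization on one mini-batch, with no training at all.
import torch
import torch.nn as nn

def grad_norm_score(model, x, y):
    """One forward/backward pass; the summed gradient norm acts as the proxy score."""
    model.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return sum(p.grad.norm().item() for p in model.parameters() if p.grad is not None)

# Toy candidate architectures (invented for illustration, not from the competition).
candidates = {
    "small": nn.Sequential(nn.Flatten(), nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10)),
    "wide":  nn.Sequential(nn.Flatten(), nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, 10)),
    "deep":  nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(),
                           nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 10)),
}

x = torch.randn(64, 1, 28, 28)           # a random mini-batch stands in for real data
y = torch.randint(0, 10, (64,))

scores = {name: grad_norm_score(m, x, y) for name, m in candidates.items()}
print(max(scores, key=scores.get), scores)
```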


