Google open-sources GPipe, a library for efficiently training large deep neural networks

Google’s AI research division today open-sourced GPipe, a library for “efficiently” training deep neural networks (layered functions modeled after neurons) under Lingvo, a TensorFlow framework for sequence modeling.

Without GPipe, Google AI software engineer Yanping Huang says, a single accelerator core can only train up to 82 million model parameters.

“[In] GPipe … we demonstrate the use of pipeline parallelism to scale up DNN training to overcome this limitation,” Huang writes. As he and his colleagues explain in an accompanying paper (“GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism”), GPipe implements two nifty AI training techniques.
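
The pipeline-parallelism idea is easier to see in code. The sketch below is a bare-bones, framework-free Python illustration with made-up names, not GPipe's actual API: a mini-batch is split into micro-batches that are streamed through layer partitions assigned to successive "devices," so several partitions stay busy on different micro-batches at once.

```python
# Minimal sketch of GPipe-style pipeline parallelism (illustrative only, not
# GPipe's real API). "Devices" here are plain Python functions; in GPipe each
# would be an accelerator core holding one contiguous slice of the network.

def make_partition(scale):
    """Stand-in for a slice of network layers placed on one device."""
    return lambda x: [scale * v for v in x]

partitions = [make_partition(s) for s in (2, 3, 5)]   # three pipeline stages
micro_batches = [[1.0], [2.0], [3.0], [4.0]]          # one mini-batch, split in four

in_flight = [None] * len(partitions)  # micro-batch currently held by each stage
outputs = []

# One forward "clock tick" per iteration: every stage that holds data processes
# its micro-batch and hands the result to the next stage, so different stages
# work on different micro-batches at the same time.
for tick in range(len(micro_batches) + len(partitions) - 1):
    if tick < len(micro_batches):
        in_flight[0] = micro_batches[tick]      # feed the next micro-batch
    for k in reversed(range(len(partitions))):  # move data one stage per tick
        if in_flight[k] is None:
            continue
        result = partitions[k](in_flight[k])
        in_flight[k] = None
        if k + 1 < len(partitions):
            in_flight[k + 1] = result
        else:
            outputs.append(result)

print(outputs)  # each micro-batch scaled by 2 * 3 * 5: [[30.0], [60.0], [90.0], [120.0]]
```

In GPipe itself the partitions live on separate accelerator cores, and the gradients computed for each micro-batch are accumulated into a single synchronous update.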

“Deep neural networks (DNNs) have advanced many machine learning tasks, including speech recognition, visual recognition, and language processing,” Huang notes.

Most of GPipe’s performance gains come from better memory allocation for AI models.
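
In the GPipe paper, much of that memory saving comes from re-materialization: intermediate activations are discarded during the forward pass and recomputed only when the backward pass needs them. The sketch below illustrates the general technique with TensorFlow's stock tf.recompute_grad decorator (assuming TensorFlow 2.x); it is a simplified stand-in, not GPipe's own code.

```python
import tensorflow as tf

# Sketch of re-materialization (activation recomputation) using TensorFlow's
# generic tf.recompute_grad decorator; an illustration of the technique, not
# GPipe's internal implementation.

dense_block = tf.keras.Sequential([
    tf.keras.layers.Dense(1024, activation="relu"),
    tf.keras.layers.Dense(1024, activation="relu"),
])
dense_block.build((None, 1024))  # create the layer variables up front

@tf.recompute_grad
def checkpointed_block(x):
    # Activations produced inside this block are not stored for the backward
    # pass; they are recomputed from `x` when gradients are taken, trading a
    # little extra compute for a smaller peak memory footprint.
    return dense_block(x)

x = tf.random.normal([32, 1024])
with tf.GradientTape() as tape:
    tape.watch(x)
    loss = tf.reduce_sum(checkpointed_block(x) ** 2)

# The backward pass re-runs the block's forward computation on demand.
# (In real training you would also take gradients w.r.t. the layer variables.)
dx = tape.gradient(loss, x)
print(dx.shape)  # (32, 1024)
```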

If you’re in the business of training large-scale AI systems, the numbers are good news: in one experiment, Google trained a deep learning algorithm, AmoebaNet-B, with 557 million model parameters and sample images on TPUs, incorporating 1.8 billion parameters on each TPU (25 times more than is possible without GPipe).
