As we begin a new year and decade, VentureBeat turned to some of the keenest minds in AI to revisit progress made in 2019 and look ahead to how machine learning will mature in 2020.
While some predict advances in subfields like semi-supervised learning and the neural symbolic approach, virtually all the ML luminaries VentureBeat spoke with agree that great strides were made in Transformer-based natural language models in 2019 and expect continued controversy over tech like facial recognition.
Like most of the other industry leaders VentureBeat spoke with for this article, PyTorch creator Soumith Chintala predicts that in 2020 the AI community will value model performance beyond accuracy alone, turning its attention to other important factors: the amount of power it takes to train a model, how a model's output can be explained to humans, and how AI can better reflect the kind of society people want to build.
Unequivocally, one of the biggest machine learning trends of 2019 was the continued growth and proliferation of natural language models based on the Transformer, the architecture Chintala previously referred to as one of the biggest breakthroughs in AI in recent years.
Google AI chief Jeff Dean pointed to the progress that has been made: "That whole research thread, I think, has been quite fruitful in terms of actually yielding machine learning models that [let us now] do more sophisticated NLP tasks than we used to be able to do."
The development of more efficient AI models was an emphasis at NeurIPS, where IBM Research introduced techniques for deep learning with an 8-bit precision model.
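To give a sense of what reduced-precision techniques like this involve, below is a minimal, generic sketch of 8-bit linear quantization in NumPy. It illustrates the general idea of mapping 32-bit float weights to int8 with a per-tensor scale; it is not IBM's specific training method, and the function names are illustrative only.

```python
import numpy as np

def quantize_int8(x):
    """Map float values to int8 in [-127, 127] with a per-tensor scale.

    A generic illustration of 8-bit quantization, not IBM Research's
    published technique.
    """
    max_abs = np.max(np.abs(x))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float values from the int8 representation.
    return q.astype(np.float32) * scale

weights = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
# The round trip preserves each value to within one quantization step.
assert np.max(np.abs(recovered - weights)) <= scale
```

The appeal of 8-bit representations is that they cut memory and bandwidth by roughly 4x versus 32-bit floats; the research challenge IBM's work addresses is keeping training stable at that precision.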
In the year ahead, IBM Research director Dario Gil is particularly interested in neural symbolic AI. IBM will look to neural symbolic approaches to power things like probabilistic programming, where AI learns how to operate a program, and models that can share the reasoning behind their decisions.