Simple considerations for simple people building fancy neural networks
Published February 25, 2021 · Victor Sanh

Photo by Henry & Co. on Unsplash

As machine learning continues to penetrate all aspects of industry, neural networks have never been more hyped. For instance, models like GPT-3 have been all over social media in the past few weeks and continue to make headlines outside of tech news outlets with fear-mongering titles.

(Image: an article from The Guardian)

At the same time, deep learning frameworks, tools, and specialized libraries democratize machine learning research by making state-of-the-art techniques easier to use than ever. It is quite common to see almost-magical, plug-and-play five-line snippets that promise (near) state-of-the-art results. Working at Hugging Face 🤗, I admit that I am partially guilty of that. 😅 It can give an inexperienced user the misleading impression that neural networks are now a mature technology, while in fact the field is in constant development.

In reality, building and training neural networks can often be an extremely frustrating experience:

- It is sometimes hard to tell whether your performance comes from a bug in your model/code or is simply limited by your model's expressiveness.
- You can make tons of tiny mistakes at every step of the process without realizing it at first, and your model will still train and give decent performance.

In this post, I will try to highlight a f...
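To make the "five lines of code" concrete, here is a minimal sketch of the kind of plug-and-play snippet the paragraph refers to, using the 🤗 Transformers `pipeline` API (the task and input sentence are illustrative choices, not from the original article):

```python
# Illustrative "plug-and-play" snippet: sentiment analysis in a few
# lines with 🤗 Transformers. The pipeline downloads a default
# pretrained model the first time it runs.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
result = classifier("Neural networks have never been so hyped!")
print(result)  # a list of dicts with 'label' and 'score' keys
```

Snippets like this hide an enormous amount of engineering (pretraining, tokenization, architecture choices), which is exactly why they can make the technology look more mature than it is.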