Flapping Airplanes on the future of AI: 'We want to try really radically different things' | TechCrunch
Summary
Flapping Airplanes, a new AI lab, aims to revolutionize AI training by focusing on data efficiency, backed by $180 million in funding. Founders discuss their unique approach and vision for the future of AI.
Why It Matters
This article highlights a fresh perspective in AI research, emphasizing the need for data-efficient models that could transform various sectors, including robotics and scientific discovery. As traditional AI models become increasingly data-hungry, exploring alternative training methods could lead to significant advancements in AI capabilities and applications.
Key Takeaways
- Flapping Airplanes is focused on creating more data-efficient AI models.
- The founders believe current AI models are limited by their data requirements.
- Their approach draws inspiration from the human brain's learning processes.
- The lab aims to address challenges in sectors constrained by data availability.
- With substantial funding, they have the resources to explore innovative AI solutions.
There’s been a bunch of exciting research-focused AI labs popping up in recent months, and Flapping Airplanes is one of the most interesting. Propelled by its young and curious founders, Flapping Airplanes is focused on finding less data-hungry ways to train AI. It’s a potential game-changer for the economics and capabilities of AI models — and with $180 million in seed funding, they’ll have plenty of runway to figure it out.

Last week, I spoke with the lab’s three co-founders — brothers Ben and Asher Spector, and Aidan Smith — about why this is an exciting moment to start a new AI lab and why they keep coming back to ideas about the human brain.

I want to start by asking, why now? Labs like OpenAI and DeepMind have spent so much on scaling their models. I’m sure the competition seems daunting. Why did this feel like a good moment to launch a foundation model company?

Ben: There’s just so much to do. The advances that we’ve gotten over the last five to ten years have been spectacular. We love the tools. We use them every day. But the question is, is this the whole universe of things that needs to happen? And we thought about it very carefully, and our answer was no, there’s a lot more to do. In our case, we thought that the data efficiency problem was really the key thing to go look at. The current frontier models are trained on the sum totality of human knowledge, and humans can obviously make do with an awful lot less. So there’s a big gap there, and it’s wort...