We interview a lot of candidates who are making the move from academia into industry. It's a big transition and it makes sense that people carry assumptions from one world into the other. There's genuine overlap between the two, but the overlap is incomplete. The part that tends to surprise people most is how much engineering is involved.
This post is specifically about what it means to be an AI researcher on the Matilda team, where engineering isn't a side activity: it's the bridge between having an innovation and getting it into the hands of humans, and without it nothing ships or scales. Our Residency program is a different thing entirely; it's closer to what you'd expect from academia, structured around exploration with room to publish, and the output can be a paper.
I'll start with the part that will feel most familiar: the workflow of reading papers, tracking the state of the art from major labs, designing experiments, and running ablations while exploring a possibility space where, honestly, most of the paths turn out to be dead ends. If you've spent a few years in a research lab, you know the rhythm, and it's not that different here.
Where it diverges is the finish line. In academia the end state is a contribution to knowledge, whether that's a paper, a proof, or a benchmark result, and once you've packaged the insight and defended it you move on. On Matilda the end state is software running in production. The insight matters, but it's the starting point, not the destination; the destination is turning that insight into something that works reliably at scale for people who depend on it.
That shift has a lot of downstream effects worth understanding before you make the jump.
Scale is a big one. A prototype that runs on 10K examples is perfectly valid for a paper, but production needs it on billions, and that changes the math entirely, including which algorithms are even viable. Approaches that are elegant at small scale can become completely impractical, and part of the job is developing an intuition for that early in the process.
Customer impact is another. In academia, a regression in your model is a line in your limitations section; in production, a regression means someone's workflow is broken. That's a different kind of accountability, and it shapes how you think about risk and validation.
There's also a tradeoff around time and completeness that's genuinely different from academic work. The elegant approach that takes six months to build will sometimes lose to the approach that gets you 80% of the way there in three weeks. That isn't cutting corners; weighing completeness against velocity is a skill, and it comes from shipping things and seeing what happens in the real world.
The engineering itself is also different from what most research environments prepare you for. The code you write has to live inside a production system where other engineers will maintain it, extend it, and debug it, so it has to be readable, tested, and fit into something larger than your project. If you're coming from a research environment where code is mostly a means to an answer, this is a real shift, not a negative one, but one worth knowing about upfront.
And then there are production realities that just don't come up in academic settings, things like latency budgets and memory constraints and failure modes that emerge when data pipelines behave in ways nobody anticipated. The gap between "it works" and "it works reliably" is where a lot of the genuinely hard engineering problems live, and in my experience it's where you end up spending more time than you'd expect and also where a lot of the most interesting learning happens.
People on Matilda do still publish sometimes, and work from the team has shown up at conferences, but I want to be transparent about where publishing sits in the priority stack because I think a lot of companies aren't. The primary loop on this team is building and shipping and measuring and iterating, and sometimes that loop produces something worth writing up, which is great when it happens. But if publishing is the main thing you want to optimize for, the Residency program is a better fit and it exists for exactly that reason.
I think being upfront about this matters. This is not new thinking; there's a collection of excellent posts I've read from researchers who've been on both sides that helped shape my view, and you can find them here:
- Rowan Zellers — Why I chose OpenAI over academia
- Nicholas Carlini — Career Update: Google DeepMind → Anthropic
- David Stutz — Thoughts on Academia and Industry in ML Research
- Edouard Fouché — Observations about my transition from Academia to Industry
- Yash Bhalgat — Should You Apply for a PhD in AI?
The goal here isn't to convince anyone that one path is better but to be clear about what this particular path actually looks like so you can make a good decision.
Plainly, it's a team that builds working software at scale, where the problems are genuinely hard and the research component is real, but the artifact at the end is a system, not a PDF. The feedback loop is faster than in academia: you ship something and find out relatively quickly whether it worked. There's something genuinely satisfying about that if you're the kind of person who wants to see their work out in the world doing something useful.
None of this is meant to suggest that industry is better than academia or that engineering is more valuable than pure research; they're different paths with different tradeoffs and rhythms. The right choice depends entirely on what you want your day to day to look like and what kind of impact feels meaningful to you. If what you want is to build systems that solve real problems for real people, and you're interested in the engineering that makes that possible, Matilda is a good place to do it.