Learning to be Maincode
Why operationalizing discovery is the hardest and most important challenge ahead

Maincode was founded with a deep respect for the pioneers of artificial intelligence. Visionaries like Demis Hassabis, Yoshua Bengio, and Rich Sutton reshaped how we think about intelligence, learning, and the promise of synthetic cognition. Their contributions, rooted in neuroscience, cognitive science, and reinforcement learning, provided a powerful foundation for the field and continue to guide how many of us think about where the discipline might go.
But as the field matures, it’s become clear that this foundation, while profound, leaves critical gaps, especially when it comes to translation. Neuroscience-inspired models, for example, face major barriers on the way from theory into practice: local learning rules, biologically plausible energy dynamics, and brain-like representations tend not to scale well on modern compute (Deng et al., 2024). These models can be conceptually elegant and biologically insightful, yet they frequently lack the software tooling, theoretical guarantees, or infrastructure compatibility to operate in real-world, high-stakes environments.
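To make “local” concrete, here is a minimal NumPy sketch of the contrast: a Hebbian-style rule updates a weight matrix using only the activity immediately on either side of it, while a gradient-based update to the same matrix needs an error signal propagated back from the output. This is a toy illustration, not the LLS rule from Deng et al. (2024); the layer sizes, learning rate, and synthetic data are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 16))          # batch of inputs
W1 = rng.standard_normal((16, 8)) * 0.1    # first-layer weights
W2 = rng.standard_normal((8, 4)) * 0.1     # second-layer weights
y = rng.standard_normal((32, 4))           # toy regression targets
lr = 0.01

# Local (Hebbian-style) update: W1 changes using only its own
# inputs and outputs, with no error signal from the layer above.
h = np.tanh(x @ W1)
W1_local = W1 + lr * (x.T @ h) / len(x)

# Global (backprop-style) update: the same matrix changes based on
# an error computed at the output and propagated back through W2.
h = np.tanh(x @ W1)
out = h @ W2
err = out - y                              # dLoss/dout for MSE, up to a constant
grad_h = err @ W2.T * (1 - h**2)           # chain rule through tanh
W1_global = W1 - lr * (x.T @ grad_h) / len(x)
```

The difference in dependency structure is the crux: the local rule never sees the task error, while the global rule needs a full forward and backward pass, and bridging that gap efficiently is part of why translating such rules into production-grade training stacks remains difficult.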
Deep learning, too, is approaching visible limits. While scaling up models has yielded undeniable breakthroughs, it’s becoming harder to ignore the diminishing returns (Bommasani et al., 2021). As costs grow and generalization narrows, the frontier is shifting. More and more, researchers are pointing to the need to “scale out”: to build modular, adaptive, and interactive AI systems that can cope with complex environments, evolving tasks, and diverse user needs.
These limitations point to a deeper problem: a persistent translation gap between upstream scientific insight and downstream usable systems. This gap is rarely just technical. It’s about validation, context, reliability, and adaptability: the issues that emerge when a theoretical capability is dropped into a dynamic, imperfect, human world. The challenge isn’t inventing clever models (Sculley et al., 2015); it’s turning those models into real systems that people can understand, trust, and use.
At Maincode, we’re approaching this space with the assumption that the next leap in AI will come not from a singular algorithmic insight, but from learning how to build the systems and organizations that can translate foundational ideas into scalable, adaptive, real-world capabilities. We don’t believe there’s a single answer to this problem. What we do believe is that it’s possible to learn the model by practicing it: to treat the way we work as an experiment, and to use each iteration to uncover more about what that operational blueprint could be.
That’s why we’re structured around a four-phase process: foundational science, mathematical abstraction, systems engineering, and product integration. But we don’t treat this as a linear pipeline; we treat it as a working theory. Foundational science offers us signals: patterns in physics, neuroscience, or thermodynamics that suggest principles of cognition or adaptation. Math gives us the tools to formalize those ideas into abstract representations. Systems engineering lets us prototype them, break them, and observe their behavior under stress. And only then do we explore how those systems might support actual decision-making in a user’s real context.
This process is not about polishing a fixed idea; it’s about surfacing constraints, discovering capabilities, and updating direction. And because so many of the most important lessons happen after something is built, we’ve adopted a recursive mindset. Like backpropagation, every forward motion, from insight to impact, is followed by a return loop in which we evaluate what held, what broke, and which assumptions we need to revise.
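For readers who want the backpropagation analogy spelled out, the sketch below is the canonical forward/backward loop for a tiny linear model in NumPy: act, measure what broke, attribute the error, revise, and go again. It is a generic textbook loop, not Maincode code; the data, learning rate, and step count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((64, 3))
true_W = np.array([[2.0], [-1.0], [0.5]])
y = X @ true_W + 0.1 * rng.standard_normal((64, 1))

W = np.zeros((3, 1))
lr = 0.1
for step in range(100):
    pred = X @ W                      # forward pass: act
    residual = pred - y               # measure what broke
    grad = X.T @ residual / len(X)    # backward pass: assign credit and blame
    W -= lr * grad                    # revise, then repeat
```

The organizational claim has the same shape: most of the value of each cycle comes from the return path, not the forward motion alone.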
We're not claiming to have found the model of operations. What we're doing is trying to discover it through disciplined execution, by treating our work not only as a means to an end, but as a learning surface for something deeper. The experiments we run aren’t just technical. They’re organizational. How do we structure feedback? What kind of team composition accelerates learning? When do abstractions become bottlenecks? These are live questions, and we treat each two-week evolution, each prototype, each mistake as a partial answer.
Our goal isn’t just to build useful systems; it’s to learn how to build useful systems in a way that can scale with us. In that sense, the company itself is a recursive structure: a process for refining not just outputs, but the method by which those outputs are created.
This is a long-horizon bet. But it’s based on a clear observation: that the next wave of AI progress will be defined not just by what is possible in theory, but by what can be operationalized under constraint, embedded with care, and trusted in the wild. That translation requires more than research. It requires a new kind of practice.
The blueprints from early AI visionaries remain foundational. But realizing their full potential will depend on something more: a bridge between insight and impact, one built not by assumption, but by repeated, reflective effort. That’s the kind of bridge we’re trying to learn how to build.
Deng, Y., Wang, Y., & Zhang, Y. (2024). LLS: Local learning rule for deep neural networks inspired by neural activity synchronization. arXiv preprint arXiv:2405.15868. Retrieved May 21, 2025, from https://arxiv.org/html/2405.15868v1
Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., ... & Liang, P. (2021). On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258. Retrieved May 21, 2025, from https://arxiv.org/abs/2108.07258
Sculley, D., Holt, G., Golovin, D., Davydov, E., Phillips, T., Ebner, D., Chaudhary, V., Young, M., Crespo, J.-F., & Dennison, D. (2015). Hidden technical debt in machine learning systems. In Advances in Neural Information Processing Systems (NeurIPS 2015), 28, 2503–2511. Retrieved May 21, 2025, from https://papers.nips.cc/paper/5656-hidden-technical-debt-in-machine-learning-systems.pdf