The fuzzy cutting edge is our playground

At Maincode, we don’t just explore AI - we ship it. That means staying ahead of the curve and turning ideas into real, shippable things. To do that, we’ve had to rethink how we approach research in a world where the edge moves at an insane speed.

If we did what most companies do, we wouldn’t get there.

The recipe

  1. Take the cutting edge of AI research
  2. Push one step ahead. Try something wacky, take risks, be foolish.
  3. Build an experiment that people can use.
  4. See what happens.

Go back to 1)

All four steps in this recipe demand real cognitive firepower.

Here’s how we think about staying on top of the cutting edge - both as individuals and as a growing team.

Road to the fuzzy cutting edge

Getting on top of what the cutting edge actually looks like at any given moment is vital.

Why does it matter so much?

Everything truly new anybody does in engineering, science or business today builds on the work of all that came before. That’s why it’s not optional - we have to play at the very edge. In fact, the edge isn’t all that sharp. Knowledge doesn’t progress that way. Paul Graham put it beautifully: “Knowledge grows fractally. From a distance its edges look smooth, but when you learn enough to get close to one, you'll notice it's full of gaps.” This fuzzy cutting edge is our playground.

So what are we optimizing for and what do we need to avoid?

We want to make it fast and efficient for everyone in the organisation to find out what the state of the art (SOTA) is for whatever they’re working on right now, but also to make sure they come across seemingly unrelated research that sparks weird new connections.

Every PhD student knows the pain.

You start by pulling together some papers that look about right, then you start reading, going down this rabbit hole and that one. Some are relevant, others turn out to be a complete waste of time. Your colleagues have already shared three new papers. By paper ten you’ve forgotten what you read in the first one. Drawing connections between papers is harder than you thought. And are these two authors talking about the same thing but calling it different names? Over lunch you learn that your colleague figured out a month ago that they do in fact talk about the same thing.

And then there’s the sheer amount of research output you need to stay on top of. The arXiv counted over 6000 new submissions in the categories of ‘Computer Vision’ and ‘Computation and Language’ in October 2024 alone.

And maybe the worst: if you get lazy and only read what your colleagues have already read, groupthink is the likely result. Innovation stops. That’s not an option.

You find yourself spending a lot of time consuming content and having barely any time left for what you love: building stuff.

Let’s not do that. We need to figure out a smarter way because research velocity matters more than ever.

The Maincode pipeline

We’re building a research pipeline that evolves with the organisation, one that we can adapt as we’re figuring out our research process in real time.

The first step is the ingestion stage, where we broadly capture relevant research. This is where we leverage existing repositories like the arXiv, top journals and conferences, but also newsletters, Twitter/X, public Slack and Discord channels, and AI agents we built in-house that constantly scout the jungle that is ML/AI frontier research.
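To make the ingestion stage concrete, here is a minimal sketch of what capturing papers from many feeds and deduplicating them could look like. The `Paper` shape, the feed names, and the dedup-by-normalised-title heuristic are all illustrative assumptions, not our actual implementation:

```python
from dataclasses import dataclass
import hashlib


@dataclass(frozen=True)
class Paper:
    title: str
    source: str  # e.g. "arxiv", "newsletter", "slack" (hypothetical labels)
    url: str


class IngestionStore:
    """Collects papers from many feeds, deduplicating on a title fingerprint."""

    def __init__(self) -> None:
        self._seen: set[str] = set()
        self.papers: list[Paper] = []

    @staticmethod
    def _fingerprint(title: str) -> str:
        # Normalise case and whitespace so the same paper arriving
        # from two different feeds collides on the same fingerprint.
        norm = " ".join(title.lower().split())
        return hashlib.sha256(norm.encode()).hexdigest()

    def ingest(self, paper: Paper) -> bool:
        fp = self._fingerprint(paper.title)
        if fp in self._seen:
            return False  # already captured from another feed
        self._seen.add(fp)
        self.papers.append(paper)
        return True
```

In practice the fingerprint would likely key on an arXiv ID or DOI where one exists; the title hash is just the simplest stand-in for "have we seen this before?".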

Once research has made it into our database, the real work starts. We can’t all read all of it, so we filter and route it to the right people and teams at the right time. We leverage human-based methods like peer-upvoting and tagging. But the real magic kicks in with automated relevance scoring that learns, AI summarisation and agentic routing - avoiding duplicated reading and making the chaos efficient.
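As a sketch of the routing idea, here is a deliberately crude keyword-overlap scorer that sends an abstract to the teams whose interest profile it matches. The team names, the interest lists and the threshold are made-up assumptions; a learning system would replace this with embeddings and feedback signals, but the routing shape stays the same:

```python
from collections import Counter


def tokenize(text: str) -> Counter:
    """Lowercase word counts, with trailing punctuation stripped."""
    return Counter(w.strip(".,;:!?").lower() for w in text.split())


def relevance(abstract: str, interests: list[str]) -> float:
    """Crude relevance: fraction of interest keywords present in the abstract."""
    words = tokenize(abstract)
    hits = sum(1 for kw in interests if kw.lower() in words)
    return hits / max(len(interests), 1)


def route(abstract: str, team_profiles: dict[str, list[str]],
          threshold: float = 0.3) -> list[str]:
    """Return every team whose interest profile clears the relevance threshold."""
    return [team for team, interests in team_profiles.items()
            if relevance(abstract, interests) >= threshold]
```

A paper can land with several teams at once, which is exactly what we want for the cross-pollination described above.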

Imagine starting each day with a well-summarised, focused selection of research to look into. Not just some semi-relevant list of papers that you could get from any AI literature research tool, but a selection that is context-aware and deeply integrated with your work. One that knows what you’ve worked on for the last week, that has a hunch that this paper from a totally different field does something interesting you need to know about, that provides the answer to a question you couldn’t answer last month. That’s how we want to work.

And we get together often as a team. We discuss, we brainstorm, often several times a week. Only that way can we distill ideas for the next experiment to run or the next product to build.

We see our research pipeline as an experiment in its own right. As the team grows we will keep experimenting with new strategies and mechanisms.

The edge will stay fuzzy. That’s the point. But with the right tools, we can play at the edge and build things that push it forward.