What If We Built the Last Mile First?
Sarah’s interview, Stevey’s rant, and how to enable AI in an organization
I read and enjoyed Sarah Tavel's recent post, “Becoming AI Native,” an interview with Borislav Nikolov, the CTO of Rekki, about his evolving approach to AI. But I kept thinking about it for the next few days because it reminded me of something I just couldn’t place. So I went back to it, and then something clicked. As I nodded along with Nikolov's description of his role - "I almost think of myself as an internal PaaS provider, the whole company is full of developers that I cannot trust" - the take felt very familiar, like it rhymed with something I’d heard before. And then I realized: he's re-articulating and re-applying the lessons of one of the most important, insightful, and delightfully snarky blog posts in the tech writing canon - Steve Yegge's Google Platforms Rant (if you haven’t read it, remedy that now, and re-read it from time to time).
But there's a crucial distinction between the rant and the revelation. While Yegge's rant was about organizational design and platform strategy in the broadest sense, Nikolov is employing similar infrastructure-first thinking to solve a much more specific and immediate challenge: the AI integration crisis that's blocking organizations from moving beyond impressive demos to practical implementation.
This approach connects two threads I've been exploring: the persistent challenge of AI's "last mile" implementation problems, and the emerging patterns of truly AI-native organizations. What if the companies struggling with AI integration are solving the problems in the wrong order? What if, instead of building AI capabilities and then discovering integration nightmares, we built the integration infrastructure first? In essence, Nikolov decided to build the last mile first to empower his team, in the same way that Bezos (to Stevey’s chagrin) insisted on building the last mile first to accelerate Amazon’s innovation.
It’s easier said than done, but building the last mile first pays huge dividends over time.
The Traditional Sequence vs The Platform Inversion
Most organizations approach AI implementation with what seems like logical sequencing:
Identify AI capabilities that could add value
Build or buy AI solutions
Deploy to users and existing systems
Discover integration problems: latency issues, data format conflicts, security gaps, user experience failures
Retrofit solutions to address these problems
This traditional sequence inevitably leads to what I've previously described as "last mile" challenges - the unglamorous but critical work of making AI actually function in real-world environments. These problems are predictable and persistent: API integration complexity, user experience gaps, performance that doesn't translate from demo to production, security and compliance, and the messy reality of connecting sophisticated AI to legacy systems and human workflows. Above all, they reflect a consistent failure to understand that these systems are interdependent.
Nikolov represents a fundamentally different approach - the platform inversion:
Build integration infrastructure first, designed around the assumption that everyone will eventually need to solve problems with AI
Create secure, reliable primitives that make AI capabilities safely accessible
Enable domain experts to experiment and deploy AI solutions themselves
Achieve seamless adoption by design, because the "last mile" infrastructure already exists
The difference isn't just methodological - it's philosophical. Instead of asking "How do we add AI to our existing systems?" Nikolov asked the better question: "How do I build systems that AI can safely inhabit?"
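To make that question a little more concrete, here's a minimal sketch of what one such primitive might look like: a single, platform-owned entry point for calling a model, with prompt-size limits, PII redaction, and audit logging applied by default, so every team inherits the guardrails for free. Everything below - the function names, the redaction rule, the stand-in model callable - is my own illustration under those assumptions, not Rekki's actual implementation.

```python
# A sketch of a platform primitive for "systems AI can safely inhabit":
# every team calls models through this one wrapper, so guardrails are
# applied once, by default. All names here are illustrative.
import logging
import re
import time
from typing import Callable

log = logging.getLogger("ai_platform")

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text: str) -> str:
    """Strip obvious PII before a prompt ever leaves the building."""
    return EMAIL.sub("[email removed]", text)

def guarded_completion(
    prompt: str,
    model_call: Callable[[str], str],  # any model client the platform has vetted
    caller: str,                       # team or service name, for audit logs
    max_prompt_chars: int = 8_000,
) -> str:
    """Call a model with the platform's default guardrails and observability."""
    if len(prompt) > max_prompt_chars:
        raise ValueError(f"prompt too long for caller {caller!r}")
    clean = redact(prompt)
    start = time.monotonic()
    reply = model_call(clean)
    log.info("caller=%s prompt_chars=%d latency_ms=%.1f",
             caller, len(clean), (time.monotonic() - start) * 1000)
    return reply

if __name__ == "__main__":
    # Stand-in for a real, platform-vetted model client.
    fake_model = lambda p: f"(model reply to: {p[:40]}...)"
    print(guarded_completion("Summarize orders placed by chef@example.com today",
                             model_call=fake_model, caller="ops-team"))
```

The point isn't the specific checks; it's that they live in one place the whole company shares, instead of being re-implemented (or forgotten) one project at a time.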
Infrastructure as Prophecy: The Yegge Pattern Repeats
Yegge's rant about Google's platform failures wasn't just organizational commentary - it was technological prophecy. His insight was that Amazon succeeded by mandating that all internal teams communicate through service interfaces, essentially forcing every team to build platform-ready systems before they needed to be platforms. This seemed like overkill until AWS launched and dominated cloud computing by leveraging infrastructure that was already battle-tested internally.
Yegge's prophecy: "Companies that get their platform infrastructure right will eat the lunch of companies that don't." This prediction proved remarkably accurate. Amazon's internal platform discipline became the foundation for AWS, which became the backbone of the modern internet.
Now we're witnessing the same pattern with AI. Nikolov embodies the AI-era version of Amazon's platform mandate. By treating his entire company as "developers he cannot trust" - the same way AWS treats internal and external customers alike - he's building the integration infrastructure that will determine winners in the AI transformation.
The historical parallel is striking. Just as AWS didn't predict cloud computing so much as create the infrastructure that made cloud computing inevitable, companies building robust AI integration platforms today are creating the infrastructure that will make AI-native operations inevitable.
Consider what happened at Rekki. Instead of maintaining a traditional engineering backlog where the operations team requests technical solutions, Nikolov built platform primitives that enabled operations staff to solve their own problems. When someone needed to detect potential fraud patterns in restaurant ordering data, they didn't wait for engineering to build a custom monitoring system; they wrote SQL queries that automatically became APIs, with monitoring, security, and alerting built in.
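Here's a rough sketch, in the same spirit, of how a "SQL query becomes an API" primitive could work: the platform wires routing, authentication, and latency logging around whatever read-only query an operations analyst supplies. The web framework, database, endpoint path, and fraud heuristic below are all hypothetical stand-ins, not a description of Rekki's system.

```python
# Sketch of a "SQL query becomes an API" primitive, assuming FastAPI and a
# SQLite database. Table names, paths, and the fraud rule are illustrative.
import logging
import sqlite3
import time

from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()
log = logging.getLogger("query_platform")

def require_api_key(x_api_key: str = Header(...)) -> str:
    """Platform-level auth: every generated endpoint gets this check for free."""
    if x_api_key != "demo-key":  # in practice, look this up in a secrets store
        raise HTTPException(status_code=401, detail="invalid API key")
    return x_api_key

def register_query(path: str, sql: str, db_path: str = "ops.db") -> None:
    """Turn a read-only SQL query into an authenticated, logged GET endpoint."""
    def handler(_key: str = Depends(require_api_key)) -> list[dict]:
        start = time.monotonic()
        with sqlite3.connect(db_path) as conn:
            conn.row_factory = sqlite3.Row
            rows = [dict(r) for r in conn.execute(sql).fetchall()]
        log.info("path=%s rows=%d latency_ms=%.1f",
                 path, len(rows), (time.monotonic() - start) * 1000)
        return rows

    app.get(path)(handler)  # the platform, not the analyst, wires the route

# The operations analyst supplies only the SQL; the fraud heuristic is made up.
register_query(
    "/reports/suspicious-orders",
    "SELECT restaurant_id, COUNT(*) AS orders_last_hour "
    "FROM orders "
    "WHERE created_at > datetime('now', '-1 hour') "
    "GROUP BY restaurant_id HAVING orders_last_hour > 50",
)
```

The analyst touches only the SQL string at the bottom; everything above it is the "last mile" the platform team built once.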
This isn't just about efficiency - it's about fundamentally different organizational capabilities. The operations team member who can now create "20 scrapers that he just uses nonstop" isn't just more productive; he's operating in a different paradigm where technical capability is democratically accessible rather than bureaucratically rationed.
Centralized vs Decentralized: The Organizational Shift
The platform approach requires a fundamental shift from centralized development to decentralized AI experimentation and tinkering.
The Centralized Model treats AI capabilities as scarce resources managed by technical gatekeepers. AI teams become bottlenecks, with domain experts petitioning for solutions and waiting for custom integrations. Each use case requires bespoke development, testing, and deployment. The result is familiar to anyone who has managed enterprise software: endless backlogs, frustrated users, and solutions that never quite fit the actual problem.
The Decentralized Model treats AI capabilities as abundant resources enabled by platform infrastructure. Domain experts solve their own problems using standardized primitives. The technical team shifts from building solutions to building the infrastructure that enables solutions. Integration complexity is handled once, at the platform level, rather than repeatedly for each use case.
This shift mirrors broader trends in technology infrastructure. Just as cloud computing moved from "servers as precious resources" to "servers as commodity utilities," AI implementation is moving from "AI solutions as custom projects" to "AI capabilities as platform services."
The organizational implications are profound. In decentralized models, technical teams become force multipliers, and domain experts are empowered to iterate rapidly on solutions that directly address their needs. The result is a “Decentralization Dividend” that pays out repeatedly and compounds over time:
Accelerated Innovation Through Distributed Problem-Solving: When domain experts can directly implement AI solutions, innovation happens at the speed of business problems rather than the speed of engineering backlogs.
Self-Reinforcing Platform Effects: Each solution built on the primitives strengthens the platform itself, providing usage patterns, stress testing, and feedback that improve the infrastructure for everyone. The platform becomes more valuable as more people use it.
Competitive Moats: While AI model capabilities rapidly commoditize, platform infrastructure creates durable advantages. Organizations with mature AI enablement can deploy new capabilities faster, integrate new data sources more easily, and adapt to changing requirements more quickly than competitors building custom solutions.
Circling Back: Build the Last Mile First
The platform approach represents a fundamental inversion of how we think about AI implementation challenges. Instead of treating integration problems as obstacles to overcome after building AI capabilities, we treat integration infrastructure as the foundation that enables AI capabilities.
This connects directly to the broader lesson about transformative technologies: the companies that succeed aren't necessarily those with the best technology, but those with the best systems for implementing technology. Just as electricity required more than generators and light bulbs - it required grids, standards, and infrastructure - AI requires more than impressive models and compelling demos.
For Organizations That Adopt the Last Mile First Approach:
Integration problems become rare because infrastructure handles them preemptively rather than reactively. When your platform primitives include security, monitoring, and error handling by default, AI implementations don't create new categories of technical debt.
AI adoption accelerates organically as domain experts solve problems directly rather than waiting for technical resources. The acceleration is exponential rather than linear because each successful implementation creates patterns and examples that enable the next implementation.
Competitive advantages compound through platform effects. Organizations build capabilities that enable capabilities, creating a virtuous cycle where AI implementation becomes easier and more valuable over time.
For Organizations That Don't:
Custom integration challenges multiply with each AI implementation. Every new use case requires solving the same fundamental problems of security, monitoring, data access, and user experience. Technical debt accumulates rather than capabilities.
AI implementations remain centralized bottlenecks where business innovation depends on technical resource allocation. The pace of AI adoption is limited by the capacity of technical teams rather than the appetite of business teams.
Growing performance gaps emerge as platform-first competitors pull ahead. Organizations that solve integration once can experiment with AI applications that would be prohibitively expensive for organizations solving integration repeatedly.
The Ultimate Test:
Two years from now, the winners in AI implementation will be distinguished by having solved integration complexity through systems thinking and platform delivery. They'll be organizations where deploying new AI capabilities takes hours rather than months, where domain experts solve problems directly rather than waiting for technical resources, and where AI integration strengthens existing systems rather than creating new vulnerabilities.
The last mile advantage - the ability to turn AI possibilities into practical, profitable implementation - will belong to organizations that built the last mile first. In the race to become AI-native, the fastest path forward runs through platform infrastructure, not AI capabilities.
This is the lesson embedded in both Yegge's platform rant and Nikolov's AI transformation: sustainable competitive advantage comes from building systems that enable capabilities, not from accumulating capabilities themselves. In the AI era, as in the cloud era before it, platform thinking will separate the winners from the also-rans.