As AI changes how software gets built, the central engineering question is no longer just how fast code can be generated. The real question is whether teams can define, direct, and validate AI-generated work inside real production systems.
That was the focus of this Telos Labs webinar, where Jordan Treviño was joined by Cody Watters, CTO and co-founder of Riveter, Ken Kantzer, CTO and co-founder of Truss, and Colleen Schnettler, Fractional Head of AI and longtime Rails developer. Together, they explored how teams are actually using AI in production, what is changing in engineering workflows, and whether Ruby on Rails offers a structural advantage in this new environment.
One of the clearest insights from the discussion was that the bottleneck has shifted. As Jordan framed it, teams are moving from a world where code generation was the hard part to one where “the bottleneck has shifted from code generation to definition, tech design and validation.” Colleen reinforced that point directly: “It’s completely flipped.” For many teams, the hardest part is no longer implementation. It is deciding what to build, why it matters, and how to define it clearly enough for AI systems to execute well.
That shift led to one of the strongest themes of the conversation: PRDs, product definition, and specification quality now matter more than ever. Several speakers described spending more time on requirements, planning, and feature definition before involving AI. In practice, this means stronger PRD culture, better planning loops, and more effort spent clarifying scope up front. The leverage has moved earlier in the lifecycle.
Rails stood out in the discussion because of its conventions. Across the panel, there was broad agreement that LLMs perform better when working inside systems with consistent patterns, shared vocabulary, and predictable structure.
Jordan summarized this idea clearly when he explained that Rails helps because:
“Part of the thesis of why Rails can be so valuable in this moment is that LLMs rely on shared conventions. When patterns are consistent, the models perform better.”
In other words, Rails gives both humans and models a more legible environment to work inside.
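To make that "shared conventions" point concrete, here is a minimal plain-Ruby sketch of the kind of naming conventions Rails applies automatically. The `underscore` and `pluralize` helpers below are simplified stand-ins for ActiveSupport's much richer inflector, and the `BlogPost` example is hypothetical; the point is that one class name deterministically implies the table, controller, and route names, which is exactly the predictability LLMs benefit from.

```ruby
# Simplified CamelCase -> snake_case conversion (Rails uses
# ActiveSupport::Inflector; this is an illustrative approximation).
def underscore(name)
  name.gsub(/([a-z\d])([A-Z])/, '\1_\2').downcase
end

# Naive pluralization, good enough for regular nouns.
def pluralize(word)
  word.end_with?("s") ? word : word + "s"
end

# Given a model class name, derive the names Rails conventions imply.
def conventional_names(model_class)
  snake = underscore(model_class)
  {
    table:      pluralize(snake),                        # BlogPost -> blog_posts
    controller: pluralize(model_class) + "Controller",   # -> BlogPostsController
    route:      "/" + pluralize(snake)                   # -> /blog_posts
  }
end

p conventional_names("BlogPost")
# => {:table=>"blog_posts", :controller=>"BlogPostsController", :route=>"/blog_posts"}
```

Because these mappings never vary from codebase to codebase, a model (human or machine) that has seen one conventional Rails app has effectively seen the structure of all of them.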
At the same time, the panel was clear that Rails is not the full answer to every problem. Cody described how Riveter uses Rails for orchestration and application structure, while relying on Node and TypeScript for more specialized agentic processes like large-scale scraping and long-running research agents. Ken pointed out that newer Rails tools such as Hotwire can still create ambiguity for LLMs because there are multiple valid implementation paths. That ambiguity matters because, as he noted, "the LLM's gonna be probabilistic." When patterns are less standardized, outputs become less predictable.
Another important thread was the role of guidance. Some teams are building shared markdown files, repo-level rules, and workflow summaries to keep AI outputs aligned. Others rely more on codebase structure, file co-location, and human review in long-running sessions. Despite these differences, the panel converged on one principle: architectural consistency still matters. AI can accelerate software delivery, but it does not remove the need for strong systems thinking.
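As a rough illustration of what such repo-level guidance can look like, here is a hypothetical snippet of a shared rules file checked into a project root. The filename and every rule in it are invented for this sketch; the format is just plain markdown that both humans and coding agents can read.

```markdown
# AGENT_GUIDELINES.md (hypothetical example)

## Conventions
- Follow standard Rails conventions: fat models are fine, service
  objects live in app/services, one class per file.
- Prefer Hotwire/Turbo patterns already used in app/views; do not
  introduce a second frontend approach.

## Boundaries
- Do not modify db/schema.rb directly; always generate a migration.
- Do not add new gems without flagging it in the PR description.

## Validation
- Every change must include or update a test under test/ or spec/.
- Run the full test suite before proposing a diff.
```

Files like this move the "definition and validation" work the panel described into a durable artifact, rather than repeating it in every prompt.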
The discussion also raised a deeper long-term question: if AI keeps increasing code output, what happens to simplicity, maintainability, and debugging? Ken brought in Kernighan's Law to make the point that code that is easy to generate is not always easy to reason about later. Cody emphasized that simple codebases still matter, especially because old or unnecessary code can confuse agents and create compounding complexity over time.
For teams building with AI today, the takeaway is not that Rails magically solves agentic engineering. It is that Rails offers meaningful advantages when convention, consistency, and speed of understanding matter. In a world where AI systems work best with shared patterns and clear architectural guidance, Rails may not be the only viable framework, but it remains a very strong one.
This conversation made one thing clear: agentic engineering is already changing product development, and the teams that adapt best will be the ones that pair AI speed with stronger planning, tighter conventions, and better validation.



