Strong opinions loosely held with Philippe Beaudoin, Co-founder & CEO @ Waverly
The Controversial Idea That Will Change AI: "Data is Dead"
In this highlight from Philippe Beaudoin’s conversation about Waverly’s journey, the discussion moves from startup “waves” into a deeply contrarian engineering thesis that also connects to his humanist mission: instead of treating data as the main battleground, he argues that most AI progress will happen at a new software layer built on top of LLMs. He acknowledges that today’s breakthroughs still rely on massive datasets, but challenges founders and engineers to rethink where they should invest effort. In his framing, training frontier models from data will become the equivalent of building CPUs—crucial, but only a small group will own that substrate. For most builders, the next wave is about composing natural-language interfaces, verification, and robust software primitives that reliably translate between human intent and model behavior. For founders building human-centered personalization systems, this matters: if the future is software-layer innovation, then transparency, user agency, and safe interaction design become first-class technical problems—not afterthoughts. Beaudoin’s “data is dead” provocation is less about dismissing data and more about shifting your product leverage toward testing natural-language inputs and outputs, building accountable personalization engines, and iterating in a way that avoids opaque feedback loops. Watch the full video to see how this engineering perspective ties back to Waverly’s transparent personalization approach and the broader guidance Beaudoin shares on bootstrapping and building in Canada.
A Warning to Startup Ecosystems: Don't Kill the Chaos
In this episode about Waverly’s evolution from newsreader to social and ultimately to a transparent B2B personalization API, Philippe Beaudoin shifts from product philosophy to a broader warning: the startup ecosystem itself can become another system in which users lose agency. After discussing how opaque, click-driven algorithms can trap people in loops, he points out a parallel risk in startup culture—when innovation becomes overly institutionalized, it can stop reflecting founders’ values and the messy, grassroots experimentation that creates real change. His core insight is contrarian but practical: don’t assume that “organized” automatically means “better.” He argues that the next generation of great founders is often found in chaotic, informal spaces like hackathons, not only through structured programs. If ecosystems start to feel like a government version of success—too standardized, too polished—then the energy that makes startups worth building gets smothered. For builders of human-centered AI, the lesson maps directly onto product design: preserve autonomy, diversity of approaches, and room for doubt instead of forcing everything into rigid templates. If you’re building AI personalization—or shaping the Canadian startup environment—watch the full video to hear Philippe’s broader take on bootstrapping, engineering for LLM layers, and how to keep grassroots momentum alive.
Why Tech Listens to Clicks, Not Words
In this episode of the Waverly journey, Philippe Beaudoin connects Waverly’s past pivots to its most distinctive value: technology should listen to what people mean, not just what they do. This highlight zooms in on the shift from earlier “chat-like” experiments to a clearer mission—empower users to express themselves through words so their experience reflects their identity, aspirations, and doubts, not their click patterns. Beaudoin frames the core problem with today’s personalization systems: when algorithms optimize for clicks and dwell time, they often turn attention into a feedback loop. The result can be addictive, manipulative “doom loops” where the system learns to keep users engaged rather than help them flourish. Waverly’s contrarian answer is to move the center of personalization from behavior traces to expressed preferences in natural language, so users can see what the system understood and steer it intentionally. For founders and engineers, the implication is practical and philosophical at once: build AI products that treat human agency as a feature. Rather than engineering around opaque engagement metrics, design interfaces and personalization engines that make learning readable, editable, and grounded in real user language. If you want the full story behind Waverly’s transparent personalization approach and how Philippe applies this philosophy across product and engineering decisions, watch the full video.
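To make the behavior-traces-versus-expressed-preferences contrast concrete, here is a minimal illustrative sketch (not Waverly’s actual API; all names are invented) of a feed scorer driven only by interests the user has stated in words, so the inputs to personalization stay readable and editable:

```python
# Hypothetical sketch: personalization driven by preferences the user has
# stated explicitly, rather than by opaque click traces.
from dataclasses import dataclass, field


@dataclass
class UserProfile:
    # Interests the user wrote or confirmed, kept human-readable.
    stated_interests: list[str] = field(default_factory=list)


def score_article(profile: UserProfile, title: str, tags: list[str]) -> int:
    # The score comes only from expressed preferences, so the user can
    # inspect and change exactly what drives their feed.
    return sum(
        1
        for interest in profile.stated_interests
        if interest in tags or interest in title.lower()
    )


profile = UserProfile(stated_interests=["politics", "climate"])
print(score_article(profile, "Morning politics briefing", ["politics", "news"]))  # → 1
```

Because the scoring input is a list of words the user chose, "what the system understood" is trivially displayable and correctable—unlike a weight vector learned from dwell time.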
How to Give Users Control of Their Algorithm
In this insightful segment, Philippe Beaudoin, known for his contrarian stance on traditional AI and commitment to human flourishing, details Waverly's vision for "transparent personalization"—a core tenet explored extensively in the full discussion about the startup’s evolution. Moving beyond the opaque, click-driven algorithms that often trap users in addictive feedback loops, Waverly proposes a revolutionary model where users actively understand and control their algorithmic experience. Imagine, Beaudoin suggests, a world where your digital platform doesn’t just subtly influence you based on hidden data points. Instead, a dedicated section, powered by Waverly's tech, would present a concise, ChatGPT-like text summary of "how I understand you." This isn't just a list of keywords, but a natural-language narrative describing your preferences—"Oh, you like politics in the morning..."—all learned from your interactions. The truly transformative part? You don't just see it; you can *change* it. Through a simple chat interface, you can directly converse with this "persona" or system understanding of yourself, refining your preferences and steering your experience away from unwanted patterns towards genuine interests and aspirations. While Beaudoin acknowledges that offering this level of control to major social platforms isn't an immediate reality, this vision underscores Waverly's B2B strategy: providing an API that empowers other applications to build genuinely human-centric personalization. This engineering philosophy, emphasizing natural language as an interface and building robust software atop LLM layers rather than bespoke data silos, is a crucial takeaway for AI founders, product builders, and engineers looking to design systems that truly serve user agency. To fully grasp Philippe's nuanced arguments on transparent AI, LLM-centric engineering, and the future of startup funding, dive into the complete podcast.
The Next Frontier of AI is Reinventing Software Engineering
In this episode about Waverly’s evolution through several “waves” of product pivots, Philippe Beaudoin shifts from the human side of anti-addiction, transparent personalization to a more contrarian engineering takeaway: the future isn’t just new models, it’s a new way to build software. In this highlight, he argues that when your system’s inputs and outputs are intertwined with natural language, traditional software engineering practices have to be relearned for a world where code and language functions are mixed together. The core value of this moment is practical: instead of treating LLMs as a black box that magically replaces everything, Waverly focuses on software primitives—building blocks and testing practices—that make natural-language-in/natural-language-out components behave reliably. Beaudoin frames this as rediscovering classic engineering fundamentals—composition, interfaces, testing, and robustness—while adapting them to “functions that contain language.” For founders and engineers, the implication is clear: if you want transparent personalization that users can actually understand and modify, you need engineering discipline at the LLM software layer, not just data pipelines. If you’re building human-centered AI products or exploring a bootstrapped path in Canada’s startup ecosystem, watch the full video to see how this philosophy connects to Waverly’s transparent personalization API and go-to-market direction.
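One way to picture the testing discipline this paragraph describes—offered here as a hedged sketch, not as Waverly’s method—is property-style checking: because a natural-language output varies in wording, a test asserts invariants of the output rather than exact strings. The `summarize` function below is a deterministic stand-in for an LLM-backed component:

```python
# Sketch of property-style testing for a natural-language-in/out component.
# `summarize` is a stand-in for an LLM call, which would be non-deterministic.

def summarize(text: str) -> str:
    # Toy stand-in: return the first sentence as the "summary."
    first_sentence = text.split(".")[0].strip()
    return first_sentence + "."


def check_summary(source: str, summary: str) -> bool:
    # Invariants instead of exact matches: non-empty, no longer than the
    # source, and ends like a sentence.
    return 0 < len(summary) <= len(source) and summary.endswith(".")


doc = "Waverly pivoted several times. Each wave taught the team something new."
assert check_summary(doc, summarize(doc))
```

The same pattern extends to schema checks, forbidden-content checks, or round-trip checks—classic testing fundamentals adapted to functions whose inputs and outputs contain language.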