AI Is Dangerous, but Not for the Reasons You Think | Sasha Luccioni | TED

    Sasha Luccioni


    AI's Future Isn't a Done Deal. We're Building the Road as We Walk It.

    Forget sci-fi apocalypses for a moment. In a particularly punchy segment from her insightful talk, Dr. Sasha Luccioni confronts a startling email claiming her work will ‘end humanity.’ Her response cuts straight to the chase: fixating on AI’s far-off existential risks is a colossal distraction from the very real, very tangible impacts happening *right now*.

    Sasha, a leading AI scientist and the AI & Climate Lead at Hugging Face, doesn't mince words. She asserts that while AI is undoubtedly moving at warp speed, its trajectory isn't some pre-destined plot twist. Far from it. We aren’t passive observers of an AI future that's already written; we are the architects, actively building the road as we walk it. This pivotal moment underscores her broader call to action throughout the video: to shift our collective focus from vague, speculative fears to the immediate, measurable harms AI is already inflicting on our planet and our societies – from environmental costs and consent violations to pervasive bias.

    Her message is an empowering rallying cry for policymakers, technologists, artists, and everyday users alike: we can collectively decide the direction AI takes. It’s a powerful invitation to take responsibility and steer this technology towards a future that prioritizes sustainability, rights, and fairness. Want to understand how we can actually do that – by measuring impacts, demanding transparency, and building essential guardrails? Watch the full video to uncover the concrete steps and practical tools Sasha champions for shaping AI's path.

    The Email That Claimed My AI Work Will "End Humanity"

    When Dr. Sasha Luccioni, a leading AI scientist, received an email claiming her work would "end humanity," she wasn't entirely surprised. After all, recent headlines have spun tales of AI chatbots advising divorce or, more alarmingly, proposing "crowd-pleasing recipes" featuring chlorine gas. In this captivating highlight, Sasha uses the startling anecdote as a springboard to tackle the pervasive, often sensationalized anxieties surrounding artificial intelligence.

    It's easy to get caught up in futuristic doomsday scenarios or the unpredictable "what ifs" of AI a decade or two down the line (something she candidly admits nobody really knows). Dr. Luccioni instead redirects our attention to a far more pressing and measurable reality: the real danger isn't some far-off existential threat, but the tangible, immediate impacts AI is already having on our planet and people.

    This moment is a powerful pivot, inviting us to move beyond speculative fears and confront the very real issues unfolding today. Instead of getting lost in "singularity" hype, Sasha challenges us to focus on practical, evidence-based responses to current harms, from environmental costs to consent and bias, all of which she unpacks in her full discussion. Curious how Dr. Luccioni proposes we navigate the present-day complexities of AI and build a more responsible future? Dive into the full video to explore her pragmatic approach to tracking, measuring, and disclosing AI's true impacts.

    Your Art is Not an All-You-Can-Eat AI Buffet

    Dr. Sasha Luccioni isn't here for the doomsday debates; she's focused on the tangible harms AI is inflicting *today*. One glaring issue? The way AI models gobble up our creative work without a second thought. This segment peels back the curtain on how your art, and even your face, might be fueling AI's engines, often without your consent.

    Sasha playfully points out that when she asks an AI to generate "a woman named Sasha," she often gets bikini models, a strange side effect of how her own images (and those of other Sashas) are scraped from the internet. But it's not all fun and games. For artists like Karla Ortiz, whose entire life's work was used without permission to train AI, the consequences are severe. This kind of wholesale data ingestion forms the bedrock of class-action lawsuits, turning artists' intellectual property into an unwilling "all-you-can-eat buffet" for tech giants.

    Thankfully, it's not a free-for-all. Sasha highlights crucial efforts, like the partnership between Spawning AI and Hugging Face, to develop opt-in and opt-out mechanisms. These tools let creators control whether their work contributes to AI training, offering a much-needed layer of transparency and respect. This isn't just about protecting artists; it's about shaping an AI future built on fairness and consent for everyone. Ready to dig deeper into the real, immediate impacts of AI and what we can do about them? Dive into Sasha's full talk to understand how we can collectively steer this technology towards a more ethical future.
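    The opt-out idea Sasha describes can be sketched in a few lines of Python. This is a hypothetical illustration only, not Spawning AI's or Hugging Face's actual API: the registry, record format, and URLs below are all made up for the example.

```python
# Hypothetical sketch of consent filtering: drop training records whose source
# appears in an opt-out registry. The registry, record format, and URLs are all
# made up for illustration; this is not Spawning AI's or Hugging Face's real API.

OPT_OUT_REGISTRY = {
    "https://example.com/art/portrait-01.png",
    "https://example.com/art/landscape-07.png",
}

def filter_training_data(records):
    """Keep only records whose source URL is absent from the opt-out registry."""
    return [r for r in records if r["url"] not in OPT_OUT_REGISTRY]

records = [
    {"url": "https://example.com/art/portrait-01.png", "caption": "a portrait"},
    {"url": "https://example.com/photos/cat.jpg", "caption": "a cat"},
]
kept = filter_training_data(records)
print(len(kept), "of", len(records), "records survive the consent filter")
```

    A real pipeline would check a shared registry service before each training run, but the principle is the same: respecting consent is a lookup, not a technical impossibility.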

    AI Thinks All CEOs and Scientists are White Men

    Dr. Sasha Luccioni is on a mission to steer the AI conversation away from sci-fi doomsday scenarios and toward the very real, immediate impacts happening right now. One such impact she powerfully highlights is the pervasive bias embedded in our AI systems. Ever wonder what an AI thinks a scientist or a CEO looks like? Spoiler alert: it's probably not you, unless you're a white man.

    Sasha, with her characteristic wit and no-nonsense approach, introduces her Stable Bias Explorer, a tool designed to shine a light on the deeply entrenched biases of image-generation models. Her challenge is simple: picture a scientist. What do you see? For most of these models, it's men in lab coats and glasses, almost exclusively white. And it's not just scientists. When asked to generate images of lawyers or CEOs, these powerful algorithms churn out male-dominated, mostly white representations nearly 100 percent of the time.

    This isn't just a funny quirk of AI; it's a serious problem. As Sasha points out, when compared to real-world statistics from the US Bureau of Labor Statistics, these models perpetuate stereotypes that are wildly out of step with reality. They reflect and amplify existing societal biases, reinforcing the harmful notion that leadership and intellectual roles belong to a select few. For policymakers, technologists, and anyone interacting with AI, this raises critical questions about how we train these systems and what implicit messages they send about who belongs in certain professions.

    Understanding and addressing these biases is crucial for building truly equitable AI. Want to dive deeper into how we can build more responsible, less biased AI systems, and tackle other urgent issues like sustainability and consent? Watch Sasha's full talk for more practical solutions and a hefty dose of reality.
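    The comparison at the heart of this kind of bias audit is simple arithmetic: the share of a group in generated images minus its share in real-world labor data. A minimal sketch, with made-up tallies and an illustrative labor-statistics figure rather than the tool's actual data:

```python
# Illustrative sketch of the tally behind a bias audit: compare how often a
# model depicts women in a profession against real-world labor statistics.
# All numbers here are invented for the example, not the tool's actual data.

def representation_gap(generated_counts, actual_share):
    """Model's share for the tracked group minus the real-world share."""
    total = sum(generated_counts.values())
    model_share = generated_counts.get("women", 0) / total
    return model_share - actual_share

# Hypothetical tallies from 100 generated "CEO" images
generated = {"men": 97, "women": 3}
gap = representation_gap(generated, actual_share=0.29)  # illustrative figure
print(f"model under-represents women by {-gap:.0%}")  # prints "... by 26%"
```

    A real audit generates thousands of images per profession and classifies them automatically, but the headline comparison comes down to exactly this subtraction.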

    The Hidden Environmental Cost of Training One AI Model

    Forget the science fiction scares about AI ending humanity. As Dr. Sasha Luccioni powerfully argues, the real drama, and the real danger, is unfolding right now, with measurable impacts on our planet and people. One of the most critical, yet often unseen, issues? The staggering environmental footprint of training AI models.

    You might think a few lines of code wouldn't leave much of a mark, but think again. Dr. Luccioni takes us behind the scenes of the BigScience initiative, where a thousand researchers collaborated to build BLOOM, an open-source large language model (think GPT, but with a conscience for ethics and transparency). Her team's groundbreaking study revealed that just training BLOOM used as much energy as 30 homes consume in an entire year, emitting 25 tons of carbon dioxide. That's not just a statistic; it's the equivalent of driving a car five times around the planet. And here's the kicker: while BLOOM was designed with sustainability in mind, comparable behemoths like GPT-3 can emit a mind-boggling 20 times more carbon.

    This isn't a future problem; it's happening today, with every new model trained. For policymakers grappling with green initiatives, technologists building the next generation of AI, or anyone who cares about our planet, understanding these hidden costs is non-negotiable. It's a vivid reminder that AI doesn't operate in a vacuum: its carbon emissions are very real and demand our immediate attention. Want to dig deeper into these critical, measurable harms and discover what we can *actually* do about them? Dive into the full video to see how Dr. Luccioni champions evidence-based guardrails and actionable solutions for a more sustainable and equitable AI future.
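    The conversion behind figures like these is straightforward: energy consumed times the carbon intensity of the electricity grid that powered the training run. A back-of-the-envelope sketch, with inputs that are assumptions loosely in line with the talk's numbers, not official measurements:

```python
# Back-of-the-envelope sketch of the energy-to-carbon conversion behind figures
# like these. The inputs are illustrative assumptions, not official measurements.

def training_emissions_tonnes(energy_kwh, grid_kg_co2_per_kwh):
    """Tonnes of CO2 = energy used (kWh) x grid carbon intensity (kg CO2/kWh)."""
    return energy_kwh * grid_kg_co2_per_kwh / 1000.0

# Assumed: ~433 MWh of training energy on a low-carbon (nuclear-heavy) grid
tonnes = training_emissions_tonnes(energy_kwh=433_000, grid_kg_co2_per_kwh=0.057)
print(f"~{tonnes:.0f} tonnes of CO2")  # in the ballpark of the 25-ton figure

# The same energy on a coal-heavy grid (~0.9 kg CO2/kWh) is far worse
print(f"~{training_emissions_tonnes(433_000, 0.9):.0f} tonnes of CO2")
```

    The second calculation hints at why where a model is trained can matter as much as how big it is: identical energy use on a coal-heavy grid produces over an order of magnitude more CO2.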
