By Anne Westin - Halenews.com, 2026-01-22 08:03
Opinion | Editorial

AI companies accused of copyright infringement

The open letter signed by Scarlett Johansson and hundreds of other creatives didn’t land because it was polite. It landed because it said the quiet part out loud: training commercial AI systems on copyrighted work without permission is theft dressed up as inevitability. The industry has spent years insisting this was a philosophical debate about progress. It isn’t. It’s a business choice—and a corrosive one.

The letter, organized through the Human Artistry Campaign, rejects the soothing language that has insulated AI companies from accountability. “Learning,” “absorption,” “transformation.” These words are doing legal work, not descriptive work. They exist to blur a simple fact: vast libraries of human labor were ingested because asking permission would have slowed growth and required payment. Speed won. Consent lost.

That should concern more than artists.

When an industry normalizes uncompensated extraction at scale, it doesn’t stop with one class of workers. It metastasizes. The precedent being set is not “machines can learn,” which no one disputes, but “ownership only applies until a powerful enough intermediary decides otherwise.” That logic doesn’t just cheapen art. It hollows out consumer trust and destabilizes markets that depend on fair exchange.

The tech sector’s favorite defense is inevitability. The internet existed, therefore scraping was unavoidable. The models are complex, therefore attribution is impossible. Progress demanded shortcuts. This argument collapses the moment licensing enters the picture. We know permission is possible because companies pursue it when the rights holders are large, organized, and capable of withholding access. That’s why deals exist with major studios and publishers, while individual creators are told to accept “fair use” as a fait accompli.

Take OpenAI. Publicly, it frames broad ingestion as necessary for capability. Privately, it negotiates selective licensing when risk or leverage requires it. That contradiction matters. It reveals that the debate is not about technical feasibility but bargaining power. If consent were truly incompatible with innovation, no licensing deals would exist at all. They do—just not for everyone.

This two-tier system has consequences. It concentrates cultural influence in the hands of those already large enough to negotiate, while the long tail—the freelancers, midlist writers, illustrators, voice actors, and independent journalists—becomes raw material. Their work is used to train systems that will then compete with them, depress prices, and flood the market with synthetic substitutes. Consumers experience this not as liberation but as degradation: lower-quality information, homogenized aesthetics, and an explosion of convincing-but-wrong content.

Supporters counter that AI outputs are “transformative,” that no single work can be traced, that harm is speculative. This misses the point. Market harm doesn’t require one-to-one copying. It requires displacement. If a system trained on unpaid labor produces outputs that replace paid labor, value has been extracted without compensation. That is not an abstract legal puzzle. It is a transfer of wealth.

The Johansson angle sharpened public attention because it personalized the risk. Voice, likeness, style—these are not interchangeable widgets. They are identities. When people hear that a synthetic voice can mimic a real one closely enough to confuse audiences, the stakes become obvious. Consent stops being an artist’s luxury and starts looking like a baseline civil right.

The industry response has been to promise safeguards “later.” Watermarks. Opt-outs. Transparency reports. But retroactive ethics are not ethics; they are damage control. Once models are trained, the extraction has already occurred. Opting out after the fact is like asking to be paid after your labor has been used to build the factory.

Some policymakers are beginning to grasp this. Labor groups such as the Writers Guild of America have pushed for contractual protections. Regulators are asking harder questions. Lawsuits are multiplying. But enforcement lags behind deployment, and AI firms are betting—correctly, so far—that the public will grow accustomed to the practice before accountability arrives.

There’s also a quieter consumer issue lurking underneath the spectacle. When companies don’t pay for inputs, they don’t just save money—they distort competition. Firms that attempt ethical sourcing face higher costs and slower timelines, while those that scrape freely sprint ahead. This rewards bad behavior and punishes restraint. Over time, the market selects for the least accountable actors. Consumers are left with fewer choices and less trustworthy products.

Even partnerships touted as responsible, like selective studio deals involving companies such as Disney, expose the imbalance. Large rights holders can extract favorable terms; everyone else is told to accept the new normal. That’s not a sustainable settlement. It’s a truce between giants, paid for by everyone smaller.

The core question isn’t whether AI should exist. It’s whether we are willing to accept a future where consent is optional when scale makes it inconvenient. History suggests that once that door opens, it doesn’t close on its own. Railroads, oil, social media—every transformative industry has claimed exemption from existing rules. Each time, the bill arrived later, paid by workers and the public.

The open letter matters because it refuses the exemption. It insists that innovation does not nullify ownership, that progress does not require erasure, and that markets function only when rules apply evenly. That stance is not anti-technology. It is pro-accountability.

What should happen next is neither radical nor utopian. Training data disclosures that are real, not cosmetic. Licensing frameworks that include individuals, not just conglomerates. Penalties for unauthorized use that outweigh the profit from ignoring the law. And a cultural shift away from treating “everyone else’s work” as a free natural resource.

The industry can still choose a different path. It can build systems that learn with permission, compensate contributors, and compete on quality rather than extraction. Or it can continue insisting that theft is just how the future works.

The letter’s message is blunt because the moment requires it. Innovation that depends on ignoring consent isn’t brave. It’s lazy. And it’s time we stopped pretending otherwise.