
The Global Race to Label AI Content

sig-share · 10 min read
regulation · eu-ai-act · california · china · c2pa · ai-labeling · privacy

August 2026: A Deadline for the World

On August 2, 2026, two landmark regulations take effect simultaneously. The EU AI Act's Article 50 begins enforcement, requiring that AI-generated content carry machine-readable provenance metadata. On the same date, California's SB 942 (the AI Transparency Act) becomes operative, requiring AI providers to embed hidden watermarks and offer detection tools.

China is already ahead: its AI labeling regulations and national standard GB 45438-2025 have been in effect since September 1, 2025.

The global convergence is unmistakable. The question is no longer whether AI content will be labeled, but how.

The EU AI Act

Article 50 of the EU AI Act establishes that providers of generative AI systems must ensure their outputs are marked in a machine-readable format and are detectable as artificially generated or manipulated. Deployers using generative AI for professional purposes must clearly label deepfakes and AI-generated text on matters of public interest.

The European Commission published the first draft of a Code of Practice in December 2025, establishing shared standards ahead of the binding August 2026 deadline. The final Code of Practice is expected by May–June 2026.

California SB 942

California's AI Transparency Act applies to providers of generative AI systems with over 1 million monthly users. It requires:

  • Dual disclosure: Both a visible label (opt-in for creators) and a hidden watermark (mandatory, applied automatically)
  • Embedded metadata: The watermark must contain the provider name, AI system details, timestamp, and a unique identifier
  • Detection tools: Providers must offer a free, publicly available tool for checking whether content was generated by their system

Penalties are $5,000 per violation, enforced by the Attorney General. The law was amended by AB 853 in October 2025 to align its effective date with the EU AI Act.

China's Approach

China's regulations, released March 14, 2025, go further than either the EU or California:

  • Explicit labels: Visible indicators that inform users content is AI-generated, required for dialogues, synthetic voice, face generation, and immersive scene generation
  • Implicit labels: Machine-readable metadata and watermarks for programmatic detection
  • Traceability: Provider name, content ID, generation time, and user/device information must be embedded, enabling regulators to trace content to its origin

The traceability requirements go beyond labeling into surveillance territory — a stark contrast to the EU and California models that focus on content-level transparency without requiring user identification.

C2PA as the Compliance Standard

Across all three regulatory regimes, C2PA Content Credentials are emerging as the leading technical implementation. The C2PA manifest format can embed exactly the metadata that regulators require: creator identity, AI system details, timestamps, and edit history.
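To make the data model concrete, here is a minimal sketch of the kind of provenance record a C2PA-style manifest carries. The field names loosely mirror C2PA concepts (claim generator, an actions assertion, a hash binding to the asset), but this is an illustrative structure, not the exact C2PA schema, and the provider and model names are invented:

```python
import json
import hashlib
from datetime import datetime, timezone

def build_manifest(provider: str, ai_system: str, content: bytes) -> dict:
    """Assemble an illustrative provenance manifest.

    Loosely modeled on C2PA concepts; NOT the exact C2PA schema.
    A real manifest is serialized in CBOR, wrapped in a JUMBF box,
    and cryptographically signed by the claim generator.
    """
    return {
        "claim_generator": provider,              # who produced the claim
        "ai_system": ai_system,                   # AI system details
        "created": datetime.now(timezone.utc).isoformat(),
        "assertions": [
            {
                "label": "c2pa.actions",
                "data": {"actions": [{
                    "action": "c2pa.created",
                    # IPTC digital source type for AI-generated media
                    "digitalSourceType": "trainedAlgorithmicMedia",
                }]},
            },
        ],
        # Binds the claim to the asset: hash of the content bytes.
        "content_hash": hashlib.sha256(content).hexdigest(),
    }

manifest = build_manifest("ExampleAI Inc.", "examplegen-1", b"fake image bytes")
print(json.dumps(manifest, indent=2))
```

The point is that creator identity, AI system details, timestamp, and a content-binding hash all fit naturally in one signed record, which is precisely the set of fields the EU, California, and Chinese rules converge on.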

Government endorsement is building. In January 2025, the NSA, Australian Signals Directorate, Canadian Centre for Cyber Security, and UK National Cyber Security Centre jointly published guidance recommending Content Credentials as a best practice for metadata preservation.

Hardware and software support is accelerating:

  • Devices: Google Pixel 10, Samsung Galaxy S25, Sony Alpha cameras, Canon EOS R1/R5 II, Leica M11-P
  • Software: Adobe Photoshop, Lightroom, Firefly, Premiere Pro; Microsoft Paint; DALL-E 3; Midjourney
  • Platforms: TikTok (first major platform to auto-label), LinkedIn, Meta (AI Info labels)

The Gaps

Regulation and standards alone are not enough. Several structural challenges remain:

Metadata Stripping

Most social media platforms strip metadata during upload, including C2PA Content Credentials. While TikTok and LinkedIn have started preserving credentials, the gap between "signing at creation" and "verifying at consumption" is still wide.
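One way to see the stripping problem directly: in JPEG files, C2PA Content Credentials travel in JUMBF boxes inside APP11 marker segments, so a quick scan for APP11 before and after an upload round-trip reveals whether a platform dropped them. The sketch below checks only for the segment's presence (a real verifier would parse the JUMBF box and validate the manifest signature), and the byte strings are synthetic stand-ins, not real images:

```python
def has_app11_segment(jpeg_bytes: bytes) -> bool:
    """Scan a JPEG's marker segments for APP11 (0xFFEB).

    C2PA Content Credentials are carried in JUMBF boxes inside
    APP11 segments; if APP11 is present before upload and gone
    after, the platform stripped the credentials. A real check
    would also parse the JUMBF box and verify the signature.
    """
    i = 2  # skip the SOI marker (FF D8)
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:        # start of scan: entropy-coded data follows
            break
        # Segment length field counts itself (2 bytes) plus the payload.
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker == 0xEB:        # APP11
            return True
        i += 2 + length
    return False

# Synthetic JPEG-like streams for illustration (not real images):
with_cc = b"\xff\xd8" + b"\xff\xeb\x00\x10" + b"\x00" * 14 + b"\xff\xd9"
without = b"\xff\xd8" + b"\xff\xe0\x00\x10" + b"\x00" * 14 + b"\xff\xd9"
print(has_app11_segment(with_cc), has_app11_segment(without))  # True False
```

Running this kind of check on files downloaded back from a platform is how researchers document which services preserve credentials and which silently discard them.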

Privacy Concerns

A Fortune investigation and the World Privacy Forum raised concerns about the data embedded in Content Credentials. Timestamps, geolocation, editing details, and identity links can be "replicated, ingested, and analyzed across platforms." The tension between transparency and privacy is real.

Independent Creators

The C2PA trust list model may disadvantage independent creators, small outlets, and journalists who lack recognized certificate authorities. If only large organizations can get their signatures trusted, the system risks reinforcing existing power imbalances.

Where sig-share Fits

The regulatory landscape reinforces the need for open verification infrastructure:

  • Open transparency logs provide an auditable record independent of any single platform or government, creating a check on both content creators and the platforms that host them.
  • Keyless signing lowers the barrier for independent creators — anyone with an email address can sign their work without navigating certificate authority bureaucracy.
  • Privacy-preserving options like pseudonymous signing and selective disclosure address the tension between compliance requirements and creator privacy.
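The transparency-log idea in the first bullet can be sketched as an append-only hash chain: each entry's head hash commits to everything appended before it, so a retroactive edit is detectable by anyone who recomputes the chain. This is a toy linear chain for illustration only; production logs (such as the Merkle-tree logs used by Certificate Transparency and Sigstore) additionally support efficient inclusion and consistency proofs, which this sketch omits:

```python
import hashlib

class ToyTransparencyLog:
    """Append-only log where each entry commits to all prior entries.

    A toy linear hash chain, not a production design: real
    transparency logs use Merkle trees so verifiers can check
    inclusion without replaying the whole log.
    """

    def __init__(self) -> None:
        self.entries: list[bytes] = []
        self.heads: list[str] = []    # head hash after each append

    def append(self, entry: bytes) -> str:
        prev = self.heads[-1] if self.heads else ""
        head = hashlib.sha256(prev.encode() + entry).hexdigest()
        self.entries.append(entry)
        self.heads.append(head)
        return head

    def verify(self) -> bool:
        """Recompute the chain; tampering with any entry breaks it."""
        prev = ""
        for entry, head in zip(self.entries, self.heads):
            prev = hashlib.sha256(prev.encode() + entry).hexdigest()
            if prev != head:
                return False
        return True

log = ToyTransparencyLog()
log.append(b"signature-over-manifest-1")
log.append(b"signature-over-manifest-2")
print(log.verify())            # True
log.entries[0] = b"tampered"   # retroactive edit breaks the chain
print(log.verify())            # False
```

The property that matters for regulation is independence: because anyone can replay and audit the log, neither a platform nor a government can quietly rewrite the record of who signed what.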

As regulations converge on requiring machine-readable provenance, the infrastructure for creating and verifying that provenance becomes critical. sig-share aims to ensure that infrastructure is open, auditable, and accessible to everyone — not just large platforms and well-resourced organizations.