March 5, 2026
Netflix has acquired Interpositive, the AI post-production company founded by Ben Affleck, signaling that streaming-native studios are moving to own proprietary AI pipelines rather than license them from third parties like Runway or OpenAI. Interpositive is not a generative content tool: it learns the visual language of a specific film’s production to assist with coherence, lighting, color, and missing shots. Netflix rarely makes external acquisitions, which underscores how deliberate this move is. The contrast with legacy studios is sharpening: Disney licensed 200+ characters to OpenAI for $1 billion, while Netflix is internalizing the tooling. Mid-tier studios without the capital for either path face the sharpest pressure. Affleck framed the technology as preserving human creative judgment, the “assist vs. replace” distinction that Hollywood is still negotiating.
Meta is facing a lawsuit after footage captured by its Ray-Ban smart glasses was sent to workers in Kenya for data annotation, a process required to train the glasses’ object recognition and scene interpretation models. The subcontractor, Nairobi-based Sama, had previously ended its content moderation work for Meta in 2023 following reports of worker trauma and alleged union-busting, then quietly pivoted to computer vision annotation. Meta discloses in its AI terms of service that data may be subject to human review, but does not specify that the review occurs overseas. The glasses are marketed as “built for your privacy.” The gap between that marketing and the actual data pipeline is consistent with Meta’s documented pattern across Cambridge Analytica, GDPR fines, and FTC scrutiny. The intimacy of the data, wearable camera footage from daily life, makes this iteration notably more sensitive than prior controversies.
Sam Altman told employees this week that their opinions on military AI use “are not taken into account,” a direct response to internal concerns about autonomous weapons and civilian surveillance. More than 100 OpenAI employees signed an open letter demanding the company refuse the Pentagon’s terms. The statement follows Anthropic being designated a national security supply chain risk, a classification typically reserved for foreign adversaries, with OpenAI moving quickly to fill the gap. A reported 295% spike in ChatGPT uninstalls followed, alongside a significant increase in Claude installs, as users responded to the contrast in the two companies’ stated positions. Altman’s strategic logic holds: walking away means the contract goes to xAI with no guardrails. The management problem is different. OpenAI recruited its talent on the promise of building something safe and beneficial, and telling that talent their conscience is irrelevant to classified contracts is a breach of trust that is difficult to walk back.
Three signals in the AI infrastructure and governance layer: Hollywood’s internal AI tooling is consolidating toward ownership by well-capitalized streaming platforms; wearable AI’s data practices are entering the same regulatory scrutiny cycle that mobile and social media already passed through; and the OpenAI-Anthropic positioning gap on military AI is now public, employee-visible, and beginning to move consumer behavior. All three trends are likely to accelerate through 2026.