Residuals are an important part of how actors get paid. They are checks that actors receive long after a film or TV show wraps, because audiences are still watching it, perhaps in another country or on a new channel. Residual checks can range from pennies to thousands of dollars. Currently, Hollywood’s labor unions for writers and actors, the WGA and SAG-AFTRA, are on strike, with streaming residuals among the main deal points.
Beyond Hollywood, there is a larger conversation to be had about residuals. Actors, writers, and other film and television professionals may be among the first to have their jobs threatened by artificial intelligence (AI). AI has the potential to do human-level work at a fraction of the cost. Behind the curtain, however, lies the human labor required to produce the training data. The people who produce that data deserve residuals.
If an AI program could generate a complete feature film at the click of a button, it would need an enormous number of past movies as training data. The people who made those movies should therefore receive a substantial share of any revenue the AI generates. The same principle applies to every profession whose work could be replaced by AI.
Renowned technologists and economists, such as Jaron Lanier and E. Glen Weyl, have argued that Big Tech should not be allowed to monetize people’s data without compensating them. This concept of “data dignity” only grows more important as AI becomes more prominent.
Implementing a residuals program for human data producers would be technologically challenging. AI systems would need to track each piece of training data and attribute it to verified humans, and there would need to be a payment channel for distributing residuals. The law would have to require tech companies to build the necessary software.
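To make the idea concrete, here is a minimal sketch of what the payout side of such a system might look like: a fixed share of revenue from an AI-generated work is split pro rata among verified contributors according to attribution weights. All names here (`Contribution`, `residual_payouts`, the 5% rate, the example contributors) are hypothetical illustrations, not a real payment system or anyone's proposed implementation.

```python
# Hypothetical sketch: pro-rata residuals for training-data contributors.
# Names, weights, and the residual rate are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class Contribution:
    contributor_id: str  # a verified human who produced training data
    weight: float        # attribution score assigned to their contribution


def residual_payouts(contributions, revenue, residual_rate=0.05):
    """Split a fixed share of revenue pro rata by attribution weight."""
    pool = revenue * residual_rate
    total_weight = sum(c.weight for c in contributions)
    return {
        c.contributor_id: pool * c.weight / total_weight
        for c in contributions
    }


# Example: an AI-generated film earns $1,000,000; 5% goes to data producers.
payouts = residual_payouts(
    [
        Contribution("writer_a", 2.0),
        Contribution("actor_b", 1.0),
        Contribution("editor_c", 1.0),
    ],
    revenue=1_000_000,
)
# writer_a gets half the $50,000 pool; actor_b and editor_c a quarter each.
```

The hard part, of course, is not this arithmetic but the attribution weights themselves: deciding how much each piece of training data contributed to a given output is an open research problem.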
Training data also raises the issue of copyright ownership. In the film and television industry, the big studios, not the creators, typically own the copyright. That raises the question of who reaps the benefits when AI uses those creations as training data.
Overall, tech companies need to build software that attributes human authorship to AI-generated outputs and ensures that the people who produced the training data receive their fair share of residuals. The copyright ownership and monetization of intellectual property used as training data should be reevaluated to provide economic relief to those whose jobs AI threatens.