Are Those Movie Posters Really Generated by AI? Let's Talk About It!

April 22, 2024

Image utilized to market A24's Civil War

For anyone who has been on social media in the last couple of months, it has become abundantly clear that AI-generated images are no longer a cool curiosity, but a major topic of public conversation. Gone are the days when we would ooh and ahh at an image of a sci-fi landscape that was clearly not human-created, but was beautiful nonetheless. These days, we rightly worry that any image we see may have been manipulated by artificial intelligence, and justifiably level accusations at publicists, brands, and individuals who appear to use artificial intelligence tools with no transparency at all. Perhaps the clearest example is the apparent use of AI image-generation tools in the creation of promotional materials for the A24 film, Civil War.

There are a number of legitimate concerns here, ranging from the lack of transparency about which tools were used to the loss of work opportunities for talented artists and content creators. While I have much to say about the negative impact this lack of communication may have on the field generally, that is not my domain to speak upon. Rather, as an expert in AI and deepfake technology, I hope today to discuss the tools and potential that AI can and will bring to the industry, and to implore media corporations to consider their use of these tools more thoughtfully and critically, in a way that honors and respects the talents of artists and content creators across the globe.

To begin, let’s discuss the topic at the forefront of this issue: the use of AI image generators to create promotional and artistic materials for film and television.

The Controversy Surrounding AI-Generated Movie Posters: A Call for Transparency


The controversy surrounding the posters for the A24 film "Civil War" has brought the issue of AI-generated promotional materials to the forefront of public dialogue, as viewers spotted obvious artifacts of AI generation in the artwork. These telltale signs included real city landscapes rendered with incorrect layouts, cars with an impossible number of doors, and broken, misshapen buildings.

While some may argue that the use of AI in this context is not inherently problematic, the lack of transparency surrounding these practices is a cause for concern. When studios and production companies use AI-generated images without clearly disclosing their origin, it raises questions about the fairness and integrity of the creative process. For example, are the individuals responsible for using these AI generators being properly compensated for their skills and expertise? Or are talented professional artists being laid off in favor of cost-saving measures? 

Moreover, if such little care is given to the authenticity and quality of the promotional materials, can we trust that the content of the film itself will be genuine and thoughtfully crafted? 

To address these concerns, it is essential for media corporations to approach the use of AI image generators with greater thoughtfulness and critical consideration. This means being transparent about the tools and techniques used in the creation of promotional materials, and clearly distinguishing between AI-generated images and those created by human artists. 

It also means actively engaging with and supporting the artist community, recognizing the value of their unique skills and perspectives. This could involve collaborating with artists to develop new ways of incorporating AI into the creative process, or investing in training and education programs to help artists adapt to the changing landscape of the industry. Generating realistic and powerful AI images, unlike those used in the promotional material for "Civil War," takes talent and training. Talented artists working in collaboration with AI tools can create genuine beauty, and seeing these tools used for cheap cost-cutting is a genuine shame.

AI in Documentaries: Navigating the Fine Line Between Protection and Deception

Another case in which accusations of AI manipulation have been justifiably levied is the Netflix documentary "What Jennifer Did." Glitchy, seemingly inhuman images of Jennifer Pan smiling appeared in the documentary, and the filmmakers have since been accused of using AI-generated images to alter the narrative of a true crime story. While such an egregious abuse of AI technology would be deeply troubling, experts in the field have suggested that this may not have been the case in this particular instance. Deep Media ran these images through our state-of-the-art detectors, and they do not appear to have been generated using AI.

While we expect more information to be released soon, the artifacts that some viewers believed to be signs of AI manipulation may have actually been the result of image super-resolution networks, which are designed to enhance the clarity and usability of low-quality images. When used responsibly, these AI tools have the potential to revolutionize the documentary space, allowing filmmakers to uncover new insights and present compelling stories in ways that were previously impossible.
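For readers curious how enhancement can "invent" detail, here is a minimal, hypothetical sketch (not Deep Media's pipeline, and far simpler than a neural super-resolution network): even plain bilinear upscaling synthesizes pixel values that never existed in the original capture. Learned super-resolution models go much further, hallucinating plausible texture, which viewers can easily mistake for wholesale AI generation.

```python
def bilinear_upscale(img, factor):
    """Upscale a 2D grid of grayscale values by `factor` using bilinear interpolation."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(h * factor):
        # Map the output row back to a (fractional) source row
        sy = min(y / factor, h - 1)
        y0 = int(sy); y1 = min(y0 + 1, h - 1); fy = sy - y0
        row = []
        for x in range(w * factor):
            sx = min(x / factor, w - 1)
            x0 = int(sx); x1 = min(x0 + 1, w - 1); fx = sx - x0
            # Blend the four nearest source pixels
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

small = [[0, 100], [100, 0]]          # a tiny 2x2 "image"
big = bilinear_upscale(small, 2)      # 4x4 result with interpolated values
```

The 4x4 output contains intermediate values such as 50.0 that appear nowhere in the 2x2 source: pixels synthesized by the algorithm rather than captured by a camera. That synthesis, not generation from scratch, is the likely origin of the artifacts viewers noticed.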

Though it is not certain that these exact tools were used, it is clear that some form of editing was applied to content within this documentary, and the lack of transparency has led to rightful speculation about the nature of these images. This controversy highlights the need for greater transparency in the use of AI in documentaries. To avoid such controversies and maintain the integrity of the documentary form, studios and filmmakers must be proactive in communicating their use of AI tools to the public, being clear about the specific techniques and technologies employed, as well as the reasons behind their use.

It is crucial that the use of AI in documentaries does not come at the expense of talented human artists and professionals. As the documentary industry continues to evolve and incorporate new technologies, it is essential that studios and filmmakers approach the use of AI with thoughtfulness, transparency, and a commitment to ethical storytelling.

Concluding Thoughts

As AI becomes increasingly prevalent in the entertainment industry, it is crucial that studios and filmmakers approach these tools with transparency and a commitment to ethical storytelling. The stories surrounding the use of AI in promotional materials and documentaries will not be isolated incidents, and it is genuinely encouraging to see everyday viewers as well as experts in the field scrutinizing the application of AI in entertainment.

Moving forward, transparency must be the foundation upon which AI is integrated into the creative process. By being open and clear about the ways in which AI is being employed, studios can foster trust with audiences and collaborate with artists to develop innovative and ethical applications of these technologies. Ultimately, we envision a world in which the astounding power of AI empowers talented human creators to tell stories that push the boundaries of what they ever thought possible.

To achieve this vision, we must continue to hold studios accountable, advocate for policies that prioritize transparency, and support the development of tools like deepfake detection that can help maintain the integrity of the media we consume. The brilliant sculptor El Anatsui tells us that "Art is a reflection on life" and that "Life isn't something we can cut and fix." May we use El Anatsui's words as inspiration to use the tools at our disposal to tell stories about being human, rather than working to fix something that was never broken to begin with.

by Ryan Ofman, Head of Science Communications and ML Engineer at Deep Media