This Week in AI (February 19-25, 2024)

February 25, 2024

Photo by Steve Johnson on Unsplash

Hello hello to all techies, philosophers, and anyone in between! This week we at Deep Media are starting a new series of blog posts, lovingly referred to as This Week in AI. Now, I can hear you yelling at your screen: "But what about my weekly Deep Media updates?" Worry not, my totally-not-fictional friend: Deep Media update blogs will continue to release every other week, while this new series will fill the weeks in between.

While we will certainly explore the specifics of newly released technologies, we hope to focus more on the rapidly evolving ecosystem of generative AI development and carefully consider how new technologies shape our shared experience of a content-driven world. This series will also give us an opportunity to explore the ethical implications of new technologies, and to watch in real time how these powerful tools are changing the way we interact online and, by extension, in person.

Now, with the longest preamble of my career out of the way, let's check out This Week in AI!

OpenAI's Sora: A Text-to-Video Revolution

There is no doubt that the biggest AI news this week is OpenAI's introduction of its new text-to-video model, Sora. Sora generates videos of unbelievable quality and realism from text prompts alone. While what we saw was merely a hand-selected sample from OpenAI, and access to the model is harder to get than Taylor Swift tickets, Sora has awe-inspiring potential, and I would be lying if I said I wasn't excited to see more. I'll keep this short, as Deep Media just released a full article about Sora; read it here!

Google's Gemma Models: A New Frontier in Accessibility

Google's latest contribution to the AI community, Gemma, is a family of lightweight, open-weight language models built from the same research and technology behind Gemini. Released in 2B and 7B parameter sizes, with both pretrained and instruction-tuned variants, Gemma is designed to be small enough to run on consumer hardware like laptops and developer workstations, and its weights are freely available to developers and researchers. Gemma stands as one of the strongest openly available models of its size to date, and a clear signal that accessibility is becoming a front in the generative AI race.

Gemma's potential applications are vast, ranging from chat assistants and summarization to code generation and research tooling. However, as with any powerful technology, Gemma's capabilities come with a set of ethical considerations. Freely downloadable weights mean that built-in safeguards can, in principle, be fine-tuned away, raising concerns about misinformation and the potential for misuse in spreading false narratives at scale. As with all new generative technologies, the opportunity to use Gemma to enhance people's natural creativity is vast, but it must come with mechanisms to prevent the potential harms of a model this versatile and accessible. We're personally looking forward to the opportunity to try out Gemma!

While Gemma looks to be a powerful new addition to the generative AI landscape, this week also featured the tease of a new model from one of gen AI's most popular players, Stability AI: Stable Diffusion 3.

Stable Diffusion 3: The Evolution of Text-to-Image Generation

The announcement of Stable Diffusion 3 marks another milestone in the evolution of text-to-image generation. Building on the success of its predecessors, this latest version promises improved image quality, greater diversity in outputs, and enhanced user control over the generation process. The model's ability to produce highly detailed and accurate images from textual descriptions has implications for industries ranging from advertising to gaming to film production. With this enhanced image quality, of course, comes the ability to create more realistic and deceptive deepfake images. We are particularly interested in using this new model to test the generalizability of our own detection solutions, as in recent months we have put particular effort into ensuring our detection platform can handle data it was not specifically trained on. This new Stable Diffusion release promises to push generative AI detection to its limits, and we can't wait to get our hands on the model.
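For readers curious what "diffusion" actually means under the hood, here is a minimal, hand-rolled sketch of the forward (noising) process that models in this family are trained to reverse. The linear beta schedule, function names, and parameter values below are illustrative assumptions for the demo, not Stability AI's implementation:

```python
import numpy as np

def make_alpha_bars(num_steps=1000, beta_start=1e-4, beta_end=0.02):
    """Cumulative product of (1 - beta_t) for a linear noise schedule."""
    betas = np.linspace(beta_start, beta_end, num_steps)
    return np.cumprod(1.0 - betas)

def noise_image(x0, t, alpha_bars, rng):
    """Sample x_t ~ q(x_t | x_0): blend the clean image with Gaussian noise."""
    a = alpha_bars[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * eps

rng = np.random.default_rng(0)
alpha_bars = make_alpha_bars()
x0 = rng.standard_normal((8, 8))  # stand-in for a tiny "image"

early = noise_image(x0, t=10, alpha_bars=alpha_bars, rng=rng)   # mostly signal
late = noise_image(x0, t=999, alpha_bars=alpha_bars, rng=rng)   # mostly noise

# Early timesteps stay highly correlated with the original image,
# while late timesteps are close to pure Gaussian noise.
print("early-step correlation:", round(np.corrcoef(x0.ravel(), early.ravel())[0, 1], 3))
print("late-step correlation:", round(np.corrcoef(x0.ravel(), late.ravel())[0, 1], 3))
```

The reverse process, a large neural network trained to predict and strip away that noise one step at a time, is where models like Stable Diffusion do the real work; conditioning on a text prompt steers each denoising step toward the described image.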

The advancements in Stable Diffusion 3 highlight the ongoing challenge of ensuring the ethical use of AI-generated content. As the line between real and synthetic images becomes increasingly blurred, the risk of misuse in creating deceptive or harmful content grows. This underscores the need for robust ethical guidelines and regulatory frameworks, ranging from social media platform policies to governmental regulation.

Wrapping Up

As we move forward, it is crucial for the AI community, policymakers, and society at large to engage in open and constructive discussions about the responsible use of these technologies. Developing clear guidelines and standards will be essential to ensure that the benefits of AI are harnessed for the greater good, while mitigating the risks associated with its misuse.

In the near future, we can expect to see advancements in AI models that offer even more refined and nuanced content generation capabilities. These models will likely focus on improving the coherence and context-awareness of generated content, allowing for more complex and believable outputs. Additionally, we can anticipate the emergence of AI tools that enable more seamless integration of generated content into various media formats, enhancing the user experience and expanding the range of applications.

Furthermore, the intersection of generative AI with other technologies, such as augmented reality (AR) and virtual reality (VR), promises to create immersive experiences that blur the line between the virtual and the real. As these technologies converge, we can expect to see groundbreaking applications in gaming, education, and social media.

However, as the generative AI landscape continues to evolve, it is imperative to remain vigilant about the ethical implications and potential societal impacts of these technologies. Ensuring transparency, accountability, and fairness in AI development and deployment is the only way to maintain public trust and continue to foster responsible innovation.