
Silicon Valley wants to help its users identify AI-generated images

Google, OpenAI, and Meta are three of Silicon Valley's flagship companies working to help users easily detect when an image was created with AI. This is how their tools work.

In an increasingly digital world, the ability to distinguish between real images and those created with Artificial Intelligence (AI) has become a crucial skill. Through these three companies, Silicon Valley is working to empower users and make that task easier.

The proliferation of advanced generative technologies has made producing realistic images extremely easy, posing significant challenges in various fields.

The creation of these images, combined with the difficulty of telling whether they are fake, poses a serious danger. For example, many of them can be used to create fake news, which often spreads quickly through social media.

These files can influence public opinion, manipulate elections, or generate panic in emergencies. Detecting and tagging these images is vital to maintaining the integrity of the information that circulates online.

Furthermore, AI-generated images can be used to deceive people, whether through fake social media profiles, online scams, or phishing. Detecting these images can prevent fraud and protect people from being misled.

According to KPMG, nearly 100,000 AI models are ready to create fake material at any time. People therefore need the ability to tell whether an image, audio clip, or video they see on the internet is real.

The technology giants of Silicon Valley have taken up the fight against deepfakes. These are some of the strategies that companies like Google and OpenAI have launched to mitigate one of the biggest risks in the use of AI.

1. OpenAI

In April 2024, OpenAI joined the steering committee of the Coalition for Content Provenance and Authenticity (C2PA). In doing so, the company entered one of the most critical debates about democracy in the age of AI: how will we know that what we see online is real?

In January, OpenAI began adding C2PA metadata to all images created and edited with DALL·E 3 in ChatGPT and the OpenAI API. It also announced the integration of C2PA metadata for Sora, its video generation model.

With this measure, people can still create misleading content or strip out the metadata; what they cannot do is falsify or alter it without the tampering being detectable.
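To make the idea concrete, here is a minimal sketch of how cryptographically signed provenance metadata works in principle. It is not the real C2PA format, which embeds a manifest signed with X.509 certificates inside the file; the `sign_manifest` and `verify_manifest` helpers, the HMAC key, and the field names below are illustrative assumptions only.

```python
import hashlib
import hmac
import json

# Hypothetical signing key; real C2PA manifests are signed with certificates
# held by the image generator, not a shared HMAC secret.
SIGNING_KEY = b"demo-key-held-by-the-image-generator"

def sign_manifest(image_bytes: bytes, tool: str) -> dict:
    """Build a simplified provenance manifest and sign it."""
    manifest = {
        "claim_generator": tool,  # e.g. "DALL-E 3"
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Return True only if the manifest is untampered and matches the image."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest.get("signature", ""))
        and claimed["image_sha256"] == hashlib.sha256(image_bytes).hexdigest()
    )

image = b"...raw image bytes..."
manifest = sign_manifest(image, "DALL-E 3")
print(verify_manifest(image, manifest))           # True: provenance intact
manifest["claim_generator"] = "a human photographer"
print(verify_manifest(image, manifest))           # False: edited metadata is detected
```

The property described in the article falls out of the signature: stripping the manifest from a file remains possible, but rewriting it to claim a different origin cannot be done without the signing key.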

2. Google

In 2023, Google, one of the giants of Silicon Valley, launched a tool designed to identify images created with AI technology.

The US-based search and online advertising company announced the new tool in a statement.

A team at Google DeepMind developed the tool, called SynthID, in collaboration with Google Research. SynthID is designed to work with Imagen, Google's AI image generator.

This system creates cinematic-quality images from simple text prompts. For now, only Imagen users will be able to use the AI identification tool.

SynthID works by embedding a hidden digital watermark in images. Watermarks have long been used on paper documents and banknotes as a way to mark them as real or authentic.

A digital watermark serves the same purpose, but uses technology to embed hidden markers in the image data itself. These are impossible to spot just by looking at an image; special tools are needed to detect the watermark.
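SynthID's actual watermark is produced by neural networks and is designed to survive edits such as cropping, resizing, and compression; Google has not published its internals. Purely to illustrate what "hidden markers embedded in the image data" means, the sketch below uses a classic least-significant-bit scheme, which is a different and much simpler technique; the function names and the 8-bit tag are assumptions made for this example.

```python
import numpy as np

WATERMARK_BITS = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # assumed 8-bit tag

def embed_watermark(pixels: np.ndarray, bits: np.ndarray = WATERMARK_BITS) -> np.ndarray:
    """Hide the bit pattern in the least significant bit of the first len(bits) pixels."""
    marked = pixels.copy().ravel()
    marked[: bits.size] = (marked[: bits.size] & 0xFE) | bits  # overwrite only the lowest bit
    return marked.reshape(pixels.shape)

def read_watermark(pixels: np.ndarray, n_bits: int = WATERMARK_BITS.size) -> np.ndarray:
    """Recover the hidden bits from the lowest bit of each pixel."""
    return pixels.ravel()[:n_bits] & 1

image = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)  # stand-in grayscale image
marked = embed_watermark(image)
print(read_watermark(marked))                                   # -> [1 0 1 1 0 0 1 0]
print(np.abs(marked.astype(int) - image.astype(int)).max())     # <= 1: visually identical
```

Changing only the lowest bit shifts each pixel value by at most one, which is why such a mark is invisible to the eye yet trivially readable by software that knows where to look.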

3. Meta

In February 2024, Meta announced that it is working on a way to better detect and identify AI-generated images on Facebook, Instagram, and Threads ahead of the 2024 elections.

Nick Clegg, Meta's president of global affairs, explained that the platforms would soon inform users when an image in their feed has been generated using AI.

Currently, Meta adds a watermark and metadata to images generated with its Meta AI software. Now, however, the company wants to extend that capability to images generated by other companies such as Adobe, Google, Midjourney, Microsoft, OpenAI, and Shutterstock.

Clegg stated that his company is working with the Partnership on AI, a non-profit organization created to ensure that AI produces positive outcomes for people and society.
