How can images generated by artificial intelligence be recognized?

AI tools that generate images from text are becoming widespread, and their output is increasingly difficult to recognize. As the technology advances, visible errors in this content are becoming rarer, but the providers of these tools are working to make synthetic content identifiable.

How to recognize images generated by artificial intelligence – PianetaCellulare.it (Credit: metamorworks/shutterstock)

Generative artificial intelligence models are no longer limited to text: they have become advanced enough to create images and, more recently, video and audio as well. We are referring to tools, free or paid depending on the service and features offered, that let you type a text prompt, for example "cats playing football", and obtain a static or animated image matching the request. Microsoft, for instance, recently introduced the free "Designer" tool in its Copilot software, based on OpenAI's DALL-E 3 model. At the same time, errors in this generated content are decreasing as the capabilities of AI models advance. Fortunately, the vendors of these tools are themselves working to make synthetic content identifiable.

AI can also generate images, not just text

While generative AI tools that can create videos on demand are still rare today (such as Sora, recently announced by OpenAI), those that create images from text are increasingly at the center of discussion, because they drive the spread of false content online. Anyone who knows how to use these tools can ask the AI to create, for example, an image of a famous person in a context far removed from reality. This technique, known as a deepfake, is proving increasingly sophisticated in the absence of adequate safeguards, making it difficult to distinguish real content from fake. It is an increasingly dangerous technology, because malicious actors use it to create misleading advertising campaigns.

Deepfake technology fuels the spread of inauthentic content online – PianetaCellulare.it (Credit: Tero Vesalainen/shutterstock)

Generative AI is used to create outright scams designed to attract the attention of potential victims. For example, a famous person's image, and sometimes their voice, is often used to create campaigns inviting you to invest online and earn a lot of money easily. This type of fraudulent campaign appears especially among ads on social media, so you should be very careful.
While listening to Radio Deejay one morning, we heard Alessandro Cattelan draw attention to this topic, reporting that someone had created a fake advertising campaign by replicating his voice with artificial intelligence. Other celebrities have since raised much the same complaint: criminals exploiting their image in scams.

How to recognize images generated by generative AI

Recognizing ads and content created with generative AI is becoming increasingly difficult, although some content shows easily noticeable "production defects". For audio content, for example, pay attention to the language: AI speech can sound forced and unnatural, as if the speaker were a machine not yet able to reproduce certain nuances of tone. In recent months, some entertainment radio shows have created content imitating celebrities' voices to amuse the public: even an untrained ear is enough to recognize the presence of artificial intelligence.

Images generated by artificial intelligence can be difficult to recognize, as creation techniques develop rapidly. Much depends on the model underlying the generative tool: less advanced models may fail to render details accurately, so analyzing detail-rich images can reveal clues (malformed hands and garbled text are classic examples). More advanced models do better, making AI involvement harder to spot. Then there are images whose improbable content makes them obviously fake without much thought: consider, for example, the image of the Pope climbing a mountain.

AI-generated images must be identifiable

If you have doubts about the nature of an image and cannot find elements indicating whether it was created by artificial intelligence, the companies behind the tools that create this type of content are working to make it easier to identify. For example, Meta, OpenAI, and Microsoft have made efforts to embed identifiers and/or metadata in content generated by their tools (MetaAI, DALL-E, and Copilot) to facilitate identification of AI-generated content. Some tools also generate images with a watermark visible to the human eye somewhere in the image itself. As for invisible identifiers and metadata, you need tools capable of "reading" this information hidden in the image; as we will see below, some are available online, even for free.
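To give an idea of what "reading" such metadata means in practice, here is a minimal sketch in Python using the Pillow imaging library. It only inspects plainly readable fields (PNG text chunks and the EXIF "Software" tag) for a few marker strings we chose as illustrative assumptions; it cannot detect invisible watermarks such as SynthID, and the absence of markers proves nothing about an image's origin.

```python
# Sketch: scan an image's readable metadata for common AI-generation
# marker strings. The marker list below is an illustrative assumption,
# not an official registry.
from PIL import Image  # pip install Pillow

AI_MARKERS = ("dall-e", "midjourney", "stable diffusion",
              "firefly", "c2pa", "ai generated", "openai")

def find_ai_markers(path):
    """Return (field, marker) pairs found in the image's metadata."""
    hits = []
    with Image.open(path) as img:
        # Per-format metadata, e.g. PNG tEXt chunks, lands in img.info
        for key, value in (img.info or {}).items():
            text = f"{key}={value}".lower()
            for marker in AI_MARKERS:
                if marker in text:
                    hits.append((key, marker))
        # EXIF "Software" tag (0x0131), where some generators sign themselves
        software = str(img.getexif().get(0x0131, "")).lower()
        for marker in AI_MARKERS:
            if marker in software:
                hits.append(("Software", marker))
    return hits
```

An empty result from a script like this does not mean the image is authentic: metadata is easily stripped by screenshots, re-encoding, or social media uploads, which is exactly why invisible watermarks and dedicated detection services exist.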

It's worth noting that Meta has made an effort to recognize the AI-generated content that users share across Instagram, Facebook, and Threads, adding a label that clearly tells people the content they're looking at was AI-generated. Meta is also working on its own algorithm to recognize AI-generated content even when it lacks standard identifiers. We talked about that here.

There are online tools to recognize images generated by artificial intelligence

When in doubt about the nature of an image, if it lacks identifiers or visible labels indicating that it was created with artificial intelligence and you notice no strange elements or errors in it, some online tools can help. Be careful, though: these tools can read any identifiers and metadata present in the images, but in their absence we do not know whether they may produce false positives, that is, flag an image as AI-generated when in reality it is not. That said, searching the web, Sky TG24 and Fanpage recommend these tools for identifying fake images: "AI or Not", "Illuminarty", "Hugging Face" and "Forensically". Also worth noting is SynthID (deepmind.google/technologies/synthid/), a Google DeepMind tool, currently in beta, that adds identifiers to AI-generated content and helps identify it.
