
    Twitter’s Community Notes needs AI tools to detect AI-generated images

    I applied and was admitted to Twitter’s Community Notes program around a week ago, and sensibly, you have to earn your stripes before being able to create new Notes against content.

    Over the past week, I’ve provided lots of feedback on Community Notes written by other users, which add important context and source links to claims made on the platform.

    Today, I’ve progressed to the next stage and can now create new Community Notes.

    This new responsibility made me consider the challenge and scale of this task, particularly one item that came in for review: a post of a woman purported to be the new CEO of Twitter, but which was in fact a cleavage shot of someone very different. It appears to be a thinly veiled attempt at getting views/clicks rather than a malicious attempt to mislead people, but it certainly wasn’t satire. At the time of writing, the post had 4.7 million views.

    Community Notes contributors are doing great work confirming this is not true, but it raised in my mind the very important challenge that lies ahead.

    As AI-generated images (and soon video) become photorealistic, it becomes incredibly difficult for humans to tell these images apart from actual photos.

    The challenge of fakes is nothing new, but creating one previously meant someone had to have some decent Photoshop skills and a strong motivation to spend hours creating the fake.

    AI-generated images are even more realistic than Photoshop fakes. They are produced by generative models trained on enormous collections of real photos, using deep learning, a type of machine learning that allows computers to learn patterns from data without being explicitly programmed.
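    To illustrate how low the barrier has become, here’s a minimal sketch using the open-source Hugging Face diffusers library. The model name and prompt are examples only; any of the freely available text-to-image models works much the same way.

        import torch
        from diffusers import StableDiffusionPipeline

        # Download a freely available text-to-image model and move it to the GPU.
        pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
        )
        pipe = pipe.to("cuda")

        # One line of text in, a photorealistic image out, in seconds.
        image = pipe("photo of a businesswoman at a press conference").images[0]
        image.save("generated.png")

    A few seconds of GPU time per image is all it takes, which is exactly why the volume problem described below is so daunting.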

    AI-generated images are already being used for a variety of purposes. For example, they are being used in the entertainment industry to create realistic special effects. They are also being used in the medical field to create realistic simulations of medical procedures.

    However, AI-generated images also have the potential to be used for malicious purposes. For example, they could be used to create fake news articles or propaganda videos. They could also be used to create deep fakes, which are videos that have been manipulated to make it look like someone is saying or doing something they never said or did.

    As the technology improves, it will only become harder to tell them apart from real photos.

    So here’s the problem: the rate at which these images can be created and shared is so great that even a growing number of Community Notes members will have no chance of keeping up with them. Don’t think in terms of hundreds or thousands per day, but millions.

    This is a serious problem that could have a negative impact on our society. Twitter (and other social media companies) need to develop new AI-powered tools and put them in the hands of groups like Community Notes to deal with AI-generated images.

    At least for now, Twitter should try something like a Tinder-style rapid swiping experience, using an almost game-style interface, asking humans to decide whether they believe an image is a photo or AI-generated. This would allow dozens of images to be reviewed in just minutes, whereas today’s Community Notes are typically well thought-through responses that provide sources to evidence their claims, likely taking many minutes per response.

    This could also serve as good training data when the vast majority vote that an image is a real photo or AI-generated. Individually, I’m sure we’d get it wrong from time to time, but collectively, over a large enough user base, I imagine this would work, as the sketch below suggests.
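    To make the idea concrete, here’s a rough sketch of the vote tally that could sit behind such a swipe interface. Everything in it (the class name, thresholds and verdict labels) is hypothetical:

        from collections import defaultdict

        class SwipeTally:
            """Collect photo-vs-AI swipe votes and report a consensus label."""

            def __init__(self, min_votes: int = 100, min_agreement: float = 0.8):
                self.votes = defaultdict(lambda: {"photo": 0, "ai": 0})
                self.min_votes = min_votes          # don't trust tiny samples
                self.min_agreement = min_agreement  # require a clear majority

            def swipe(self, image_id: str, verdict: str) -> None:
                assert verdict in ("photo", "ai")
                self.votes[image_id][verdict] += 1

            def consensus(self, image_id: str):
                tally = self.votes[image_id]
                total = tally["photo"] + tally["ai"]
                if total < self.min_votes:
                    return None  # not enough reviews yet
                top = max(tally, key=tally.get)
                return top if tally[top] / total >= self.min_agreement else None

    Images that reach a strong consensus could be labelled on the platform, and the resulting (image, label) pairs fed back as training data for an automated classifier.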

    Right now, our ability to detect AI-generated images comes down to looking for things that look unnatural. For example, AI-generated images often have unnatural lighting or textures. They may also have strange artifacts, such as blurry edges or missing pixels.
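    Researchers have also looked for statistical tells that human eyes miss, such as odd patterns in an image’s frequency spectrum. The toy function below (just numpy and Pillow) shows the kind of signal involved; it’s an illustration of the approach, not a working detector:

        import numpy as np
        from PIL import Image

        def high_freq_ratio(path: str) -> float:
            """Fraction of spectral energy outside an image's low-frequency core."""
            gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
            spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
            h, w = spectrum.shape
            r = min(h, w) // 4
            low = np.zeros_like(spectrum, dtype=bool)
            low[h//2 - r:h//2 + r, w//2 - r:w//2 + r] = True  # low-frequency core
            return spectrum[~low].sum() / spectrum.sum()

        # Unusually regular high-frequency energy was a known tell for some earlier
        # generators; newer models have largely closed that gap.
        print(high_freq_ratio("suspect.jpg"))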

    I wonder when the generators get smart enough to start embedding EXIF data in the file’s metadata, pretending an image was taken with a particular model of camera or smartphone, just to trick the detection AI.
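    Checking that metadata is trivial, which is exactly why it’s such a weak signal. A quick sketch using Pillow (the filename is a placeholder):

        from PIL import Image
        from PIL.ExifTags import TAGS

        def camera_tags(path: str) -> dict:
            """Return an image's human-readable EXIF tags, if it has any."""
            exif = Image.open(path).getexif()
            return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

        tags = camera_tags("suspect.jpg")
        print(tags.get("Make"), tags.get("Model"))  # AI images usually have neither

    The catch is that anyone can write those same tags just as easily, so metadata alone can never prove an image is real.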

    Jason Cartwright
    https://techau.com.au/author/jason/
    Creator of techAU, Jason has spent a dozen-plus years covering technology in Australia and around the world. Bringing a background in multimedia and a passion for technology to the job, Cartwright delivers detailed product reviews, event coverage and industry news on a daily basis. Disclaimer: Tesla shareholder from 20/01/2021
