Could AI disclaimers on Instagram help you spot AI-generated influencers?


(Image: Person using Instagram on a phone. NurPhoto/Contributor/Getty Images)

Instagram could be working on a feature that would notify users when AI has partially or totally created a post they come across. According to a post from app researcher Alessandro Paluzzi, posts made by AI will have an accompanying label explaining that AI played a part in the post’s creation.

Also: 5 things to know about Meta’s Threads app before you entangle your Instagram account

But could the labels help people spot when generative AI creates an entire account? 

Three popular Instagram influencers, Lil Miquela, Imma, and Shudu, have amassed over eight million followers and received brand deals worth millions of dollars. However, all three are AI-generated: their photos are created with the help of artificial intelligence.

They each live exciting lives and have partnered with brands like Dior, Calvin Klein, Chanel, and Prada. Despite each AI influencer having a variation of the phrase “digital persona” in their bio and all of their photos having an uncanny valley feel, many commenters and followers believe the influencers are real.

Each digital influencer is the product of a tech firm that employs graphic designers and digital artists to create images of the influencers with the help of artificial intelligence. 

Digital influencers are attractive to brands and marketing companies because they cut the costs associated with travel, eliminate language barriers, and can change their look to conform to any brand at the drop of a hat.

Also: Can AI detectors save us from ChatGPT? I tried 5 online tools to find out

More importantly, digital influencers aren’t a brand risk. They don’t have opinions or political values and don’t have any questionable tweets from ten years ago. There is nothing a digital influencer can do that would jeopardize the integrity of a brand.

Additionally, digital influencers can't age or do something to their appearance that doesn't align with a brand's values. For example, Lil Miquela has been 19 since her Instagram account was created in 2016. Since then, she's collaborated with celebrities, graced magazine covers, and raked in millions of dollars.

Also: How to achieve hyper-personalization using generative AI platforms

But when people quickly scroll past Lil Miquela’s posts, how many can immediately tell she is AI-generated? Experts say not many. Young people are particularly impressionable, and the content they see online shapes their view of themselves and the world around them.

And digital influencers can look perfect and live an ideal life at all times, which could add to the pressure and unease teens already feel when scrolling social media.

So, whose job is it to "out" Lil Miquela as an AI-generated influencer: Instagram, her "owners," or the users who should better judge the content they consume? Some in government argue for creating new agencies to hold Big Tech to stricter standards.

Big Tech says such regulation could stifle innovation, while many users believe the responsibility lies with tech and social media companies. 

AI-generated content labels would not be Instagram's first attempt to help users better understand the content they come across. In 2020, during the throes of the COVID-19 pandemic, Instagram blocked hashtags that spread vaccine misinformation and provided users with trusted information about COVID-19 and the vaccines.

Also: Instagram feed fix: How to see more of what you want (and less of what you don’t)

But generative AI isn't as cut and dried as providing links to the National Health Service or the Centers for Disease Control and Prevention. AI-generated content can be harder to spot and contain, and tech companies must combat the possible dangers of misinformation propagated by generative AI.
