AI has created areas so grey, you could write a song about it


[Image: sound waves abstract. Getty Images/maxkabakov]

These days, it feels like I need to put a disclaimer at the top of my stories so no one assumes any part of the content was generated by artificial intelligence (AI). Rest assured, only human brains were involved in the making of this piece — one writer’s brain and two editors’ brains, to be precise, and none have been implanted with a chip.

Unfortunately, we’re now at a stage where we can no longer easily distinguish between humans and robots. We’ve not reached the all-controlling level of Skynet yet, but the power of AI today marks a pivotal point for a technology that had been chugging along — mostly in the backend — for years and is now finally more accessible to, and better understood by, the general public. 

Also: AI could automate 25% of all jobs. Here’s which are most (and least) at risk

This emergence is largely thanks to the generative AI platform ChatGPT, which has fascinated many of us with its ability to mimic humans and assist with various tasks, including writing software code, creating travel itineraries, and composing email messages and essays. Venture beyond ChatGPT and you’ll find other AI-powered applications that can produce images and songs “inspired” by popular artists and writers.

And herein lies the crux of a debate over where the lines should be drawn on how AI is used in some industries. 

For me, in my work as a journalist, the lines are crystal clear. Factual inaccuracy and plagiarism are big red flags. It is for these reasons that tools such as ChatGPT have absolutely no role to play in my craft, not even as a research assistant. 

Also: How to use ChatGPT: Everything you need to know

I’m guessing lawyers probably share my concerns, particularly after one of their peers in New York was called out for citing judicial decisions from past cases that never existed. Yep, he let ChatGPT do the research and it generated content based on false sources. 

The lines, however, may not always be as clear. 

AI is increasingly used to create music based on the styles of popular artists, and also to produce songs that are “sung” by a voice that sounds very much like a specific pop star. Singapore-based singer Stefanie Sun, for instance, apparently recorded a cover of Avril Lavigne’s Complicated — except she actually didn’t.  

To an untrained ear, the AI-generated voice sounds just like Sun, who has sold more than 30 million records since her debut in 2000. Her fans, though, say her AI counterpart is easily distinguishable because it lacks the emotive nuances of the singer.

That perception, however, could change, Sun herself has acknowledged. In a blog post last week, she joked that her AI persona is enjoying more fame now that her own heyday is over, and that it’s impossible to compete with someone capable of releasing new albums in mere minutes.

Also: How I tricked ChatGPT into telling me lies

Sun says AI has got better at processing and piecing together information to form opinions and thoughts — something humans were once convinced could not be replicated. The singer adds that it may only be a matter of time before AI makes further advancements and is able to mimic human emotions. 

“This new technology will be able to churn out exactly everything, everyone,” Sun writes. “You are not special. You are already predictable and also, unfortunately, malleable.”

The singer’s label reportedly isn’t considering legal action because there’s currently a lack of regulation around generative AI. 

While Sun sees her AI counterpart as a potential competitor, Canadian singer-songwriter Grimes is more open to the idea of music created using an AI version of her voice — provided anyone who does so shares the royalties 50/50. Grimes has invited imitators to register their music via her website, where she plans to make her vocal samples available to aid the AI process. “I think it’s cool to be fused with a machine and I like the idea of open sourcing all art and killing copyright,” Grimes tweeted.

Others in her industry are less receptive to the new revenue model.

Also: ChatGPT is more like an ‘alien intelligence’ than a human brain, says futurist

US rapper Ice Cube said in a podcast interview that he would sue anyone who makes a song with his AI-generated voice, as well as any platform that plays it.

His comments come on the heels of a song called Heart On My Sleeve, which was presumably created by AI and made to sound like rapper-singer-songwriter Drake and singer-songwriter The Weeknd. The two Canadians are known collaborators. 

Heart On My Sleeve went viral on various platforms, including TikTok and Spotify, before it was removed at the request of the singers’ record label. Copies of it are still available on YouTube.

The source behind the song reportedly created it using AI models trained on the artists’ works, styles, and voices. 

Lawyers far better equipped than I am — and articles featuring interviews with them — have already debated the potential legal issues around AI-generated songs like Heart On My Sleeve, so I’m not going to do that here. Suffice it to say, the song raises a bunch of questions around fair use and misrepresentation, and invites comparisons to professional impressionists, impersonators, and tribute acts. 

What truly matters as AI becomes everything, everywhere

I am, however, keen to draw some parallels with how human artists and musicians find inspiration for their work. We often hear how great songwriters are influenced by those who came before them. Bruno Mars cites Elvis Presley and the Beach Boys among his music influences, while Billie Eilish points to the Beatles and Green Day.

These artists grew up listening to and learning from these musicians, applying what they feel is most in sync with their own styles, and creating their own art.

Also: AI can write your emails, reports, and essays. But can it express your emotions?

In some ways, that’s exactly what large language models and generative AI tools such as ChatGPT do. They produce new works based on what they learn from past works. The only significant difference is that human minds are shaped and influenced by the particular works we happen to admire as we grow and learn, while AI models have no such inherent preferences — and they have the compute capacity to learn indiscriminately from everything they ingest. 

So, assuming no copyright was breached and there is no misrepresentation, why should AI-generated content that draws inspiration from famous works be any different from human-generated content that also draws inspiration from famous works? And aren’t most products based on basic foundational structures and best practices anyway? 

That’s roughly the argument that British singer-songwriter Ed Sheeran used in the lawsuit he won over Marvin Gaye’s Let’s Get It On, in which he was found not liable for copyright infringement. Sheeran’s lawyer Ilene Farkas told jurors that the similarities between the chord progressions and rhythms used in Gaye’s and Sheeran’s songs were “the letters of the alphabet of music.” “These are basic musical building blocks that songwriters now and forever must be free to use, or all of us who love music will be poorer for it,” Farkas said.

Also: Just how big is this generative AI? Think internet-level disruption

Musician and YouTuber Rick Beato says it plainly: “You cannot copyright a chord progression.”

So, where does that leave humans, as AI continues its growth trajectory and its use becomes pervasive? How can we differentiate ourselves when we have to compete against an entity with a greater capacity to process and learn? 

I think we have to continue to innovate and be creative in how we apply our knowledge. We must put our own unique spin on top of these basic foundations and incorporate elements not commonly used by others. 

Just like when the internet emerged and then became popular, we can’t let access to new technologies like AI make us lazy. “Don’t keep repeating things,” Beato says.

I recently moderated a roundtable discussion during which I cheekily said my questions for the participants had been generated by my human brain, without the assistance of AI. “But why not?” a couple of the attendees asked. 

My response to that question was a no-brainer (pun intended). A generative AI tool like ChatGPT could very well have come up with a list of brilliant questions based on the roundtable dialog, which, ironically enough, was on AI. However, it would be unlikely to adapt and modify the questions in real time as the conversation moved along. 

Also: AI bots have been acing medical school exams

I always have a list of questions ready at the start of every discussion I moderate, but I’m constantly following up with new ones based on insights participants share as the roundtable conversation progresses. I tweak my questions along the way to adapt to the evolving discourse, which is often filled with references to local industry developments and personal anecdotes not previously shared. 

All of these insights, including my cheesy sense of humor, can’t be easily reproduced by an AI model — for now at least. And that’s how I hope my knowledge and skills can retain some relevance in the AI era. 

After all, there is tremendous potential for what AI could bring to healthcare, and there’s even more urgency to address issues around AI ethics and data security — before it’s too late.
