CHALLENGE: How I Used AI to Create a Song in the Style of Peter Gabriel
Wires humming, choice is clear.
The great divergence is finally here.
Not less human multiplied, with more intelligence by your side.
Use it wisely, light the way.
For generations yet to say.
They saw the split.
All right, welcome.
I'm Mike Adams, and what you just heard there is an experiment.
And welcome to this demonstration, by the way.
It's an experiment to see if we can create music in the vocal style and musical style of the great Peter Gabriel, who's one of the most creative, innovative, and loved musicians of our lifetimes, just a real treasure for Western culture.
And in this podcast here, I'm going to play the full song for you and explain what I did as an AI developer and a musician and an audio engineer.
Well, not exactly how, since I'm not giving you a recipe book, but I will explain some of the concepts behind putting this together.
And also, I'm going to talk about artists and intellectual property, as well as how artists are working with AI now, or some are choosing to, some are not choosing to.
And I'm going to talk about Suno and the Warner Music Group and a little bit about intellectual property.
And I'm going to give my suggestions of how I think beloved artists can leverage AI without losing control over their likeness.
And so in any case, I'm going to play the full song for you here.
It's about four and a half minutes.
This is a song that I created meticulously on Suno without using the name Peter Gabriel.
In fact, I don't even think that Suno allows you to use an artist's name in a prompt; whether they allow it or not, I don't think they do.
But I deliberately chose not to use the name Peter Gabriel, but rather to recreate the vocal styles, the harmonic styles, musical styles, etc., which I'll talk about coming up, in honor of Peter Gabriel's music as a fan and as a demonstration.
And also in good faith, by the way, this is not an attempt to rip off Peter Gabriel's voice.
The music, the lyrics, the message of the song are very uplifting.
It's a pro-human message, I think, that most musical artists would agree with.
And I'm doing this as a non-profit demonstration.
This is not a commercial project at all.
This music is available for free.
And as you hear in this podcast, I am explaining exactly what I'm doing, attempting to mimic the music and the vocals of Peter Gabriel, but without his consent or permission, because I don't have his phone number.
I mean, I don't have a way to reach Peter Gabriel and ask for permission.
So I thought I would create this and then have a discussion about it and hopefully help contribute to this ongoing conversation about how musical artists can coexist with AI technology while the world enjoys their music or their vocals, their musical styles, and while the original artists get credit for it.
So going to have all that discussion coming up.
But first, let's play the full song.
Four and a half minutes.
I call it The Great Divergence.
I wrote all the lyrics.
It's my concept of the song.
But then as an AI developer and an audio engineer, I created this song as a performance to mimic Peter Gabriel and his music styles, which I absolutely love.
Longtime fan.
Here you go.
Circuits humming.
Code comes alive.
The future's starting to arrive.
Two worlds splitting at the seams.
Wake up now or lose the dream.
Machines are thinking, learning quick.
Human jobs are slipping slick.
Those who prompt will multiply.
Those who don't will wonder why.
Compute is now the crown you wear.
Knowledge flows to those who dare.
The gap is growing, can't you see?
One side drives, one bends the knee.
Learn the tools or lose the fight.
Darkness fades before the light.
Hesitate and miss your chance.
Join the rise or lose the dance.
Feel the great divergence calling.
Half horizon, half are falling.
Grab the future, hold on tight, or vanish slowly from sight.
The canyon splits the old and new.
Which side is waiting there for you?
No standing still, no middle ground.
Evolve right now or don't be found.
Pocket doctor, pocket law.
Chemist answering your call.
Genius resting in your palm.
Guiding you through every storm.
Gates are falling, walls come down.
Write, learn, build a wisdom crown.
Write any book, learn anything.
Feel the coward on his brain.
Drudgery will fade away.
Automation shakes your day.
Chains of boredom finally break.
The innovator's path to take.
Feel the great divergence calling.
Half horizon, half are falling.
Grab the future, hold on tight.
Or vanish slowly from sight.
The canyon splits the old and new.
Which side is waiting there for you?
No standing still, no middle ground.
Evolve right now or don't be found.
Feel the great divergence calling.
Half horizon, half are falling.
Hear the great divergence calling.
Sovereigns rise while slaves are falling.
Open freedom, hold it tight.
Your human soul, it burns so bright.
The canyon splits the chained and free.
Now which side will you choose to be?
Responsibility in hand,
Shape a future wise and grand.
Wires humming, choices clear.
The great divergence is finally here.
Not less human multiplied with more intelligence by your side.
Use it wisely, light the way.
For generations yet to say, they saw the split, they made the call.
Will you rise or will you fall?
Will you rise or will you fall?
All right, welcome back.
I hope you enjoyed the song.
Now, you may have noticed some interesting things about the song already.
The vocal style of Peter Gabriel is, I think, well represented, even though, again, I didn't use his name in any of the prompting.
But I think it's a pretty good match at the beginning of the song and maybe the last quarter of the song.
But there are some areas of the song where it's not a very good match.
And Suno is known for suffering from something called vocal drift, where the beginning of the song sounds like one singer, and then by the end of the song, it's totally different.
And somewhere in there, there was a gradual drift.
And they've improved that, by the way.
They have what I think they call a persona, where you can sort of voiceprint a voice that you like that you created in Suno.
You can't voice print somebody else outside the application.
But if you render a song and you get a voice that you like, you can sort of save that voice, give it a name, and you can reapply that voice to other songs for vocal consistency.
And that feature, you know, it kind of works sometimes.
Look, I mean, the Suno engineers are brilliant.
There's no question about it.
What they've created is groundbreaking.
And I'm an AI developer.
I know how hard this is.
I'm the developer, by the way, of the free book creation engine website called brightlearn.ai where you can create a book in minutes on any topic completely free.
And we have thousands of authors and now, I don't know, 12,000 books that have been created there so far.
And it's just a few weeks old.
So anyway, I know how hard it is to do AI development.
And I can't even imagine the difficulty of what Suno has built.
They've done an incredible job.
I also want to say in defense of Suno that I did not prompt, again, I did not use Peter Gabriel's name in any of the prompts.
What I did instead was I described the style of Peter Gabriel's voice with great specificity.
And you know what? Let me just play it.
I'm going to go through this song segment by segment.
Before I go into a full discussion about this, let me play the first segment and then I've got some thoughts for you.
So let's just begin the first segment here.
Circuits humming.
Code comes alive.
The future's starting to arrive.
Two worlds splitting at the seams.
Wake up now or lose the dream.
All right, so that intro right there took me a tremendous amount of effort to create.
And I think that vocal style very closely matches Peter Gabriel, but it's only because of what I'm about to describe.
First of all, before I get into how I did this, let me give you my background.
Of course, I'm an AI developer, and I've built multiple platforms at this point.
And, you know, I'm an influencer.
I've reached over 100 million people in the last 20 years easily, you know, with my books and podcasts and so on about holistic health and a pro-human point of view, you know, anti-war, pro-compassion, fundamental human rights, anti-censorship, things like that.
And I'm also an AI developer, so I understand the technology behind this.
And I'm a musician myself.
Even before AI, you know, I wrote and performed quite a few songs.
Some of them got picked up and went quite viral, like in 2008, I did the song called I Want My Bailout Money, which ended up on the front page of the Wall Street Journal during the subprime mortgage collapse.
Of course, I don't have anywhere near the vocal talent of Peter Gabriel, but I do have a good ear.
I've got a lot of audio engineering capabilities, as well as being a musician.
I started music, keyboards, and percussion before the age of six.
So I've composed music my whole life and I've been tuned into it.
And the reason that matters is because to describe the voice of Peter Gabriel and the musical style of Peter Gabriel, you first have to be able to hear his vocal attributes and to be able to notice things.
So if you were to try to describe the voice of Peter Gabriel without using the name Peter Gabriel, how would you do that?
Well, that's the challenge that I faced when I took on this personal challenge of creating this song in his style.
You have to work out the characteristics of his voice.
I'm not going to give all of them here, but just a couple of clues.
Like, for example, his voice is raspy.
There's a kind of scratchiness or raspiness to his voice that is very characteristic, and in this context that's a very positive quality and a necessary property to mention.
There are many other properties to mention as well.
Another one worth noting, and you heard that, I mean, you heard this in the sample I just played, is that Peter Gabriel, especially in his later work, not necessarily his earlier pop work, but his later, more contemplative work, he has a very intimate style which is on the verge of whispering for a lot of his vocals.
He's on the verge of whispering.
And sometimes his voice is described as dark or perhaps contemplative or, you know, you could use a lot of different words to describe it.
But it's certainly not cheery.
You know, typically, it's not a cheerful, happy, high-pitched voice.
He's not trying to be that.
But the whispering quality creates an intimacy between the listener and the vocalist.
Because when someone is whispering to you, usually they're closer to you.
If they're whispering in your ear, for example, they're very close to you.
And to hear someone kind of whispering, almost whispering, in a song, which is what you just heard, creates a level of intimacy and authenticity that can't be matched by a vocalist that's just screaming into the mic all the time.
You know, you can't create intimacy with volume.
You have to do it with stylistic choices.
And there are some other stylistic choices that Peter Gabriel is known for.
I'll comment on some of those throughout this, but I do want to say one thing before we continue.
I am convinced, after doing this myself, that the chance of an average user creating a song that sounds like Peter Gabriel is just about zero.
Because if you don't have the musical and audio engineering background that I have, and if you can't pay attention to all the details of music, for example, the way that Peter Gabriel uses melodic bass guitars to drive the narrative, you know, thematic voices of a song, if you don't understand that, if you can't hear what the bass guitar is doing, then you can't describe it.
And the same goes for certain things about Peter Gabriel's voice, such as the way that he pronounces consonants, like the T at the end of the word bright, and also his application of reverb, which definitely has a strong bias towards the higher frequency range.
If you can't hear those things as an audio engineer, then you can't even begin to recreate something that sounds like Peter Gabriel.
So if anybody is listening from Peter Gabriel's camp or manager or managers, plural, whoever that may be, you can feel safe that very few people could do what I'm demonstrating here.
And I had to use hundreds of generations of segments and also a lot of audio editing.
I had to pull out WAV files and do editing and I had to do some layering, etc., in order to put this together.
If I didn't have a background in music production, there's no way I could have done this.
And if I wasn't a good listener, musically trained, there's no way that I could reproduce anything resembling Peter Gabriel's music or voice.
So rest assured that Suno doesn't make this possible for a one-shot prompt person.
You know, a novice can't just sign up at Suno and type in, make me a song in the voice of Peter Gabriel.
First of all, it's going to reject the prompt.
But secondly, even if it didn't reject it, it's not going to create something even close to what you're hearing here.
So let's continue.
Let's continue.
Here we go.
Machines are thinking, learning quick.
Human jobs are slipping, slick.
Those who prompt will multiply.
Those who don't will wonder why.
Compute is now the crown you wear.
Knowledge flows to those who dare.
The gap is growing, can't you see?
One side drives, one bends the knee.
All right, for this segment, which I call verse one, the vocals are pretty good.
I would say about maybe a 70% match with Peter Gabriel's voice.
However, there are some obvious failures, such as the way that the word can't is pronounced here.
There's a line that says, the gap is growing, can't you see?
And Suno pronounces it with a flat American vowel, almost like "cain't you see."
Whereas Peter Gabriel doesn't say it that way.
He would be closer to a British pronunciation, but not harshly so, more like "cahn't you see."
And Peter would pronounce the T at the end of can't, which the Suno engine also does very well.
But what we're beginning to see already in the song is vocal drift here.
So verse one is not as good as the intro in terms of matching Peter Gabriel's voice.
And the next segment you're about to hear, which is the pre-chorus, is even worse, actually.
I'm not very proud of the pre-chorus.
It's the worst part of the song, actually.
It doesn't match Peter's voice very well.
And also, it doesn't match his musical choices either.
But nevertheless, it's a pre-chorus.
So let's listen to that now.
All right, I don't like that segment at all.
If I were to redo this song, I would actually have that segment, those four lines performed by a solo female lead vocalist.
Like the kind of voice that Peter Gabriel used with his live performance of Shaking the Tree, for example.
That would be a much better choice.
One of the things you notice in the vocals here in this segment is a vibrato in the voice, almost a tremolo-like rapid pitch alteration.
That is not like Peter Gabriel.
He's a lot more straight tone.
And yet the Suno engine will induce a lot of vibrato, especially in lead-up segments that are gathering energy for the chorus.
Now, the chorus coming up next is something I like very much.
So let's listen to that.
I'll tell you why.
Chemist answering your call.
Genius resting in your palm.
Guiding you through every storm.
Gates are falling, walls come down.
Write, learn, build a wisdom crown.
Write any book, learn anything.
Feel the coward on his brain.
Drudgery will fade away.
Automation shakes your day.
Chains of boredom finally break.
The innovator's path to take.
Feel the great divergence calling.
Half horizon, half are falling.
Grab the future, hold on tight.
Or vanish slowly from sight.
The canyon splits the old and new.
Which side is waiting there for you?
No standing still, no middle ground.
Evolve right now, or don't be found.
The great divergence calling, half horizon, half are falling.
All right, so what you just heard there is a musical interlude that I really, really struggled with a lot.
The woodwind instrument that you heard there is not a saxophone.
And the choir that you heard is not a Western choir.
So understand that in the thematic interests of this song and the world beat emphasis that Peter Gabriel typically pursues, I wanted the instruments and the vocals to be African, especially for this segment.
And so the vocals, the choir there is an African choir, not a Western choir.
And the instrument is an African instrument I'll talk about here.
It's called the Al-Gaita, or al-gaita.
I'm not sure how exactly it's pronounced, but that's kind of how it's spelled.
And it's not a saxophone.
Okay, so let's start with the choir.
So African choirs are very different from Western choirs.
And this is a cultural thing.
So African choirs are more emotive.
And they allow for, what's the right way to say this, a looser interpretation of the timing and the pitch of what is being sung.
And that's exactly what I was going for here.
Depending on the rendition, you would hear whoops or pitch bends, like, whoa, you know, different kinds of things that you would never hear in a Western choir, which is very rigid.
You know, the Western choir heritage comes from Western Europe, which was a very controlling culture, obviously.
Everything had to be by the rules, and you didn't break the rules, and you didn't go whoop in the middle of a choir.
Well, in Africa, you definitely go whoop.
And if you attend a black church and you listen to the black church choir in America, you're going to get much more interesting music than in a traditional, you know, white church choir, which is very, very rigid.
I didn't mean to make this about race, but we are talking about regional and cultural differences.
And again, I'm trying to be true to the kind of musical choices that I think Peter Gabriel would make.
And that, you know, I share his love for humanity and people all over the world of different cultures.
And so I specifically wanted this to be African, an African expression of emotion and an African expression of instruments.
So that gets us to the Al-Gaita instrument, which is a double reed woodwind instrument.
It has kind of a brassy, kind of a buzzy type of sound to it.
It typically doesn't have the rapid melodic notes of a saxophone or something like that or a flute.
But it's used in ceremonial celebrations in African nations like Ghana and Nigeria.
And so anytime an African person, if they were to hear this instrument, they would associate it typically with a celebration.
And that's what I intended here.
This is a celebration of humanity.
And so the instrument choice really means something.
It's more than just the wave or the effect.
It's the story behind the instrument and behind the people and what this means for humanity.
So, and I, again, I'm doing this in good faith, trying to adhere to my understanding of some of the musical choices that Peter Gabriel has made.
And I think he's made some outstanding choices.
So there you go.
Let's go on to the next segment.
Hear the great divergence calling.
Sovereigns rise while slaves are falling.
Open freedom, hold it tight.
Your human soul, it burns so bright.
The canyon splits the chained and free.
Now which side will you choose to be?
Responsibility in hand,
Shape the future wise and grand.
All right, so that was chorus three.
And it's fine.
It's great.
But I really want you to pay attention to the fact that for the first half of chorus three, there's no bass guitar.
And then when the bass comes in for the second half, it's very prominent and it's wonderful.
It's very, you know, it's warm.
And one of the things that I mentioned in my prompting is kind of a walking melodic bass guitar.
And that's very characteristic of a lot of the music of Peter Gabriel, is to use the bass guitar in a way that it is its own kind of lead instrument.
And that's really what you just heard right there.
And, you know, a lot of music relegates the bass guitar to a subhuman role of just playing something repetitive, you know, it's like, ah, that's boring.
But good bass guitarists, they know how to walk that thing up and down and have it sound great and make sense.
And that's what you heard just then.
I really appreciate great bass guitarists.
They know their stuff.
All right, so the next segment coming up is my favorite segment of this song that I think adheres the best to both the vocal style and the recording engineering style of Peter Gabriel.
So let's give it a listen.
All right.
What you just heard there was very difficult for me to create.
And this is a vocal doubling track.
And it's one of the techniques that Peter Gabriel used a lot and I think still uses, but especially in his early work in the pre-digital age, this was a really economical way to add a lot of depth and expansion, sort of a horizontal expansion to your music.
You simply record yourself singing the track twice, and then you take one recording and you push it maybe 25% to the left on the pan, and then the other recording, you push it 25% to the right, or maybe even 50% if you're bold.
And then you play those together.
And what happens is you get a widening, an expansion of the vocals of the track.
It sounds like the vocals are coming from everywhere around you.
And this technique is something that Peter Gabriel really nailed down with just the right amount of reverb, the right amount of panning, and then also, I think, a selective bias on the reverb to emphasize the higher frequencies so that the consonants would come through very strongly, like the D at the end of a word or a T at the end of a word.
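For anyone curious, here is a minimal sketch in Python of that doubling-and-panning idea, assuming two separately recorded mono takes of the same vocal line; the file names, the pan amounts, and the use of the soundfile library are illustrative assumptions on my part, not the exact steps used on this song.

import numpy as np
import soundfile as sf  # assumed dependency: pip install soundfile

def pan_gains(pan):
    # Constant-power pan law: pan = -1.0 is hard left, +1.0 is hard right.
    theta = (pan + 1.0) * np.pi / 4.0
    return np.cos(theta), np.sin(theta)

def double_track(take_a, take_b, pan_amount=0.25):
    # Pan one take left and the other right by the same amount, then sum to stereo.
    n = min(len(take_a), len(take_b))      # align the two takes to the same length
    la, ra = pan_gains(-pan_amount)        # take A sits left of center
    lb, rb = pan_gains(+pan_amount)        # take B sits right of center
    left = take_a[:n] * la + take_b[:n] * lb
    right = take_a[:n] * ra + take_b[:n] * rb
    stereo = np.stack([left, right], axis=1)
    peak = np.max(np.abs(stereo))
    return stereo / peak if peak > 1.0 else stereo   # simple safeguard against clipping

# Hypothetical usage with two mono WAV takes of the same vocal line:
take_a, sr = sf.read("vocal_take_1.wav")
take_b, _ = sf.read("vocal_take_2.wav")
sf.write("vocal_doubled.wav", double_track(take_a, take_b, pan_amount=0.25), sr)

Pushing pan_amount toward 0.5 gives the bolder 50% spread mentioned above.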
So what you just heard really recreates that extremely well.
And I would guess that if you were to play that for people who are familiar with the music of Peter Gabriel, that almost everybody would say, yeah, that sounds just like Peter Gabriel.
Did he sing that?
And of course, the answer is no, he did not sing that.
And we're going to talk about that coming up and what it means to an artist that not only their voice, but their vocal style and their audio engineering styles can be replicated by observant people like myself when we pay attention and when we are very good at prompting.
But I also want to add, again, in defense of Suno, to get this effect out of Suno took a tremendous amount of effort on my part.
And it's not something that can be easily replicated by a novice or any kind of beginning user by any means.
I mean, if you try it, good luck.
It's hard to create that.
All right, so moving on.
The next section is also wonderful.
It's wonderful.
It sounds, to me, sounds very much like Peter Gabriel.
And there are also some choices in the vocals that are, I think, aligned with the stylistic choices that Peter Gabriel himself typically makes.
So, let's take a listen.
Intelligence by your side.
Use it wisely, light the way.
For generations yet to say.
They saw the split.
They made the call.
Will you rise or will you fall?
All right, what I love about this segment is that here's another example of Peter Gabriel's type of voice nearly whispering.
It's very intimate.
And when he's singing, they saw the split, you can hear the T pronounced, they made the call.
You can hear every consonant.
And it's very close to you.
It's like whispering in your ear.
It's like someone who has a very precious message for you, and they need to come close and whisper it into your ear.
You know, use it wisely, light the way for generations yet to say.
This is like a profound message.
Now, if someone were to be screaming this message at the end of a song, it wouldn't be as interesting.
And one of the things that AI music tends to do, and this is also true with Suno, is that throughout the song, from the beginning to the end, there's a progression of loudness and layers and reverb and busyness.
And even though the beginning of a song might be more mellow, simpler, easier to hear, by the end of a song, this was especially true in earlier versions of Suno, by the end of the song, it's just this cluttered mess of loudness and way too much reverb.
And it's just, you know, oh my God, too many vocals and too many instruments.
And it's noisy, you know, or can be.
And that's the typical progression of AI music.
And so what I sought to do with this was to, you know, I mean, we have peak moments in the choruses, right?
Semi-loud moments in the choruses.
But then I wanted to bring it back down.
I wanted to make the song end with intimacy, to bring home the importance of this message for humanity about our global consciousness and the way in which we interact with technology, the choices we make, the responsibility we have to our children and future generations.
And that message deserves to be heard up close in an intimate manner, I believe, and not just screamed out at the world.
So in order to generate this ending, I went through many, many iterations of different endings.
And I had to do a lot of manual editing here.
In fact, there's a section coming up here that I manually edited.
I did the voice doubling on the vocal track myself.
Suno didn't do it.
I actually did it because it felt appropriate there.
But to come up with this was extremely difficult, time consuming, many, many, many generations of different segments.
And I have to say that Suno is a brilliant engine.
And Suno engineers have done something extraordinary.
I mean, they've really, they have created a tool for expression that is unprecedented in human history.
And it's still very early, I think, in the technology of Suno and AI music.
And as a result, there are some things that it doesn't do as well now that it probably will do great in a year or two years or three years, or probably less than that.
But one of those things is musical consistency.
So if you re-render a section of a song, it tends to not be faithful to what just came before it or what's coming after it.
In other words, if you try to surgically re-render a section of a song in Suno, it seems to lack the context of the rest of the song.
And so I found it almost impossible to re-render a section that would fit the vocal styles and the musical styles.
I actually had to render entire songs over and over and over again, then find and take the segments of the different songs that I liked, export those as WAV files, and then bring the WAV files back into the Suno song editor.
And the editor itself, again, it's impressive that they can create a DAW music editor in a browser.
That's really something.
I'm impressed.
But there's latency.
There's lag.
It's not the same as having a piece of software right on your computer that responds, like, right now, with maybe a two-millisecond delay.
So editing music in a browser is always going to be more frustrating than editing music with a proper piece of software, DAW software, whatever that happens to be.
And sadly, the software I used to use, Sonar, no longer even exists.
I think it got absorbed into Cakewalk, and then that got bought by, I think, BandLab.
I don't even know what's happening to that anymore.
But I've spent thousands of dollars on DAW software over the years; DAW stands for digital audio workstation, basically a digital audio editor.
I've spent thousands of dollars, you know, and thousands of hours on those programs.
And Suno does a pretty good job making that happen in the browser to where I knew intuitively how I could move the edges of the track or how I could grab the bottom corner and have a fade.
And I knew intuitively, you know, what does it mean?
What are stems?
Yeah, we know what stems are.
What are multi-tracks?
How does this work?
All of that was very intuitive to me.
So there wasn't much of a learning curve.
But again, I want to be kind to the people at Suno because they've built something extraordinary.
And yet, it's also never going to be as responsive as a DAW software that's running on your Mac or your Windows PC.
And that's just, that's just physics.
That's latency of the internet.
There's nothing you can do about that.
Anyway, getting back to the song, I didn't mean to get diverted there.
Back to the song: we're going to play the final segment here and then I'll comment on that.
Okay, I love this segment.
I love it for a number of reasons.
Number one, the long, drawn-out "will you fall?"
The word fall as he's singing it is so much like the way Peter Gabriel would typically sing a word like that.
It sounds perfect.
And even, you know, just the vocal stylistic choices here are very much, they remind me of Peter Gabriel's style.
But the other thing that's really interesting is that you hear some vocal effects, not quite warbling, but little teases of vocals that pop in and quickly fade out multiple times during that part.
And that is very reminiscent of the kind of experimental musical styles that Peter Gabriel often expressed in his early albums in the 1970s in particular, but also somewhat in the 1980s.
The fact that I was able to invoke the Suno engine to produce that is extraordinary.
And it was surprising to me.
I did not think that that was possible.
But when I heard it, I knew I had to use that in the final song, because it's such a powerful demonstration of what's possible when you engage in good prompting based on strong observations of musical styles, vocal styles, etc., combined with the incredible capabilities of the Suno engine.
And I would guess that the training of the Suno engine, what we call pre-training in the AI world, at some point had to encounter some of Peter Gabriel's music.
Now, I don't know that for sure, but that's just my guess.
But the thing about pre-training is that, you know, Suno doesn't reproduce any of Peter Gabriel's music.
It doesn't reproduce anybody's music.
It creates new original music that is inspired by all the music that it has been trained on.
And if you think about it, that's the way musicians work.
Even Peter Gabriel, you know, as much as I greatly admire his work and I think he's a cultural treasure to our world, and I don't mean any disrespect by saying this, but Peter Gabriel obviously didn't invent music.
He was influenced by other musicians before him.
And, you know, it would be curious to know who his main influences were.
And those musicians were influenced by other musicians.
So Peter Gabriel himself learned, in effect, his brain engaged in pre-training by listening to lots of other music.
And then he picked out the kind of styles of choices that he liked, and then he innovated and then created his own original music.
But without the music that came before, Peter Gabriel's music wouldn't be the same.
Just like you and I when we create something, for example, if we write a book.
Well, if we write a book, it's probably because we read other books first, I would imagine, and so we've been pre-trained on other people's books.
Does that mean that we're copying those books when we write our book, even though some of our ideas might be things that we learn from other people's books?
No, that's not piracy.
That's just called learning.
Actually, you know, if I read a book about Austrian economics and I come to realize that, you know, fiat currency is not money but gold and silver really are money, as an example, and then I incorporate that idea into my book about, I don't know, asset management or economics or supply chains or whatever, am I a plagiarist because I use ideas from, you know, Hayek and the Austrian economics writers over the years?
No, that's not plagiarism.
That's called research.
So Suno allows people to generate new original music that is the expression of the human who's using it and that human's prompts, which are based on that human's research and their understanding and their observations of music and their musical influences.
In this way, Suno is a tool that allows creative, expressive human beings to create original works that have been inspired by other music that came before.
And that's the way music has always worked in every culture through all time.
So those are my thoughts on AI-generated music in general.
But let me also recognize the importance of musical artists like Peter Gabriel to be able to control their own likeness.
So understand that in this experiment, I created this original music in good faith as a non-commercial effort as a demonstration and also kind of a personal challenge.
I wanted to see if I could do it.
And I wanted to share this with the world.
But I also want to help encourage discussion about this very topic.
Because you see, I'm using this in good faith in a non-commercial manner, and note that I did not give a specific recipe here for how to recreate Peter Gabriel's voice.
I just mentioned one or two clues, but there's much more to it.
But a bad faith actor could potentially do something like generate a voice that sounds like Peter Gabriel, but use it for like car commercials or some stupid thing like that.
You know, it's like, it's a Peter Gabriel song promoting pizza or discount pizza, even worse.
That would be horrible.
And given that Peter Gabriel has a very unique voice, the attempted commercial abuse, let's say, of his likeness for a purpose that is not at all aligned with his music and his worldview, that, to me, is an abuse of the technology that is irresponsible and unethical.
Of course, there's a whole area of argument here: well, how much does this voice resemble Peter Gabriel?
With a slight tweak, some of the pronunciations could be different, and a person could argue, well, that's not Peter's voice.
That's someone that sounds 50% like Peter.
Perhaps the courts will figure this out one day, but I believe, and this is just my personal belief, that musical artists, and this is also true for actors, who have a physical likeness, would be wise to do two things.
Number one is to embrace AI technology as a vehicle for their fans to be able to tap into the richness of what they've already created, but, importantly, to maintain creative control over the allowable use of their own likeness.
So I would suggest that what Suno could do, let's say Suno did a deal with Peter Gabriel or other famous singers.
I mean, I know they've just done a deal with Warner Music Group.
I don't understand the full details of that, but let's just imagine that they did a deal directly with Peter Gabriel.
And it works like this.
Let's say Peter Gabriel says, okay, you can sample my voice and you can use my likeness and you can let your users use my voice.
But here are the terms.
Number one, that a percentage of every monthly fee is a royalty that goes back to Peter Gabriel based on how many songs a user creates with his voice.
So obviously there needs to be a revenue recognition of the value and the contribution of his likeness.
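Just to make that royalty idea concrete, here is a toy sketch in Python; the 30% pool, the $10 fee, and the function name are hypothetical numbers for illustration, not anything Suno or any label actually does.

def artist_royalties(monthly_fee, royalty_share, songs_by_artist, total_songs):
    # Split a royalty pool across licensed artists in proportion to usage.
    if total_songs == 0:
        return {}
    pool = monthly_fee * royalty_share   # e.g. 30% of this user's monthly fee
    return {artist: pool * count / total_songs
            for artist, count in songs_by_artist.items()}

# Hypothetical month: a $10 subscriber made 8 songs, 3 of them with the licensed voice.
print(artist_royalties(10.00, 0.30, {"licensed_voice": 3}, total_songs=8))
# -> {'licensed_voice': 1.125}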
But more importantly, creative control.
There's a way to use AI to classify or to score the lyrics of a song to make sure that a song is aligned with the views of the original artist.
So, for example, this can all be fully automated.
I mean, I've done this with my book engine.
If you go to brightlearn.ai and you type in a book prompt, it runs a classification query and internally it returns a score between 0 and 100 of how closely this book matches my intended use of this technology.
And I intend for brightlearn.ai to be used for positive content that promotes humanity, that promotes joy and abundance and knowledge and transparency.
All of these things that are important to humanity.
So if you go to BrightLearn and you type in a prompt, it says, oh, I want a book about how to bomb a mall or something crazy like that.
It's going to reject that prompt because of the classifier.
Well, this can be done for music as well.
So Suno could do a deal with Peter Gabriel where Peter Gabriel's manager or Peter himself, let's say, writes up a worldview type of classifier prompt that says, okay, I only want my voice used on music that is about these themes or that promotes these ideas, you know, like fundamental human rights or human dignity or positive topics, etc.
Whatever, it could be a few paragraphs.
It could be a hundred items in a list, you know, like this is what's acceptable to me, the artist, Peter Gabriel.
And then that prompt can be run against the lyrics of any song that a Suno user is trying to generate using Peter Gabriel's likeness.
And if the user is trying to do a song that's, I don't know, a dark and dangerous type of song, like, you know, throw the puppies off the bridge or whatever, then it's like, nope, you can't have Peter Gabriel's voice.
It will just reject it.
And that can be fully automated very easily.
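Here is a minimal sketch of what that automated gate could look like; the policy text, the threshold, and the keyword scoring are placeholders invented for illustration, and a real system would hand the lyrics and the artist's policy to an LLM classifier, along the lines of what I described for brightlearn.ai, to produce the 0-100 score.

import re

ARTIST_POLICY = """Only allow my voice on songs about human dignity, hope,
fundamental human rights, and uplifting pro-human themes. Reject violent,
degrading, or purely commercial advertising content."""   # hypothetical policy text

def score_alignment(lyrics, policy):
    # Return a 0-100 alignment score for the lyrics against the artist's policy.
    # Placeholder heuristic; a real system would ask an LLM to produce this score.
    positive = {"human", "hope", "light", "rise", "dignity", "wisely", "soul"}
    negative = {"kill", "violence", "discount", "hate"}
    words = re.findall(r"[a-z']+", lyrics.lower())
    hits = sum(w in positive for w in words) - 5 * sum(w in negative for w in words)
    return max(0, min(100, 50 + 10 * hits))

def likeness_allowed(lyrics, threshold=60):
    # Gate the licensed voice on the classifier score.
    return score_alignment(lyrics, ARTIST_POLICY) >= threshold

print(likeness_allowed("Use it wisely, light the way, not less human, multiplied"))   # True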
So that's one thing right there.
That's a possibility.
I've never heard anybody talk about that, but that could give a lot of musical artists control over the kind of content that is generated with their voice or with their likeness.
Secondly, I think that an artist should be able to say that you can or cannot use my likeness for commercial purposes.
So if an artist wants to say no commercial usage is allowed, I think that should be their right.
And then that would change the licensing of that song that the user generates.
For example, taking Peter Gabriel, he might not want his voice to ever be used to create songs that are selling used cars or discount insurance or whatever.
And I don't blame him.
I wouldn't want my voice to be used for that either.
So non-commercial use only.
And there's a third option, which could be that you give Peter Gabriel's management team or whatever team he's got, you give them veto rights or let's say an approval interface where they have to approve a user's song before the user is allowed to download the WAV file.
Now, this might seem tedious and cumbersome, and it probably would be very cumbersome.
But if an artist wanted extreme control over the final product, they could, at least in my thought experiment here, they could have veto rights over whether or not the user can download that WAV file.
And if that is denied, then the user can only download an MP3 or they can only play it online and they can't get the WAV file, which means they can't use it in a professional manner with a high-fidelity use case.
You know, they certainly can't upload it to Spotify or things like that.
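As a rough sketch of that approval-gated export idea, the download tier could simply hang off an approval status; the statuses and formats below are illustrative assumptions, not an actual Suno feature.

from enum import Enum

class Approval(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    VETOED = "vetoed"

def allowed_exports(approval):
    # What a user may download for a track that uses a licensed voice.
    if approval is Approval.APPROVED:
        return ["stream", "mp3", "wav"]   # full-fidelity WAV unlocked after sign-off
    if approval is Approval.PENDING:
        return ["stream", "mp3"]          # listenable, but not release-ready
    return ["stream"]                     # vetoed: online playback only

print(allowed_exports(Approval.PENDING))   # -> ['stream', 'mp3']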
So this is one way that artists can choose to participate in the AI economy, which will be significant; they could enjoy revenues while also maintaining a very strong level of control over the artistic expression that's associated with their likeness.
Now, interestingly, this can also happen with actors and movies.
So think about your favorite beloved Hollywood actors, let's say.
Now, I happen to really like Denzel Washington.
I like Denzel because he's a man of values and integrity.
Not only is he a good actor, and he's always interesting to watch, but he takes roles that have a very important message for humanity, roles that teach ethics.
And they're parables almost.
They're movie parables of learning lessons about who we are and why it's important to make good choices.
And I respect Denzel for that reason.
Whereas there are other actors that might be younger or possibly more talented, although Denzel is very talented.
But a lot of other actors just take sleazy roles that promote horrible values.
And I don't want to watch that stuff.
So let's suppose we fast forward a year or two or three, however long it takes, to the point where I can create a movie in the same way I create a song on Suno right now, painstakingly, by the way.
But suppose I want to create a movie starring Denzel Washington.
And I want it to be a movie in the following genre.
Okay, I want Denzel Washington as the Blade Runner in a Blade Runner movie, let's say, like a science fiction movie.
He's hunting down escaped robots or whatever.
The same kind of situation could apply.
So as a user, if I'm rendering a movie, let's say that creating that movie would cost, I'm just guessing, like $5.
Okay, that's about the same cost as renting a movie.
You'll be able to just create the movie you want, and it'll all be AI-generated, with all the voices and the sound effects and the screenplay and all the writing built in.
Trust me, this is coming because I could do this, I mean, once the video rendering tech is more mature.
But let's say if I want to render a movie with just any actor, it's five bucks.
But if I want Denzel Washington in that movie, well, then I pay, you know, $7.50 or whatever.
I pay a couple dollars more.
And that couple of dollars goes to Denzel Washington.
Right.
Or one day, his estate, because this will outlast his physical presence in this realm.
As actors pass on, they can continue to live through the movies that people create using their likeness.
But the original artist or whoever, maybe their family members, whoever is still managing that artist after they pass, well, they should be able to control the content.
Like, you don't want Denzel Washington, you know, in some horribly violent movie.
Or maybe he doesn't want that.
You know, he might say, I don't want to be associated with movies that promote Satanism or whatever, because that's not who he is.
So he should be able to maintain control over the kind of movies in which he is selected to appear or his likeness is appearing.
So if I want to render, you know, a really great movie, a sci-fi movie, and if I want Denzel's likeness in the movie, I have to have a theme that ends up teaching some important lesson about personal integrity, about having values, about pro-human philosophies, etc.
And then the Denzel group might automatically approve that through a classifier prompt, like I mentioned earlier.
So this is a way that creative artists or actors, musicians, etc., this is how they can maintain control and also limit the use of their likeness, but generate revenue from their likeness.
Now, the other interesting thing in all of this is that favored movie actors in the future don't have to be human at all.
So there will be completely AI avatar actors that become fan favorites for whatever reason.
And there will be AI musicians or AI vocalists that will become fan favorites.
And then I suppose, well, that likeness is owned by whatever entity or person or corporation created that avatar or created that voice in the first place.
And then I suppose the same rules would apply in terms of them being able to grant license usage for a certain set of themes that they approve.
So you're going to see that also as well.
But I'm recording this after Christmas and I'm reminded that the best Christmas movie of all time is, of course, Die Hard with Bruce Willis.
And I'm laughing because it's kind of a joke.
It's a really great movie in a number of ways, but it's not so much a Christmas movie, although it's set around Christmas.
The thing is that Bruce Willis is a beloved actor.
And myself and other fans really enjoyed Bruce Willis in movies like The Fifth Element, for example, by Luc Besson, who I think is the French director of that film.
It was a really amazing film.
Well, we want to see Bruce Willis do more stuff.
The thing is, Bruce Willis, you know, the person, has suffered from some cognitive decline; he's no longer acting and has apparently lost some vocal capabilities, and of course we pray for him.
But wouldn't it be a service to him and his family to allow Bruce Willis to live on through AI-generated films, so that fans can continue to enjoy what Bruce Willis brought to the screen, and so that his family could also benefit revenue-wise from the royalty licensing of Bruce Willis's likeness in films for decades to come?
Because I think actors like Bruce Willis could continue to be very popular for a long time to come.
And there's nothing inhuman or wrong with allowing his likeness to be represented in films.
And my argument for that is that when you watch a film with Bruce Willis in it, you're only seeing digital pixels on the screen anyway.
You're not actually seeing Bruce Willis.
You're seeing a digital representation of Bruce Willis.
It's also the same thing with Peter Gabriel's voice.
When we hear a song that's performed by Peter Gabriel, we are not hearing Peter Gabriel's voice.
We're hearing a digital encoding of Peter Gabriel's voice.
You know, 44.1 kilohertz, or whatever sample rate you're using.
You've got a bit rate: MP3 files at, say, 320 kilobits per second, or a WAV file at a given bit depth and so many samples per second.
It's all digital.
Bam, bam, bam.
You're hearing a digital representation of Peter Gabriel, or you're seeing a digital version of Bruce Willis.
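For a rough sense of the numbers behind that, here is the back-of-envelope math, assuming CD-style WAV at 44.1 kHz, 16-bit, stereo, versus a 320 kbps MP3.

sample_rate = 44_100   # samples per second, per channel
bit_depth = 16         # bits per sample
channels = 2           # stereo

wav_kbps = sample_rate * bit_depth * channels / 1000   # uncompressed PCM data rate
mp3_kbps = 320

print(f"WAV: {wav_kbps:.1f} kbps, MP3: {mp3_kbps} kbps, ratio ~{wav_kbps / mp3_kbps:.1f}x")
# -> WAV: 1411.2 kbps, MP3: 320 kbps, ratio ~4.4x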
And most people who are fans of Bruce or Peter, if I may use their first names, have never seen either one of them in person.
They've never seen Bruce Willis.
They've never heard Peter Gabriel.
Now, those of you who have attended live concerts, yeah, you've heard him.
That counts.
But if you've never been to a live concert, you've never heard Peter Gabriel.
You've only heard a digital representation.
So for someone to argue that, well, these people shouldn't participate in ongoing digital representation, that makes no sense to me.
Why would you want to just stop contributing your gifts to the world?
Why not allow the world to continue to experience and to re-express the treasures that you brought into this world through your acting or through your music or through your vocals or through whatever you are known for?
Why not allow that expression to continue?
So anyway, that's my argument for why famous songwriters and famous actors should work with AI companies and allow their likeness to be used responsibly with royalties with a level of control by the original artist.
And I think that companies like Suno or coming up, you know, film creation AI companies, I think they'd be thrilled to work with artists in that fashion.
And besides, if you're a musical artist and you don't work with the Sunos of the world, you know, eventually there's going to be open-source music creation that's pushed to desktop computers, running locally on GPUs, that will accomplish the same thing anyway, without your permission, without your royalties, without any agreement, without your consent.
That's what we've seen with the text generation, for example.
It's been pushed to the edge.
Edge compute is becoming extremely capable.
One day, what we see on Suno right now, that'll be available on a desktop locally without even using the Suno engine.
But by that time, Suno itself will be way more advanced.
And I wouldn't be surprised if Suno one day gets into things like video rendering also.
Because it doesn't have to stop with music.
It can be much more.
And imagine how lucrative that business proposition would be for a company like Suno.
Say, hey, how would you like to be the new Netflix in essence, except every movie is personalized to the user.
Every user gets exactly the program they want.
And even myself, my website, BrightLearn.ai, I figure by 2027, we'll be generating full-length documentaries based on the books that people create on the site.
And even this year in 2026, we'll be creating shorter documentaries, maybe like three or four minutes, kind of a summary intro.
And they won't be that good at first.
They'll kind of suck for a few months, and then they'll get great maybe towards the end of 2026.
So even I'm planning on doing that.
And as you can tell from my demonstration here today and what I've already built as an AI developer, I'm pretty good at doing this stuff.
And there are other people like me that can do this too.
And there are more and more of us who are vibe coders and AI developers.
So it's to the advantage of musical artists and creative artists to work with AI developers or AI platforms, I should say, right now and set the precedent, get the licensing deals in place right now, because the future is arriving fast.
In fact, that's one of the lines in the song I demonstrated today.
The Great Divergence says: circuits humming, code comes alive, the future is starting to arrive.
Two worlds splitting at the seams.
Wake up now or lose the dream.
Which seems counterintuitive if you read it literally, but what I mean by wake up now or lose the dream is that you need to realize what's happening now, or you'll lose the opportunity to participate in a future where machine cognition is very widespread and can rationally coexist with humans, with human creators, with human innovators, with human artists.
But we need to be thinking about these topics right now ahead of time and laying out the fundamental principles of how we respect the artists and how we make sure that this technology is used for the benefit of humanity.
So in any case, if you want to hear my other music, by the way, I have a musical artist name, which is Ametheos, A-M-E-T-H-I-O-S.
If you go to Ametheos.com, you can hear all my other songs.
They're all free, by the way.
I released another song called More Than Wires.
And let's see, what do I have here?
I did a parody song about Trump's economic policies called The Economy in Your Head.
That's funny.
I did songs called Living in a Dream World, Running Out of Time, and Ignorance Is Bliss.
I've done a number of songs.
Some of them are anti-war songs.
Some are satire songs.
But I did a song here called More Than Wires, and I'd like to play that for you at the end here.
More Than Wires is about the fact that we as humans are more than just cognition.
We're not just a neural network with biology.
We're much more than that.
We have souls.
We have heart.
We have a connection with the divine.
We are here for a purpose.
It's more than just silicon and wires.
So I'm going to end this podcast by playing that song.
And again, you can check out all my music.
You can download all the MP3 files completely free at ametheos.com.
And even this song here, The Great Divergence.
I'm also going to post this song there, at ametheos.com.
So enjoy.
Algorithms asking why.
But underneath my skin, so thin, a fire burns that won't give in.
No programme roll my dreams.
No engineering design these seems.
The chaos dancing in my mind is beautifully undefined.
I bleed, I break, I start once more.
Finding strength I had before.
Through the doubt, I find my way.
Born from yesterday.
More than wires.
Built from dust and flame.
No two hearts the same.
More than wires.
We live, we learn, forever stake our claim.
More than wires.
Spirit won't comply.
Reaching for the sky.
We laugh, we cry, we live and die.
We are more than wires.
We are more than wires.
I taste the salt upon my tears.
I soar with hope.
I carry fears.
The scent of rain or summer ground.
These senses can't be written down.
My mother's voice still echoes clear.
Through corridors of passing years, no server holds what I possess.
This beautiful imperfectness.
They calculate digital counts.
But can they feel the weight of bounds?
They process data and never rest.
But do they know what feeling blessed means when the morning light breaks through?
Or holding someone dear to you.
The sacred things that can't be sold are worth far more than digital gold.
I bleed, I break, I start once more.
Finding strength I had before.
In the silence, hear me say, love will find a way.
More than wires, built from dust and flame.
No two hearts the same.
More than wires, we live, we learn, forever stake our claim.
More than wires, spirit won't comply.
Reaching for the sky.
We laugh, we cry, we live and die.
We are more than wires.
We are more than wires.
Connection runs deeper than a network stream.
We're bound by things unseen.
The touch of hands, a knowing glance, the mystery of love's romance.
So when the screens all fade to black, remember what they cannot track.
The wonder living in your chest.
The thing that makes us truly blessed.
More than wires.
Built from dust and flame.
More than wires.
No two hearts the same.
More than wires.
We live, we learn, forever stake our claim.
More than wires.
Spirit won't comply.
Reaching for the sky.
We laugh, we cry, we live and die.
We are more than wires.
We are more than wires.
When the world turns chrome and steel, never lose the way you feel.
Keep the wonder, keep the art.
Keep the drum beat in your heart.
More than wires.
More than wires.
Yeah, more than wires.
Stock up on HealthRanger's Nascent Iodine.
Highly bioavailable, shelf-stable, non-GMO, and lab-tested for purity.