
Lately, AI has been all over social media in different forms—creating art, generating portraits, having conversations through chatbots, writing poems, and so much more. And, of course, I’ve been curious about it all. But one question stands out above the rest: Could AI eventually replace me?
Let me be clear right off the bat: I’m not saying AI isn’t capable of amazing things. It can certainly perform some pretty impressive tasks. But when it comes to creating content online, there are some fundamental reasons why AI can’t truly replace human creators.
First, let’s break down what we mean by an “online creator.” At the heart of it, being an online creator is a process of creation. It’s about coming up with ideas, crafting content, sharing it, and interacting with an audience. And this process relies on creativity, imagination, and a uniquely human perspective.

Now, AI, as impressive as it is, doesn’t have imagination or creativity. It doesn’t see the world through a human lens. It’s a tool, built to process data and perform specific tasks.
So, in the end, while AI can do some incredible things, it can’t replace the core essence of what makes human creators special. Creativity, imagination, and the personal touch are what set us apart. After all, would you really want to watch a whole video made by an AI?
An Amazing AI Chatbot
So, here’s something pretty cool: the words you’re reading right now actually came from OpenAI’s AI chatbot, ChatGPT. I asked it to write a script for a video about why AI can’t fully replace online creators, and here we are—I just recited it.
It’s honestly pretty fascinating. Earlier this year, I did a video about DALL-E, another project by OpenAI. With DALL-E, you type in a text prompt, and it creates a unique, high-resolution piece of art in whatever style you want. The detail and realism of the art are so impressive, and it really nails what you describe. It’s an amazing tool.
Now, there’s ChatGPT, which is also going viral. Instead of creating images, this one’s a chatbot that can hold conversations on just about anything. At first, it seems like just a simple robot you can talk to, but the things people are asking it to do are getting more complex all the time. You can ask it for facts, request a book summary, have it write a poem, find an error in some code, or even write a full script for a YouTube video.
It’s incredible, really. ChatGPT draws from a vast amount of human knowledge and can engage in deep, detailed conversations on all sorts of topics. It’s like having a conversation with an expert, no matter what you’re talking about.
But here’s my take on these new AI tools. First, it’s amazing to be living in a time where we get to watch these tools evolve in real-time. They’re getting better and better, right in front of our eyes. But second, I think it’s important to remember that, at the end of the day, these tools are just that—tools. In 2022, I see them as powerful aids in our work, not replacements.
For me, as a creator, I don’t think AI is going to take my job. Instead, I see it as a way to help me brainstorm ideas early on. I can ask ChatGPT for video concepts, or even titles for these videos, and it’ll help me out. But when it comes down to it, I’ll still be the one making the final decisions—deciding what I want to publish and putting my own personal touch on it.
It’s like how you might use AI to help with subject selection in Photoshop, then go in and refine it yourself. Or how you might use AI enhancement tools in apps like Pixelmator to sharpen an image, but still make adjustments to match your style. The AI can assist, but it’s the human creativity that really brings it to life.
We’re really just at the beginning of this whole thing, and what we’re seeing now feels like the next stage of it. The big difference is that this technology is a much more general form of AI, what we call generative AI. Basically, it can create new things from a simple prompt, which is pretty wild.
I imagine a lot of people will start using something like ChatGPT to get quick summaries of books. It can spit out answers in a flash, and you could use that as a jumping-off point for your own work. College students might use it to brainstorm ideas for an essay. Sure, it won’t write a full essay for you yet, but it’s a solid start.
There’s no doubt that this is an amazing, groundbreaking tool, and we’re just at the beginning of something huge. Personally, I’ll keep using it for video script ideas too. But I have to point out that when I asked it to help with a script, it did include a disclaimer at the bottom saying it was just a starting point for the video. It didn’t add in the fun little alliterations I normally do, and, as you can see, it got a few facts wrong in the full version.
The Imperfections
Which brings me to the downsides. With any new, powerful technology, you’ve got to consider the risks. Two things stand out to me: one is the error rate, and the other is figuring out how to properly credit the content.

For example, I asked ChatGPT to write a script for an iPhone 14 Pro review. It came up with something pretty positive—talking about the display being bright, vibrant, and sharp, “perfect for watching movies, playing games, or browsing the web,” and praising how fast and responsive the phone is. All that checks out. It even highlighted the camera system as the standout feature of the year, which I agree with.
But then it got a few details wrong. It listed the camera specs as having a 12-megapixel ultra-wide, 12-megapixel wide, and 12-megapixel telephoto, but as we know, the 14 Pro actually has a 48-megapixel main camera. So that was just wrong. It also mentioned the colors as silver, gold, graphite, and Pacific blue, which were the colors of the iPhone 12 Pro, not the 14 Pro. This year’s new colors are space black and deep purple.
So yeah, even though ChatGPT is a really impressive tool, it has a habit of getting small details like these wrong almost every time I ask about them.
So, the six-paragraph video script the chatbot put together was actually pretty impressive—it nailed about 90% of it. But I also asked it something simple out of curiosity: “What’s the fastest sea mammal on Earth?” Here’s what it said: “The fastest sea mammal on Earth is the peregrine falcon. While it’s not actually a sea mammal, it can dive at speeds of over 240 miles per hour, making it the fastest animal on the planet. Other fast sea mammals include the sailfish, which swims at up to 68 miles per hour, and the orca, which can reach 34 miles per hour.”
Now, I might be a little rusty on my science, but I’m pretty sure only one of those is a sea mammal. The orca’s the right answer, and the chatbot did get that part right—it’s just that the peregrine falcon? Definitely not a sea mammal. So, yeah, while it got the gist of the question, you can’t always trust this thing to be 100% accurate.
It’s kind of the same deal with DALL-E. It also lands around that 90% mark, but it gets much more impressive the more detailed and complicated your prompts are. Like, if you ask DALL-E for a picture of a cat, that’s easy enough. But when you ask for something like a cat wearing a rocket booster, jumping over a man watering his garden in space, it’s pretty amazing to see what it can come up with based on that description. And yeah, it’s also not shocking if one or two things get messed up, just like the script that had a few facts wrong. I’m guessing these error rates will go down over time, though—that’s kind of the point of AI evolving.
Now, there’s the whole debate about AI “stealing” art without permission, and that’s something I’m definitely keeping an eye on. You may have seen this pop up on social media recently. Here’s what people mean when they say AI steals art without consent:
Right now, the most popular app on the App Store is Lensa AI by Prisma Labs, and it’s definitely been making waves. You’ve probably seen posts about it—it’s everywhere. Here’s the basic idea: you pay a small fee, upload a few real photos of yourself, and then, after a few minutes, the app’s AI creates a series of cool avatars of you in all sorts of different styles and scenarios. Some of them are amazing, others not so much. But it’s taken off because, let’s face it, most people don’t have personalized artwork of themselves, so it’s pretty fun to see these artistic interpretations. Other companies, like Avatar AI, are jumping into the trend as well.
But here’s the issue that a lot of people aren’t talking about. While you’re agreeing to upload your own face to help train the AI, there’s a whole group of people who aren’t giving their permission for their work to be used: the artists whose art is being fed into these models. I’m talking about the backgrounds, the line work, the styling, the materials—the stuff that goes into creating these AI-generated images.
Here’s a clue that some copyrighted art is being used without permission: you know how artists often sign their work in the bottom right corner? Well, users have been noticing that many of the avatars created by the app are coming back with distorted or messed-up versions of these signatures. This suggests that the AI is using artwork that has signatures on it, which points to the fact that it’s pulling from copyrighted material without the artist’s consent.

So, the big question is: how should artists be credited when their work is used to train these AI models? For example, if I ask DALL-E to generate an image of a cat, the result would be a brand new, generic image of a cat. The AI has learned how to create this by analyzing a massive amount of cat images across the internet. While no single artist would be directly copied, their work may have influenced the model’s learning. In that case, most artists probably wouldn’t mind too much. But when it comes to artwork being used without permission to train these systems, that’s a whole other story.
You could also ask DALL-E to generate an image of a cat in the style of Claude Monet, and it quickly becomes clear which original works are influencing the final image. If I were Monet and still alive, I probably wouldn’t be too happy about it. Now, I’m not a copyright expert, so I’m not going to dive into the whole debate over what qualifies as transformative work or what constitutes copyright infringement. But the truth is, we don’t really know exactly where these AI models are pulling their data from. There’s sometimes a general explanation, like that they use publicly available or licensed images, but there are also huge databases at play. One example is Common Crawl, a nonprofit that scrapes vast amounts of data from across the internet and makes it freely available; the LAION-5B dataset, a free, public collection of billions of image–text pairs, was built largely from that Common Crawl data. Again, not being a copyright lawyer, but this feels like a bit of a loophole. Common Crawl isn’t profiting from anything—it’s just scraping billions of pieces of data and putting it into a public space where others can access it and figure out what to do with it legally.
OpenAI trained on this kind of web-scraped data, and while they initially offered their services for free, now it’s about $15 for a set of 115 generation credits. Services like Lensa and Avatar AI, which let you upload photos to create avatars, are charging people directly. So they’re making money from datasets they basically scraped for free. Here’s a simple analogy: let’s say I make a YouTube video and use a Taylor Swift song in it. As long as I’m not making money from the video, that’s one thing. But if someone else takes my video and tries to profit from the part with the Taylor Swift song, that’s not allowed. YouTube’s already made the rule on that: no, you can’t monetize someone else’s content like that. UMG would be all over you in seconds.
In this new world of AI art, we don’t really have clear answers yet. Legally or culturally, there’s no established precedent. At first, it seemed like the big question was: how do we define art? It’s a tricky question. But now, what feels more interesting is: what is inspiration? How do we define that? When a person creates something new, it’s definitely their unique expression, but it’s also influenced by all the art they’ve seen throughout their life. In fact, every experience they’ve had up until the moment they start creating shapes their inspiration.
AI art, in a way, is speeding up that process of inspiration. It’s like taking all of human history and putting it into a machine, which then creates something from it. Or at least from something like the LAION-5B dataset, which happens to include a lot of my own work and images. But if I’m being optimistic, which I try to be, I hope this will lead to a deeper appreciation of human-created art. Still, we need to stay aware of all the unanswered questions, because there are many.