We're in a world where you can dream up an app and have AI create it for you, just by having a conversation with a chatbot. That's vibe coding.
Just about anyone can do it, and there's no real learning curve. The final app is only as good as the prompts you give it, so not every project will turn out perfect, but it's easy enough for anyone to pick up.
I've played around with vibe coding quite a bit, creating random projects here and there. I've mostly tried vibe coding out for proof of concept or chatbot testing, but I've rarely used it to make something crucial or functional enough to use daily.
This particular project came to be by accident. I'd been in the market for an e-reader, looking to disconnect from my overly connected iPad. Regardless of the motivation (but mostly in defiance of Amazon), it prompted me to try to vibe code a fix -- but with a twist.
I wanted to see if I could vibe code a functional e-reading application with all the features I wanted. Even knowing that, if I got it up and running, I likely wouldn't use it daily, I still wanted to add some flair.
The question became what AI chatbot to use. I tested three -- Gemini, Claude and ChatGPT -- to create what I wanted and then checked to see which produced better results.
As it turns out, the model didn't matter as much as the prompt.
The prompt
Comparing chatbots is hard, especially when trying to mimic the same conversation or vibe in a coding project. Believe me, I've tried. I wanted to make sure all the tools I tested used the same prompt, but first, I wanted to refine it to get the best results, so I came up with a strategy to help me do that.
First, I built the entire project from the ground up with Gemini. Once I liked where the project was (a successful, functional proof of concept), I asked it to create a prompt so I could add it to any other chatbot. Gemini generated the prompt, I saved it as a file, and I uploaded it to Claude. I went through this process again, allowing Claude to catch and fix things I hadn't thought about when building the project with Gemini. Once that process was complete, I asked it to create another prompt so I could add it to ChatGPT.
The idea was to have all three chatbots contribute to the actual creation of the project and, in turn, to the final prompt. Once the prompt was created, I uploaded it to all three chatbots in a separate chat to see how consistently they performed.
The project: The Tome Reader
I wanted to create an immersive e-reader web application that would read your books aloud (with real-time text highlighting), whether you pasted text or uploaded a PDF or EPUB file.
This project was born out of my frustration with Amazon's Kindle devices. Anyone who likes to read and listen to their books can do so with real-time highlighting in the Kindle app for iOS or Android, but after all this time -- nearly 20 years -- you can't do this on a Kindle device.
In fact, it was only recently that users gained real-time text highlighting while the assistive reader plays, which comes close to the app's functionality. As of right now, you can read or listen to an audiobook on a Kindle, but not both, which is laughable -- and so was the idea of Amazon owning all my books. I got to thinking that I could just vibe code a solution.
I call it the Tome Reader.
In addition to reading the text aloud, the web app would generate background music matched to the content of the text across a set of categories (neutral, gothic horror, sci-fi, nature, fantasy, underwater, western, mystery), and produce additional sound and visual effects when certain trigger words were spoken in real time. The entire project was contained in a single HTML file so it could run in a web browser without additional dependencies.
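The real-time highlighting described here maps naturally onto the Web Speech API's `boundary` event, which reports a character index as each word is spoken. A minimal sketch of the idea (the function name and structure are my own, not the app's actual code):

```javascript
// Given the full text and the charIndex reported by a speech 'boundary'
// event, return the [start, end) span of the word being spoken, so the
// UI can highlight it. Pure logic, so it runs outside a browser too.
function wordSpanAt(text, charIndex) {
  let end = charIndex;
  while (end < text.length && !/\s/.test(text[end])) end++;
  return [charIndex, end];
}

// In a browser, the hookup would look roughly like this:
// const u = new SpeechSynthesisUtterance(text);
// u.onboundary = (e) => highlight(...wordSpanAt(text, e.charIndex));
// speechSynthesis.speak(u);
```

Keeping the span calculation separate from the speech API makes the highlighting logic easy to test without a browser at all.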
Building with the chatbots: The first round
Gemini
Gemini built all the features I wanted for the Tome Reader with relative ease.
Gemini let me figure out how far I could stretch this web app's functionality, and thus most of it comes from Google's chatbot. It helped me hash out some small issues at the beginning that prevented the TTS voices from loading: it created an initialization screen that forces the voices to load after you click through an "open" screen into the application. Without this type of know-how, the project wouldn't have gotten off the ground.
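The voice-loading hiccup Gemini worked around is a known Web Speech API quirk: `speechSynthesis.getVoices()` often returns an empty list until the `voiceschanged` event fires, and many browsers won't start audio without a user gesture, hence the clickable "open" screen. One way to wait for the voices, sketched against a `speechSynthesis`-shaped object so it isn't tied to a browser (this is my own illustration, not the app's code):

```javascript
// Resolve with the voice list once it is actually populated. `synth` is
// any object with the speechSynthesis shape: getVoices() plus
// addEventListener('voiceschanged', ...).
function loadVoices(synth) {
  return new Promise((resolve) => {
    const voices = synth.getVoices();
    if (voices.length > 0) return resolve(voices); // already loaded
    synth.addEventListener(
      'voiceschanged',
      () => resolve(synth.getVoices()),
      { once: true }
    );
  });
}

// Browser usage, behind a click handler to satisfy the gesture rule:
// openButton.onclick = () => loadVoices(speechSynthesis).then(startApp);
```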
Slowly but surely, the project's functionality began to grow. Because live sound effects for certain words can be distracting, I added an option to turn them off, along with the background music. After I got a base of the application working, I asked Gemini to create a prompt I could share with other chatbots so I could build it elsewhere if I wanted, and that's what it did.
Claude
Claude's project gave me the most success in some areas and the most trouble in others, but it's my personal favorite of the three test projects.
Claude made fantastic refinements to the underlying function of the trigger words in this project. Claude expanded the vocabulary and enhanced visualization when a trigger word was spoken aloud. That said, Claude made a call that I didn't ask for, though the logic indeed made sense.
Initially, I thought the project wasn't working: when I tested its functionality, only the first trigger word in a string of nearly 10 would produce the desired effect. It took some time for Claude to finally reveal that it had decided to let the sound and visual effects trigger only once per sentence, so as not to "spam" the user. That made a lot of sense, but the project was more of a proof of concept than a functional reader, and Gemini and ChatGPT generated sound effects for every keyword, which was the expected behavior.
All that said, there was no specific instruction in the prompt about how many times the sound and visual effects played. While it wasn't necessarily what I wanted, I did appreciate the consideration of the overall user experience in making this call. Then, after all those refinements were coded in, Claude updated the prompt, and I took it to ChatGPT.
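Claude's once-per-sentence decision is easy to express as a flag on the matching logic. A sketch, using a made-up trigger table since the article doesn't list the real trigger words or effect names:

```javascript
// Hypothetical trigger table; the real app's word list isn't shown.
const TRIGGERS = { thunder: 'storm-sfx', wolf: 'howl-sfx', rain: 'rain-sfx' };

// Return the effects to fire for one sentence. With oncePerSentence set,
// repeated trigger words fire only once (Claude's choice); otherwise
// every occurrence fires, as Gemini and ChatGPT handled it.
function effectsFor(sentence, oncePerSentence = false) {
  const words = sentence.toLowerCase().match(/[a-z']+/g) ?? [];
  let hits = words.filter((w) => w in TRIGGERS);
  if (oncePerSentence) hits = [...new Set(hits)];
  return hits.map((w) => TRIGGERS[w]);
}
```

With the flag exposed, "spam protection" becomes a user setting rather than a behavior baked in by the model.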
ChatGPT
ChatGPT at times failed to add features when I asked for them, but it still managed to recreate the project perfectly when I gave it the final prompt.
By the time I had created the updated prompt with Claude, there wasn't much else I could think to do when I uploaded it to ChatGPT. Luckily, OpenAI's chatbot created the project with ease, though it was the slowest at generating code. The one function I did ask ChatGPT to add -- a dedicated volume slider for the background music, so it could be turned off completely for a purely e-reading experience -- failed consistently. Eventually, I went back to Claude to ask for this functionality and to recreate the prompt.
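For what it's worth, the volume-slider feature ChatGPT struggled with amounts to a small piece of Web Audio plumbing: route the music through a `GainNode` and drive its gain from the slider. A sketch written against an `AudioContext`-shaped object (the names here are my own, not the app's):

```javascript
// Create an independent volume control for the background music, so it
// can be scaled or muted entirely without touching the speech output.
// `audioCtx` is anything with the AudioContext shape used below.
function makeMusicVolume(audioCtx) {
  const gain = audioCtx.createGain();
  gain.connect(audioCtx.destination);
  return {
    input: gain, // connect music sources here
    set(v) {
      // Clamp slider input to the valid 0..1 range; 0 is fully muted.
      gain.gain.value = Math.min(1, Math.max(0, v));
    },
    get() { return gain.gain.value; },
  };
}

// Browser usage:
// const music = makeMusicVolume(new AudioContext());
// slider.oninput = () => music.set(slider.value / 100);
```

Because the speech output goes through `speechSynthesis` rather than the audio graph, scaling this gain node leaves the narration untouched.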
Round 2: Recreating the same project
Despite the mix of models -- Gemini 3 Pro for the initial build, but only the free versions of ChatGPT and Claude -- all three recreated the project, though not without issues.
I had spent most of my time with Claude refining the project, and it was responsible for creating the final version. So it was incredibly surprising to find that when I uploaded that prompt into a new chat, the project wouldn't load past the first "initialization" page. Despite having no issues at all with previous iterations, it took 11 (yes, really) additional full rebuilds to figure out what was going on.
Recreating the project with both Gemini and ChatGPT worked flawlessly. All functions, basic and advanced, worked as they should, including file uploading, text highlighting, text-to-speech output, and both audio and visual effects when trigger words were spoken aloud. Going back to the models, I saw very little difference in function or performance when giving the same prompt to each chatbot.
Chatbot quirks
Acquiring the test file was always easy with Claude. Not only did it offer a preview of the project so you never needed to download the HTML file, but if you wanted to (which I often did for testing), it was available for download directly. This option was sometimes offered with ChatGPT, while at other times, I could only copy the HTML and save it myself.
Gemini, despite giving me the fewest errors and overall qualms, always required the long route: copying the HTML and saving it myself. All that aside, the fact that ChatGPT would only sometimes offer a direct HTML download was peculiar and a little frustrating.
The winner depends
Defining a winner for this type of test is tricky, as all chatbots have pros and cons. In a sense, they all win. Each was able to create a functional version of the project at some point, but it often took repeated efforts.
Ultimately, the winner is the user. It goes to show that, regardless of the model being used, a solid set of instructions can get you far. I couldn't distinguish any difference in performance or function between the app created by Gemini 3 Pro and those created by the free versions of ChatGPT and Claude.
This actually goes directly against what I found when having a similar conversation with both the Gemini pro and free models. While that was another day, another project, and another model, it shows that a solid prompt can get you incredibly far in the world of vibe coding.