I thought I was ready. The app was vibe-coded and working. The domain and mail services were configured. Even the marketing video was done. There was only "one little problem": the value wasn't there. Oh, and the AI was hallucinating.
People who know me or read my blog are aware of the strong opinions I hold about competitor research. And yet, for one of my side projects, I chose this very topic and decided to build a tool for fellow product managers and marketers to monitor their competitors.
Configuration >> Coding
It was also my first serious attempt to create a working product via vibe coding. And it started pretty well, too. I chose Firebase Studio to kick things off (mainly because they promised me some free credits to host the app on Google Cloud). The prototyper agent inside Firebase Studio is actually pretty good with visual stuff, so I had the front-end for my app ready in a few days. I expected more difficulties with the backend, and I was right. Prototyper couldn't deliver the functionality I needed, so I had to switch to something a bit smarter: ChatGPT. My first attempts were really pathetic; I was just copying and pasting code from Firebase Studio to ChatGPT and back. Things progressed at a snail's pace until a colleague mentioned Codex to me. That was the first major embarrassment: Codex was right there in ChatGPT, and I had failed to try it out.

With Codex, the backend of my app was up and kicking within a few weeks. Frankly, it took me more time to figure out the quirks of Firestore security rules than to actually iterate on the backend. That's another major learning from this whole endeavour: for agents, writing code is the easy part; configuration and integrations are the difficult ones. The most frustrating part was how inept Google's own LLM was at helping fix problems with Google's other services. You'd think it would work, as it's all part of the same ecosystem, but alas: I got better tips on how to do things in Google Cloud Console from ChatGPT than from Gemini, which is built into the console.
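To give a flavour of the kind of quirk I mean (this is a generic illustration, not my app's actual rules; the collection and field names are made up), here is a classic Firestore security rules gotcha: rules that check the stored document's owner fail on `create`, because at that point the document doesn't exist yet and you have to check the incoming `request.resource` instead.

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Hypothetical collection name, for illustration only.
    match /reports/{reportId} {
      // Quirk: on create there is no `resource` yet, so an owner
      // check must read the incoming `request.resource` instead.
      allow create: if request.auth != null
        && request.auth.uid == request.resource.data.ownerId;
      // For read/update/delete, the existing document is `resource`.
      allow read, update, delete: if request.auth != null
        && request.auth.uid == resource.data.ownerId;
    }
  }
}
```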
Too good to be true?
I steadily went through my to-do list until all the must-have items were done. Not only were they done on paper, I was also testing the app, and it was working. Or so it looked.

My app was about competitive research. The promise was really simple: as a product person, you'd put in the names of your competitors, and the app would produce a share-ready report outlining their recent product changes, financials, hiring and strategic news. Then you'd be able to set up a monitoring schedule to receive an updated report regularly. As a product manager myself, I knew this was valuable. All of us do competitive or market research, and it always takes time that could be spent on something more valuable. I estimated that my app could save a PM roughly 8 hours a month on market research activities.
So I was happy with the state of the app and started dogfooding it. That's where the first cracks appeared. The information in the reports my app generated was believable and realistic, but the links it showed as sources weren't working. The links looked genuine and led to reputable sites, but the claimed content wasn't actually there. That's when I realised the AI was hallucinating.
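One way to catch this class of hallucination automatically (a sketch of the general idea, not what my app actually did; the function and field names here are my own inventions) is to verify, before a claim goes into a report, that its cited URL resolves at all and that the page really contains the quoted text:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class SourcedClaim:
    text: str      # the claim the LLM made
    url: str       # the source it cited
    snippet: str   # a short quote it attributed to that source

def verify_claim(claim: SourcedClaim,
                 fetch: Callable[[str], Optional[str]]) -> bool:
    """Return True only if the cited page loads and contains the quoted snippet.

    `fetch` returns the page text, or None if the URL doesn't resolve.
    In production it would be an HTTP GET; injecting it keeps this testable.
    """
    page = fetch(claim.url)
    if page is None:
        return False  # dead link: the classic hallucination giveaway
    return claim.snippet.lower() in page.lower()

# Demo with a stubbed fetcher standing in for real HTTP requests:
pages = {"https://example.com/press": "Acme raised a $10M Series A in May."}
fake_fetch = pages.get

good = SourcedClaim("Acme raised $10M", "https://example.com/press",
                    "raised a $10M Series A")
bad = SourcedClaim("Acme was acquired", "https://example.com/gone",
                   "acquired by Globex")
print(verify_claim(good, fake_fetch))  # True
print(verify_claim(bad, fake_fetch))   # False
```

A real check would also want to handle redirects, paywalls and fuzzy quote matching, but even this crude gate would have flagged my dead-link reports immediately.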
That was the second and most embarrassing moment: how could I not have noticed it earlier? The answer is pretty simple: confirmation bias. I really wanted it to work and got too excited when I saw the early signs of it "working". The good part is that I noticed the hallucinations before I shared the app with my friends and the wider world. That spared me an even bigger embarrassment.
This setback didn't put me off the idea. I still believed there was value and opportunity there. So I started fixing the hallucinations. It was long, it was painful, and it was expensive. I got the app to a state where it was semi-working, but it no longer looked as neat as before. Because, surprise surprise, real-world data is never neat and polished.
And the last nail in the coffin of this idea? Something already existed that was much better, cheaper and more accessible than my app: a plain LLM call. Try it if you haven't already. Go to your LLM of choice and ask it to create a competitive report on your competitors. Choose the "deep research" option if you can. You'll get a neatly formatted document with real data and surprising insights into your market, all either free or for the moderate fee of a "pro" subscription.
That was the moment when I decided not to launch Advantage Scout. I still don't consider my efforts spent on it a waste, as I learned a ton. However, I had to admit - the value wasn't there.
Not launching yet != not launching ever
Timing is as important to the success of any idea as execution. Even though Advantage Scout won't be launching right now, I am not abandoning the idea. I might return to it later, in some other form, maybe with a completely different value proposition.

The main learnings from this embarrassing failure story are the following:
- Share early what you're working on and how you're working on it, so that other people with different experiences can give you valuable, sometimes paradigm-shifting feedback
- Writing code is an easy part for an LLM - prioritise platforms with convenient tools for running and debugging your app
- Try hard to "break your app" and don't fall for confirmation bias - when something looks too good to be true, it probably is
- Think deeply about the value you want to deliver, and whether something already delivers that value more conveniently or cheaply
- Don't discard ideas just because the original attempt at realising them failed
- Be mindful of the timing for product launches - it could make or break your product
