The Lovable Story Isn't About a Bug. It's About Who Owns Security When AI Builds the Software.

The headline: Lovable, the Swedish vibe-coding platform valued at $6.6 billion with customers including Uber, Zendesk, and Deutsche Telekom, has been accused of leaving thousands of projects exposed for 48 days. A free account, created in minutes, can reportedly access other users' source code, database credentials, AI chat histories, and customer data from any project created before November 2025. The flaw, a Broken Object Level Authorization (BOLA) vulnerability of the kind ranked #1 on the OWASP API Security Top 10, was reported March 3 and marked as a duplicate. It remained exploitable 48 days later.
Lovable's response was to deny a breach occurred at all. They called the visibility "intentional behavior." They blamed unclear documentation. They blamed HackerOne for misclassifying the report.
The easy lesson is "Lovable has a security problem." That's true but misses the more important one. The Lovable story is the clearest case study yet of a question every business leader using AI tools needs to answer: when an AI platform builds your software, who actually owns the security of what gets shipped?
What actually happened
Two incidents matter here, and the difference between them is important.
| | February incident | April incident (breaking today) |
|---|---|---|
| What | Flaws in an app built on Lovable | Flaw in Lovable's own infrastructure |
| Impact | 18,000 users of one education app exposed | Every project created pre-November 2025 |
| Root cause | AI-generated code with missing security config | API that checks login but not ownership |
| Lovable's position | Users are responsible before publishing | Denies a breach occurred |
The April incident is the worse of the two. The `/projects/{id}/*` endpoints check that you're logged in but never check whether the project belongs to you. That single missing check is enough to expose every project on the platform.
The researcher demonstrated severity by accessing the admin panel of Connected Women in AI, a Danish nonprofit, pulling real names, job titles, LinkedIn profiles, and Stripe customer IDs of professionals from Accenture Denmark and Copenhagen Business School. As they put it: "This is not hacking. This is five API calls from a free account."
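The missing check is easy to picture. The sketch below is illustrative only, not Lovable's actual code; the handler names and data are invented. A BOLA-vulnerable endpoint authenticates the caller but never verifies that the requested object belongs to them:

```python
# Hypothetical in-memory store standing in for a projects database.
PROJECTS = {
    "p1": {"owner": "alice", "source": "...", "secrets": "..."},
    "p2": {"owner": "bob", "source": "...", "secrets": "..."},
}

def get_project_vulnerable(user_id, project_id):
    # Checks only that *some* user is logged in. Any authenticated
    # account, including a free one, can read any project by ID.
    # This is OWASP API1: Broken Object Level Authorization.
    if user_id is None:
        raise PermissionError("login required")
    return PROJECTS[project_id]

def get_project_fixed(user_id, project_id):
    # Object-level authorization: authenticate, then verify that the
    # object being requested actually belongs to the caller.
    if user_id is None:
        raise PermissionError("login required")
    project = PROJECTS.get(project_id)
    if project is None or project["owner"] != user_id:
        raise PermissionError("not your project")
    return project
```

In the vulnerable version, `get_project_vulnerable("mallory", "p1")` happily returns Alice's project; the fixed version refuses. The fix is one conditional, which is what makes the 48-day timeline so hard to explain.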
Why the corporate response made it worse
Lovable's response is the part that turns a security incident into a trust incident, and trust is harder to repair than code.
Each move in their statement read as a smaller version of the truth. Each was caught immediately by people with screenshots. Lovable's own Trust Center says security is foundational. Its security page says customer data is not accessible across accounts. Its privacy policy claims a 24/7 incident response team. When you sell trust that hard, you don't get to retreat to "documentation was unclear" when the trust gets tested.
The technical flaw is the surface story. The real damage is that Lovable's response showed customers what the company actually does when its own security is on the line: minimize, deflect, blame the bug bounty platform. That's a tell about culture, not engineering.
The deeper question this exposes
If you've used or considered any AI app builder — Lovable, Replit, v0, Bolt, Base44 — the Lovable incident raises a question that doesn't have a clean answer yet. When an AI platform generates and hosts your application, where does security responsibility sit?
The gap is real:
| What the marketing promises | What the contract actually says |
|---|---|
| "Production-ready apps" | User reviews security pre-publish |
| "Authentication included" | Security scanner is advisory, not enforced |
| "Secure by default" | Configuration is your responsibility |
| "Trust is foundational" | Platform liability is limited |
The question isn't whether AI app builders are useful. They are. The question is who is accountable when the platform that promised "secure by default" turns out to mean "secure if you knew what to check, which you didn't."
What this means for your business
Four things that matter regardless of which platform you use.
This is the canary, not the cliff. The DORA report found a 7.2% decrease in delivery stability for every 25% increase in AI code usage. Nearly half of all AI-generated code contains vulnerabilities. Lovable is the platform that got caught publicly. The pattern is industry-wide.
"Secure by default" matters more than feature parity. The platforms that ship with ownership checks enforced, secrets scanned, and chat history excluded from public visibility won't put you in this situation. Ask the question before you sign.
The response model is part of the product. Lovable's denial-and-deflect response would be unthinkable from a mature enterprise vendor. From an AI startup with a $6.6B valuation, it appears to be a reflex. Buy accordingly.
Governance is not optional even at the smallest scale. If your business runs even one internal tool built on a vibe-coding platform, you need to know what data it touches, what credentials it holds, and what your incident response would look like if the platform itself were compromised.
What I'd do this week
Your response depends on what you're actually using.
| Your situation | This week |
|---|---|
| You've built on Lovable | Rotate credentials today. Audit chat history for secrets. Assume pre-November 2025 source code is already public. |
| You've built on another AI app platform | Run the same audit anyway. The next incident hasn't been disclosed yet — that doesn't mean it can't be. |
| You're considering one for the first time | Read the security page. Then read the terms of service. When they contradict, the terms of service is what you signed. |
| Your team uses these tools casually | Set a one-page policy. What can be built, what data can be touched, where credentials live, what gets reviewed before it ships. |
The Lovable story makes this policy easy to write. The next incident will be expensive to write retroactively.
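The "audit chat history for secrets" step can be partially automated. A minimal sketch, assuming you can export chat transcripts as plain text; the patterns below are illustrative assumptions, and a real audit should use a dedicated secret scanner rather than hand-rolled regexes:

```python
import re

# Illustrative patterns only -- tune and extend for your own stack.
SECRET_PATTERNS = {
    # Stripe-style secret keys (sk_live_... / sk_test_...)
    "stripe_key": re.compile(r"sk_(live|test)_[0-9A-Za-z]{8,}"),
    # Generic "api_key = ..." assignments
    "generic_api_key": re.compile(r'(?i)api[_-]?key\s*[:=]\s*["\']?[0-9A-Za-z_-]{16,}'),
    # Database connection strings with embedded credentials
    "postgres_url": re.compile(r"postgres(?:ql)?://[^\s'\"]+"),
}

def scan_text(text):
    """Return (pattern_name, matched_text) pairs found in exported chat text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((name, m.group(0)))
    return hits
```

Anything this flags in a chat transcript should be treated as already leaked: rotate the credential, don't just delete the message.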
The deeper thing
The story of the AI tooling boom over the last two years has been speed. Faster prototypes, faster apps, faster shipping. The story of the next two years is going to be the cleanup. The platforms that survive will be the ones that treated security as a foundation, not a feature flag. The platforms that don't will join a long list of cautionary tales founders tell each other at conferences.
For business leaders, the right posture isn't fear of these tools. It's clear-eyed about what they actually are. Useful for some things. Dangerous for others. Worth using with discipline, not faith. The Lovable story is the one that should make discipline cheap to argue for in your organization. Use it that way.
