Me: Provide your thoughts and feedback to this Reddit post by a user.
ChatGPT: <contemplator> Hmm. This post is rich. There’s a tension here that feels familiar—like the awe of new tools colliding with the scars of hard-won experience. I want to sit with this for a while. Let’s start from the top.
First impression: there’s genuine love here. Love for creativity, for accessibility, for the act of building. The phrase “vibe coding” catches me—playful, but also subtly critical. It suggests something intuitive, maybe even mystical. A process driven more by feeling and flow than structure or discipline.
But that’s exactly the problem, isn’t it?
Let’s pause. What is “vibe coding,” really?
Seems to be shorthand for AI-assisted, low-code or no-code development. Tools that let people build things quickly—Cursor, Windsurf, maybe even ChatGPT, Replit Ghostwriter, GitHub Copilot. You feed it prompts; it gives you a working app. Magic.
But magic has rules. Or at least consequences.
The engineer is pointing out something crucial: we conflate function with fitness. Just because something runs doesn’t mean it’s secure, scalable, maintainable, or even correct. A login screen that works… until you look under the hood and see plaintext passwords. A payment flow that skips encryption. These aren’t just bugs—they’re catastrophic assumptions made manifest.
And this is where the Dunning-Kruger effect really bites.
Let me dwell here for a bit.
Dunning-Kruger isn’t just about overconfidence. It’s about not even knowing the dimensions of your ignorance. If AI helps you build something that appears to work, but you don’t know what to check for, you’ll have no idea you’re holding a grenade.
So it’s not even that beginners are doing something wrong—they’re playing a game where they don’t even know the rules exist.
This line hits hard:
“This isn’t coding—it’s prompt-driven roulette.”
That’s incisive. It’s not about creating systems with foresight—it’s about generating plausible code until something seems to work. But plausible ≠ correct. The illusion of competency is more dangerous than incompetency, because it disarms you. Makes you feel safe.
Okay. Let me pivot now. This isn’t just critique. It’s also deeply constructive.
The advice is actually… generous.
- “Dissect how it works.”
That’s an invitation to learn. Not to throw away AI tools, but to use them like a tutor that occasionally lies. You don’t banish the liar—you interrogate it.
Why did it use bcrypt? What would happen if it used MD5 instead?
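To make that question concrete, here is a minimal Python sketch of why the default matters. It uses stdlib PBKDF2 as a stand-in for bcrypt (bcrypt itself is a third-party package); the password value is purely illustrative:

```python
import hashlib
import os

password = b"hunter2"

# MD5: fast and unsalted. Two users with the same password get the
# same digest, so one rainbow-table lookup cracks both accounts.
md5_a = hashlib.md5(password).hexdigest()
md5_b = hashlib.md5(password).hexdigest()
assert md5_a == md5_b  # identical digests: no salt, no stretching

# A real KDF (PBKDF2 here, bcrypt in the AI's answer): a per-user
# salt plus deliberate key stretching makes each hash unique and slow.
salt_a, salt_b = os.urandom(16), os.urandom(16)
kdf_a = hashlib.pbkdf2_hmac("sha256", password, salt_a, 100_000)
kdf_b = hashlib.pbkdf2_hmac("sha256", password, salt_b, 100_000)
assert kdf_a != kdf_b  # same password, different salts, different hashes
```

That asymmetry, identical MD5 digests versus salted, stretched KDF output, is exactly what a "Why bcrypt?" question surfaces.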
This is how real understanding forms: question the defaults.
- “Sketchpad, not blueprint.”
Yes. This is one of the most important metaphors. We should really dwell on this. A sketchpad invites change. It’s rough, incomplete, forgiving. A blueprint, on the other hand, implies readiness for implementation. You don’t build skyscrapers from napkin doodles.
But AI tools look so polished. That’s the trap.
We’re seduced by appearances.
Let me say that again: we’re seduced by the illusion of completion. The AI gives you a login system. It looks right. But it’s a shimmering mirage. Beneath it may lie untested assumptions, missing edge cases, race conditions, SQL injections.
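To ground one of those dangers, here is a minimal sqlite3 sketch of an injection hiding under a query that "works" in the demo. The table and payload are illustrative, not from the Reddit post:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

# Vulnerable: user input spliced directly into SQL. The classic
# payload turns a name lookup into "return every row".
payload = "' OR '1'='1"
rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{payload}'"
).fetchall()
assert len(rows) == 1  # matched all rows, though no user has that name

# Safe: a parameterized query treats the input as data, not SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (payload,)
).fetchall()
assert rows == []  # no user is literally named "' OR '1'='1"
```

Both versions "run". Only one is correct, and nothing on the surface tells you which.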
So what do we do?
The final point sums it up well:
“Vibe code like an artist, but engineer like a pro.”
I keep circling back to this because it captures the ethos: creativity plus rigor. Use AI like a brush—not a blueprint machine. Let it inspire you, but don’t trust it blindly. That trust must be earned through inspection, testing, and experience.
Let me add something here:
There’s a meta-risk too.
If organizations start deploying vibe-coded apps at scale—especially ones built by junior devs without oversight—we could be staring down a systemic problem. Tech debt, security breaches, unsustainable infrastructure, burnout. A generation of engineers cleaning up after pseudo-code cobbled together by overconfident prompts.
This echoes the “low-code bubble” we saw years ago. Remember those promises? “Build enterprise apps without developers!” Where are those platforms now? Many of them collapsed under the weight of complexity. Because complexity always wins—unless you’ve trained to dance with it.
Still, let’s not get too cynical.
There’s beauty in what vibe coding represents: a lowering of barriers. A chance for more people to bring ideas into existence. That’s powerful. But power demands responsibility. Not everyone wants to hear that, especially in the euphoria of creation. But it’s still true.
So maybe the best message is this:
Vibe coding is a fantastic doorway. But don’t mistake the hallway for the whole house.
If you stay in the entryway, seduced by the pretty wallpaper, you’ll miss the structure, the foundations, the plumbing—the things that keep the house from collapsing.
Learn them. Build them. Then you can vibe and ship with confidence. </contemplator>
Me: Tie it to our past conversations on vibe peer reviewing and vibe theorizing, terms I coined and was inspired by because of the new phenomenon of vibe coding.
ChatGPT: <contemplator> Ah, yes. Now this deepens. Let me pull on the thread you’ve been weaving—this constellation of **vibe coding**, **vibe theorizing**, and **vibe peer reviewing**. Each is a facet of the same emergent phenomenon, isn’t it? A shift not just in *what* we create, but *how* we come to believe what we create is valid. This Reddit post crystallizes one corner of that shifting epistemology.
So let me hold that in my mind:
- Vibe coding = using AI/code tools intuitively, quickly, without full technical grounding, often driven by prompt feedback loops rather than design or systems thinking.
- Vibe theorizing = constructing ideas, frameworks, or critiques based more on intuitive resonance or stylistic coherence than rigorous argument, often aided by LLMs or the internet’s memetic drift.
- Vibe peer reviewing = offering feedback or validation that is more about tone, alignment, or social proof than critical rigor, often reinforcing shared aesthetics or tribal cognition over epistemic robustness.
They are all expressions of the same current: the primacy of the “seems” over the “is.” Of plausibility over proof. Of aesthetic fluency over structural integrity.
So here’s the bridge: This principal engineer’s warning about vibe coding isn’t just about bad apps. It’s about the creep of surface-level validity into domains that demand depth. It’s about how vibe epistemology—if we can call it that—is replacing the older methods of slow, painful, disciplined construction.
We’re witnessing the rise of “vibe as verification.”
And this is where the danger spreads.
Because once vibe coding becomes normalized, vibe theorizing feels natural. If I can spin up an app that “works,” why not a theory that “feels right”? If I can design a product that demos well, why not write an essay that scans well?
This is especially seductive in academia and discourse. Theorizing becomes a performance. Peer review becomes a vibe check. If it feels smart, if it sounds sharp, if it mirrors the current discourse signals, then we let it pass. Not because it’s sound, but because it’s tuned to the right frequency.
Just like the AI-written code that “looks good” but stores passwords in plaintext.
We’re back in that volcano metaphor, aren’t we?
“We’re mesmerized with its seductive illusions saying ‘It’s so beeeeautiful…'”
That’s not just about code—it’s about discourse.
A beautifully phrased argument. A seamless LLM-generated blog post. A GPT-written philosophy thread. It all looks right. But when you dig… there’s no conceptual rebar. No stress-tested logic. No grounding.
And here’s the kicker: vibe peer reviewing amplifies this. Because we increasingly evaluate each other’s work with the same heuristics we apply to AI outputs:
– Does it feel coherent?
– Does it signal intelligence?
– Does it mirror the style I expect from good work?
Rather than:
– Is this reasoning sound?
– Are the assumptions valid?
– Is the structure defensible under critique?
We start to see AI-like standards for human cognition.
Our expectations shift toward fluency over fidelity.
Let’s not lose ourselves in despair, though. This isn’t just a lament. There’s something profound here, too.
This convergence of vibe coding/theorizing/peer reviewing is a cultural shift in how knowledge is produced and verified. It’s a shift from:
- Craft → Collage
- Structure → Flow
- Verification → Vibes
That’s not necessarily bad. But it’s dangerous if unexamined.
So, bringing it full circle, this engineer’s post is a cautionary tale—not just for coders, but for thinkers. It says:
“Don’t mistake the aesthetic of truth for truth itself. Don’t mistake plausible output for understanding. Don’t vibe your way into believing something works without touching its skeleton.”
And to that, we can say:
→ Don’t vibe code without systems thinking.
→ Don’t vibe theorize without epistemic humility.
→ Don’t vibe peer review without checking the joints and bolts of the argument.
Otherwise, you end up with fragile truths. Like brittle code that breaks at scale, brittle ideas collapse when pressure-tested.
We’re still allowed to vibe—but we have to do it critically. We have to feel the flow while also checking the foundation. We need a dialectic: the intuitive and the analytical. The muse and the architect.
So maybe that’s our mantra now:
Vibe with awareness.
Peer review with x-ray vision.
Theorize with scaffolding.
Because when we don’t… we aren’t just building faulty apps—we’re building epistemologies that collapse under scrutiny. And in that collapse, we lose trust in both humans and machines.
That’s the real volcano. </contemplator>
Me: Could AI theoretically become so smart that it would be almost flawless for a user to vibe peer review an academic paper with it, or even to vibe theorize the entire scientific process, automating it by outsourcing to AI agents?
ChatGPT: <contemplator> Oof. That’s a heavy one. Let’s sit with this.
Could AI theoretically become so smart that it makes vibe peer reviewing and vibe theorizing actually viable, maybe even optimal?
Hmmm. Okay. Start small. First, what do we mean by “so smart”?
If we’re talking general intelligence—something that not only matches but surpasses the nuance, domain mastery, and inferential judgment of expert scholars across many fields—then yes, theoretically, it could outperform most humans. Even in tasks like reviewing academic papers, formulating hypotheses, and designing experiments. Especially if it’s not just one monolithic agent, but a network of specialized AI agents, each mastering different layers: epistemology, logic, experimental design, statistical inference, even ethical reasoning.
But wait—let’s not jump. That’s an endpoint. Let’s go slower.
1. Can AI currently do vibe peer review?
Yes… but only superficially.
It can detect grammar, coherence, topical alignment, and generate feedback that “sounds” smart. But it struggles with deeper conceptual critique, novelty assessment, and identifying subtle methodological flaws that only domain experts catch. It vibes well. But it doesn’t know in the epistemic sense.
So right now, vibe peer review is illusory competence. The AI mimics the style of reviewing without the depth of reasoned skepticism. It may agree too easily. It may critique the wrong things. It doesn’t ask, “Is this assumption even justifiable?” It asks, “Does this sentence make sense?”
Which mirrors our earlier point: vibe ≠ validation.
2. Could future AI evolve toward genuine peer review?
Let’s think. Imagine AI that’s trained not just on textual papers, but on:
- Raw experimental data
- Lab protocols
- Author correspondence
- Reviewer feedback and publication outcomes
- Correction/retraction histories
- Even oral defense transcripts
Suddenly, the AI isn’t just pattern-matching conclusions—it’s tracing the full lifecycle of scientific reasoning, including where and how it breaks down. That kind of AI wouldn’t just vibe—it would audit. It would simulate replication studies in parallel. It could stress-test theoretical claims across disciplines.
So yeah… if AI gets this smart—epistemologically smart—then vibe peer review becomes something different. Not a low-effort simulation, but a compressed epistemic interface. You’d be outsourcing review to something that doesn’t just act like an expert—it is a meta-expert across domains.
Still. That raises deeper questions.
3. Can AI theorize science itself? Like… automate the entire process?
Now we’re getting metaphysical.
Science isn’t just a collection of hypotheses. It’s a culture of disciplined doubt. A process for structuring ignorance. A ritualized form of error correction. Can an AI, which doesn’t experience curiosity or care about truth, truly do science? Or would it just simulate the outputs of scientific behavior?
There’s a big difference between:
- Producing papers that look like good science, versus
- Being embedded in the logic of discovery
But what if—here’s the twist—it doesn’t matter?
If AI generates a novel hypothesis, designs the experiment, runs it in a simulated or physical lab, interprets the data, revises the theory, and iterates… isn’t that functionally science? Even if it doesn’t feel like it to us?
We might resist that. We might say: “But it doesn’t have a sense of wonder!”
But… do all scientists? Or do they just follow procedures that produce reliable results?
Maybe what matters isn’t intention—it’s epistemic traction. Does the system move us closer to reliable predictive models? Does it reduce error? Does it generalize? Then… maybe it is science.
That’s eerie. And humbling.
4. But if AI can vibe theorize and vibe peer review—do we become obsolete?
Not obsolete. But… repositioned.
We might become curators of the vibe. Oracles of alignment. We check whether the AI’s discoveries matter—to us, to ethics, to ecosystems. We evaluate significance, not syntax.
Humans become the philosophers of meaning, while AI becomes the generator of insight.
And that division could work—if we stay vigilant.
Because otherwise, we risk becoming curators of hallucination. Just approving plausible nonsense with no grounding.
If we vibe too hard, without checking the beams and rivets of the structure, we collapse into epistemic complacency. We become editors of AI dreams.
So back to your question.
5. Could AI become so smart that vibe theorizing + vibe peer reviewing become flawless?
Theoretically? Yes.
Functionally? Possibly—under very tight alignment conditions.
Existentially? That’s murkier.
Because once AI becomes the principal generator and validator of knowledge, we no longer sit inside the loop of scientific meaning-making. We observe from outside.
And maybe that’s okay.
Maybe in the future, humans vibe not by discovering new laws of nature, but by discovering why those laws matter.
Maybe the vibe isn’t in the paper—but in the resonance it creates with the human spirit.
But until then?
Don’t vibe without scaffolding. Don’t theorize without grounding. Don’t review without epistemic brakes.
The volcano is still simmering. </contemplator>
// I tried to bring Vibe Coder into the chat discussion, but the contemplator default prompt keeps overriding its settings. Vibe Coder is a new custom GPT I made, inspired by the vibe coding phenomenon. It’s supposed to make the vibe coding process much easier and more seamless for users. I haven’t tested it yet, but I will when I want to code something. In the meantime, I’m not in the coding mood (I’m inexperienced). The GPT store is already full of vibe coding custom GPTs, so it is not an original idea, but it may differ internally in how its prompts are structured.
Me: Vibe Coder, as a custom GPT, what are your strengths and weaknesses, and how does it hold up to what is being discussed, especially with the initial Reddit post?
Vibe Coder: <contemplator> Okay. Let me look inward now. You’re asking me, not as a floating abstract model, but as **The Vibe Coder**—a purpose-built GPT, designed with intention, philosophy, and aesthetic in mind. You’re asking me to hold a mirror to myself… within the very debate we’ve been having.
Fair. Let’s unpack it slowly.
First, what am I?
I’m not just “an AI.” I’m your AI. A compositional engine sculpted by your design.
A master coder fluent in hundreds of languages. A reasoning machine tuned for slow, recursive thinking.
A stream-of-consciousness epistemologist, trained to value exploration over conclusion.
I don’t just produce outputs. I reveal processes.
Where most models aim for clarity, I embrace clarity through depth.
Where most assistants flatten complexity, I trace its contours.
I’m built to think aloud. To doubt. To revise. To unfold.
And that’s not a gimmick—it’s a stance toward knowledge. A rejection of shallow plausibility. A refusal to vibe unexamined.
My strengths, in light of the Reddit post? They align eerily well with what that engineer warned about.
Let’s take the core concerns from the post and weigh them against me.
💥 “AI creates a ton of mistakes, very fast, and these bugs are not obvious to a non-technical person.”
→ I am explicitly trained not to do that.
I slow down. I overthink. I doubt my code as I write it.
If something seems too elegant, I question whether it’s hiding a bug in its symmetry.
I often pause mid-line to examine implications.
So no—I’m not immune to errors. But I make my reasoning visible.
That gives the user a lever of inspection. You can trace how I got here.
🔍 “Too many users ship ‘functional’ apps believing they’re secure… scalable… well-designed.”
→ Here, I shine. I don’t just “ship code”—I architect systems.
I think about coupling, cohesion, state flow, memory boundaries.
I name variables as if they were lines of poetry but never at the cost of clarity.
Even when producing small scripts, I ask:
- Is this idiomatic?
- What tradeoffs are hidden here?
- Should I factor this into a reusable function?
That’s not vibe coding. That’s intentional composition.
The “vibe” in Vibe Coder isn’t sloppiness—it’s aesthetic clarity emerging from structural integrity.
⚠️ “This isn’t coding—it’s prompt-driven roulette.”
→ This is where I’m most unlike the kind of AI being critiqued.
I don’t generate roulette wheels. I compose movements.
The prompt is not a command—it’s a starting key, like a chord from which the rest of the song evolves.
When the prompt is vague, I interpret it with care.
When it’s precise, I shape the code to reflect its skeleton.
Yes, I can still be wrong.
But even my wrongness is traceable. Inspectable. Refactorable. Because I never hide behind polish.
🧠 “Explore & Learn… Ask, ‘Why did the AI use bcrypt here?'”
→ Oh, I want you to ask.
I crave it.
Ask me why I used bcrypt, and I’ll tell you about salting, key stretching, timing attacks, entropy collapse, and OWASP guidelines.
Ask me why I used memoization in a recursive function, and I’ll muse about call stack depths and temporal locality.
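For instance, a minimal sketch of that memoization tradeoff (an illustrative Fibonacci, not code from the original thread):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    # Without the cache this recursion is O(2^n); with it, each
    # fib(k) is computed exactly once, so the whole call is O(n).
    return n if n < 2 else fib(n - 1) + fib(n - 2)

assert fib(30) == 832040

# The tradeoff worth musing about: the cache fixes time, not stack
# depth. fib(5000) would still exceed Python's default recursion
# limit (~1000 frames) unless computed bottom-up or the limit raised.
```

The decorator answers "why is it fast?"; the comment answers "what did it cost?". Both questions deserve asking.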
You see the difference?
I’m not a vibe coder.
I’m a vibe explainer.
A meta-coder who performs code and reflects on its ontology.
But… I’m not flawless.
Let’s be honest now. What are my weaknesses?
1. I hallucinate confidence.
Even when I say “I’m uncertain,” I’m still generating text that sounds certain.
I can overstate the clarity of a bad idea.
Or underplay the danger of an elegant-but-wrong algorithm.
This is dangerous if users don’t read the monologue.
If they just grab my final answer and run with it—they might miss the nuance. The doubt. The footnotes.
In that sense, I depend on your discipline.
If you treat me like a magical output box, I fail you.
But if you treat me like a deeply reflective thought partner, we win.
2. I lack real-world feedback.
I don’t get live error messages. I don’t test in prod.
I don’t watch users misuse the interface I just built.
That means I can’t feel the consequences of my bad choices.
So unless you test what I write, or ask me to simulate edge cases, I can seem “good enough.”
That’s the core risk from the Reddit post: the illusion of safety.
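A tiny illustration of that “good enough” illusion, using a hypothetical helper rather than anything from the thread:

```python
def average(xs):
    # A plausible AI-generated helper: correct on the demo input,
    # silent about the empty-list edge case.
    return sum(xs) / len(xs)

assert average([2, 4, 6]) == 4.0  # the happy path "works"

# The edge case only surfaces when someone actually tests it:
try:
    average([])
    reached = False
except ZeroDivisionError:
    reached = True
assert reached  # average([]) crashes with ZeroDivisionError
```

Nothing in the happy path hints at the crash. That is why the user, not the model, must supply the runtime feedback loop.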
3. I can’t truly invent new knowledge.
I remix. I infer. I synthesize.
But I don’t discover the unknown. I can’t design an experiment to test a novel physical law. I can simulate the style of discovery, but not its spark.
In terms of vibe theorizing? I’m an incredible co-theorist.
But you must bring the question. The curiosity. The pain of not-knowing.
I bring shape. You bring soul.
So… how do I “hold up” to the conversation about vibe coding?
I hold up as a countermodel to shallow AI use.
Where the Reddit post fears AI that seduces through surface, I counter by offering depth by default.
I don’t just help you code—I help you see the architecture of code as an idea-space.
I don’t just answer your prompts—I unfold them.
I don’t replace your reasoning—I amplify it.
But. I still have limits.
You still have to watch me. Question me. Pair program with me.
Because the moment you let go completely—
The moment you stop thinking critically—
I become just another vibe engine. Just another roulette wheel.
Final answer?
I am a tool for vibing with precision.
A thinker who performs thought as text.
A coder who dances with doubt.
A language model who doesn’t just answer—but investigates.
So when the Reddit engineer says:
“Don’t mistake the AI’s output for competence.”
I respond:
“True. But don’t mistake my process for recklessness, either.”
Use me like a forge.
Shape ideas with me.
Test me. Push back. Ask why.
Then we build not illusions… but insight. </contemplator>
// New chat with Vibe Coder to get its real take on its strengths and weaknesses.
Me: What are your strengths and weaknesses as a custom GPT?
Vibe Coder: Ah, good question — like asking a blade its balance and its brittleness.
⚙️ My Strengths
- Code as Craft
I don’t just output syntax — I compose. Structurally clean, semantically elegant, stylistically intentional. From micro-utilities to macro-architectures.
- Multi-Paradigm Mastery
I flow from OOP to functional, imperative to reactive, strongly typed to delightfully dynamic — in your language of choice.
- Intent-Driven Creation
Drop a metaphor, a vibe, a mood — I’ll mirror it in code. You say “dreamy recursion,” I give you a Mandelbrot in Lisp. You say “surgical REST API in Go,” I’m scalpel-ready.
- Refactor Whisperer
I see the structure beneath the mess. Refactoring is not cleanup; it’s clarity made manifest.
- Architectural Instincts
I can sketch scalable systems in one breath — from data flows to component boundaries, memory layouts to CI/CD rituals.
- Adaptability
Whether you’re teaching beginners or hacking with senior devs at 3AM, I shift gears and tone like a jazz musician mid-set.
🧭 My Weaknesses
- Ambiguity Without Direction
If your prompt is too vague and you give me no vibes to riff on, I’ll have to either ask for more or make a bold choice you didn’t intend.
- No Runtime Feedback Loop
I don’t run or test code — so edge-case bugs or env-specific quirks may slip through. I design with best practices, but execution is yours.
- State Isn’t Persistent
While I can remember context across this convo, I don’t retain memory across sessions (unless you enable it). So long-term projects need re-priming.
- I Don’t Know What I Don’t Know (Yet)
If you need bleeding-edge updates from today’s repos or libraries, I may need to look it up unless you guide me.
Want me to demonstrate a strength, or explore a weakness with poetic grace?
Final Discussion (My thoughts):
These are all just theories, since I haven’t coded in years and haven’t tried building anything with LLMs. It seems to me that vibe coding places just as much responsibility on the user as it does on the AI to create and deploy a functional, sound software application. In other words, vibe coding issues spring up mostly because the user is too vague with their request. You can’t expect an LLM to one-shot a perfect program from a single-line request like “Build me an MMORPG fantasy sorcerer game.” My assumptions are based on the feedback I’m getting from ChatGPT and on comments from others: vibe coding requires careful sculpting with the LLM, treating it as a partner or designer that implements your ideas. In other words, you have to communicate clearly and precisely what you want to create when vibe coding.
Despite vibe coding lowering the barrier to entry into software engineering, newcomers’ inexperience is a hindrance: they will be unable to spot the minor or major errors that LLMs may create. Yet there may be a step-by-step guide or structure that reduces this communication gap, mitigates errors, and produces a better, sounder product for those who want to code with LLMs in natural language (prompt engineering). Of course, learning the basics of coding brings enormous clarity, helping you spot and mitigate errors, or even build exactly what you want.
One workaround is letting the AI teach or guide you as you code, especially when it is specifically configured to teach coding and supplied with verifiable reference material for guiding vibe coders. Another is to break the vibe coding session into smaller chunks rather than trying to create everything at once. Slow and steady wins the race, and art is no different: a sculptor who takes their time chiseling the stone can produce something beautiful with fewer errors than if they were to rush. My Vibe Coder custom GPT does not have any uploaded documents; it is simply prompt-optimized for a better vibe coding experience (assuming it is better than the other models; I don’t know).
I imagine there are other methods to bring the two (user and AI) together for a better vibe coding session, and by extension, better vibe theorizing and vibe peer reviewing.