Can we talk about vibe coding?

By Doll Eye, 10 February, 2026

Forum
App Development and Programming

Hey!

This is probably going to be a highly technical thread which I'll regret spawning... But, as we have some very clever computer science types on here, I was wondering if you could give me and others some pointers on vibe coding?

Questions that come to mind:

1. Is it actually worth it?
2. What are the limitations in scope?
3. Is it indeed improving as the big three are telling us?
4. Where is it best to get started on the Mac with VoiceOver, i.e., an accessible route in?

I'm sure other questions will arise.

I did have a play a few months back with Xcode and Claude and, though it worked well at first (an iOS app for playing my audiobook library from iCloud whilst grabbing metadata from Audible), the further down the line I got the more errors appeared. It was basically losing coherence with each pass, entropy spreads and all that. I tried creating a brief, put it in a folder, and kept asking the AI to refer back to it, checking against the development goals, but it got confused, more errors floated in and, not being fluent in such things, I just let it get on with the debugging.

I know what coders will say if I ask the question of whether there is any value in us learning coding basics, but I'm also looking for the fastest start-to-finish route from idea to app. I know that sounds lazy but... I really can't be bothered to end this sentence.

I think what puts me off coding, and I have coded before during my degree in computer systems engineering, is the sheer weight of code: navigating it, syntax errors which are hidden from us, all of which results in me coming out in a cold sweat when faced with a wall of it.

I'm hoping you can give some pointers on this. Is Xcode, in fact, the best way to do this on the Mac, or are there better solutions?

Please talk to me like I'm an idiot... because... Well, I won't finish that sentence either.


Comments

By Doll Eye on Tuesday, February 10, 2026 - 11:48

I did start reading this post a while back but gave up. It feels like he's shouting at me... I don't like being shouted at. It makes me sad.

Also, assuming it's all in caps.

I'll check out the podcast instead.

By JoĂŁo Santos on Tuesday, February 10, 2026 - 12:04

Nothing of real value has ever been produced with vibe coding; it's a total waste of resources, using very powerful technology the wrong way just because C-suites want to be self-sufficient. All the AI junkies are collectively digging their career graves, not because AI is getting any better but because they are letting their skills rot away by relegating themselves to riding in the passenger seat instead of driving innovation as pilots.

Large language models are interesting from a scientific perspective, but in terms of production they are actually contributing negatively to society. The same systems that are being used to power them at a loss and at scale could instead be employed to do a lot more interesting scientific research. At this pace we are more likely to end up in a reality where AI outsmarts us not because it evolved into superintelligence but because we got a lot dumber, collectively speaking.

By a king in the north on Tuesday, February 10, 2026 - 12:19

Vibe coding is great if you just want to get started with an idea and want a proof of concept. However, the "move fast and break things" philosophy has made software far worse because shipping with bugs is now more tolerated than ever. Many accessibility bugs, for example, are produced by the large language model, and in my experience it doesn't know how to fix them without human assistance.

The limitations are pretty clear to see, as long as you don't buy into the hype. You have already noticed that as complexity increases, so does the number of errors. Over the last year or so, very good scaffolding has been built to keep this from happening. Tools like Claude Code, for example (which is not accessible, BTW), have a lot of architecture to guide the large language model on what to do. This works, but it still depends a lot on the human using the tool. You will not get a good software project from zero-shot prompting, and I don't think this will ever be possible due to the nature of LLMs.

You have to be as explicit as possible with most of them, which is already hard for most humans to begin with. A large part of programming that doesn't get stated is that you often have to translate what the user actually wants into code, but what the user actually wants is not immediately clear by the language that they're using. That's where the human capacity has to come in.

The only frontier where they seem to actually be improving is the mechanics of writing code, which is admirable but won't replace anybody unless they're doing easy tasks and maintenance. The reason people struggle with coding is that they get buried in the mechanics of the thing instead of attempting to understand it from first principles before any coding is actually done. But if all you can do is measure yourself by the number of errors in your code, which is natural if you're starting, you'll get very frustrated that way. The brain can't keep track of context right away; rather, it has to adapt over time. LLMs tend to context drift, which is a hard problem to solve. That's why they'll lose track of whatever they're doing unless they are grounded by tricks, either by explicitly reminding them over and over or by external memory, which is not always going to work. So really we're trying to hack around their limitations. I like treating them as fancy simulations to bounce ideas off of.

Hope this helps.

By mr grieves on Tuesday, February 10, 2026 - 12:58

Sometime last year, Atlassian broke the UI for a feature I depend on for my work in Bitbucket, so it was virtually impossible to use with VoiceOver. I have been struggling with it for about 8 months or maybe more and it still doesn't work.

A few weeks ago, I was listening to the developer of Blind RSS on Double Tap talking about Vibe Coding, and it occurred to me that I now have access to a ChatGPT Business Plan.

So I installed "codex", which is the ChatGPT command line tool. Connecting to my account was a bit confusing as VoiceOver struggled a little to tell me what my options were, but I got there.

At this point I should tell you I write Python for a living. However, I have never written Swift (the language you use for iOS or the Mac), and I have never developed a macOS UI.

I told Codex what I wanted - a native macOS app, connecting to Bitbucket Cloud, that could help me load up a pull request and view comments. Over the course of 3 days and maybe about 8 hours in total, I had something working that was genuinely useful. I have been refining it a lot since then and adding a load of features, although I still can't do everything I need yet.

But without touching a line of code I have something working and 100% tailored to my use case.

In this scenario all I needed beforehand was Xcode (free download for macOS/iOS dev), Node.js (free install) and codex (free download/install). I believe I also needed a paid subscription. There is now a macOS app for codex which seems accessible on a first play, though whether it stays that way is anyone's guess.

There is absolutely no way I would have developed this on my own. Much as I would like to have the time and energy to learn Swift and figure all this out myself, if I had gone down that path I would never have had anything working and the idea would have just faded from view over time.

I did get a little too ambitious to start with. I think keeping it small and iterating probably works better than trying to describe everything in one go. The first version of the app, for example, just had hardcoded values everywhere. I was particularly impressed with the way it created a diff view so I could see what has changed in a file. I described one way to do it which I think it largely ignored and what I got was actually pretty fantastic. Simple but understandable. The best thing I have used natively on a Mac with VoiceOver even if it is lacking in features right now.

There are things it struggles with. For example, I wanted it to build a tree for the files but it seemed to struggle. Maybe it was my prompts. Sometimes it has failed to do something - e.g. I wanted to use APIs to get some extra details for things and it just couldn't do it - it tried and then rewrote and rewrote again and again but every time it failed.

The other thing - I keep telling it to make sure it works with VoiceOver. It generates an agent.md file, which is a markdown file with instructions about how it should work, so I added the requirement there. I can also point out bugs where VO navigation isn't working and it will usually fix them, even if it can take a number of goes.

One of the best things I asked it to do was add an HTTP log so I could see what was going on. That helped me notice some crazy things - e.g. it was going through all pages of an API, so calling it loads of times, and then I wasn't even using that data. So I was able to redirect it.

I've started to try to pay more attention to what it is doing. It is sometimes adamant that something that doesn't work is correct, and then it will bolt on a work-around for the instances where it fails, as opposed to just using the right thing. I need to be careful about relying on it too much without paying any attention at all.

I think there is a definite danger to developing an app without actually understanding anything that is going on. The first time it broke the build and I had to figure out how to find the error to paste into it was unnerving. But I have no idea what the code is doing. I have no idea what the UI looks like or how many hacks have gone in to get things to work. Honestly, I don't care because the app is already incredibly useful and will hopefully continue to be.

It would be a little different if I decided to release this into the wild. For example, I am pretty sure the keys I used for the API are not in the code but I haven't checked. And I can't say for sure it's not doing anything it shouldn't.
Obviously it would be better if Atlassian didn't just break everything all the time and then not fix it. I shouldn't have to do this. But I am really grateful for the option.

In terms of a process, I find it both enjoyable and frustrating. It's amazing to say "can you just add a list of such and such containing this..." and then a minute later there it is. That is pretty mind blowing. On the other hand, you do need some patience when you endlessly iterate over a problem and it continues to be incapable of fixing it. I have wasted a lot of time trying and failing to have it do something that feels trivial.

Also sometimes you wonder if it really knows what it is doing. It does occasionally feel quite trial and error.

Oh, before I forget, one essential thing you should be familiar with is git, the version control system. Every time codex manages to do something and it seems to work, I tell it to commit the changes. You don't have to know the syntax or do it yourself, just be aware that you should do this a lot. The reason is that if it makes a total mess of things you can always say "revert all changes!" and get back to where you were.

By mr grieves on Tuesday, February 10, 2026 - 13:13

OK the last comment was a bit long, but I thought I would follow-up with how I am using this in my professional life.

Firstly, I think it is a bad idea to write code you do not understand on a professional basis. Let's face it, over the years abstractions have piled up, so many developers don't understand absolutely everything anyway. E.g. if you are writing a Mac app, you might drag controls onto a design surface without really knowing how they are getting rendered, or you will probably use a library to access a database. But I think it is a big step from there to understanding nothing about your own work.

We have set up codex as part of code review. I am a little uncomfortable sending all our software up to ChatGPT to look at - it doesn't feel sensible, but it's not my call. The results, however, are actually pretty amazing. It can really understand a code base, and even look at the ticket to try to understand why a change is being made. It often comes up with incredibly insightful comments. It's not always right, but it is always worth considering what it says. Compare that to the human reviews I get, which are almost always pretty useless other than pointing out some cosmetic errors.

I will use codex to vibe code a proof of concept, particularly if I am not given the time to do it properly myself. I have only done this once and my intention is to rewrite or refactor so I can make sense of it all as I don't really understand what it is doing yet. Which is OK for a quick POC but not good long-term.

And you can always use it for little bits. For example, I asked it to add a few things to an existing template when I didn't know the syntax. I had a look at what it had done afterwards, and it just saves time compared to going to Google or normal ChatGPT.

I don't really want to use vibe coding for anything I have to support because it is essential I know what is going on. I don't think I would feel comfortable going all-in with vibe coding professionally, but I think there are related tools that are useful and it certainly does have its uses.

I do have a lot of concerns about vibe coding and AI in general. Much as I like some of the tools it gives me, I think the bigger picture is pretty terrifying and I hate to think of the implications it has for society and the environment. It worries me a lot that things are becoming effortless to the point where no one is going to really value anything any more. I think you get more value when there is effort and graft, blood, sweat and tears, and less when you just say "I want it now!" and there it is, and now it's boring and on to the next thing.

But as a blind man I appreciate all the help I can get.

By Brian on Tuesday, February 10, 2026 - 13:32

I am of two minds when it comes to vibe coding. On one hand, I absolutely agree with @JoĂŁo Santos that fully relying on AI to write code and build programs of any significance is just going to lead to disaster. As someone who graduated with a degree in computer science, I have an understanding of what it takes to learn code, to learn syntax, and to understand the differences between variables and operators, 'if statements' and boolean states, arguments and definitions, just to name a few things.

On the other hand, I don't think there's necessarily anything wrong with using AI to double-check a line or two of code that you are having trouble with. Say, for example, there is a line, or even a block of code, that is giving you grief - perhaps it is producing an error and you can't quite figure out how to go about correcting it. In these instances, I don't see how utilizing an AI is necessarily a bad thing.
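
To make that concrete, here is a made-up Python example of the kind of snippet you might paste in and ask about. The bug (a shared mutable default argument) is a classic, and the function names are just for illustration:

```python
def add_tag(tag, tags=[]):
    # Bug: the default list is created once and shared between every call.
    tags.append(tag)
    return tags

print(add_tag("a"))  # ['a']
print(add_tag("b"))  # ['a', 'b']  <- the first call's data leaks into the second

def add_tag_fixed(tag, tags=None):
    # Fix: create a fresh list on each call.
    if tags is None:
        tags = []
    tags.append(tag)
    return tags

print(add_tag_fixed("a"))  # ['a']
print(add_tag_fixed("b"))  # ['b']
```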

By Doll Eye on Tuesday, February 10, 2026 - 13:52

Pre-existing coding knowledge does seem to be the key here. Syntactically, LLMs can be a useful tool; I use them all the time for command line stuff which I can't recall or just don't know. It does seem the command line, like Terminal, could do with a natural language option or, at least, a corrective one.

Context drift is a perfect way of putting it. I've tried to use AI to help refine my writing in the past, but after a few more iterations it starts contradicting itself and I get that feeling I'm running in circles around something that is dissolving.

I guess, we could say, create an action that takes file x, converts it to M4A, then strips the metadata, searches for it at this URL, and returns and replaces the metadata... Then it's still on us to go in, look at the steps it's taken and actually understand what it has done.
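
For argument's sake, what it hands back might look roughly like this. This is only a sketch of the idea: the conversion tool, the tagging library and the lookup URL are placeholder guesses, which is exactly the stuff we'd then have to go and understand:

```python
# Rough sketch of the "convert, strip, look up, re-tag" action described above.
# ffmpeg and mutagen are assumed tool choices; the lookup URL is a placeholder.
import subprocess
import requests
from mutagen.mp4 import MP4

def convert_to_m4a(src, dst):
    # ffmpeg re-encodes the audio; -map_metadata -1 drops the existing tags.
    subprocess.run(
        ["ffmpeg", "-i", src, "-map_metadata", "-1", "-c:a", "aac", dst],
        check=True,
    )

def lookup_metadata(title):
    # Placeholder endpoint; a real script would query whatever catalogue you use.
    return requests.get("https://example.com/lookup", params={"q": title}).json()

def retag(path, info):
    # Write the fetched details back into the M4A tags.
    audio = MP4(path)
    audio["\xa9nam"] = [info["title"]]   # track title
    audio["\xa9ART"] = [info["author"]]  # artist/author
    audio.save()
```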

Using the writing metaphor, as in fiction, it's the same as an author saying, write a novel about a honey badger called Norman who dreams of being a member of the USA bobsleigh team, leaving it to do its thing, and being surprised when it comes out with a pile of shit. Readable, but functionally and emotionally lacking.

An LLM is autocorrect on steroids, as far as I understand it. This dissolution of intent will always broaden as a function of iterations, in the same way you could keep hitting the predictive buttons above the iOS keyboard until you've got a string of utter nonsense at the macro level.

By Ashley on Tuesday, February 10, 2026 - 14:54

You need pre-existing knowledge. You can vibe an app and release it, but what will you do if it breaks? How do you know it's secure? And what about all these self-contained platforms that people are using to vibe-code apps? What happens when that company goes down, or you want to move your hosted app somewhere else?

Where vibecoding can be useful is in situations where you've got a bug in a large codebase and you can't figure it out. Or when you want to automate a load of boring, repetitive tasks. Or when you want a very simple script to do something for yourself.

Case in point. I was working on a mechanical project yesterday, building a turntable part. I wanted to evaluate the fundamental resonance frequency of an aluminium component, and find an optimal value where the resonance frequency was above a certain point, the moment of inertia was within a certain range, the overall component mass was low, but there was still x amount of material left. Because I can't use FEM tools in a CAD program, using Python was the best way to do it. The script had to digitally model the exact part, run through as many combinations of numbers as were valid (nearly 7,000 combinations) and output the results. I could probably have written the Python script myself in half an hour or so, once I'd figured out the math, looked up the syntax for any unfamiliar functions, etc. Gemini wrote it in about 30 seconds, I checked it for errors, and had my part optimised in less than 10 minutes. That's where vibe coding can be useful. But not for writing apps, and not if you don't check its work before you run the code.
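
For anyone curious, the shape of the script was roughly like the sketch below. To be clear, the formulas here are simplified stand-ins rather than the real model of my part; it just shows what a brute-force sweep with constraints looks like:

```python
import itertools

DENSITY = 2700.0  # aluminium, kg/m^3

def evaluate(outer_d, inner_d, thickness):
    # Toy stand-ins for the real mass / moment of inertia / resonance maths.
    mass = DENSITY * thickness * 3.1416 * (outer_d**2 - inner_d**2) / 4
    inertia = mass * (outer_d**2 + inner_d**2) / 8
    resonance = 5000.0 * thickness / outer_d
    return mass, inertia, resonance

best = None
for outer_d, inner_d, thickness in itertools.product(
        [x / 1000 for x in range(200, 320, 5)],   # outer diameter, m
        [x / 1000 for x in range(50, 150, 5)],    # inner diameter, m
        [x / 1000 for x in range(5, 30)]):        # thickness, m
    if inner_d >= outer_d:
        continue
    mass, inertia, resonance = evaluate(outer_d, inner_d, thickness)
    # Keep only combinations that meet the constraints, then prefer the lightest.
    if resonance > 300 and inertia < 0.5 and (best is None or mass < best[0]):
        best = (mass, outer_d, inner_d, thickness, resonance)

print(best)
```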

By Doll Eye on Tuesday, February 10, 2026 - 16:13

I think the main point all you experienced coders are making is don't believe the hype, don't be over-ambitious, don't assume you can have an idea, bang it on the App Store, make millions, and retire to an island. Rather, get some coding behind you, keep your sights low, understand it's only a shortcut for iterative processes and, on the whole, take responsibility for your code...

Is that right?

I suppose the next question is, where should a fool like me start with learning about coding? What are the fundamentals? My memory of objects, arrays, classes, etc. is really blurred. What's a good and accessible way of trying a few fun little projects with a satisfying output? Even better, one where, once I understand how things fit together, I could include some vibe coding to speed up development rather than the lazy wand-waving I was hoping for?

Is Xcode good? It did seem a little overwhelming when I was having a poke about. Are there any good VoiceOver/coding instructions you can point me to?

By Brian on Tuesday, February 10, 2026 - 16:52

...take responsibility for your code...

There is no truer statement. Oh, and feel free to be as ambitious as you like, just do it on your own merits. With regards to Xcode, if you are coding on a Mac, you have limited choices here, at least in my experience. You can use Xcode, you could use Vim or nano from the command line via the Mac Terminal, or you can write your code manually in a simple text editor, compile it in Xcode, and go from there. As for where to start, I would recommend something like Python, as Python is a lot more straightforward than some of the other, more robust programming languages out there. Sadly, I do not know if the Python IDE is accessible these days, but you can run Python through the command line and test out your code that way without using Xcode, if Xcode is too overwhelming for you.
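
To give you an idea of the scale of a first project, here is a tiny made-up example; save it as something like guess.py and run it from Terminal with python3:

```python
# guess.py - a tiny first project: guess the number.
# Run from Terminal with: python3 guess.py
import random

secret = random.randint(1, 20)
while True:
    guess = int(input("Guess a number from 1 to 20: "))
    if guess == secret:
        print("Got it!")
        break
    print("Higher" if guess < secret else "Lower")
```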

HTH.

Edited because I forgot to switch to markdown

By Zachary on Tuesday, February 10, 2026 - 17:09

I agree with a lot of what has been said so far. Unless we get a massive shift in how these AI models are created, there will likely never be a situation where you can type a one-shot prompt and get a completely working app with zero issues or vulnerabilities. With that being said, these tools are definitely useful and you shouldn't dismiss them. I don't think it's a good idea to do so-called "vibe coding" without understanding anything about the underlying concepts behind programming, the language you're writing in, etc., but it can speed you up for specific tasks.

For example, let's say you wanted to add a feature to an app you're working on. If you already have a decent idea of what you want and how it should be implemented, it might be faster and easier to ask one of these coding agents to try to code it for you, of course monitoring as it goes and reviewing the output. For some people, myself included (and I'm pretty new to CS), it's easier if some code has already been written that you can just review. Sometimes the hardest thing is getting started, and these tools can help out with that problem.

Even companies like Apple are starting to embrace this technology. I watched a developer session a few days ago where they were showing off LLMs directly inside Xcode being used to build an iOS app. They weren't prompting it for the entire thing; they were going step by step and watching the app take shape in real time.

By Doll Eye on Tuesday, February 10, 2026 - 17:11

Am I correct in thinking Python is sequential?

And could something like TextMate be a good editor to use? I've mucked about with it in the past.

And, and I'm sorry to keep asking questions like a small child... Is Python a good gateway language to lead into things like Swift?

By mr grieves on Tuesday, February 10, 2026 - 18:06

I would have thought the starting language will depend on what you want to do with it. If all you are interested in is building a native app, then I would likely look to Xcode and Swift. As I said before, I have only dipped my toe into Swift, but I would imagine it is what most people use for apps and there are plenty of blind people using it.

I mostly like Python. It isn't quite as punctuation-heavy as some languages, which works better for me with VoiceOver. However, not all of its standard methods are easy to hear as it mashes words together, and you need to enable indentation sounds of some sort.

If you are happy writing little command-line tools for the terminal then Python will do pretty well. I've never tried writing a UI with it and honestly I'm a bit suspicious of doing so. It can be used for web APIs or server-side applications if that's your thing. Or if you were writing an Alexa skill or some such it would likely be good.

As for IDEs, PyCharm is excellent and works really nicely with VoiceOver. There are a few things that aren't accessible (like the database table viewer, which I never use anyway), and the built-in terminal wasn't accessible last time I checked. But all the things I use on a daily basis work as well as anything on the Mac.

Personally, and no doubt this will attract some scorn, I think coding with a pure text editor is a fool's game. There are some usable text editors on the Mac, but I can't say I love any of them. BBEdit is maybe the best of the bunch. Smultron isn't bad. I use TextMate for a few things but I've never tried actually writing anything in it. I think it's more a fault with VoiceOver than with any of the actual editors, but none quite work the right way for me.

Anyway, just my thoughts. But I will admit I am totally stuck in my ways.

By mr grieves on Tuesday, February 10, 2026 - 18:10

Sorry, I glossed over half the reply. Firstly, never feel bad about asking questions - the ones you are asking are great questions anyway, but even stupid ones are fine. I've asked enough myself. If anyone is down on you for having interest and enthusiasm and wanting to learn, then that's their problem.

What do you mean by sequential? You mean you write the code line by line from top to bottom?

You certainly can write a Python script like that. You will want to break it into functions, and then other modules and other things, at some point. For a small script you can just batch a load of lines together, but for anything complicated you will likely need more structure.
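
A made-up example of what I mean by adding structure; both halves do the same thing:

```python
# Flat version: fine for a five-line script, runs top to bottom.
names = ["ada", "grace", "alan"]
for name in names:
    print(name.title())

# Structured version: the same thing broken into a function you can reuse and test.
def print_titled(names):
    for name in names:
        print(name.title())

if __name__ == "__main__":
    print_titled(["ada", "grace", "alan"])
```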

Correct me if I'm not really answering the question as I might have misunderstood.

By Doll Eye on Tuesday, February 10, 2026 - 18:48

I think I'm trying to ask, do the functions run line by line, as if they were being typed into a command line? I know some languages can be less linear and do various calls and parallel functions... At least, that was how I understood it.

Having said all that, I do think Swift might be the way to go. It does seem to be the go-to for the sort of thing I'd want to build. It's just finding somewhere to learn it. I'm aware of Playgrounds, but I think accessibility was a bit of an afterthought there; we could learn from the examples, but there might have been better examples to learn from which weren't quite so visual.

It's kinda the story of the Mac in miniature. The dream, an OS built around how we use computers... Sigh.

By Jim Neitzel on Tuesday, February 10, 2026 - 18:52

For context, I'm writing this as someone who has an understanding of software architecture but has not written code in over 20 years. I have been using AI several hours a day for about two years now.

As of today, my opinion is that those who heavily caveat the capabilities of using a large language model for coding are either not using the right tools or do not want to be honest about the capabilities because their paycheck depends on it.

I use Claude Code in the terminal with Opus 4.6. I primarily develop in Python and JavaScript because those are the languages that the model has been trained on most extensively. Front-end development is possible, but the output is generic and sighted people will hate it. But if you are developing a backend for a project using Node, it's extremely powerful. I regularly complete projects, including API integrations, with no errors or bugs.

The criticism that you will not be able to develop software using a one-shot prompt is true. It will always be true, because the model doesn't know the specifications with which your brain envisions the output. I use a plug-in for Claude Code called Get Shit Done that you can find on GitHub. Using this plug-in, I spend about 60% of my time building and refining project specification documentation for the model to use when developing the project. This can't be skipped.

If you have not evaluated the state of vibe coding in the last eight weeks, then you don't know the current capabilities; the models are evolving that fast.

The actual code writing has quickly become the cheapest input of software development. Product design and product management is, at least right now, the most valuable skill set outside of marketing.

I hope this is helpful and I wish you luck. I don't check these forums often, but if anything in this post is beyond your technical understanding, I'd recommend pasting it into ChatGPT and asking whatever questions you have there. It is surely a better partner in this exploration than I am!

By Brian on Tuesday, February 10, 2026 - 18:59

Yes, Python reads the way you would read, or the way a screen reader reads - that is to say, left to right and top to bottom. Of course, it gets tricky when you do things like import libraries for various things, or define functions for a particular feature or action of your software. The code will still read left to right and top to bottom, but, for example, let's say you are calling a function on line 283: if you have not properly imported your libraries and/or defined said function ahead of time, you'll break your code.
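
A tiny made-up example of what I mean:

```python
# Broken: the call would run before Python has seen the definition.
# greet("world")        # would raise NameError if uncommented here

def greet(name):
    print(f"Hello, {name}")

greet("world")          # works: the definition has already been executed
```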

I hope that makes sense.

By JoĂŁo Santos on Tuesday, February 10, 2026 - 19:40

As of today, my opinion is that those who heavily caveat the capabilities of using a large language model for coding are either not using the right tools or do not want to be honest about the capabilities because their paycheck depends on it.

As Linus Torvalds once said in a different context, "Talk is cheap. Show me the code!" AI junkies are all deceiving each other with these claims about how great the technology is, but as I said earlier, people are yet to build anything of actual value with it, and the service is being sold at a loss. Therefore, if you truly believe the above quote, point me at that amazing code you got an LLM to write so that I can roast it myself. And yes, I already know what kind of answer I'll get, which is either that the code is a trade secret (the convenient excuse that people give most often), or you'll just show me a turd that you think is really good when it's actually a huge spam of extremely poorly structured software that looks pretty with lots of useless comments and creative identifier names.

I use Claude Code in the terminal with Opus 4.6. I primarily develop in Python and JavaScript because those are the languages that the model has been trained on most extensively. Front-end development is possible, but the output is generic and sighted people will hate it. But if you are developing a backend for a project using Node, it's extremely powerful. I regularly complete projects, including API integrations, with no errors or bugs.

Who's the experienced engineer actually reviewing the code to ensure that it doesn't have bugs? Reviewing code is significantly harder than writing it, because the reviewer is not in the same loop as the developer, meaning that they are a lot less likely to understand the developer's train of thought and are thus less likely to be aware of potential flaws in the reasoning. From my observations, current large language models are indeed good at getting the happy path right, but any conditions that steer just a little bit away from it are problematic.

The criticism that you will not be able to develop software using a one-shot prompt is true. It will always be true, because the model doesn't know the specifications with which your brain envisions the output. I use a plug-in for Claude Code called Get Shit Done that you can find on GitHub. Using this plug-in, I spend about 60% of my time building and refining project specification documentation for the model to use when developing the project. This can't be skipped.

Even with a proper specification you won't get anywhere. Just ask those models to produce a C compiler right out of any of the ISO standards, or write a DNS server out of its original RFC, and you'll see that the only thing they'll do great is spend your subscription money.

If you have not evaluated the state of vibe coding in the last eight weeks, then you don't know the current capabilities; the models are evolving that fast.

It wouldn't be very hard to prove the value of this technology if it actually had any, and if producing proper code does require special skills, that alone is already an admission of failure. The idea that is being sold is that the technology is revolutionizing software engineering by making development more cognitively accessible and fast, yet the amazing software built by large language models, which should be everywhere at this point, is actually nowhere to be found...

The actual code writing has quickly become the cheapest input of software development. Product design and product management is, at least right now, the most valuable skill set outside of marketing.

I'm talking about actual value, not hype value, and by that I mean writing software that solves actual problems, not software that sells because it integrates or was written by AI. These are completely distinct metrics, and the people who measure value based on hype are the ones trying to justify their paychecks.

By Brian on Tuesday, February 10, 2026 - 22:00

The TLDR version of @JoĂŁo Santos' post above:

Do not VibeCode.
Thank you, that is all. 😇

By TheBlindGuy07 on Tuesday, February 10, 2026 - 23:19

Among all the real devs here I'd rank myself at the bottom of the list; I'm just a student.
But even I have enough experience with it to know that most of the time I'd rather take an hour to code it myself... There's a real meme around whether to take 20 minutes to code it yourself or waste 3 days debugging AI output. And it's mostly true.
Mr Grieves, I'm happy your result fixed the Atlassian mess, but I think there's more luck here than usual; as pointed out, web and front-end code especially is what LLMs know best.
As for the language debate, it will never end, but just know that overall, as much as I hate it, there is no wrong choice, just some better than others. IMO go with a statically typed language; it will force you to learn things you'll have to learn anyway. Don't choose JavaScript; I started with PHP (yes). HTML / CSS are not languages. Something like PureBasic (on Windows) would be a bad choice.
Otherwise I agree 99.9% with what @JoĂŁo Santos said. The AI vibe coding hype especially is dangerous at best: way too many resources spent, at a terrible environmental and, let's face it, human cost, for terribly suboptimal quality at best. I'm talking about the psychological suffering of those badly underpaid third-world "workers" who had to filter the worst of humanity on the internet out of the training data. You get the idea? Unrelated to this, but I'm pretty sure that somebody at OpenAI, according to theories, was more or less killed because he was a whistle-blower there for something. Just look at the situation with openclaw (or whatever its current name is, if it's still around); it's easier to laugh, but it's genuinely terrifying what careless people (and those who trust those careless people) can produce.
Follow ThePrimeTime and other real dev content creators online.

By Brian on Wednesday, February 11, 2026 - 00:10

Not traditionally programming languages, but I mean they are still languages. Markup and stylesheet to be more precise. The latter is more or less a scripting language, similar to something like Applescript, or JavaScript.
Just saying... 😝

By TheBlindGuy07 on Wednesday, February 11, 2026 - 00:24

Good point.

By Igna Triay on Wednesday, February 11, 2026 - 01:08

Depending on the code, it could definitely teach you the basics, maybe even some slightly more advanced concepts. But yeah, others are right: if you're going to vibe code, have an understanding of what the code does, otherwise eventually you'll hit a hole you won't be able to dig yourself out of.
I disagree with the "nothing of value has been produced with it, the code will be a mess" takes, though. People in this very topic have said they have produced things of value. Of course, value is relative, and maybe it's zero value to you, which, fair enough, but if it wasn't producing things of value, major companies like Apple, Microsoft, etc. wouldn't be both using it and deploying it. Saying it has produced nothing of value... not true. E.g. autopilots in aircraft? It's not LLMs, it's not neural AI, but it's still a form of AI, algorithmic to be precise, but it's AI. Thousands of pilots use this daily, from piston aircraft to fighters. Specifically on LLMs, though, they can help with teaching concepts, i.e. coding basics, help with documentation, etc., to name a few examples. It has, and does, provide value. Saying AI hasn't produced anything of value is untrue; it's like saying chess engines don't produce anything of value because they'll beat the crap out of you 90% of the time... True, they will, depending on level, but they're valuable for learning, picking up new strategies, etc. The "it's produced nothing of value" line is, imo, an absolute take which, tbh, is bullshit, but to each their own. As far as impacting paychecks, we're starting to see this to a point: many people are relying on AI, which puts coders etc. in a tough spot and does in fact impact one's paycheck. At some point, given how rapid advancements are, it's highly likely software engineers, coders and devs will play a far less significant role than they do now; even now we're already seeing this diminishing as is.
However, as long as one knows what one is doing, understands the limitations of both one's self and the AI, and doesn't bite off more than one can chew, AI coding does have its uses, at least to a point.
Frankly, I thought about it and started to learn coding some years back before realizing it's not for me. I'd be good at it if I pushed myself; however, it's not my passion, and it's not for everyone. Respect for those who do it and enjoy it, though.
The thing about AI is, it's a tool. Nothing more, nothing less, but one has to know how to use it, i.e. what to ask and how to ask, to get the desired results. Of course there are people who are, yeah, overselling the capabilities of AI, i.e. "oh, it'll code everything"; no, it won't. That's not knowing how to use the tools at one's disposal. At the end of the day, it's just a tool. The results one gets will depend on how good one is at using said tool. One could get good results, good code, if it's used properly, or absolute junk if not, but that's it in essence.

By JoĂŁo Santos on Wednesday, February 11, 2026 - 02:28

People in this very topic have said they have produced things of value

Yeah, they are just nowhere to be found...

if it wasn’t producing things of value, major companies like Apple, Microsoft, etc. wouldn’t be both using it

Microsoft has a vested interest in making it seem worth paying for, because they've been making increasingly bigger investments in the technology and are desperate to make it feel useful. One of their strategies, which is backfiring big time, is to push it down everyone's throats by including Copilot in everything, rebranding their core products after Copilot, and even employing deceptive business practices that make people think that they have to pay for the service. Apple is just responding to user demand for agent integration; there isn't really much else to read into that. None of these examples is actually adding real value to software engineering, and if you think otherwise, well, as I said above, just show me some quality vibe coded projects that solve real problems!

Autopilots in aircraft? It's not LLMs, it's not neural AI, but it's still a form of AI

And who's denying that AI has value? We're talking about vibe coding here, meaning people thinking that they can generate software with value from natural language prompts. AI does have value, and I did acknowledge that value in my first reply to this thread when I mentioned that the same resources that are being used to serve chat bots could be put to better use for actual scientific research. Hell I'm starting a whole business around AI myself, and have previously worked for an AI company, so I'd be damned if I thought that the technology doesn't have value!

At some point, given how rapid advancements are, it's highly likely software engineers, coders and devs will play a far less significant role than they do now

Wake me up when that happens then, because I've been hearing this for years, and while things are definitely changing, the trend now is to hire software engineers to completely rewrite vibe coded ideas that flopped hard, not because the ideas themselves aren't good, but because the code is, well, AI slop... Furthermore, model advancement is stagnating a lot, with huge diminishing returns and possibly even regressions at this point, so the strategy of just throwing more hardware at the problem is no longer working. One of the problems is that the increasing prevalence of AI slop online is contaminating the data used to train new models, resulting in an unexplained phenomenon in which AI slop in the training corpus correlates with a quadratic increase in hallucination rates. Also, thanks to this mass delusion, a whole generation of software engineers is being deprived of proper training.

The thing about AI is, it’s a tool

It's actually just a gimmick. It may be scientifically impressive, but as far as software engineering is concerned, it doesn't really solve any problems any better than the solutions that already existed, and in fact doesn't even come close. Using chat bots for coding is a solution in search of a problem, because most of big tech over-invested in it, and the whole US economy currently depends on convincing everyone that it's actually useful.

By TheBlindGuy07 on Wednesday, February 11, 2026 - 03:43

And last time I checked, it's weird to imagine that stats can solve complex problems by inventing truly new ways to think about a given problem. Remember that it's trained on open source code at the expense of the maintenance and infrastructure costs borne by often unpaid engineers, and often on not-so-open-source or dubiously obtained code.
Yes, microslop is again doing classic Microsoft things and it will end terribly for everyone.
LLMs are the specific tech; AI is the general word that can mean nothing and yet also cover things genuinely worth having.
As far as I remember, before 2022 the buzzword was machine learning, which is... well, AI, and it has produced some of the best tech we use; OCR and object recognition are just two among many.
Google search ranking, complex algorithms... not all of it, but some of it is the useful AI people were happy to ignore before GPT. Now people are so sick of bad AI being forced everywhere that some would throw it all in one box, the good and the bad.
As for vibe coding, just look at openclaw; this... thing will be among many case studies after the bubble is no more.
Just don't overestimate what an LLM can and should do, and keep in mind what it definitely shouldn't be doing.
By definition, an LLM just takes an input, does massive stats, and produces the most probable output given its training data. Anything that looks new statistically can't really be new anyway; computer randomness is not random, and I don't think that at every step of the inference there's a massive natural entropy generator like those lava lamps Cloudflare uses... I could be saying nonsense so feel free to correct me, but you get the idea. LLMs remix but don't reason from first principles.
And please, it's been "the end of software devs" since the first compiler was invented, so... We'll probably have more demand than supply if American big tech lays everyone off to vibe code with LLMs and then starts crying when accidents happen; fixing the AI mess will create almost decades of jobs in cyber security alone, plus legacy code patching and review. So zero worry on my end.

By Igna Triay on Wednesday, February 11, 2026 - 04:58

Yeah, they are just nowhere to be found...

Oh no? Go out there and look, then. Asking in a blind community? Yeah, no shit; you likely won't find much here. But an example? Some of the mods being made for games to make them accessible? Yeah, partially coded with AI. And before you jump in with "that's not real value," the entire point is whether AI contributes to solutions, and it does. Per articles I've read:
Refactoring, generating tests, documentation, architectural suggestions... all real uses. I know, because people in THIS thread have literally told you they've done it.

For Refactoring
Prompt your AI to improve existing code. (“Refactor this function for readability, adhering to the Single Responsibility Principle. Add comments explaining the ‘why’ for any non-obvious logic.”)

For Generating Tests
Ask your AI partner to help improve quality. (“Write a comprehensive set of unit tests for this function using Jest, including happy paths, edge cases like null inputs and potential error states.”) A sketch of what that kind of output can look like follows this list.

For Brainstorming Solutions
Use AI to explore architectural possibilities. (“Propose three different caching strategies for user profile data using Redis, explaining the trade-offs for each in terms of performance, cost and complexity.”)

For Documentation
Eliminate the tedium. (“Generate concise documentation for this function in Markdown, explaining each parameter and what it returns.”)
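
To ground the "Generating Tests" prompt above, here's roughly the kind of output you'd be hoping for. I've swapped Jest for pytest to keep it in Python, and the function under test is invented for the sake of the example:

```python
# Hypothetical function under test plus the sort of unit tests the prompt asks for.
import pytest

def normalise_username(raw):
    if raw is None:
        raise ValueError("username is required")
    return raw.strip().lower()

def test_happy_path():
    assert normalise_username("  Alice ") == "alice"

def test_already_clean_input_is_unchanged():
    assert normalise_username("bob") == "bob"

def test_none_raises():
    # Edge case: missing input should raise, not silently return something.
    with pytest.raises(ValueError):
        normalise_username(None)
```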

And beyond coding? Let’s not pretend practical value doesn’t exist. Healthcare uses LLMs for diagnostic support, drug discovery, personalized plans. Finance uses them for risk prediction, fraud detection, sentiment analysis. Marketing uses them for personalization, content creation, customer service automation. Manufacturing uses them for predictive maintenance, supply chain analysis, and knowledge capture. Legal uses them for research, contract analysis, and document drafting. These aren’t hypothetical use cases — this is already happening at scale.

And if you want personal examples? Copilot for meetings, Circleback, ReedAI. Real human beings — myself included — use them every single day for notetaking, summarization, and meeting organization. They save time, they improve workflow, and they’re absolutely useful. So the “no value exists” claim? No. Just no.

Microsoft has a vested interest in making it seem worth paying for, because they've been making increasingly bigger investments...

Partially, you're correct. But let's not act like "Microsoft forces things on users" magically began with AI. OneDrive? Forced. Local accounts? Phased out. Telemetry? Forced. Their history of ramming things down people's throats goes way back before Copilot existed... nothing new, in other words. Apple integrating it? Because users demanded it, exactly as they do with everything else. This isn't a smoking gun; it's standard tech market pressure.

And who’s denying that AI has value?

You did. When you said, and I quote:

“I’m talking about actual value, not hype value, and by that I mean writing software that solves actual problems, not software that sells because it integrates or was written by AI. These are completely distinct metrics, and the people who measure value based on hype are the ones trying to justify their paychecks.”

Except... look at the industries above. Look at the applications. Look at the actual results. That's not hype; that's verifiable usage. The fact that people are USING these systems for real work means it DOES provide actual value.

It’s actually just a gimmick.

Except you literally said you're starting an AI company. So... if it's a gimmick, why use it at all? Why build a business around it? Why rely on it? Gimmicks don't make good foundations for businesses. They fade. You don't build around a phase.

Wake me up when that happens...

Too late. It's already happening. Amazon, Google, Meta, multiple startups: they've already admitted 50–70% of internal code is now AI-assisted. These aren't rumors; these are public statements. Even companies BUILDING the models rely on AI to write and maintain their codebases. You're arguing against a trend that's already real. And if you have missed what people have been saying here? Nobody has been saying "oh yeah, just hand everything to the artificial intelligence and that'll do it for you." No. People here, myself included, are saying it works, but you have to have the fundamentals down first. This is the part that you don't seem to get. It's not just "I will write the prompt and boom, I get an app." That's what you're implying. If the person doesn't check the output? Doesn't guide the tool? Yeah, it'll be trash; but the same is true if someone writes code for 8 hours without checking, tries a test run, and only then gets a syntax error on line 20. Same principle.

And the “engineers being deprived of training” bit? That’s on them. If someone relies on AI without understanding fundamentals, that’s the person’s fault — not the tool’s. People write garbage code every day without AI too. This isn’t some new catastrophic phenomenon. Bad engineers produce bad code. That’s been true since the beginning of programming.

And the “AI slop contaminating the dataset” argument? And the “delusions”? That’s no different from what’s already happened for decades. YouTube slop, clickbait slop, misinformation slop — humans generated mountains of garbage long before AI existed. AI didn’t invent low-quality content. It just made the existing mess louder.

And by the way — if your entire stance hinges on “show me good AI-coded software,” fine — the same challenge applies in reverse. You say the code is always garbage? Show an example. Because everything from accessibility mods to internal tooling to research assistance to documentation automation contradicts you.
Speaking of examples: you'll have to dig through this to find the link to the source code, but this mod was partially made by AI, and yeah, before you ask, it's by someone who has a lot of experience coding. And assuming you read the full post before delving into the GitHub link in said post, part of it says: "As a historical thing, this is also one of the earliest mods written/maintained by AI. At this point that's not much of a note--you're all used to it, I'm sure. But we had it before almost everyone and it's been great."
So much for no examples, no? Oops.
https://forum.audiogames.net/topic/58549/announcing-factorio-access-20-support-and-mod-overhaul/
Want more?
https://www.reallygoodbusinessideas.com/p/vibe-coded-products

And again, that’s just scratching the surface — and people HERE have already given examples that directly undermine what you’re claiming.

I’ve said my piece. If you can’t accept the facts, that’s on you, not me.
Anyway. Moving on... What this boils down to is: don't use AI as "write a prompt and it'll all get done in one go." Check the output like you would anyone else's work: a team member's, a student's, what have you.

By Igna Triay on Wednesday, February 11, 2026 - 05:20

See, that's what I'm saying as well. AI is a tool. If, say, shit goes south? Meh, something new will come with time. If it succeeds? Benefit for everyone. Of course, like any piece of tech, hell, anything really, some will try to use it for nefarious purposes. Then again, that's a tale as old as time: knives used to cut food can also be used to injure or kill someone. One can use a computer for work and everyday tasks, or to create malware that infects a whole lot of devices. A car can be used for travel or, again, to hurt others. It's a tool, as said, and that's what people need to keep in mind with this. E.g. people using AI for companionship, relationships? That's one of those "don't make it into something it isn't" kind of things. But as long as one keeps in mind it's a tool and nothing else, you're fine.

By JoĂŁo Santos on Wednesday, February 11, 2026 - 07:26

Oh no? Go out there and look then.

How about you stop attempting to shift the burden of proof to me and start showing the evidence yourself? Because I can just as easily claim that I have looked online and found nothing, thus shifting the burden of proof back to you. Anything claimed without evidence can be refuted without evidence, so you're just shit posting.

Asking in a blind community? Yeah, no shit; you likely won’t find anything here

Why not? Because the blind are too stupid to code? No seriously, I really want to know what you mean here!

Some of the mods being made for games to make them accessible? Yeah, partially coded with AI.

Good, provide links to them for me to roast then! Extra points if they include documentation of the AI prompts used to generate the code!

the entire point is whether AI contributes to solutions, and it does. Per articles i've read

Apparently you don't even have first-hand knowledge! You're talking about something you read online and chose to believe for whatever reason, and if that wasn't already bad enough, you actually spread that information as if it was an absolute truth! I already called this behavior when I mentioned that AI junkies are collectively fooling each other into thinking that there's value in vibe coding, and not only did you ignore that, but you keep buying the bullshit, because if it's on the Internet then it must be true, right?

For Brainstorming Solutions

That's a proper use case for chat bots, and is also not vibe coding. I use chat bots to learn about stuff myself, what I don't use them for is write any kind of code.

And beyond coding?

Beyond coding is completely outside the scope of the thread, which is explicitly mentioned in its subject so don't tell me you missed it!

Except... look at the industries above. Look at the applications. Look at the actual results. That's not hype; that's verifiable usage. The fact that people are USING these systems for real work means it DOES provide actual value.

Except that none of the examples where AI is providing actual value are vibe coding, a small detail that you consider irrelevant but is actually the scope of the thread...

And if you want personal examples? Copilot for meetings, Circleback, ReedAI. Real human beings — myself included — use them every single day for notetaking, summarization, and meeting organization. They save time, they improve workflow, and they’re absolutely useful. So the “no value exists” claim? No. Just no.

Yeah, lots of vibe coding in those examples! You aren't moving the goal posts even a single bit! Excuse my sarcasm, but your inability to hold any kind of logical debate coupled with your overconfidence would have comedic value if it wasn't a sad classic textbook example of a bad case of Dunning Kruger effect!

Partially, you're correct. But let's not act like "Microsoft forces things on users" magically began with AI. OneDrive? Forced. Local accounts? Phased out. Telemetry? Forced. Their history of ramming things down people's throats goes way back before Copilot existed... nothing new, in other words. Apple integrating it? Because users demanded it, exactly as they do with everything else. This isn't a smoking gun; it's standard tech market pressure.

And exactly where did I even imply that Microsoft only does this with AI?

Except you literally said you're starting an AI company. So... if it's a gimmick, why use it at all? Why build a business around it? Why rely on it? Gimmicks don't make good foundations for businesses. They fade. You don't build around a phase.

I also explained that AI has value to me, and that my comments are all in the context of vibe coding, which, again, happens to be the subject of the thread that you are choosing to ignore for whatever reason! Rest assured that my technology has absolutely nothing to do with chat bots, and is entirely focused on solving real world problems, thus being totally coherent with everything I said regarding this subject.

Too late. It's already happening. Amazon, Google, Meta, multiple startups: they've already admitted 50–70% of internal code is now AI-assisted.

There's a huge difference between AI assisted and vibe coding, which I have explained earlier so go ahead and read about it, or just ask your favorite drunken robot to educate you on the meaning of the term. While vibe coding is a form of AI assisted development, it's far from being the only form of AI assisted development, so that means absolutely nothing. Using AI to review pull requests is also a form of AI assisted development that I actually advocate for precisely because it has absolutely nothing to do with vibe coding, and is one of the many ways in which AI can actually create value by empowering humans.

These aren’t rumors; these are public statements. Even companies BUILDING the models rely on AI to write and maintain their codebases.

Imagine if word came out that those companies don't even eat their own dog food... The question here is whether doing so is actually resulting in a net positive value for them, which is the inconvenient question that nobody is asking because everyone just wants to believe the bullshit.

You're arguing against a trend that's already real. And if you have missed what people have been saying here? Nobody has been saying "oh yeah, just hand everything to the artificial intelligence and that'll do it for you."

But that's exactly what the original poster is asking about, so if nobody else is talking about that, then you're all spamming the forum!

No. People here, myself included, are saying it works, but you have to have the fundamentals down first. This is the part that you don't seem to get. It's not just "I will write the prompt and boom, I get an app." That's what you're implying.

Are you sure that I'm the one who doesn't get it? As I said earlier, go ahead and ask your favorite drunken robot to define vibe coding, then decide whether to keep up with the idiocy or just walk away from yet another argument where you are clearly in over your head, talking about things that you don't really understand.

And the “engineers being deprived of training” bit? That’s on them. If someone relies on AI without understanding fundamentals, that’s the person’s fault — not the tool’s. People write garbage code every day without AI too. This isn’t some new catastrophic phenomenon. Bad engineers produce bad code. That’s been true since the beginning of programming.

Yeah, but the problem this time is that these vibe coders are getting jobs doing things that they aren't really qualified to do, and they stall seniors like me, because we end up having to spend hours reviewing huge amounts of AI slop just to enumerate all the problems and document their incompetence so that they can be fired without compensation. The time we spend doing this could be much better spent actually producing code ourselves, or at least teaching junior developers to make them productive later on, so vibe coders end up contributing negative productivity, making them even worse than dead weight.

And the “AI slop contaminating the dataset” argument? And the “delusions”? That’s no different from what’s already happened for decades. YouTube slop, clickbait slop, misinformation slop — humans generated mountains of garbage long before AI existed. AI didn’t invent low-quality content. It just made the existing mess louder.

I find it amusing that, despite not actually understanding the subject, you talk like you're an expert, and asking questions, or at least researching a bit to learn something, never really crosses your mind! The problem is not low-effort content on the Internet in general; it's that AI slop specifically degrades the reliability of the models in ways that human-generated content does not, for reasons that remain poorly understood. That makes this phenomenon a ticking time bomb that will eventually lead to models collapsing as the amount of AI slop in the training corpus grows, unless people figure out a proper solution to save that industry.

And by the way — if your entire stance hinges on “show me good AI-coded software,” fine — the same challenge applies in reverse.

No it doesn't, because unless someone guarantees and offers evidence that something is vibe coded, I cannot make such a claim myself. At most I can have my suspicions, but that's not enough for me to make such a claim. We've had cases in the past where companies tried to pitch human-written code as AI-generated, one of them involving a huge company in India backed by Microsoft funding. If I were arguing in bad faith like you are, I could very easily ask my local AI model to generate some code, roast it, even post it to a different GitHub account that couldn't be traced back to me, and you would be none the wiser. So your attempts to shift the burden of proof onto me only show that reasoning is not your greatest skill, at least I hope, because if it is then blindness is not even your worst disability...

You say the code is always garbage? Show an example. Because everything from accessibility mods to internal tooling to research assistance to documentation automation contradicts you.

Yeah, except none of those are examples of vibe coding...

And again, that’s just scratching the surface — and people HERE have already given examples that directly undermine what you’re claiming.

No, they haven't; they have just made claims that remain unsubstantiated at the time of this comment and that I cannot verify independently, so until actual verifiable examples (AKA proof) are delivered, all my arguments stand unchallenged.

I’ve said my piece. If you can’t accept the facts, that’s on you, not me.

There's a huge difference between the bullshit that you choose to believe in and the factual reality that you refuse to check your beliefs against.

Anyway. Moving on... What this boils down to is: don't use AI as "write a prompt and I'll get it done in one go." Check the output like you would anyone else's work: a team member's, a student's, what have you.

Except that even that workflow is not resulting in actual value being produced, regardless of what you choose to believe, because, as I said earlier (and contrary to you, I have decades of professional experience in this field), reviewing code is significantly harder than writing it. So not only is vibe coding as a concept total bullshit, as you admit yourself even if you don't really know what the term means, but even the workflow of having AI write code under human review has yet to demonstrate any kind of value. Sure, some people are really trying to come up with a way to make it work, for reasons that escape my imagination, but so far nobody has demonstrated a workflow that truly makes software development both easier and faster without a huge sacrifice in quality, which is what's supposed to be happening because that's what's being advertised. Once someone manages to demonstrate that I personally can do a better job in every possible respect using AI, I will admit to being wrong, but at this point my observations tell me that current technology is nowhere near that point, so I keep challenging everyone who thinks differently to just show evidence, which I think is a perfectly reasonable position.

By mr grieves on Wednesday, February 11, 2026 - 08:18

Ah, I understand now. If you look at something like JavaScript, it is callbacks within callbacks within callbacks. (Admittedly not so bad if you are using async/await to hide that.) But, no, Python is not like that - it is more straightforward in that regard. It does have its equivalent of JavaScript Promises (futures in Python) and it does have async/await and some of that, but it's more something you set up because you need it in a particular situation rather than something that is forced on you.
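
Just to illustrate (a throwaway sketch, not from any real project, and the URLs are made up), the opt-in flavour in Python looks roughly like this:

import asyncio

async def fetch_status(url: str) -> int:
    # Stand-in for real network I/O; asyncio.sleep just simulates the wait.
    await asyncio.sleep(0.1)
    return 200

async def main() -> None:
    # The await/gather machinery only appears because we chose asyncio here;
    # ordinary Python code never forces this style on you.
    results = await asyncio.gather(
        fetch_status("https://example.com/a"),
        fetch_status("https://example.com/b"),
    )
    print(results)

asyncio.run(main())

If you never need the concurrency you simply never write any of that, which is what I mean by it not being forced on you.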

By JoĂŁo Santos on Wednesday, February 11, 2026 - 08:22

After posting my previous comment, I actually recalled a glaring example of AI junkie deception that I found on Reddit some time ago and decided to take a snapshot of for posterity, because the sheer ridiculousness of the whole thing has a lot of comedic value. On that thread, the original poster, who claims to have 15 years of software engineering experience, also claims to have vibe coded a whole browser, complete with a rendering engine, during Christmas, with almost no dependencies, in 50,000 lines of Rust, and pretty much everyone in the Anthropic community got trolled into buying the lie without actually bothering to verify the claims themselves.

Checking the code reveals that the reason it doesn't declare any dependencies is that he essentially ran cargo vendor on them and made all of them part of his own crate, eliminating the need to declare them, and the engine supposedly coded from scratch is just the Chromium Embedded Framework on Windows and WebKit on macOS, so it's essentially Chrome and Safari wrapped in custom windows... To people who don't know any better that's evidence of AI producing value, but to people like me it's just a huge grift. My advice here is to not take everything at face value, and especially to avoid the temptation of pretending to know more than you do by presenting your beliefs as facts, because on the one hand you might end up spreading misinformation and fooling others, and on the other hand anyone competent in the field will immediately flag you as a fraud.

By mr grieves on Wednesday, February 11, 2026 - 08:34

I don't think it is fair to say that nothing of value has ever been produced by vibe coding. That is a totally blanket statement. However, it is also fair to say that not everything that is vibe coded has value. And it is also likely true that vibe coded projects are likely to accumulate technical debt a lot quicker than normal projects.

My little bitbucket app does have value to me. It's not changing the world but it does make my life easier.

So maybe it depends on what you personally think of as value. Does it justify the cost and expense of it all? Maybe not. But that doesn't mean it has zero value.

It should be understood that there are different types of systems out there with different requirements. A relatively small app that satisfies a particular use case has value. Writing a cloud-based architecture with vibe coding and then not being able to understand what is going on, not so much. Rewriting macOS or VoiceOver with vibe coding is probably not going to go well. Using AI to find bugs in those tools, however, is a great use case and hopefully one Apple might start using at some point.

I think there are certain developers - myself included - who are extremely protective over their code and the way it is written. The thought of something else coming in and contaminating it is stressful. I have a very particular way I like to structure things, write unit tests and so on, and the thought of giving that to AI fills me with dread.

However, I can see a point in time when developers are more about asking the questions and checking the results more than actually coding. I don't particularly like that - I prefer writing to reading code. But there is no denying the speed of vibe coding. Admittedly not always but when it works it is scarily impressive.

I'm not sure it is possible for a professional developer to not have reservations about vibe coding. For starters, it is a threat to our livelihood. In many ways I don't want it to be good. I want to be better than it is. But I worry that it is old-fashioned thinking and that I am going to have to release a little of the control I have at some point.

I am pretty sure some of these big companies are going to be using vibe coding for certain things. I doubt it is possible to say exactly how much without being in that environment.

By JoĂŁo Santos on Wednesday, February 11, 2026 - 09:00

My little bitbucket app does have value to me. It's not changing the world but it does make my life easier.

Computers are made to solve real-world problems, so that's how I define value. If, as a developer, what you are building is not solving a real-world problem, then there's no value in that, and vibe coding makes it even worse, not only because it sucks but also because when it actually works you aren't really learning anything, so the educational value is lost as well.

It should be understood that there are different types of systems out there with different requirements. A relatively small app that satisfies a particular use case has value. Writing a cloud-based architecture with vibe coding and then not being able to understand what is going on, not so much. Rewriting macOS or VoiceOver with vibe coding is probably not going to go well. Using AI to find bugs in those tools, however, is a great use case and hopefully one Apple might start using at some point.

Yeah but that is not an example of vibe coding.

I think there are certain developers - myself included - who are extremely protective over their code and the way it is written. The thought of something else coming in and contaminating it is stressful. I have a very particular way I like to structure things, write unit tests and so on, and the thought of giving that to AI fills me with dread.

I would have absolutely no problem delegating coding tasks to AI if it were reliable, much like I don't mind writing code in a high-level language even though I'm perfectly capable of writing assembly myself, or using existing frameworks that I could also implement myself. The problem is that, as it stands, it doesn't really work; I cannot trust it to do a good job, and therefore I find no reason to spend time trying to make it work properly when I have very strong reasons to believe that it never will, and that tried and true classic algorithms leave AI solutions in the dust in every case that I can imagine.

However, I can see a point in time when developers are more about asking the questions and checking the results more than actually coding. I don't particularly like that - I prefer writing to reading code. But there is no denying the speed of vibe coding. Admittedly not always but when it works it is scarily impressive.

And when does it work? I'm talking about real software, not toy programs.

I am pretty sure some of these big companies are going to be using vibe coding for certain things. I doubt it is possible to say exactly how much without being in that environment.

Of course they will, because the higher-ups have absolutely no clue, so they're forcing it down everyone's throats over fears of losing to the competition. It will also fail, and they will blame everyone other than themselves for the failure, jump ship, and do the same all over again elsewhere. Some time ago I read an amusing thread on Digg that sums up what's going on perfectly.

By mr grieves on Wednesday, February 11, 2026 - 09:40

My bitbucket app is solving a real problem - there is something I could barely use before, and now I can. Sure, I've not learnt much from the experience, but that does not mean it isn't of value.

Literally I had a problem statement, fed it into codex and (eventually) got to a solution. It couldn't be more about solving a real-world problem.

I think it is easy to be snobbish because we feel we know better. Labelling something a "toy" because it isn't a massive engineering effort is dismissive. Plenty of little things are extremely helpful and genuinely add value.

I think if we treat every software project as being the same, then it's easy to see vibe coding as either the answer to everything or a waste of time. But I think that is an over-simplification. There are lots of different types of software - for some of it, it is really, really important that it is written well. For some, not so much. But at the end of the day, we should be judging on outcomes.

We also need to understand that not all developers are created equally. Some vibe coded software is going to be better than some of the rubbish put out by real people.

I am pretty sure that the software @JoĂŁo Santos writes is not the kind of thing that vibe coding should go anywhere near. Honestly, I'm in awe of what you are able to do. But don't forget, not many people are at that level and not all people are solving the same sorts of problems.

That's not to say there isn't a danger in having lots of these vibe coded apps out in the wild, and suddenly most stuff running on my Mac is written by someone who doesn't understand what's going on at all. There are levels of abstraction, and then there is this. So it's not that I am unconcerned by this, but I think "there is no value" is just not true. If you were arguing that the value is not sufficient to justify the various costs that go with it, then I think that would be a more compelling argument.

By Brian on Wednesday, February 11, 2026 - 09:50

This has been touched upon briefly, both by @João Santos and @Mr. Grieves. I am referring to proofreading code, of course. It is time-consuming to proofread one's own code, and a pure headache to proofread someone else's. Anyone who has ever had the benefit of learning how to write by hand, not on a touchscreen device but with actual pen and paper, or pencil and paper, understands that absolutely everyone has their own handwriting style: the angle they hold their writing device, the way they write characters on paper, the curves, bends, and angles. Hell, even the way they cross their T's and dot their I's. For example, back in my sighted days, I actually used to dot my I's with the accent symbol. For those of you who have ever had eyesight, think of the é (the little tick above this letter is somewhat similar to an apostrophe, visually speaking) in words like fiancé or résumé. I couldn't really tell you why I dotted my I's like this, I just did.
This is also true when writing code. Everyone has their own coding style. It can be a subtle thing, such as an extra space or tab between lines for aesthetics' sake. It could be adding commentary between each line, or something else more elaborate. When we proofread our own written code, we develop a kind of foresight. In other words, as we go line by line, we know what to expect from start to finish as we work our way through each line of code. This in turn helps us to find errors quickly, typically speaking of course. Because, as anyone who has ever coded anything can tell you, sometimes errors can just sneak up on you like a ninja. đŸ„·
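
To make that concrete, here is a contrived little sketch (not anyone's actual code) of the same function written in two people's styles:

def average(values):
    # One person's style: terse, everything on a single line.
    return sum(values) / len(values) if values else 0.0

def average_commented(values):
    # Another person's style: named steps and a comment on every line.
    # Bail out early if the list is empty.
    if not values:
        return 0.0
    # Add everything up.
    total = sum(values)
    # Count how many items there are.
    count = len(values)
    # Divide to get the mean.
    return total / count

Both do exactly the same thing, but reading one when you are used to writing the other is where the proofreading headache starts.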

I bring this up primarily because, imagine you write out a piece of software using vibe coding, from start to finish. Even if you know the language used intimately, you are the one who has to proofread code written by another entity. And honestly, if you're gonna proofread code, you're going to want to do it manually, which can be insanely time-consuming. However, that's really the only way you're gonna not only understand the way the code has been implemented, but understand how and why errors have occurred. And if you think that's bad, picture this: back when I was in college, I did a number of group projects where three or more people had to collaborate on code, writing it either simultaneously or taking turns. Then one or more of us would have to proofread the entire thing. I can't even begin to describe how much of a pain those group projects were. AI as we know it today didn't exist back then, but imagine it now. Imagine being in a group project, you and two or three other students having to collaborate on code in order to develop some software for a grade. Two of you are writing out the code yourselves, understanding the language you've been taught, while the third person decides to vibe code everything. Now all of the code has been put together and at least one of you has to proofread it all.

Have fun with that...

Edited for typos

By JoĂŁo Santos on Wednesday, February 11, 2026 - 10:09

Literally I had a problem statement, fed it into codex and (eventually) got to a solution. It couldn't be more about solving a real-world problem.

You know what else has the same effect? Drugs! People start off feeling curious about them, or feeling peer pressure to try them, cave in, get high, and then all of a sudden they need more and more, and end up completely dependent. At this point those drugs clearly have value to them, since they literally spend money on them, sometimes even money that they can't really afford to spend, because they need that high. It's not really important in the grand scheme of things, humanity is not progressing the slightest bit because of it, quite the opposite, but for those people getting the next fix means the world. It's the same thing in this case. You have a problem, you delegate the solution to a drunken robot, it solves your problem, and you get the dopamine hit for accomplishing your goal, except you learned nothing from it, so the next time the same problem materializes you'll be doing it all over again. At that point, avoiding doing the same for every other kind of problem becomes hard, because you want the dopamine high of accomplishing more goals, so you keep reaching for the drunken robot that's been helping you all along. All of a sudden you become so focused on the goals that you completely forget about learning, and the code produced by the drunken robot doesn't really fly in production because it's utter trash, so the dependency that you grew on the drunken robot is now a huge liability that may have value to you personally but isn't really contributing anything positive to the world.

As I said earlier, AI is just a gimmick in this context. It's impressive from a scientific perspective, but it sucks at writing code, and the people claiming differently are just fooling themselves. That example of implementing DNS out of its RFC that I mentioned earlier? That's something that my 16-year-old self had no trouble implementing in 1998, so until someone manages to vibe code something like that, nobody can claim that AI is even as competent as I was as a teenage junior developer with zero professional experience back in the late '90s.
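
To give a concrete idea of the kind of exercise I mean, here is a minimal sketch of just the query-building half of it, written in Python rather than whatever I used back then, and following the header and question formats from RFC 1035 (the hostname and resolver address are only placeholders):

import socket
import struct

def build_dns_query(hostname: str, query_id: int = 0x1234) -> bytes:
    # Header: ID, flags (recursion desired), QDCOUNT=1, then three zero counts.
    header = struct.pack("!HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # QNAME is a sequence of length-prefixed labels terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    # QTYPE=1 (A record), QCLASS=1 (IN).
    question = qname + struct.pack("!HH", 1, 1)
    return header + question

# Fire the query at a public resolver over UDP and dump the raw reply.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(2.0)
    sock.sendto(build_dns_query("example.com"), ("8.8.8.8", 53))
    reply, _ = sock.recvfrom(512)
    print(reply.hex())

Parsing the reply, with its compression pointers, truncation flag, and fallback to TCP, is where the bulk of the work actually is.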

By Doll Eye on Wednesday, February 11, 2026 - 11:39

Just to try and clarify for my own purposes: AI (man, I hate that term) is useful for proofing, condensing, tidying, organising, etc... But when it comes to actual vibe coding, as in I ask it to build me a game like Mario Kart but for blind gamers... It's cack, because there are too many ways in which the brief can go, the scope is too wide and, when it comes to it, entropy spreads...

The greatest use of LLMs, in any context, is small context corrections, proofreading, slamming out a small block of code with a highly specific and well-specified task, in the same way I could say: send an email to Michael saying I can't make dinner on Friday, check my calendar and give him alternative dates, make it polite...

And so on and so forth... I never leave LLMs to write my emails, by the way, and probably have the same perspective as JoĂŁo when it comes to writing fiction. We can be very protective of our skills, but this might also make us blind to the opportunities such evolutions present. LLMs can be and are used in coding at highly successful companies, even if it is on the smaller-scale tasks, proofing, for example. It's a tool.

I hate the idea of LLMs being used to write any fiction, because storytelling is a human-to-human thing and, to me, is very important. When an LLM is mixing together all the elements of the things readers seem to like, you end up with the literary equivalent of fast food for the mind, and we all know what fast food does to people.

I also hate the term AI. It's not artificial intelligence. Let's call it what it is: machine learning, large language models, etc.

In my final year of Computer Systems Engineering I built a neural network, trained on a large data set, which could recognise bottlenose dolphin call signs. That was 20 years ago. I'm basically the godfather of AI for cetaceans.

By mr grieves on Wednesday, February 11, 2026 - 12:06

OK I think that's about as much hyperbole as I can cope with. I will leave you to it and in the meantime I will enjoy the value from the app I now have.

I'm not even going to entertain a reply to that drugs comparison.

@Doll Eye - as per your other comments on this thread, your last comment is spot on.

By Doll Eye on Wednesday, February 11, 2026 - 12:23

I have to admit I did start skim reading after a while.

Wish someone had opened with vibe coding being as good as drugs. Would have saved me a lot of time... If not money.

By JoĂŁo Santos on Wednesday, February 11, 2026 - 12:46

> OK I think that's about as much hyperbole as I can cope with. I will leave you to it and in the meantime I will enjoy the value from the app I now have.

Remember to update this thread the next time you make a contribution to a free software project written by another human being, unless contributing to free software isn't on your mind because writing the code wasn't your goal, so taking advantage of it didn't trigger a dopamine high...

By TheBlindGuy07 on Wednesday, February 11, 2026 - 13:05

LLMs in general here, not vibe coding... I'm an avid fantasy reader. LLMs with insane context windows, like Gemini, are as addictive to me as, say, TikTok or Instagram are for sighted people, i.e. there's always tons of overstimulation for the brain and an instant response. That's a recipe for addiction.
@Joao thinks long term, and on AppleVis we have the greatest example of all of a codebase architecturally becoming so fragile that one addition breaks 10 things that won't be fixed for decades. Be my guest at guessing what it is.
Time will tell how this tech will have impacted us; FYI, it's already happening.
Happy coding. Or general prompting.