Hey!
This is probably going to be a highly technical thread which I'll regret spawning... But, as we have some very clever computer science types on here, I was wondering if you could give me and others some pointers on vibe coding?
Questions that come to mind:
1. Is it actually worth it?
2. What are the limitations in scope?
3. Is it indeed improving as the big three are telling us?
4. Where is it best to get started on the Mac with VoiceOver, i.e. an accessible route in?
I'm sure other questions will arise.
I did have a play a few months back with Xcode and Claude and, though it worked well at first (an iOS app for playing my audiobook library from iCloud whilst grabbing metadata from Audible), the further down the line I got, the more errors appeared. It was basically losing coherence with each pass, entropy spreads and all that. I tried creating a brief, put it in a folder, and kept asking the AI to refer back, checking against the development goals, but it got confused, more errors floated in and, not being lingual in such things, I just let it get on with the debugging.
I know what coders will say if I ask the question of whether there is any value in us learning coding basics, but I'm also looking for the fastest start-to-finish from idea to app. I know that sounds lazy but... I really can't be bothered to end this sentence.
I think what puts me off coding, and I have coded before during my degree in computer systems engineering, is the sheer weight of code: navigating it, syntax errors which are hidden from us, all of which results in me coming out in a cold sweat when faced with a wall of it.
I'm hoping you can give some pointers on this. Is Xcode, in fact, the best way to do this on the Mac, or are there better solutions?
Please talk to me like I'm an idiot... because... Well, I won't finish that sentence either.
Comments
Check out Stephen's thread
If you want an idea of what is possible with VibeCoding, especially from a blind perspective, check out Stephen's thread below. Oh, and also check out the AppleVis podcast that correlates.
https://applevis.com/forum/assistive-technology/stop-waiting-ai-tell-you-what-see-start-exploring-it-yourself
https://applevis.com/podcasts/applevis-extra112-stephen-lovely-rethinking-visual-accessibility-vision-ai-assistant
Thanks
I did start reading this post a while back but gave up. It feels like he's shouting at me... I don't like being shouted at. It makes me sad.
Also, assuming it's all in caps.
I'll check out the podcast instead.
Nothing of value
Nothing of real value has ever been produced with vibe coding; it's a total waste of resources, using very powerful technology the wrong way just because C-suites want to be self-sufficient. All the AI junkies are collectively digging their career graves, not because AI is getting any better but because they are letting their skills rot away by relegating themselves to the passenger's seat instead of driving innovation as pilots.
Large language models are interesting from a scientific perspective, but in terms of production they are actually contributing negatively to society. The same systems that are being used to power them at a loss and at scale could instead be employed to do a lot more interesting scientific research. At this pace we are more likely to end up in a reality where AI outsmarts us not because it evolved into superintelligence but because we got a lot dumber, collectively speaking.
Great as a proof of concept.
Vibe coding is great if you just want to get started with an idea and want a proof of concept. However, the "move fast and break things" philosophy has made software far worse because shipping with bugs is now more tolerated than ever. Many accessibility bugs, for example, are produced by the large language model, and in my experience it doesn't know how to fix them without human assistance.
The limitations are pretty clear to see, as long as you don't buy into the hype. You have already noticed that as complexity increases, so does the amount of errors. Now, we've built over the last year or so very good scaffolding to keep this from happening. Tools like Claude Code, for example, which is not accessible BTW, have a lot of architecture to guide the large language model on what to do. This works, but it still depends a lot on the human using the tool. You will not get a good software project from zero-shot prompting. I don't think this will ever be possible in the future due to the nature of LLMs.
You have to be as explicit as possible with most of them, which is already hard for most humans to begin with. A large part of programming that doesn't get stated is that you often have to translate what the user actually wants into code, but what the user actually wants is not immediately clear by the language that they're using. That's where the human capacity has to come in.
The only frontier where they seem to actually be improving is the mechanics of writing code, which is admirable but won't replace anybody unless they're doing easy tasks and maintenance. The reason people struggle with coding is that they get buried in the mechanics of the thing instead of attempting to understand it from first principles before any coding is actually done. But if all you can do is measure yourself by the number of errors in your code, which is natural if you're starting, you'll get very frustrated that way. The brain can't keep track of context right away; rather, it has to adapt over time. LLMs tend to drift out of context, which is a hard problem to solve. That's why they'll lose track of whatever they're doing unless they are grounded by tricks, either by explicitly reminding them over and over or by external memory, which is not always going to work. So really we're trying to hack around their limitations. I like treating them as fancy simulations to bounce ideas off of.
Hope this helps.
It has its place
Sometime last year, Atlassian broke the UI for a feature I depend on for my work in Bitbucket, so it was virtually impossible to use with VoiceOver. Well, I have been struggling with it for about 8 months or maybe more, and it still doesn't work.
A few weeks ago, I was listening to the developer of Blind RSS on Double Tap talking about Vibe Coding, and it occurred to me that I now have access to a ChatGPT Business Plan.
So I installed "codex", which is the ChatGPT command line tool. Connecting to my account was a bit confusing as VoiceOver struggled a little to tell me what my options were, but I got there.
At this point I should tell you I write Python for a living. However, I have never written Swift (the language you use for iOS or the Mac), and I have never developed a macOS UI.
I told Codex what I wanted: a native macOS app, connecting to Bitbucket Cloud, that could help me load up a pull request and view comments. Over the course of 3 days, and maybe about 8 hours in total, I had something working that was genuinely useful. I have been refining it a lot since then and adding a load of features, although I still can't do everything I need yet.
But without touching a line of code I have something working and 100% tailored to my use case.
In this scenario, all I needed beforehand was Xcode (a free download for macOS/iOS development), Node.js (free install) and codex (free download/install). I believe I needed a paid subscription. There is now a macOS app for codex which seems accessible from a first play; however, whether it stays that way is anyone's guess.
There is absolutely no way I would have developed this on my own. Much as I would like to have the time and energy to learn Swift and figure all this out on my own, if I had gone down that path I would never have anything working and the idea would have just faded from view over time.
I did get a little too ambitious to start with. I think keeping it small and iterating probably works better than trying to describe everything in one go. The first version of the app, for example, just had hardcoded values everywhere. I was particularly impressed with the way it created a diff view so I could see what had changed in a file. I described one way to do it, which I think it largely ignored, and what I got was actually pretty fantastic. Simple but understandable. The best thing I have used natively on a Mac with VoiceOver, even if it is lacking in features right now.
There are things it struggles with. For example, I wanted it to build a tree for the files but it seemed to struggle. Maybe it was my prompts. Sometimes it has failed to do something - e.g. I wanted to use APIs to get some extra details for things and it just couldn't do it - it tried and then rewrote and rewrote again and again but every time it failed.
The other thing: I keep telling it to make sure it works with VoiceOver. It generates an agent.md file, which is a markdown file with instructions about how it should work, so I added it there. But I can point out bugs where VO navigation isn't working and it will usually fix them, even if it can take a number of goes.
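For anyone curious, that instruction file is just plain text in markdown. A hypothetical sketch of the kind of accessibility guidance you might put in one (the file contents and wording here are my own illustration, not the actual file from this project):

```markdown
# Agent instructions

## Accessibility
- Every interactive control needs an accessibility label and, where useful, a hint.
- All views must be fully navigable with VoiceOver; check focus order too.
- Never convey state through colour alone.

## Workflow
- After each change that builds and works, commit it with a short message.
```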
One of the best things I asked it to do was add an HTTP log so I could see what was going on. That helped me notice some crazy things, e.g. it was going through all pages of an API, so calling it loads of times, and then I wasn't even using that data. So I was able to direct it.
I've started to try to pay more attention to what it is doing. It is sometimes adamant that something that doesn't work is correct, but it can do a workaround for the instances when it doesn't work, as opposed to just using the right thing. I need to be careful about relying on it too much without paying any attention at all.
I think there is a definite danger to developing an app without actually understanding anything that is going on. The first time it broke the build and I had to figure out how to find the error to paste into it was unnerving. But I have no idea what the code is doing. I have no idea what the UI looks like or how many hacks have gone in to get things to work. Honestly, I don't care because the app is already incredibly useful and will hopefully continue to be.
It would be a little different if I decided to release this into the wild. For example, I am pretty sure the keys I used for the API are not in the code but I haven't checked. And I can't say for sure it's not doing anything it shouldn't.
Obviously it would be better if Atlassian didn't just break everything all the time and then not fix it. I shouldn't have to do this. But I am really grateful for the option.
In terms of a process, I find it both enjoyable and frustrating. It's amazing to say "can you just add a list of such and such containing this..." and then a minute later there it is. That is pretty mind blowing. On the other hand, you do need some patience when you endlessly iterate over a problem and it continues to be incapable of fixing it. I have wasted a lot of time trying and failing to have it do something that feels trivial.
Also sometimes you wonder if it really knows what it is doing. It does occasionally feel quite trial and error.
Oh, before I forget, one essential thing you should be familiar with is git, the version control system. Every time codex manages to do something and it seems to work, I tell it to commit the changes. You don't have to know the syntax or do it yourself, just be aware that you should do this a lot. The reason is that if it makes a total mess of things, you can always say "revert all changes!" and get back to where you were.
Professional Vibing
OK the last comment was a bit long, but I thought I would follow-up with how I am using this in my professional life.
Firstly, I think it is a bad idea to write code you do not understand on a professional basis. Let's face it, over the years abstractions have piled up, so many developers don't understand absolutely everything anyway. E.g. if you are writing a Mac app, you might drag controls onto a design surface without really knowing how they are getting rendered, or you will probably use a library to access a database. But I think it is a big step from there to understanding nothing about your own work.
We have set up codex as part of code review. I am a little uncomfortable sending all our software up to ChatGPT to look at. This doesn't feel sensible, but it's not my call. The results, however, are actually pretty amazing. It can really understand a code base, and even look at the ticket to try to understand why a change is being made. It often comes up with incredibly insightful comments. It's not always right, but it is always worth considering what it says. Compare that to the human reviews I get, which are almost always pretty useless beyond pointing out some cosmetic errors.
I will use codex to vibe code a proof of concept, particularly if I am not given the time to do it properly myself. I have only done this once and my intention is to rewrite or refactor so I can make sense of it all as I don't really understand what it is doing yet. Which is OK for a quick POC but not good long-term.
And you can always use it for little bits. For example, I asked it to add a few things to an existing template when I didn't know the syntax. I had a look to see what it had done afterwards and it just saves time when compared to going to google or normal ChatGPT.
I don't really want to use vibe coding for anything I have to support because it is essential I know what is going on. I don't think I would feel comfortable going all-in with vibe coding professionally, but I think there are related tools that are useful and it certainly does have its uses.
I do have a lot of concerns about vibe coding and AI in general. Much as I like some of the tools it gives me, I think the bigger picture is pretty terrifying and I hate to think of the implications it has for society and the environment. It worries me a lot that things are becoming effortless to the point where no one is going to really value anything any more. I think you get more value when there is effort and graft, blood, sweat and tears. And less when you just say "I want it now!" and there it is, and now it's boring, and on to the next thing.
But as a blind man I appreciate all the help I can get.
Conflicted
I am of two minds when it comes to VibeCoding. On one hand, I absolutely agree with @João Santos that fully relying on AI to write code and build programs of any significance is just going to lead to disaster. As someone who graduated with a degree in computer science, I have an understanding of what it takes to learn code, to learn syntax, and to understand the differences between variables and operators, 'if statements' and boolean states, arguments and definitions, just to name a few things.
On the other hand, I don't think there's necessarily anything wrong with using AI to double-check a line or two of code that you are perhaps having trouble with. Say, for example, there is a line, or even a block of code, that is giving you trouble; perhaps it is producing an error and you can't quite figure out how to go about correcting it, or other such situations. In these instances, I don't see how utilizing an AI is necessarily a bad thing.
So...
Pre-existing coding knowledge does seem to be the key here. Syntactically, LLMs can be a useful tool; I use them all the time for command-line stuff which I can't recall or just don't know. It does seem the command line, like Terminal, could do with a natural language option or, at least, a corrective option.
Context drift is a perfect way of putting it. I've tried to use AI to help refine my writing in the past, but after more iterations it starts contradicting itself and I get that feeling I'm running in circles around something that is dissolving.
I guess we could say: create an action that takes file x, converts it to M4A, then strips the metadata, searches it on this URL, and returns and replaces the metadata... Then it's still on us to go in, look at the steps it's taken and actually understand what it has done.
Using the writing metaphor, as in fiction, it's the same as an author writing a novel about a honey badger called Norman who dreams of being a member of the USA bobsleigh team, leaving it to do its thing, and being surprised when it comes out with a pile of shit. Readable, but functionally and emotionally lacking.
An LLM is autocorrect on steroids, as far as I understand it. This dissolution of intent will always broaden as a function of iterations, in the same way we could hit the predictive buttons above our iOS keyboard until we've got a string of utter nonsense on the macro level.
pre-existing knowledge is essential
You need pre-existing knowledge. You can vibe an app and release it, but what will you do if it breaks? How do you know it's secure? And all these self-contained platforms that people are using to vibe-code apps: what happens when that company goes down, or you want to move your hosted app somewhere else?
Where vibecoding can be useful is in situations where you've got a bug in a large codebase and you can't figure it out. Or when you want to automate a load of boring, repetitive tasks. Or when you want a very simple script to do something for yourself.
Case in point: I was working on a mechanical project yesterday, building a turntable part. I wanted to evaluate the fundamental resonance frequency of an aluminium component and find an optimal value where the resonance frequency was above a certain point, the moment of inertia was within a certain range, the overall component mass was low, but there was still x amount of material left. Because I can't use FEM tools in a CAD program, using Python was the best way to do it. The script had to digitally model the exact part, run through as many combinations of numbers as were valid (nearly 7000 combinations) and output the results. I could probably have written the Python script myself in half an hour or so, once I'd figured out the math, looked up the syntax for any unfamiliar functions, etc. Gemini wrote it in about 30 seconds; I checked it for any errors, and had my part optimised in less than 10 minutes. That's where vibe coding can be useful. But not for writing apps, and not if you don't check its work before you run the code.
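As a rough illustration of the kind of brute-force constraint sweep described above (the geometry model, formulas, ranges and limits below are all placeholders I made up for the sketch, not the actual part or script):

```python
# Hedged sketch of a brute-force parameter sweep with constraints.
# The resonance formula and inertia model are simplified stand-ins.
import itertools
import math

def resonance_hz(thickness_mm, diameter_mm):
    # Placeholder for the real resonance model of the part.
    return 2000.0 * thickness_mm / diameter_mm

def moment_of_inertia(thickness_mm, diameter_mm, density=2.7e-6):
    # Thin solid aluminium disc about its axis: I = (1/2) m r^2.
    # density is in kg/mm^3, so inertia comes out in kg*mm^2.
    r = diameter_mm / 2.0
    mass = density * math.pi * r * r * thickness_mm
    return 0.5 * mass * r * r

def sweep():
    candidates = []
    # Try every valid combination of thickness and diameter.
    for t, d in itertools.product(range(2, 12), range(100, 300, 5)):
        f = resonance_hz(t, d)
        inertia = moment_of_inertia(t, d)
        # Keep only combinations meeting both constraints.
        if f > 60 and 100 < inertia < 10000:
            candidates.append((f, inertia, t, d))
    # Prefer lower-mass parts among the survivors.
    candidates.sort(key=lambda c: c[2] * c[3])
    return candidates

results = sweep()
```

The useful part of having an LLM write something like this is that the tedious plumbing (the loop, the filtering, the output) is instant, while you still check the physics and the constraints yourself before trusting the numbers.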
Devin and Codex
You might be interested in what amazing stuff Devin is coding
https://tweesecake.social/@pixelate
don't be ambitious
I think the main point all you experienced coders are making is: don't believe the hype, don't be ambitious, don't assume you can have an idea, bang it on the App Store, make millions, and retire to an island. Rather, get some coding behind you, keep your sights low, understand it's only a shortcut to iterative processes and, on the whole, take responsibility for your code...
Is that right?
I suppose the next question is: where should a fool like me start with learning about coding? What are the fundamentals? My memory of objects, arrays, classes, etc. is really blurred. What's a good and accessible way of trying a few fun little projects with a satisfying output? Even better, one which, once I understand how things fit together, could include some vibe coding to speed up development rather than the lazy wand-waving I was hoping for?
Is Xcode good? It did seem a little overwhelming when I was having a poke about. Are there any good VoiceOver/coding instructions you can point me to?
Doll Eye
There is no truer statement. Oh, and feel free to be as ambitious as you like, just do it on your own merits. With regards to Xcode, if you are coding on a Mac, you have limited choices here, at least in my experience. You can use Xcode, you could use Vim or Nano from the command line via Mac Terminal, or you can write your code manually in a simple text editor, compile it in Xcode, and go from there. As for where to start, I would recommend something like Python, as Python is a lot more straightforward than some of the other more robust programming languages out there. Sadly, I do not know if the Python IDE is accessible these days, but you can run Python through the command line and test out your code that way without using Xcode, if Xcode is too overwhelming for you.
HTH.
Edited because I forgot to switch to markdown
Great Points Here
I agree with a lot of what has been said so far. Unless we get a massive shift in how these AI models are created, there will likely never be a situation where you can type a one-shot prompt and get a completely working app with zero issues or vulnerabilities. With that being said, these tools are definitely useful and you shouldn't dismiss them. I don't think it's a good idea to do so-called "vibe coding" without understanding anything about the underlying concepts behind programming, the language you're writing in, etc., but it can speed you up for specific tasks.
For example, let's say you wanted to add a feature to an app you're working on. If you already have a decent idea of what you want and how it should be implemented, it might be faster and easier to ask one of these coding agents to try to code it for you, of course monitoring as it goes and reviewing the output. For some people, myself included (I'm pretty new to CS), it's easier if some code has already been written that you can just review. Sometimes the hardest thing is getting started, and these tools can help with that problem.
Even companies like Apple are starting to embrace this technology. I watched a developer session a few days ago where they were showing off LLMs directly inside Xcode to build an iOS app. They weren't prompting it for the entire thing, they were going step-by-step and looking at the app take shape in real time.
Python
Am I correct in thinking Python is sequential?
And could something like TextMate be a good editor to use? I've mucked about with it in the past.
And, and I'm sorry to keep asking questions like a small child... Is Python a good gateway language to lead into things like Swift?
Starting languages
I would think the starting language will depend on what you want to do with it. If all you are interested in is building a native app, then I would likely look to Xcode and Swift. As I said before, I only dipped my toe into Swift, but I would imagine it is what most people use for apps, and there are plenty of blind people using it.
I mostly like Python. It isn't quite as punctuation-heavy as some languages, which works better for me with VoiceOver. However, not all of its standard method names are easy to hear, as it mashes words together, and you must enable indentation sounds of some sort.
If you are happy writing little command-line tools for the terminal then Python will do pretty well. I've never tried writing a UI with it and honestly I'm a bit suspicious of doing so. It can be used for web APIs or server-side applications if that's your thing. Or if you were writing an Alexa skill or some such it would likely be good.
As for IDEs, PyCharm is excellent and works really nicely with VoiceOver. There are a few things that aren't accessible (like the database table viewer which I never use anyway) and the built-in terminal last time I checked. But all the things I use on a daily basis work as well as anything on the Mac.
Personally, and no doubt this will attract some scorn, but I think coding with a pure text editor is a fool's game. I think there are some usable text editors on the Mac but I can't say I love any of them. BBEdit is maybe the best of the bunch. Smultron isn't bad. I use TextMate for a few things but I've never tried actually writing anything in it. I think it's more of a fault with VoiceOver than any of the actual editors but none quite work the right way for me.
Anyway, just my thoughts. But I will admit I am totally stuck in my ways.
Re: Sequential
Sorry, I glossed over half the reply. Firstly, never feel bad about asking questions. The ones you are asking are great questions anyway, but even stupid ones are fine; I've asked enough myself. If anyone is down on you for having interest and enthusiasm and wanting to learn, then that's their problem.
What do you mean by sequential? You mean you write the code line by line from top to bottom?
You certainly can write a Python script like that. You will want to break it into functions and then other modules and other things at some point. So for a small script you can just batch a load of lines together, but for anything complicated you will likely need more structure.
Correct me if I'm not really answering the question as I might have misunderstood.
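To make the small-script-versus-structure point concrete, a minimal sketch (the names and data here are just illustrative):

```python
# A small script can simply run top to bottom, statement by statement:
names = ["alpha", "beta"]
for name in names:
    print(name.upper())

# As a script grows, the same logic moves into functions. Definitions are
# still read top to bottom, but their bodies only run when called below.
def shout(words):
    # Return upper-cased copies of every word.
    return [w.upper() for w in words]

def main():
    for word in shout(["gamma", "delta"]):
        print(word)

main()
```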
Sequential
I think I'm trying to ask: do the functions run line by line, as if they were being typed into a command line? I know some languages can be less linear and do various calls and parallel functions... At least, that is how I understand it.
Saying all that, I do think Swift might be the way to go. It does seem to be the go-to for the sort of thing I'd want to build. It's just finding somewhere to learn it. I'm aware of Playgrounds, but I think accessibility was a bit of an afterthought there; we could learn it from the examples, but there might have been better examples to learn from which weren't quite so visual.
It's kinda the story of the Mac in miniature. The dream, an OS built around how we use computers... Sigh.
Apparently a controversial perspective
For context, I'm writing this as someone who has an understanding of software architecture but has not written code in over 20 years. I have been using AI several hours a day for about two years now.
As of today, my opinion is that those who heavily caveat the capabilities of using a large language model for coding are either not using the right tools or do not want to be honest about the capabilities because their paycheck depends on it.
I use Claude Code in the terminal with Opus 4.6. I primarily develop in Python and JavaScript because those are the languages the model has been trained on most extensively. Front-end development is possible, but the output is generic and sighted people will hate it. If you are developing a backend for a project using Node, though, it's extremely powerful. I regularly complete projects, including API integrations, with no errors or bugs.
The criticism that you will not be able to develop software using a one-shot prompt is true. It will always be true, because the model doesn't know the specification your brain envisions for the output. I use a plug-in for Claude Code called get shit done that you can find on GitHub. Using this plug-in, I spend about 60% of my time building and refining project specification documentation for the model to use when developing the project. This can't be skipped.
If you have not evaluated the state of vibe coding in the last eight weeks, then you don't know what the models are capable of; they are evolving that fast.
The actual writing of code has quickly become the cheapest input of software development. Product design and product management are, at least right now, the most valuable skill set outside of marketing.
I hope this is helpful and I wish you luck. I don't check these forums often, but if anything in this post is beyond your technical understanding, I'd recommend pasting it into ChatGPT and asking whatever questions you have there. It is surely a better partner in this exploration than I am!
Re: Sequential
Yes, Python reads the way you would read, or the way a screen reader reads. That is to say, left to right, and top to bottom. Of course, it gets tricky when you do things like import libraries for various things, or define functions for a particular feature or action in your software. The code will still initially read left to right and top to bottom, but, for example, let's say you use a function on line 283: if you have not properly imported your libraries and/or defined said function ahead of time, you'll break your code.
I hope that makes sense.
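A tiny demonstration of that top-to-bottom, define-before-use behaviour (names here are made up for the example):

```python
# Python executes a file top to bottom. A def statement creates the
# function name when that line is reached; the body runs only when called.

def greet(name):
    return "Hello, " + name + "!"

print(greet("AppleVis"))  # fine: greet already exists by this line

# Calling a function before its def has been reached raises a NameError:
try:
    print(farewell("AppleVis"))
except NameError:
    print("farewell is not defined yet")

def farewell(name):
    return "Goodbye, " + name + "!"
```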
SHOW me THE CODE
As Linus Torvalds once said in a different context, "Talk is cheap. Show me the code!" AI junkies are all deceiving each other with these claims about how great the technology is, but as I said earlier, people are yet to build anything of actual value with it, and the service is being sold at a loss. Therefore, if you truly believe the above quote, point me at that amazing code you got an LLM to write so that I can roast it myself. And yes, I already know what kind of answer I'll get, which is either that the code is a trade secret (the convenient excuse that people give most often), or you'll just show me a turd that you think is really good when it's actually a huge spam of extremely poorly structured software that looks pretty, with lots of useless comments and creative identifier names.
Who's the experienced engineer actually reviewing the code to ensure that it doesn't have bugs? Reviewing code is significantly harder than writing it, because the reviewer is not in the same loop as the developer, meaning that they are a lot less likely to understand the developer's train of thought and are thus less likely to be aware of potential flaws in the reasoning. From my observations, current large language models are indeed good at getting the happy path right, but any conditions that steer just a little bit away from it are problematic.
Even with a proper specification you won't get anywhere. Just ask those models to produce a C compiler right out of any of the ISO standards, or write a DNS server from its original RFC, and you'll see that the only thing they'll do well is spend your subscription money.
It wouldn't be very hard to prove the value of this technology if it actually had any, and if producing proper code does require special skills, that alone is already an admission of failure. The idea that is being sold is that the technology is revolutionizing software engineering by making development more cognitively accessible and fast, yet the amazing software built by large language models, which should be everywhere at this point, is actually nowhere to be found...
I'm talking about actual value, not hype value, and by that I mean writing software that solves actual problems, not software that sells because it integrates or was written by AI. These are completely distinct metrics, and the people who measure value based on hype are the ones trying to justify their paychecks.
Summary
The TL;DR version of @João Santos' post above:
Do not VibeCode.
Thank you, that is all.
I'd just say know to code before vibe coding
Among all the real devs here I'd rank myself at the bottom of the list; I'm just a student.
But even I have enough experience with it to know that most of the time I'd rather take an hour to code it myself... There's a real meme around: whether to take 20 minutes to code it yourself or waste 3 days debugging AI output. And it's mostly true.
Mr Grieves, I'm happy your result fixed the Atlassian mess, but I think there's more luck here than usual; as pointed out, the web and front end especially are what LLMs know best.
As for the language debate, it will never end, but just know that overall, as much as I hate it, there is no wrong choice, just some better than others. IMO, go with a statically typed language; it will force you to learn things you'll have to learn anyway. Don't choose JavaScript. I started with PHP (yes). HTML/CSS are not languages. Something like PureBasic (on Windows) would be a bad choice.
Otherwise I agree 99.9% with what @João Santos said. The AI vibe coding hype especially is dangerous at best: way too many resources at a terrible environmental and, let's face it, human cost, for terribly suboptimal quality. I'm talking about the psychological suffering of those poorly underpaid third-world "workers" who had to extract and filter the worst of humanity on the internet out of the models. You get the idea? Unrelated to this, but I'm pretty sure that somebody at OpenAI, according to theories, was more or less killed because he was a whistleblower there. Just look at the situation with openclaw (or whatever its current name is, if it's still around); it's easier to laugh, but it's genuinely terrifying what careless people (and those who trust those careless people) can produce.
Follow ThePrimeTime and other real dev content creators online.
@TheBlindGuy07
Not traditionally programming languages, but I mean they are still languages: markup and stylesheet languages, to be more precise. The latter is more or less a scripting language, similar to something like AppleScript or JavaScript.
Just saying...
Yeaaaaaah
Good point.
It can have its uses
Depending on the code, it could definitely give you the basics, maybe even some more advanced concepts. But yeah, the others are right: if you're going to vibe code, have an understanding of what the code does, otherwise eventually you'll hit a hole you won't be able to dig yourself out of.
I disagree with the "nothing of value has been produced with it, the code will be a mess" takes, though. People in this very topic have said they have produced things of value. Of course, value is relative, and maybe it's zero value to you, which, fair enough, but if it weren't producing things of value, major companies like Apple and Microsoft wouldn't be both using it and deploying it. Take autopilots in aircraft: not LLMs, not neural AI, but still a form of AI, algorithmic to be precise. Thousands of pilots use them daily, from piston aircraft to fighters. Specifically on LLMs, they can help with teaching concepts like coding basics, help with documentation, and so on, to name a few examples. AI has provided, and does provide, value.
Saying AI hasn't produced anything of value is like saying chess engines don't produce anything of value because they'll beat the shit out of you 90% of the time. True, they'll beat the crap out of you depending on the level, but they're valuable for learning and for discovering new strategies. The "it's produced nothing of value" line is, IMO, an absolute take which, to be honest, is bullshit, but to each their own.
As for impacting paychecks, we're starting to see this to a point: many people are relying on AI, which puts coders in a tough spot and does in fact impact one's paycheck. Given how rapid the advancements are, it's highly likely that software engineers, coders, and devs will play a far less significant role than they do now; even today we're seeing that role diminish.
However, as long as one knows what one is doing, understands the limitations of both one's self and the AI, and doesn't bite off more than one can chew, AI coding does have its uses, at least to a point.
Frankly, I started to learn coding some years back before realizing it's not for me. I'd be good at it if I pushed myself; however, it's not my passion, and it's not for everyone. Respect for those who do it and enjoy it, though.
The thing about AI is, it's a tool. Nothing more, nothing less. But one has to know how to use it, i.e., what to ask and how to ask, to get the desired results. Of course there are people overselling the capabilities of AI, i.e., "oh, it'll code everything." No. It won't. That's not knowing how to use the tools at one's disposal. At the end of the day, it's just a tool. The results one gets will depend on how good one is at using said tool. One could get good results and good code if it's used properly, or absolute junk if not, but that's it in essence.
Where's the value then
Yeah, they are just nowhere to be found...
Microsoft has a vested interest in making it seem worth paying for, because they've been making increasingly bigger investments in the technology and are desperate to make it feel useful. One of their strategies, which is backfiring big time, is to push it down everyone's throats by including Copilot in everything, rebranding their core products after Copilot, and even employing deceptive business practices that make people think they have to pay for the service. Apple is just responding to user demand for agent integration; there isn't much else to read into that. None of these examples is actually adding real value to software engineering, and if you think otherwise, well, as I said above, just show me some quality vibe coded projects that solve real problems!
And who's denying that AI has value? We're talking about vibe coding here, meaning people thinking that they can generate software with value from natural language prompts. AI does have value, and I did acknowledge that value in my first reply to this thread when I mentioned that the same resources that are being used to serve chat bots could be put to better use for actual scientific research. Hell I'm starting a whole business around AI myself, and have previously worked for an AI company, so I'd be damned if I thought that the technology doesn't have value!
Wake me up when that happens then, because I've been hearing this for years, and while things are definitely changing, the trend now is to hire software engineers to completely rewrite vibe coded ideas that flopped hard, not because the ideas themselves aren't good, but because the code is, well, AI slop... Furthermore, model advancement is stagnating a lot, with huge diminishing returns and possibly even regressions at this point, so the strategy of just throwing more hardware at the problem is no longer working. One of the problems is that the increasing prevalence of AI slop online is contaminating the data used to train new models, resulting in an unexplained phenomenon in which AI slop in the training corpus correlates with a quadratic increase in hallucination rates. Also, thanks to this mass delusion, a whole generation of software engineers is being deprived of proper training.
It's actually just a gimmick. It may be scientifically impressive, but as far as software engineering is concerned, it doesn't solve any problem better than, and in fact doesn't even come close to, the solutions that already existed before. Using chat bots for coding is a solution in search of a problem, because most of big tech over-invested in it, and the whole US economy currently depends on convincing everyone that it's actually useful.
LLMs and transformer architectures are just stats on steroids
And last time I checked, it's hard to imagine that statistics can solve complex problems by inventing truly new ways to think about a given problem. Remember that it's trained on open source code at the expense of the maintenance and infrastructure costs borne by often unpaid engineers, and often on not-so-open-source, not-so-legally-obtained code too.
Yes, microslop is again doing classic Microsoft things, and it will end terribly for everyone.
LLMs are the actual tech here; AI is the general term that can mean nothing, yet can also mean something genuinely worthwhile.
As far as I remember, before 2022 the buzzword was machine learning, which is... well, AI, and it has produced some of the best tech we use; OCR and object recognition are just two examples among many.
Google search ranking, complex algorithms... not all of it, but some of it is exactly the useful AI people were happy to ignore before GPT. Now people are so sick of bad AI being forced everywhere that some would throw it all in one box, the good and the bad.
As for vibe coding, just look at openclaw; this... thing will be among many case studies after the bubble bursts.
Just don't overestimate what an LLM can and should do, and keep in mind what it definitely shouldn't be doing.
By definition, an LLM takes an input, does massive statistics, and produces the most probable output given its training data. Anything that looks new statistically can't really be new anyway; computer randomness is not truly random, and I don't think every inference step is seeded by a massive natural entropy generator like those lava lamps Cloudflare uses... I could be talking nonsense, so feel free to correct me, but you get the idea. LLMs remix but don't reason from first principles.
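The point above can be sketched in a few lines. This is a toy illustration, not a real model: the token table and probabilities are invented for the example, but it shows why "creative" sampled output is still deterministic statistics once the pseudo-random seed is fixed.

```python
import random

# Made-up next-token probability table standing in for a trained model.
NEXT_TOKEN_PROBS = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "flew": 0.1},
}

def greedy_next(context):
    # Greedy decoding: always pick the single most probable token.
    probs = NEXT_TOKEN_PROBS[context]
    return max(probs, key=probs.get)

def sampled_next(context, seed):
    # "Creative" decoding: weighted sampling, but driven by a
    # pseudo-random generator, so the same seed gives the same token.
    rng = random.Random(seed)
    probs = NEXT_TOKEN_PROBS[context]
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

print(greedy_next(("the", "cat")))    # always "sat", the highest-probability token
print(sampled_next(("the", "cat"), seed=42)
      == sampled_next(("the", "cat"), seed=42))  # True: same seed, same "creativity"
```

Nothing here invents a new token that wasn't in the table; the model can only reweight and replay what it was trained on.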
And please, people have been predicting the end of software devs since the first compiler was invented, so... We'll probably have more demand than supply if American big tech lays everyone off to vibe code with LLMs and then starts crying when accidents happen; fixing the AI mess will create almost decades of jobs in cyber security alone, plus legacy code patching and review. So zero worry on my end.
Value
Yeah, they are just nowhere to be found…
Oh no? Go out there and look, then. Asking in a blind community? Yeah, no shit; you likely won't find much here, but, example? Some of the mods being made for games to make them accessible? Yeah, partially coded with AI. And before you jump in with "that's not real value," the entire point is whether AI contributes to solutions, and it does. Per articles I've read:
Refactoring, generating tests, documentation, architectural suggestions… all real uses. I know, because people in THIS thread have literally told you they've done it.
For Refactoring
Prompt your AI to improve existing code. ("Refactor this function for readability, adhering to the Single Responsibility Principle. Add comments explaining the 'why' for any non-obvious logic.")
For Generating Tests
Ask your AI partner to help improve quality. ("Write a comprehensive set of unit tests for this function using Jest, including happy paths, edge cases like null inputs, and potential error states.")
For Brainstorming Solutions
Use AI to explore architectural possibilities. ("Propose three different caching strategies for user profile data using Redis, explaining the trade-offs of each in terms of performance, cost, and complexity.")
For Documentation
Eliminate the tedium. ("Generate concise documentation for this function in Markdown, explaining each parameter and what it returns.")
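To make the test-generation use case concrete, here's roughly the kind of output a prompt like that should produce. The `slugify` function is made up purely for illustration, and the tests are shown in Python's built-in unittest rather than Jest, but the shape is the same: a happy path, an edge case, and an error state.

```python
import unittest

def slugify(title):
    """Hypothetical example function: turn a title into a URL slug."""
    if title is None:
        raise ValueError("title must not be None")
    # Lowercase, replace non-alphanumerics with spaces, then join with dashes.
    words = "".join(c.lower() if c.isalnum() else " " for c in title).split()
    return "-".join(words)

class SlugifyTests(unittest.TestCase):
    def test_basic_title(self):
        # Happy path
        self.assertEqual(slugify("Hello, World!"), "hello-world")

    def test_empty_string(self):
        # Edge case: empty input yields an empty slug
        self.assertEqual(slugify(""), "")

    def test_none_raises(self):
        # Error state: null input must fail loudly, not silently
        with self.assertRaises(ValueError):
            slugify(None)

# Run with: python -m unittest <this file>
```

The point being: the AI drafts the boilerplate, but you still read every test and decide whether the cases actually cover your function.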
And beyond coding? Let's not pretend practical value doesn't exist. Healthcare uses LLMs for diagnostic support, drug discovery, and personalized plans. Finance uses them for risk prediction, fraud detection, and sentiment analysis. Marketing uses them for personalization, content creation, and customer service automation. Manufacturing uses them for predictive maintenance, supply chain analysis, and knowledge capture. Legal uses them for research, contract analysis, and document drafting. These aren't hypothetical use cases; this is already happening at scale.
And if you want personal examples? Copilot for meetings, Circleback, ReedAI. Real human beings, myself included, use them every single day for notetaking, summarization, and meeting organization. They save time, they improve workflow, and they're absolutely useful. So the "no value exists" claim? No. Just no.
Microsoft has a vested interest in making it seem worth paying for, because they've been making increasingly bigger investments…
Partially, you're correct. But let's not act like "Microsoft forces things on users" magically began with AI. OneDrive? Forced. Local accounts? Phased out. Telemetry? Forced. Their history of ramming things down people's throats goes way back before Copilot existed... nothing new, in other words. Apple integrating it? Because users demanded it. Exactly what they do with everything else. This isn't a smoking gun; it's standard tech market pressure.
And who's denying that AI has value?
You did. When you said, and I quote:
"I'm talking about actual value, not hype value, and by that I mean writing software that solves actual problems, not software that sells because it integrates or was written by AI. These are completely distinct metrics, and the people who measure value based on hype are the ones trying to justify their paychecks."
Except… look at the industries above. Look at the applications. Look at the actual results. That's not hype; that's verifiable usage. The fact that people are USING these systems for real work means it DOES provide actual value.
It's actually just a gimmick.
Except you literally said you're starting an AI company. So… if it's a gimmick, why use it at all? Why build a business around it? Why rely on it? Gimmicks don't make good foundations for businesses. They fade. You don't build around a phase.
Wake me up when that happens…
Too late. It's already happening. Amazon, Google, Meta, multiple startups: they've already admitted that 50-70% of internal code is now AI-assisted. These aren't rumors; these are public statements. Even companies BUILDING the models rely on AI to write and maintain their codebases. You're arguing against a trend that's already real. And if you missed what people have been saying here: nobody has been saying "just hand everything to the artificial intelligence and it'll do it for you." No. People here, myself included, are saying it works, but you have to have the fundamentals down first. This is the part that you don't seem to get. It's not just "I write the prompt and boom, I get an app," which is what you're implying. If the person doesn't check the output, doesn't guide the tool? Yeah, it'll be trash; but the same is true if someone writes code for 8 hours without checking anything, tries a test run, and gets a syntax error on line 20. Same principle.
And the "engineers being deprived of training" bit? That's on them. If someone relies on AI without understanding fundamentals, that's the person's fault, not the tool's. People write garbage code every day without AI too. This isn't some new catastrophic phenomenon. Bad engineers produce bad code. That's been true since the beginning of programming.
And the "AI slop contaminating the dataset" argument? And the "delusions"? That's no different from what's already happened for decades. YouTube slop, clickbait slop, misinformation slop: humans generated mountains of garbage long before AI existed. AI didn't invent low-quality content. It just made the existing mess louder.
And by the way, if your entire stance hinges on "show me good AI-coded software," fine: the same challenge applies in reverse. You say the code is always garbage? Show an example. Because everything from accessibility mods to internal tooling to research assistance to documentation automation contradicts you.
Speaking of examples: you'll have to dig through this to find the link to the source code, but the mod below was partially made with AI, and yes, before you ask, by someone who has a lot of experience coding. And assuming you read the full post before delving into the GitHub link in it, part of it says: "As a historical thing, this is also one of the earliest mods written/maintained by AI. At this point that's not much of a note--you're all used to it, I'm sure. But we had it before almost everyone and it's been great."
So much for no examples, no? Oops.
https://forum.audiogames.net/topic/58549/announcing-factorio-access-20-support-and-mod-overhaul/
And again, that's just scratching the surface, and people HERE have already given examples that directly undermine what you're claiming.
I've said my piece. If you can't accept the facts, that's on you, not me.
Anyway. Moving on... What this boils down to is: don't use AI as "I'll write one prompt and get it done in one go." Check the output like you would anyone else's work: a team member's, a student's, what have you.
@TheBlindGuy07
See, that's what I'm saying as well. AI is a tool. If, say, shit goes south? Meh; something new will come with time. If it succeeds? Everyone benefits. Of course, like any piece of tech, hell, anything really, some will try to use it for nefarious purposes. Then again, that's a tale as old as time: knives used to cut food can also be used to injure or kill someone. One can use a computer for work and everyday tasks, or to create malware that infects a whole lot of devices. A car can be used for travel or, again, to hurt others. It's a tool, as said, and that's what people need to keep in mind with this. People using AI for companionship or relationships? That's one of those "don't make it into something it isn't" kind of things. But as long as one keeps in mind it's a tool and nothing else, you're fine.