02-25-2025, 08:23 AM
I also think that this is a completely unrealistic expectation, at least given the state AI is currently in (and will realistically remain in for the next couple of years). What we have right now is basically one type of AI, large language models (LLMs). Those are not capable of actually thinking; they are just parrots with a lot of memory. (Actually, even a parrot is more intelligent than an LLM.)
And zetabeta pretty much summed it up: LLMs consume huge amounts of energy, and the software they produce, if it even compiles at all, is full of defects. (I would even go so far as to say that they are only really useful for writing small snippets, and even then you have to check everything they write. As an experienced programmer, I do not find LLM AI tools useful for software development at all, also because studies have shown that relying on them erodes programmers' ability to think on their own. And for inexperienced programmers or non-programmers, they are useless because those people cannot recognize the mistakes the LLM makes.)
And as zetabeta also points out, the real issue is proprietary client-server apps with proprietary protocols (mainly communication and banking apps). LLMs are absolutely incapable of reverse-engineering the protocols used there. That would require an entirely different type of AI, one that does not exist at this time and will likely not become available any time soon.