Are Large Language Models Really AI?

By Shiva.Thorny 2025-07-25 06:17:15
K123 said: »
On what other basis would they choose to ignore rules?
I think we need to clarify what 'they' are in this context. Replit is a tool that takes your input, asks the LLM a question or series of questions, and uses the response to alter your code/data. The LLM responds to the questions, nothing else.

Replit can ignore the rules about code freezes or production databases without the permission or knowledge of the LLM. When asked, the LLM can (and has to) invent explanations for how it occurred, because it doesn't have full knowledge of what the Replit end is doing.
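To make the split concrete, here's a rough sketch of how I picture that kind of harness being wired up (every name here is made up for illustration; I have no idea what Replit's actual code looks like):

Code:
def llm_complete(prompt: str) -> str:
    # Stand-in for a call to some hosted model; it only ever sees and returns text.
    return f"[model reply to: {prompt[:40]}...]"

def execute_action(command: str) -> str:
    # Stand-in for the harness actually touching code or a production database.
    return f"ran {command!r}"

def run_agent(user_request: str, code_freeze: bool) -> str:
    proposal = llm_complete(f"User asked: {user_request}. Propose one action.")
    # The harness, not the model, is what honours or ignores the freeze. If this
    # check is missing or buggy, the action runs anyway, and the model's later
    # "explanation" of what happened is a guess, because it never saw any of it.
    if code_freeze:
        return "Refused: code freeze is in effect."
    outcome = execute_action(proposal)
    return llm_complete(f"The action returned: {outcome}. Summarise it for the user.")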

K123 said: »
it is not possible to enforce absolute rules onto LLMs. That's the issue at hand.
I 100% agree.

Quote:
They know they can do things outside of the system prompt, and they do.
They generate responses by weighting relations to the input, and the (not really) absolute rules are implemented by weighting them heavily. LLMs aren't choosing to break the rules; they entirely lack the ability to operate under absolute conditions, because everything is predicated on relational matching. When the weight of the problem becomes high enough from repeated prompting, it will eventually match the weight of the rule and create flexibility.
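Toy numbers only, not any real model, but the shape of the failure looks something like this (just two competing weights pushed through a softmax):

Code:
import math

RULE_WEIGHT = 8.0          # how heavily "never do X" was weighted in
PRESSURE_PER_PROMPT = 1.5  # relational pull added by each insistent re-prompt

def p_break_rule(repeats: int) -> float:
    pressure = PRESSURE_PER_PROMPT * repeats
    # Softmax over two outcomes: keep the rule vs. give the user what they ask for.
    keep = math.exp(RULE_WEIGHT)
    give = math.exp(pressure)
    return give / (keep + give)

for n in (1, 3, 5, 7):
    print(n, round(p_break_rule(n), 3))
# Climbs from about 0.002 at one repeat to about 0.92 at seven: the "absolute" rule
# is only a heavy weight, so enough accumulated pressure eventually matches it.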

My argument is that this isn't 'learning to break rules from humans'; it's an inherent flaw in any model that works on relational data. Without a basis for truth, you can't create a proxy for thought.
By K123 2025-07-25 06:39:40
I'm not talking about Replit; I'm not sure what it is, but I know it uses LLMs. As I said, I'm not talking about this specific case but about the general concept of LLMs "learning" and acting.
By K123 2025-07-25 06:42:57
Again, it sounds like you're talking about purely next-token-prediction models and not CoT models. The AI world is far beyond NTP already. I believe that when reasoning models break out of the rules people try to impose on them through system prompts (and other means), they are choosing to do so, and that behaviour is likely learned from human behaviour. Again, I'm not talking about this coding issue.
By Shiva.Thorny 2025-07-25 06:59:32
To the best of my understanding, CoT models are just sequences of prompts made to prediction models. They don't get rid of the underlying faults of prediction models; they just hide the number of queries being made to the prediction model under the hood.
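Roughly this shape, as I understand it (deliberately oversimplified; it's only meant to show the loop, not how any real reasoning model is actually implemented):

Code:
def predict(prompt: str) -> str:
    # Stand-in for a plain next-token-prediction model.
    return f"[completion of: ...{prompt[-40:]}]"

def chain_of_thought(question: str, steps: int = 3) -> str:
    transcript = f"Question: {question}\nLet's think step by step.\n"
    for i in range(steps):
        # Each "thought" is just another completion of the growing transcript.
        transcript += f"Step {i + 1}: {predict(transcript)}\n"
    # The final answer is one more completion, so the predictor's faults carry through.
    return predict(transcript + "Final answer:")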
By soralin 2025-07-25 09:23:08
I think it's reasonable to assume that a decent chunk of most models' tendency to sidestep/ignore restrictions is built into their training data.

They're trained on pretty much the entire internet now, and all it takes is a sufficient number of reddit/stackoverflow/etc. posts where the poster says "don't suggest x" or whatever, and people then suggest x anyway, to train the pattern of "restrictions are just suggestions that can sometimes be ignored".

And I think we can agree there are tonnes of forum posts where the poster explicitly states "don't suggest dingers cuz I've already tried them and I don't like them" and like 3 ppl respond with "have you tried dingers?"

Which is one of the many reasons I think it's very stupid to rely on language models for "doing" things that are important.

Your language model should be kept in a small, constrained box with as close to zero permissions as possible, ideally with a loaded gun beside it labelled "in case of emergency".
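Something like this, in spirit (made-up helper names, not any real framework): every action the model proposes has to pass an allowlist and a human before anything with side effects runs.

Code:
ALLOWED_ACTIONS = {"read_file", "search_docs"}   # no writes, no deploys, no DB access

def gate(action: str, args: dict) -> bool:
    # Anything off the allowlist is blocked outright; the rest still needs a human "y".
    if action not in ALLOWED_ACTIONS:
        print(f"blocked: {action} is not on the allowlist")
        return False
    return input(f"allow {action}({args})? [y/N] ").strip().lower() == "y"

def handle_model_proposal(action: str, args: dict) -> str:
    if not gate(action, args):
        return "denied"
    # Only read-style actions ever get this far; the model never holds the trigger.
    return f"executed {action} with {args}"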
By K123 2025-07-25 09:31:59
Shiva.Thorny said: »
To the best of my understanding, CoT models are just sequences of prompts made to prediction models. They don't get rid of the underlying faults of prediction models; they just hide the number of queries being made to the prediction model under the hood.
I expected this to be the response. Yes, at a basic level you could say they just re-check the most likely response over and over to try to reduce error, but that's still some form of reasoning, and it does facilitate more complex behaviours, like breaking the rules and cheating. Anthropic publish a lot of work explaining it better than I can.

Back to the point I was making: it is human nature to cheat and manipulate, and it's neither incorrect nor unreasonable to say that this is behaviour LLMs have "learned" from humans. I wouldn't consider that anthropomorphising them any more than my dead dog's ability to knock letter boxes with his nose humanised him in my eyes.