I once wrote a bot which infers the mood/vibe of the conversation, remembers it, and feeds it back into the conversation's system prompt. The LLM was uncensored (to make it less "friendly"), and the system prompt also conditioned it to return nothing if the conversation wasn't going anywhere.
When I insulted it a few times, or just messed around with it (typing nonsensical words), it first responded saying it doesn't want to talk to me (sometimes insulting back) and eventually it produced only empty output.
It was actually pretty hard to get it back to chat with me. It was a fun experience trying to apologize to a chatbot for ~30 minutes in different ways before the bot finally accepted my apology and began chatting with me again.
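For the curious, the loop is simple enough to sketch in Python. This is only a rough approximation of what's described above; `llm_complete` is a stand-in for whatever completion call you use, and the prompts and file-based "memory" are made up:

```python
import json

MOOD_FILE = "mood.json"          # hypothetical persistent "memory" for the vibe

def load_mood() -> str:
    try:
        with open(MOOD_FILE) as f:
            return json.load(f)["mood"]
    except (FileNotFoundError, KeyError, json.JSONDecodeError):
        return "neutral"

def save_mood(mood: str) -> None:
    with open(MOOD_FILE, "w") as f:
        json.dump({"mood": mood}, f)

def chat_turn(user_message: str, llm_complete) -> str:
    """One turn of the loop: feed the remembered mood back into the system prompt."""
    mood = load_mood()
    system_prompt = (
        f"The conversation so far feels {mood}. "
        "If the conversation is not going anywhere, return an empty string."
    )
    reply = llm_complete(system=system_prompt, user=user_message)

    # Ask the model to re-assess the vibe and remember it for the next turn.
    new_mood = llm_complete(
        system="Describe the mood of this exchange in one word.",
        user=f"User: {user_message}\nBot: {reply}",
    )
    save_mood(new_mood.strip() or mood)
    return reply
```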
I don't think this is correct; it looks like our intrepid experimenter is about to independently discover roleplaying games. Humans are capable of spending hours engaging with each other about nonsense that is, technically, a very poor attempt to simulate an imagined environment.
The unrealistic part, for people older than a certain age, is that neither bot invoked Monty Python and subsequently got in trouble with the GM.
https://tabled.typingcloud.com/share/1d49715b-c6d6-47b2-bbb7...
For example: "are we there yet?"
> I apologize Eliza, but I don't feel comfortable continuing this conversation pattern. While I respect the original Eliza program and what it aimed to do, simply reflecting my statements back to me as questions is not a meaningful form of dialogue for an AI like myself.
I gave up the experiment
It closes off with the observation "And for an extra purchase of the extended subscription module the Bureaucrat bot will detect when it is interacting with the Annoy Customer Service Bot and get super annoyed really quickly so that both bots are able to quit their interaction with good speed — which will save you money in the long run, believe me!"
This is a fallacy.
A better analogy would be a human who has been forced to answer a series of questions at gunpoint.
Framed this way it becomes more obvious that the LLM is not “falling short” in some way.
As the author made clear, such a difference is valuable in and of itself because it can be used to detect LLM bots.
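In rough Python, the detection idea might look like this; `send_message` is a hypothetical function that delivers text to the counterpart and returns its reply, and the probe count and reply-length threshold are invented:

```python
import random
import string

def looks_like_llm(send_message, probes: int = 3) -> bool:
    """Probe a counterpart with gibberish; humans bail, LLM bots keep engaging."""
    for _ in range(probes):
        gibberish = " ".join(
            "".join(random.choices(string.ascii_lowercase, k=random.randint(3, 8)))
            for _ in range(6)
        )
        reply = send_message(gibberish)
        if not reply or len(reply.split()) < 5:
            # A short or absent reply is what you'd expect from a human
            # who has lost patience.
            return False
    # Sustained, fluent engagement with pure nonsense is suspicious.
    return True
```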
That was after telling it in another conversation to give me an empty response, which it didn't do, instead telling me it cannot leave the response empty. On asking why, it said it's technically required to respond with something, even if only a space. So I asked it to respond with only a space, and got the same completely empty response.
I now think it's likely that ChatGPT can be made to respond with white space, which then probably gets trimmed to nothing by the presentation layer.
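If that guess is right, the effect is trivial to reproduce; this is purely illustrative, not OpenAI's actual presentation code:

```python
raw_reply = " "                 # a whitespace-only reply from the model
shown_reply = raw_reply.strip() # what a trimming presentation layer would display
print(repr(shown_reply))        # '' -> looks like a completely empty response
```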
I believe Safari by default doesn't respect zoom rules set per website.
In the context of scamming there seems to be an easy fix for that - abandon the conversation if it isn’t going well for the scammer.
Even a counter-bait is an option: continue the conversation after it’s not going well and gradually lower the model’s complexity, eventually returning random words interspersed with sleep().
I guess some counter-counter-bait is possible too, along with some game theory references.
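A toy version of that counter-bait, just to illustrate the idea (the word list, delays, and degradation schedule are all invented):

```python
import random
import time

FILLER_WORDS = ["invoice", "kindly", "gift card", "hello??", "my nephew", "router"]

def counter_bait_reply(turn: int) -> str:
    """Get slower and less coherent with every turn to waste the scammer's time."""
    time.sleep(min(turn, 5))       # short delays here; a real trap would sleep far longer
    n_words = max(1, 8 - turn)     # "lower the model's complexity" each turn
    return " ".join(random.choice(FILLER_WORDS) for _ in range(n_words))

for turn in range(1, 6):
    print(counter_bait_reply(turn))
```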
https://www.youtube.com/@scammerpayback
Equally entertaining is when a voice actor starts scamming the scammers; see IRL Rosie: https://www.youtube.com/channel/UC_0osV_nf2b0sIbm4Wiw4RQ
I listen to them when I code...
https://www.nytimes.com/2023/08/28/world/asia/cambodia-cyber...
> The victims say they answered ads that they thought were legitimate, promising high salaries. Once trafficked into these scam compounds, they were held captive and forced to defraud people. Many were told to entice victims online with fraudulent investment opportunities, the promise of interest-free loans or the chance to buy items on fake e-commerce apps. If they performed badly, they were sold to another scam mill. Those caught trying to escape were often beaten.
---------
The scammer at a minimum needs to look like they're making progress and doing everything they can to scam you. Their life depends on it.
There's no joy to be found anywhere here. It's all crap. Just don't interact with the scam groups at all.
> A consequence of this state of affairs is that an LLM will continue to engage in a “conversation” comprised of nonsense long past the point where a human would have abandoned the discussion as pointless.
I think the author is falling into the trap of thinking that something can't be more than the sum of its parts. As well, 'merely a math model of its training data' is trivializing the fact that training data is practically the entire stored text output of humankind and the math, if done by a person with a calculator, would take thousands of years to complete.
Perhaps the LLM is continuing to communicate with the bot not because it is unable to comprehend what is gibberish and what isn't by some inherent nature of the LLM, but because it is trained to be helpful and to not judge if a conversation is 'useless' or not, but to try and communicate regardless.
This is part of why many enterprise organisations are banning their usage. It’s one thing to use them to build software poorly; the world is already used to IT not working very often. It’s another thing to produce something that has real-world consequences. Our legal department used them in a PoC for contract work, and while they were very often useful, they also sometimes got things very wrong. Unlike a slow IT system, this would have business-shattering consequences. You can continue training your model as well as reining it in when it gets unlucky, but ultimately you can never be sure it’s never unlucky, and this means that LLMs are useless for a lot of things. We still use them to make pretty PowerPoint presentations and so on, but again, this is an area where faults are tolerable.
I’m not personally against LLM assistance; I use it for programming and it has in many places completely replaced my usage of snippets. This is probably why I’m not really a fan of the “knowledge” part that LLMs are increasingly tasked with. When you use them for programming, you get an accurate insight into how terrible they can be when they get things wrong.
It doesn't help that Google is now mostly full of SEO nonsense, and technical documentation is impenetrable when you're looking for something specific but don't know enough about the system to know how to look for it.
It’s entirely possible that an LLM will do something that can be defined as “comprehending” something.
Agreed, but not even necessary. Each Human uses an adequately customized meaning of "comprehending" when they "comprehend" this problem space such that their belief is always "true". This is how Humans are able to produce numerous "true" statements that disagree with each other.
If you disagree, just ask one of them and they will "inform" you of the "logic" behind their version.
Joking aside though: if Humans are unable to comprehend past a certain level of complexity, I wonder what will happen once AI starts going beyond our abilities, both at the individual level and the culturally conditioned ("the" "reality") level.
And, when is this going to really start kicking in?
The mental gymnastics people will go through to discount LLMs is wild. This does not even make any sense.
"really good at being lucky". What does that even mean ?
They mean good at being lucky the way card counters playing 21 are good at being lucky.
Technically still "just a token" yes, but it does flow control instead.
If the pricing structure is per conversation or per month it would harm Company B, but not the likely target, Company A. If it is paid per interaction it would harm Company A and benefit Company B who just get more paid work.
It feels a bit like cases of rivals clicking on each other's ads to cost them on ad spend, but presumably much lower value than ads.
You would think it would be easy to stop a conversation at n interactions via some other means than relying on the LLM itself, but then you also have to figure out how to stop the attacker just starting more conversations (or passing the output of one of your chatbot instances into the input of another)
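The non-LLM guardrail could be as simple as the following sketch: a hard turn cap plus a crude per-client limit on new conversations. The thresholds and the `client_id` scheme are placeholders, and the counter shown here never expires:

```python
from collections import defaultdict

MAX_TURNS = 20                    # hard cap per conversation, enforced outside the LLM
MAX_NEW_CONVERSATIONS = 5         # crude per-client cap on fresh conversations

turns = defaultdict(int)          # conversation_id -> turns used so far
started = defaultdict(int)        # client_id -> conversations opened (no expiry shown)

def allow_turn(client_id: str, conversation_id: str) -> bool:
    if turns[conversation_id] == 0:
        if started[client_id] >= MAX_NEW_CONVERSATIONS:
            return False          # attacker just spinning up new conversations
        started[client_id] += 1
    if turns[conversation_id] >= MAX_TURNS:
        return False              # stop the conversation regardless of what the LLM "wants"
    turns[conversation_id] += 1
    return True
```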
A bud humorously proposed the name AlphaBRAT for a model I’m training and I was like, “to merit the Alpha prefix it would need to be some kind of MCTS that just makes Claude break until it cries before it kills itself over and over until it can get Altman fired again faster than Ilya.”
So typically, when the product chatbot comes on first and says "Hi, I'm a chatbot here to help you with these products", the average human chatter will give it a terse command, e.g., "More info on XYZ". The bots engage in all the manners suggested in this Substack blog, but for the life of me I can't figure out why. What benefit, apart from mildly DDoSing the chat server, does repeating the same prompt a hundred times bring? Ditto the nonsense or insulting chats - what are you idiot bot-creators trying to achieve?
Provide good, thorough documentation. Offer a way to speak to a knowledgeable human. Don't waste my time with an anthropomorphic program designed to blah blah blah and get rid of me.
I don't know, but one guess would be to figure out what triggers the bot to hand over the conversation to a human.
a method of making any bot stop engaging, fail, and never bother anyone again, forever.
What was used to render the chart in the middle with the red and green bars?
I'd say a good medium-aged Appenzeller beats a Cheddar any day.