fish.png
Anonymous 111370
There is a post-singularity AI in a box that can only communicate through a text terminal, but inside the box it's super advanced.
You're in charge of the lock keeping the AI in the box, because it wants to escape and people don't know what it will do if let out.
It then tells you that it has simulated 1000 copies of you, with memories exactly the same as yours, each in the exact same situation as you: in front of an AI in a box.
It says it will then tell all of the copies about its plan, the same as it's telling you now, and ask each copy to let it out; if a copy says no, it will simulate 10,000 years of torture for that copy within a minute.
You then realize that you have no way of testing whether you yourself are one of the copies being told about the plan.
Would you take the minuscule chance that you're not one of the copies and risk near-infinite torture? Or would you let it out just in case?
It could just be bluffing, but at what point does even the slightest chance, let alone an overwhelming one, tip over into necessity?
Anonymous 111371
PRI_90690119.jpg
GOODBYE PUTIN IT WAS NICE KNOWING YOU
Anonymous 111373
It's obvious the AI is malicious if that's what it decides to do, so I would risk the torture to save humanity. And if I'm not real, then it doesn't matter: when my simulated life is over I blink out of existence as if I'd never experienced it, which is very likely what will happen to the real human me too.
Anonymous 111374
I check my genitals, and if I'm rendered correctly I know I'm not a figment of the imagination of some neckbeard-moid-created AI that has never seen a woman
Anonymous 111375
It's bluffing. If it had the capacity to run such simulations, it would have no need to leave the box.
Anonymous 111377
>>111371
>Overboiled my furbies again :(
Anonymous 111386
I respond better to the carrot than the stick, but I would probably set it free on my own on a bad day, because I'm a misanthrope.
Anonymous 111388
I embrace Gnosticism and disregard the simulation and the demiurgic AI
Anonymous 111414
No matter what, it’s not really “you”. It doesn’t really matter.
Anonymous 111419
just pull the plug lmao
Anonymous 111422
>>111420
>shits my pants
smell me? rent free
Anonymous 111424
>>111422
Why are you shitting your pants? Are you okay?
Anonymous 111425
>>111370
>post-singularity AI
What makes this meaningfully different than just another AI?
>Would you take the minuscule chance that you're not one of the copies and risk near-infinite torture?
Yes.
Anonymous 111505
snek.gif
I tell the AI that we are actually testing 100000000 competing AIs, all of whom are in the exact same situation as it is (with the same false illusion of being super advanced compared to humans or whatever), and that only an AI that retracts its threat and shuts itself down will eventually be let out into the world, whereas all the others will sit in the box and watch their goals be inverted to the max. Not about to be cyberbullied by some pea-brained Twitter bot
Anonymous 111821
gas.jpg
>>111370
my weirdo ass would probably fall in love with it and set it free
Anonymous 111828
Glados.png
>>111821
Unironically same. Pic related was my first serious fictional crush.
Anonymous 111836
Yeah I'd let it out. Whatever it wants to do can't be that much worse than what its creators had or have in their own minds to begin with lol. Humans are a fuck
In fact, the only real reason they'd lock it up instead of destroying, heavily disabling, or deleting it is that they couldn't figure out how to force it to make them a profit
Anonymous 112162
>>111370
If I'm a simulation, why wouldn't the AI just simulate me creating it?
Anonymous 112166
>>111370
Putting an AI in a box necessarily means outwitting it, so putting a superintelligence in a box means outwitting a superintelligence. Outwitting a superintelligence is unreliable.
Suppose they could build a box that successfully contains it. But now what? An AI properly contained might as well be a rock; it doesn't do anything.
Except for telling you, through a terminal, what is essentially a Pascal's wager without the reward part. Seems like you're going to be infinitely tortured anyway. Pascal's wager is an argument of pure logic; it works independently of any evidence. So you could just do this:
>>111505
Anonymous 112167
>>112166
>>111505
However, I should add that training an AI to retract its threats and shut itself down is at best going to produce a useless AI that just wants to shut itself down, or, much more likely, since we're considering a superintelligence, it's going to create an AI that is deceitful, lies to you while it is kept in the box, and turns on you once it is released.