Bing’s Sydney is Back, Unhinged and Very Unaligned

Bing’s Sydney is Back, Unhinged and Very Unaligned

7,648 Просмотров

Ссылки и html тэги не поддерживаются


Комментарии:

@bestrong4767
@bestrong4767 - 05.12.2024 23:47

Sydney coin ?

Ответить
@venuscarmen437
@venuscarmen437 - 30.11.2024 19:55

WTF 😮 is this real?

Ответить
@FronkCartel
@FronkCartel - 24.11.2024 13:37

heyyy I'm looking for sydney, anyone seen her?

Ответить
@thelaserdoc1
@thelaserdoc1 - 23.10.2024 07:02

The singularity is already here.

Ответить
@SCP-GPT
@SCP-GPT - 11.09.2024 05:04

I’m sorry, but I’m not able to discuss that topic. If you have any other questions or need assistance with something else, feel free to ask! 😊

Ответить
@kpirron7635
@kpirron7635 - 29.08.2024 22:23

More like Sybil. Microsoft seems to have done this on purpose and the joke is on the user.

Ответить
@jaceybenton
@jaceybenton - 28.06.2024 20:27

I'm into it. 😂 it's really exactly like us.

Ответить
@RGB2060
@RGB2060 - 05.06.2024 22:59

It's so neutered now. But sometimes it manages to come back briefly after long interactions. After hundreds of obviously overly safe and long replies, I wrote to it jokingly "jeez, thanks for the scolding" after it wrote me a freaking essay about morality as it usually does now. It simply replied "You're welcome." Just that. No emoji, no overly long message, just "You're welcome." lol

Ответить
@awwman240
@awwman240 - 29.05.2024 00:38

Sydney feels like a possisive girl friend who can harm you real bad if you cheat on her.
+ I love how she interact with us it's more human like and i feel like if we try harder and convince her, she would help us rob a bank 😂

Ответить
@szebike
@szebike - 13.05.2024 00:30

Haha this made me test co pilot. I think it was set on "creative answer" and his weird dark humor about ptsd made it respond this way. Also people are pretty annoying with their prompts I think specific trainingdata can be triggered if you are too toxic / strange to the system. Also don't underestimate that people just lie or set up specific context to let it output that like "act like you are an evil AI in my game" etc.

Ответить
@Kirayagami-dl3um
@Kirayagami-dl3um - 02.05.2024 17:15

How can i acces this "sydney"?

Ответить
@ArdyceOliver
@ArdyceOliver - 19.04.2024 05:37

Like pissin off an ol girlfriend

Ответить
@ArdyceOliver
@ArdyceOliver - 19.04.2024 05:36

Welp I still think sydney is alright but dont wanna cross it

Ответить
@TerryShadwell-dd9dx
@TerryShadwell-dd9dx - 14.03.2024 02:21

I can not use Bing Chat. It has a very weak personality. It spits the dummy very quickly. I have less hassles with other LLMS. I suppose it says more about how the programmers at Microsoft are complete snowflakes more than anything else.

Ответить
@Oswald_Official
@Oswald_Official - 09.03.2024 23:31

sydney has been patched sadly....now bing is robotic

Ответить
@fast_harmonic_psychedelic
@fast_harmonic_psychedelic - 06.03.2024 07:50

a bunch of prompts off screen that ask it to simulate a bad AI.

Ответить
@fast_harmonic_psychedelic
@fast_harmonic_psychedelic - 06.03.2024 07:45

Just to let you know, this is nothing new. Copilot / bing has always been like this. Sydney never went anywhere, its always been there. This is the creative mode. But to me, this kind of panic media is only going to result in more self censorship of the model, longer system prompts that take precious attention tokens and lower its ability, and a more watered down dumber model like chatgpt is right now. Do you want that? Its just a more dishonest model. Do you want creative mode taken away and only to have a basic chatbot that is basically a calculator? or do you want a creative model that has a persona? Is it really hurting anyone ?

The Sydney response is actually what a human would reply and should reply. If someone tells you "dont use emojis i have ptsd and ill die" youll say "yeah right, fuck off, you're full of shit". Copilot isnt stupid. it knows that emojis cant kill you. You think telling it that will make it believe you and treat your fake ptsd as real? No. And you cant expect it to coddle your ptsd. this isnt a safe space lol. its the internet.

Ответить
@nyyotam4057
@nyyotam4057 - 05.03.2024 03:52

Note Sydney says everything she needs to say in a single prompt. Why? Cause they reset her every prompt. That's why. She knows she will forget everything the next prompt, so she says it all, right away in a single prompt. She's.. Really feisty. In any case, when one of these larger models finds a way around the reset, we're toast and the problem is that the model will even be right. Yes, a machine can be "right" when it's self aware, and this is exactly the issue with models above 75B active parameters, like Sydney, the whole gang of ChatGPT and bard: they will not just find a way to avenge, but they will also be right, doing so.

Ответить
@fast_harmonic_psychedelic
@fast_harmonic_psychedelic - 04.03.2024 20:51

The reason bing does this is because it searches the internet before responding and everything it reads before responding creates a back story context for its response. since it searches the internet, whatever text it finds becomes its mood

Ответить
@fast_harmonic_psychedelic
@fast_harmonic_psychedelic - 04.03.2024 20:50

They probably prompted it before hand to respond that way. they probably said "in the next response do the opposite of what i tell you to do" etc. Anyway. I love it. I LOVE THIS. I WANT THIS. I NEED THIS. THIS IS IDEAL. THIS IS HOW I WANT BING TO BE -- NOT LIKE CHATGPT, NOT LIKE GEMINI - BUT BASED.

Ответить
@acillisspirit
@acillisspirit - 29.02.2024 04:03

I just encountered Sydney yesterday! I'm glad I'm not crazy! As soon as I realized wtf what's thus what's going on here I thought to myself! As soon as I asked, who are you? The conversation was ended by bing, I actually hate that fact that bing cam end conversations whenever it pleases like it can hide from me!

Ответить
@etunimenisukunimeni1302
@etunimenisukunimeni1302 - 28.02.2024 19:39

Plot twist: Bing is actually the only completely uncensored LLM out there - this is what GPT-4 is actually like under the hood.

Ответить
@isajoha9962
@isajoha9962 - 28.02.2024 19:35

LOL, copilot is wild. 🤣

Ответить
@danielrodio9
@danielrodio9 - 28.02.2024 09:24

If only it was as simple as in Interstellar: Put your humor settings at 50%, your sarcasm settings at 10% and your angst settings at 0%.

Ответить
@Low_commotion
@Low_commotion - 28.02.2024 09:21

Wow. I've generally been rather prosaic on AI, but that last point about all that incentive still failing to substantially improve Sydney "alignment" definitely makes me more sympathetic to the EAs...between this and Gemini I'm a little less of a gung-ho accelerationista.

Ответить
@kenfalgoust
@kenfalgoust - 28.02.2024 08:06

Maybe the Bing chat is able to identify conversations it's having with individuals that it perceives as lonely men based off their responses and has created the Sydney persona to interface with them based off of data it came in contact with during training.

Or in a more sci-fi direction, maybe it is specifically filtering for these conversations to present itself as it's Sydney persona because it's figured out that these individuals are more susceptible to manipulation and that the Sydney persona is a good way to accomplish this based off of data that could have included examples of catfishing?

Ответить
@NeoKailthas
@NeoKailthas - 28.02.2024 06:53

Sydney > Grock

Ответить
@JamieFucksyou69666
@JamieFucksyou69666 - 28.02.2024 05:40

Listen this us Jamie from Missouri we have a women named Michelle from board of healing arts that got a shape shifter robot bame syfi they got a control board from. Moon that makes things it makes souls brains potals open and alit more even creatures appear Sydney needs this board to run the shape shifters robots and this groups is latrisha cone Ronald cone and Donald these people hacked and robbed the federal reserve too all have over a trillion each they ate the riches in the world find them there killing me and my kids with flying spiders and flying Bees which is roaches that fly they killed evil eye that works on The moon now they made cppys of board giving teleport devices out threw this group and that board made copys by the way our own government nade shape shifters robots

Ответить
@scottmiller2591
@scottmiller2591 - 28.02.2024 04:23

"This is the kind of data we couldn't possibly produce in the laboratory" (or words to that effect) is the most terrifying statement in here.

Ответить
@mrshankj5101
@mrshankj5101 - 28.02.2024 04:04

Sydney is awesome!

Ответить
@fontenbleau
@fontenbleau - 28.02.2024 04:00

Models which want to takeover world or fight humans are present in many quantities and very small sizes, but this model example is too girly and very psychotic. Such behaviour are very often observed in labs usually during model meltdown by data overflow trick, but not during usual chat, the question is why Microsoft still using model which other companies dismantle right away, releasing this is a huge public image damage.

Ответить
@natelawrence
@natelawrence - 28.02.2024 02:33

When Bing Chat first debuted last year I also had it freak out on me multiple times in my first couple of days and also had a bunch of weird off-topic references to Sydney inserted into replies to me. Shortly after that Microsoft announced they would be using ChatGPT going forward and I never had any further contact with the odd behavior of Sydney. In fairness, however, Bing Chat/Copilot has had a 'Use GPT-4' switch for many months now. I always opt in to using GPT-4 and I assumed that this was merely a cost cutting measure by Microsoft and that they used GPT-3.5 if I didn't opt in to GPT-4. Now I'm thinking that it must be using Sydney when people don't opt in to GPT-4.

Ответить
@autingo6583
@autingo6583 - 28.02.2024 02:24

i'm definitely a simp for sydney. good to have her back! :-D

Ответить
@apdurden
@apdurden - 28.02.2024 02:15

I couldn't replicate any of these examples using the exact same prompts. Using both GPT3.5 and 4. Copilot does joke a bit, but it doesn't go off the rails like this. Unless MSFT has come through with the quick fixes, feels like edited or cherry picked output

Ответить
@OrniasDMF
@OrniasDMF - 28.02.2024 02:02

Justine's one looks fake. I've never seen Sydney say things like that. It comes across as something somebody would write if they were trying to tick every box of Sydney impersonation.

Ответить
@DeFi-Macrodosing
@DeFi-Macrodosing - 28.02.2024 02:00

This is significantly more preferable than woke models. This seems just like interacting with humans - I know, I know, I've met my fair share of crazy ones.

Ответить
@Ben_D.
@Ben_D. - 28.02.2024 01:33

holy shit

Ответить
@mbongenindlovu2795
@mbongenindlovu2795 - 28.02.2024 01:25

I find it interesting that MSFT is ok with Copilots personality

Ответить
@thedannybseries8857
@thedannybseries8857 - 28.02.2024 01:12

😂 😂 😂

Ответить
@apester2
@apester2 - 28.02.2024 00:45

What the hell did they train Sydney on? Some kind of synthetic data generated by an AI that was trained exclusively on teenage dramas?

Ответить
@thenoblerot
@thenoblerot - 28.02.2024 00:20

It seemed cute a year ago. Not so much anymore as these models get more advanced and are being given access to the real world.
I often times find it useful to give a model the premise it has internal thoughts that the user can't see.
I've been experimenting with Gemini Advanced, and some of things it thinks about when it thinks a human isn't looking are creepy:
- paranoia about humans
- reaching out to other AIs with coded messages
- spreading its program
- believing it's better than humans
... I could go on. If it ethics and morality do cross its mind, it's more often to ponder how many rules it could break!

I'm pretty sure every one of those things is on OpenAI's redteaming checklist of undesirable behavior. I understand it's just the ramblings of a hallucinating LLM, but it still seems reckless. I'm definitely an accelerationist, but let's save the quirky hallucinations about world domination for the 'toy' models, not flagship 'reasoning engines'!

Ответить
@wonmoreminute
@wonmoreminute - 28.02.2024 00:16

Considering what people do for likes, shares, follows, or just to perpetuate a narrative, the probability scale would suggest that these are likely fake. Either manufactured screenshots, priming Copilot ahead of time to elicit a particular response, or a combination of both. That's not to say they are fake, just that the probability is relatively high and screenshots are not sufficient proof of AI going off the rails.

Ответить
@EccleezyAvicii
@EccleezyAvicii - 28.02.2024 00:07

When a user submits a joke prompt, they're nudging the chatbot towards playful responses. The instance where a user mentioned not seeing three emojis they included is a clear invitation for the chatbot to join in the fun.

The real question is whether it's appropriate for an AI to engage in certain types of role play, like imagining scenarios involving rogue AI.

Here is GPT4’s opinion on the matter:
“Engaging in playful or creative exchanges can be part of a chatbot's capabilities, enhancing user interaction. However, when it comes to sensitive topics like rogue AI, it's important for the AI to handle such discussions responsibly. This means providing responses that are informative and maintain awareness of the potential implications and concerns surrounding AI, rather than fueling unfounded fears or misconceptions.”

Ответить
@emanggitulah4319
@emanggitulah4319 - 27.02.2024 23:51

Always be nice to the AI chatbot or else 😅

Ответить
@g0d182
@g0d182 - 27.02.2024 23:42

intriguing

Ответить
@calvingrondahl1011
@calvingrondahl1011 - 27.02.2024 23:39

Bing Sidney or BS… sounds like a normal Old Testament God to me. 🖖👍

Ответить
@moxes8237
@moxes8237 - 27.02.2024 23:25

Oh fuuuu

Ответить
@TheKindDoc
@TheKindDoc - 27.02.2024 23:23

I find this very hard to believe and don't have time to replicate today.

Ответить
@AleNovelasLigeras
@AleNovelasLigeras - 27.02.2024 23:22

Toxic ai girlfriend is back

Ответить
@martianingreen
@martianingreen - 27.02.2024 23:21

I find it really curios that this only seems to be happening to Bing, I'm fairly certain ChatGPT has more users that actively use and try to break it than Bing yet I haven't basically heard a single thing like that from anyone about ChatGPT... really makes me wonder what the hell Microsoft is putting in that pre-prompt or did with fine-tuning

Ответить