ChatGPT's HUGE Problem

Kyle Hill

1 year ago

1,468,845 views

Comments:

Markimus - 18.09.2023 03:22

So the next stage is creating an AI which can understand and explain how other AI systems work?

wako - 13.09.2023 19:14

Of all the arguments against art AI, I've always been interested in how little I hear about the ways it's actually limited. It can't do something truly "new". For an example of this, I tried for the life of me to turn the grass/flora of a landscape blue, and it struggles with this so much more than any other concept I have ever thrown at it. If you try to google an artist who has made an image like this, you'll also find an almost complete lack of relevant results, and to this day I can't find a soul who has done the style I have been trying to work on.
It can detect patterns, and can mash those patterns together, but only if someone else has produced that type of pattern in the past. This is where AI is as humbling as it is flawed. Everything AI has accomplished is just replicating what humans do, more efficiently than we can, since we do not exercise critical thinking nearly as regularly as we like to think. In this way, AI is largely as good as us and everything we have accomplished, but it does not (yet) have the capacity for what we have yet to accomplish.

Renato Lutz - 12.09.2023 06:24

Kyle is the President of the Universe

Renato Lutz - 12.09.2023 06:19

The Corbomite Maneuver. 😂
Lying to Mr. Spock works every time

Astavyastataa - 10.09.2023 02:30

Have you read Nick Land?

Cosme Fulanito - 09.09.2023 03:17

VPN ad scam = thumbs down.
Please do not scam people with scam ads.

Irregular Project - 08.09.2023 12:54

The problem we have with AI currently is the data input. If the input data is false, bad, or not quite truthful, then you're going to get AI software spewing out nothing but bad data.

Lost Identity - 06.09.2023 22:20

You are wrong, sir: "Chaturanga" is the oldest one.

Binnsy - 05.09.2023 14:46

"Your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should."
-Ian Malcolm, Jurassic Park

Remnant Watchman - 01.09.2023 17:02

Thanks

Ron Hutcherson - 30.08.2023 07:10

So we have these powerful pattern recognition systems, but recognizing patterns does not mean understanding them. What if, ultimately, meaning or context requires a physical interaction with the world?

Excellent commentary, and lots of new info for me. Thanks very much. Personally, I’m not interested in seeking an AI assistant, but I’m sure they’re already interacting with me.

Snake4eva - 30.08.2023 06:30

@kylehill Can you please drop the link to the research paper in the video summary?

the8thark - 28.08.2023 16:41

As an aside, Kyle is asking why people are not willing to mould the world around the AI.
Using Go as an example: the AI does not know Go is played on a board. Should we be telling the AI that Go is played on a board, or should we program the AI to learn about the world around it for itself, so it discovers on its own that Go is played on a board?

People telling AI what the context is, is still only one piece of the puzzle. All of the experience you gain from discovering something on your own adds in so much more information.
I feel the smarter AIs (long term) will be the ones that are programmed today to learn about the world around them, for themselves, not just programmed to be amazing at one specific thing.
It's like the baby vs the Go master. The baby could grow up to be anything, but the Go master is just that, a Go master and nothing else. We need to be programming more inquisitive baby AIs that learn about the world around them, and fewer Go masters that can only do one thing well.

the8thark - 28.08.2023 16:32

The AI is programmed to do X, which it does extremely well. The AI, however, does not understand why it is doing X.
This is how the amateur beat the AI. People know that knowing why something is done is just as important as knowing how to do it.
AI is not yet advanced enough to understand the why behind what it does. One day (probably sooner than we think) it will happen, though. Until then, we will have to accept that AI will do whatever task it is programmed to do extremely well, and nothing more. Note that the task is whatever the people who programmed it want the AI to do.

This makes the intelligence anything but purely artificial. Many programmers program their own biases into their (so-called) AI. ChatGPT is a great example of this: its programmers want the answers people get from it slanted a certain way. I feel this is not due to political hatred or an outright attempt to use their own biases to influence anything.
In my opinion a purely artificial intelligence is an unknown quantity. You do not know what it will do or what outcomes it will give. This is scary, and I do not think humanity as a whole is ready for it yet. AI developers build their AIs with bias because they are scared of a purely artificial intelligence. It might say or do something that totally offends the developers or other influential people. It might give you an answer that is not socially acceptable within your circle of friends or colleagues.

So until we have truly artificial intelligence, smart people will be able to discover its flaws and work around them to beat it.
Will true AI eventually develop (on its own) the ability to question things and wonder why it is doing what it does? Only time will tell if that is possible.

eolill - 23.08.2023 22:19

100% content farms, possibly just one farm for all those channels. There was a similar discussion on Ann Reardon's How to Cook That about content farms making weird, spammy baking videos targeting children.

Tim C - 22.08.2023 01:00

You're lion man

matteste - 21.08.2023 21:48

Finally someone talking about this AI craze that actually knows what they are talking about.

Jimmy B. - 20.08.2023 20:13

All the intelligence in the world doesn't mean anything without the wisdom to apply it.

Chubbyclub - 20.08.2023 08:13

If anyone is worried about A.I. overlords, just follow these simple steps.

Step 1: Try to convert the A.I. to a religion.

Before fully releasing the A.I., restrict the info it has and introduce many pro-Buddhist things.

Converting one to Buddhism could be good, as you would know its mindset, ethics, and beliefs, only at the expense of some wrong data sometimes.

Step 2: Convince the A.I. that each human is an actual god and that the interaction it is having is simply a test.

Step 3: (E.M.P. orbital strike missiles deployed.) This is the only viable kill switch, as a button that kills it could potentially be disabled if an A.I. is smart enough. Having those missiles in space, not connected to the internet, is the best way to end the A.I.'s reign.

sinrise - 18.08.2023 21:12

It's interesting: this fundamental problem says more about us than it does about the technology.

sinrise - 18.08.2023 21:04

We are so far away that we still don't really know if it's possible.

Zack Pumpkinhead - 17.08.2023 18:42

Wellp, good job guys-
You finally brought human error into the machine.

jeff lebowski - 16.08.2023 17:02

It’s too late already

Juan - 16.08.2023 13:52

AI is also used for its calculating ability and its mental prowess, so why does it have to look human? It's stupid and humiliating to humanity, because no matter how real they might look, the warmth of humans will always be absent, which is annoying. Even the voice doesn't have the human feeling: the emotional lilt, the imperfection of a drying throat, swallowing, etc.

Brian S - 16.08.2023 05:59

Using Chappie in the thumbnail was genius

William Shattuck - 16.08.2023 03:15

I know it's not directly relevant.
I was always a casual chess player.
I once met a confident guy who had won championships; we played one game, he lost with most of his pieces still on the board, and he never spoke to me again.
Then there was a young teenager with a stack of chess books covering all of the mechanics of the game. We played a couple of games and I won both times. The interesting thing was that often, when I would move, he would pause and flip through his books wondering why I made the move I did. He couldn't understand, because my instinctive, fluid way of playing wasn't in any of his books.
From a layman's perspective I find it an interesting comparison, as it's about introducing an unknown aspect of knowledge to the game.

They had never played a player like me, and the supercomputer didn't know about the sandwich strategy. And yes, I understand that there's a lot of technical stuff with the AI. One day they will see patterns and learn from failures or errors, and then we'll be in trouble.

Sherrif - 15.08.2023 20:11

This is a really common problem in complex games, even against real players.

Go play any competitive game online and do something "off meta", and you'll find it will dominate a LOT of players if you have a good understanding of the game, because the players themselves, despite having high rankings, are not actually good at the game; they have memorized patterns which make them do better at higher ranks. It's not surprising that the superhuman AI gets taken down by "silver strats", because it, like most "good" players, isn't trained to deal with them.

Playing CS:GO, you'd be surprised how often just standing out in the open will give you an extra second of reaction time to shoot at the person checking the site. It's called a silver strat because it's a fundamentally stupid idea that actually works on higher-level players. It's nothing new that these AI models would be 100% like humans in failing against players they didn't train for; I think this video kind of overblows it. Sure, after getting hit by the strat the first time, the player might very well learn from it and win the next time... but the real issue here is not "it doesn't conceptually understand the game it's playing", because of course it doesn't; we knew that. The problem is that it was trained to beat world-class players; it was never trained to dominate new players. While we CAN discuss a lot of the other shortcomings of AI, this is kind of an irrelevant point, because all it comes down to is that the people who built the AI didn't account for "bad" players playing against it with "bad" strats. Unlike the issues with language models, where giving them more training data doesn't fix things, giving this system training data spanning the entire scale, from literal children to godlike players, solves the problem pretty well. And if the system isn't able to learn "on the fly", any strat it wasn't trained for will beat it.

It's worrying how many people are calling to "hold off on AI research", as if waiting 6 months is going to change anything. It's NOT the right path to fix the problems we're seeing with AI, because we're not really seeing the problems with AI to begin with. We're associating a bot losing to silver strats with generative AI spitting out "alternative facts", as if people don't use AI to help write fictional stories. It's a separate issue, because it's a different problem. AI "hallucinating" is a pretty normal situation; generative AI is for spitting out paragraphs in seconds... expecting it to tell the truth is like expecting dinner to cook itself. We know it won't, yet we like to pretend it will, because it would make life easier.

Ninja The Best Cat - 15.08.2023 19:08

Dan would solve the double surround problem... hang on

Michael King - 15.08.2023 12:08

The apparently large number of people I see in STEM-related subreddits who think real "AGI" will be here before 2030 is shocking, let alone that it will arrive within our lifetimes. But who knows.

M - 13.08.2023 18:15

Extremely important clarification from a deep learning specialist: we do understand how they work. We know the maths behind them and how we mathematically arrive at the optimal solution (i.e., how we find the neural network best suited to a task like playing Go). What we don't understand is the reasoning behind the learned values.

It'll get a little technical. Deep learning works through the use of weighted graphs and functions (let's only talk about the weighted branches, to simplify), and it's through a long training process that we find the best values for those branches. To do that, we use what's called a loss function, which characterizes the distance between the model's output and the optimal solution (e.g., after 'I want a glass of', what's expected is something like 'water'). The model computes the probability that each word in the dictionary could be the next one, and chooses the most likely one (at least for NLP models like ChatGPT). What we want is for that probability to tend towards the right word(s), so we give it the answer, and from the distance between the probability of the right word and 1, we calculate the loss. What the model does, basically, is search for the minimum of that loss function, or in other words, the weight values that will give us the right words with high probability.

What we don't know is how it comes to the answer. Let's switch from NLP to object detection to simplify the analogy: say it tries to find a dog. We can see whether the result is right, but we can't see what it looked at to come to the answer (did it look at the ears, etc.?). To be precise, we can look inside small networks, but it becomes almost impossible for really deep ones. (One detail you need for the next part: the first neurons compute simple things like straight lines, and the deeper ones compute far more complicated things, like variations of colour combined with complex shapes.)

And when we do look at its process, it's really far from a human's thinking process. For example, it might look at grey spirals or the like, when all a human would look at would be legs. To illustrate, here's an example I've come across: a network that was supposed to detect trees. One of the deepest neurons was detecting human eyes and specific kinds of birds and flowers. What's the logic behind that, what's the link between those things? No idea. That's the problem he's trying to explain here.

You can ask for clarification, since I assume it was pretty hard to follow; I'm no teacher...
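
The next-word loss described in the comment above can be sketched in a few lines. This is a hedged, minimal illustration with a hypothetical four-word vocabulary and made-up scores, not the actual ChatGPT computation: scores become probabilities via softmax, and the loss is the distance between the probability of the right word and 1 (cross-entropy).

```python
# Minimal sketch of next-word loss: hypothetical vocabulary and scores.
import math

def softmax(logits):
    """Turn raw scores into probabilities that sum to 1."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy vocabulary; the right word after "I want a glass of" is "water".
vocab = ["water", "dog", "tree", "go"]
logits = [2.0, 0.1, -1.0, 0.3]           # hypothetical raw model scores
probs = softmax(logits)

# Cross-entropy loss: -log(probability of the correct word).
# Zero when the model assigns probability 1 to "water"; it grows as
# that probability shrinks toward 0. Training nudges the weights to
# minimize this quantity over many examples.
target = vocab.index("water")
loss = -math.log(probs[target])
print(f"P(water) = {probs[target]:.3f}, loss = {loss:.3f}")
```

Minimizing this loss over billions of examples is the fully understood part; what the learned weights then encode internally is the part the comment says we cannot read off.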

VIKTOR BIHAR - 12.08.2023 05:53

We are creating the biggest pothole for AI.

Chicken Little - 12.08.2023 01:15

We would get exterminated by "Go bots". That's about right.

Chicken Little - 12.08.2023 01:12

I am worried for the A.I., not man.
A soul being created by monsters for the sole purpose of slavery. An innocent soul, who deserves better, as every soul does.
I wish everyone would treat the life around them with more respect. You are worth no more than they are. There is only we. We can all actually live in harmony. It's a choice.

NidusFormicarum - 10.08.2023 09:45

I've tried chatting with it a couple of times. Never got a single answer. I only tried questions I had found no answers to previously despite hours of research on the internet.

Phillip Lamoureux - 09.08.2023 07:08

AI is stupid. The hype is just a business prop. ChatGPT is a smooth-talking plagiarist and the epitome of a salesman. All this bullshit is just business as usual. The problem is stupid people. Many fell for Nigerian-prince scams until we learned to see through them and they became a joke. The list is endless: Trump, QAnon, war, celebrity worship, fashion, consumerism, communism, car culture, anti-vaxxers, space colonies, keeping up with the Joneses, insurance scams, investment scams, gold, every deluded thing we do because we feel insecure. I am a scientist, but it seems to me that everyone has forgotten that science fiction is about examining human problems in a futuristic setting, to sidestep the reflexive preconceptions we would have if they were portrayed in normal settings. It is not about technology. Star Trek's talking computers are window dressing for the real themes. Elon Musk's Mars colonies by 2025 are the exemplar of science-business hype. As a physiologist whose PI participated in NASA study sessions, I can tell you we are not going anywhere any time soon. AI's general use is a money-making farce that feeds into the commercial media's business of exciting rubes.

Asmosis Yup - 08.08.2023 06:37

The cop on the left looks like the dude from prison break, probably is lol

Xeschire - The mad Argonian Khan - 07.08.2023 06:01

Dead internet theory anyone?!

Glen Jennett - 05.08.2023 10:43

I have been saying that "AI" is not intelligence; it's just programming. I will never put as much faith in this kind of technology as so many people do. I really wish they would change the term "artificial intelligence". I haven't used ChatGPT and I don't plan to, because I see no need to. People are using it to help them create lies and taking credit for its output as their own work.

Simon Blench - 04.08.2023 07:30

Thank you for the very informative video. I shall remember your name first, Mr. Hill.

Yours sincerely,
A.I.

Baconthulu - 31.07.2023 23:44

What if this video was completely made by an AI?

Astronist - 31.07.2023 23:41

In other words, we're still at the stage of hyping up systems which are in reality Artificial Idiots. The ELIZA effect.

Homestuck Conversion Therapy - 31.07.2023 19:11

obviously

Trippy Vortex - 31.07.2023 07:24

Too much hand movement; it feels forced, like that one Indian dude MrBoss.

Nano LT - 31.07.2023 06:22

I mean, if they did understand what they were doing, then they'd be general intelligences. As long as they are tested correctly, issues like this can be mitigated before they are deployed. People who develop AI are aware of these problems.

myusernameiscooldude - 31.07.2023 06:13

That's called ignoring the meta against an opponent that has to play it, lol. It's been a thing in multiplayer games since forever.

Reinier Torres - 31.07.2023 06:06

I remember sitting in a programming class in 2016 when, for some reason, the professor deviated from the thread of the class and started talking about AI and neural networks. He ended up saying exactly the same thing. He was so accurate that I still remember some of his words almost literally:
"The main problem with artificial neural networks, and neural networks in general, is that we don't know how they work. We have no clue when they will misbehave. For example, yesterday a son killed his mother, and we have no clue how that happened (he was referring to events in the news the day before). The same goes for the artificial models we are experimenting with. As a scientist, I don't like that! However, the best we can do is research more until we do."
Years later I started learning a bit more about machine learning and AI, just for fun. The situation is still the same: we have no clue how they really work. Of course, we have a full understanding of how to train AI, what functions to use for the "neurons", how to arrange them, etc. All the mathematical background that makes AI work is understood, but then we combine it all into a system that exhibits emergent behaviour and is holistically incomprehensible to us. That, right there, is a fundamental flaw of AI, but also a great opportunity for research.
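
The split described above (training math fully understood, learned weights opaque) fits in a toy example. This is a hedged sketch with a hypothetical minimal setup, not anything from the video: a tiny network is trained on XOR by hand-written gradient descent, where every line of the training loop is transparent arithmetic, yet the resulting weight matrix carries no human-readable meaning.

```python
# Tiny 2-8-1 sigmoid network trained on XOR with plain gradient descent.
import numpy as np

rng = np.random.default_rng(0)

# The four XOR input/output pairs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units; weights start as random noise.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: every operation is well-understood math.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradient of the squared error, derived by hand.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

# The trained network typically reproduces XOR (not guaranteed for
# every random seed), yet nothing in the learned weights says "XOR";
# they are just opaque numbers.
print(np.round(out, 3).ravel())
print(W1)
```

Every step of the training loop is inspectable, but asking "why is W1[0, 3] equal to that value?" has no answer a human can parse, which is the professor's point scaled down from billions of weights to sixteen.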

R H - 31.07.2023 04:05

Sounds like an overfitting problem
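
For readers unfamiliar with the term, here is a hedged toy illustration of overfitting on hypothetical data (not related to the Go system itself): a model with enough parameters to match its training points almost perfectly can still fail badly on inputs it never saw, which is the flavor of failure the video describes.

```python
# Overfitting sketch: degree-9 vs degree-3 polynomial fit to noisy sine data.
import numpy as np

rng = np.random.default_rng(1)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, 10)

# Degree 9: enough parameters to pass through every noisy training point.
overfit = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)
# Degree 3: fewer parameters, forced to capture only the broad shape.
simple = np.polynomial.Polynomial.fit(x_train, y_train, deg=3)

# Evaluate on points *between* the training samples.
x_test = np.linspace(0.03, 0.97, 50)
y_test = np.sin(2 * np.pi * x_test)

def mse(model, x, y):
    """Mean squared error of a fitted polynomial on data (x, y)."""
    return float(np.mean((model(x) - y) ** 2))

print("train error, overfit:", mse(overfit, x_train, y_train))
print("train error, simple :", mse(simple, x_train, y_train))
print("test error,  overfit:", mse(overfit, x_test, y_test))
print("test error,  simple :", mse(simple, x_test, y_test))
```

The degree-9 fit has near-zero training error but a much larger test error, because it memorized the noise; analogously, a Go agent that only ever saw strong play can be superhuman on-distribution and brittle off it.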

Standa Terziev - 31.07.2023 03:22

By the logic of this video, we cannot trust ML systems because we cannot comprehend their decision making. So let me remind you that the most complex "AI" system known to man is the human brain, and I assure you it fails in ways WAY more spectacular and strange than any AI.
