Comments:
Even if it can't beat ChatGPT (which I think is likely), the fact that it's more integrated should make it much more useful to the average user.
I'm sorry, but I tried it and it was very bad. I tried it for research and thinking. Usually I use ChatGPT to organize my ideas and help me find context in my ideas. Gemini was a big failure at that: it produced false ideas, even suggested wrong book titles, and fooled me with nice-sounding text.
I tried it.
It is definitely not as good as GPT-4, especially in programming; GPT-4 can code much more difficult problems than Gemini Ultra.
However, I found Gemini/Bard to be better at general search/research because it doesn't have a hard knowledge cutoff date like GPT.
AND
Mixtral 8x7b also responded correctly to the cow problem:
````
The problem at hand is a variant of the well-known Set Cover problem in combinatorial optimization. Here's how we approach it to minimize Bessie's cost for creating robotic cows with unique microcontroller configurations.
1. Precompute the minimum cost required to make each location unique: For every location, sort the costs of available microcontrollers and calculate the prefix sum (cumulative sum) of these costs. At the same time, keep track of the minimum cost needed for a unique combination from this point onward by using the smallest tail end of the sorted list that hasn't been used yet.
2. Dynamic Programming: We create a 2D table dp[][], where dp[i][j] will store the minimum cost to form i unique configurations using j locations. Initialize all values as infinities, except for dp[0][0], which should be set to zero. Now iterate through possible choices of locations and use the previously computed prefix sums to find the minimum costs required to make those locations unique. Update the table accordingly:
```
for j = 1 to N:
    for i = 1 to K:
        dp[i][j] = min(dp[i-1][j-1], dp[i][j-1])
        for k = unique locations at j:
            dp[i][j] = min(dp[i][j], prefix_sum[k] - prefix_sum[k-1])
```
3. The answer will be dp[K][N]. If the value is infinity, it means it's impossible to make K unique robots with the given microcontrollers and locations.
To summarize, we first precompute minimum costs for uniqueness at each location, then use dynamic programming to combine them intelligently. After implementing the solution, you will find that Bessie can build her convincing herd of robotic cows for a minimum of 61 units (as in the sample case).
````
So even an offline model is able to solve it, but Gemini is not....
Disappointing... judging by the benchmarks, Gemini Ultra should be better than GPT-4, but it is not....
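For anyone who wants to poke at the quoted answer, here is a direct Python transcription of that pseudocode. The inputs (`prefix_sum`, the per-location candidate sets, `K`, `N`) are invented toy values, since the original problem's data isn't shown here; notably, the recurrence exactly as quoted returns 0 on this data, which hints that the quoted "solution" may not actually be complete.

```python
import math

# Toy inputs invented for illustration; not the original USACO data.
# prefix_sum[k] = cumulative cost of the k cheapest microcontrollers.
prefix_sum = [0, 3, 7, 12]
# Hypothetical "unique locations at j" sets, one per location j = 1..N.
unique_at = {1: [1], 2: [1, 2], 3: [2, 3]}
K, N = 2, 3  # K unique configurations wanted, N locations available

# dp[i][j]: minimum cost to form i unique configurations using j locations.
dp = [[math.inf] * (N + 1) for _ in range(K + 1)]
dp[0][0] = 0

for j in range(1, N + 1):
    for i in range(1, K + 1):
        # Carry forward from fewer configurations / fewer locations.
        dp[i][j] = min(dp[i - 1][j - 1], dp[i][j - 1])
        for k in unique_at[j]:
            # prefix_sum[k] - prefix_sum[k-1] is just the k-th cheapest cost.
            dp[i][j] = min(dp[i][j], prefix_sum[k] - prefix_sum[k - 1])

print(dp[K][N])  # prints 0 here, not a real minimum cost
```

That the table bottoms out at 0 (because dp[0][0] = 0 propagates for free) is a red flag: "responded correctly" may only mean the model produced a plausible-looking writeup, not a working algorithm.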
Good video ANNNNNNNND I'm outta here
And
Can I ask why it always gives me some random lyrics when I ask it for the lyrics of any song?
Gemini image generation sucks compared to Bing Image Creator
It did not tell me what Paracetamol (fever medicine) is used for because it is just a "text model"
Bard is still better branding
While I love your enthusiasm, I'm not buying. Rooting for Google to catch up with OpenAI one day, but they're at least a year and an organisation behind.
I tried to have a conversation with Gemini. It is completely subpar to ChatGPT. It's not even close.
Still the worst AI on the internet
It is not as good as GPT-4, that's for sure. However, it is faster, has no cap, and has more personality (a little more like the Pi assistant). The biggest issue really is that Google hyped this to the moon, and it didn't deliver at all on the promises. If they had said it is a competitive model rather than the best model ever made, that would have led to much less criticism and disappointment. I remember Demis Hassabis claiming that it would "eclipse GPT-4". When you say stuff like that and miss, people are not going to listen to you in the future.
I love AI, but I just don't want to share more of my personal information with some big company. This thing can identify things on your screen, which means screenshots going back to Google.
Not very impressed. I tried the image-gen feature, and since I have a black poodle I wanted to generate an image of a black poodle running through a summer meadow. It refused to generate because my prompt was "inappropriate". I thought... well, maybe you aren't allowed to use the word "black" anymore nowadays, so I tried again without it. Apparently that was still too inappropriate for Gemini. It gets the award for most paranoid and restricted LLM from me. Congrats, Google.
Tried it just now. Never have I been so disappointed in an AI. I won't be surprised if it's way worse than GPT-1.
"Not available in your country".
Still nothing new here. 😅
Are these prompt suggestions tied to your account?
It's subtle
Very good. I like it more than GPT.
Still can't get simple dates right, and it's totally biased with unrealistic optimism.
I'd much rather see videos on peer-reviewed papers than marketing material.
But if Gemini reads my emails, doesn't that mean that human evaluators will all be able to read my emails?
Dude, I asked it what it wanted to be called and it went f*cking crazy! lol
my new test is literally Where's Waldo... and other things like I Spy
Gemini is an idiotic version of ChatGPT 😂
Is this just a commercial?
I tried the following question and got both an incorrect answer and some very dodgy reasoning: "In a game of tennis, suppose the score is 30-love. What's the minimum number of serves required for the receiver to win the game?"
I asked it to describe an outfit; it said it can't because there are people in the image.
At some point, comparing AI models becomes less and less about actual capabilities and more about comparing who has "better" (less restrictive) safeguards.
Bard and Gemini constantly block my requests and constantly take the political interpretation instead of the objective interpretation. This means they are completely useless whenever there is potentially a political element, even though I have explicitly stated the context and the correct interpretation.
GPT is more direct, sticking to objectivity and facts, without adding any political annotation to its response.
Trying it out now!
I just wish Google hadn't misled everyone in their previous Gemini Ultra announcement.
I really liked using Gemini, tbh, but the UI is very bad... so I went back to ChatGPT.
Google Bard is way less impactful than ChatGPT; most people don't even know it exists, but I think it will overtake ChatGPT one day. When people hear "AI" they just imagine GPT, and they don't realize Gemini is quietly rising.
AND... AND....AND... AND....AND... AND....AND... AND....AND... AND....AND... AND
What is this, a promotional video?
I paused my music for this, and I don't regret it.
This was nothing more than a paid commercial. 👎
imo nothing comes even close to GPT-4 right now for coding
Copilot's autocomplete is cool too in a different way; Copilot plus GPT-4 seems like the best setup
The testing trials
They boded ill
For good Gemini's score..
And the bot was every bit as hyped
as Google made it soar.
But our brave bot cried:
Do your worst! I have GPT's ugly head!
Not near as True and brilliant as...
Hassabis in my bed!
The G-bot lost its battle and..
It failed its final test.
Hassabis nerfed its balls off aaaand...
Poor performance did the rest.
🎉
Image generation is BAD. I asked it to generate an image of an Asian girl in a garden and it REFUSED, but it can do cats hahahahahahah
I've found Claude from Anthropic gives a much better experience than most other AIs I have tried. Bard, ChatGPT, and Copilot have all been highly restricted in their output, making them pretty much useless for any in-depth discussion of any "controversial" topic.
What a time to be alive!
I'm not impressed at all...
GPT-4 is better, and by a long shot, so I'll stick with that for now.
I hope Gemini eventually gets better, for the sake of competition.