FATALLY FLAWED? YouTube's Test & Compare Thumbnail Tool. Real Data

Channel Makers

3 months ago

7,207 views


Comments:

@Itravelbackintime - 09.09.2024 03:45

I think the best tester is ultimately your audience. Go with the thumbnail that's doing the best and eliminate the poorer-performing ones.

@RichyKearnsproductions - 30.08.2024 09:22

I used this feature on a recent video, and my video ended up dead in the water at 99 views. I typically get anywhere between 500 and 1,000 views initially before the video goes evergreen and starts slowly collecting views, but this time, nothing. Dead. It's the only thing I did differently, so I don't recommend this tool at all.

@WrestlingFace - 18.07.2024 02:06

I found it kind of useless: by the time I got my test results, my video had already died, so it made no difference which thumbnail it recommended in the end. Again, it's more beneficial for bigger channels, because they can get results quicker than small channels can.

@secretarchivesofthevatican - 09.07.2024 16:16

I just love creating more thumbnails. It would be nice to see them showing when I visit my own channel. I also don't know where to see the "results".

@larsquestionmark - 08.07.2024 11:56

Oh man, I was excited by this. I guess I could do it manually, but that's so much work.

@recipe30 - 08.07.2024 10:47

Great info, guys. Thanks for sharing. It's really disappointing to hear this sad news after my excitement for A/B testing! I've never understood why notified (clicked-the-bell) followers are included; they'll click on anything because they're true fans, so including them seems flawed to me. Also, regular subscribers who know me well are more likely to click on any of my thumbnails. Wouldn't it make more sense to base this A/B/C testing on non-subscribers only? Wouldn't that be a truer, unbiased test? Then again, I'm not sure how this algorithm works; maybe that's how it already is. Just throwing it out there, any thoughts?

@MiddleAgedSwedeGoesForAWalk - 04.07.2024 11:49

I get the feeling that Test & Compare reaches a conclusion way too quickly, especially considering that it uses watch-time share as the deciding factor.

I'm a pretty small channel, and the content I create really isn't for everyone. When I release a video and check its watch time, it's always pretty obvious that a few people, probably mostly long-time viewers who know what to expect, watch the full video, while a lot of the other viewers give up very quickly. Honestly, the audience retention graphs on most of my videos are catastrophically bad.

I was really surprised to see that one of my recent videos reached a conclusive result after just 170-ish views. That was a 62-minute video, and according to the audience retention graph, after 22 minutes only 15% were still watching and by the end only 9% were left, so fewer than 17 of those 170 people watched the full video, while most gave up A LOT earlier.

This means the deciding factor probably wasn't how good the thumbnail was; it was which thumbnail got presented to the most of those fewer-than-17 people who watched the full duration. A thumbnail could have an abysmally bad CTR, but as long as it's shown to the largest share of the "old-time fans" of my channel who will watch the entire video, it's almost certain to get the best time-share %.
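[Editor's note] A quick Monte Carlo sketch, using this comment's own figures (170 views, ~9% full watches of a 62-minute video), shows how noisy a watch-time-share "winner" is at that sample size. `simulate_time_share` is an invented stand-in, not YouTube's actual logic:

```python
import random

def simulate_time_share(n_views=170, n_thumbs=3, p_full=0.09,
                        full_min=62.0, short_min=3.0, seed=0):
    """Randomly assign viewers to identical thumbnails and total up
    watch time; returns each thumbnail's share of total watch time."""
    rng = random.Random(seed)
    totals = [0.0] * n_thumbs
    for _ in range(n_views):
        shown = rng.randrange(n_thumbs)   # which thumbnail was served
        # a small minority watches the whole video; most bail early
        totals[shown] += full_min if rng.random() < p_full else short_min
    grand = sum(totals)
    return [t / grand for t in totals]

# With identical "thumbnails", the top share still jumps around from run
# to run, driven largely by where the handful of full watchers landed.
winners = {max(range(3), key=simulate_time_share(seed=s).__getitem__)
           for s in range(20)}
```

Repeated runs typically crown different "winners", exactly the pattern the comment describes.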

@AquaLust - 04.07.2024 02:52

My fear is that, as a very new channel, this could confuse my audience and hurt my impressions ratio. A weak thumbnail might not get them to click, and what is the chance they'll show the same video with the "winner" thumbnail to that same person later on? Did I miss that opportunity because I didn't put out my best first?
Have you been surprised when one you thought would win lost, and one you thought was poor did well?
We already know some facts about what works and what doesn't. Maybe run a test testing that, and see if what we know holds up?

@AncientCityCraftworks - 02.07.2024 20:19

I am convinced now. Ran Test & Compare. A was a thumbnail I wasn't too happy with, and I thought B was stronger. Results were 83% for A, but the CTR was .6. Not 6, but .6. I finally gave up, deleted A, and went with B. As soon as I did, my CTR went way up and watch time went up. It went from my worst-performing video ever to average. YT had to be showing the weak thumbnail A disproportionately often rather than an even split.

@EpicDIZ - 02.07.2024 15:32

Just ran my first test and I got 33.3 percent on every thumbnail. I'll try it again on my next video, but it seems like they may have listened and improved.

@Historyun - 01.07.2024 17:08

Fantastic breakdown and a massive time saver 💎

@ScottJohnson2 - 01.07.2024 07:41

I totally get and agree with their decision not to just turn this into a CTR-optimization tool, which would almost certainly trash up the platform. But the relationship to watch time seems approximately nonexistent. It seems right now like someone who doesn't click on the video simply isn't part of the stats... If they want this to work without giving people access to all the stats (which would let them just optimize CTR), they'll have to build an algorithm that incorporates more metrics. (Maybe they already have without saying it?) CTR modulated by watch time would make a lot of sense, for example: downweight a view under 20 seconds to count the same as a non-click, etc. There ought to be enough knobs to tune for this to function perfectly well.
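[Editor's note] The "CTR modulated by watch time" idea can be sketched in a few lines. The 20-second cutoff is the commenter's example; `qualified_ctr` and all numbers are hypothetical, not a real YouTube metric:

```python
def qualified_ctr(impressions, watch_seconds, min_secs=20):
    """Count a click as a 'qualified view' only if it held the viewer for
    at least min_secs; shorter views are downweighted to non-clicks."""
    if impressions == 0:
        return 0.0
    qualified = sum(1 for s in watch_seconds if s >= min_secs)
    return qualified / impressions

# 1000 impressions, 80 clicks, but 30 of them bounced within 5 seconds:
# raw CTR would be 8%, while this watch-time-aware score is 50/1000 = 5%.
score = qualified_ctr(1000, [5] * 30 + [300] * 50)
```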

@MatthewGenser - 30.06.2024 03:43

Here is the big question... Have you or others paid for A/B testing from other services and tested them against this? That would be the most interesting way to learn more and compare.

@philscomputerlab - 29.06.2024 06:57

It should test clicks vs impressions...

@VladimirKostek - 28.06.2024 21:47

Hi Channel Makers Team, I just posted a video about my YT thumbnail tool being broken that you guys might find interesting. It appears to be a unique issue

@AIKnowledge2Go - 26.06.2024 17:43

This is far worse than I initially feared. I share new video releases with my Patrons via email, and each email contains the same thumbnail. Therefore, these numbers must be incorrect. Additionally, how does a thumbnail determine how long I will watch a video? They need to establish a coefficient with the CTR for each thumbnail somehow (I'm not a math expert).

@VicBarry - 26.06.2024 12:27

REALLY interesting video! Thanks so much for digging in here! I've used it in my last two long-form videos (I was really excited about it), and both videos have been my worst-performing videos in years. The CTRs were anywhere between 1% and 1.9%, where my average across the channel is consistently between 7% and 10% CTR.
Sure, that may be an audience thing for one reason or another, but I found that the videos were not being served up to my subscribed audience. They seemed to take forever to get even half the impressions I usually get in the first hour, and even when I searched for the exact title AND my channel name, they didn't show up either. While I appreciate it's audience and NOT the algorithm, I just find it highly coincidental that my 2 worst-performing videos in years happened to be the ones using this feature. I've spoken to some bigger creators who had similar results. Moving forward, for now I'll do what I always did: if the thumbnail isn't performing, I'll swap it out manually.

@TechnoPandaMedia - 26.06.2024 09:04

I think unless your video gets lots of impressions, this feature is not for you.

@Alexandria_S1222 - 26.06.2024 01:24

Well that stinks!

@nopeoplenoproblems - 25.06.2024 20:56

Have to add: everything VidIQ said was working, I found to be bogus, and I learned from you and the Raffiti channel that I was right in my opinion that it's just a waste of time.

@nopeoplenoproblems - 25.06.2024 20:33

Thank you. I just put 3 videos up for testing yesterday; we'll see what happens.

@Money-Fast-Plan-a - 25.06.2024 19:26

Your video is incredibly engaging! -- "Success thrives on relentless commitment."

@CrowContinuum - 25.06.2024 19:00

I don't think using the same thumbnail in an A/B test is going to show a 50/50 split, even if the numbers reflect just which thumbnail was shown to people. An A/B test doesn't automatically split 50/50; it just randomly chooses one each time a choice needs to be made.
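[Editor's note] This checks out with basic binomial math: when each impression is an independent coin flip, the share landing on A has a standard deviation of sqrt(0.25/n), so small tests sit several points away from 50/50 purely by chance. Illustrative numbers only:

```python
import math
import random

def split_sd(n_impressions):
    """Std. dev. of the A-share when each impression is a fair coin flip."""
    return math.sqrt(0.25 / n_impressions)

def simulate_split(n_impressions, seed=0):
    """One simulated test: the fraction of impressions shown thumbnail A."""
    rng = random.Random(seed)
    b_count = sum(rng.randrange(2) for _ in range(n_impressions))
    return (n_impressions - b_count) / n_impressions

# At 200 impressions the split naturally wobbles by ~3.5 share points;
# only at very large n does it settle near an even 50/50.
sd_200 = split_sd(200) * 100
```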

@TorqueTestChannel - 25.06.2024 17:24

We did an A/A/A test this week and got a 32.4-34.2% spread, the closest spread we've ever seen in a thumbnail test using this feature. Not sure if data from the beta version is as useful as what's in use now.

@stressfrei_leben - 25.06.2024 17:07

Thanks for this video... Frustrating information... I'm a small channel in Germany and don't have the tool yet, AND I was really waiting for it, since I'm always insecure about my thumbnails... I hope they'll fix it.

@your_spirit_guide - 25.06.2024 14:59

I stopped creating 2 thumbnails and went back to creating 1 thumbnail for each new video

@ViewportPlaythrough - 25.06.2024 07:02

Dev here (not a YT dev, but a programmer). Well, sure... that's why it's not rolled out to everyone yet; it's called beta testing for a reason. After the initial beta test, they have to roll it out to a bigger sample to get more data. That's just how feature testing works: at some point you have to release a feature into the wild to see how it behaves in real-life scenarios, and sometimes you have to leave it up as long as needed to get measurable data.

Formula-wise, the only reasonable stat they could measure from it is click-through rate; that's what static thumbnails are for. They can't measure watch time with it, because that's not what thumbnails are for. It's simply unfair to expect that from a tool that's specifically meant to measure how effective different thumbnails are.

Running the same thumbnail in every slot isn't a useful way to test the feature, and it's kind of expected that the results wouldn't be meaningful. It's the devs' fault for not anticipating that users wouldn't know how the feature is meant to be used, and for not clarifying what users should expect from it.

One thing that should be added is an underlying check that the thumbnails are different enough from each other to make a difference in the testing; if they aren't, simply reject the thumbnail.

At this point they'd have to add a database of viewers who have clicked a thumbnail, so they can stop showing those viewers the test afterward. Then, for viewers who have passed by a thumbnail some threshold number of times without clicking it, rotate in the other thumbnails and gather the same data until the end of the loop.

But that still doesn't account for people's preferences; it's more of a first-come-first-served approach. So they'd need a priority list on the uploader's end: e.g., thumbnail A is the one the uploader most trusts to work best, while thumbnail C is the last resort if A and B didn't work.

This kind of feature also depends a lot on an individual channel's usage of it. If you use it once or twice, or even just 10 times, you really can't expect it to work properly, let alone when using the same thumbnail to check it. That's like doing the same thing over and over and expecting different results.

What's ironic (and contrary to what I said above) is that in its current implementation, you have to force-feed it the same thumbnails until you think there's enough data to get what you want:
1) Use the same thumbnails 3-5 videos in a row, to force-feed the algorithm the data it needs.
2) On the 4th/5th/6th video, use 2 different thumbnails.
3) Repeat the steps until it reliably gives you the data you want.
4) Then use 3 thumbnails.
5) Repeat step 3 with 3 thumbnails.

It's honestly like training an AI model: you have to train it numerous times before it can give you the desired results.

What's tricky is that once they add a check rejecting similar-enough thumbnails, it will slow down the data gathering, because you can't force-feed it anymore. But it would also teach users how the feature should be used, and if the feature is used properly, proper data can be collected.

Then there's also the question: are they using random assignment, currently or previously? If they are, I can't see any formula that could make proper use of those random numbers. If randomness enters at any point, the stats would really just come out as random, so prior data should be labeled inaccurate. It could still be used, just not accurate enough.
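[Editor's note] The click-tracking and pass-by loop this commenter proposes could be sketched roughly like this. All class and variable names are invented for illustration; nothing here reflects YouTube's actual implementation:

```python
PASS_BY_THRESHOLD = 3  # hypothetical: pass-bys before rotating thumbnails

class ThumbnailRotator:
    """Per-viewer rotation through an uploader's priority-ordered
    thumbnails, as the comment describes: stop once a viewer clicks,
    rotate to the next choice once a non-clicker passes the threshold."""

    def __init__(self, priority):           # e.g. ["A", "B", "C"]
        self.priority = priority
        self.clicked = set()                # viewers we stop testing on
        self.state = {}                     # viewer -> (thumb_index, pass_bys)

    def pick(self, viewer):
        """Choose which thumbnail to show this viewer, or None to stop."""
        if viewer in self.clicked:
            return None
        idx, _ = self.state.get(viewer, (0, 0))
        return self.priority[idx]

    def record(self, viewer, clicked):
        """Log one impression outcome for this viewer."""
        if clicked:
            self.clicked.add(viewer)
            self.state.pop(viewer, None)
            return
        idx, seen = self.state.get(viewer, (0, 0))
        seen += 1
        if seen >= PASS_BY_THRESHOLD and idx < len(self.priority) - 1:
            idx, seen = idx + 1, 0          # rotate to next-priority thumbnail
        self.state[viewer] = (idx, seen)
```

For example, a viewer who passes by thumbnail A three times without clicking would start being shown B, and a viewer who clicks drops out of the test.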

@MartinKPettersson - 25.06.2024 06:04

This is some interesting information. Great work, thank you for sharing this!

@BlackAndBlueGarage - 25.06.2024 05:52

Not surprised at all. I ask human beings what thumbnails are better. 😌

@ParkMetalDetecting - 25.06.2024 03:10

I just want to see the click-through rate... That's it... Pretty simple. And shouldn't it be the A/B/C Thumbnail Tester?

Also, why does it not show a running tally of views? I really hate the wait-and-see "not enough data"!

@curbyvids - 25.06.2024 03:00

I didn't know it was just an A/B testing tool. I thought it was supposed to recommend different thumbnails to different demographics, like what Netflix does.

@-bibliolab - 25.06.2024 02:10

But what if I consistently see that one specific style of thumbnail performs drastically worse? Does that still mean it's wrong? For me it was quite obvious and helpful. I hate to break your hype, but it seems to work for my purposes. It's still a small sample size, but when I see 13/13 total fails of one thumbnail style, that's enough for me.
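[Editor's note] For what it's worth, this instinct survives a quick binomial check. Assuming the 13 outcomes are independent and the two styles actually performed equally, each head-to-head is roughly a fair coin flip, so losing all 13 by chance alone is vanishingly unlikely:

```python
def prob_all_losses(n_tests, p_loss=0.5):
    """Chance of losing every one of n_tests independent fair comparisons."""
    return p_loss ** n_tests

p = prob_all_losses(13)  # 1/8192, about 0.012%
```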

@AverageGuyMakingMoney - 25.06.2024 01:53

I'd imagine if you got a watch-time-share difference of like 70-15-15% it would be more trustworthy. Such a spread would tell you two of the thumbnails obviously didn't represent the video as well as the other, right?

@georgianarundgren2385 - 25.06.2024 01:43

There is another factor about thumbnails that isn't always considered. There are many times where the thumbnail just doesn't matter. As a viewer, if I see "Channel Makers" put out a new video, I don't even care about the thumbnail. I'm going to watch. I have a whole bunch of favorites. This goes for smaller channels I watch regularly, too. Pretty sure I'm not the only one. I'm sure it matters for trying to grab new viewers.

@KennySparksYT - 25.06.2024 01:22

I’m actually going to keep using the test but not for the data. Different people respond to different types of thumbnails and I like the idea of the randomized thumbnail per user. I typically make 2-3 thumbnails anyway so this will take away the stress of choosing. I wish you could just permanently rotate them instead of it just being 2 weeks.

@gabbo13 - 25.06.2024 01:13

Have you compared this feature to other applications like VidIQ or TubeBuddy?

@anniegirl7152 - 25.06.2024 01:07

This feature encouraged me to experiment more with thumbnails and in every case I’ve done it with existing videos views have increased without sabotaging watch time. So thank you tube

@wolflover789 - 25.06.2024 00:48

I think you guys are finally getting some good attention and getting your groove on. I've been watching for a long while and I've kinda been on your journey with you. Congratulations.

@DrDreadMarley - 25.06.2024 00:22

I don't get why they chose watch time for a thumbnail feature. How does that make sense? It should be CTR only, as watch time is based on the quality of the video 🤦‍♂️

@BStride - 24.06.2024 23:52

Uncle Raff, yeah, click-through-rate-based would be a lot more efficient. Great video. This is concerning.

@JamesKerLindsay - 24.06.2024 23:30

Thanks. This was really helpful. I ran some tests and I just couldn't make sense of the results. By the way, personally, I'd much rather have testing for titles than thumbnails.
