Things that annoy you

Sick Boy

Super Moderator
Google will change quite dramatically within the next year because of AI.
 

Hobo

Well-Known Member
..declaring on the first day of a Test Match 8 wickets down, just because you want to have a few overs at the opposition openers.
And the fucking computer said No!!
 

shmmeee

Well-Known Member
Exactly what, though? Nobody's even attempted to explain it to the masses.

So we built language models to predict the next word. Big whoop, we’ve been doing that forever; it’s on your phone keyboard right now.

However, we discovered that if we just chuck fuck tons of data and processing power at these word-prediction algorithms, then train them a bit with human feedback, suddenly they’re not just word-prediction algorithms, they’re something akin to intelligence that can reason.

Suddenly the issues we face aren’t algorithmic but a case of waiting for even bigger models and working out whether we’ve accidentally created AGI without trying (there’s still some debate about whether that’s what we’re doing, but the fact that the major voices in machine learning disagree on this suggests it’s not clear-cut either way).

At roughly the same time we’ve managed to develop algorithms that can create professional-looking audio and video, which means we’ve got close to machine understanding of the world through three major senses.

What will this mean? Well firstly all our economic predictions were probably wrong. Suddenly computer programmers can be waaay more productive, and marketers, artists and musicians may be the first not the last to get their jobs automated.

Is it there now? Fuck no. But the progress made in the last year is astonishing. ChatGPT has bigger user adoption than Google and Facebook had at a similar age, with zero marketing spend or virality features, because it’s really useful to a bunch of people. I use it four or five times a day, for everything from document summaries to brainstorming to analysis of data I can’t be arsed to process myself, where I don’t need exact results.

Most “AI” is machine learning, which is quite simply fitting to an error function for a given training set. Large language models (like ChatGPT) seem to be something else entirely, producing emergent intelligent behaviour.
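The “predict the next word” idea above can be illustrated with a toy bigram model, the kind of counting that sits behind a phone keyboard’s suggestions. This is a minimal sketch of the concept only; GPT-class models replace the counting with huge neural networks over long contexts:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which word follows it and how often."""
    following = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(following, word):
    """Return the most frequently seen next word, or None if unseen."""
    candidates = following.get(word)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat the cat ran")
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once -> cat
```

The post’s point is that scaling this basic next-word objective up, plus human-feedback training, produces behaviour that looks nothing like simple word counting.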
 

shmmeee

Well-Known Member
Google will change quite dramatically within the next year because of AI.

They’ve already announced a bunch of changes. What’ll stop the big guys, I think, is fear of bad outputs. Apple reportedly killed Siri GPT because they want total control over what it outputs so they can guarantee no incorrect or dangerous information, which is likely not how any of this works.
 

Sick Boy

Super Moderator
They’ve already announced a bunch of changes. What’ll stop the big guys, I think, is fear of bad outputs. Apple reportedly killed Siri GPT because they want total control over what it outputs so they can guarantee no incorrect or dangerous information, which is likely not how any of this works.
I've had a play around with the Google SGE - in its current form I don't see how they'd avoid being sued in Europe for plagiarism - it's a real shit show. I expect it'll be rolled out before the end of this year though.
 

shmmeee

Well-Known Member
I've had a play around with the Google SGE - in its current form I don't see how they'd avoid being sued in Europe for plagiarism - it's a real shit show. I expect it'll be rolled out before the end of this year though.

Plagiarism?
 

Sick Boy

Super Moderator
Plagiarism?
Unlike Bing, its AI-generated results don’t display the source of the answer that’s just been lifted from a website.
Not sure I’m comfortable with a company like Google doing that and deciding what the ‘correct’ answer is to someone’s query.
Google has long been trying to move towards it being the source of information and people not leaving it - personally I don’t think that’s a good thing for the internet in the long term.
 

shmmeee

Well-Known Member
Unlike Bing, its AI-generated results don’t display the source of the answer that’s just been lifted from a website.
Not sure I’m comfortable with a company like Google doing that and deciding what the ‘correct’ answer is to someone’s query.
Google has long been trying to move towards it being the source of information and people not leaving it - personally I don’t think that’s a good thing for the internet in the long term.

I mean the day they do that is the day their business dies so I’d be very surprised. They’ve explicitly said they want to find ways to still run ads and drive traffic.

Google loses money if you just hang about on their site. They aren’t Amazon or Facebook. It’s one of the few sites where time on site is a bad thing.

And it doesn’t lift text. And I think there’s going to have to be a serious reconsideration of copyright law, like Japan have done. We had this with generative image AI as well, and artists claiming it’s “copying” them. But that’s what humans do. A lot of people are going to have to rethink their business model.
 

Sick Boy

Super Moderator
I mean the day they do that is the day their business dies so I’d be very surprised. They’ve explicitly said they want to find ways to still run ads and drive traffic.

Google loses money if you just hang about on their site. They aren’t Amazon or Facebook. It’s one of the few sites where time on site is a bad thing.

And it doesn’t lift text. And I think there’s going to have to be a serious reconsideration of copyright law, like Japan have done. We had this with generative image AI as well, and artists claiming it’s “copying” them. But that’s what humans do. A lot of people are going to have to rethink their business model.
I’m talking about organic website traffic rather than the ads - the ads are well incorporated, unsurprisingly. I’ve also seen stuff over the last few years that hasn’t been made public, and the direction it’s going to take them.
I’ve seen it myself where it’s taken text word for word from a website within the AI answer and there’s been no attribution or link through to the source of the website.
As I said, I don’t feel comfortable with the user not having the source of the answer. That’s not even going into the lack of incentives for publishers to create content in the future.
 

shmmeee

Well-Known Member
I’m talking about organic website traffic rather than the ads - the ads are well incorporated, unsurprisingly.
I’ve seen it myself where it’s taken text word for word from a website within the AI answer and there’s been no attribution or link through to the source of the website.
As I said, I don’t feel comfortable with the user not having the source of the answer. That’s not even going into the lack of incentives for publishers to create content in the future.

It depends on the source. It’s working on next-word likelihood, so when, for example, I’m looking at something very specific that not a lot has been written about (say, docs for an obscure package or a particular small company), the only training data it’s got is the company website or whatever.

This is the problem with copyright as it exists, and why Japan have made AI training exempt. It’s not plagiarism in the way we know it; it’s more like Shakespeare trying to sue the infinite monkeys. Similarly, AI-produced work can’t be copyrighted, so who can claim money is being made? Also, it’s a user entering text into a model: Google provide the model, but the user provides the input. Who is to blame?

I think discovery is going to have to change, but I’m not convinced it’s the right use case for LLMs anyway. Perhaps SEO changes to how you’re represented on the web as a whole rather than just on Google, or we find ways to game the training system and start the whole algorithmic arms race again with search and SEO. And ads on websites will become far more important for strategy.

I think we’ll see historical informational content move to the domain of Wikipedias and institutions, and some kind of Google for bots that surfaces products and up-to-date info. I think we’re going to see a massive contraction of the open web and free API access, like we’re seeing with Reddit at the moment. I think we’ll see subscription revenue become more important again.

Google solved discovery for the web when we were all browsing Yahoo categories; it may well be that the paradigm of discovery in the age of conversational UIs shifts to something we haven’t seen yet, from a new entrant.
 

robbiekeane

Well-Known Member
So we built language models to predict the next word. Big whoop, we’ve been doing that forever; it’s on your phone keyboard right now.

However, we discovered that if we just chuck fuck tons of data and processing power at these word-prediction algorithms, then train them a bit with human feedback, suddenly they’re not just word-prediction algorithms, they’re something akin to intelligence that can reason.

Suddenly the issues we face aren’t algorithmic but a case of waiting for even bigger models and working out whether we’ve accidentally created AGI without trying (there’s still some debate about whether that’s what we’re doing, but the fact that the major voices in machine learning disagree on this suggests it’s not clear-cut either way).

At roughly the same time we’ve managed to develop algorithms that can create professional-looking audio and video, which means we’ve got close to machine understanding of the world through three major senses.

What will this mean? Well firstly all our economic predictions were probably wrong. Suddenly computer programmers can be waaay more productive, and marketers, artists and musicians may be the first not the last to get their jobs automated.

Is it there now? Fuck no. But the progress made in the last year is astonishing. ChatGPT has bigger user adoption than Google and Facebook had at a similar age, with zero marketing spend or virality features, because it’s really useful to a bunch of people. I use it four or five times a day, for everything from document summaries to brainstorming to analysis of data I can’t be arsed to process myself, where I don’t need exact results.

Most “AI” is machine learning, which is quite simply fitting to an error function for a given training set. Large language models (like ChatGPT) seem to be something else entirely, producing emergent intelligent behaviour.
Also use it at least 5 times a day. I write emails with it, review candidate CVs, write strategies, get recipes with what’s in my fridge. Lazy as fuck and dangerous, yes, but for the meantime I’m uber-productive and I’m riding that wave.
 

shmmeee

Well-Known Member
Also use it at least 5 times a day. I write emails with it, review candidate CVs, write strategies, get recipes with what’s in my fridge. Lazy as fuck and dangerous, yes, but for the meantime I’m uber-productive and I’m riding that wave.

It’s like having an autistic savant mixed with the best bullshit artist as a personal assistant.

I use it to get me over the hump when debugging or planning or whatever. Start me off, throw out some ideas, take a look at what I’ve written and suggest improvements. It’s only going to get better, but already, if you know its limitations, it’s a massive productivity enhancer. It spotted straight away a bug that had flummoxed three engineers for a week. It also produced nonsense code to fix it, so you still need the engineers to vet its output.

The other day I was curious what was taking so long in a pipeline, so I just gave GPT the log file and asked a bunch of questions. Without it I’d probably never have bothered writing a script to parse the file and doing the analysis just for something I was semi-curious about.

Similarly, I’ll get it to write boilerplate code for me or do a big refactoring job, and take it from there. It writes comments and tests for me. Takes so much grunt work out. It’s like autocomplete on steroids.
 

shmmeee

Well-Known Member
If you like ChatGPT, it’s worth a look at perplexity.ai and their Copilot feature. It’s basically ChatGPT, but it also asks relevant follow-up questions. You get five queries in a four-hour sliding window for Copilot.
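For what it’s worth, “five queries in a four-hour sliding window” is a standard rate-limiting pattern. Perplexity’s actual enforcement is server-side and not public, but the idea can be sketched like this (the numbers come from the post; the implementation is hypothetical):

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests in any `window_seconds` span."""

    def __init__(self, limit=5, window_seconds=4 * 3600):
        self.limit = limit
        self.window = window_seconds
        self.timestamps = deque()  # times of requests still inside the window

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop requests that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.limit:
            self.timestamps.append(now)
            return True
        return False

limiter = SlidingWindowLimiter()
results = [limiter.allow(now=t) for t in range(6)]  # six queries in quick succession
print(results)  # first five allowed, sixth denied
```

Unlike a fixed hourly quota that resets on the clock, a sliding window means each query only “expires” four hours after it was made.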
 

Sick Boy

Super Moderator
If you like ChatGPT, it’s worth a look at perplexity.ai and their Copilot feature. It’s basically ChatGPT, but it also asks relevant follow-up questions. You get five queries in a four-hour sliding window for Copilot.
That's much better with the sources of the information at the bottom. ;)
 

shmmeee

Well-Known Member
That's much better with the sources of the information at the bottom. ;)

Yeah, it does some nice stuff to turn a chatbot into a useful tool TBH. Thing is, behind the scenes it’s using Google or Bing or whatever with keywords, so I guess nothing changes from an SEO perspective?
 

Sbarcher

Well-Known Member
Robbing restaurants and "gastro-pubs" now charging as much for a standard bottle of wine as 2 main course meals.
 

Sick Boy

Super Moderator
Yeah, it does some nice stuff to turn a chatbot into a useful tool TBH. Thing is, behind the scenes it’s using Google or Bing or whatever with keywords, so I guess nothing changes from an SEO perspective?
It’s similar to what Bing is doing and what Google will eventually do (Google’s testing version of it doesn’t show sources). From an SEO perspective, it’ll be the biggest change the field has ever seen.
 

Finham

Well-Known Member
So we built language models to predict the next word. Big whoop, we’ve been doing that forever; it’s on your phone keyboard right now.

However, we discovered that if we just chuck fuck tons of data and processing power at these word-prediction algorithms, then train them a bit with human feedback, suddenly they’re not just word-prediction algorithms, they’re something akin to intelligence that can reason.

Suddenly the issues we face aren’t algorithmic but a case of waiting for even bigger models and working out whether we’ve accidentally created AGI without trying (there’s still some debate about whether that’s what we’re doing, but the fact that the major voices in machine learning disagree on this suggests it’s not clear-cut either way).

At roughly the same time we’ve managed to develop algorithms that can create professional-looking audio and video, which means we’ve got close to machine understanding of the world through three major senses.

What will this mean? Well firstly all our economic predictions were probably wrong. Suddenly computer programmers can be waaay more productive, and marketers, artists and musicians may be the first not the last to get their jobs automated.

Is it there now? Fuck no. But the progress made in the last year is astonishing. ChatGPT has bigger user adoption than Google and Facebook had at a similar age, with zero marketing spend or virality features, because it’s really useful to a bunch of people. I use it four or five times a day, for everything from document summaries to brainstorming to analysis of data I can’t be arsed to process myself, where I don’t need exact results.

Most “AI” is machine learning, which is quite simply fitting to an error function for a given training set. Large language models (like ChatGPT) seem to be something else entirely, producing emergent intelligent behaviour.
Thanks for taking the time to try and explain. I suppose I don't understand because I've never used ChatGPT or even been vaguely attracted to it! Nobody at work is using it or seems to know what it is, although they could be lying.
 

OffenhamSkyBlue

Well-Known Member
I assume AI is behind the changes to Twitter, which I find is now borderline unusable. I thought that tweets from anyone I follow would turn up under the "Following" tab - but oh, no - there is loads of stuff in the "For you" tab which is posted from them, mashed up with all the vitriol and ads that I have no interest in. It won't be long before I ditch it for good, unless someone can explain to me how it is supposed to work now?
 

shmmeee

Well-Known Member
I assume AI is behind the changes to Twitter, which I find is now borderline unusable. I thought that tweets from anyone I follow would turn up under the "Following" tab - but oh, no - there is loads of stuff in the "For you" tab which is posted from them, mashed up with all the vitriol and ads that I have no interest in. It won't be long before I ditch it for good, unless someone can explain to me how it is supposed to work now?

Following should be just people you follow. For You is shit the algorithm thinks you want. What seems to have disappeared is the ordering: you used to be able to have the algorithm sort tweets or just have them in chronological order, but that’s gone now.
 

OffenhamSkyBlue

Well-Known Member
Following should be just people you follow. For You is shit the algorithm thinks you want. What seems to have disappeared is the ordering: you used to be able to have the algorithm sort tweets or just have them in chronological order, but that’s gone now.
I think that is the problem. It is all out of order. But why do some tweets from, for example, CCFC or my sister, turn up in "For you" but some in "Following"? I wish they had just left it alone - did they ASK any Twitter users whether this change was wanted? I doubt it!
 

ajsccfc

Well-Known Member
My Following tab is set chronologically and thankfully it's stopped sending me back to For You, but bloody hell the amount of ads that show up now is beyond ridiculous. I saw a fitting tweet that said Musk has now turned it into a racist QVC
 

shmmeee

Well-Known Member
I actually like the variety of views I see on Twitter now, even if shouty thick right-wing types seem to be in the majority; I’d hoped for at least equal amounts of shouty thick left-wing types.

Musk is very clearly trying to make the free experience garbage and push everyone to pay though. Not sure it’ll ever happen.
 

shmmeee

Well-Known Member
People who just fucking park in the road.

Got a shop on my street with parking bays outside, parking across the road and parking all up and down the street. Yet people still double park essentially making it a one lane road so they don’t have to walk ten feet.

TBF quality of driving and parking in Bedworth comes close to Foleshill levels.
 
