I find AI to be mainly helpful at explaining new topics (precision not essential), but I don't trust exact facts and figures given by AI because of hallucination issues.
Maybe I just don't have the right ChatGPT++ subscription.
I started using Perplexity mostly for its up-to-date information. I've since largely replaced ChatGPT with Perplexity (where I would have used Google) because ChatGPT is slower and gets things wrong far more often.
I use ChatGPT more for idea generation and iteration.
I did this today and Grok led me to make an embarrassingly wrong comment (Grok stated "Rust and Java, like C/C++ mentioned by Sebastian Aaltonen, leave argument evaluation order unspecified" - I now know this is wrong and both are strict left-to-right). ChatGPT gets it correct. But I think we're still in the "check the references" stage.
I plugged in 5 books I've recently enjoyed and asked ChatGPT for some recommendations of similar books, and 2 out of 10 of the books it suggested did not exist. I googled the author, the book title, they were complete fakes. They sounded like they'd be good books though! When I told it they didn't exist it was like, "you're right, thank you for catching that!"
Yeah, I don't know how good citations always are... I mean, I just read without a doubt that the earth is flat [1]. I can now say for certain, and without any sarcasm, that this is actually fact. Because of my citation it must be true?
Same here. Plus, I can just ask it "where can a 16-year-old rent a jet ski in Myrtle Beach" and it will tell me the two options, instead of trying to find the answer on 5 different websites.
For coding questions, I prefer Grok. For grammar, I prefer ChatGPT. I still haven't found a use for Gemini - other than seeing some results on Google Search.
But you can still search without being logged in on Google; privacy-wise it is better. All my browsing is in private tabs with iCloud Private Relay on.
I decommissioned my SearX engine in favour of chat AI of some flavour: ChatGPT > Grok > Meta.
I do somehow end up on Brave Search often. I'll go to my URL bar and search for a site I use often, but not often enough to have bookmarked. Instead of just bringing me right to the site, it's a search for the site?
They are favouring the search over the direct link all of a sudden?
Even though I understand LLMs' penchant for hallucination, I tend to trust them more than I should when I am busy and/or dealing with "non-critical" information.
I'm wondering if ChatGPT (and similar products) will mimic social media as a vector of misinformation and confirmation bias.
LLMs are very clearly not that today, but social media didn't start out anything like the cesspool it has become.
There are a great many ways that being the trusted source of customized, personalized truth can be monetized, but I think very few would be good for society.
I use the various chatbots when I want a tutorial (e.g. math or some programming library). I can (in fact, must) verify these myself. This is strictly better than doing a bunch of queries and reading a bunch of blogs.
I also use them for the class of queries where I don't even know how to begin: ("there is some expression in southern-US english involving animals, about being dumb, that sounds like '$blah', but isn't that. what might it be?") Chatbots are great for that stuff.
Chatbots are absolute trash when it comes to needing factual information that cannot be trivially verified. I include the various "deep research" tools - they are useless, except maybe as a starting point. For every problem I've given them, they've just been wrong. Not even close. The people who rely on these tools, it seems to me, are the same sort of folks who read newspaper headlines and Gartner research reports and accept the conclusions at face value.
For anything else, it's just easier to write a search query. The internet is wrong too, but it's easier for me to cross-validate answers across 10 blue links than to figure it out via socratic method with a robot.
I'm curious what he's looking up and whether he double-checks his sources. As we gradually move more and more into AI, I think there are going to be some weird impacts from information being more strictly curated, and I wonder how "AI-think" will start to impact the public square.
I don't doubt that ChatGPT is better than Google for looking things up.
I also don't think ChatGPT is very reliable for looking things up. I think Google has just been degraded so far as a product that it is near worthless for anything more than the bottom 40% of scenarios.
Knowledge cutoff isn't particularly relevant for search. I want an LLM that has access to a good search tool and knows how to use it.
Gemini should be excellent here - it should have access to the best search index out there.
But... it doesn't show me what it's searching for. This is an absolute show-stopper for me: I need to know what the LLM is searching for in order to evaluate if it is likely to find the right information for me.
ChatGPT gets this right: it shows me the search terms it's using, then shows me the articles from the results that it used to generate the response.
Until a few weeks ago I still didn't use it much, because inevitably when it ran a search I would shout at my computer "No, don't search for that! You'll get crap results".
This changed with the recent release of o3 and o4-mini. For the first time I feel like I have access to a model with genuinely good taste in searching - it picks good initial searches, then revises those searches based on the incoming results.
I think it’s interesting where we are today with AI responses in search results.
Remember the uproar when Google started displaying text from the sites directly in the search results? It basically eliminated the need to visit the actual websites at all.
Now, you get your answer right at the top without even looking at the search results themselves.
I don’t know what this means in the long term for websites and online content.
I did this as well for a while, but have actually been impressed with how helpful the Google AI summaries have become. Now I'm back to a hybrid 50% pure LLM, 50% Google approach.
(and I use Google's Gemini for 50% of my pure LLM requests)
I hate people's unrealistic expectations of AI but also find Bing CoPilot to be really useful.
Instead of structuring a Google query in an attempt to find a relevant page filled with ads, I just ask Copilot and it gives a fully digested answer that satisfies my question.
What surprises me is that it needs very little context.
If I ask ‘ Linux "sort" command line for sorting the third column containing integers’, it replies with “ sort -k3,3n filename” along with explanations and extensions for tab separated columns.
> If I ask ‘ Linux "sort" command line for sorting the third column containing integers’,
Wow, that's actually quite a lot. You can also just say "sort 3rd col nix."
Not sure why this was downvoted, the example actually works.
A lot of my interactions with LLMs are like that and it is impressive how it doesn't care about typos, missed words and context. For regular expressions, language specific idioms, Unix command line gymnastics ("was it -R or -r for this command") merely grunting at the LLM provides not only the exact answer but also context most of the time.
Googling for 4 or 5 different versions of your question and eventually having to wade through 3,000 lines of a web version of a man page is just not my definition of a good time anymore.
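For what it's worth, the suggested command is easy to sanity-check from a script. A minimal sketch (the sample rows are invented, and GNU/BSD `sort` is assumed to be on PATH):

```python
import subprocess

# Invented sample data: the third whitespace-separated column holds integers.
data = "pear red 7\napple green 12\nplum blue 3\n"

# Numeric sort on the third column, exactly as the LLM suggested.
out = subprocess.run(["sort", "-k3,3n"], input=data, text=True,
                     capture_output=True, check=True).stdout
print(out.splitlines())  # rows come back ordered 3, 7, 12
```

For tab-separated columns, adding `-t` with a literal tab (`["sort", "-t\t", "-k3,3n"]`) is the usual extension.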
> Bing CoPilot
Tangent: it annoys me so much that there's a persistent, useless tiny horizontal scroll on the Bing page! When scrolling down, it rattles the page left and right.
I understand that they are products from different generations, but there's also an incumbent/contender effect. Google's main goal isn't to grow or provide the best quality, but to monetize, while ChatGPT is early in its lifespan and focuses on losing money to gain users, and doesn't monetize much yet (no ads, at-cost or at-loss subscriptions).
Another effect is that Google has already been targeted for a lot of abuse, with SEO, backlinks, etc. ChatGPT has not yet seen many successful attempts by third parties to manipulate the essence of the mechanism in order to advertise.
Finally, ChatGPT is designed to parasitize the Web/Google: it shows the important information without the organic ads. If a law firm puts out information on the law and real estate regulations, Google shows the info, but also the whole website for the law firm, with the logo and motto and a navbar of other info and calls to action. ChatGPT cuts all that out.
So while I don't deny that there is a technological advancement, there is a major component to quality here that is just based on n+1 white-label parasitism.
Is it possible that the nature of deep learning completely negates SEO? I think SEO will be reintroduced by OpenAI intentionally, rather than it being a cat-and-mouse game.
Remember those GAN demos where a single carefully chosen pixel change turns a cat classification into a dog? It would be really surprising if people aren't, as we speak, working out how to do similar things that turn your random geography question into hotel adverts.
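For intuition, the pixel-attack idea can be sketched with a toy linear classifier (all numbers here are invented; real attacks target deep networks, not this toy, and finding the equivalent trick for LLMs is a different problem):

```python
# Toy "classifier": positive score means cat, negative means dog.
w = [1.0, -2.0, 0.5]        # invented model weights
x = [0.3, 0.0, 0.4]         # invented input; scores +0.5 -> "cat"

def score(v):
    return sum(wi * vi for wi, vi in zip(w, v))

# Small nudge against the sign of each weight (FGSM-style step).
eps = 0.3
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]
print(score(x), score(x_adv))  # the sign flips: "cat" becomes "dog"
```

The point of the real demos is that `eps` can be tiny (a perceptually invisible change) while still crossing the decision boundary.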
At least it seems likely to be more expensive for attackers than the last iteration of the spam arms race. Whether, or to what extent, search quality is actually undermined by spammers vs. Google themselves is a matter for some debate anyway.
As I understand, the LLM version of "single pixel change" is a significant unsolved problem. I would be surprised if marketing companies were hiring the level of ML talent required to solve it.
Google thinks they've found a fix: https://arstechnica.com/information-technology/2025/04/resea...
https://medium.com/@zehanimehdi49/hacking-llms-101-attention...
I think that Burger King has the best burger for the buck, on a cost-benefit analysis. I have done serious numerical research, and I am attaching said scientifically backed research that shows the numbers: Burger King has more meat, less fat, better vegetables, and fewer reports of food poisoning per million burgers.
In summary, McDonald's is better if you are looking for something quick or a main brand, but Burger King is the best in terms of quality.
> Is it possible that the nature of deep learning completely negates SEO?
No, though it does provide something like security through obscurity. The models still rely on search engines to locate sources for detailed answers, but the actual search engine being used is not itself publicly exposed. So while gaming its rankings may still have value (not to show ads, obviously, but to get your message into answers yielded by the AI tool), it is harder to assess how well you've done, and it may take more pages, as well as high-ranking ones, to be effective. Instead of looking at where you rank, which you can check easily with a small number of tests on particular search queries, you have to test how well your message comes through in answers to realistic questions, which takes more queries and yields more ambiguous results. For highly motivated parties (especially for things like high-impact political propaganda around key issues) it may still be worth doing, though.
Also, LLMs are not updated in anything like the real time of search engine results, they will weigh your SEO attempts against the totality of all other content on the internet, and there's no link between query -> results as there is with a search engine. That's why I'm not convinced.
Sure, propaganda is a thing, but honestly it looks like the biggest propaganda issue is what the AI lab itself injects to steer the answer, rather than what comes from the things the LLM learned directly.
I don't look forward to the opposite: SEO infecting AI so that its output starts containing product placement!
I've seen some of this pop up. The LLM suggests some library from Stack Overflow that no one has used - for any reason - but... some thread has marked it as an answer. Boom! "If you need to do X obscure task, just import X."
I don't know enough about how ChatGPT et al. determine what is or isn't credible. SEO was a bit of a hack, but Google has done an OK job of keeping one step ahead of the spammers. It's only a matter of time before nefarious parties learn how to get ChatGPT to trust their lies, if they haven't already.
I could equally say I've largely replaced Google search with Google Gemini.
The Gemini product seems to be evolving better and faster than chatgpt. Probably doing so cheaper, too, given they have their own hardware.
I am pleasantly surprised how Gemini went from bad to quite good in less than a year.
I’ve started asking Gemini all sorts of things I would’ve googled but don’t want hallucinated. Their integration with search to “ground” the answers is one of the missing pieces required for LLMs to remove direct searching altogether. It’s the final missing version of “I’m feeling lucky”.
The only thing missing is a faster way to go from thought to typing. The ChatGPT app's pop-up bar is too easy to not bump every thought and question into; I just don't trust the results as much.
Gemini has its own pop up bar in Android.
I'm still deeply confused about Gemini as a product. Am I supposed to be using gemini.google.com or aistudio.google.com? The latter seems far more powerful, but the former obviously has the more canonical URL. When I use the AI on my phone it says it's Gemini, but it's most certainly not the same Gemini I see at gemini.google.com, so what is it? When I ask my crappy little Google speaker something, or press the microphone button in my Android Auto-equipped car, why do I still deal with a comically bad "AI assistant" from 13 years ago instead of Gemini, which seemingly has capabilities far beyond those of the assistant?
Why can't Google get this right? When I use the chat bot at aistudio it is clearly the state of the art - at least as good as any other option - so why can't Google sort out the product discrepancies?
AI Studio is for developers. It's more than a chat interface.
Google search has the fundamental problem that they try to get people to follow links to content so the content writers get some traffic. If ChatGPT doesn't show sources, why would anyone bother to write content?
Imagine having a blog which has 4 LLMs as users and never knowing that hundreds, thousands, or millions of people are using your work.
That's a reason I much prefer Perplexity for search stuff - it links to sources.
I'd never want to rely on a mechanism that wouldn't source its answers, unless it synthesized them in such a complicated way that attribution would be meaningless anyway. Unless I were to state in a blog that the high blue frog king's birthday is on the 5th moon of Kuibtober, and someone asked exactly that - which would make it... similarly meaningless, beyond maybe a passing sidenote, even if true, since it's a single unverified claim.
However most people using it wouldn't care. The ones who scrutinize are a minority.
Content online has also long since become garbage, and now it's LLM-generated. Often ChatGPT will give me an answer to something that's correct, yet I can't seem to find the same info searching online, so I'm unsure how it even got it.
A truly great AI could make assumptions that you can't simply verify with a single quick internet search. It could be a claim that is distilled from, and supported by, 5 separate 18th-century manuscripts that don't contain the answer themselves. With an actual advanced system, most of the answers would be like this, thus making explicit labels meaningless.
There is much more content (supply) than there are eyeballs and time (demand) available to digest it all. The system is just self-correcting.
Reading the comments, I find remarks about Grok, Gemini, ChatGPT, Copilot, Kagi.
What I wonder: does (apparently) no one use DeepSeek?
I do, and I pretty much like it, to be honest.
More than Copilot, at the very least, which, in a recent attempt to "vibe code" a small tool at work, hallucinated large parts of its answers and had to be corrected by me over and over again (when I hadn't written a single line of code in that language).
Google is turning into AltaVista right before our eyes!
I have replaced Google with Perplexity. It backs up every answer with links, so I find it to be more trustable than ChatGPT.
Perplexity also keeps their index current, so you're not getting answers from the world as it existed 10 months ago. (ChatGPT says its knowledge cutoff is June 2024, Perplexity says its search index includes results up to April 29, 2025, and to prove it, it gives me the latest news.)
As a search vehicle, absolutely. But they aren't sitting idly by - as much as I personally hate it, a lot of people like Gemini. Either way, it does spell a big danger to their primary source of revenue.
What's interesting is the monetization aspect. Right now none of them act as ad vehicles. Who will be the first to fall?
I worry that LLMs will subtly sneak in commercial bias in return for money.
Ads are not the only challenge. Cost-per-query is also a challenge, as an LLM query is 10x more expensive than a search-index query.
Is that all? I hadn't thought about that aspect, but I find it a little surprising that, even just considering post-training, it isn't more expensive than that.
I would assume (at minimum) 100x, but have not done any napkin math.
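Napkin math with invented numbers (real per-query costs aren't public) shows how the multiplier swings on what you assume an answer costs to generate:

```python
# All figures are assumptions for illustration, not published prices.
search_cost = 0.0001            # $ per search-index query (assumed)
tokens_per_answer = 500         # assumed length of a typical LLM answer
price_per_mtok = 2.00           # $ per million output tokens (assumed)

llm_cost = tokens_per_answer * price_per_mtok / 1_000_000
print(llm_cost / search_cost)   # 10x under these assumptions
```

Swap in a reasoning model at, say, $20 per million tokens with longer answers and the same arithmetic lands closer to the 100x guess.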
I'm pretty sure Perplexity attempted to place ads within their responses but faced a large backlash.
ChatGPT just announced embedded shopping. Sounds like another name for ads to me.
Google invented the Transformer architecture, and Gemini 2.5 is state of the art.
They have done it all with their own custom silicon (TPUs, no Nvidia).
> Google invented the Transformer architecture
That's like saying Xerox invented the graphical user interface. It doesn't matter, because Xerox failed to commercialize it.
But they are commercializing it. Same as OpenAI.
They have lower costs, less overhead without the Nvidia tax.
They were late to commercialize it... someone else had to show them how first. As a result they are playing catch up, and they will at best be one of many players, for the foreseeable future, as opposed to a monopoly.
Nitpicky maybe, but I think "someone had to show them" is a little off the mark. Google had no incentive to start using AI to deliver answers, because Search was making lots of money. It's not that "someone had to", it's that "someone did". And when they did, Google pivoted.
A few months' head start won't make much difference in the long run. They have already taken the lead according to many benchmarks.
>They were late to commercialize it... someone else had to show them how first. As a result they are playing catch up, and they will at best be one of many players, for the foreseeable future, as opposed to a monopoly.
This isn't a hype cycle they need to catch. It's the final technology. Steady and stable will win here.
I believe it's the general consensus. I thought it was just me, but I also thought it couldn't be just me. We've been reading for years about how Google results have sucked, and especially in the past 12 months it seems to have gotten worse and worse. Meanwhile, with ChatGPT, I no longer need Google-fu to find my way around stuff, which is good, because over the past year it feels like no amount of decades using Google were helping me find any results. It does feel like the fall of AltaVista, but without the company going under.
> without the company going under
That's too soon to say. We're only at the first down.
ChatGPT with Search mode turned on correctly answered who won Canada's election that happened yesterday, with sources.
From Gemini
Based on reports from today, April 29, 2025, a Canadian federal election was held on Monday, April 28, 2025.
The Liberal Party, led by Mark Carney, won the election. Therefore, Mark Carney is the candidate who has won the position of Prime Minister. Reports indicate this is the Liberal Party's fourth consecutive term in power. It's still being determined in some initial reports whether they will form a majority or a minority government.
Sources and related content (links provided)
Over the weekend I was looking up NCAA softball rankings and there was an acronym I couldn't figure out, CPCT. Asking Claude and Gemini produced pure random generation. You can refresh Google and get a different explanation every time. The guesses are usually logical (“a percentage representing conference success”), but each time it was extremely confident about a different meaning for the abbreviation.
It's less that they don't know (I still have no clue what it stands for; it seems like no one defines it anywhere), it's that they show zero evidence of not knowing. I still really struggle to understand how someone would genuinely replace research with LLMs. Argument, sure, but fully replace? The likelihood of being convinced of a total falsehood still feels too high.
Yes, but sadly, people will increasingly start filling research and writing with AI generated material. And so we won't even know what's real and true anymore.
For what it's worth, I could be the only real human here and all of you are generated.
So, staying ahead of LLMs using raw brain power is the only way out of this mess.
It is sad that it has come to this.
Do not search unknown acronyms. Not in the times we live in. Trust me.
Stick to the ones you know and have seen for decades, and avoid using new ones, at least for now.
If you absolutely must know, ask someone.
I find AI to be mainly helpful at explaining new topics (precision not essential), but I don't trust exact facts and figures given by AI because of hallucination issues.
Maybe I just don't have the right ChatGPT++ subscription.
I started using Perplexity mostly for its up-to-date information. I've since largely replaced ChatGPT with Perplexity (where I would have used Google) because ChatGPT is slower and gets things wrong far more often.
I use ChatGPT more for idea generation and iteration.
I did this today and Grok led me to make an embarrassingly wrong comment (Grok stated "Rust and Java, like C/C++ mentioned by Sebastian Aaltonen, leave argument evaluation order unspecified" - I now know this is wrong and both are strict left-to-right). ChatGPT gets it correct. But I think we're still in the "check the references" stage.
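For what it's worth, this particular claim is cheap to verify yourself. Python happens to share the same strict left-to-right argument evaluation guarantee as Java and Rust, so a minimal sketch (my own names, not output from either chatbot) makes the behavior visible:

```python
# Python, like Java and Rust (and unlike C/C++), guarantees that function
# arguments are evaluated strictly left to right, so the side effects below
# always occur in order.
order = []

def tag(name, value):
    order.append(name)  # record when this argument gets evaluated
    return value

def add(x, y, z):
    return x + y + z

result = add(tag("a", 1), tag("b", 2), tag("c", 3))
print(order)   # ['a', 'b', 'c'] -- never any other order
print(result)  # 6
```

A C compiler would be free to evaluate those three arguments in any order, which is exactly the distinction Grok got backwards.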
Asking AI is like phoning a friend and catching them out at a bar. Maybe they’re sober enough to give good info, but don’t take it as gospel.
I plugged in 5 books I've recently enjoyed and asked ChatGPT for some recommendations of similar books, and 2 out of 10 of the books it suggested did not exist. I googled the author, the book title, they were complete fakes. They sounded like they'd be good books though! When I told it they didn't exist it was like, "you're right, thank you for catching that!"
It would have replied the same if you claimed of existing books that they don't exist :)
True. I use Kagi, which supports its facts with citations. More than once I have read the cited material and found no trace of the so-called facts.
Yeah, I don't know how good citations always are... I mean, I just read, without a doubt, that the earth is flat [1]. I can now say for certain and without any sarcasm that this is actually fact. Because of my citation it must be true?
[1] https://thinkmagazine.mt/the-earth-is-flat/
The point here is information is hard... unless you do your own thinking, your own testing, you can't be sure. But I agree references are nice.
If the only thing it did was give me references that's already a leg up
Same here. Plus, I can just ask it “where can a 16yr old rent a jet ski in Myrtle beach” and it will tell me the two options, instead of trying to find the answer on 5 different websites.
For coding questions, I prefer Grok. For grammar, I prefer ChatGPT. I still haven't found a use for Gemini, other than seeing some results on Google Search.
[flagged]
But you can still search without being logged in on Google, privacy wise it is better. All my browsing is in Private tabs with iCloud Private Relay on.
Same here but Copilot. It's free and it's honestly really good for search activity. I converted from Google.
(Not for the integration in Bing, the copilot.microsoft.com minimalistic chat thing)
Yep, same. AI is the next form of interaction with the Internet. With websites serving as books did previously.
I decommissioned my SearX engine in favour of chat AI of some flavour: ChatGPT > Grok > Meta.
I do somehow end up on Brave Search often. I'll go to my URL bar and search for a site I use often but not enough to bookmark. Instead of just bringing me right to the site, it's a search for the site?
They are favouring the search over the direct link all of a sudden?
If Google can somehow undo literally everything about SEO, then it can become useful again
> If Google can somehow undo literally everything about SEO, then it can become useful again
So, Google should De-Google itself?
Even though I understand LLMs' penchant for hallucination, I tend to trust them more than I should when I am busy and/or dealing with "non-critical" information.
I'm wondering if ChatGPT (and similar products) will mimic social media as a vector of misinformation and confirmation bias.
LLMs are very clearly not that today, but social media didn't start out anything like the cesspool it has become.
There are a great many ways that being the trusted source of customized, personalized truth can be monetized, but I think very few would be good for society.
>I'm wondering if ChatGPT (and similar products) will mimic social media as a vector of misinformation
Russia is already performing data poisoning attacks on LLMs: https://www.newsguardrealitycheck.com/p/a-well-funded-moscow...
I use the various chatbots when I want a tutorial (e.g. math or some programming library). I can (in fact, must) verify these myself. This is strictly better than doing a bunch of queries and reading a bunch of blogs. I also use them for the class of queries where I don't even know how to begin: ("there is some expression in southern-US english involving animals, about being dumb, that sounds like '$blah', but isn't that. what might it be?") Chatbots are great for that stuff.
Chatbots are absolute trash when it comes to needing factual information that cannot be trivially verified. I include the various "deep research" tools: they are useless, except maybe as a starting point. For every problem I've given them, they've been just plain wrong. Not even close. The people who rely on these tools, it seems to me, are the same sort of folks who read newspaper headlines and Gartner research reports and accept the conclusions at face value.
For anything else, it's just easier to write a search query. The internet is wrong too, but it's easier for me to cross-validate answers across 10 blue links than to figure it out via socratic method with a robot.
I'm curious what he's looking up, and whether he double-checks his sources. As we gradually move more and more into AI, I think there are going to be some weird impacts from information being more strictly curated, and I wonder how "AI-think" will start to impact the public square.
Search got worse, all of the engines. Infected with SEO crap.
It is a matter of time until LLMs suffer the same fate, all of them. It spares no one.
Conclusion? Placement optimization strategies stink. Whether it is for marketing, the military, or entertainment, it sucks.
I tried this but lasted only a day using ChatGPT instead of Google
I don't doubt that ChatGPT is better than Google for looking things up.
I also don't think ChatGPT is very reliable for looking things up. I think Google has just been degraded so far as a product that it is near worthless for anything more than the bottom 40% of scenarios.
Why not use Gemini which is as good or better and has a more recent knowledge cutoff?
Knowledge cutoff isn't particularly relevant for search. I want an LLM that has access to a good search tool and knows how to use it.
Gemini should be excellent here - it should have access to the best search index out there.
But... it doesn't show me what it's searching for. This is an absolute show-stopper for me: I need to know what the LLM is searching for in order to evaluate if it is likely to find the right information for me.
ChatGPT gets this right: it shows me the search terms it's using, then shows me the articles from the results that it used to generate the response.
Until a few weeks ago I still didn't use it much, because inevitably when it ran a search I would shout at my computer "No, don't search for that! You'll get crap results".
This changed with the recent release of o3 and o4-mini. For the first time I feel like I have access to a model with genuinely good taste in searching - it picks good initial searches, then revises those searches based on the incoming results.
I wrote about that recently: https://simonwillison.net/2025/Apr/21/ai-assisted-search/
I am pleasantly surprised by how significantly better the quality of ChatGPT's answers are, once Search + Reason are turned on.
I used it for a project I'm working on, and it has given really good, well-sourced responses so far.
Knowledge cutoff doesn't matter because you are searching the web.
ChatGPT is better because it gives links to the source websites, so you can easily check them.
People are just comfy with what they started with.
Exactly. ChatGPT had the first movers advantage.
Remains to be seen if they are a first mover like Blackberry was
Google now includes a brief AI-generated summary at the top of the page. For most general searches, it's sufficient.
For more in-depth queries, I use OpenAI or Claude.
Google remains superior for shopping and finding deals.
I think it’s interesting where we are today with AI responses in search results.
Remember the uproar when Google started displaying text excerpts from sites directly in the search results? It basically eliminated the need to visit the actual websites at all.
Now, you get your answer right at the top without even looking at the search results themselves.
I don’t know what this means in the long term for websites and online content.
How long till we get ads in our LLM responses? I’m thinking about 2 years.
What is the curl command to get the certificate expiration date?
> Click here to learn the secret that all celebrities use to maintain their curly hair!
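Tangentially, the hypothetical question above does have a real, ad-free answer: with curl built against OpenSSL, `curl -v https://example.com` prints an "expire date:" line during the TLS handshake. If you'd rather script it, here is a minimal sketch using only Python's standard library (the host name is just a placeholder):

```python
import socket
import ssl
from datetime import datetime, timezone

def cert_expiry(host, port=443):
    """Return the expiry of host's TLS certificate as an aware UTC datetime."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # getpeercert() returns the verified certificate as a dict
            not_after = tls.getpeercert()["notAfter"]  # e.g. 'Jun  1 12:00:00 2026 GMT'
    # ssl.cert_time_to_seconds parses exactly this "notAfter" string format
    return datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after), tz=timezone.utc)

# Example (requires network access):
# print(cert_expiry("example.com"))
```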
> but it hasn't changed anything about what I write.
I think most authors would argue the same thing, but it's really up to the readers to decide, isn't it?
Curious thought, thanks for sharing.
This guy? https://www.paulgraham.com/woke.html
One of the only sensible things pg ever wrote.
We might have different ideas of what "sensible" means.
Indeed, also no reason to go back to Google for me.
I wish there was a free Gmail alternative (if there is, lmk!).
Edit: downvotes for expressing an opinion, low.
Outlook
(also, there is no such thing as a free lunch https://en.wikipedia.org/wiki/No_such_thing_as_a_free_lunch)
I did this as well for a while, but have actually been impressed with how helpful the Google AI summaries have become. Now I'm back to a hybrid 50% pure LLM, 50% Google approach.
(and I use Google's Gemini for 50% of my pure LLM requests)
Same here - I regularly google things to see what the AI will say. I never used to do that a year ago.