Forum Moderators: Robert Charlton & goodroi
Let's embrace it and see how we can use it to our advantage.
EU countries have been excluded.
I'm experimenting now, and that's the way forward. Try to weave in unique material so that it is picked up by the bot and then presented. These experiments take time to run; if everybody tries, we'll see results sooner.
IMO, AI at the search-engine level cannot handle the compute required to do that at a global, site-by-site scale.
This isn't true. In fact very little additional work is required to include live data in AI search results.
It's true that advancements in RAG (Retrieval-Augmented Generation) systems make real-time, live-data integration feasible by pulling the most relevant information and feeding it into large language models (LLMs). This approach bypasses the need to constantly retrain the LLM itself on vast data sets, which would be computationally unsustainable. Instead, RAG selectively retrieves and combines the latest data at query time, which is far less expensive than reprocessing everything from scratch.
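As a rough illustration (not how any particular engine implements it), the retrieval-then-prompt flow described above can be sketched in a few lines of Python. Keyword overlap stands in for real embedding search, and the "generation" step is just assembling the prompt that would be sent to an LLM; the documents and query are made up:

```python
# Toy sketch of the retrieval step in a RAG pipeline.
# A production system would use vector embeddings and an LLM API;
# the key point is that fresh data reaches the model via the prompt,
# not via retraining.

def tokenize(text):
    return set(text.lower().split())

def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query; return the top k."""
    q = tokenize(query)
    scored = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return scored[:k]

def build_prompt(query, documents):
    """Combine retrieved context with the user's question into one prompt."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Store hours updated today: open 9am to 7pm.",
    "Our returns policy allows refunds within 30 days.",
    "Live inventory: 14 units of the blue widget in stock.",
]
print(build_prompt("what are the store hours today", docs))
```

The retrieval call is cheap compared to training, which is the trade-off the post is pointing at; the open question is running billions of such retrievals concurrently.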
However, achieving the level of global scale that Google or other major engines require would necessitate a highly optimized and robust infrastructure to process billions of live retrievals simultaneously.
So while the RAG system theoretically enables more real-time answers, the costs and logistical challenges of deploying it globally on the level Schmidt envisions are substantial.
My thoughts are that significant opportunities exist in offering the personalization that Google AI Overviews may lack (more inputs needed), and in focusing on user-centric, niche, and localized content, plus authority and trust in specialized, real-time content.
This is exactly correct; there is significant opportunity. IMO it lies in applying LLMs to private data, for example.
It's coming, so let's embrace it and see how we can use it to our advantage.