That's specifically NOT a large language model though - it's an agentic design where an LLM takes your request for information, decomposes it into smaller sub-queries, searches the internet on your behalf, and then composes a result based on those findings. It's a just-in-time retrieval augmented generation solution, with the search agent powered by the Gemini foundation model.
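If it helps to see the shape of it, here's a minimal sketch of that decompose-search-synthesize loop. The `llm()` and `web_search()` helpers are stand-in stubs I made up, not Gemini's actual API - swap in whatever model client and search backend you actually have.

```python
# Sketch of the decompose -> search -> synthesize loop described above.
# llm() and web_search() are placeholder stubs, NOT a real vendor API.

def llm(prompt: str) -> str:
    """Stub: call your LLM of choice here and return its text output."""
    raise NotImplementedError

def web_search(query: str, top_k: int = 5) -> list[str]:
    """Stub: call a search API and return the top_k result snippets."""
    raise NotImplementedError

def answer(question: str) -> str:
    # 1. Decompose the user's ask into smaller, searchable sub-queries.
    sub_queries = llm(
        f"Break this question into 3-5 short web search queries, one per line:\n{question}"
    ).splitlines()

    # 2. Retrieve evidence for each sub-query.
    evidence = []
    for q in sub_queries:
        evidence.extend(web_search(q))

    # 3. Synthesize a grounded answer from the retrieved snippets.
    context = "\n".join(evidence)
    return llm(
        f"Using only the sources below, answer the question.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
```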
That specific solution is literally just a tech demo I show to customers who need an idea of what LLMs can do. It turns out it's incredibly powerful to run feature extraction over your corporate corpus of documents and turn it into a corporate rule citation machine - I'm also building a local RAG engine (I'm not a genius, I'm just following a GitHub repo) for tabletop game rules I know inside and out, as a "this can't work, can it...?" experiment. In Google's case, their big value add is that they already own a MASSIVE index of information and documents from crawling the web for literal decades.
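For what it's worth, the retrieval core of those repos is surprisingly small. This is a rough sketch, not the specific repo I'm following - the library choice (sentence-transformers), the toy rule chunks, and leaving the generation step as just a prompt string are all my assumptions.

```python
# Minimal local-RAG retrieval core for a rules citation machine.
# Embedding library and chunking scheme are illustrative assumptions;
# feed the returned prompt to whatever local LLM you're running.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model

# Pretend rulebook, chunked into individually citable passages.
rule_chunks = [
    "Rule 7.2: A unit may not move and shoot in the same activation.",
    "Rule 9.1: Terrain with the 'dense' trait blocks line of sight.",
    "Rule 12.4: Ties on initiative rolls are re-rolled until broken.",
]
chunk_vecs = model.encode(rule_chunks, normalize_embeddings=True)

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Return the top_k rule passages most similar to the question."""
    q_vec = model.encode([question], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q_vec  # cosine similarity (vectors are normalized)
    best = np.argsort(scores)[::-1][:top_k]
    return [rule_chunks[i] for i in best]

def cite(question: str) -> str:
    """Build the prompt you'd hand to a local LLM, with retrieved rules inline."""
    passages = "\n".join(retrieve(question))
    return (
        f"Answer using ONLY these rules, and cite the rule numbers:\n"
        f"{passages}\n\nQuestion: {question}"
    )

print(cite("Can I shoot after moving?"))
```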
In both cases, the LLM is the mechanism being used to sift through a quantity of information that is otherwise too dense for a human to parse. THAT is the core utility of LLMs, and something I don't think is ever going to go away. It's just that that's NOT what is being sold right now. There's a lot of pixie dust and snake oil being peddled.