I quote: "The single biggest mistake that I see is that people start with a technology first and then try to find a use case for it, which is completely the wrong way around. The best thing to do is to look at your business first and figure out where the pain points are. Where are those tasks that employees are really frustrated with, where are the processes that slow things down, or the bottlenecks that limit your growth. And once you have identified all those, figure out which ones are repetitive, which ones you can put some rules around, and that is probably the best way to a set of initial AI use cases."
From none other than this guy... who is not only a rich man, but a British politician who served as Prime Minister of the United Kingdom and Leader of the Conservative Party from 2022 to 2024.
The hard problem is that none of that is itself a use case for the currently existing LLM models. Finding those use cases, however, is.
The LLM models were created to win the Turing Test. That is a test in which humans judge whether they are talking with a human or a computer. Some genius must have understood that humans do not say "one plus one equals three" while being convinced they are right. This way the resulting AI is capable of producing logical statements. This appears to be human intelligence, but it is not.
These AI models are a great tool, one that sounds human while staying bound to strict logic, but they are no human.
Other reports suggest that AI models make errors in roughly 20% of their decisions.
The CEO of the Marmalade Marketing Agency is a great example. They should use AI models for copywriting, but not to replace the copywriter. Instead, the copywriter who produces radio spots, online ads, flyers and all kinds of marketing and advertising copy gets a text generator that helps them work much faster, create more variations and ultimately offer the client a much better text.
The large online AIs can even help them understand what the product actually does, which helps a lot when writing good ad copy, especially for IT services.
Repetitive tasks are not an LLM job. They call for a rule and a software script embedded in the computer system or triggered by a command. Bottlenecks are a matter of discussion with the humans involved. An online AI model can help get everyone on the same page by explaining technical terms.
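To illustrate the point about rules and scripts, here is a minimal sketch of what "putting rules around a repetitive task" looks like in practice. The task, file names, and folder names are hypothetical; the point is that a deterministic script handles this with zero model calls and zero guesswork, where an LLM would only add cost and error rate.

```python
import re

# Hypothetical filing rules for an imaginary office: pattern -> target folder.
# Deterministic rules like these handle repetitive tasks reliably, every time.
RULES = [
    (re.compile(r"^invoice[_-]\d+", re.IGNORECASE), "accounting/invoices"),
    (re.compile(r"^timesheet", re.IGNORECASE), "hr/timesheets"),
    (re.compile(r"\.log$"), "it/logs"),
]

def route(filename: str) -> str:
    """Return the target folder for a file; unmatched files go to human review."""
    for pattern, folder in RULES:
        if pattern.search(filename):
            return folder
    return "inbox/review"

print(route("Invoice_2024-001.pdf"))  # accounting/invoices
print(route("server.log"))            # it/logs
print(route("photo.jpg"))             # inbox/review
```

Anything the rules cannot classify lands in a review folder for a human, which is exactly the division of labour the quote gets backwards: rules for the repetitive part, people (perhaps assisted by an AI) for the rest.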
This is human training, but in how to ask questions. This is plain Socrates as a teacher. It can be a threat to some parts of society: it creates humans who ask the questions that get me fired or limit my career path, while an AI cannot be offended or jealous.
Seriously.
These AI models can do a lot. Used for reasoning, they are great. Used to replace human decisions, they will produce failure.
Fish can't climb trees.