We have a huge problem, and I am once again among its victims. These guys dominate the current AI strategy. They are one example out of many, not a specific one.
The problem is that these guys get millions in funding while my Apache-2.0-licensed project gets no attention at all.
But they are wrong.
Just as I never managed to get through to one of them about the problem of credit-based money, or about what a database actually is, they keep failing to understand that these AI models are Reasoning Systems.
Credit money is easily explained. I am the Central Bank. In a simplified example, I have two Banks I give money to. I charge each of them 10%. That means I want back from my market 100% of what I put in, plus twice 10%. The mismatch is 100% vs. 120%.
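The arithmetic above can be sketched in a few lines. This is a minimal illustration that takes the text's simplified framing literally: a total sum M enters the market through two banks, and each bank's 10% charge is reckoned against that total. The amounts and variable names are hypothetical.

```python
# Simplified credit-money mismatch, per the text's framing (assumption:
# each bank's 10% is charged against the total M put into the market).
M = 100.0    # total money the Central Bank puts into the market
rate = 0.10  # interest charged to each of the two banks
banks = 2

owed = M + banks * rate * M  # 100% of M plus twice 10% of M

print(f"put into the market: {M}")   # 100.0
print(f"demanded back:       {owed}")  # 120.0
print(f"mismatch:            {owed - M}")  # 20.0 -> money that was never created
```

The mismatch line is the point of the example: the market only ever received M, yet 1.2 × M is demanded back.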
A database is a structured file in which the first line defines the separator and the names of the separated fields: the base for the data, as in "data base". The base of the data in a database system is that structured file. The software, or logic, of the database system displays the contents of that very file and lets me sort them by several means, like clicking buttons or typing a query language.
Now AI.
It does not matter whether I am staring at the IT apprentice who gives me a hard time, tells me his version while calling me stupid, points out that the only degree I ever completed was in a workshop-craft way of using computers, and refuses every argument; or whether I am in a bar, under the influence of Vodka Mules and Gin Tonics, telling a buddy about money in a borderline occult conversation;
But when, in the worst systemic crisis since the World Wars, an entire generation of decision-making IT experts keeps continuously spreading misconceptions about a new technology that is attracting billions in investment money overall, we might be hammering the last nail into a coffin bound for a grave engraved "The West".
You want to listen to the Argument instead of the Status, no matter our Status, in Total Chaos. Trust us. Your Soldier Boys & Affiliates.
Jon Rambo Junior.
#cyberpunkcoltoure
PS: One side says the AI models are chat bots; the other side tries to turn them into human computer workers. So here is what happened in the OpenAI labs, the next Palo Alto Labs: someone got really into the Turing Test. His goal was to create a system that cannot be distinguished from a human when you chat with it. Therefore, what the bot says must make sense. There is no round square. As a side effect, the system reached a level of being capable of Reasoning, which is to "think, understand, and form judgements logically". "Humans do not reason entirely from facts", and these AI systems reason only based on their data points. They reached a level of "NO WAY THIS IS A HUMAN!"
That being said, they will not have creativity, abstraction capability, or the ability to correct their own data points. They are dependent on how you speak to them relative to their data points.
They will be perfect at chit-chatting; they will be great at reasoning through theories by purely logical means, and can therefore present internet search results in the most human-understandable way, or find flaws in a chain of thought.
They will never solve a problem posed as "It does not work anymore.", which requires an entire strategy of abstract, creative thinking that understands a whole cosmos of an environment.
But "Go Fuck Yourself", everybody understands about equally. Innit??
#cyberpunkcoltoure #thedarkmodernity