AI had the potential to be a place to query for undeniable, mathematically-backed truths. That was until the training data switched from facts to perceptions.

We almost had a thing, backed by math. It had the potential to be a “truth machine”: a place we could query for the probabilities of outcomes or the certainty of various facts.

Sadly, the push for “AGI” created too much pressure to feed it all the garbage of the internet: the kingdom of opinions and few facts.

I wish we had an AI that informed the user when it was providing factual, outcome-proven statements versus probabilistic, token-level information.

But then I guess “facts” or “outcomes” are actually too coarse for the AI we are dealing with. We are dealing with a token-level “intelligence”.

Maybe there’s still an opportunity for an outcome-based AI? Maybe there will never be enough data. But I’d rather have an AI that indicates when the claims it is making are statistically “true” or significant, and when it lacks the data for statistical significance.

I’m not saying it shouldn’t answer when it doesn’t have enough data; I’m suggesting it should make it clear when a claim is meant to be statistically “true”.
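To make the idea concrete, here is a minimal sketch of the kind of labeling I mean. All the names are hypothetical, and the test is just a textbook normal-approximation check of an observed rate against a claimed rate; the point is only the shape of the output: a claim plus an honest label about the data behind it.

```python
import math
from dataclasses import dataclass
from typing import Optional

@dataclass
class LabeledClaim:
    text: str
    label: str              # "consistent with the data" | "contradicted by the data" | "insufficient data"
    p_value: Optional[float]

def label_claim(text: str, successes: int, trials: int,
                claimed_rate: float, alpha: float = 0.05) -> LabeledClaim:
    """Two-sided z-test (normal approximation) of an observed rate vs. a claimed rate."""
    # Rule-of-thumb check that the normal approximation is even usable;
    # if not, say so instead of pretending.
    if trials * claimed_rate < 10 or trials * (1 - claimed_rate) < 10:
        return LabeledClaim(text, "insufficient data", None)
    observed = successes / trials
    se = math.sqrt(claimed_rate * (1 - claimed_rate) / trials)
    z = (observed - claimed_rate) / se
    # Two-sided p-value from the standard normal CDF.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    label = "consistent with the data" if p >= alpha else "contradicted by the data"
    return LabeledClaim(text, label, p)

# Hypothetical usage: same claim, two very different labels depending on the evidence.
print(label_claim("Drug X works in ~70% of cases", successes=143, trials=200, claimed_rate=0.7))
print(label_claim("Drug X works in ~70% of cases", successes=5, trials=7, claimed_rate=0.7))
```

That’s all I’m asking for: not a refusal to answer, just a label that tells me whether the claim rests on outcomes or on vibes.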

I want to have hope. But I don’t have hope for this dream. Most people don’t want actual statistical truth, especially those in power. They want alignment with their beliefs and desires. They believe they are “above” statistics.

People like “dumb” workers that do as they’re told. And the more these workers can do (i.e., AGI), the “better”.