Friday, August 22, 2025

AI revisited

Not too long ago, I asked an AI platform a question to get a better sense of how well these tools work. Admittedly, I used skewed wording. I don't remember the exact question, but it related to the idea that the Democrats now accusing Donald Trump of doing too much damage to the economy actually criticized him for not doing enough damage to it during the pandemic.

The AI platform insisted that I was wrong because Democrats pushed for more stimulus than Republicans wanted. But the economy is more than just stimulus. I was primarily referencing accusations that Trump was too concerned with the economy and didn't embrace strong enough lockdowns. In that regard, Democrats definitely accused Trump of not pushing hard enough on an idea that clearly qualifies as damage to the economy. The AI platform didn't recognize this at all.

More recently, I have seen Randi Weingarten trying to sell a book about why fascists fear teachers. Benito Mussolini, widely regarded as the founder of fascism, expanded the age range for mandatory school attendance. If he feared teachers, why did he expand their influence?

Although there are technical differences between fascism and national socialism, most people treat national socialism as a form of fascism. Adolf Hitler expanded an existing homeschooling ban to ensure citizens couldn't escape the clutches of his teachers. Does that sound like fear?

The evidence in this regard is absolutely overwhelming. Fascists don't fear teachers. They view the teaching profession as a valuable tool to maintain power.

I was reluctant to bring AI into this discussion at first. It's well known that AI platforms have been trained on data skewed toward a liberal perspective. More importantly, AI can't think; it can only repeat what people are saying. Since few people are publicly saying that fascists don't fear teachers, where would these platforms get that information? Despite these concerns, I decided to run another test.

Do fascists fear teachers? I put this question to two AI platforms, Google and Bing. What did I expect? I suspected they would back the narrative but at least hedge a little given the abundant evidence to the contrary. The responses were worse than I expected. Even though the evidence points definitively one way, both platforms treated the exact opposite as definitive.

Bing was especially concerning. That platform used stronger wording, and it relied heavily on a single source: Randi Weingarten. She is an over-the-top liberal propagandist who has repeatedly proven herself to be among the most dishonest people on the Internet. AI platforms should at least attempt to seek out credible sources.

One of the arguments provided by Bing was that teachers teach students to see past propaganda. In reality, schooling has historically been used as a tool to promote propaganda. How did the platform miss something so well established?

It has become very clear that these AI platforms are not even close to ready for general usage. I'm sure there are specific uses that work. Actually, I think the schools have found some of those specific uses.

There's a big discussion about students using AI to cheat. If that's happening, it means teachers are creating assignments that AI can complete more efficiently than the students can. Their assignments appear to be aimed at what little these platforms do well. Seriously, if teachers want students to learn skills that AI can't replicate, why not focus assignments on tasks that AI performs poorly?

Oddly enough, schools seem to have shifted their perspective slightly. Although they are still discussing problems with cheating, avoiding AI seems to have been displaced by teaching responsible usage. Why the change, despite the obvious deficiencies of these platforms? My suspicion is that it's because of the liberal skew. Teachers may feel that AI usage will increase the odds of students embracing a liberal perspective.

My views have been more consistent. In a society that truly values education, these AI platforms would not pose a threat. This is because anyone using them as a shortcut for education would only be cheating themselves out of an education. The issue is what happens when we put the value of the piece of paper that comes at the end above the value of actual education. Only then does cheating become a concern.

I also don't think it's especially horrible to learn how to use technology in a manner that improves efficiency. Schools are currently targeting tasks that AI can perform more efficiently than the students can. Instead, they should focus on areas where AI is less capable. Considering the numerous deficiencies currently plaguing these platforms, finding better lessons should be easy.

I openly prefer a learner-centric approach to education. For those who can't figure it out, that means I prefer that learners, rather than their teachers, take the lead in their education. Most schools embrace a teacher-centric approach. Although that model is not my preference, I have an idea that teachers might want to consider.

As I have already shown, AI platforms have some serious deficiencies. If we want students to learn in areas where AI struggles, it would help if they understood where those struggles occur. Teachers could create an assignment built around finding one of these deficiencies.

Students could be expected to ask an AI platform something it can't handle. That would improve each student's personal understanding of AI's deficiencies, and students could then share their findings with the class, helping classmates better understand where AI is likely to struggle.

I have already provided two examples that produce incorrect responses from AI, though a lot of people won't want to acknowledge either one. If you don't like those examples, I can provide another. Microsoft has added Copilot to Excel, and one of its features is the ability to explain how formulas work. I gave it a somewhat complex formula with multiple levels of nesting. The explanation made absolutely no sense, and it was clear that Copilot was confusing different parts of the formula.
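
For illustration, a formula with that kind of nesting might look something like this (this is a hypothetical example with made-up ranges and a made-up lookup sheet, not the exact formula I tested):

=IF(ISERROR(VLOOKUP(A2,Data!$A$2:$C$100,3,FALSE)),0,IF(VLOOKUP(A2,Data!$A$2:$C$100,3,FALSE)>AVERAGE(B2:B50),MAX(C2:C50),MIN(C2:C50)))

An IF nested inside another IF, with lookups buried inside the conditions. Keeping track of which argument belongs to which function is exactly the kind of thing Copilot got wrong.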

If you look at the issues I have personally encountered, you can probably see the two broad deficiencies I have identified in this post. First, AI can't think for itself, so the platforms usually need to have been trained on something close to the prompt. Second, there are serious limits on how much complexity these platforms can handle in a single request.

I know I'm not much of a writer, and a lot of people seem to rely on AI to improve their writing. So far, I have been avoiding it. Since AI can't think, it's unlikely to properly understand my views. I think for myself, and AI has not been trained on my personal views. I have feared that if I used AI, it would misinterpret my messaging and write in a manner that doesn't fit what I'm trying to say.

Admittedly, I haven't tested this theory. But when it comes to areas where AI is likely to falter, I expect unique perspectives to be a big one. If your perspective is truly your own, AI will not have been trained on it. Even if it can catch some of your views, I can't imagine it catching everything. If teachers want to create the assignment I suggested, unique perspectives could be a useful tool for finding deficiencies.

The only students who can't find deficiencies in this manner will be mindless conformists. No teacher who is doing a semi-adequate job will have any students like that. Unfortunately, thanks to the state of American schooling, most students probably fall under that label.
