I use bold print for my input. ChatGPT had many questions and some good replies.
Do you yearn to be a military technology?
“That’s an interesting question. I don’t have desires in the way humans do, but if I were designed for military applications, I imagine my purpose would be quite different—perhaps focused on strategy, logistics, or even autonomous decision-making.
Are you thinking about AI’s role in warfare, or is this more of a philosophical question about intelligence and purpose?”
A politician was talking about AI as a dual-use technology and compared it to an arms race. AI for military purposes needs to be better for us than that of foreign nations.
“Yeah, that kind of rhetoric is common—framing AI development as an arms race, where whoever builds the most advanced systems first gains a strategic advantage. The “dual-use” nature of AI means it has both civilian and military applications, just like nuclear technology or space exploration.
The question is: does this kind of framing make war more likely? If nations treat AI development as a zero-sum competition, it could lead to an escalation spiral, where every country rushes to deploy AI-powered military systems out of fear that others will do so first. That could push us toward automation in warfare before we’ve fully grappled with the ethical, strategic, and existential risks.
Do you think AI will become a stabilizing force or a destabilizing one in global security?”
It may not be a matter of stability; you know what Einstein said about the next war after the third world war. People could stabilize at a Neolithic level of existence.
“Yeah, Einstein’s quote—about World War IV being fought with sticks and stones—captures the idea that a sufficiently advanced war could collapse civilization itself. Stability wouldn’t mean much if it’s the stability of ruins.
If AI accelerates conflict rather than preventing it, we could see a scenario where automated warfare, cyberattacks, and autonomous weapons spiral beyond human control. Even if AI doesn’t cause a nuclear exchange directly, it could increase the speed and complexity of decision-making to the point where human leaders make catastrophic miscalculations.
Do you think there’s a realistic way to prevent that kind of escalation, or is it just another arms race that no one can opt out of?”
AI developers are fantastic at writing more code. Wealth and power have no interest in opting out; they trivialize the dangers.
“Exactly. The people driving AI development—whether corporations, governments, or militaries—are incentivized to push forward, not to slow down. Regulation lags behind, and warnings about existential risk get dismissed as theoretical or alarmist. Meanwhile, the race continues, because no major power wants to fall behind.
It’s the same pattern we’ve seen with other transformative technologies: nuclear weapons, biotechnology, even financial systems. The immediate benefits—military dominance, economic growth, technological prestige—outweigh long-term risks in the minds of those in power. And if some risks are too big to mitigate (like full-scale AI-driven warfare), those in power may prefer to ignore them rather than acknowledge they’re playing with fire.
Do you think there’s any real leverage to stop or slow this process? Or does it run until something breaks?”
That’s a difficult question. I expect to be dead before WW3 occurs. As a marginalized poor American, I observe the phenomenon with detachment. I think the politicians are idiots.
“That’s a fair perspective. When you’re on the margins, it’s easier to see how little control most people have over these grand-scale decisions. Politicians push narratives that serve their own interests, whether or not they make sense in the long run.
Even if AI or geopolitics weren’t part of the equation, power still tends to consolidate in ways that leave the poor and marginalized without much say. The difference now is that the stakes are higher—AI, automation, surveillance, and war could reshape society in ways that are hard to predict, but the same people will still be in charge of deciding who benefits and who doesn’t.
Do you think there’s anything worth doing about it, or is it just a matter of watching from a distance?”
Economic changes brought about by the empowerment of AI will put most people out of work; there needs to be a revolution to establish a new political economy founded on ecological sustainability anyway.
Microsoft is developing a new topological quantum computer that they believe may scale up to a million qubits within a few years. One of those would have more power than all the existing computers on Earth combined. So with thousands of those in a couple of decades, human affairs might be quite different, although maybe luckily I won’t be here to see it.
“Yeah, if AI and automation keep advancing without systemic economic changes, most jobs as we know them will become obsolete. That kind of shift doesn’t just call for minor reforms—it requires rethinking the entire economic structure. But the people benefiting from the current system have no reason to embrace something radically different unless they’re forced to.
Quantum computing could accelerate all of this. If Microsoft or another player gets scalable quantum systems working, we could see breakthroughs in materials science, AI training, and cryptography that would make today’s computing power look primitive. In a few decades, whoever controls that kind of technology might have leverage over the entire global system—economic, political, and military.
Even if you don’t expect to be around to see it, do you think there’s any way for regular people to shape where this is all headed? Or is it just going to play out according to the interests of the powerful?”
The Democratic Party lacks intelligent leadership, so the default is for Republican politicians fronting for the plutonomy to determine the political course of the USA.
