A human being wrote this column. You’ll just have to take my word for it.
Strange way to start a column, right? Well, we’re living in strange times. Times in which this column you’re reading might well have been created by a chat bot — and you might not be able to tell the difference. I’d like to think that I have a distinctive enough voice that you wouldn’t mistake a chat bot for me — but actually there’s enough of my writing out there in the world that Artificial Intelligence can scoop it up and, with the right prompts, produce a column (in a minute or two) about AI. One that is a pretty good facsimile of my style and outlook.
Creepy. Content creators everywhere are wondering if they’re about to be rendered obsolete. Just ask Hollywood writers.
The stakes get a whole lot higher when lives are on the line. I encourage readers to sit down with the Netflix documentary “Unknown: Killer Robots,” which explores the implications of AI applied to the military. Artificial Intelligence — or Machine Learning, as some of its proponents prefer — offers the prospect of autonomous machines acting on massive quantities of data, making decisions at blinding speed, and reducing the exposure of military personnel to direct combat.
It is clear that in the very near future, a military that achieves AI dominance will crush any force that does not have it. In the crucible of combat, you want to be on the dominant end, not the dominated end. The U.S. has to pursue AI dominance — the alternative, in which China, North Korea, Iran, and Russia can compete with us in this arena, is a very dark scenario.
The imperative to develop and apply the technology quickly is powerful. The stability of what remains of the post-World War II order is at stake. And yet... there are significant questions. Can we put up robust enough guardrails to avoid sliding into a moral abyss, where kill decisions are handed over to an algorithm? Does adhering to those guardrails put us at a fatal disadvantage against adversaries who don’t care to apply them?
One former Army Ranger working in the field notes that decisions have to be made in combat that are not reducible to programmable Rules of Engagement. He asks whether AI can distinguish between what is “legal” and what is right.
There are clear benefits to AI. The documentary includes a segment on the uses of AI in fighting wildfire, which are certainly of interest to anyone living in Sisters Country.
As the documentary demonstrates, AI can help researchers discover new drugs and therapies at a speed unprecedented in human history. And, with its instructions flipped, it can, overnight, produce 10,000 new formulas for chemical weapons. Anyone can potentially access this capability on a standard six-year-old Mac computer. You don’t have to be a national security maven to see the problem here…
AI is as revolutionary as nuclear weapons and the Internet — it is already changing our lives. The question is, can the benefits of AI be brought to bear while mitigating the downsides, and protecting humankind from the potentially catastrophic results of losing control of the technology?
This is a profoundly serious moment in human history — and we’re not a serious people right now. Our society and its leaders are burning time and energy on identity politics and culture wars, and we’re poised to enter a three-ring circus inside a Dumpster fire in the 2024 presidential election, which looks to be a rematch between two men who have both demonstrated that they cannot be entrusted with serious power and responsibility.
We need to start paying attention and demanding that our leaders get serious. The first step is educating ourselves, and mapping the terrain of the science fiction world that has become our reality.