The same way we’re already destroying it.
With flawed assumptions about reality:
That we are, in essence, mind, will, or energy. That critical thinking isn’t important. That getting the maximum return for the least effort is best. That doing the minimum to get by is a productive work ethic. That emotions are pesky byproducts of mind that must be controlled or repressed. That God is something to be believed in or disbelieved in rather than directly experienced.
AI’s not going to wage war on us like in the Terminator movies. It will enable us to languish, because that’s what we’ll use it for. The very notion that AI could destroy us is victimhood: if it does, it will actually be humanity destroying itself by its own hand, which we’re well on the way to doing. AI will just speed up an existing process, like machine learning does with anything.
Human beings have more than flirted with self-destruction since civilization began. That’s what war is. That’s what nuclear weapons are for. That’s what cigarettes do. That’s what buying a gun with a credit card, no background check, and no waiting period is. That’s what dumping plastic in the ocean does. That’s what mass use of pesticides in agriculture does. That’s what believing people can be essentially evil does. That’s what overeating does. That’s what repressing emotional pain with drugs does.
That’s what spending more than you earn does. That’s what trying to practice unconditional love in a world of conditions does. That’s what oil dependence does. That’s what selling lifelong subscriptions to pharmaceuticals does. That’s what a largely useless public school curriculum that doesn’t prepare children for adulthood does. That’s what a bloated, price-inflated, mostly irrelevant university system does. That’s what predatory lending and unnecessary consumer debt do. That’s what pretending you can eliminate your self-interest through altruism or ego obliteration does.
We’re already destroying ourselves with paradigms about reality that don’t work. The question is not “Will AI destroy humanity?” It’s “Will the false gods we created kill humanity?” Will we learn from our mistakes and discard what we’re so afraid isn’t true? Can we lose the absolutist grip on our distorted ideas about what it means to be human and how we’re supposed to live when they’re so obviously not working? Can we admit that we’ve become technically advanced, yet in 10,000 years have barely matured emotionally as a species? Can we realize this is the problem and get to work on it?
The rise of AI is perhaps the best thing that could happen, because it will soon be the greatest attempt yet at a technological solution to a maturity problem, and it will fail brilliantly. When the dust settles, perhaps we will have hit bottom hard enough to consider, as a species, that maybe we need another way. A way based on a completely different set of assumptions than any that have ever been tested in society.
It’s to that day that I very much look forward, but I’m concerned the time between now and then will be increasingly difficult to bear, if that’s time we even have. There’s a better way. We think it’s Edenity. For forty years we’ve tested a new set of assumptions that lead to new outcomes, because that’s what’s required for real change: something actually new, not old assumptions in new packaging, which is mostly what’s on offer. Dare to try something new. You’re worth it, and so is humanity.