Capitalism & AI

What is the purpose of technology? Ostensibly, it is to make people's lives better and easier. We have seen the benefits of technology in so many ways: instant communication, automated transportation, medical advancement, and more efficient food production, to name just a few.

With every advancement in technology, though, there is the inevitable downside when we fail to manage that technology well. The phones that connect us to one another can also addict and isolate us. The chemicals used to produce greater amounts of food can, in turn, cause cancer. Just as there is lightness and darkness in human beings according to their choices, so there is in the application of their creations. (Read that sentence again. Allow it to register.)

This reality is at the heart of our current dilemma with our latest notable technology - Artificial Intelligence (AI). Add a massive profit motive to this reality, and one can see how AI without ethical guardrails can easily cause collateral damage.


In fact, one of the leading AI companies, OpenAI, was valued just this month at $80 billion. With that kind of money at stake, it is not hard to see how the temptation to use AI unethically for economic gain could easily overwhelm weak moral boundaries. Additionally, the race to be first with the most advanced technology--and thus the greatest profit--is already outpacing the oversight rules that would keep people out of harm's way.

The question, then, is not whether AI needs ethical standards, but what those standards should be and how they will be enforced. Last year, over 1,000 academics and tech company leaders signed a letter calling for a pause on AI development. Though well intended, the letter has no authority to create or enforce any oversight, a fact it acknowledges itself: "If such a pause cannot be enacted quickly, governments should step in and institute a moratorium," and further development should advance "only once we are confident that their effects will be positive and their risks will be manageable."

How should tech companies respond to this call for regulation? As with any other emerging technology, the best way forward seems to be self-governance coupled with outside oversight. Several steps in creating a thoughtful model of self-governance include:

1. Hiring ethics officers who identify potential problems and enforce company policy. Shockingly, the current trend runs in the opposite direction, with corporations cutting back or even eliminating their AI ethics staff. Leaders at multiple companies--including Twitter, Google, Amazon, and Microsoft--have done just that. Reinstating these gatekeepers is critical to managing AI at all, let alone to creating ethical AI.

2. AI companies quickly collaborating to develop effective standards and guardrails for their industry. To the extent they fail . . . well, frankly, we pray they don't. Several such efforts are already under way, including:

  • The World Economic Forum's AI Governance Summit, which serves as a "pivotal platform for knowledge sharing, strategic outlook, and the formulation of concrete action plans to ensure the responsible development and deployment of generative AI on a global scale;"

  • The UK AI Safety Summit, which is held to examine current and potential future AI threats; and,

  • The Hiroshima Process International Code of Conduct for Advanced AI, which aims to "promote safe, secure, and trustworthy AI worldwide and will provide voluntary guidance for actions by organizations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems."

3. Multi-national governments collaborating and cooperating, as well as individual governments formulating rules--efforts that have already begun.

All of these are worthy efforts toward keeping the powerful tool that is AI within acceptable safety boundaries. Corporations and governments must actually put teeth in these efforts, though, and do so quickly, before extraordinary harm is done that cannot easily be undone. 'Extraordinary harm' is not hyperbole.

AI companies that apply an ethic of mutuality (e.g. Understanding Adam Smith - Mutuality, MLK Jr. on Mutuality, Mutuality and Mars, Inc.) to their technological developments will not only be better run and more profitable, they will benefit their customers and themselves by establishing a trustworthy reputation--a key factor in the long-term success of their product and a path toward building a better capitalism.

History offers too many examples of businesses that failed because, at the core of that failure, they betrayed trust. If AI companies are to avoid being added to that list--as well as being over-regulated--they would be wise (and ultimately more profitable) to take the needed step of embedding the ethic of mutuality into the DNA of their creations.

Want to help create a better future - professionally, morally, and financially?

Buy now, or get a free sample here >>

"This book merits close, sustained attention as a compelling move beyond both careless thinking and easy ideology."—Walter Brueggemann, Columbia Theological Seminary

"Better Capitalism is a sincere search for a better world."—Cato Institute
