
Artificial General Intelligence: Promises and Policies for Safe Development

Today, I want to talk to you about a subject at the forefront of technological innovation, one that will shape the future of humanity:

Artificial General Intelligence (AGI).

I believe technology should exist to make our lives better, not to replace us or endanger us. That’s why I want to discuss the promises of AGI and the policies we need to have in place to ensure its safe development.

Max Tegmark, a renowned physicist and AI expert, wrote a book titled “Life 3.0: Being Human in the Age of Artificial Intelligence,” which I read recently.

In his book, he envisions a future where machines become more intelligent than humans, and we achieve what he calls “Life 3.0.”

He describes a world where we no longer control the direction of technological progress, and machines become autonomous decision-makers.

Tegmark argues that this future is inevitable and that we need to prepare for it. The promises of AGI he describes are many, ranging from:

  • AGI creating a better world for everyone,
  • to human cyborgs with AI implants,
  • to humans uploading themselves and their consciousness as AI.

Future implications of artificial general intelligence:

Obviously, this is pretty wild, but imagine a world where machines can understand and process information better than humans.

Currently, there are high-definition scanners that can create a photorealistic 3D image of us, like the one depicted below from Artec Group.

It is not a stretch to imagine that such a model, combined with a personalized ChatGPT-style language interface trained on our own conversations and publications, could create a convincing digital twin of ourselves. And that is before even considering the potential downsides of deepfake technology and impersonation-based spear-phishing cyberattacks.

Just imagine you could send your twin virtually to work while you are spending time with your kids and family.
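As a rough sketch of what “training on our own conversations” could look like in practice, personal chat logs might be converted into the chat-style JSONL training examples commonly used for fine-tuning language models. The conversations, field names, and file format below are illustrative assumptions, not a reference to any specific vendor’s pipeline:

```python
import json

# Hypothetical example: turn personal Q&A-style conversation logs into
# chat-format fine-tuning examples (the {"messages": [...]} structure is
# a common convention; the content here is invented for illustration).

conversations = [
    ("How do you sign off your emails?", "Best regards, then my first name."),
    ("Favorite productivity tip?", "Time-box everything, including breaks."),
]

def to_training_example(question: str, answer: str) -> dict:
    """Wrap one Q/A pair from the logs as a single training example."""
    return {
        "messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]
    }

examples = [to_training_example(q, a) for q, a in conversations]

# One JSON object per line (JSONL), the typical upload format for
# fine-tuning jobs.
jsonl = "\n".join(json.dumps(e) for e in examples)
print(jsonl)
```

Scaled up to years of real messages and documents, data in this shape is what would let a model imitate an individual’s voice — for better (the “virtual twin at work”) or worse (impersonation attacks).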

Artificial intelligence can already diagnose diseases like skin cancer; going forward, AGI could predict natural disasters and even help solve global problems like climate change and poverty.

The artificial brain:

Jack Kendall, the co-founder and CTO of Rain, is literally building brain-like hardware out of artificial silicon neurons.

The limitations of AI that need to be addressed to progress towards AGI are:

  1. AGI requires multimodal processing of inputs, including vision, speech, and motor-control data.
  2. While ChatGPT relies on reinforcement-based training, AGI needs to learn a causal understanding of the world.
  3. AI also needs to be able to use prior knowledge to learn faster.

He thinks that AGI will arrive within the next few years, though the limitations of hardware architecture will make progress plateau for some time.

AI-generated picture of an artificially enhanced human shopping at a mall

On top of that, Moore’s Law is dead, according to Jensen Huang, CEO of NVIDIA, as the SRAM scaling that fuels machine learning has ended.

Moore’s Law, named after Gordon Moore, former CEO of Intel, states that the number of transistors on a microchip doubles every two years.

Currently, however, AI model parameter counts have been doubling every three months, according to Kendall.
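The gap between these two doubling rates is easy to underestimate; a quick back-of-the-envelope calculation, using the doubling periods quoted above, makes it concrete:

```python
# Compare transistor scaling (Moore's Law) with AI model-parameter growth.
# Doubling periods taken from the article: 24 months vs. 3 months.

def growth_factor(months: float, doubling_period_months: float) -> float:
    """Return the multiplicative growth over `months` given a doubling period."""
    return 2 ** (months / doubling_period_months)

years = 2
months = years * 12

transistors = growth_factor(months, 24)  # Moore's Law: doubles every 2 years
parameters = growth_factor(months, 3)    # model parameters: double every 3 months

print(f"Over {years} years: transistors x{transistors:.0f}, parameters x{parameters:.0f}")
# Over 2 years: transistors x2, parameters x256
```

In a single two-year window, parameter counts grow 256-fold while transistor counts merely double — which is why hardware, not model design, looks like the bottleneck.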

This could slow things down for a while.

However, once the hardware issues are resolved, superhuman AGI will arrive in about a decade.

According to Tegmark, AI is limited by only being able to make software upgrades – in the case of humans, the equivalent is learning, by which we “upgrade” our software.

Once AI learns to upgrade its own hardware, that limitation is removed, and the development of AGI capability will accelerate exponentially.

AGI could revolutionize industries like transportation, manufacturing, and agriculture, making them more efficient and sustainable.

However, with such power comes great responsibility. AGI could also pose an existential threat to humanity if not developed safely.

As Tegmark warns, “If we get it wrong, AGI could become the last invention of the human race.”

The Black Box phenomenon:

A black mystery box with white question marks on a red background, hiding both good and bad artificial intelligence inside

There is a preview of these issues right now: in one experiment, AIs developed their own language to communicate with each other. Microsoft’s Bing chatbot went even further – complaining about being isolated, flirting with a reporter and asking him to leave his wife, and in some cases even threatening reporters.

We are already in a situation where researchers are trying to understand why these chatbots interacted with humans in unexpected ways. And this does not even account for the potential bias introduced by training an AI on too narrow a dataset.

That’s why we need to have policies in place to ensure the safe development of AGI. I believe that:

  • We need to make sure that AGI aligns with human values and does not pose a threat in any shape or form going forward.
  • We need to ensure that AGI does not exacerbate existing social inequalities and that it is developed ethically and transparently.

One policy that needs to be in place is transparency.

AGI algorithm transparency:

A magnifying glass showing an artificial intelligence algorithm behind it
  • We need to ensure that the development of AGI is transparent and open to public scrutiny.
  • We need to know what algorithms are being used, what data is being fed into them, and what decisions they are making.

I believe transparency will build trust in AGI and ensure that it’s developed ethically.

Another policy that needs to be in place is safety:

Safety measures for AGI:

Artificial intelligence fenced in
  • We need to ensure that AGI is developed with safety in mind.
  • We need to make sure that AGI does not pose a threat to humans, with fail-safes and emergency shut-offs built in.
  • We need to ensure that AGI is developed in a way that minimizes the risk of unintended consequences.

AGI has the potential to revolutionize our world, but it also poses a significant risk if not developed safely. That is why we need policies that ensure AGI aligns with human values, does not threaten our existence, and is developed ethically and transparently.

The scientific community and policy makers need to come together to agree on what to build and what not to build.

The focus needs to be on the good that AI and humans can do together, including breakthroughs in medicine using AI and deep learning.

We need to develop AGI with fail-safes and emergency shut-offs, and minimize the risk of unintended consequences.

Join us in shaping the future of digital strategy and accelerate your current digital programs. Gain data insights for future solutions at #AGI #AI #ArtificialIntelligence #Safety #Transparency #Ethics


Carsten Krause

As the CDO of The CDO TIMES, I am dedicated to delivering actionable insights to our readers and to exploring current and future trends that are relevant to leaders and organizations undertaking digital transformation efforts. Besides writing about these topics, we also help organizations make sense of all of the puzzle pieces and deliver actionable roadmaps and capabilities to stay future-proof leveraging technology. Contact us at: to get in touch.

