Interview: The Meridian Dialogue with Babak Hodjat, Chief AI Officer at Cognizant – NENow.in

At a moment when artificial intelligence is transitioning from theory to real-world impact, few voices bridge technological innovation with philosophical depth as compellingly as Babak Hodjat.
As Chief AI Officer at Cognizant and a serial entrepreneur whose innovations underpin some of today’s foundational AI technologies, including the core architecture behind Apple’s Siri, Babak Hodjat’s journey encompasses scientific invention, strategic foresight, and thoughtful inquiry into the nature of intelligence itself. From building the world’s first AI-driven hedge fund at Sentient Investment Management to leading cutting-edge research in distributed and evolutionary AI, his work situates him at the nexus of computational possibility and human meaning.
In this edition of The Meridian Dialogue, we explore not just the mechanics of AI, but its ethical dimensions, its implications for human agency, and what leadership must look like in a world where machines learn, adapt, and decide alongside us.
You have worked on technologies that fundamentally shape how machines perceive, learn, and interact with the world. How do you personally define intelligence today and how does that definition guide ethical and practical decisions in AI development?
Babak: Intelligence is the ability to make decisions to survive in contexts that are as yet unknown. Intelligence has a number of different faculties, and a generally intelligent system is good at making use of these various abilities in order to survive. These faculties include learning, reasoning, creativity, planning, and abstracting. Survival is the practical goal of intelligence, but this survival is not necessarily limited to an individual; it may be in the service of a group, or even a concept (i.e., a meme as defined by Richard Dawkins). It is in this latter sense that ethics becomes relevant and intelligence becomes ethical.
AI research has long been pursued in academic settings, yet today it accelerates enterprise transformation. What philosophical shift occurs when theoretical AI becomes practical AI, and what does that mean for leadership responsibility?
Babak: I think the biggest shift is the directionality and context that practical AI imposes. You are no longer simply observing and establishing a virtual manifestation of intelligence, but actively applying it to real-world problems. This typically comes with defined objectives and outcomes, as well as cost constraints.
Your work with evolutionary AI emphasizes adaptation and solution discovery beyond traditional machine learning. What lessons can leaders in other disciplines draw from evolutionary principles in navigating uncertainty and innovation?
Babak: Intelligence, the way we define it, is the result of evolution, which is a powerful meta-algorithm capable of optimizing and perpetually adapting any system, regardless of its representation. Evolutionary algorithms are population-based, which allows them to find solutions in non-linear and complex search spaces. At the heart of the state of the art in AI today, whether in training LLMs or fine-tuning them using Reinforcement Learning, is a hill-climbing method (i.e., gradient descent). This brings some limitations along with it. These systems are good at interpolation and creativity within the universe they experience but can be augmented using population-based approaches.
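The contrast Babak draws between hill climbing and population-based search can be illustrated with a small toy sketch (not from the interview; the fitness landscape, parameters, and operators below are invented for illustration). A greedy hill climber started near a local peak stays there, while even a simple evolutionary loop, because it maintains a population spread across the search space, tends to find the higher peak:

```python
import math
import random

def fitness(x):
    # Toy multimodal landscape: a global peak near x=0 and a lower local peak near x=4.
    return math.exp(-x * x) + 0.6 * math.exp(-(x - 4) ** 2)

def hill_climb(x, steps=200, step=0.05):
    # Greedy local search: move only if a neighbor improves fitness.
    for _ in range(steps):
        for cand in (x - step, x + step):
            if fitness(cand) > fitness(x):
                x = cand
    return x

def evolve(pop_size=30, gens=60, sigma=0.5, seed=0):
    # Minimal elitist evolutionary search: keep the top half,
    # refill with Gaussian-mutated copies of the survivors.
    rng = random.Random(seed)
    pop = [rng.uniform(-2.0, 6.0) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        pop = parents + [p + rng.gauss(0, sigma) for p in parents]
    return max(pop, key=fitness)

# Started near the lower peak, the hill climber is trapped there;
# the population-based search locates the higher peak near x=0.
stuck = hill_climb(4.2)
best = evolve()
```

This is of course a caricature of both gradient descent and evolutionary computation, but it captures the structural point: a single-trajectory, locally greedy method inherits the geometry of wherever it starts, whereas a population explores several basins of the search space at once.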
When AI systems become distributed, autonomous, and deeply embedded in decision-making, how should leaders think about the moral architecture and the ethical scaffolding built into these systems from day one?
Babak: Ethical and moral considerations should be codified unambiguously. This will help in all stages of development and deployment of AI systems, starting from fine-tuning of models, all the way to monitoring and safeguarding them post-deployment.
As powerful models automate prediction and optimization, what must remain uniquely human in decisions that affect societal wellbeing? How can leaders ensure that human agency is preserved, not eroded?
Babak: Preserving human agency in all decisions is not desirable. We don’t do that even now with non-AI technologies. Where systems are trustworthy, we delegate. Like any other technology, there is a boundary between what we can trust AI systems to do and what we prefer to have in the hands of humans. This boundary will shift as AI systems become more trustworthy, and culture adapts to the existence and utility of such systems. We will also increasingly rely on AI systems to call on humans for decisions in a human-on-the-loop setting.
The industry is increasingly discussing responsible AI governance, from ethics boards to practical standards. What does trustworthy AI mean in a world where systems learn, evolve, and sometimes surprise their creators?
Babak: Trustworthiness is defined as autonomous actions in unforeseen situations resulting in acceptable outcomes. In many cases, this can be measured and predicted, but as AI systems become more generally applicable, it becomes harder to cover all unforeseen situations in which an AI would be operating. Let’s not forget that this same problem applies to humans and human-based systems. We can observe and learn from how we measure and assert trustworthiness in human-based systems and use this in AI-based systems.
AI development is fast and exponential; human governance is often slow and deliberative. How should leaders navigate this paradox, fostering innovation while safeguarding societal values?
Babak: Agility is the name of the game, as is making use of the most powerful tool available to help us stay ahead of this rate of progress and disruption: AI itself.
Looking ahead, how do you envision AI reshaping not just job functions, but our broader narrative of work and purpose? What role should leaders play in ensuring this future supports human dignity and growth?
Babak: Learning the powers and limitations of AI by actively using it and keeping up to date with the latest progress in AI is important for leaders to stay ahead of the wave of disruption. Does technological progress support human dignity and growth? If it does, then progress in AI and adoption of AI will simply be a continuation. It comes down to whether we believe an increase in the net intelligence of humanity through AI augmentation is a good thing or not. In my opinion, if this experiment fails, it will be due more to the less intelligent beings controlling and misusing more intelligent systems. This is a problem with over-trusting humans, not AI.
This interview is part of The Meridian Dialogue, a leadership conversation series curated and conducted by Anshuman Dutta, a marketing strategist and writer who explores how global leaders are rethinking growth, technology, and human-centered transformation. Through candid, experience-led dialogues, the series surfaces practical insights on leadership, strategy, and execution in a rapidly changing business landscape, bridging global perspectives with real-world relevance.
Anshuman Dutta is a Guwahati-based management consultant. He can be reached at [email protected]