We spoke to Hongseok Namkoong, Assistant Professor at Columbia Business School, about where AI in healthcare goes from here, and what is needed to move from mere knowledge distillation to the generation of new, usable knowledge.
Q: You’ve been investigating responsible AI for years. As the market has matured, how have the core problems shifted?
Hong: Earlier, the problems were things like adding a few red dots to a photo of a panda and having the AI decide it was a mushroom. Then we had the issue of systems being trained only on so-called pale male faces.
Today, the problem is much more systemic. The main problem with these models is that they are omnipresent: They are expected to spit out an answer, but inevitably they’re going to see stuff they didn’t get trained on.
We are currently throwing the kitchen sink into the training data. The result is an AI that doesn’t know what it doesn’t know. In a clinical setting, that is dangerous. These models don’t know when to say, ‘I’m not sure, a human needs to look at this.’ Because they are overconfident and lack that internal uncertainty gauge, no one truly trusts them yet for meaningful, high-stakes decisions.
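To make the abstention idea concrete, here is a minimal sketch of selective prediction, a standard technique in which a classifier defers to a human whenever its confidence falls below a cutoff. The labels, scores, and threshold are hypothetical placeholders, and the raw softmax confidence used here is itself often overconfident, which is exactly the missing-gauge problem Namkoong describes.

```python
import numpy as np

# Minimal selective-prediction sketch (hypothetical labels and threshold):
# the model answers only when confident, and otherwise defers to a human.

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff; in practice tuned on held-out data

def softmax(logits: np.ndarray) -> np.ndarray:
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def predict_or_defer(logits: np.ndarray, labels: list[str]) -> str:
    """Return a label, or defer when the model's top confidence is too low."""
    probs = softmax(logits)
    top = int(probs.argmax())
    if probs[top] < CONFIDENCE_THRESHOLD:
        return "I'm not sure, a human needs to look at this."
    return labels[top]

labels = ["benign", "monitor", "urgent"]  # toy diagnostic categories
print(predict_or_defer(np.array([2.0, 1.8, 1.7]), labels))  # close call -> defers
print(predict_or_defer(np.array([6.0, 1.0, 0.5]), labels))  # clear winner -> "benign"
```

The hard part is not the thresholding itself but making the confidence score trustworthy; an overconfident model sails past any cutoff.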
Q: We hear a lot of hype about agentic AI. Where is the actual impact happening today, and where is it stalled—specifically in healthcare?
Hong: We were promised Terminators and we got ChatGPT. It’s remarkable, but we are still underwhelmed because the most impactful applications so far are the mundane, like consultants logging their billable hours. In healthcare, the only standout success has been transcription note-taking.
The bulk of advances in AI have come from scaling data. If you think about the entire internet, there’s so much garbage, but ChatGPT still does wonders because of the scale of the data we’re able to train on. Medical data doesn’t work that way. Records are fragmented. There are so many different modalities. What about images? Blood tests? We are stuck with medieval models that can only predict risk in very narrow applications.
The real frontier is unstructured data, like EKGs, which clinicians often struggle to read at scale. There is a Wild West feel right now. The healthcare system is fragmented; data pipelines are broken across institutions and electronic health record systems aren't harmonized. It makes the business of it almost impossible to navigate—do we charge insurance per patient? Per use? These are the operational bottlenecks.
Q: Where do you see the greatest opportunity in the current healthcare landscape?
Hong: Most AI today is about knowledge distillation: cramming all of humanity’s existing knowledge into a model. But these models can’t make new discoveries. For problems that involve general ambiguity, which is basically every clinical problem ever, the model needs to know where the areas of uncertainty are and take action to resolve that uncertainty. This is very different from how these models are trained. My work is about moving from knowledge distillation to discovery.
Think about an MBA student. When they arrive, they actually know very little compared to an LLM. But over five years of apprenticeship and experience, they become incredibly competent. Why? Because they know how to resolve ambiguity. They can learn from experience. Current AI is a genius at facts but a failure at experience. At its intellectual core, this is about building intelligent agents that can actively recognize uncertainty and take action to resolve it. That is a fundamentally different way to train a model.
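A classical miniature of an agent that knows where the areas of uncertainty are and takes action to resolve them is an upper-confidence-bound bandit. The sketch below is a generic textbook illustration, not Namkoong’s method: the agent attaches an uncertainty bonus to each action and deliberately tries under-explored options, so it learns from its own experience rather than from a fixed corpus. The treatments and success rates are hypothetical.

```python
import math
import random

# UCB1 bandit sketch: a toy agent that tracks its own uncertainty per
# action and acts to shrink it. The 'treatments' below are hypothetical.

class UCB1:
    def __init__(self, n_actions: int):
        self.counts = [0] * n_actions    # times each action has been tried
        self.values = [0.0] * n_actions  # running mean reward per action
        self.t = 0                       # total decisions made so far

    def select(self) -> int:
        self.t += 1
        for action, count in enumerate(self.counts):
            if count == 0:
                return action  # try every action at least once
        # Mean reward plus an uncertainty bonus: rarely tried actions get a
        # large bonus, so the agent acts precisely where it is most unsure.
        return max(
            range(len(self.counts)),
            key=lambda a: self.values[a]
            + math.sqrt(2 * math.log(self.t) / self.counts[a]),
        )

    def update(self, action: int, reward: float) -> None:
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

# Toy environment: three hypothetical treatments with unknown success rates.
true_rates = [0.3, 0.5, 0.7]
agent = UCB1(n_actions=3)
for _ in range(1000):
    a = agent.select()
    agent.update(a, 1.0 if random.random() < true_rates[a] else 0.0)
print(agent.counts)  # pulls concentrate on the best arm as uncertainty resolves
```

The point of the toy is the shape of the loop: estimate, quantify uncertainty, act to reduce it, update. Scaling that loop to messy clinical data is the open problem.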
Q: Looking at the next few years, what gives you the most urgency or concern?
Hong: Scaling and curating data has given us marvelous advances over the last eight years, but it's clear that this approach will largely yield incremental gains in capabilities. As a result, for end users of AI products, it's going to feel like progress is stagnating.
On the other side, we see very broad claims about how AI will upend the economy. While the agentic systems we see these days are truly remarkable, I believe that being more calibrated in our technological claims will actually accelerate progress. I am worried we’re in a tech bubble. If that’s true and it’s not a soft landing, it could set progress back by years.
Q: Finally, what does this mean for the future of the MBA?
Hong: It is undeniable that AI will begin to replace some of the lower-level functions that new graduates used to perform. As a result, we will see a correction in entry-level hiring in many domains of knowledge work. We see that AI is already encroaching on pure coding. But AI isn’t getting there in the next decade on the things a Columbia MBA is trained to do: navigating ambiguity, managing high-level strategy, and leading through fragmented systems.
There’s no way AI is going to be able to do all the things MBAs are good at 10 years out of CBS. The students might feel the shift in the job market in the short term, but from a long-term perspective this is going to be empowering for our community.