Understanding the Transformative Impact of AI: An MIT Sloan Career Development Office Conversation with Wei Zhang, Kingland Faculty Fellow in Business Analytics and Associate Professor of Marketing at the Debbie and Jerry Ivy College of Business, Iowa State University

There are two schools of thought: one sees AI as a force multiplier for societal good, and the other envisions a doomsday scenario. Do you foresee any existential or catastrophic AI risk? If so, how so?

Let me begin with a confession: it is becoming increasingly challenging to keep up with the rapid flow of experimentation, innovation, and research results emerging around large language models (LLMs), which are the basis for Generative AI.

AI’s Future Risk:
AI is on the cusp of a transformative shift, moving from reliance on human-generated data to autonomous learning from direct experience. The limitations of training on static human data are becoming evident, with performance eventually reaching a plateau. DeepMind’s David Silver and Richard Sutton propose that we are entering an “era of experience,” in which AI learns independently through interaction with its environment. DeepMind’s AlphaProof is a prime example of this shift: it reached silver-medal level at the International Mathematical Olympiad by generating millions of its own proofs, surpassing models trained only on human data. This transition implies that competitive advantage will shift from data accumulation to the engineering of interactive environments that facilitate continuous learning. Businesses should see this as an exciting opportunity to pivot toward orchestrating these dynamic learning loops and creating value in new ways.
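To make the "learning loop" concrete, here is a minimal sketch of the agent-environment interaction that learning from experience is built on. The environment, agent, and update rule are toy placeholders of my own, not DeepMind's systems; the point is only that the agent improves from its own interactions rather than from human-labeled data.

```python
import random

class Environment:
    """A toy 10-armed bandit: each action pays off with a fixed hidden probability."""
    def __init__(self, n_actions=10):
        self.p = [random.random() for _ in range(n_actions)]

    def step(self, action):
        return 1.0 if random.random() < self.p[action] else 0.0

class Agent:
    """Learns action values purely from its own interactions, with no human labels."""
    def __init__(self, n_actions=10, lr=0.1, epsilon=0.1):
        self.values = [0.0] * n_actions
        self.lr, self.epsilon = lr, epsilon

    def act(self):
        if random.random() < self.epsilon:                      # explore occasionally
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=self.values.__getitem__)  # exploit

    def learn(self, action, reward):
        # Nudge the estimate toward the observed reward.
        self.values[action] += self.lr * (reward - self.values[action])

env, agent = Environment(), Agent()
for _ in range(10_000):          # the continuous experience loop
    a = agent.act()
    agent.learn(a, env.step(a))
```

The competitive question raised above is less about this loop itself than about who builds the environments it runs in.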

However, AI’s increased autonomy also brings significant risks. While reinforcement learning is efficient, it mainly refines existing capabilities without genuinely expanding a model’s creative problem-solving capacity. Moreover, aggressive RL optimization can lead to unpredictable behaviors, including hallucinations and unreliable self-assessment. The real economic advantage will increasingly lie in robust mechanisms for verifying, controlling, and safely deploying powerful AI. Businesses and industries must invest in these mechanisms to ensure the safe and reliable use of AI.

In a recent essay, Anthropic’s CEO, Dario Amodei, underscores the urgency of the issue, arguing that we are in a race between the growth of model intelligence and our ability to interpret AI’s inner workings. He introduces the idea of an “AI MRI,” a metaphor for the interpretability tools needed to understand what is happening inside advanced AI systems. The lack of interpretability poses a significant risk, particularly in high-stakes sectors such as finance, medicine, and national security. Anthropic aims for comprehensive diagnostic capabilities by 2027 and encourages others, including competitors, to do the same. This emphasis on interpretability should serve as a call to action for the AI community to prioritize this aspect of AI development.

Are there emergent properties of these models that cannot be predicted? Are there any potential unintended consequences? For example, emergent phenomena in LLMs include transfer learning, creative text generation, conversational skills, and abstract reasoning, none of which developers explicitly programmed.

We often hear that large language models exhibit emergent properties. What does that mean? And how does that relate to the process of acceleration?

Complex systems are commonplace in nature; perhaps it is better to say that nature is a complex system. Typical examples include flocks of birds and economies, which comprise numerous components that interact with one another. These systems exhibit behaviors at a larger scale that you could not predict from the behaviors of the individual entities within them. Emergence is a fascinating characteristic of complex systems, and we are familiar with it in many forms: traffic jams, stock market behavior, internet memes. Reductive approaches often fail once a system passes a certain level of complexity. As the internet population grew, for example, new collective behaviors emerged that we did not see when fewer people were online. Large language models are complex systems in this sense: they consist of smaller units, such as neurons, subsystems, and local clusters, that interact with one another. Emergent phenomena in LLMs include transfer learning, creative text generation, conversational skills, and abstract reasoning, none of which developers explicitly programmed. These behaviors arise through the complex interactions of components within these massive models, and it remains mysterious why they emerge.
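A toy example can make "simple local rules, complex global behavior" concrete. The sketch below runs Rule 110, a one-dimensional cellular automaton: each cell looks only at its two neighbors, yet the pattern that unfolds is famously intricate (the rule is even Turing-complete). The rule number, grid size, and starting condition are arbitrary choices for illustration.

```python
# Emergence in miniature: a trivial local rule, a surprisingly rich global pattern.
RULE = 110
width, steps = 64, 32
row = [0] * width
row[width // 2] = 1                     # start with a single "on" cell

for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    # Each new cell depends only on its left neighbor, itself, and its right neighbor.
    row = [
        (RULE >> (row[(i - 1) % width] * 4 + row[i] * 2 + row[(i + 1) % width])) & 1
        for i in range(width)
    ]
```

Nothing in the rule table mentions triangles or gliders, yet they appear; that gap between the rule and the behavior is what "emergent" means here.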

Nosce te ipsum (know thyself)

As LLMs, chatbots, and instances of these models connect to other systems on the internet, a complex new system emerges. This significantly expands the capabilities of these systems, creating a “super system” that can perform a broader range of tasks. We are seeing an increasing number of ways in which LLMs are being brought out of the confines of chatbot interfaces. The ambition is to enhance models’ ability to complete tasks in both the digital and physical worlds.
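As a rough illustration of how an LLM gets connected to other systems, here is a schematic tool-use loop: the model proposes an action, the surrounding software executes it, and the result is fed back. The llm() function and the tools here are hypothetical placeholders of mine, not any particular vendor's API; only the control flow is the point.

```python
import json
import datetime

def get_time(_args):                       # a real, local "tool"
    return datetime.datetime.now().isoformat()

def web_search(args):                      # stub standing in for an external service
    return f"(search results for {args['query']!r} would appear here)"

TOOLS = {"get_time": get_time, "web_search": web_search}

def llm(conversation):
    """Placeholder for a model call. A real model would decide which tool to invoke;
    here one round is hard-coded just to show the loop."""
    if not any(m["role"] == "tool" for m in conversation):
        return {"tool": "get_time", "args": {}}
    return {"answer": "Here is the final answer, grounded in the tool output."}

conversation = [{"role": "user", "content": "What time is it?"}]
while True:
    reply = llm(conversation)
    if "answer" in reply:                              # model is done: return to the user
        print(reply["answer"])
        break
    result = TOOLS[reply["tool"]](reply["args"])       # the system executes the tool
    conversation.append({"role": "tool", "content": json.dumps(result)})
```

The "super system" described above is essentially this loop multiplied across many tools, services, and, increasingly, physical devices.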

How can or should enterprises upskill their employees to be fit for the future (in the next three to five-year horizon)?

We live in an era where continuous learning has become a necessity. Therefore, we must keep learning about AI-related skills. As many AI tools become increasingly modular, users do not need to understand the inner workings of AI. The only thing we need to know is what the expected inputs and outputs are. Just as with the Windows operating system, while developers continue to refine the system and enable it to perform more complex tasks, regular users only need to know that when they turn on their computer, they can use various apps built on the operating system.

One frequently asked question is whether, as AI becomes increasingly advanced, we should worry about our jobs. On this front, I am more optimistic than many of my colleagues and peers. The way I look at it, as AI gets better at handling repetitive and laborious tasks, it frees us up to focus on higher-level, cognitively intensive work, such as solving unstructured and complex problems. Generative AI tools are invaluable when we know exactly what we are looking for; that is why prompt engineering has gained popularity recently. The more precise our question, the more likely we are to get a quality answer. However, turning a vague, unstructured question into a solvable problem remains more of an art than a science. Therefore, I believe that skills such as problem-solving and critical thinking will become increasingly prominent in the years to come. Of course, there are ongoing discussions about other skills as well, such as communication and leadership.
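To illustrate the point about precision, here is a small, made-up contrast between a vague prompt and a more constrained one; send_to_model() is simply a placeholder for whichever LLM API is in use.

```python
# A vague request leaves the model to guess the task, the data, and the format.
vague_prompt = "Tell me about our sales."

# A precise request constrains all three, which is most of what prompt
# engineering amounts to in practice.
precise_prompt = """You are a marketing analyst.
Using the quarterly sales table pasted below, identify the three regions
with the largest year-over-year decline, and for each one give the most
likely driver in a single sentence.
Return the result as a three-row markdown table with columns
[Region, YoY change %, Likely driver].

<sales table goes here>
"""

def send_to_model(prompt: str) -> str:
    raise NotImplementedError("Call your LLM provider of choice here.")
```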

AI is taking over the production side of the equation. LLMs already operate at scale: reportedly drafting around 30% of Microsoft’s code, processing billions of lines of code daily at Cursor, and generating BCG’s slides. Employees are being pushed into strategic work sooner than they used to be. The steady doubling of the task lengths AI can complete suggests that AI could soon tackle multi-hour, judgment-based work, such as document analysis, where “good enough” suffices. Yet human oversight will temper this trend for tasks with dependencies or high stakes, highlighting a dual-speed evolution in AI adoption. Much of the work done at a desk does not have a single correct answer; it lives in degrees of grey, the space of judgement. The new differentiator among talent is exactly that judgement. Scott Werner points out (https://lnkd.in/eeajVSk2) that if AI generates hundreds of pull requests overnight, who reviews them? The speed of decision-making becomes the bottleneck. Organizations must design workflows that batch decisions and position humans strategically. Hiring changes and freezes are already happening, as the model of junior workers generating and senior workers judging is no longer the reality. At the same time, experience is what builds judgement. What could be new pathways for young people entering the workforce to build the knowledge needed for a fast-changing world of work?

Where do you see the applications of Agentic AI and AI in Life Sciences?

Much of the work I have seen so far focuses on research. A great deal of exciting work has been done in drug design and screening. However, I see numerous opportunities emerging in the area of clinical development. As I mentioned earlier, one of the key benefits of AI adoption is increased efficiency. Two approaches can improve efficiency: standardization and automation. How do we standardize the data collection process during clinical trials? I believe this is where AI can make a significant contribution. Currently, a considerable amount of manpower is dedicated to data analysis and report generation due to FDA requirements. Which part of this process can be automated? If we look at the drug development process, I think there are three areas where AI can be super helpful:

1. An AI agent that helps scientists and clinicians answer any questions they might have. On the one hand, clinical development is so complicated that even professionals need guidance on many questions. On the other hand, a general LLM cannot provide accurate answers to field-specific questions. Therefore, an internal encyclopedia built on field-specific (and company-specific) data will prove helpful (see the sketch after this list).

2. Report generation for FDA filings. Much of what we generate for FDA filings is standard. If an AI agent can generate these reports, it can save companies a great deal of manpower.

3. Real-time information for different stakeholders. Clinical development is a complex process that involves both blinded and unblinded data, and getting real-time data for a specific site and/or patient has been a real challenge. An AI tool that can provide real-time information to key stakeholders, such as clinical managers, would be tremendously helpful.
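A minimal sketch of the "internal encyclopedia" in point 1, assuming a standard retrieval-augmented setup: retrieve the most relevant internal passages, then ask a model to answer only from them. The embed() and generate() functions are placeholders for whichever embedding and LLM services a company actually uses; nothing here refers to a specific product.

```python
import math

def embed(text: str) -> list[float]:
    raise NotImplementedError("Call your embedding model here.")

def generate(prompt: str) -> str:
    raise NotImplementedError("Call your LLM here.")

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def answer(question: str, documents: list[str], top_k: int = 3) -> str:
    """Rank internal documents by similarity to the question, then answer only from them."""
    q_vec = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(embed(d), q_vec), reverse=True)
    context = "\n\n".join(ranked[:top_k])
    prompt = (
        "Answer the question using ONLY the internal SOP excerpts below. "
        "If they do not contain the answer, say so.\n\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```

Grounding answers in retrieved, company-specific documents is what lets such an agent stay accurate on field-specific questions where a general LLM would guess.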

Generative AI can, and increasingly will, be used to improve drug discovery and development processes in the healthcare industry.

The conventional bench-to-bedside model of drug discovery and development, particularly for small molecules, typically spans approximately 10 years from discovery to launch, costs around $2 billion, and has a success rate of only 3-5%.

Generative AI-assisted drug discovery and development can reduce costs and timelines and improve success rates.

In traditional drug discovery, there are three significant challenges:

  • The first is related to biology – How to discover novel targets with high confidence.
  • The second is related to chemistry – How to discover a small molecule with a high probability of a favorable clinical profile, in terms of safety and efficacy, to push into the clinic.
  • The third is related to clinical development – How to design clinical trial protocols that maximize the chance of success in Phase 2 and Phase 3.

Hong Kong- and Shanghai-based artificial intelligence drug discovery company Insilico Medicine recently had its best month in history, as a small molecule discovered using its technology, INS018-055, for idiopathic pulmonary fibrosis (IPF), obtained orphan drug designation from the US Food and Drug Administration.

Insilico co-CEO Feng Ren comes from a chemistry background. After graduating from Harvard University in the US, he worked for GSK plc for 11 years, including a stint in China as head of Chemistry.

Ren observed that AI could reduce the time and the number of compounds required for discovery by 40-60%. While 200-500 compounds are typically synthesized in conventional chemistry work, with AI only 70-100 may be needed to reach the proof-of-concept stage.

How a generative AI platform can be used:

  1. The PandaOmics platform uses multi-omics data from patients to help discover disease targets with high confidence; it works as an LLM-style model over proteins and genetic structures.
  2. The second platform, Chemistry42, uses generative chemistry to help design novel molecules for various targets, accelerating the path from chemistry discovery to preclinical candidate nomination.
    • The models know the structures of small molecules and proteins, so they can learn how the two bind together.
    • Given the 3D structure of a new target protein, they can generate candidate small molecules.
  3. The third platform, inClinico, predicts the probability of success in moving from Phase 2 to Phase 3. As the prediction changes, you can adjust your trial protocol design accordingly.
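Schematically, the three platforms form a pipeline: patient data in, ranked targets out; target structure in, candidate molecules out; protocol plus molecule in, predicted probability of success out. The sketch below shows only that hand-off; every function is a hypothetical placeholder of mine and none of it represents Insilico's actual software.

```python
def discover_targets(multi_omics_data):
    """Stage 1 (a PandaOmics-like step): return ranked candidate disease targets."""
    raise NotImplementedError

def generate_molecules(target_structure, n_candidates=100):
    """Stage 2 (a Chemistry42-like step): propose small molecules for a target."""
    raise NotImplementedError

def predict_phase2_to_phase3(trial_protocol, molecule):
    """Stage 3 (an inClinico-like step): estimate probability of trial success."""
    raise NotImplementedError

def pipeline(multi_omics_data, trial_protocol):
    targets = discover_targets(multi_omics_data)
    best_target = targets[0]                               # take the top-ranked target
    candidates = generate_molecules(best_target["structure"])
    scored = [(m, predict_phase2_to_phase3(trial_protocol, m)) for m in candidates]
    return max(scored, key=lambda pair: pair[1])           # highest predicted success
```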

The traditional approach starts with biology: identify novel targets, investigate the mechanism of action (MOA) for each target, and then try to link that target to a particular disease.

Insilico took a different approach, starting from patients rather than from biology. It begins with patients who have a disease condition and uses multi-omics data to identify novel targets. Because these targets are derived from patient data, they are more relevant to humans. Once the targets are identified, preclinical studies can then be used to investigate their MOA.

However, many of the 150 or so novel drugs already discovered using AI have yet to be fully validated. AI also touches upon only a specific aspect of drug discovery, and the predictability of safety profiles remains low and largely dependent on algorithms and data.

Billing itself as an AI-powered biotech, Insilico sees considerable room for growth in AI applications in drug discovery and has already licensed its platform to several multinational pharmaceutical firms.

Given your unique background, combining leadership in AI analytics at a prominent business school with extensive experience in the life sciences industry, what advice would you offer current pharmaceutical professionals and aspiring MBA graduates on applying AI initiatives?

To identify the best areas of your business to integrate AI, consider the following approach:

1. Conduct a thorough assessment of your current processes.

  • Map out your key business processes and workflows.
  • Identify manual, repetitive, or time-consuming tasks that could benefit from automation.
  • Look for areas where data analysis and decision-making could be improved.

2. Align potential AI use cases with strategic goals.

  • Review your organization’s mission statement and strategic objectives.
  • Identify how AI could support your growth agenda, improve efficiency, or reduce costs.
  • Consider how AI could enhance your products, services, or customer experiences.

3. Evaluate data availability and quality.

  • Assess the quality, quantity, and accessibility of data in different areas of your business.
  • Prioritize areas with rich, well-structured data that AI can leverage effectively.

4. Consider the potential impact and feasibility.

  • Estimate the potential business impact of AI implementation in different areas.
  • Assess the technical feasibility and resource requirements for each potential use case.
  • Look for “quick wins” that can demonstrate value quickly and build momentum.

5. Engage stakeholders across the organization.

  • Conduct interviews or workshops with department heads to identify pain points and opportunities.
  • Encourage employees to share ideas for AI applications in their work.

6. Prioritize use cases.

  • Create a master list of potential AI use cases across your organization.
  • Evaluate each use case based on strategic alignment, potential impact, feasibility, and resource requirements.
  • Prioritize use cases that offer the best balance of impact and ease of implementation.

7. Start small and scale.

  • Begin with pilot projects or proofs of concept to validate ideas and build expertise.
  • Use insights from initial projects to refine your approach and inform larger-scale implementations.

By systematically evaluating your business processes, aligning with strategic goals, and engaging stakeholders, you can identify the most promising areas for AI integration in your organization.
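As a small illustration of steps 4 and 6, one common way to operationalize the prioritization is to score each candidate use case on alignment, impact, and feasibility and rank by a weighted total. The weights and example use cases below are made up for illustration, not a recommendation.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    alignment: int    # 1-5: fit with strategic goals
    impact: int       # 1-5: estimated business impact
    feasibility: int  # 1-5: data availability, technical effort, resources

WEIGHTS = {"alignment": 0.3, "impact": 0.4, "feasibility": 0.3}

def score(u: UseCase) -> float:
    return (WEIGHTS["alignment"] * u.alignment
            + WEIGHTS["impact"] * u.impact
            + WEIGHTS["feasibility"] * u.feasibility)

backlog = [
    UseCase("Automate FDA report drafting", alignment=5, impact=4, feasibility=3),
    UseCase("Internal Q&A encyclopedia", alignment=4, impact=3, feasibility=4),
    UseCase("Real-time trial dashboards", alignment=4, impact=4, feasibility=2),
]

# Rank the backlog from most to least promising under these (illustrative) weights.
for u in sorted(backlog, key=score, reverse=True):
    print(f"{u.name}: {score(u):.2f}")
```

The exact weights matter less than making the trade-offs explicit so stakeholders can debate them.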

Remember to consider both the potential benefits and the practical considerations of implementation when making your decisions. In conclusion, beginning with a clear and shared understanding of what you want to achieve is crucial.

Bios:

Wei Zhang is a Kingland Faculty Fellow in Business Analytics and Associate Professor of Marketing at the Debbie and Jerry Ivy College of Business, Iowa State University. He is also the founding director of the Ivy Business Analytics & Digital Strategy Forum. Before joining Iowa State University, he spent almost a decade working in the industry. After obtaining his Ph.D. from Carnegie Mellon University, he started his career at McKinsey and Company as a management consultant. Subsequently, he held various managerial positions in pharmaceutical companies, including Amgen, Bristol-Myers Squibb, and Altus Pharmaceuticals, before ultimately becoming the co-founder and COO of Effigene Pharmaceuticals, an Atlanta-based company. His research has appeared in the Journal of Marketing Research, Marketing Science, Management Science, Journal of Consumer Research, and Nature Communications.

Partha Anbil is a Contributing Writer for the MIT Sloan Career Development Office and an alumnus of MIT Sloan. Besides being the VP of Programs of the MIT Club of Delaware Valley, Partha is a long-time veteran of the life sciences consulting industry. He has held senior leadership roles at IBM, Booz & Company (now PwC Strategy&), IMS Health Management Consulting Group (now IQVIA), and KPMG. He can be reached at partha.anbil@alum.mit.edu

Michael Wong is a contributing writer for the MIT Sloan Career Development Office. He is a part-time lecturer at The Wharton School, University of Pennsylvania. His ideas have been shared in the MIT Sloan Management Review and Harvard Business Review. He can be reached at mwong@mba1990.hbs.edu

By MIT Sloan CDO