
Teaching Executives to Use—and Challenge—AI

Monday, November 10, 2025
Illustration by iStock/Jesussanz
The SP Jain School of Global Management launches a prompting model that encourages executive learners to use human judgment when they employ AI tools.
  • Students who use AI might see gains in productivity but weaken their critical thinking skills unless instructors create structured frameworks for incorporating AI into the classroom.
  • At the SP Jain School, students are provided with AI analyses of complex scenarios, which they must then challenge and rework.
  • Executives learn to approach AI outputs with healthy skepticism, seeing AI as a “first draft” sparring partner instead of an infallible mentor.

 
When I introduced ChatGPT-supported assignments in an executive MBA course on business economics, students were immediately enthusiastic. Participants appreciated the speed, fluency, and structure the tool offered, and many treated it as a productivity breakthrough.

But the trade-off surfaced quickly. Assignments were more polished but noticeably shallower. Students leaned on AI’s coherence but neglected critical analysis, often skipping the deeper interpretive work expected at the executive level. What looked like performance gains were, in fact, signs of cognitive outsourcing.

I soon realized that while AI can democratize information access, it also can dilute critical inquiry if its use is left unstructured. In business education—especially in high-pressure executive and leadership development classes—this creates real risks:

  • Decision-making frameworks could become surface-level.
  • Judgment about source credibility could weaken.
  • Ethical reasoning might be crowded out by speed and convenience.

In executive programs, where learners are preparing for high-stakes leadership roles, these risks cannot be ignored. The problem isn’t AI's capability—it’s the absence of deliberate, faculty-led intellectual scaffolding around it.

Adding Humans to the Loop

At the SP Jain School of Global Management in Dubai, we have developed a human-in-the-loop prompting model that we use in our EMBA classrooms. This model ensures that AI serves as a tool for deeper inquiry, not a shortcut for content generation. It consists of three key elements:

Faculty input. Instructors curate complex, critical prompts that students employ when they’re completing assignments. To create the prompts, faculty use a structured sequence that mimics real-world executive decision-making and encourages a multidimensional analysis. They start with a managerial dilemma; layer in a macroeconomic complication, such as inflation or subsidy reform; and embed a behavioral dimension, such as overconfidence bias or loss aversion.
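The three-layer sequence described above—dilemma, macroeconomic complication, behavioral dimension—can be sketched as a simple template function. The scenario text and function name below are illustrative, not actual SP Jain course material:

```python
# Illustrative sketch of the three-layer prompt structure:
# managerial dilemma -> macroeconomic complication -> behavioral dimension.
# All example text is hypothetical, not an actual course prompt.

def build_prompt(dilemma: str, macro_complication: str, behavioral_dimension: str) -> str:
    """Layer the three elements into one scenario prompt for an AI tool."""
    return (
        f"Managerial dilemma: {dilemma}\n"
        f"Macroeconomic complication: {macro_complication}\n"
        f"Behavioral dimension: {behavioral_dimension}\n"
        "Task: Recommend a course of action, state your assumptions, "
        "and identify the weakest link in your own reasoning."
    )

prompt = build_prompt(
    dilemma="Should a mid-size retailer expand into a new regional market?",
    macro_complication="Inflation is running high and fuel subsidies are being phased out.",
    behavioral_dimension="The leadership team shows signs of overconfidence bias.",
)
print(prompt)
```

The closing "identify the weakest link" instruction is what distinguishes this kind of prompt from a simple content request: it builds the student's critique step into the scenario itself.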

Student effort. Students then use ChatGPT to iterate and refine the prompts. Students are required to challenge the outputs that ChatGPT provides, find flaws in its reasoning, and apply course frameworks to revise and critique its responses. Through these actions, students learn to identify ambiguities, explore consequences, and test ideas—skills directly transferable to boardroom conversations.

A human-in-the-loop prompting model ensures that AI will become a tool for deeper inquiry, not a shortcut for content generation.

Updated rubrics. New grading rubrics measure how well students reflect upon and synthesize information provided by AI. These assessments emphasize strategic thinking and critical engagement over speed or surface coverage.
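A rubric of this kind can be represented as weighted criteria. The criterion names, weights, and scores below are hypothetical; they are meant only to show how critique-oriented dimensions can be made to outweigh polish:

```python
# Hypothetical grading rubric: weights emphasize critical engagement
# with AI output over surface polish. Per-criterion scores use a 0-5 scale.

RUBRIC_WEIGHTS = {
    "challenges_ai_assumptions": 0.35,
    "applies_course_frameworks": 0.30,
    "revises_ai_recommendation": 0.20,
    "clarity_and_polish": 0.15,  # deliberately the smallest weight
}

def grade(scores: dict) -> float:
    """Weighted average of per-criterion scores (0-5 scale)."""
    return sum(RUBRIC_WEIGHTS[c] * scores[c] for c in RUBRIC_WEIGHTS)

# A polished but uncritical submission scores lower than a rough
# but deeply critical one.
uncritical = grade({"challenges_ai_assumptions": 1, "applies_course_frameworks": 2,
                    "revises_ai_recommendation": 1, "clarity_and_polish": 5})
critical = grade({"challenges_ai_assumptions": 5, "applies_course_frameworks": 4,
                  "revises_ai_recommendation": 4, "clarity_and_polish": 3})
```

Under these weights, the uncritical submission earns 1.90 out of 5 while the critical one earns 4.20, even though the first is better written: the rubric structurally rewards engagement over AI-assisted polish.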

Unlike generic prompt libraries or AI-first automation tools, this framework relies on a double-layered instructional design. Prompts are not only scenario-based, but also behaviorally framed and economically anchored. Faculty members become architects of ambiguity, deliberately embedding friction points that require executive learners to navigate trade-offs, uncertainty, and imperfect information. These demands mirror the cognitive terrain of real leadership.

Examples for Executive Classrooms

At the SP Jain School, professors have developed a series of scenarios designed to present participants with complex situations similar to those they might encounter at the executive level. The students’ task is to challenge the assumptions in the AI’s analysis using models discussed in class.

In one recent exercise, an AI tool generated recommendations for a company entering an emerging market during a period of high inflation. A participant flagged the AI’s omission of currency volatility and institutional trust indices—issues she had faced while expanding operations in Latin America.

Another prompt involved a behavioral element: AI offered a pricing strategy that assumed rational consumer response. Students were required to identify overconfidence bias and suggest corrective actions using behavioral frameworks discussed in class.

In a third case, which considered how a company might react to rising shipping costs, AI suggested focusing on supplier renegotiation. One student, a senior executive in logistics, instead used class concepts to model a demand elasticity scenario, revealing how temporary route shifts could lead to long-run gains.
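The kind of demand-elasticity reasoning that student applied can be illustrated with a back-of-envelope calculation. All figures here are hypothetical: the point is that when demand is relatively inelastic, a costlier temporary route can still leave revenue higher than the AI's renegotiation-only framing suggests.

```python
# Back-of-envelope price elasticity of demand (all figures hypothetical).
# Arc elasticity = %change in quantity / %change in price, using midpoints.

def arc_elasticity(q0: float, q1: float, p0: float, p1: float) -> float:
    """Midpoint (arc) price elasticity of demand."""
    pct_dq = (q1 - q0) / ((q0 + q1) / 2)
    pct_dp = (p1 - p0) / ((p0 + p1) / 2)
    return pct_dq / pct_dp

# Suppose rerouting raises the delivered price from 50 to 55 (about 10%),
# while shipments fall only from 100 to 96 units (about 4%).
e = arc_elasticity(q0=100, q1=96, p0=50.0, p1=55.0)

revenue_before = 100 * 50.0
revenue_after = 96 * 55.0
# |e| < 1 means demand is inelastic here: the higher price more than
# offsets the lost volume, so revenue rises despite the route shift.
```

With these numbers, |e| is roughly 0.43 and revenue rises from 5,000 to 5,280, which is the shape of the argument the student made against the AI's supplier-renegotiation default.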

Another hypothetical case involved a retail leader navigating post-COVID inflation. The AI recommended price increases, but the student used behavioral framing to test consumer response strategies such as decoy pricing and product bundling.

In a final scenario, an AI tool generated a strategy for expanding into a Southeast Asian market. The AI emphasized projected growth and digital adoption rates, but failed to account for the risks of regulatory opacity and talent attrition.

One of the participants, a tech-sector executive, had faced both of these concerns in a real-world merger. Using course frameworks, the student reconstructed the analysis by integrating institutional economics concepts, recalibrating risk assumptions, and modeling return volatility over a five-year horizon. The exercise highlighted how AI outputs, while data-driven, often lack the strategic depth executives need to manage geopolitical and operational uncertainty.

Presented with complex situations similar to those they might encounter at the executive level, students must challenge the assumptions in the AI’s analysis using models discussed in class.

In all cases, the most impactful learning moments emerged when students critiqued and redesigned the suggestions generated by AI. Several executives noted that these exercises mirrored the uncertainty they encounter in boardrooms, where answers are suggested but judgment determines which ones to trust.

One executive commented, “This was the first assignment where the uncertainty felt real—I had to decide which AI assumptions were safe to use and which to discard.” Another reflected, “It reminded me of reviewing internal reports where I don’t fully trust the data but still have to take a stand.” Such reactions indicate that the prompting model didn’t just engage students cognitively—it triggered reflection on leadership habits, risk framing, and adaptive thinking.

Results and Reflections

As a faculty member employing these exercises in the classroom, I wasn’t just evaluating the content quality of the students’ responses. I was witnessing a shift in how students used AI as a thinking partner. The most insightful responses included annotated disagreements, revised economic forecasts, and flagged inconsistencies.

For example, when one student rewrote an AI-generated demand forecast to include regional consumer trust indices, the student demonstrated not just content mastery, but contextual leadership insight. These moments signaled that executives weren’t outsourcing thinking—they were sharpening it.

After our school used the structured AI-supported model across eight EMBA cohorts, we found that students had benefited in three primary ways:

  • They received higher scores for their critical application abilities.
  • They demonstrated stronger reflection and judgment skills in post-assignment qualitative feedback.
  • They refined their understanding of AI—viewing it not as an infallible advisor, but as a “first draft” sparring partner.

Students who initially overtrusted the tool learned to approach AI outputs with healthy skepticism—a skill vital not only in the classroom, but in boardrooms.

Reframing the Executive Mindset

Over the course of eight cohorts, what changed was not just how students wrote, but how they thought. In exit reflections, students described moving from “prompting for perfection” to “prompting for tension.” They sought ambiguity, welcomed contradiction, and began to see AI as a pressure test for their leadership reasoning, rather than an assistant that could handle their critical thinking tasks.

This mindset shift was especially powerful for professionals operating in high-stakes, rapidly evolving environments. Executives reported that they became more comfortable exploring multiple interpretations before making decisions—an essential trait in managing uncertainty tied to digital transformation, regulatory volatility, and organizational change.

Participants began to seek ambiguity, welcome contradiction, and see AI as a pressure test for their leadership reasoning, rather than an assistant that could handle their critical thinking tasks.

One student noted that the exercise helped “build muscle for slowing down under pressure.” This is an unusual but invaluable outcome for executive education, where fast thinking is often rewarded.

Importantly, the prompting model offered participants psychological permission to experiment. AI became a rehearsal space—a cognitive sandbox in which executives could test and refine judgment without reputational risk. This combination of intellectual challenge and emotional safety helped reframe AI from a tactical script to a reflective lens, and from a transactional tool to a strategic learning partner.

Four Strategic Takeaways

We believe our prompting model yields four key lessons for business education leaders:

Faculty curation is non-negotiable. Unstructured AI use can prioritize speed over thinking. Faculty must remain the visible architects of AI prompts and critique frameworks.

Critical thinking must be a graded outcome. Assessing only the product (a polished answer) encourages students to take AI shortcuts. Embedding critique, reflection, and rework into assignments preserves cognitive rigor.

AI should be framed as a test, not a tutor. AI tools should be positioned as resources that students must challenge, not as mentors they should trust blindly. This approach causes students to shift from consuming AI to evaluating it.

Curriculum innovation must preserve ethical skepticism. Leadership education must train students to question, verify, and discern. These skills are essential in a world where AI mediates decision-making.

Leading With Judgment in an AI Era

As AI integration accelerates, the role of business schools is not simply to teach new tools—it is to preserve the judgment frameworks that leadership demands. Faculty must remain the human anchor: curating complexity, structuring inquiry, and fostering the development of critical reflection skills that AI alone cannot replicate.

In an AI-enhanced era, true leaders do not just adopt technology, but master its use. Targeted classroom exercises enable executive learners to develop strategic clarity and to practice ethical skepticism and adaptive leadership in a safe space.

Importantly, our prompting model does not replace rigor with convenience, but expands rigor to include reflection on digital tools themselves. It encourages executives to slow down, test assumptions, and bring curiosity to the unknowns embedded in algorithmic outputs.

Over time, this approach does more than build better responses—it builds better responders. It reshapes classroom culture around inquiry and equips learners with the confidence to challenge the AI systems they use, as well as the broader organizational narratives they encounter in practice.

As accreditation bodies and institutional leaders seek to demonstrate responsible innovation, this model offers a practical, tested framework for meaningful AI integration. This framework blends cognitive rigor, ethical use, and real-world relevance—qualities that business education must continue to champion if it hopes to prepare future leaders for an AI-shaped economy.

Authors
Muniza Askari
Assistant Professor of Economics, SP Jain School of Global Management
The views expressed by contributors to AACSB Insights do not represent an official position of AACSB, unless clearly stated.