Teaching in the Age of Generative AI
AI Basics


Getting Started with Generative Artificial Intelligence

At Columbia Business School, the Samberg Institute is committed to empowering our community with the tools and knowledge to navigate the transformative potential of Artificial Intelligence (AI). This page offers foundational resources designed to help you understand the basics of AI, explore its capabilities, and see how it can enhance teaching, learning, and innovation. 

Whether you’re new to AI or seeking a firmer grounding in the technology, these curated resources will provide a clear and approachable introduction tailored to the needs of the CBS community.


Columbia University Policy for the Use of Generative AI

Generative AI is a powerful tool capable of creating innovative solutions, automating repetitive tasks, and enhancing productivity. However, to use it responsibly and effectively, users need a solid understanding of best practices, prompting techniques, ethical considerations, and its limitations. Columbia University maintains a practical, easy-to-understand policy addressing common concerns and appropriate use of this technology in our community.

Columbia University Generative AI Policy

Generative AI Prompt Design

A well-written prompt enables faculty members to get the best possible results from AI. CBS provides a series of prompt design suggestions for faculty to review.

Be Specific and Clear

AI performs best with detailed instructions. The more precise your prompt, the closer the generative AI will get to your desired result. Vague prompts often lead to generic or irrelevant outputs.

What to Do:
Instead of general instructions like "Create a presentation," include specific details such as the number of slides, the topic, and the intended audience. For example:

  • Vague Prompt: "Create a presentation."
  • Specific Prompt: "Create a 5-slide presentation explaining the benefits of renewable energy for corporate sustainability programs. Each slide should have 3-4 bullet points and a brief description."

Why It Works:
Specificity helps AI understand your expectations and tailor its response. If you provide too little information, generative AI will make assumptions that may not align with your goals.
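The vague-versus-specific contrast above can even be enforced mechanically. The sketch below is a hypothetical helper (not a CBS-provided tool; all parameter names are illustrative) that only assembles a prompt when every component of a specific request is supplied:

```python
def build_specific_prompt(task, topic, audience, details):
    """Assemble a prompt from explicit components.

    A specific prompt names its task, topic, audience, and formatting
    details, rather than leaving the model to make assumptions.
    """
    components = {"task": task, "topic": topic,
                  "audience": audience, "details": details}
    missing = [name for name, value in components.items() if not value]
    if missing:
        raise ValueError(f"Prompt is too vague; missing: {', '.join(missing)}")
    return f"{task} explaining {topic} for {audience}. {details}"

# The specific prompt from the example above, built from its parts:
prompt = build_specific_prompt(
    task="Create a 5-slide presentation",
    topic="the benefits of renewable energy",
    audience="corporate sustainability programs",
    details="Each slide should have 3-4 bullet points and a brief description.",
)
```

Forcing each component to be named makes the "vague prompt" case fail fast instead of producing a generic result.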

Provide Context and Examples

AI thrives when provided with background information or examples that guide its output. Context helps generative AI "understand" what you're asking and reduces the chances of irrelevant responses.

What to Do:
Specify the purpose, tone, and audience for your request. If possible, include an example of what you're looking for or assign AI a specific role, such as a peer reviewer, tutor, or marketing consultant, to guide its response effectively. For instance, when drafting an email:

  • Without Context: "Draft an email."
  • With Context: "Draft a professional email introducing our renewable energy consulting services to a potential client. Keep the tone formal but approachable. For reference, here's a similar email we’ve used: [insert example]."

Why It Works:
Providing context helps the AI produce content that is both accurate and aligned with your needs. Examples act as a template, giving generative AI a clearer sense of direction.
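One way to make role, tone, and reference material a routine part of every request is a small template function. The helper below is an illustrative sketch, not an official tool; the function and parameter names are assumptions:

```python
def contextual_prompt(role, request, tone, example=None):
    """Wrap a bare request with a role, a tone, and an optional reference example."""
    parts = [f"Act as a {role}.", request, f"Keep the tone {tone}."]
    if example is not None:
        parts.append(f"For reference, here's a similar example: {example}")
    return " ".join(parts)

# The email example above, with role and tone made explicit:
email_prompt = contextual_prompt(
    role="marketing consultant",
    request=("Draft a professional email introducing our renewable energy "
             "consulting services to a potential client."),
    tone="formal but approachable",
)
```

Because the template always asks for a role and a tone, "Draft an email" without context simply cannot be expressed.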

Use Constraints

Adding constraints ensures that generative AI’s output adheres to your specific requirements, such as length, tone, or format. Constraints reduce ambiguity and improve relevance.

What to Do:
Define boundaries for the response. For example:

  • Without Constraints: "Summarize this article."
  • With Constraints: "Write a concise, 150-word summary of this article, focusing on the key findings and keeping the tone professional."
  • If you're generating creative content, you can add stylistic constraints:
    • Example for Creativity: "Write a 200-word story in a whimsical tone about a child who discovers a magical forest."

Why It Works:
Constraints limit the scope of the response, ensuring it fits the specific format or purpose you have in mind. They help avoid overly broad or unstructured outputs.
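Because constraints like word limits and required terms are objective, a draft can be checked against them automatically before you accept it. A minimal sketch, with illustrative names:

```python
def meets_constraints(text, max_words=150, required_terms=()):
    """Return True if a draft respects a word limit and mentions required terms."""
    within_limit = len(text.split()) <= max_words
    missing = [term for term in required_terms
               if term.lower() not in text.lower()]
    return within_limit and not missing

# A candidate summary checked against the 150-word, key-findings constraint:
summary = "The study's key findings show renewable adoption cut costs by 12 percent."
```

A check like this turns "concise, 150-word summary focusing on the key findings" from a hope into a pass/fail test.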

Incorporate Feedback Loops

Refining your prompts based on initial outputs is crucial to achieving the desired result. Think of interacting with generative AI as a conversation in which you provide feedback and adjust your instructions.

What to Do:
If the first output isn’t correct, revise your prompt with more detail or clarification. For instance:

  • First Prompt: "Write a social media post about renewable energy."
    • Generative AI Output: "Renewable energy is important for the environment. Switch today!"
  • Revised Prompt: "Write a 50-word LinkedIn post for a professional audience, highlighting the cost-saving benefits of renewable energy adoption for businesses. Use an optimistic and inspiring tone."
    • Generative AI Output: "Adopting renewable energy isn’t just good for the planet—it’s great for your bottom line. Businesses can cut costs and boost sustainability by switching to clean energy. Join the movement toward a greener future and unlock financial savings today. #RenewableEnergy #Sustainability #BusinessGrowth"

You can also use feedback loops to adjust tone or content:

  • Feedback: "This is too casual. Please revise it to sound more formal and include a specific example."

Why It Works:
Feedback loops allow you to refine and improve AI outputs iteratively. Each adjustment helps generative AI align more closely with your expectations, leading to more useful results over time.
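The feedback loop above can be sketched as a short retry cycle. Here the model is a stand-in callable (a real implementation would call a generative AI service), and all names are illustrative:

```python
def refine(model, prompt, accept, feedback, max_rounds=3):
    """Re-prompt with feedback until the output passes an acceptance check."""
    output = model(prompt)
    for _ in range(max_rounds):
        if accept(output):
            break
        prompt = f"{prompt}\nFeedback: {feedback}"
        output = model(prompt)
    return output

# A toy stand-in model: it only produces the richer post once feedback appears.
def toy_model(prompt):
    if "Feedback:" in prompt:
        return ("Adopting renewable energy cuts costs and boosts sustainability; "
                "for example, solar offsets peak-rate usage.")
    return "Renewable energy is important. Switch today!"

post = refine(
    toy_model,
    "Write a social media post about renewable energy.",
    accept=lambda text: "example" in text,
    feedback="Sound more formal and include a specific example.",
)
```

The structure mirrors the conversation above: inspect the output, append feedback, and regenerate until the result meets your criteria.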

Additional Resources

  • Thinking About Assessment in the Time of Generative Artificial Intelligence - This Masterclass video and instructional guide from the Digital Futures Institute, Teachers College at Columbia University, contains best practices, as well as tips and tricks for prompt writing.
  • How to Talk to AIs: Prompt Design 101 - Columbia Emerging Technologies presents best practices and suggestions for creating effective generative AI prompts.

Integrating Generative AI with Human Expertise

1. Guide Generative AI with Clear Objectives

Start by identifying the specific task or problem you want to address. Thoughtful input leads to more relevant and useful output.

  • Identify the target audience, tone, and purpose for your request.
  • Avoid vague prompts like "Write something about marketing."
  • Use specific prompts like "Draft a social media campaign targeting Gen Z consumers for a sustainable product."
  • Clearly outline what success looks like for the AI-generated output.

2. Balance Generative AI with Human Judgment

AI is a tool to enhance, not replace, human creativity and judgment. Faculty and students should critically assess AI outputs, refining them as needed.

  • Use AI as a starting point, not the final product—human oversight remains essential.
  • Review outputs to ensure they align with your goals and standards.
  • Add domain-specific expertise and personal insight to improve accuracy and relevance.
  • Ensure the final deliverable reflects human understanding and empathy.

3. Engage and Iterate with Generative AI

Thoughtful engagement with AI, combined with an iterative approach, enhances accuracy, relevance, and reliability while fostering a deeper understanding of its capabilities and limitations.

  • Assess AI-generated content critically, comparing it to human analysis and reasoning.
  • Experiment with different prompts and refine inputs to improve the quality of responses.
  • Design assignments that require students to critique, modify, or improve AI-generated outputs.
  • Highlight AI’s biases and knowledge gaps, prompting discussions about credibility and accuracy.

4. Adapt Generative AI Use to Context

Customize your use of generative AI based on the specific context and goals of the task. Ensure it aligns with your objectives and complements your workflow.

  • In educational settings, connect generative AI outputs to specific learning objectives.
  • Encourage critical thinking by using generative AI to complement—not replace—student engagement.
  • Use generative AI to generate ideas or drafts, but rely on human judgment to refine and shape the final outcome.
  • Ensure the way you integrate generative AI is relevant to real-world applications for your audience.
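The human-in-the-loop principle running through these four points can be expressed as a tiny workflow sketch: the AI drafts, a person reviews and edits, and only approved text moves forward. Both callables are hypothetical stand-ins:

```python
def ai_assisted_workflow(generate, review):
    """Route an AI draft through a mandatory human review step.

    `generate` returns a draft string; `review` returns the human-edited
    final text, or None to reject the draft outright.
    """
    draft = generate()
    final = review(draft)
    if final is None:
        raise ValueError("Draft rejected; refine the prompt and regenerate.")
    return final

# Stand-ins: the AI drafts, the human adds domain expertise before release.
result = ai_assisted_workflow(
    generate=lambda: "Draft syllabus blurb on renewable energy finance.",
    review=lambda draft: draft + " (Reviewed and edited by the instructor.)",
)
```

The design choice is that there is no code path from draft to final output that skips the human reviewer.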

Ethical Considerations

As CBS embraces AI's potential within higher education, its integration may present ethical challenges. Faculty, staff, and students should be aware of the potential complications of using AI.

  1. Bias and Fairness
    Generative AI systems learn from data, and if that data contains biases, generative AI’s outputs may perpetuate those biases. For example, biased hiring datasets might lead to unfair hiring recommendations. Regular audits of generative AI outputs are critical to detect and address these biases. Inclusivity is also vital—ensure outputs are free from language or assumptions that could marginalize any group.
  2. Transparency
    Always disclose when and how generative AI tools have been used, especially in professional, academic, or creative settings. This maintains integrity and builds trust with your audience or stakeholders.
  3. Energy and Environmental Impact
    Training and running generative AI systems require significant computational resources, contributing to carbon emissions. Organizations and individuals should weigh the environmental costs against the benefits and prioritize using AI tools efficiently.
  4. Privacy and Security
    Never input sensitive or confidential information into generative AI systems unless you’re sure the data is secure. Many generative AI tools retain input data for model improvement, potentially leading to unintended exposure.
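One practical safeguard for point 4 is to screen text for obvious identifiers before it ever reaches an AI tool. The sketch below is illustrative and deliberately incomplete; pattern-based redaction is a safety net, not a guarantee, and the patterns shown are assumptions, not an official filter:

```python
import re

# Illustrative patterns only; real redaction needs a much fuller ruleset.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SHORT_ID = re.compile(r"\b[a-z]{2,3}\d{3,4}\b")  # e.g., university-style user IDs

def redact(text):
    """Mask obvious personal identifiers before sending text to an AI tool."""
    text = EMAIL.sub("[EMAIL]", text)
    text = SHORT_ID.sub("[ID]", text)
    return text
```

Even a simple pre-filter like this reduces the chance of sensitive data ending up in a tool that retains inputs for model improvement.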

Resources from the CBS Community

  • The Implications of AI on the Future of Education & Work: An MBA Perspective - Recent CBS graduate Jane Bernhard shares her perspective on the influence of AI across education, the workplace, and society.
  • Raising ‘Responsible AI’ from the Ground Up - CBS’s Hongseok Namkoong discusses the importance of responsible AI and its potential ethical implications.
  • Exploring Democracy in the Age of AI - In this YouTube video and transcript, CBS Professor Bruce Kogut discusses the significant impact of AI on social and traditional media.
  • Navigating the Ethical Concerns of Generative AI - A 2023 Klion Forum panel of experts in media, law, and industry addresses the ethics of generative AI.

Limitations of Generative AI

While generative AI offers powerful capabilities, faculty need to understand its limitations—including issues with accuracy, contextual gaps, and ethical challenges—to ensure its responsible and effective use in teaching and learning.

  1. Accuracy and Hallucination
    Generative AI sometimes produces "hallucinations"—confidently incorrect or fabricated information. For example, an AI might cite non-existent studies or create fictional data. Always verify the accuracy of AI-generated outputs.
  2. Contextual Understanding
    Generative AI lacks true understanding and common sense. While it can analyze patterns and generate plausible responses, it doesn’t grasp nuanced meanings or cultural subtleties. This can lead to outputs that miss the mark or fail to align with human intentions.
  3. Dependence on Training Data
    Generative AI models are only as good as the data they are trained on. If the training data is outdated, narrow in scope, or biased, the outputs will reflect those shortcomings.
  4. Ethical Dilemmas in Automation
    Over-reliance on generative AI can diminish human oversight and accountability, especially in areas like decision-making, hiring, or content moderation. Users must recognize where human intervention remains essential.
