Getting Started with Generative Artificial Intelligence
At Columbia Business School, the Samberg Institute is committed to empowering our community with the tools and knowledge to navigate the transformative potential of Artificial Intelligence (AI). This page offers foundational resources designed to help you understand the basics of AI, explore its capabilities, and see how it can enhance teaching, learning, and innovation.
Whether you’re new to AI or seeking a firmer grounding in the technology, these curated resources will provide a clear and approachable introduction tailored to the needs of the CBS community.
Columbia University Policy for the Use of Generative AI
Generative AI is a powerful tool capable of creating innovative solutions, automating repetitive tasks, and enhancing productivity. However, to use it responsibly and effectively, users need a solid understanding of best practices, prompting techniques, ethical considerations, and its limitations. Columbia University maintains a practical, easy-to-understand policy addressing common concerns and appropriate use of this technology in our community.
Generative AI Prompt Design
A well-written prompt helps faculty get the best possible results from AI. CBS provides the following prompt design suggestions for faculty to review.
Be Specific and Clear
AI performs best with detailed instructions. The more precise your prompt, the closer the generative AI will get to your desired result. Vague prompts often lead to generic or irrelevant outputs.
What to Do:
Instead of general instructions like "Create a presentation," include specific details such as the number of slides, the topic, and the intended audience. For example:
- Vague Prompt: "Create a presentation."
- Specific Prompt: "Create a 5-slide presentation explaining the benefits of renewable energy for corporate sustainability programs. Each slide should have 3-4 bullet points and a brief description."
Why It Works:
Specificity helps AI understand your expectations and tailor its response. If you provide too little information, generative AI will make assumptions that may not align with your goals.
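For readers who script their AI workflows, the same principle can be expressed in code. The sketch below (Python; the helper name and template are illustrative, not part of any CBS or vendor tool) assembles a specific prompt from the details a vague request omits:

```python
def build_presentation_prompt(topic, audience, slides=5, bullets_per_slide="3-4"):
    """Turn a vague request ("create a presentation") into a specific prompt
    by filling in the topic, audience, slide count, and per-slide detail."""
    return (
        f"Create a {slides}-slide presentation explaining {topic} "
        f"for {audience}. Each slide should have {bullets_per_slide} "
        f"bullet points and a brief description."
    )

prompt = build_presentation_prompt(
    topic="the benefits of renewable energy",
    audience="corporate sustainability programs",
)
print(prompt)
```

Templating the details this way makes it harder to send a vague request by accident: every slot the AI would otherwise have to guess at becomes an explicit parameter.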
Provide Context and Examples
AI thrives when provided with background information or examples that guide its output. Context helps generative AI "understand" what you're asking and reduces the chances of irrelevant responses.
What to Do:
Specify the purpose, tone, and audience for your request. If possible, include an example of what you're looking for or assign AI a specific role, such as a peer reviewer, tutor, or marketing consultant, to guide its response effectively. For instance, when drafting an email:
- Without Context: "Draft an email."
- With Context: "Draft a professional email introducing our renewable energy consulting services to a potential client. Keep the tone formal but approachable. For reference, here's a similar email we’ve used: [insert example]."
Why It Works:
Providing context helps the AI produce content that is both accurate and aligned with your needs. Examples act as a template, giving generative AI a clearer sense of direction.
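Many chat-style AI services accept a list of messages with roles, and a "system" message is a common place to assign the AI a role and supply context. A minimal sketch of that pattern (the role/content message convention is widely used but not specific to any one vendor; the function and its arguments are illustrative):

```python
def build_contextual_messages(role, task, tone, example=None):
    """Package a role assignment, the task, the desired tone, and an
    optional reference example into chat-style messages."""
    request = f"{task} Keep the tone {tone}."
    if example is not None:
        request += f" For reference, here is a similar example: {example}"
    return [
        {"role": "system", "content": f"You are {role}."},  # role assignment
        {"role": "user", "content": request},               # task + context
    ]

messages = build_contextual_messages(
    role="a marketing consultant",
    task="Draft a professional email introducing our renewable energy "
         "consulting services to a potential client.",
    tone="formal but approachable",
)
```

The resulting list can be passed to whichever chat API your tool of choice exposes; the point is that role, purpose, tone, and example each get an explicit slot rather than being left implicit.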
Use Constraints
Adding constraints ensures that generative AI’s output adheres to your specific requirements, such as length, tone, or format. Constraints reduce ambiguity and improve relevance.
What to Do:
Define boundaries for the response. For example:
- Without Constraints: "Summarize this article."
- With Constraints: "Write a concise, 150-word summary of this article, focusing on the key findings and keeping the tone professional."
If you're generating creative content, you can add stylistic constraints:
- Example for Creativity: "Write a 200-word story in a whimsical tone about a child who discovers a magical forest."
Why It Works:
Constraints limit the scope of the response, ensuring it fits the specific format or purpose you have in mind. They help avoid overly broad or unstructured outputs.
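For scripted workflows, constraints can be appended to a base instruction programmatically. A minimal Python sketch (the helper and its parameters are hypothetical, chosen to mirror the length, tone, and focus constraints above):

```python
def add_constraints(instruction, word_limit=None, tone=None, focus=None):
    """Append explicit constraints (length, tone, focus) to a base instruction,
    skipping any that are not supplied."""
    parts = [instruction]
    if word_limit is not None:
        parts.append(f"Limit the response to {word_limit} words.")
    if tone is not None:
        parts.append(f"Keep the tone {tone}.")
    if focus is not None:
        parts.append(f"Focus on {focus}.")
    return " ".join(parts)

constrained = add_constraints(
    "Summarize this article.",
    word_limit=150,
    tone="professional",
    focus="the key findings",
)
```

Keeping each constraint as a named parameter makes it easy to reuse the same base instruction with different limits for different tasks.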
Incorporate Feedback Loops
Refining your prompts based on initial outputs is crucial to achieving the desired result. Think of interacting with generative AI as a conversation in which you provide feedback and adjust your instructions.
What to Do:
If the first output isn’t correct, revise your prompt with more detail or clarification. For instance:
- First Prompt: "Write a social media post about renewable energy."
- Generative AI Output: "Renewable energy is important for the environment. Switch today!"
- Revised Prompt: "Write a 50-word LinkedIn post for a professional audience, highlighting the cost-saving benefits of renewable energy adoption for businesses. Use an optimistic and inspiring tone."
- Generative AI Output: "Adopting renewable energy isn’t just good for the planet—it’s great for your bottom line. Businesses can cut costs and boost sustainability by switching to clean energy. Join the movement toward a greener future and unlock financial savings today. #RenewableEnergy #Sustainability #BusinessGrowth"
You can also use feedback loops to adjust tone or content:
- Feedback: "This is too casual. Please revise it to sound more formal and include a specific example."
Why It Works:
Feedback loops allow you to refine and improve AI outputs iteratively. Each adjustment helps generative AI align more closely with your expectations, leading to more useful results over time.
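If you drive a model from code, the same feedback loop can be written as an iterative function. The sketch below is a generic Python pattern with a stub standing in for a real model; no actual AI service is called, and all names are illustrative:

```python
def refine(generate, prompt, accept, revise, max_rounds=3):
    """Regenerate until the output passes an acceptance check, folding
    feedback into the prompt each round (mirrors the manual loop above)."""
    output = generate(prompt)
    for _ in range(max_rounds - 1):
        if accept(output):
            break
        prompt = revise(prompt, output)  # add detail or clarification
        output = generate(prompt)
    return output

# Stub "model": answers generically until the prompt names an audience.
def fake_model(prompt):
    if "LinkedIn" in prompt:
        return "Adopting renewable energy cuts costs for businesses."
    return "Renewable energy is important. Switch today!"

result = refine(
    generate=fake_model,
    prompt="Write a social media post about renewable energy.",
    accept=lambda out: "businesses" in out,
    revise=lambda p, out: p + " Make it a LinkedIn post for a professional "
                              "audience, highlighting cost savings for businesses.",
)
```

Here the first generic output fails the acceptance check, the prompt is revised with more detail, and the second output passes, which is exactly the conversational refinement described above, just automated.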
Additional Resources
- Thinking About Assessment in the Time of Generative Artificial Intelligence - This Masterclass video and instructional guide from the Digital Futures Institute, Teachers College at Columbia University, contains best practices, as well as tips and tricks for prompt writing.
- How to Talk to AIs: Prompt Design 101 - Columbia Emerging Technologies presents best practices and suggestions for creating effective generative AI prompts.
Integrating Generative AI with Human Expertise
1. Guide Generative AI with Clear Objectives
Start by identifying the specific task or problem you want to address. Thoughtful input leads to more relevant and useful output.
- Identify the target audience, tone, and purpose for your request.
- Avoid vague prompts like "Write something about marketing."
- Use specific prompts like "Draft a social media campaign targeting Gen Z consumers for a sustainable product."
- Clearly outline what success looks like for the AI-generated output.
2. Balance Generative AI with Human Judgment
AI is a tool to enhance, not replace, human creativity and judgment. Faculty and students should critically assess AI outputs, refining them as needed.
- Use AI as a starting point, not the final product—human oversight remains essential.
- Review outputs to ensure they align with your goals and standards.
- Add domain-specific expertise and personal insight to improve accuracy and relevance.
- Ensure the final deliverable reflects human understanding and empathy.
3. Engage and Iterate with Generative AI
Thoughtful engagement with AI, combined with an iterative approach, enhances accuracy, relevance, and reliability while fostering a deeper understanding of its capabilities and limitations.
- Assess AI-generated content critically, comparing it to human analysis and reasoning.
- Experiment with different prompts and refine inputs to improve the quality of responses.
- Design assignments that require students to critique, modify, or improve AI-generated outputs.
- Highlight AI’s biases and knowledge gaps, prompting discussions about credibility and accuracy.
4. Adapt Generative AI Use to Context
Customize your use of generative AI based on the specific context and goals of the task. Ensure it aligns with your objectives and complements your workflow.
- In educational settings, connect generative AI outputs to specific learning objectives.
- Encourage critical thinking by using generative AI to complement—not replace—student engagement.
- Use generative AI to generate ideas or drafts, but rely on human judgment to refine and shape the final outcome.
- Ensure the way you integrate generative AI is relevant to real-world applications for your audience.
Ethical Considerations
As CBS embraces AI's potential within higher education, its integration may present ethical challenges. Faculty, staff, and students should be aware of the potential complications of using AI.
- Bias and Fairness
Generative AI systems learn from data, and if that data contains biases, generative AI’s outputs may perpetuate those biases. For example, biased hiring datasets might lead to unfair hiring recommendations. Regular audits of generative AI outputs are critical to detect and address these biases. Inclusivity is also vital—ensure outputs are free from language or assumptions that could marginalize any group.
- Transparency
Always disclose when and how generative AI tools have been used, especially in professional, academic, or creative settings. This maintains integrity and builds trust with your audience or stakeholders.
- Energy and Environmental Impact
Training and running generative AI systems require significant computational resources, contributing to carbon emissions. Organizations and individuals should weigh the environmental costs against the benefits and prioritize using AI tools efficiently.
- Privacy and Security
Never input sensitive or confidential information into generative AI systems unless you’re sure the data is secure. Many generative AI tools retain input data for model improvement, potentially leading to unintended exposure.
Resources from the CBS Community
- The Implications of AI on the Future of Education & Work: An MBA Perspective - A recent CBS graduate, Jane Bernhard, shares her perspective on the influence of AI across education, the workplace, and society.
- Raising ‘Responsible AI’ from the Ground Up - CBS’s Hongseok Namkoong reviews the importance of responsible AI and the potential ethical implications.
- Exploring Democracy in the Age of AI - In this YouTube video and transcript, CBS Professor Bruce Kogut talks about the significant impact of AI on social and traditional media.
- Navigating the Ethical Concerns of Generative AI - A 2023 Klion Forum panel of experts in media, law, and industry addresses the ethics of generative AI.
Limitations of Generative AI
- Accuracy and Hallucination
Generative AI sometimes produces "hallucinations"—confidently incorrect or fabricated information. For example, an AI might cite non-existent studies or create fictional data. Always verify the accuracy of AI-generated outputs.
- Contextual Understanding
Generative AI lacks true understanding and common sense. While it can analyze patterns and generate plausible responses, it doesn’t grasp nuanced meanings or cultural subtleties. This can lead to outputs that miss the mark or fail to align with human intentions.
- Dependence on Training Data
Generative AI models are only as good as the data they are trained on. If the training data is outdated, narrow in scope, or biased, the outputs will reflect those shortcomings.
- Ethical Dilemmas in Automation
Over-reliance on generative AI can diminish human oversight and accountability, especially in areas like decision-making, hiring, or content moderation. Users must recognize where human intervention remains essential.