Lecture_2 Flashcards
(44 cards)
What is qualitative research?
- Non-numerical data: Text, image, video, artefacts.
Why use qualitative research?
- To gain an in-depth understanding of a phenomenon in its context
- To guide managerial decision-making and the development of new theories; to allow the “real” world to inform theorizing and decision-making
- To elaborate or extend existing theory
- new concepts, processes, or relationships between concepts, theory applications
- To get new insights
- In practice: new trends, customer needs, untapped market potential
- To make (more) sense of quantitative data and results
- Understanding trends in data, e.g., “Why do we observe a decline in customer satisfaction?”
- Explain “strange” results (statistical outliers) and contradictions
Sampling in qualitative research
- Purposive sampling: Theory-driven.
- Small samples: Contextual depth.
- Evolves: Participants suggest others.
Depth interviews
- Objective: Understand experiences, beliefs, perceptions.
- Formal: Usually >1 hour.
- Core activity: Standalone or combined.
Types of qualitative data
- Archival data: Written, visual materials.
- Interview data: Transcripts.
- Observational data: Field notes, videos.
Projective methods
- Goal: Indirectly reveal feelings.
- Examples: Word association, sentence completion, symbolic matching.
Focus groups
- Participants: 6-12, homogeneous.
- Setting: Relaxed, informal.
- Strength: Interaction adds richness.
Qualitative data analysis
- Coding: Categorization for patterns.
- Iterative: Start early, refine codes.
- Tools: NVivo, Atlas.ti, MAXQDA.
Linking qualitative and quantitative data
- Qualitative: Conceptual development.
- Quantitative: Representative sampling.
- Together: Clarify and validate findings.
Applications of LLMs in qualitative research
- Automated interviewing: Scalable, reduces bias.
- Sentiment analysis: Nuance detection.
- Synthetic data generation: Privacy-compliant.
Key Drivers of the Current AI Revolution
- Massive increase in the amount of data available worldwide
- Technological progress in GPUs
- Development of the transformer architecture: 135,017 citations
➢ Of these, citations in the top 50 management journals: 6
Basic Principle of Large Language Models
- Tokenization – The input sentence is split into smaller parts (tokens), such as words or characters.
- Embeddings – Each token is converted into a numerical vector, where similar words have closer values.
- Sequence Processing – The model processes the sequence of embeddings to understand the context.
- Prediction – The model predicts the next word based on probabilities, generating meaningful text.
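The four steps above can be sketched in miniature. This is a toy illustration, not a real LLM: the whitespace tokenizer, one-hot "embeddings", and next-token probability table are all invented for the example.

```python
text = "the cat sat on the mat"

# 1. Tokenization: split the input into tokens (here: whitespace words;
#    real LLMs use subword tokenizers).
tokens = text.split()

# 2. Embeddings: map each token to a numerical vector
#    (toy one-hot vectors; real embeddings are dense and learned).
vocab = sorted(set(tokens))
embeddings = {tok: [float(i == j) for j in range(len(vocab))]
              for i, tok in enumerate(vocab)}
vectors = [embeddings[t] for t in tokens]

# 3. Sequence processing: a real model (e.g., a transformer) would now
#    contextualize these vectors; here we only preserve their order.

# 4. Prediction: pick the next token from a (made-up) probability table.
next_token_probs = {"quietly": 0.6, "again": 0.3, "today": 0.1}
prediction = max(next_token_probs, key=next_token_probs.get)
```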
LLM and Temperature
In the context of Large Language Models (LLMs), temperature is a parameter that controls the randomness of predictions when generating text.
🔥 How Temperature Works
* It adjusts how confident or creative the model is when choosing the next word.
* Affects the probability distribution of the next token in a sequence.
📊 Temperature Settings:
* Low temperature (e.g., 0.2) → The model is more deterministic and picks the most likely words (useful for factual responses).
* High temperature (e.g., 1.2) → The model becomes more creative, selecting less likely words more often (useful for storytelling or brainstorming).
* Temperature = 0 → The model will always choose the highest probability word, making responses predictable.
🎯 Example:
If the model is predicting the next word in “It was a dark and…”:
* Temperature = 0.2 → “stormy”
* Temperature = 1.0 → “mysterious” / “gloomy” / “cold”
* Temperature = 1.5 → “exciting” / “adventurous” / “peculiar”
💡 Lower temperatures = better for accuracy, higher temperatures = better for creativity!
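A minimal sketch of how temperature reshapes the next-token distribution, assuming hypothetical logits for the example sentence (a real model applies the same scaling before the softmax over its full vocabulary):

```python
import math
import random

def sample_with_temperature(logits, temperature):
    """Sample a token after dividing logits by the temperature.

    Temperature = 0 is treated as greedy decoding (always the argmax);
    low values sharpen the distribution, high values flatten it.
    """
    if temperature == 0:
        return max(logits, key=logits.get)  # deterministic choice
    scaled = {tok: l / temperature for tok, l in logits.items()}
    m = max(scaled.values())  # subtract max for numerical stability
    exps = {tok: math.exp(s - m) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

# Hypothetical logits for the next word after "It was a dark and ..."
logits = {"stormy": 3.0, "gloomy": 1.5, "mysterious": 1.2, "peculiar": 0.2}
```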
Stochastic Parrots
- Training Data Bias: LLMs inherit biases from datasets (e.g., gender and demographic imbalances).
- Evaluation Challenges: Bias is subtle and requires careful analysis.
- Automated Bias: Human biases get amplified in AI-generated text.
- Marginalization Risks: LLMs may disadvantage certain groups.
- Illusion of Understanding: LLMs don’t “think”—they just predict words probabilistically.
Outcome & Impact:
* Sparked global debate on AI ethics, bias, and corporate influence in research.
* Raised concerns about Google’s commitment to responsible AI.
* Led to more focus on transparency and fairness in AI development.
Multimodal AI
Multimodal AI refers to artificial intelligence systems that can process and understand multiple types (or “modes”) of data, such as text, images, audio, and video. This makes them more advanced than traditional AI, which typically focuses on a single modality (e.g., just text or just images).
Case study: Customer Value Prediction with Multimodal Data
Product descriptions + Product images + Transaction data -> AI (Transformer) -> Customer value
Case Study: Predicting Performance for LinkedIn Posts
Text of a LinkedIn post + Picture of the post -> Embeddings + Machine learning -> predictions:
* # Likes
* # Clicks
* # Shares
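One way to sketch the "embeddings + machine learning" step is a hashing bag-of-words vector fed into a linear model. Everything here is invented for illustration; a real pipeline would use learned text and image embeddings and a regressor trained on historical post performance.

```python
def embed(text, dim=8):
    """Toy 'embedding': hash each word into one of `dim` buckets."""
    vec = [0.0] * dim
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0  # hashing trick: word -> bucket count
    return vec

# Placeholder weights; in practice these would be fitted to past # likes.
WEIGHTS = [0.5] * 8

def predict_likes(post_text):
    """Linear model over the toy embedding."""
    return sum(w * x for w, x in zip(WEIGHTS, embed(post_text)))
```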
AI and Customer Service
➢ Replaceable activities of employees in customer service: 63%-80% (own calculations)
Applications of LLMs in Qualitative Research
Automated Interviewing
* Concept: LLMs conduct initial screenings or gather preliminary data through natural language interactions.
* Advantages: Scalable, time-efficient, reduces interviewer bias.
* Challenges: Risk of hallucinations, lack of human intuition and context.
* Example: Using GPT-4 to conduct preliminary interviews for market research on a new product launch.
Applications of LLMs in Qualitative Research
Sentiment Analysis
* Concept: LLMs provide qualitative insights into consumer sentiment beyond simple positive/negative categorization.
* Advantages: Rapid processing of vast data, identification of subtle nuances.
* Challenges: Potential misinterpretation of cultural or contextual subtleties.
* Example: Analyzing social media responses to a new advertising campaign using BERT-based models.
Applications of LLMs in Qualitative Research
Open-ended Survey Responses
* Concept: LLMs process and categorize open-ended survey responses for actionable insights.
* Advantages: Quick theme identification and pattern recognition in consumer feedback.
* Challenges: Potential to miss outlier responses or unique insights.
* Example: Using Claude to analyze customer feedback on a new software feature.
Applications of LLMs in Qualitative Research
Qualitative Coding Assistance
* Concept: LLMs support researchers in coding qualitative data by suggesting potential codes and themes.
* Advantages: Accelerates coding process, promotes consistency across multiple coders.
* Challenges: Risk of overlooking context-specific nuances, over-reliance on suggested codes.
* Example: Using Claude to assist in coding interview transcripts for a grounded theory study.
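As a rough stand-in for LLM-suggested codes, a keyword-based suggester illustrates the idea; the codebook below is invented, and an LLM would propose codes from context rather than fixed keyword lists.

```python
# Hypothetical codebook mapping qualitative codes to trigger keywords.
CODEBOOK = {
    "price concerns": ["expensive", "cost", "price"],
    "ease of use": ["easy", "intuitive", "simple"],
}

def suggest_codes(excerpt):
    """Suggest codes whose keywords appear in the interview excerpt."""
    lowered = excerpt.lower()
    return [code for code, keywords in CODEBOOK.items()
            if any(kw in lowered for kw in keywords)]
```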
Applications of LLMs in Qualitative Research
Synthetic Data Generation
* Concept: LLMs create realistic, diverse synthetic qualitative data for research and testing.
* Advantages: Provides privacy-compliant data, allows exploration of edge cases, enhances dataset diversity.
* Challenges: Ensuring data quality and relevance, potential biases in generated data.
* Example: Generating synthetic customer reviews to test a new sentiment analysis model.
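A template-based generator can stand in for the idea of LLM synthetic data: each review comes paired with its ground-truth label, which is what makes the data useful for testing a sentiment model. Phrases and products below are invented; a real setup would prompt an LLM for varied, realistic text.

```python
import random

random.seed(0)  # reproducible toy dataset

SENTIMENT_PHRASES = {
    "positive": ["love", "really like", "am impressed by"],
    "negative": ["am disappointed by", "dislike", "am frustrated with"],
}
PRODUCTS = ["the app", "the new dashboard", "the export feature"]

def synthetic_review(sentiment):
    """Return one (review_text, ground_truth_label) pair."""
    phrase = random.choice(SENTIMENT_PHRASES[sentiment])
    product = random.choice(PRODUCTS)
    return f"I {phrase} {product}.", sentiment

dataset = [synthetic_review(random.choice(["positive", "negative"]))
           for _ in range(6)]
```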
Applications of LLMs in Qualitative Research
Content Analysis
* Concept: LLMs assist in analyzing large volumes of textual data from various sources.
* Advantages: Efficient processing of diverse content types, identification of complex themes.
* Challenges: Potential oversimplification of nuanced content.
* Example: Analyzing years of annual reports to identify shifts in corporate strategy and culture.
- To gain an in-depth understanding of a phenomenon in its context.
Phenomenon vs Context
- To elaborate or extend existing theory
Goal as qualitative researcher (in academia): elaborate or extend existing theory, by identifying
* new concepts (e.g., x3),
* relationships between concepts (variance theory),
* or new processes that explain how outcomes are generated (process theory).