More AI Flashcards

(19 cards)

1
Q

What is Generative AI?

A

AI systems that can create new content (text, images, code, etc.) by learning patterns from existing data. Unlike traditional AI that classifies or predicts, generative AI creates new outputs.

Use Cases: Content creation (marketing copy, product descriptions), Code generation (GitHub Copilot), Image creation (DALL-E, Midjourney), Text-to-speech (synthetic voices). Business Impact: Can reduce content creation time by 40-70% while maintaining quality.

2
Q

What is an LLM?

A

A Large Language Model: an AI model trained on vast amounts of text data to understand and generate human-like text.

Key Players: GPT-4 (OpenAI), Claude (Anthropic), PaLM (Google), Llama 2 (Meta). Technical Detail: Modern LLMs can range from 7B to over 1T parameters.

3
Q

What is a Transformer in AI?

A

A neural network architecture introduced in 2017 ("Attention Is All You Need") that processes all tokens in a sequence in parallel rather than one at a time.

Technical Components: Self-attention mechanism, Positional encoding, Multi-head attention, Feed-forward networks. Why It Matters: Enabled 10x improvement in training efficiency vs. previous approaches.
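
A minimal sketch of the self-attention step at the heart of the transformer, using only NumPy; the dimensions, random weights, and single attention head are illustrative assumptions, not a faithful reimplementation of any particular model.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence of token vectors."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v            # project tokens to queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])        # how much each token should attend to every other
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v                             # each output row mixes information from all tokens

# Toy example: 4 tokens with 8-dimensional embeddings (sizes chosen only for illustration).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)      # -> (4, 8)
```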

4
Q

What are the main types of AI model training?

A

Three key approaches: Pre-training (learning general language patterns from massive unlabeled text), Fine-tuning (adapting a pre-trained model to a specific task or domain), and RLHF (Reinforcement Learning from Human Feedback, which aligns model outputs with human preferences).

Cost Implications: Pre-training can cost $1M-$10M, fine-tuning starts at $10K.

5
Q

What is Prompt Engineering?

A

The art and science of crafting effective inputs to get desired outputs from AI models.

Best Practices: Be specific and detailed, Use examples (few-shot learning), Include context and constraints, Structure complex tasks. ROI: Good prompt engineering can reduce token usage by 30-50%.
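
A minimal sketch of the few-shot pattern mentioned above; the classifier task, labels, and example tickets are made up for illustration, and the resulting string would be sent to whichever model or API you use.

```python
def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt: instructions, labeled examples, then the new input."""
    lines = [
        "You are a support-ticket classifier.",
        "Label each ticket as one of: Billing, Technical, Account.",
        "",
    ]
    for text, label in examples:
        lines.append(f"Ticket: {text}\nLabel: {label}\n")
    lines.append(f"Ticket: {query}\nLabel:")
    return "\n".join(lines)

examples = [
    ("I was charged twice this month.", "Billing"),
    ("The app crashes when I upload a file.", "Technical"),
]
print(build_few_shot_prompt(examples, "How do I reset my password?"))
```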

6
Q

What is a Token in LLMs?

A

The basic unit of text that LLMs process.

Technical Details: an average English word is roughly 1.3 tokens; pricing is typically per 1K tokens; common context limits: GPT-4 8K-32K tokens, Claude up to 100K tokens, Llama 2 4K tokens. Business Impact: token counts directly drive operating costs.
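
A small sketch of counting tokens with the tiktoken library (an open-source tokenizer from OpenAI); the price used below is a placeholder assumption, so substitute your model's actual tokenizer and rates.

```python
import tiktoken  # pip install tiktoken

encoding = tiktoken.get_encoding("cl100k_base")   # tokenizer used by several OpenAI models
text = "Generative AI can reduce content creation time significantly."
tokens = encoding.encode(text)

price_per_1k_tokens = 0.01  # illustrative USD rate, not a real price list
estimated_cost = len(tokens) / 1000 * price_per_1k_tokens
print(f"{len(tokens)} tokens -> ~${estimated_cost:.4f} for the input side of one call")
```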

7
Q

What is the Context Window?

A

The maximum amount of text an AI model can consider at once.

Competitive Landscape: Claude: 100K tokens, GPT-4: 32K tokens, Llama 2: 4K tokens. Use Case Impact: Larger windows enable Document analysis, Code review, Complex reasoning tasks.

8
Q

What are Embeddings?

A

Numerical representations of text that capture meaning, enabling semantic search and comparison.

Applications: Semantic search, Document clustering, Recommendation systems. Technical Detail: Usually 768-1536 dimensional vectors.
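
A minimal sketch of comparing two embeddings with cosine similarity; the random 768-dimensional vectors stand in for the output of a real embedding model.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity: close to 1.0 means similar meaning, near 0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder vectors; in practice these come from an embedding model or API.
rng = np.random.default_rng(42)
doc_a, doc_b = rng.normal(size=768), rng.normal(size=768)
print(round(cosine_similarity(doc_a, doc_b), 3))
```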

9
Q

What is RAG?

A

Retrieval-Augmented Generation: a technique that grounds LLM responses in a custom knowledge base by retrieving relevant documents and adding them to the prompt.

Implementation Methods: Vector databases (Pinecone, Weaviate), Document chunking, Semantic search. Benefits: Reduced hallucination, Custom knowledge integration, Lower training costs.
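
A minimal end-to-end sketch of the RAG flow (chunk, embed, retrieve, augment the prompt); the toy character-count embedding and the sample knowledge base are stand-ins for a real embedding model and vector database.

```python
import numpy as np

def embed(text):
    """Toy embedding based on character counts; a real system would call an embedding model."""
    vec = np.zeros(128)
    for ch in text.lower():
        vec[ord(ch) % 128] += 1
    return vec / (np.linalg.norm(vec) + 1e-9)

# 1. Chunk the knowledge base (here, one sentence per chunk).
chunks = [
    "Our refund policy allows returns within 30 days.",
    "Support is available Monday through Friday, 9am to 5pm.",
    "Enterprise plans include a dedicated account manager.",
]
index = [(chunk, embed(chunk)) for chunk in chunks]   # 2. Embed and store.

# 3. Retrieve the chunk most similar to the question.
question = "When can I get a refund?"
q_vec = embed(question)
best_chunk = max(index, key=lambda item: float(np.dot(item[1], q_vec)))[0]

# 4. Build the augmented prompt that would be sent to the LLM.
prompt = f"Answer using only this context:\n{best_chunk}\n\nQuestion: {question}"
print(prompt)
```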

10
Q

What are Vector Databases?

A

Specialized databases for storing and querying embeddings.

Key Players: Pinecone, Weaviate, Milvus. Use Cases: Knowledge management, search, recommendation systems.

11
Q

What are the main GenAI deployment options?

A

Three primary approaches: API Services, Cloud Deployment, On-premises.

Cost Considerations: API: Pay-per-use, lowest upfront cost; Cloud: More control, medium cost; On-prem: Highest control, highest cost.

12
Q

What are the key data protection measures?

A

Multiple layers of protection: Data encryption, Access controls, Audit logging, Data residency options, PII detection and redaction.

Compliance: GDPR, HIPAA, SOC 2.
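
A minimal sketch of the PII detection and redaction step listed above, using simple regular expressions; the patterns are illustrative only, and production systems typically use dedicated PII-detection services with much broader coverage.

```python
import re

# Illustrative patterns only; real deployments need far more coverage (names, addresses, etc.).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text):
    """Replace matched PII with placeholders before text is sent to a model or written to logs."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact_pii("Contact John at john.doe@example.com or 555-123-4567."))
```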

13
Q

What safety measures prevent harmful outputs?

A

Multiple safeguards: Content filtering, Toxicity detection, Bias mitigation, Output validation.

Implementation: Both model-level and application-level controls.
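
A minimal sketch of an application-level output-validation gate; the blocked-terms list and length limit are placeholder assumptions, and real systems layer checks like this with model-side filtering and dedicated toxicity classifiers.

```python
BLOCKED_TERMS = {"credit card number", "social security number"}  # illustrative list only

def validate_output(text, max_length=2000):
    """Return (ok, reason) for a model response before it reaches the user."""
    lowered = text.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            return False, f"blocked term: {term}"
    if len(text) > max_length:
        return False, "response too long"
    return True, "ok"

print(validate_output("Here is a summary of your meeting notes."))
```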

14
Q

How do you compare AI models?

A

Key metrics: Capability benchmarks, Cost per token, Context window size, Specialization options, Deployment flexibility.

Example (Claude vs. GPT-4): Claude offers a longer context window and strong reasoning; GPT-4 offers higher overall capability and more integrations.

15
Q

What components make up GenAI TCO?

A

Full cost breakdown: Direct Costs, Indirect Costs.

Direct Costs: API/compute costs, Storage costs, Integration development; Indirect Costs: Prompt engineering, Monitoring and optimization, Training and support. ROI Metrics: Cost per task, time saved, error reduction.
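
A small worked sketch of the cost-per-task ROI arithmetic; every number below (token counts, API rates, labor cost, time saved) is an illustrative assumption used to show the calculation, not a benchmark figure.

```python
# Illustrative inputs -- substitute your own measured values.
input_tokens_per_task = 1500
output_tokens_per_task = 500
price_per_1k_input = 0.01          # assumed USD API rate
price_per_1k_output = 0.03         # assumed USD API rate
minutes_saved_per_task = 12
loaded_labor_cost_per_hour = 60.0  # assumed USD

api_cost_per_task = (input_tokens_per_task / 1000) * price_per_1k_input \
                  + (output_tokens_per_task / 1000) * price_per_1k_output
labor_value_per_task = (minutes_saved_per_task / 60) * loaded_labor_cost_per_hour

print(f"API cost per task:    ${api_cost_per_task:.3f}")
print(f"Labor value per task: ${labor_value_per_task:.2f}")
print(f"Net value per task:   ${labor_value_per_task - api_cost_per_task:.2f}")
```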

16
Q

What are top enterprise GenAI applications?

A

Key applications by department: Sales, Support, HR, Legal.

Success Metrics: Time saved, accuracy rates, cost reduction.

17
Q

How does GenAI adapt to specific industries?

A

Industry-specific considerations: Healthcare, Financial services, Legal, Manufacturing.

Implementation: Usually combines base models with industry-specific RAG.

18
Q

What’s next in GenAI?

A

Key developments: Multimodal models, Improved reasoning capabilities, Lower computational requirements, Better factual accuracy.

Timeline: 12-18 month innovation cycles.

19
Q

How is the GenAI market evolving?

A

Key trends: Consolidation of providers, Specialized vertical solutions, Open-source advancement, Regulatory framework development.

Market Size: Expected to reach $100B+ by 2025.