AI Landscape Flashcards

(76 cards)

1
Q

What is the primary purpose of model studios like Azure AI Studio and Amazon Bedrock?

A

They provide user interfaces for experimenting with foundation models, prompt engineering, and fine-tuning models during development.

2
Q

Where do model studios fit in the AI stack?

A

They sit in the model development and inference layer, enabling prototyping and early-stage testing of LLM-based applications.

3
Q

How does Azure AI Studio differ from Amazon Bedrock?

A

Azure AI Studio is tightly integrated with Azure OpenAI Service and the broader Azure ecosystem, while Bedrock provides access to multiple third-party models (e.g., Anthropic, AI21 Labs, Cohere) alongside Amazon's own Titan models on AWS.

4
Q

What is a major limitation of using model studios for production?

A

They often lack robust orchestration, observability, or deployment workflows—requiring other tools for enterprise-scale solutions.

5
Q

What are common features across most model studios?

A

Prompt playgrounds, fine-tuning UIs, model hosting endpoints, and integrations with cloud storage and APIs.

6
Q

Why is vendor lock-in a concern when using model studios?

A

Because workflows and code may become tightly coupled to a specific provider, making switching more difficult later.

7
Q

What advantage does Vertesia offer over model studios?

A

Vertesia abstracts away the model provider layer, letting you switch LLMs easily without changing application logic.

8
Q

Can Vertesia integrate with model studios like Bedrock or Azure AI Studio?

A

Yes. Vertesia can consume outputs from model studio APIs, though it replaces their orchestration and monitoring features.

9
Q

How do model studios support fine-tuning?

A

They offer tools and APIs to fine-tune base models using private data for more domain-specific performance.

10
Q

Who are typical users of model studios?

A

ML engineers, developers, and enterprise data teams experimenting with LLMs or developing early-stage applications.

11
Q

What is a Prompt Playground?

A

It’s a visual interface to test prompts and immediately see how the model responds.

12
Q

Which studio integrates with Microsoft Teams and Excel?

A

Azure AI Studio, due to its tight connection with the Microsoft ecosystem.

13
Q

What is the core offering of IBM watsonx.ai?

A

Access to IBM’s Granite foundation models and governance tools, alongside open-source model support.

14
Q

How does Vertex AI Studio handle retrieval-augmented generation (RAG)?

A

It offers native grounding and search tools that retrieve documents and feed them to models such as Gemini (and earlier PaLM) for contextualized answers.

15
Q

Why might a customer choose Vertesia over a model studio?

A

Vertesia unifies prompt management, observability, orchestration, and multi-model flexibility into one solution, avoiding the need for multiple point tools.

16
Q

What is the goal of prompt management tools?

A

To help teams version, test, evaluate, and monitor prompts used in LLM applications, often across environments.

17
Q

What are examples of popular prompt management tools?

A

PromptLayer, PromptOps, and Humanloop.

18
Q

Why is prompt versioning important?

A

It allows teams to track changes in prompts over time, roll back ineffective versions, and measure performance impact.
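
As a concrete illustration of the versioning idea, here is a minimal in-memory sketch using only the Python standard library; PromptRegistry and PromptVersion are hypothetical names for this card, not any vendor's actual API.

```python
# Minimal sketch of prompt versioning: store each revision, roll back by version id.
# All names here (PromptVersion, PromptRegistry) are illustrative, not a real product API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PromptVersion:
    version: int
    template: str
    created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

class PromptRegistry:
    def __init__(self):
        self._versions: list[PromptVersion] = []

    def publish(self, template: str) -> PromptVersion:
        v = PromptVersion(version=len(self._versions) + 1, template=template)
        self._versions.append(v)
        return v

    def latest(self) -> PromptVersion:
        return self._versions[-1]

    def rollback(self, version: int) -> PromptVersion:
        # Re-publish an older template as the newest version, preserving history.
        old = next(v for v in self._versions if v.version == version)
        return self.publish(old.template)

registry = PromptRegistry()
registry.publish("Summarize the following document:\n{document}")
registry.publish("Summarize the document below in three bullet points:\n{document}")
registry.rollback(1)          # v1's template becomes v3, the active prompt
print(registry.latest().version, registry.latest().template)
```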

19
Q

How does PromptOps differ from PromptLayer?

A

PromptOps is more focused on enterprise observability and prompt testing, while PromptLayer emphasizes lightweight tracking and logging.

20
Q

Where do prompt management tools sit in the AI stack?

A

They sit at the orchestration and monitoring layer, focusing on the interface between business logic and LLM interaction.

21
Q

How does Vertesia handle prompt management compared to standalone tools?

A

Vertesia includes native prompt versioning, structured testing, and evaluation—eliminating the need for separate tools.

22
Q

Can Vertesia integrate with third-party prompt tools?

A

Yes, but it often makes them redundant since prompt workflows are native to Vertesia’s orchestration engine.

23
Q

Who typically uses prompt management tools?

A

Prompt engineers, product managers, and developers building LLM-powered applications at scale.

24
Q

What is prompt evaluation in this context?

A

The process of systematically testing prompts using metrics like token usage, accuracy, latency, and user feedback.
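
A minimal sketch of such an evaluation loop, assuming a placeholder call_model() in place of a real provider client; the pass/fail check and the metrics are deliberately simple.

```python
# Sketch of a prompt evaluation loop: run test cases, record latency and a crude accuracy check.
# call_model() is a placeholder for a real provider call; metrics here are deliberately simple.
import time

def call_model(prompt: str) -> str:
    # Placeholder: in practice this would hit OpenAI, Anthropic, etc.
    return "Paris is the capital of France."

test_cases = [
    {"prompt": "What is the capital of France?", "expected": "Paris"},
    {"prompt": "What is the capital of Japan?",  "expected": "Tokyo"},
]

results = []
for case in test_cases:
    start = time.perf_counter()
    output = call_model(case["prompt"])
    latency_ms = (time.perf_counter() - start) * 1000
    results.append({
        "prompt": case["prompt"],
        "passed": case["expected"].lower() in output.lower(),
        "latency_ms": round(latency_ms, 1),
    })

accuracy = sum(r["passed"] for r in results) / len(results)
print(f"accuracy={accuracy:.0%}", results)
```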

25
What common features do prompt management platforms provide?
Version control, logging, A/B testing, evaluation dashboards, team collaboration, and performance analytics.
26
What’s a key benefit of Humanloop?
It allows real-time prompt updates and model selection during development, with a user-friendly UI and built-in observability.
27
What is one limitation of using only prompt management tools without orchestration?
They don’t control flow logic, state, or integration with APIs—only the interaction with LLMs.
28
How does prompt monitoring help in production?
It helps detect prompt drift, broken logic, or sudden changes in output quality due to model updates.
29
What does 'prompt drift' mean?
When a model’s response quality changes over time even though the prompt remains the same—often due to upstream model changes.
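
One simple way to operationalize drift monitoring is to re-run a fixed prompt suite on a schedule and compare the pass rate against a stored baseline. The sketch below is illustrative only; score_suite, check_drift, and the 10-percentage-point threshold are assumptions, not a standard.

```python
# Sketch of drift monitoring: re-run a fixed prompt suite, compare pass rate to a stored baseline.
# score_suite() and the 10-point threshold are illustrative choices, not a standard.

def score_suite(run_suite) -> float:
    """Return the pass rate (0-1) for a suite of prompt checks."""
    results = run_suite()
    return sum(results) / len(results)

def check_drift(baseline_rate: float, current_rate: float, threshold: float = 0.10) -> bool:
    """Flag drift when quality drops by more than `threshold` versus the baseline."""
    return (baseline_rate - current_rate) > threshold

# Example: baseline measured last month vs. today's re-run of the same prompts.
baseline = 0.92
current = score_suite(lambda: [True, True, False, True, True, False, True, True, True, False])
if check_drift(baseline, current):
    print(f"Possible prompt drift: pass rate fell from {baseline:.0%} to {current:.0%}")
```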
30
Why would a company consolidate prompt management into Vertesia instead of using a standalone tool?
To streamline workflows, reduce tool sprawl, and unify orchestration, monitoring, and model routing in one platform.
31
What is the purpose of a vector database?
To store and retrieve high-dimensional vectors (embeddings) for similarity search, often used in semantic retrieval and RAG.
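
A toy in-memory version of that idea, using brute-force cosine similarity over random stand-in vectors; production databases like Pinecone, Weaviate, and Qdrant use approximate nearest neighbor indexes instead, but the retrieval contract is the same.

```python
# Toy illustration of what a vector database does: store embeddings, return nearest neighbours.
# Embeddings here are random stand-ins; real systems use model-generated vectors and ANN indexes.
import numpy as np

rng = np.random.default_rng(0)
doc_ids = ["doc_a", "doc_b", "doc_c"]
doc_vectors = rng.normal(size=(3, 768))          # pretend these came from an embedding model

def cosine_top_k(query: np.ndarray, vectors: np.ndarray, k: int = 2):
    norms = np.linalg.norm(vectors, axis=1) * np.linalg.norm(query)
    scores = vectors @ query / norms
    top = np.argsort(scores)[::-1][:k]
    return [(doc_ids[i], float(scores[i])) for i in top]

query_vector = rng.normal(size=768)
print(cosine_top_k(query_vector, doc_vectors))
```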
32
What are examples of popular vector databases?
Pinecone, Weaviate, and Qdrant.
33
Where do vector databases fit in the AI stack?
In the retrieval layer, bridging unstructured data with LLMs via similarity search based on embeddings.
34
How does Pinecone differentiate itself?
Pinecone focuses on fully managed infrastructure, low-latency retrieval, and hybrid search with metadata filtering.
35
What is Weaviate’s key strength?
It is open source and schema-flexible, and ships with built-in vectorizer modules and hybrid search, supporting multiple embedding models.
36
What does 'hybrid search' mean in vector databases?
A combination of semantic (vector) and keyword (symbolic) search to improve accuracy and relevance.
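
A rough sketch of how the two signals can be blended with a weighting factor; real engines use their own fusion schemes (for example reciprocal rank fusion), so the min-max normalization and alpha weight below are illustrative assumptions.

```python
# Sketch of hybrid search scoring: blend keyword and vector scores with a weight alpha.
# Real engines fuse results differently (e.g. reciprocal rank fusion); this just shows the idea.

def normalize(scores: dict) -> dict:
    lo, hi = min(scores.values()), max(scores.values())
    return {k: (v - lo) / (hi - lo) if hi > lo else 0.0 for k, v in scores.items()}

def hybrid_rank(keyword_scores: dict, vector_scores: dict, alpha: float = 0.5) -> list:
    kw, vec = normalize(keyword_scores), normalize(vector_scores)
    docs = set(kw) | set(vec)
    blended = {d: alpha * vec.get(d, 0.0) + (1 - alpha) * kw.get(d, 0.0) for d in docs}
    return sorted(blended.items(), key=lambda item: item[1], reverse=True)

keyword_scores = {"doc_a": 12.0, "doc_b": 3.5, "doc_c": 8.1}    # e.g. BM25 scores
vector_scores  = {"doc_a": 0.61, "doc_b": 0.88, "doc_c": 0.47}  # e.g. cosine similarities
print(hybrid_rank(keyword_scores, vector_scores, alpha=0.6))
```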
37
How do vector DBs interact with LLMs?
They provide relevant context to prompts by retrieving semantically similar documents from a knowledge base.
38
Can Vertesia integrate with external vector databases?
Yes. Vertesia can connect to Pinecone, Weaviate, or others as external retrieval engines in its RAG workflows.
39
Does Vertesia include its own vector search engine?
Vertesia includes a basic internal vector index but is often configured to use external vector databases for scalability and flexibility.
40
What role do embeddings play in vector databases?
Text or other content is converted into embeddings (vectors), which are stored and searched by proximity to a query vector.
41
Who typically makes decisions about vector database adoption?
Data scientists, ML engineers, and software architects working on search or RAG-based systems.
42
What is a common objection to vector databases?
Complexity of managing infrastructure or choosing the right database among many emerging options.
43
How does Vertesia abstract vector DB complexity?
It allows developers to configure retrieval behavior without needing to write low-level search code.
44
How is Qdrant different from Pinecone or Weaviate?
Qdrant is open source, offers efficient approximate nearest neighbor (ANN) search, and exposes both REST and gRPC APIs.
45
Why is vector search critical in RAG (retrieval-augmented generation)?
It brings relevant, factual context into the prompt, helping the LLM generate more accurate, grounded responses.
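
A minimal sketch of that prompt-assembly step, with retrieve() standing in for a vector-database query; the final prompt string is what would then be sent to the chosen foundation model.

```python
# Sketch of the RAG step this card describes: retrieved passages are placed into the prompt
# so the model answers from provided context. retrieve() and the passages are placeholders.

def retrieve(query: str, k: int = 3) -> list[str]:
    # Placeholder for a vector-database query (Pinecone, Weaviate, Qdrant, ...).
    return ["Passage about pricing tiers.", "Passage about the refund policy.", "Passage about SLAs."][:k]

def build_rag_prompt(question: str, passages: list[str]) -> str:
    context = "\n\n".join(f"[{i+1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

prompt = build_rag_prompt("What is the refund policy?", retrieve("refund policy"))
print(prompt)   # this prompt would then be sent to the model
```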
46
What is a foundation model provider?
A company that develops and serves large-scale language models (LLMs), often powering downstream AI applications.
47
What are some leading foundation model providers?
OpenAI, Anthropic, Cohere, and Mistral.
48
What distinguishes OpenAI in this space?
OpenAI offers advanced GPT models (e.g., GPT-4) with strong reasoning, coding, and multilingual capabilities, accessible via its own API and through Azure OpenAI Service.
49
What is Anthropic known for?
Developing the Claude family of models, with a strong emphasis on safety, transparency, and constitutional AI principles.
50
What does Cohere specialize in?
Cohere focuses on retrieval-augmented generation (RAG), custom embedding models, and enterprise NLP tools.
51
What is unique about Mistral?
Mistral releases open-weight, high-performance models (e.g., Mistral 7B, Mixtral) under permissive licenses that can be self-hosted without per-token usage fees, alongside hosted commercial offerings.
52
Where do foundation model providers sit in the AI stack?
At the base layer—providing the core reasoning engine for chatbots, summarization, classification, and more.
53
How does Vertesia use foundation models?
Vertesia integrates foundation models via its virtual LLM architecture, allowing users to route queries to the best-fit model dynamically.
54
Can Vertesia integrate with multiple foundation model providers at once?
Yes. Vertesia’s orchestration layer allows multi-provider support, model switching, and fallback logic across OpenAI, Anthropic, Cohere, and others.
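
A generic illustration of the routing-with-fallback pattern (not Vertesia’s actual API): providers are tried in preference order and the first successful response wins. All provider functions below are simulated stand-ins.

```python
# Generic illustration of multi-provider routing with fallback; this is not Vertesia's actual
# API, just the pattern: try providers in preference order until one returns successfully.
from typing import Callable

def call_openai(prompt: str) -> str:
    raise RuntimeError("simulated outage")        # pretend the primary provider is down

def call_anthropic(prompt: str) -> str:
    return f"(claude) answer to: {prompt}"

def call_cohere(prompt: str) -> str:
    return f"(command) answer to: {prompt}"

PROVIDERS: list[tuple[str, Callable[[str], str]]] = [
    ("openai", call_openai),
    ("anthropic", call_anthropic),
    ("cohere", call_cohere),
]

def complete_with_fallback(prompt: str) -> str:
    errors = []
    for name, fn in PROVIDERS:
        try:
            return fn(prompt)
        except Exception as exc:                  # broad catch is fine for a sketch
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))

print(complete_with_fallback("Summarize this contract clause."))
```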
55
Who typically decides which foundation model to use?
Technical leaders, CIOs, AI architects, and compliance teams, depending on accuracy, latency, cost, and data sensitivity.
56
What is a key risk of relying on a single provider?
Model outages, pricing changes, or data governance issues can affect application reliability and compliance.
57
How does Vertesia mitigate provider lock-in?
By virtualizing models through a unified API, allowing clients to switch providers without rewriting prompt logic or workflows.
58
What are common use cases for foundation model APIs?
Chat interfaces, RAG systems, customer support bots, content generation, summarization, and classification.
59
What pricing model do most foundation model providers use?
Usage-based pricing by token (input + output), with different rates per model tier (e.g., GPT-4 vs GPT-3.5).
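
A small helper showing the arithmetic behind that billing model; the per-1K-token rates in the example are hypothetical placeholders, not any provider’s published prices.

```python
# Usage-based billing arithmetic: input and output tokens are charged at separate rates.
# The rates below are hypothetical placeholders, not any provider's current price list.

def estimate_cost(input_tokens: int, output_tokens: int,
                  rate_in_per_1k: float, rate_out_per_1k: float) -> float:
    return (input_tokens / 1000) * rate_in_per_1k + (output_tokens / 1000) * rate_out_per_1k

# Example: a 1,200-token prompt and a 400-token completion at illustrative rates.
cost = estimate_cost(1200, 400, rate_in_per_1k=0.01, rate_out_per_1k=0.03)
print(f"${cost:.4f}")   # $0.0240
```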
60
Why might a client choose Mistral over OpenAI?
Mistral’s open-weight models are free to self-host, offering more control over data and potentially lower costs for companies with their own infrastructure.
61
How do LLM frameworks like LangChain differ from vector databases like Pinecone?
LLM frameworks orchestrate logic and prompt flows, while vector databases store and retrieve semantically similar content using embeddings.
62
What’s the relationship between LLM frameworks and vector databases?
Frameworks like LangChain often connect to vector databases to implement retrieval-augmented generation (RAG) by inserting relevant context into prompts.
63
What is orchestration in the context of LLM applications?
Orchestration manages the logic of how prompts, models, tools, memory, and retrieval interact to complete a complex task.
64
How does orchestration differ from retrieval?
Retrieval fetches relevant context (e.g., documents or embeddings), while orchestration defines how that context is used across multiple steps or tools.
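
A compact sketch of that distinction: retrieval is just one step in a pipeline, while the orchestrator owns the ordering and the shared state passed between steps. All function names here are illustrative.

```python
# Sketch of orchestration: a pipeline of steps sharing state, where retrieval is one step
# and the orchestrator decides how its output feeds later steps. All names are illustrative.

def retrieve_step(state: dict) -> dict:
    state["context"] = ["clause 4.2 text", "clause 7.1 text"]     # stand-in for vector search
    return state

def draft_step(state: dict) -> dict:
    state["draft"] = f"Draft answer to '{state['question']}' using {len(state['context'])} passages."
    return state

def review_step(state: dict) -> dict:
    state["final"] = state["draft"] + " (reviewed for tone and length)"
    return state

def run_pipeline(question: str) -> str:
    state = {"question": question}
    for step in (retrieve_step, draft_step, review_step):
        state = step(state)                       # orchestration = ordering, state, branching
    return state["final"]

print(run_pipeline("Can we terminate the contract early?"))
```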
65
Where does Vertesia sit in the AI stack compared to other tools?
Vertesia sits in the orchestration and integration layer, connecting models, prompts, tools, data sources, and user workflows.
66
How does Vertesia virtualize foundation models?
By abstracting model providers under a single interface, enabling users to switch or combine models without changing business logic.
67
How is Vertesia different from Hugging Face?
Hugging Face focuses on open-source model access and hosting, while Vertesia focuses on orchestrating model usage and integrating them into business workflows.
68
Why is model abstraction important for enterprise AI?
It avoids vendor lock-in, improves resiliency, and enables multi-model strategies for cost, performance, or compliance needs.
69
How does Vertesia reduce tool sprawl across the AI lifecycle?
It unifies prompt management, orchestration, observability, and model routing—replacing the need for many point tools like LangChain, PromptOps, or external schedulers.
70
How would you summarize Vertesia’s platform in one sentence?
Vertesia provides a unified platform for managing prompts, models, and workflows, enhancing flexibility and reducing tool complexity.
71
How would you explain Vertesia to a company already using Bedrock or Azure AI Studio?
Vertesia builds on those platforms by offering orchestration, evaluation, and multi-model control—allowing them to move from experimentation to production at scale.
72
What is Vertesia’s key value prop over open-source frameworks?
Production readiness, with auditability, structured inputs/outputs, model fallback logic, and enterprise governance.
73
What kind of companies are best suited for Vertesia?
Mid to large enterprises building AI features into products or operations, especially where scale, compliance, or modularity matters.
74
How does Vertesia handle retrieval workflows (RAG)?
It lets users configure custom retrievers, plug in external vector databases, and map retrieved documents into prompt variables.
75
What team roles typically benefit from Vertesia?
Sales engineers, product teams, data engineers, and platform teams responsible for deploying or maintaining LLM-powered solutions.
76
Why is Vertesia well positioned in the AI tooling ecosystem?
It fills the critical gap between raw model access and real-world application delivery—bridging experimentation and enterprise execution.