KG Embeddings Flashcards

(10 cards)

Card 1
Q: Definition of Knowledge Graph Embeddings
A: Low-dimensional vector representations of KG entities and relations that preserve semantic and structural information.

Card 2
Q: Key idea behind KG embeddings
A: Entities and relations that are semantically similar in the graph should be close in the embedding space.

Card 3
Q: List four advantages of KG embeddings
A:
1) Computational efficiency (vector operations instead of graph traversal)
2) Direct integration with neural models
3) Enables link prediction and KG completion
4) Supports similarity search and analogical reasoning (sketched below)

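A minimal sketch of advantages 1 and 4: once entities live in a vector space, similarity search is a single matrix operation rather than a graph traversal. The entity names and the toy 4-dimensional embeddings below are hypothetical, chosen only to make the example run.

```python
import numpy as np

# Toy entity embeddings (hypothetical, hand-made 4-d vectors).
entities = ["ibuprofen", "aspirin", "metformin"]
E = np.array([
    [0.9, 0.1, 0.3, 0.0],   # ibuprofen
    [0.8, 0.2, 0.3, 0.1],   # aspirin   (an NSAID, so close to ibuprofen)
    [0.1, 0.9, 0.0, 0.7],   # metformin (a different drug class)
])

def most_similar(name: str) -> str:
    """Cosine similarity against every entity in one vectorized step."""
    q = E[entities.index(name)]
    sims = E @ q / (np.linalg.norm(E, axis=1) * np.linalg.norm(q))
    sims[entities.index(name)] = -np.inf   # exclude the query itself
    return entities[int(np.argmax(sims))]

print(most_similar("ibuprofen"))  # -> "aspirin"
```
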
Card 4
Q: Example: Biomedical research assistant KG-RAG enhancement
A: Uses embeddings to find connecting paths between drugs (ibuprofen inhibits COX, which affects kidney function; metformin requires healthy kidneys for clearance) and infer potential interactions.

Card 5
Q: Example: E-commerce product assistant with KG embeddings
A: Finds semantically similar products, identifies features contributing to ‘low-light performance’, and reasons about feature relationships (larger sensor size → better ISO performance).

Card 6
Q: How do KG embeddings enable Retrieval-Augmented Generation?
A: By enabling semantic subgraph retrieval: a nearest-neighbor search in embedding space finds the relevant entities, and their surrounding triples provide precise structured context to an LLM.

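A hedged sketch of that retrieval step, assuming a precomputed entity-embedding table and a small triple store. The triples, vectors, and the stand-in query vector are all toy placeholders, not a real pipeline or API.

```python
import numpy as np

# Hypothetical KG triples and a matching entity-embedding table.
triples = [
    ("ibuprofen", "inhibits", "COX"),
    ("COX", "protects", "kidney"),
    ("metformin", "cleared_by", "kidney"),
]
entity_emb = {  # toy 3-d vectors; a real system would train these
    "ibuprofen": np.array([0.9, 0.1, 0.2]),
    "COX":       np.array([0.8, 0.3, 0.1]),
    "metformin": np.array([0.1, 0.8, 0.9]),
    "kidney":    np.array([0.2, 0.7, 0.8]),
}

def retrieve_subgraph(query_vec, k=2):
    """Nearest-neighbor search over entity embeddings, then pull
    every triple that touches one of the top-k entities."""
    names = list(entity_emb)
    M = np.stack([entity_emb[n] for n in names])
    sims = M @ query_vec / (np.linalg.norm(M, axis=1) * np.linalg.norm(query_vec))
    top = {names[i] for i in np.argsort(-sims)[:k]}
    return [t for t in triples if t[0] in top or t[2] in top]

# The retrieved triples become structured context in the LLM prompt.
query_vec = np.array([0.15, 0.75, 0.85])  # stand-in for an embedded user question
print(retrieve_subgraph(query_vec))
```
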
Card 7
Q: What downstream tasks are facilitated by KG embeddings?
A: Link prediction, entity classification, clustering, recommendation, question answering, and analogical reasoning.

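A sketch of the first of these, link prediction: given (head, relation, ?), score every entity as a candidate tail and rank. The 2-d vectors below are hand-crafted so the example works; real embeddings would be learned, and real systems also filter out already-known triples.

```python
import numpy as np

# Hand-crafted toy embeddings: one shared offset links both capital pairs.
E = {
    "paris":   np.array([0.0, 0.0]),
    "france":  np.array([1.0, 1.0]),
    "berlin":  np.array([2.0, 0.0]),
    "germany": np.array([3.0, 1.0]),
}
R = {"capital_of": np.array([1.0, 1.0])}

def score(h, r, t):
    """TransE-style plausibility: negative distance of h + r from t."""
    return -np.linalg.norm(E[h] + R[r] - E[t])

def predict_tail(h, r):
    """Link prediction / KG completion: rank every entity as candidate tail."""
    return sorted(E, key=lambda t: score(h, r, t), reverse=True)

print(predict_tail("paris", "capital_of"))  # "france" ranks first
```
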
Card 8
Q: Name two popular KG embedding model families
A: Translational distance models (e.g., TransE) and semantic matching models (e.g., DistMult, ComplEx).

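A sketch contrasting the two families' scoring functions on a single triple (h, r, t); the vectors are arbitrary placeholders. TransE scores by translational distance, DistMult by a bilinear element-wise product, and ComplEx applies the DistMult form in complex space, which makes the score asymmetric in h and t.

```python
import numpy as np

h, r, t = np.array([0.2, 0.5]), np.array([0.7, -0.1]), np.array([0.9, 0.4])

# Translational distance family: plausible triples have h + r close to t.
transe_score = -np.linalg.norm(h + r - t)

# Semantic matching family: plausible triples have a high bilinear score.
distmult_score = np.sum(h * r * t)          # symmetric in h and t

# ComplEx: DistMult over complex vectors; Re(<h, r, conj(t)>) is
# asymmetric, so it can distinguish (h, r, t) from (t, r, h).
hc, rc, tc = h + 0.1j, r + 0.2j, t + 0.3j   # toy complex embeddings
complex_score = np.real(np.sum(hc * rc * np.conj(tc)))

print(transe_score, distmult_score, complex_score)
```
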
Card 9
Q: How are relations represented in translational models like TransE?
A: As translation vectors satisfying head + relation ≈ tail; training minimizes the distance between (head + relation) and the tail embedding for true triples.

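A minimal sketch of that training objective: a margin ranking loss that pulls a true triple's distance down while pushing a corrupted (negative-sampled) triple's distance up. The dimension, margin, and learning rate are arbitrary choices for illustration.

```python
import numpy as np

dim, margin, lr = 4, 1.0, 0.1
rng = np.random.default_rng(42)
h, r, t = (rng.normal(size=dim) for _ in range(3))   # true triple (h, r, t)
t_neg = rng.normal(size=dim)                         # corrupted tail (negative sample)

for step in range(200):
    d_pos = h + r - t        # want ||d_pos|| small
    d_neg = h + r - t_neg    # want ||d_neg|| large
    loss = max(0.0, margin + np.linalg.norm(d_pos) - np.linalg.norm(d_neg))
    if loss == 0.0:
        break
    # Gradient of the hinge w.r.t. each embedding (unit-vector directions).
    g_pos = d_pos / np.linalg.norm(d_pos)
    g_neg = d_neg / np.linalg.norm(d_neg)
    h     -= lr * (g_pos - g_neg)
    r     -= lr * (g_pos - g_neg)
    t     += lr * g_pos
    t_neg -= lr * g_neg

print(np.linalg.norm(h + r - t), np.linalg.norm(h + r - t_neg))  # small vs large
```
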
Card 10
Q: Why are neural network-based embedding models used?
A: They capture complex, non-linear interactions and higher-order graph features beyond simple translational or bilinear scoring.

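A sketch of the idea behind neural scorers (in the spirit of MLP- or convolution-based models such as ConvE, though far simpler): concatenate the triple's embeddings and pass them through a small non-linear network instead of computing a fixed algebraic score. The weights here are untrained toy values, purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, hidden = 4, 8
W1 = rng.normal(size=(hidden, 3 * dim))   # first layer over [h; r; t]
W2 = rng.normal(size=hidden)              # output layer -> scalar score

def neural_score(h, r, t):
    """Non-linear scorer: unlike h + r - t or h*r*t, the hidden layer
    can model interactions between arbitrary embedding dimensions."""
    x = np.concatenate([h, r, t])
    return float(W2 @ np.maximum(0.0, W1 @ x))   # ReLU MLP

h, r, t = (rng.normal(size=dim) for _ in range(3))
print(neural_score(h, r, t))
```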