KG Embeddings Flashcards
(10 cards)
Definition of Knowledge Graph Embeddings
Low-dimensional vector representations of knowledge graph (KG) entities and relations that preserve the graph's semantic and structural information.
Key idea behind KG embeddings
Entities and relations that are semantically similar in the graph should be close in the embedding space.
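A minimal sketch of this geometric intuition. The entity names and vector values below are made up for illustration; a trained model would learn them from the graph:

```python
import numpy as np

# Toy 4-dimensional entity embeddings (hand-picked illustrative values,
# not the output of any trained model).
emb = {
    "aspirin":   np.array([0.9, 0.1, 0.3, 0.0]),
    "ibuprofen": np.array([0.8, 0.2, 0.4, 0.1]),  # similar drug -> nearby vector
    "laptop":    np.array([0.0, 0.9, 0.0, 0.8]),  # unrelated entity -> far away
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(emb["aspirin"], emb["ibuprofen"]))  # high: close in embedding space
print(cosine(emb["aspirin"], emb["laptop"]))     # low: far apart
```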
List four advantages of KG embeddings
1) Computational efficiency (vector ops vs. graph traversal; see the sketch after this list)
2) Integration with neural models
3) Enables link prediction & KG completion
4) Supports similarity search & analogical reasoning
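A sketch of advantage 1: a single matrix-vector product scores every entity at once, with no traversal. The sizes and embeddings are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 100_000, 128   # hypothetical KG: 100k entities, 128-dim embeddings
entity_matrix = rng.normal(size=(N, d)).astype(np.float32)
query = rng.normal(size=d).astype(np.float32)

# One matmul scores all N entities against the query: no graph traversal,
# and it vectorizes well on CPU or GPU.
scores = entity_matrix @ query
top5 = np.argsort(-scores)[:5]   # indices of the 5 highest-scoring entities
print(top5)
```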
Example: Biomedical research assistant KG-RAG enhancement
Uses embeddings to surface multi-hop paths between drugs (ibuprofen inhibits COX, which affects kidney function; metformin requires healthy kidney function) in order to infer potential interactions.
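A minimal sketch of the path-finding step, using hypothetical triples that paraphrase the card. In a real system the embeddings would rank or prune the candidate paths; only the symbolic two-hop enumeration is shown here:

```python
from collections import defaultdict

# Hypothetical triples; a real biomedical KG would be far larger.
triples = [
    ("ibuprofen", "inhibits", "COX"),
    ("COX", "affects", "kidney_function"),
    ("metformin", "requires", "kidney_function"),
]

out_edges = defaultdict(list)
for h, r, t in triples:
    out_edges[h].append((r, t))

def two_hop_paths(source):
    """Enumerate two-hop paths from an entity via simple traversal."""
    for r1, mid in out_edges[source]:
        for r2, dst in out_edges[mid]:
            yield (source, r1, mid, r2, dst)

# ibuprofen --inhibits--> COX --affects--> kidney_function, which metformin
# 'requires': a candidate drug interaction to flag for review.
print(list(two_hop_paths("ibuprofen")))
```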
Example: E-commerce product assistant with KG embeddings
Finds semantically similar products, identifies features contributing to ‘low-light performance’, and reasons about feature relationships (sensor size → ISO performance).
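A sketch of the feature-relationship reasoning as word2vec-style vector arithmetic. All names are hypothetical and the vectors are random stand-ins, so only the mechanics are meaningful here; with trained embeddings the nearest neighbor would be too:

```python
import numpy as np

rng = np.random.default_rng(1)
names = ["camera_A", "camera_B", "large_sensor", "high_iso", "long_zoom"]
emb = {n: rng.normal(size=32) for n in names}  # random stand-ins for learned vectors

# Analogy-style query: what relates to camera_A the way high_iso
# relates to large_sensor?
query = emb["camera_A"] + (emb["high_iso"] - emb["large_sensor"])

def nearest(vec, exclude):
    sims = {n: float(vec @ v) / (np.linalg.norm(vec) * np.linalg.norm(v))
            for n, v in emb.items() if n not in exclude}
    return max(sims, key=sims.get)

print(nearest(query, exclude={"camera_A"}))
```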
How do KG embeddings enable Retrieval-Augmented Generation?
By supporting semantic subgraph retrieval via nearest-neighbor search in embedding space, which supplies precise, structured context to an LLM.
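A minimal sketch of the retrieval step, assuming the user's question has already been embedded into the same space as the entities; the names, triples, and vectors below are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)
entities = ["metformin", "kidney_function", "COX", "ibuprofen"]
entity_emb = {e: rng.normal(size=64) for e in entities}  # stand-in vectors
triples = [("ibuprofen", "inhibits", "COX"),
           ("COX", "affects", "kidney_function"),
           ("metformin", "requires", "kidney_function")]

def retrieve_subgraph(query_vec, k=2):
    """Nearest-neighbor entity lookup, then collect the triples they touch."""
    def sim(v):
        return float(query_vec @ v) / (np.linalg.norm(query_vec) * np.linalg.norm(v))
    top = sorted(entities, key=lambda e: -sim(entity_emb[e]))[:k]
    facts = [tr for tr in triples if tr[0] in top or tr[2] in top]
    # These triples become the structured context prepended to the LLM prompt.
    return "\n".join(f"{h} {r} {t}" for h, r, t in facts)

print(retrieve_subgraph(rng.normal(size=64)))  # synthetic query vector
```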
What downstream tasks are facilitated by KG embeddings?
Link prediction, entity classification, clustering, recommendation, question answering, and analogical reasoning.
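A sketch of one of these tasks, entity classification, treating the embeddings as plain feature vectors. Both the embeddings and the labels are random here, so the reported accuracy is meaningless; the point is only the mechanics:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
# Synthetic stand-ins: 200 entity embeddings with binary type labels
# (e.g., drug vs. disease in a biomedical KG).
X = rng.normal(size=(200, 32))
y = rng.integers(0, 2, size=200)

clf = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
print(clf.score(X[150:], y[150:]))  # held-out accuracy (chance-level on random data)
```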
Name two popular KG embedding model families
Translational distance models (e.g., TransE) and semantic matching models (e.g., DistMult, ComplEx).
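A sketch of the semantic matching family's scoring function, using DistMult's diagonal bilinear form (ComplEx applies the same idea to complex-valued embeddings). The vectors are random stand-ins:

```python
import numpy as np

def distmult_score(h, r, t):
    """Diagonal bilinear score: <h, r, t> = sum_i h_i * r_i * t_i."""
    return float(np.sum(h * r * t))

rng = np.random.default_rng(4)
h, r, t = (rng.normal(size=50) for _ in range(3))  # random stand-in embeddings
print(distmult_score(h, r, t))  # higher score -> triple judged more plausible
```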
How are relations represented in translational models like TransE?
As vectors such that h + r ≈ t; training minimizes the distance ||h + r − t|| between the translated head embedding and the tail embedding.
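A sketch of the TransE score and how it supports link prediction by ranking candidate tails; the embeddings are random stand-ins for trained vectors:

```python
import numpy as np

def transe_score(h, r, t):
    """Negative L2 distance: near zero when h + r lands close to t."""
    return -float(np.linalg.norm(h + r - t))

rng = np.random.default_rng(5)
d, num_entities = 50, 1000
E = rng.normal(size=(num_entities, d))  # entity embeddings (random stand-ins)
r = rng.normal(size=d)                  # one relation embedding

# Link prediction for (h, r, ?): score every entity as a candidate tail.
h = E[0]
scores = -np.linalg.norm(h + r - E, axis=1)  # vectorized over all candidates
best_tails = np.argsort(-scores)[:10]        # top-10 predicted tail entities
print(best_tails)
```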
Why are neural network-based embedding models used?
They capture complex, non-linear interactions and high-order graph features beyond simple translational or bilinear scoring.
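A heavily simplified sketch of a neural scorer: a tiny untrained MLP over the concatenated [h; r; t] vectors, standing in for architectures such as ConvE or NTN. Everything here, weights included, is illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
d, hidden = 50, 64
W1 = rng.normal(size=(3 * d, hidden)) * 0.1  # untrained weights, shapes only
W2 = rng.normal(size=hidden) * 0.1

def neural_score(h, r, t):
    x = np.concatenate([h, r, t])
    act = np.maximum(x @ W1, 0.0)  # ReLU captures non-linear interactions
    return float(act @ W2)

h, r, t = (rng.normal(size=d) for _ in range(3))
print(neural_score(h, r, t))
```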