Guide · Beginner · 2026-05-06

Building Knowledge Graphs from Unstructured Text with Claude

Learn how to extract entities, resolve aliases, and build queryable knowledge graphs from unstructured documents using Claude's structured outputs and entity resolution capabilities.

Quick Answer

This guide shows how to use Claude to extract typed entities and relations from unstructured text, resolve duplicate surface forms into canonical nodes, and assemble a queryable knowledge graph for multi-hop reasoning — all without training data or a database.

Tags: knowledge graph, entity extraction, structured outputs, entity resolution, Claude API


You have a pile of unstructured documents and need to answer questions that span them — "who works with people who worked on project X", "which vendors are connected to this incident". No single document contains the answer. RAG retrieval won't chain the facts for you. You need a knowledge graph: entities as nodes, typed relations as edges, so that multi-hop reasoning becomes graph traversal.

Building one used to mean training a named-entity recognizer on your domain, training a relation classifier, writing entity-resolution heuristics, and maintaining all three as your data shifted. With Claude, each of those stages becomes a prompt.

What You'll Learn

By the end of this guide you will be able to:

  • Use structured outputs to extract typed entities and subject–predicate–object triples from arbitrary text with no training data
  • Apply Claude-driven entity resolution to collapse surface-form variants into canonical nodes, replacing brittle string-similarity heuristics
  • Assemble and query an in-memory graph, and run multi-hop questions by serializing subgraphs back to Claude
  • Measure extraction quality with precision/recall against a gold set and reason about the cost/quality tradeoff between Haiku and Sonnet

Everything runs in memory with no database. The techniques transfer directly to Neo4j, Neptune, or a Postgres adjacency table when you need to scale.

Prerequisites

  • Python 3.11+
  • Anthropic API key (create one in the Anthropic Console)
  • Basic familiarity with graphs (nodes, edges, traversal)
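
Install the packages used throughout the guide before running any of the snippets (no versions are pinned here):

pip install anthropic pydantic networkx requests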

Setup

We use two models. Haiku handles the high-volume, schema-constrained extraction work where speed and cost matter more than nuance. Sonnet handles entity resolution and summarization, where the model needs to weigh conflicting evidence across documents.

import anthropic
from pydantic import BaseModel, Field
from typing import List, Optional

client = anthropic.Anthropic()

Define your extraction schema

class Entity(BaseModel):
    name: str = Field(description="The canonical name of the entity")
    type: str = Field(description="Entity type: PERSON, ORG, LOC, EVENT, etc.")
    description: str = Field(description="One-line description for disambiguation")

class Relation(BaseModel):
    subject: str = Field(description="Name of the subject entity")
    predicate: str = Field(description="Relation type in present tense, e.g. 'works_at'")
    object: str = Field(description="Name of the object entity")

class Extraction(BaseModel):
    entities: List[Entity]
    relations: List[Relation]

Building a Corpus

We need a handful of documents that talk about overlapping entities, so that entity resolution has real work to do. The Apollo program is a good test bed: six short Wikipedia summaries that all mention NASA, the Moon, several astronauts, and a launch vehicle — but each article names them slightly differently.

import requests

def fetch_wikipedia_summary(title):
    url = "https://en.wikipedia.org/api/rest_v1/page/summary/" + title
    response = requests.get(url)
    return response.json()["extract"]

documents = {
    "Apollo 11": fetch_wikipedia_summary("Apollo 11"),
    "Neil Armstrong": fetch_wikipedia_summary("Neil Armstrong"),
    "Buzz Aldrin": fetch_wikipedia_summary("Buzz Aldrin"),
    "NASA": fetch_wikipedia_summary("NASA"),
    "Moon": fetch_wikipedia_summary("Moon"),
    "Saturn V": fetch_wikipedia_summary("Saturn V")
}

We fetch summaries from the Wikipedia REST API rather than full articles to keep token costs low. For a production pipeline you would chunk full documents; the extraction logic is identical.
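
If you do need full-length documents, a minimal chunking sketch might look like the following; the paragraph-based split and the 4,000-character budget are placeholder choices to tune against your own token limits.

def chunk_document(text: str, max_chars: int = 4000) -> list[str]:
    # Naive chunking: accumulate paragraphs until a rough character budget is hit.
    # max_chars is an arbitrary placeholder, not a recommended value.
    chunks, current = [], ""
    for paragraph in text.split("\n\n"):
        if current and len(current) + len(paragraph) > max_chars:
            chunks.append(current.strip())
            current = ""
        current += paragraph + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks

# Each chunk is then extracted separately, with a chunk-qualified document ID
# such as f"{doc_id}#chunk{i}" so provenance survives into the graph.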

Entity and Relation Extraction

Classical NER tags spans of text with labels (PERSON, ORG, LOC). Classical relation extraction then classifies pairs of spans into relation types. Both traditionally require labeled training data per domain.

We collapse both stages into a single Claude call per document. The key is structured outputs: we define the output shape as a Pydantic model and pass it to client.messages.parse(). Claude's response is guaranteed to validate against that schema and comes back as a typed Python object — no regex parsing, no JSON decode errors, no defensive isinstance checks.

def extract_from_document(doc_id: str, text: str) -> Extraction:
    response = client.messages.parse(
        model="claude-3-haiku-20240307",
        max_tokens=4096,
        system="You are a knowledge graph extraction system. Extract all named entities and their relations from the text. Be thorough but precise.",
        messages=[
            {
                "role": "user",
                "content": f"Extract entities and relations from this document (ID: {doc_id}):\n\n{text}"
            }
        ],
        response_model=Extraction
    )
    return response

Run extraction on all documents

all_extractions = {}
for doc_id, text in documents.items():
    all_extractions[doc_id] = extract_from_document(doc_id, text)

Let's look at what was extracted. Notice how the same real-world entity appears under different surface forms across documents — this is the entity resolution problem we solve next.
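
A quick way to inspect the raw mentions (the exact output will vary from run to run):

# Print every extracted entity grouped by document to spot duplicate surface forms
for doc_id, extraction in all_extractions.items():
    print(f"\n{doc_id}:")
    for entity in extraction.entities:
        print(f"  [{entity.type}] {entity.name}: {entity.description}")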

Entity Resolution

The raw extraction gives us overlapping mentions: "NASA" and "National Aeronautics and Space Administration", "Neil Armstrong" and "Armstrong", possibly "the Moon" and "Moon". If we build a graph directly from this, we get a fractured mess where the same concept is split across disconnected nodes.

Traditional approaches use string similarity (edit distance, Jaccard on tokens) plus blocking rules. That works for typos but fails on "Edwin Aldrin" vs "Buzz Aldrin": the given names share no characters at all, yet both refer to the same person.
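
To make the failure concrete, here is token-level Jaccard for that pair, alongside a pair that scores identically but must not merge (the jazz-trumpeter case from the next paragraph):

def token_jaccard(a: str, b: str) -> float:
    # Jaccard similarity on lowercase token sets
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

print(token_jaccard("Edwin Aldrin", "Buzz Aldrin"))        # ~0.33, same person
print(token_jaccard("Neil Armstrong", "Louis Armstrong"))  # ~0.33, different people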

We instead ask Claude to cluster entities of each type, using the one-line descriptions from extraction as disambiguation context. The descriptions matter: "Armstrong — first person to walk on the Moon" and "Armstrong — jazz trumpeter" have the same name but should not merge.

# Target shape for the resolver's structured output
class Cluster(BaseModel):
    canonical_name: str = Field(description="Canonical name chosen for this cluster")
    aliases: List[str] = Field(description="Every surface form that refers to this entity, including the canonical name itself")

def resolve_entities(extractions: dict) -> dict:
    # Collect all unique entity names with their descriptions
    entity_map = {}
    for doc_id, extraction in extractions.items():
        for entity in extraction.entities:
            if entity.name not in entity_map:
                entity_map[entity.name] = {
                    "type": entity.type,
                    "descriptions": []
                }
            entity_map[entity.name]["descriptions"].append(entity.description)
    
    # Group by type for resolution
    from collections import defaultdict
    by_type = defaultdict(list)
    for name, info in entity_map.items():
        by_type[info["type"]].append({"name": name, "descriptions": info["descriptions"]})
    
    alias_to_canonical = {}
    
    for entity_type, entities in by_type.items():
        # Build prompt for Claude to cluster
        entity_list = "\n".join([
            f"- {e['name']}: {'; '.join(e['descriptions'])}"
            for e in entities
        ])
        
        response = client.messages.parse(
            model="claude-3-sonnet-20240229",
            max_tokens=4096,
            system="You are an entity resolution system. Group the following entities that refer to the same real-world thing. Return a list of clusters, where each cluster has a canonical name and a list of aliases.",
            messages=[
                {
                    "role": "user",
                    "content": f"Group these {entity_type} entities by identity:\n\n{entity_list}"
                }
            ],
            response_model=List[Cluster]
        )
        
        for cluster in response:
            for alias in cluster.aliases:
                alias_to_canonical[alias] = cluster.canonical_name
    
    return alias_to_canonical

Two failure modes to watch for. First, any raw name Claude leaves out of every cluster never gets an entry in alias_to_canonical; the build step below falls back to the raw name, so the mention is not lost outright, but it stays split from the canonical node it should have merged into. A production resolver should check that every extracted name lands in exactly one cluster. Second, the resolver can over-merge: a specific mission like "Gemini 12" may get folded into the broader "Project Gemini" because the descriptions overlap. The first fractures nodes, the second loses precision. Both are worth spot-checking before building the graph.
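
A minimal check for the first failure mode, assuming the Extraction objects and the alias map returned by resolve_entities; run it after resolution and review whatever it returns by hand.

def unresolved_names(extractions: dict, alias_map: dict) -> set[str]:
    # Raw names the resolver never placed in any cluster. They still appear in the
    # graph via build_graph's .get(name, name) fallback, but only under their raw form.
    raw_names = {e.name for ex in extractions.values() for e in ex.entities}
    return raw_names - set(alias_map)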

Assembling the Graph

With a clean alias map, we rewrite every relation endpoint to its canonical form and load the result into NetworkX. We use a MultiDiGraph because the same pair of entities can be connected by several distinct predicates (Armstrong both "flies_on" and "commands" Apollo 11), and direction matters ("Armstrong commanded Apollo 11" is not the same edge as "Apollo 11 commanded Armstrong").

Each node carries its type, the source document IDs where it was mentioned, and the original surface forms for traceability.

import networkx as nx

def build_graph(extractions: dict, alias_map: dict) -> nx.MultiDiGraph:
    G = nx.MultiDiGraph()
    for doc_id, extraction in extractions.items():
        for entity in extraction.entities:
            canonical = alias_map.get(entity.name, entity.name)
            if not G.has_node(canonical):
                G.add_node(canonical, type=entity.type, sources=set(), aliases=set())
            G.nodes[canonical]["sources"].add(doc_id)
            G.nodes[canonical]["aliases"].add(entity.name)
        for relation in extraction.relations:
            subj = alias_map.get(relation.subject, relation.subject)
            obj = alias_map.get(relation.object, relation.object)
            G.add_edge(subj, obj, predicate=relation.predicate, source=doc_id)
    return G

alias_to_canonical = resolve_entities(all_extractions)
graph = build_graph(all_extractions, alias_to_canonical)
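
A quick sanity check on the assembled graph; node and edge counts will vary with what Claude extracted on your run.

print(f"{graph.number_of_nodes()} nodes, {graph.number_of_edges()} edges")

# Inspect one node's provenance: source documents and the surface forms that merged into it.
# "NASA" is assumed here to be the canonical name the resolver chose.
if "NASA" in graph:
    print(graph.nodes["NASA"])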

Querying the Graph with Multi-Hop Reasoning

Now for the payoff: answering questions that require traversing multiple relations. The trick is to serialize the graph as triples and hand them back to Claude as context, letting it reason over the connected facts. A graph this small fits in one prompt; for larger graphs you would first select the relevant subgraph (for example, the few-hop neighborhood of the entities mentioned in the question) and serialize only that.

def query_graph(graph: nx.MultiDiGraph, question: str) -> str:
    # Serialize the graph as a list of triples
    triples = []
    for u, v, data in graph.edges(data=True):
        triples.append(f"({u}) --[{data['predicate']}]--> ({v})")
    
    graph_context = "\n".join(triples)
    
    response = client.messages.create(
        model="claude-3-sonnet-20240229",
        max_tokens=1024,
        system="You are a knowledge graph query assistant. Answer questions by reasoning over the provided graph triples. If the answer requires multiple hops, trace the path step by step.",
        messages=[
            {
                "role": "user",
                "content": f"Given this knowledge graph:\n\n{graph_context}\n\nQuestion: {question}"
            }
        ]
    )
    return response.content[0].text

Example: "What did Neil Armstrong command?"

answer = query_graph(graph, "What did Neil Armstrong command?")
print(answer)
# "Neil Armstrong commanded Apollo 11"
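
The same call handles questions that need more than one hop, provided the relevant edges were extracted; the exact wording of the answer will vary.

# Two hops: Neil Armstrong -> Apollo 11 -> Saturn V, if both edges made it into the graph
answer = query_graph(graph, "Which rocket launched the mission that Neil Armstrong commanded?")
print(answer)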

Measuring Quality

To trust your graph in production, you need to measure extraction quality. Build a small gold set of expected entities and relations for a few documents, then compare against Claude's output.

def evaluate_extraction(gold: Extraction, predicted: Extraction) -> dict:
    gold_entities = set((e.name, e.type) for e in gold.entities)
    pred_entities = set((e.name, e.type) for e in predicted.entities)
    
    gold_relations = set((r.subject, r.predicate, r.object) for r in gold.relations)
    pred_relations = set((r.subject, r.predicate, r.object) for r in predicted.relations)
    
    entity_precision = len(gold_entities & pred_entities) / len(pred_entities) if pred_entities else 0
    entity_recall = len(gold_entities & pred_entities) / len(gold_entities) if gold_entities else 0
    
    relation_precision = len(gold_relations & pred_relations) / len(pred_relations) if pred_relations else 0
    relation_recall = len(gold_relations & pred_relations) / len(gold_relations) if gold_relations else 0
    
    return {
        "entity_precision": entity_precision,
        "entity_recall": entity_recall,
        "relation_precision": relation_precision,
        "relation_recall": relation_recall
    }
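
A gold set is just hand-written Extraction objects for a few documents. The example below is illustrative rather than a reference annotation, and because the comparison is exact string matching, normalize predicates and run the alias map over both sides if you want scores that aren't punished by trivial naming differences.

gold_apollo11 = Extraction(
    entities=[
        Entity(name="Apollo 11", type="EVENT", description="First crewed Moon landing mission"),
        Entity(name="Neil Armstrong", type="PERSON", description="Mission commander"),
        Entity(name="NASA", type="ORG", description="US space agency"),
    ],
    relations=[
        Relation(subject="Neil Armstrong", predicate="commands", object="Apollo 11"),
        Relation(subject="NASA", predicate="operates", object="Apollo 11"),
    ],
)

print(evaluate_extraction(gold_apollo11, all_extractions["Apollo 11"]))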

Cost/Quality Tradeoff: Haiku vs Sonnet

In practice, you'll want to benchmark both models on your domain. Haiku is roughly an order of magnitude cheaper per token and noticeably faster, making it the default for high-volume extraction. Sonnet handles ambiguous cases better, especially entity resolution and complex relation types.

A common pattern: use Haiku for initial extraction, then Sonnet for entity resolution and any edges where Haiku's confidence is low.

Key Takeaways

  • No training data needed: Claude's structured outputs let you define entity types and relation schemas on the fly, replacing traditional NER and relation classification pipelines.
  • Entity resolution is critical: Raw extraction produces fractured graphs. Claude-driven clustering with disambiguation context beats string similarity for resolving aliases like "Edwin Aldrin" vs "Buzz Aldrin".
  • Multi-hop reasoning works: By serializing the graph as triples and feeding them back to Claude, you can answer questions that require traversing multiple relations — something RAG alone struggles with.
  • Measure and iterate: Build a small gold set to track precision and recall. Watch for over-merging (losing specificity) and under-merging (fractured nodes).
  • Choose your model wisely: Haiku for high-volume extraction, Sonnet for nuanced resolution. The cost/quality tradeoff is worth benchmarking on your own data.