Advent of Code 2025 in Cypher
Pierre Halftermeyer challenged himself to solve all of Advent of Code 2025 with Cypher only. Day 11 (linked above) was a classic graph problem-solving exercise, but other days were equally fun and challenging: Day 4 (transactional loops with a Graph Data Science twist), Day 6 (elegant pure-functional Cypher), and Day 9 (creative graph modelling of non-graph problems). You can see all the challenges with his solutions on GitHub.
Featured Speakers Now Published
NODES AI 2026 is our online conference dedicated to the future of AI with graphs. Join us on April 15 to explore AI, context engineering, and intelligent agents. Tracks include Knowledge Graphs & GraphRAG, Graph Memory & Agents, and Graph + AI in Production. While the Call for Papers is now closed, we have announced the first six featured speakers for NODES AI. There will also be a Road to NODES AI workshop series (details to follow soon). Take a look at the event page and register so you don’t miss anything!
How to Launch Neo4j Fleet Manager and Maximize Its Value
Chris Shelmerdine and Pramod Borkar share the latest on Fleet Manager, which gives teams a unified way to deploy, observe, and manage AuraDB and AuraDS instances. It streamlines operations with centralised monitoring, consistent security policies, and organisation-wide governance – ideal for enterprises running graph workloads at scale.
Overcoming LLM Deficits with Multi-Layered Ontologies
Matthias Buchhorn-Roth argues that LLMs are fundamentally ill-suited for reliable legal advice. He proposes a multi-layered Neo4j architecture – combining normative structure, temporal/versioned law, procedural state machines, and case overlays – implemented as a structure-aware temporal GraphRAG backend with a neuro-symbolic UI. This enables explainable “digital caseworker” assistants that can reason about deadlines, exceptions, and legal causality instead of merely predicting plausible text.
Why Your AI Agent Needs Semantic Tool Discovery
In this article, Dan Starns explains why AI agents require semantic tool discovery when working with extensive collections of MCP tools. Traditional approaches overwhelm LLM prompts with unnecessary schemas and increase token use. By embedding tool metadata and using vector similarity to pick only the most relevant tools for each prompt, developers can cut token usage dramatically while maintaining accuracy and responsiveness in multi-tool agent systems.
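The idea can be illustrated with a minimal sketch: embed each tool's metadata once, then at query time rank tools by vector similarity to the prompt and expose only the top matches to the agent. The tool registry, descriptions, and toy bag-of-words embeddings below are all illustrative assumptions, not the article's implementation – a real system would use an embedding model and a vector index.

```python
# Hedged sketch of semantic tool discovery: instead of sending every
# tool schema to the LLM, embed tool metadata and select only the
# top-k tools most similar to the user prompt.
# Toy bag-of-words vectors stand in for a real embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical tool registry: tool name -> metadata description.
TOOLS = {
    "get_weather": "fetch current weather forecast for a city",
    "send_email": "send an email message to a recipient",
    "query_graph": "run a cypher query against a neo4j graph database",
    "create_invoice": "create a billing invoice for a customer",
}
TOOL_VECTORS = {name: embed(desc) for name, desc in TOOLS.items()}

def discover_tools(prompt: str, k: int = 2) -> list[str]:
    """Return the k tool names whose metadata best matches the prompt."""
    pv = embed(prompt)
    ranked = sorted(TOOLS, key=lambda n: cosine(pv, TOOL_VECTORS[n]),
                    reverse=True)
    return ranked[:k]

print(discover_tools("run a cypher query to find shortest paths in neo4j"))
```

Only the selected tools' schemas are then included in the LLM prompt, which is where the token savings come from: the agent sees two schemas per call rather than the whole registry.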