Your AI Is Working With Half a Brain. You Need the Other Half
We have all seen the headlines that enterprise AI is failing at a high rate. MIT has reported that 95% of GenAI pilots fail, and OpenAI co-founder Andrej Karpathy recently said that true agentic AI could take another 10 years. The underlying challenge is that today's LLMs are far better at guessing the best-sounding answer than at actually distilling the truth.
While disappointing to some, none of this is surprising. IT applications in the enterprise most often come with a high bar for security, privacy, compliance, health and safety, and a host of other controls. When AI can’t meet the correspondingly high bars for accuracy, explainability, security, governance and privacy, those AI systems are bound to languish.
These kinds of failures are actually healthy, a sign that the checks and balances are working. A significant root cause of failure is a fragmented data foundation: Most AI systems simply don't have access to the knowledge and context needed to get things right. LLMs don't have the most up-to-date data, nor have they typically been trained on your own specific enterprise knowledge. Further, neither LLMs nor vector stores offer explicit context, discernment (the capacity to restrict knowledge based on who is accessing the system and for what purpose) or explainability. The good news is that all these issues can be solved with an increasingly common architectural pattern: an AI knowledge layer based on a knowledge graph.
Why Do LLMs Hallucinate?
Yann LeCun, former chief AI scientist at Meta, asserts that LLMs will continue to hallucinate until they embody the four characteristics of intelligent behavior:
- Understanding: Capacity to understand the physical world (to which I would add digital).
- Persistent memory: Ability to remember and retrieve things.
- Reasoning: Ability to reason.
- Planning: Ability to plan.
His assertion is that LLMs fail to meet these characteristics in anything more than a very primitive way. As LeCun puts it: “If you expect a system to become intelligent without having the possibility of doing these [four] things, you’re making a mistake.”
Compare an LLM to the human brain. Right-brain behavior is often seen as creative and impulsive: chock-full of great ideas, lacking in self-reflection and sometimes including ideas that a sane person would never act on. The right brain is great at coming up with new ideas, but it usually lacks understanding, persistent memory, reasoning or planning, much like AI systems today.
Left-brain behavior, by contrast, is associated with detailed understanding, logical reasoning and fact-based memory: the capabilities that tell your brain when a crazy idea is closer to a hallucination than a business plan. A knowledge graph can serve as the left brain in an AI system. It captures the connections, past experiences and, most important, the relationships that help present the LLM with the best choices, informed by knowledge of the past.
We can expand this analogy further:
- Right brain — LLMs (and vectors) do not inhabit the world of discrete, understandable facts that can be directly communicated or explained to humans, or even to other machines.
- LLMs are statistically inferred, opaque word-prediction engines whose behavior — as seemingly amazing as it can be — is entirely based on statistics around (more or less) word frequency and proximity.
- Like the proverbial right brain, LLMs are impulsive, inscrutable, not entirely predictable — and mostly right, but sometimes spectacularly wrong.
- This core part of the AI stack behaves in ways that are mostly functional, sometimes dysfunctional and always opaque.
- Left brain — Knowledge graphs store precise details about the facts most relevant to whatever kinds of decisions need to be made.
- Knowledge graphs also capture the essential relationships between these same facts. Much like their LLM neural network counterparts, the structures used to store and process the data mimic mechanisms inside the brain.
- Data is stored in ways that are understandable by humans, but can also be executed upon by machines.
- They structure knowledge in a way that lends itself to gating via data access controls, providing AI with a sorely missing capacity for discernment (see the sketch after this list).
- While you can’t ask them random questions using arbitrary language constructs in the way you can an LLM, a knowledge graph can provide rich context to an LLM so that it can make a better decision.
- Moreover, graph databases can provide exact, deterministic answers to complex, high-stakes questions, complementing LLMs' creative abilities. Let's not forget that some questions do still have exact answers!
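To make the discernment point concrete, here is a minimal sketch of role-gated knowledge using a toy in-memory graph built with the networkx library. The entities, the roles, the `allowed_roles` attribute and the `visible_subgraph` helper are all illustrative assumptions for this sketch, not any vendor's implementation.

```python
import networkx as nx

# Toy knowledge graph: nodes are entities, edges are typed relationships.
g = nx.DiGraph()
g.add_edge("Acme Corp", "Project Phoenix", rel="SPONSORS")
g.add_edge("Project Phoenix", "Q3 Budget", rel="HAS_BUDGET")

# Each node carries an access-control list (an assumption for this sketch).
g.nodes["Acme Corp"]["allowed_roles"] = {"finance", "exec", "engineering", "sales"}
g.nodes["Project Phoenix"]["allowed_roles"] = {"finance", "exec", "engineering"}
g.nodes["Q3 Budget"]["allowed_roles"] = {"finance", "exec"}

def visible_subgraph(graph, role):
    """Gate knowledge by who is asking: keep only nodes this role may see."""
    allowed = [n for n, d in graph.nodes(data=True)
               if role in d.get("allowed_roles", set())]
    return graph.subgraph(allowed)

# The same question yields different, exact, deterministic answers per role.
for role in ("finance", "sales"):
    print(role, "sees:", sorted(visible_subgraph(g, role).nodes))
```

The point of the sketch is that the gating happens in the knowledge layer, before anything reaches the LLM, so restricted facts can never leak into a generated answer.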
Much as the brain's two hemispheres offer far greater potential when used together, the explicit knowledge and connections available in a knowledge graph can help LLMs provide better answers. They do so by providing rich, specific context as input: more specific details about the objects, relationships and rules involved in any given question, as well as weights resulting from context-based computations (commonly known as "graph algorithms"), which use the emergent shape of the network of knowledge to improve results.
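As a concrete illustration of feeding graph context to an LLM, here is a minimal sketch using networkx. The two-fact graph, the `graph_context` helper and the prompt wording are invented for illustration; in practice the graph would come from your enterprise knowledge layer.

```python
import networkx as nx

# Tiny illustrative graph; the entities and relationships are made up.
g = nx.DiGraph()
g.add_edge("Acme Corp", "Project Phoenix", rel="SPONSORS")
g.add_edge("Project Phoenix", "Q3 Budget", rel="HAS_BUDGET")

def graph_context(graph, entity, hops=1):
    """Serialize the facts around an entity as plain-text triples."""
    nodes = nx.ego_graph(graph, entity, radius=hops, undirected=True).nodes
    return "\n".join(
        f"{u} -[{d.get('rel', 'RELATED_TO')}]-> {v}"
        for u, v, d in graph.edges(nodes, data=True)
    )

question = "Who sponsors Project Phoenix?"
prompt = (
    "Answer using ONLY the facts below. "
    "If the facts are insufficient, say so.\n\n"
    f"Facts:\n{graph_context(g, 'Project Phoenix')}\n\n"
    f"Question: {question}"
)
print(prompt)  # this string is what you would send to the LLM of your choice
```

The graph supplies the left brain's explicit facts; the LLM supplies the language.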
Two common examples of such graph algorithms are:
- PageRank, which originated with Google as a better way of ranking relevant results, and is often a better way to rank vector results (a minimal sketch follows this list).
- Graph neural networks (GNNs), which numerically describe the way data is shaped and can be used for topological similarity (for example, does this person's behavior look more like that of a high-value customer or a fraudster?).
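Here is a rough sketch of the PageRank idea above: blending vector similarity with graph centrality to rerank results, using networkx. The documents, the similarity scores, the link graph and the 50/50 blending weight are all invented for illustration.

```python
import networkx as nx

# Suppose a vector search returned these candidates with cosine
# similarity scores (made-up values for illustration).
vector_hits = {"doc_a": 0.91, "doc_b": 0.89, "doc_c": 0.88}

# An assumed citation/link graph over the same documents.
g = nx.DiGraph()
g.add_edges_from([("doc_b", "doc_a"), ("doc_c", "doc_a"), ("doc_a", "doc_b")])

# PageRank scores how "central" each document is in the graph.
pr = nx.pagerank(g)

def blended(doc):
    # The 50/50 weighting is an arbitrary choice for this sketch.
    return 0.5 * vector_hits[doc] + 0.5 * pr.get(doc, 0.0)

reranked = sorted(vector_hits, key=blended, reverse=True)
print(reranked)  # doc_a likely rises: it is both similar and well-connected
```

The design choice here is that similarity alone treats each document in isolation, while centrality uses the emergent shape of the knowledge network to break near-ties between close scores.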
AI App Decision Stakes — Half Brain or Full Brain Required?
We are now armed with a new heuristic for choosing the right architecture for an AI system. If the stakes are low, such that a probabilistic answer is good enough, and context, explainability and the ability to gate results based on access controls aren't important, then a more right-brain solution comprising an LLM and a vector database will do just fine.
If the stakes are high, however, there’s a good chance you’ll need a knowledge graph to get your application across the prototype-to-production chasm.
Consider the following spectrum:
At one end lies pure creative tasks with a human in the loop. You have writer’s block and don’t know where to start. You need a creative partner to help you get an idea off the ground. Or you have a language-specific task like summarizing meeting notes. All of these lie squarely inside of an LLM’s zone of genius, which is language and creativity. At this end, hallucinations are far less of an issue, and in some cases arguably a feature.
At the opposite end are agentic applications engaged in business activities that have minimal room for error. These are the applications, agentic and otherwise, responsible for running the business. Normally when the value of a good decision is high, the cost of a poor decision is even higher. In the best case, a high-stakes AI decision gone bad hurts the bottom line. In the worst case, it affects reputation and brand, health and human safety, business compliance with regulations, system security, and so on.
For decisions at this end of the scale, the bar for AI accuracy is higher, and the system requirements escalate further when you factor in the need for auditable and provable results to gain stakeholder and regulatory trust.
At the center of the spectrum is a customer service copilot application. Here the stakes can still be moderately high. But having humans in the loop to apply common-sense overrides and use their professional judgment softens the AI accuracy and explainability requirement. While good answers are still quite valuable and up-to-date context is vital, there is some tolerance for error.
Connecting this back to the analogy of the brain: For simple creative problems, working with only a right brain can be perfectly fine, if not better. The higher the stakes, however, the more one also needs a left brain. While we sometimes joke about humans doing brainless activities, the reality is that we all function with two hemispheres in our brains, and your AI systems should too.