Search was built for strings. Meaning was the missing layer.
Actual General Intelligence
Arbiter / Semantic Coherence

The Semantic Web was right.

It just needed geometry. Arbiter resolves meaning as a coherence problem, not a keyword problem, schema problem, or chatbot problem.

[Image: CRT semantic search demo — raw query "cook", top result 0.559 "chef"; Arbiter ranks chef highest]

In 2001, the promise was clear: machines should not merely retrieve information. They should understand what information means.

That vision became known as the Semantic Web.

The problem was not ambition. The problem was implementation.

For machines to understand meaning, the world was asked to label itself. Ontologies. RDF. OWL. Knowledge graphs. Schemas. Triples. Hand-authored structure. A massive effort to make meaning machine-readable before machines could operate on it.

That approach worked in narrow domains. It never became the universal meaning layer.

Because meaning is not just metadata. Meaning changes with context. A single word can point in many directions.

Take the word: cook

A search engine sees a string. A knowledge graph sees entities. A human sees a semantic field.

"Cook" might mean a professional chef. It might mean cooking techniques. It might mean Captain James Cook. Or Cook County. Or James Cook University. Or the Cook Islands.

The old problem was simple: how does a machine know which meaning matters?

Arbiter treats this as a coherence problem.

Run the raw query:

query: "cook"

0.559  professional chef and culinary expert
0.552  cooking techniques and recipes
0.356  Captain James Cook, British explorer
0.333  James Cook University
0.192  Cook Islands, South Pacific nation
0.185  Cook County, Illinois

No prompt engineering. No hand-coded ontology. No manual disambiguation rule. No "if restaurant then chef" logic.

Just the word: cook.

And Arbiter still resolves the dominant semantic field. The culinary meanings rise. The proper nouns fall.

Then add context

Now change the query:

query: "I need to find a cook for my restaurant"

0.693  professional chef and culinary expert
0.440  Cook County, Illinois
0.396  cooking techniques and recipes
0.285  Captain James Cook, British explorer
0.285  James Cook University
0.238  Cook Islands, South Pacific nation

The raw query already resolves the core semantic field: chef and cooking are the top two meanings. The restaurant query does something more specific. It separates the professional role from the general activity.

The professional-chef score rises from 0.559 to 0.693. General cooking drops from 0.552 to 0.396. Context does not make every culinary meaning stronger. It selects the role that fits the intent.

That is the important part. Arbiter does not merely retrieve text. It measures which candidate is most coherent with the query, then reorders the field as the intent changes.
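
The shift is easy to verify by hand. A minimal Python sketch, using only the scores printed above (no API call), reorders the field by the restaurant query and shows how far context moved each candidate:

# Scores copied verbatim from the two runs above.
raw = {
    "professional chef and culinary expert": 0.559,
    "cooking techniques and recipes": 0.552,
    "Captain James Cook, British explorer": 0.356,
    "James Cook University": 0.333,
    "Cook Islands, South Pacific nation": 0.192,
    "Cook County, Illinois": 0.185,
}
restaurant = {
    "professional chef and culinary expert": 0.693,
    "Cook County, Illinois": 0.440,
    "cooking techniques and recipes": 0.396,
    "Captain James Cook, British explorer": 0.285,
    "James Cook University": 0.285,
    "Cook Islands, South Pacific nation": 0.238,
}

# Rank by the contextual query; show how much context moved each score.
for cand in sorted(restaurant, key=restaurant.get, reverse=True):
    print(f"{restaurant[cand]:.3f} ({restaurant[cand] - raw[cand]:+.3f})  {cand}")

The chef sense gains +0.134 while general cooking loses 0.156 — exactly the separation of role from activity described above.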

Query: cook

0.559  professional chef and culinary expert
0.552  cooking techniques and recipes
0.356  Captain James Cook

Query: restaurant role

0.693  professional chef and culinary expert
0.440  Cook County, Illinois
0.396  cooking techniques and recipes

What the Semantic Web was reaching for

This is what the Semantic Web was trying to make possible: machine-readable meaning.

But instead of requiring the world to encode meaning in advance, Arbiter measures semantic fit directly.

The structure is always the same: query, candidates, coherence ranking.

curl -X POST https://api.arbiter.traut.ai/public/compare \
  -H "Content-Type: application/json" \
  -d '{
    "query": "cook",
    "candidates": [
      "professional chef and culinary expert",
      "Cook County, Illinois",
      "cooking techniques and recipes",
      "Captain James Cook, British explorer",
      "James Cook University",
      "Cook Islands, South Pacific nation"
    ]
  }'
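
The same call works from Python. A minimal sketch, assuming only what the curl above shows — the exact shape of the JSON response is not documented here, so treat the parsing as an assumption:

import requests

def compare(query, candidates):
    """POST a query and its candidates to Arbiter's public compare endpoint."""
    resp = requests.post(
        "https://api.arbiter.traut.ai/public/compare",
        json={"query": query, "candidates": candidates},
        timeout=30,
    )
    resp.raise_for_status()
    # Assumption: the response pairs each candidate with a coherence score.
    return resp.json()

scores = compare("cook", [
    "professional chef and culinary expert",
    "Cook County, Illinois",
    "cooking techniques and recipes",
    "Captain James Cook, British explorer",
    "James Cook University",
    "Cook Islands, South Pacific nation",
])
print(scores)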

Search found words. Embeddings found similarity. RAG assembled answers. Arbiter measures fit.

That is the missing layer.

The Semantic Web assumed meaning had to be manually encoded into the web. Arbiter suggests something different: meaning can be measured geometrically.
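
"Measured geometrically" has a textbook reading: represent texts as vectors and compare their angles. The toy sketch below illustrates that intuition with cosine similarity over hand-made three-dimensional vectors — a generic embedding technique, not a description of Arbiter's actual mechanism:

import math

def cosine(a, b):
    """Cosine similarity: how closely two vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hand-made toy vectors for illustration only; a real system would use
# a learned embedding model with hundreds of dimensions.
vectors = {
    "cook": [0.9, 0.1, 0.2],
    "professional chef": [0.8, 0.2, 0.1],
    "Cook Islands": [0.1, 0.9, 0.3],
}

query = vectors["cook"]
for name in ("professional chef", "Cook Islands"):
    print(f"{cosine(query, vectors[name]):.3f}  {name}")

Whatever geometry Arbiter uses internally, the point stands: fit becomes a quantity you can compute, not a label someone has to author.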

Same mechanism. Higher stakes.

01 / Legal

Discovery

What is the strongest evidence of intent? Which passage contradicts the timeline?

02 / Finance

Markets

What assumption would cause a cascade if wrong? Where does price drift from record coherence?

03 / Medicine

Clinical Fit

Which treatment or trial best fits the patient context and mechanism?

04 / Government

Procurement

Which proposal best satisfies mission constraints and documentary record?

05 / Robotics

Action

Which candidate action is most coherent with the current world state?

From information retrieval to semantic coherence.
