MCP Solves Tool Discovery for LLMs

Ryan Lopopolo

Coding agents like Claude Code and OpenAI Codex don’t struggle with terminals; they struggle to discover which tools exist. If a command isn’t in the model’s training set or its post-training hill climbing, the agent won’t even know to try it. MCP fixes discovery by giving models a live, machine-readable catalog of tools with names, descriptions, input schemas, and example calls. MCP gives the models tokens.
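
To make that concrete, here is a sketch of what a single catalog entry looks like once the client has fetched the tool list. The tool itself (search_tickets) and its schema are hypothetical; the field names (name, description, inputSchema) are the ones the MCP spec defines for tool definitions.

```typescript
// One entry from a tools/list response, as the host hands it to the model.
// The tool ("search_tickets") is invented for illustration; the field names
// follow the MCP tool definition shape.
const exampleCatalogEntry = {
  name: "search_tickets",
  description:
    "Full-text search over the team's issue tracker. " +
    "Returns matching ticket IDs, titles, and statuses.",
  inputSchema: {
    type: "object",
    properties: {
      query: { type: "string", description: "Search terms" },
      limit: { type: "number", description: "Maximum number of results" },
    },
    required: ["query"],
  },
};
```

The model never has to have seen this tool before: the entry arrives as tokens in context, at the moment it matters.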

Illustration: a small, rounded robot sorting through a wooden crate of hand tools.

Unix pipes and man pages gave humans both composition and discovery. In agentic systems, the LLM provides the composition, and MCP provides the discovery and the affordances.

MCP is the universal plugin interface, but LLM behavior is fundamentally driven by, and constrained by, input tokens. MCP offers something that CLIs ambiently available in $PATH (or in the training data) never could: a built-in mechanism to automatically prompt the model with what’s on offer, so it doesn’t have to guess or hunt for tools.
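
A rough sketch of that mechanism from the host's side, using the official TypeScript SDK: at session start the client connects to a server and asks it what it can do with a single call, then folds the answer into the model's context. The server command here is hypothetical, and exact import paths and method names may differ across SDK versions.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn a hypothetical MCP server as a subprocess.
const transport = new StdioClientTransport({
  command: "my-devtools-mcp-server", // assumption: your team's server binary
});
const client = new Client({ name: "example-host", version: "0.1.0" }, { capabilities: {} });

await client.connect(transport);

// One request returns every tool the server exposes, with descriptions and
// input schemas. The host places these directly into the model's context;
// no $PATH scanning and no reliance on training data.
const { tools } = await client.listTools();
for (const tool of tools) {
  console.log(tool.name, "-", tool.description);
}
```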

The CLI your developer productivity team built to accelerate human developers is illegible to a coding agent. Agent-first development means prompting the model from the start — and MCP bakes that into tool authorship.
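
On the authoring side, that looks roughly like the sketch below, again using the TypeScript SDK (the tool and its wrapper are invented; check current SDK docs for exact registration signatures). The point is that the description and input schema are written alongside the tool itself, and that text is precisely what the agent is prompted with when it lists the server's tools.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// A hypothetical wrapper around the internal CLI your productivity team built.
const server = new McpServer({ name: "devtools", version: "0.1.0" });

// The description and schema below aren't documentation for humans; they are
// the prompt the agent sees when it discovers this tool.
server.tool(
  "run_ci_checks",
  "Run the repo's lint, typecheck, and unit test suites; returns a pass/fail summary.",
  { target: z.string().describe("Package or directory to check") },
  async ({ target }) => {
    // Shell out to the existing CLI here; stubbed for the sketch.
    return { content: [{ type: "text", text: `checks passed for ${target}` }] };
  },
);

await server.connect(new StdioServerTransport());
```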