"MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools." (from the Anthropic docs)
A Slack MCP server will expose the following capabilities:
- Send a message to a channel
- Search for messages in a channel
- Get the list of channels
- Get the list of users
- and more.
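In MCP, each such capability is advertised as a tool: the server's `tools/list` response carries a name, a description, and a JSON Schema for the tool's inputs. As a rough sketch, a Slack server's tool descriptors might look like the following (the tool names and schema fields here are illustrative, not the official Slack server's):

```python
# Illustrative sketch of how a Slack MCP server might describe its tools.
# Tool names and input schemas are hypothetical; the shape (name,
# description, inputSchema) follows the MCP tools/list response.

slack_tools = [
    {
        "name": "send_message",
        "description": "Send a message to a channel",
        "inputSchema": {
            "type": "object",
            "properties": {
                "channel": {"type": "string"},
                "text": {"type": "string"},
            },
            "required": ["channel", "text"],
        },
    },
    {
        "name": "search_messages",
        "description": "Search for messages in a channel",
        "inputSchema": {
            "type": "object",
            "properties": {
                "channel": {"type": "string"},
                "query": {"type": "string"},
            },
            "required": ["query"],
        },
    },
    {
        "name": "list_channels",
        "description": "Get the list of channels",
        "inputSchema": {"type": "object", "properties": {}},
    },
]

def find_tool(tools, name):
    """Look up a tool descriptor by name, as a client would after tools/list."""
    return next((t for t in tools if t["name"] == name), None)
```

The descriptions matter as much as the schemas: they are what the LLM reads when deciding which tool to call.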
A Github MCP server will expose the following capabilities:
- Get the list of repositories
- Get the list of issues
- Get the list of pull requests
- Create a pull request on a repository
- and more.
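When a client invokes one of these capabilities, MCP carries it as a JSON-RPC `tools/call` request. A sketch of what that request could look like for the Github server (the tool name and arguments below are illustrative, not the official server's):

```python
import json

# Sketch of the JSON-RPC request an MCP client sends to invoke a tool.
# The "tools/call" method and params shape follow the MCP spec; the tool
# name and its arguments are hypothetical.

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_pull_request",   # hypothetical Github server tool
        "arguments": {
            "repo": "test-repo",
            "title": "Fix typo in README",
            "head": "fix-typo",
            "base": "main",
        },
    },
}

wire = json.dumps(request)  # what actually goes over the transport
```

The server validates the arguments against the tool's declared input schema and returns the result in the matching JSON-RPC response.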
If an LLM is given access to the Slack and Github MCP servers, we can create agents very easily just by prompting the LLM: "Get the open pull requests on my repository test-repo and send me a Slack message with the list of pull requests."
If we provide the Github and Slack MCP servers to the MCP client, it will automatically call the model, collect the tool calls it requests, and execute them, looping until it achieves the goal: a Slack message with the list of open pull requests.
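The orchestration the client performs can be sketched as a simple loop. Here `call_model` and `execute_tool` are hypothetical stand-ins for the real LLM API and the MCP tool transport; the control flow is the point:

```python
# Minimal sketch of an MCP client's agent loop. `call_model` and
# `execute_tool` are hypothetical stubs for the LLM API and the MCP
# server calls.

def run_agent(prompt, tools, call_model, execute_tool, max_steps=10):
    """Send context to the model, execute any tool calls it requests,
    feed the results back, and stop when the model answers in text."""
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):
        reply = call_model(messages, tools)   # model sees the tool schemas
        if reply.get("tool_calls"):
            for call in reply["tool_calls"]:
                result = execute_tool(call["name"], call["arguments"])
                messages.append({"role": "tool", "name": call["name"],
                                 "content": result})
        else:
            return reply["content"]           # final answer, no more tools
    raise RuntimeError("agent did not finish within max_steps")
```

For the example prompt above, the model would first request something like `get_pull_requests` on the Github server, then `send_message` on the Slack server, and finally reply in plain text, which ends the loop.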
- Central Registry of MCP servers
- Access Control
- Authentication / Authorization
- Agent Registry
- Guardrails
- Observability
- Rate Limiting / Caching
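To make one of these concerns concrete, rate limiting could be enforced in front of tool calls before they are forwarded to an MCP server. The token bucket below is a generic sketch, not tied to any particular gateway product, and the parameters are illustrative:

```python
import time

# Generic token-bucket rate limiter, as a gateway might apply per agent
# before forwarding tool calls to an MCP server. Rate and capacity are
# illustrative values chosen by policy.

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        """Consume one token if available; return False to reject the call."""
        now = time.monotonic()
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A gateway would keep one bucket per agent (or per server), check `allow()` before each `tools/call`, and return an error or queue the request when the bucket is empty.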