
Vatsal Saglani Builds MCP Servers and Client Hub with FastAPI for Obsidian-AI Connection

In the first part, Vatsal Saglani delves into the implementation details of two crucial servers in the nano-MCP architecture: the File Management Server and the Run Command Server. These servers handle file operations and system command execution, respectively, and form the working edge of the overall system. The focus is on building these servers without relying on large language models (LLMs), concentrating on the purely technical groundwork.

File Management Server

The File Management Server exposes four tools: create_file, read_file, update_file, and show_folder_tree. Each serves a specific purpose:

- create_file: uploads a file to a designated path on the server. FastAPI routes and Pydantic request models manage the upload, ensuring data integrity and structure.
- read_file: returns the content of a file given its path. FastAPI's asynchronous capabilities improve reading speed and efficiency.
- update_file: modifies an existing file by uploading new content. A dedicated interface receives the new content and overwrites the original file.
- show_folder_tree: displays the directory tree of the current folder, helping users manage and navigate their file systems.

Run Command Server

The Run Command Server features a single tool, run_command, which executes system commands on the server. To mitigate the obvious security risks, FastAPI's built-in security measures restrict command execution permissions, and Pydantic validates request parameters to ensure the commands are well-formed.

Integration and Testing

Saglani uses Docker Compose to integrate both servers in a local development environment, simplifying container management and deployment.
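The tools described above can be illustrated without the FastAPI plumbing. The following is a minimal, dependency-free sketch of show_folder_tree and a guarded run_command; the allowlist, function signatures, and return shapes are my assumptions, not Saglani's actual code, which wraps similar logic in FastAPI routes with Pydantic request models:

```python
# Hypothetical sketch of two nano-MCP server tools. The allowlist and the
# return shapes are illustrative assumptions, not the actual project code.
import subprocess
from pathlib import Path

ALLOWED_COMMANDS = {"ls", "cat", "echo", "python"}  # assumed allowlist

def show_folder_tree(root: str, indent: int = 0) -> list[str]:
    """Return the directory tree under `root` as indented lines."""
    lines = []
    for entry in sorted(Path(root).iterdir()):
        lines.append("  " * indent + entry.name)
        if entry.is_dir():
            lines.extend(show_folder_tree(entry, indent + 1))
    return lines

def run_command(command: list[str]) -> str:
    """Execute a command only if its executable is on the allowlist."""
    if not command or command[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"command not allowed: {command[:1]}")
    result = subprocess.run(command, capture_output=True, text=True, check=True)
    return result.stdout
```

Passing the command as a list (rather than a shell string) sidesteps shell-injection issues, which is one simple way to realize the command-restriction idea described above.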
Shared Docker volumes enable file synchronization between the File Management Server and the Run Command Server, supporting a smooth development and testing workflow.

Why Build the Servers First?

The rationale for building the servers first is to ensure the stability and reliability of the core infrastructure. In the nano-MCP architecture, the servers are the edge components that perform the actual work: file management and command execution. Without robust servers, AI models and client-side interactions would lack the foundation to turn ideas into actions. Building them first also streamlines project management and quality control, improving development efficiency and reducing maintenance costs.

Industry Evaluation

Experts commend Saglani's methodical approach to project development. By prioritizing the foundational elements, specifically the servers, developers build a solid base before adding more complex AI components. This strategy improves development speed and supports long-term system stability.

Author Profile

Vatsal Saglani is an experienced software engineer with a strong background in developing large-scale distributed systems. His current focus is the MCP architecture and its applications in AI, aiming to improve the performance and security of AI systems.

In the second part, Saglani continues his exploration of the nano-MCP architecture by building the MCP Client Hub, which connects language models such as ChatGPT and Claude to the previously created File Management Server and Run Command Server. The hub is designed to streamline interactions and enable efficient automated operations.

Overview of Existing MCP Clients

Before starting development, Saglani briefly surveyed existing MCP clients. He highlighted Claude Desktop, a local tool from Anthropic that supports interaction with its language models.
While Claude Desktop offers basic functionality, its flexibility and extensibility are limited. Inspired by it, Saglani set out to create a more transparent, modular, and expandable MCP Client Hub.

Development of the MCP Client Hub

Technology Stack

Saglani chose FastAPI, a high-performance web framework, to build the hub. FastAPI enables rapid API development and supports asynchronous operations, making it well suited to this project. Pydantic handles data model validation, and httpx handles outgoing HTTP requests.

Design Principles

- Simplicity and transparency: the system should be straightforward, making it easy to understand and operate.
- Separation of discovery and execution: the client hub distinguishes service discovery from actual execution, improving manageability and maintenance.
- Scalability: new servers and features can be added without affecting the stability of the existing system.

Implementation Steps

1. Initialize the FastAPI application: set up a FastAPI app with basic routing and API paths.
2. Define data models: use Pydantic to define structured request and response models, ensuring data validation and consistency.
3. Service discovery: implement a mechanism for the client hub to automatically discover available servers and services.
4. Command execution interfaces: establish communication with the File Management Server and Run Command Server to send requests, receive responses, and execute the relevant commands.
5. Testing and debugging: run unit and integration tests to verify the correctness and reliability of each module.

Results and Future Plans

Saglani successfully developed the MCP Client Hub, which meets the basic requirements while remaining scalable and flexible. The complete project code is available on GitHub in the nano-MCP repository, and community contributions and further development are encouraged.
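The separation of discovery and execution at the heart of the hub's design can be sketched as a small registry. The server names, the /tools/<name> route convention, and the ToolCall shape below are illustrative assumptions, not the actual nano-MCP code; the real hub would define such payloads as Pydantic models and forward them with httpx:

```python
# Hypothetical sketch of the hub's discovery/execution split. Routes and
# names are assumptions; the real project validates payloads with Pydantic
# and forwards the resolved request over HTTP with httpx.
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    server: str        # which registered server should handle the call
    tool: str          # e.g. "create_file" or "run_command"
    arguments: dict = field(default_factory=dict)

class ClientHub:
    def __init__(self) -> None:
        self._servers: dict[str, str] = {}  # server name -> base URL

    def register(self, name: str, base_url: str) -> None:
        self._servers[name] = base_url

    def discover(self) -> dict[str, str]:
        # Discovery only reports what is available; it never executes anything.
        return dict(self._servers)

    def resolve(self, call: ToolCall) -> str:
        # Execution path: map a validated ToolCall onto a concrete endpoint.
        if call.server not in self._servers:
            raise KeyError(f"unknown server: {call.server!r}")
        return f"{self._servers[call.server]}/tools/{call.tool}"
```

In a running hub, an HTTP POST of call.arguments to the resolved URL would complete the execution step; keeping discovery side-effect-free is what makes the two concerns independently testable.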
Future goals include fully localizing and operationalizing the AI assistant, providing a private, offline solution.

Industry Response

The community has received Saglani's project positively. Developers praise his clear, concise approach and note that FastAPI's asynchronous handling gives the hub strong performance. The project also serves as a learning resource for beginners, illustrating step by step how to build a robust web service.

Personal Knowledge Management with MCP

In the third part, the author describes building a personal MCP server that connects AI models to his Obsidian knowledge base. The goal is to use LLMs to optimize personal workflows, particularly by giving AI agents deeper and more accessible context.

Project Motivation

The author uses Obsidian to organize concepts and build a comprehensive knowledge network, but many notes remain incomplete over time. The introduction of MCP support in Claude Desktop extended AI models' ability to access and use personal knowledge bases, motivating the author to build a custom solution.

Why Build a Custom MCP Server?

- Privacy: Obsidian notes are stored as Markdown files on the local file system, and handing them to third-party MCP servers risks exposing sensitive information.
- Control: the author prefers to grant AI models read-only access, preventing any unwanted modifications.
- Interest: curiosity drove the author to develop the server from scratch.

Knowledge Base Structure

Each new concept becomes a note in Obsidian, stored in a single directory in alphabetical order. A "homepage" file categorizes and links the notes, improving accessibility.

MCP Server Architecture

Named "knowledge-vault," the server registers each note as a resource and defines tools for AI interaction:

- list_knowledges(): lists every note's name and URI.
- get_knowledge_by_uri(): retrieves and returns the content of a specific note given its URI.

Integration with Claude Desktop

Running the MCP server amounts to calling the run() method of a FastMCP instance. On macOS, pointing the Claude Desktop configuration file at the local MCP server script lets the two communicate over standard I/O.

Test Results

Tool functionality:
- list_knowledges() returned a list of all notes.
- get_knowledge_by_uri() correctly fetched and displayed the content of specific notes.

Note integrity:
- Zero-byte notes: the model identified and flagged empty Markdown files.
- Weak content: the model scanned all resources and generated a table showing the completion status of each note, helping track and improve content.

Question generation: based on existing notes, the model generated descriptive short-answer questions, reinforcing key concepts and aiding recall.

Future Goals

While initial tests have been promising, the author plans to fully localize and operationalize the AI assistant, creating a truly private, offline tool with complete control over the knowledge management process.

Industry Impact

The implementation showcases the potential of the MCP protocol and offers a fresh perspective on personal knowledge management. Combining AI with a personal knowledge base enables more efficient learning and project management, and the author's attention to privacy and security makes the solution more trustworthy.

Company Background

Claude Desktop, developed by Anthropic, is a desktop AI assistant that supports MCP integration among other capabilities. Anthropic's mission is to make AI a powerful assistant in everyday work and life.
