An example tool that takes a name and returns a greeting message. Demonstrates the basic structure of an MCP tool using Zod for parameter definition.
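A minimal sketch of how such a tool might be registered with the MCP TypeScript SDK; the tool name, greeting text, and registration style shown here are illustrative assumptions, not the server's actual code.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "example-server", version: "1.0.0" });

// Hypothetical registration: Zod declares and validates the single `name` parameter.
server.tool(
  "example_tool",
  { name: z.string().describe("Name to include in the greeting") },
  async ({ name }) => ({
    content: [{ type: "text", text: `Hello, ${name}!` }],
  })
);
```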
Generates non-streaming text content using a specified Google Gemini model. This tool takes a text prompt and returns the complete generated response from the model. It's suitable for single-turn generation tasks where the full response is needed at once.
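As a rough illustration, a client might invoke this tool as in the sketch below. The tool and parameter names (`gemini_generateContent`, `modelName`, `prompt`) and the server command are assumptions based on the description above, not the server's published schema.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const client = new Client({ name: "demo-client", version: "1.0.0" }, { capabilities: {} });
// Hypothetical server entry point; adjust the command to however the server is started.
await client.connect(new StdioClientTransport({ command: "node", args: ["dist/server.js"] }));

const result = await client.callTool({
  name: "gemini_generateContent",      // tool name assumed
  arguments: {
    modelName: "gemini-1.5-flash",     // parameter name assumed
    prompt: "Explain context caching in one paragraph.",
  },
});
console.log(result.content);           // the complete response, returned in one piece
```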
Generates text content as a stream using a specified Google Gemini model. This tool takes a text prompt and streams back chunks of the generated response as they become available. It's suitable for interactive use cases or handling long responses. Optional parameters allow control over generation and safety settings.
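A sketch of what the optional generation and safety settings could look like; the parameter names are assumptions, while the values follow the Gemini API's own conventions.

```typescript
// Hypothetical arguments for the streaming tool (names assumed).
const streamArgs = {
  modelName: "gemini-1.5-pro",
  prompt: "Write a short story about a robot learning to paint.",
  generationConfig: { temperature: 0.7, maxOutputTokens: 1024 },
  safetySettings: [
    { category: "HARM_CATEGORY_HARASSMENT", threshold: "BLOCK_MEDIUM_AND_ABOVE" },
  ],
};
```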
Generates content using a specified Google Gemini model, enabling the model to request execution of predefined functions. This tool accepts function declarations and returns either the standard text response OR the details of a function call requested by the model. NOTE: This tool only returns the *request* for a function call; it does not execute the function itself.
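A sketch of what the function declarations might look like; the declaration shape follows the Gemini API, but the tool's exact parameter names are assumptions.

```typescript
// Hypothetical arguments (names assumed); the functionDeclarations shape follows the Gemini API.
const functionCallArgs = {
  modelName: "gemini-1.5-flash",
  prompt: "What is the weather like in Berlin right now?",
  functionDeclarations: [
    {
      name: "get_weather",
      description: "Look up current weather for a city",
      parameters: {
        type: "OBJECT",
        properties: { city: { type: "STRING", description: "City name" } },
        required: ["city"],
      },
    },
  ],
};
// The response contains either plain text or a function call request such as
// { name: "get_weather", args: { city: "Berlin" } } that the caller must execute itself.
```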
Initiates a new stateful chat session with a specified Gemini model. Returns a unique sessionId to be used in subsequent chat messages. Optionally accepts initial conversation history and session-wide generation/safety configurations.
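A sketch of a session start with initial history and a session-wide generation setting; parameter names are assumptions, the history shape follows the Gemini API.

```typescript
// Hypothetical gemini_startChat arguments (names assumed).
const startChatArgs = {
  modelName: "gemini-1.5-flash",
  history: [
    { role: "user", parts: [{ text: "You are a concise assistant." }] },
    { role: "model", parts: [{ text: "Understood." }] },
  ],
  generationConfig: { temperature: 0.4 },
};
// The result is expected to include a sessionId for use with the chat message tools.
```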
Sends a message to an existing Gemini chat session, identified by its sessionId. Returns the model's response, which might include text or a function call request.
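A minimal sketch of a follow-up message; parameter names are assumptions.

```typescript
// Hypothetical gemini_sendMessage arguments (names assumed).
const sendMessageArgs = {
  sessionId: "session-id-returned-by-the-start-chat-tool", // placeholder
  message: "Summarize our conversation so far.",
};
```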
Sends the result(s) of function execution(s) back to an existing Gemini chat session, identified by its sessionId. Returns the model's subsequent response.
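A sketch of returning an executed function's output to the session; parameter names are assumptions, the response shape follows the Gemini API's function response format.

```typescript
// Hypothetical gemini_sendFunctionResult arguments (names assumed): hand the model the output
// of the function(s) it requested so it can produce its next response.
const sendFunctionResultArgs = {
  sessionId: "session-id-returned-by-the-start-chat-tool", // placeholder
  functionResponses: [
    {
      name: "get_weather",
      response: { city: "Berlin", temperatureC: 18, condition: "partly cloudy" },
    },
  ],
};
```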
Uploads a file (specified by a local path) to be used with the Gemini API. NOTE: This API is not supported on Vertex AI clients. It only works with Google AI Studio API keys. Returns metadata about the uploaded file, including its unique name and URI.
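A sketch of an upload call; the path is a placeholder and the parameter names are assumptions.

```typescript
// Hypothetical gemini_uploadFile arguments (names assumed). The path must be local to the server.
const uploadFileArgs = {
  filePath: "/tmp/report.pdf",     // placeholder path
  displayName: "Quarterly report", // assumed optional parameter
};
// The result is expected to include file metadata such as a unique name (e.g. "files/abc123") and a URI.
```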
Lists files previously uploaded to the Gemini API. Supports pagination to handle large numbers of files. NOTE: This API is not supported on Vertex AI clients. It only works with Google AI Studio API keys.
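A sketch of paginated listing; parameter names are assumptions.

```typescript
// Hypothetical gemini_listFiles arguments (names assumed): page through uploaded files.
const listFilesArgs = {
  pageSize: 20,         // number of files per page
  pageToken: undefined, // pass the next-page token from a previous call to continue
};
```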
Retrieves metadata for a specific file previously uploaded to the Gemini API. NOTE: This API is not supported on Vertex AI clients. It only works with Google AI Studio API keys.
Deletes a specific file previously uploaded to the Gemini API. NOTE: This API is not supported on Vertex AI clients. It only works with Google AI Studio API keys.
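Both of these tools operate on the unique file name returned by the upload and list tools; a sketch with assumed parameter names:

```typescript
// Hypothetical arguments for the get-file and delete-file tools (names assumed).
const getFileArgs = { fileName: "files/abc123" };    // placeholder file name
const deleteFileArgs = { fileName: "files/abc123" }; // placeholder file name
```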
Creates a cached content resource for a compatible Gemini model. Caching can reduce latency and costs for prompts that are reused often. NOTE: Caching is only supported for specific models.
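A sketch of creating a cache for a large, reusable prompt prefix; the parameter names and the specific model version are assumptions.

```typescript
// Hypothetical gemini_createCache arguments (names assumed).
const createCacheArgs = {
  modelName: "gemini-1.5-flash-001", // assumed: caching typically requires a specific model version
  displayName: "product-docs-cache",
  contents: [
    { role: "user", parts: [{ text: "Long product documentation reused across many requests..." }] },
  ],
  ttl: "3600s", // keep the cached content for one hour
};
```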
Lists cached content resources available for the project. Supports pagination. Returns a list of cache metadata objects and potentially a token for the next page.
Retrieves metadata for a specific cached content resource. Requires the unique cache name (e.g., 'cachedContents/abc123xyz').
Updates metadata (TTL and/or displayName) for a specific cached content resource. Requires the unique cache name (e.g., 'cachedContents/abc123xyz').
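A sketch of an update call; parameter names are assumptions.

```typescript
// Hypothetical gemini_updateCache arguments (names assumed): extend the TTL and rename the cache.
const updateCacheArgs = {
  cacheName: "cachedContents/abc123xyz", // from the create-cache or list-caches tools
  ttl: "7200s",
  displayName: "product-docs-cache-v2",
};
```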
Deletes a specific cached content resource. Requires the unique cache name (e.g., 'cachedContents/abc123xyz').
Your API key from Google AI Studio.
Default Gemini model. Optional; if not provided, 'gemini-1.5-flash' is used.
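A sketch of how the server might read these two settings; the environment variable names are assumptions.

```typescript
// Assumed environment variable names, not confirmed by the server's documentation.
const apiKey = process.env.GOOGLE_GEMINI_API_KEY;                            // required: Google AI Studio API key
const defaultModel = process.env.GOOGLE_GEMINI_MODEL ?? "gemini-1.5-flash";  // optional default model
if (!apiKey) {
  throw new Error("GOOGLE_GEMINI_API_KEY is not set");
}
```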