# Capabilities
Capabilities are the resources and services your tools can access. When you define a tool, you declare which capabilities it needs, and the framework provides them at runtime.
## Available Capabilities

| Capability | What It Provides |
|---|---|
| `storage` | Read/write to persistent storage |
| `token` | Sign and verify tokens |
| `environment` | Access environment variables |
| `event` | Subscribe to and publish events |
| `tool` | Call other tools |
| `ai` | Access AI completions |
| `object` | Create visual components |
| `search` | Semantic search across storage |
## Declaring Capabilities

In `tools.json`, list the capabilities your tool needs:
```json
{
  "name": "analyze_emails",
  "description": "Analyzes emails using AI",
  "capabilities": ["storage", "ai"],
  "input_schema": { ... },
  "output_schema": { ... }
}
```
Only request what you actually use. A tool with `"capabilities": []` can still do useful work - it just can't access external resources.
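For example, a pure transformation tool can declare `"capabilities": []` and compute its output directly from its input (a minimal sketch; the `Input`/`Output` shapes and tool name are illustrative):

```typescript
// Illustrative input/output shapes for this sketch.
interface Input { text: string; }
interface Output { wordCount: number; }

// With "capabilities": [] in tools.json, the capabilities argument is unused and omitted here.
export default async function count_words(input: Input): Promise<Output> {
  const wordCount = input.text.trim().split(/\s+/).filter(Boolean).length;
  return { wordCount };
}
```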
## Storage

The `storage` capability provides access to persistent, permission-enforced storage.
```typescript
export default async function my_tool(
  input: Input,
  capabilities: Capabilities
): Promise<Output> {
  const storage = capabilities.storage.use('@my-org/notes');

  // Write data
  await storage.put('/notes/note-123.json', {
    title: 'My Note',
    content: 'Hello world'
  });

  // Read data
  const note = await storage.get('/notes/note-123.json');

  // List files
  const files = await storage.list('/notes/');

  // Delete
  await storage.delete('/notes/note-123.json');
}
```
Storage paths are controlled by permissions defined in `storage.json`. See Storage for details.
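For the tool above, a minimal `storage.json` entry might look like this (a sketch using the same `same_app` format shown under Making Data Searchable below; the path and description are illustrative):

```json
{
  "same_app": {
    "/notes/": {
      "operations": ["read", "write"],
      "description": "User notes stored as JSON documents"
    }
  }
}
```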
## Token

The `token` capability lets you sign and verify tokens.
```typescript
export default async function login(
  input: Input,
  capabilities: Capabilities
): Promise<Output> {
  // Verify credentials...

  // Sign a new token
  const token = await capabilities.token.sign('account', {
    accountId: user.id,
    email: user.email
  });

  return { success: true };
}
```
Tokens are automatically returned to the client and stored for future requests. See Tokens for more about token definitions.
## Environment

The `environment` capability provides access to environment variables configured for your app.
```typescript
export default async function call_api(
  input: Input,
  capabilities: Capabilities
): Promise<Output> {
  const apiKey = capabilities.environment.API_KEY;
  const baseUrl = capabilities.environment.API_BASE_URL;

  const response = await fetch(`${baseUrl}/endpoint`, {
    headers: { 'Authorization': `Bearer ${apiKey}` }
  });

  return { data: await response.json() };
}
```
Environment variables are configured via `malv env` and stored securely per environment (development vs production).
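Because a variable may be unset in a given environment, it can be worth checking for it up front and returning a clear error (a sketch; the error output shape is illustrative):

```typescript
const apiKey = capabilities.environment.API_KEY;
if (!apiKey) {
  // A descriptive message lets the AI explain what went wrong.
  return { error: 'API_KEY is not configured for this environment' };
}
```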
## Event

The `event` capability enables pub/sub communication between apps. See Events for the full guide.
```typescript
// Subscribe to events from another app
await capabilities.event.subscribe(
  '@malv/gmail',       // Source app
  'received_email',    // Event name
  'handle_new_email'   // Your handler tool
);

// Send events to subscribers
await capabilities.event.send('document_updated', {
  documentId: 'doc-123',
  updatedBy: user.email
});
```
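The handler named in the subscription is just another tool in your app. A minimal sketch, assuming the event payload arrives as the handler's input (the actual payload shape depends on the publishing app):

```typescript
// Illustrative payload shape for a received_email event.
interface Input { from: string; subject: string; }
interface Output { handled: boolean; }

export default async function handle_new_email(
  input: Input,
  capabilities: Capabilities
): Promise<Output> {
  // React to the event, e.g. store it for later analysis.
  const storage = capabilities.storage.use('@my-org/notes');
  await storage.put(`/emails/${Date.now()}.json`, input);
  return { handled: true };
}
```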
## Tool

The `tool` capability lets your tool call other tools, enabling composition.
```typescript
export default async function create_report(
  input: Input,
  capabilities: Capabilities
): Promise<Output> {
  // Call another tool in this app
  const emails = await capabilities.tool.call('list_emails', {
    maxResults: 10
  });

  // Call a tool in another app
  const table = await capabilities.tool.call('@malv/tables', 'create_table', {
    columns: ['From', 'Subject', 'Date'],
    rows: emails.map(e => [e.from, e.subject, e.date])
  });

  return { tableId: table.id };
}
```
## AI

The `ai` capability provides access to AI completions from multiple providers. This is useful for tools that need to analyze, summarize, or generate content.
### Providers

| Provider | Availability | Model Examples |
|---|---|---|
| Cloudflare AI | Always available | `@cf/meta/llama-3-8b-instruct` |
| OpenAI | If `AI_API_KEY` starts with `sk-` | `gpt-4o`, `gpt-4o-mini` |
| Anthropic | If `AI_API_KEY` starts with `sk-ant-` | `claude-3-5-sonnet-20241022` |
### Basic Usage

```typescript
export default async function summarize_document(
  input: Input,
  capabilities: Capabilities
): Promise<Output> {
  const summary = await capabilities.ai.completion({
    model: {
      openai: 'gpt-4o-mini',
      anthropic: 'claude-3-5-sonnet-20241022',
      cloudflare: '@cf/meta/llama-3-8b-instruct'
    },
    messages: [
      { role: 'user', content: `Summarize this document:\n\n${input.document}` }
    ],
    max_tokens: 500
  });

  return { summary };
}
```
### Model Selection

The `model` parameter maps providers to model names. Specify models for the providers you want to support:

```typescript
model: {
  openai: 'gpt-4o-mini',                       // Used if OpenAI key available
  anthropic: 'claude-3-5-sonnet-20241022',     // Used if Anthropic key available
  cloudflare: '@cf/meta/llama-3-8b-instruct'   // Fallback
}
```
By default, external APIs (OpenAI/Anthropic) are preferred over Cloudflare. You can control this with `modelPriority`:

```typescript
await capabilities.ai.completion({
  model: { openai: 'gpt-4o', cloudflare: '@cf/meta/llama-3-8b-instruct' },
  modelPriority: ['cloudflare', 'openai'], // Prefer Cloudflare
  messages: [...],
  max_tokens: 1000
});
```
### Streaming

For longer responses, use streaming:

```typescript
for await (const chunk of capabilities.ai.completionStreamed({
  model: { openai: 'gpt-4o' },
  messages: [{ role: 'user', content: 'Write a story...' }],
  max_tokens: 2000
})) {
  // Process each chunk as it arrives
  process.stdout.write(chunk);
}
```
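If the tool needs the complete text (for example, to return it in its output), accumulate the chunks instead (a sketch; the output shape is illustrative):

```typescript
let story = '';
for await (const chunk of capabilities.ai.completionStreamed({
  model: { openai: 'gpt-4o' },
  messages: [{ role: 'user', content: 'Write a story...' }],
  max_tokens: 2000
})) {
  story += chunk; // collect the text as it streams in
}

return { story };
```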
### JSON Output

Request structured JSON responses:

```typescript
const result = await capabilities.ai.completion({
  model: { openai: 'gpt-4o-mini' },
  messages: [
    { role: 'user', content: 'Extract entities from: "John works at Acme Corp"' }
  ],
  max_tokens: 200,
  response_format: 'json'
});

const entities = JSON.parse(result);
```
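Model output is not guaranteed to be valid JSON, so it can be worth guarding the parse (a sketch; the error output shape is illustrative):

```typescript
let entities: unknown;
try {
  entities = JSON.parse(result);
} catch {
  // Surface a readable error instead of an opaque parse failure.
  return { error: 'AI response was not valid JSON', raw: result };
}
```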
## Object

The `object` capability lets tools create visual components that appear as tabs in the UI.
```typescript
export default async function create_chart(
  input: Input,
  capabilities: Capabilities
): Promise<Output> {
  // Create an object that will be rendered in the UI
  const id = await capabilities.object.set('@my-org/charts', {
    type: 'chart',
    name: input.title,
    metadata: {
      chartType: 'bar',
      dataSource: input.dataId
    }
  });

  return { chartId: id };
}
```
See Objects for more about defining object types and renderers.
## Search

The `search` capability enables semantic search across storage data. When data is written to storage, embeddings are automatically generated in the background.
### Basic Search

```typescript
export default async function find_relevant_data(
  input: Input,
  capabilities: Capabilities
): Promise<Output> {
  const results = await capabilities.search(input.query, {
    types: ['storage'],
    limit: 10,
    minSimilarity: 0.7
  });

  return { results };
}
```
### Search Options

| Option | Type | Description |
|---|---|---|
| `types` | `string[]` | What to search: `'tools'`, `'objects'`, `'examples'`, `'storage'` |
| `limit` | `number` | Maximum results (default: 20) |
| `minSimilarity` | `number` | Minimum similarity score, 0-1 (default: 0.5) |
| `locations` | `object` | Filter to specific app paths |
### Location-Based Filtering

Narrow searches to specific paths within an app:

```typescript
const results = await capabilities.search('meeting notes', {
  types: ['storage'],
  locations: {
    '@malv/slack': '/teams/team-123/conversations',
    '@malv/drive': '/teams/team-123/documents'
  }
});
```
| Pattern | Meaning |
|---|---|
| `'*'` or `'/*'` | All paths in the app |
| `'/teams/team-123/*'` | All paths under this prefix |
| `'/teams/team-123/conversations'` | Paths starting with this structure |
### Security

Search results are automatically filtered based on the user's tokens. Users only see data they have permission to access.
### Making Data Searchable

Add descriptions to storage paths in `storage.json` to improve search relevance:
```json
{
  "same_app": {
    "/teams/<token.team>/contacts/": {
      "operations": ["read", "write"],
      "description": "Customer contact information including email, phone, and address"
    }
  }
}
```
To exclude sensitive data from search:
```json
{
  "same_app": {
    "/cache/": {
      "operations": ["read", "write"],
      "skipEmbedding": true
    }
  }
}
```
## Best Practices

- **Request only what you need.** Each capability increases the security surface. A tool that only transforms data doesn't need `storage` or `ai`.
- **Handle errors gracefully.** Capability operations can fail (network issues, permission errors). Return meaningful error messages so the AI can explain what went wrong; see the sketch below.
- **Consider costs.** The `ai` capability makes API calls that cost money. Use smaller models (`gpt-4o-mini`) for simple tasks.
- **Use search for discovery.** Instead of hard-coding paths, use the `search` capability to find relevant data based on user intent.
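For example, a call to another tool can be wrapped so a failure becomes a message the AI can relay rather than an unhandled exception (a sketch, assuming a failed call throws; the error output shape is illustrative):

```typescript
try {
  const emails = await capabilities.tool.call('list_emails', { maxResults: 10 });
  return { count: emails.length };
} catch (err) {
  // Return a message the AI can pass on to the user instead of letting the tool crash.
  const message = err instanceof Error ? err.message : String(err);
  return { error: `Could not list emails: ${message}` };
}
```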