Plugin API Examples

While there are examples scattered throughout the Plugin API documentation, this page collects examples in one place to give plugin authors functional building blocks.


Starter repos

While this documentation of examples will continue to grow & expand over time, there are also some evergreen repos we recommend visiting as the starting point for writing your own Amplenote plugin:

Plugin Embed Starter Project. Build a React-based application that can be rendered in a note, sidebar, or anywhere else.

Plugin Template. A barebones plugin with the means to run tests.

AmpleAI and the multitude of existing plugins. There are more than 100 Amplenote plugins, almost all of which can be retrieved to use as examples & context.


Creating a new task action initiated by /slash command

If you want to implement a slash command that is conditionally available in tasks, you'll need to use the check method within a command entry nested in the appOption section of your plugin definition. That is,


(() => {
  const plugin = {
    appOption: {
      "My Task Command": {
        check: async function(app) {
          return app.context.taskUUID;
        },
        run: async function(app, noteUUID) {
          const task = await app.getTask(app.context.taskUUID);
          console.log("Now we can act upon task", task);
        }
      }
    }
  };
  return plugin;
})()


When you retrieve a task, you have access to all of the properties mentioned in the task definition.


Parsing Rich Footnotes in a task

How do you parse the text of a task that contains Rich Footnotes? Let's look at an example task:



[Screenshot: the example task, showing Rich Footnotes and a connected task]




This yields the following content returned in task.content:

Here's a task with [description and image][^1], [a link to your favorite website](amplenote.com) [a text-only footnote][^2] [a link to a note](https://www.amplenote.com/notes/2508625c-7931-11ea-a77f-4a9bae68938e) and a connected task [b2bdd0dc-7871-429a-807a-2b35c1a3c7d8](https://www.amplenote.com/notes/tasks/b2bdd0dc-7871-429a-807a-2b35c1a3c7d8?relation=connected). /plan

[^1]: [description and image]()

With description and picture

![](https://images.amplenote.com/d499b3d8-534d-11ef-95c7-0663d8339c46/25fded33-89b1-4013-a67a-b5b775432888.png)

[^2]: [a text-only footnote]()

Only text?
 


Most often, you'll want an LLM to parse this text to extract the pieces your plugin cares about, since the footnote markers and their definitions are separated within the content.
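Alternatively, the footnote definitions can be pulled out deterministically. This is a sketch, not part of the Amplenote API: parseFootnotes is a hypothetical helper, and its regex assumes the `[^n]: [label]()` definition format shown in the sample content above.

```javascript
// Hypothetical helper: extract Rich Footnote definitions from task.content.
// Each definition looks like `[^1]: [label]()` followed by body lines that
// run until the next definition (or the end of the content).
function parseFootnotes(content) {
  const footnotes = {};
  // Capture the footnote index, its label, and the body up to the next definition
  const pattern = /\[\^(\d+)\]:\s*\[([^\]]*)\]\(\)\s*([\s\S]*?)(?=\n\s*\[\^\d+\]:|$)/g;
  let match;
  while ((match = pattern.exec(content)) !== null) {
    const [, index, label, body] = match;
    footnotes[index] = { label, body: body.trim() };
  }
  return footnotes;
}
```

Running this against the sample content above would return an object keyed by footnote number, with the label and body text of each Rich Footnote; inline references like `[label][^1]` are ignored because they lack the trailing colon.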


Calling an external service from a plugin

In general, you can use the fetch method to retrieve data from any external URL, as seen in the "Calling an LLM" example, below. The main caveat is that some APIs implement CORS restrictions that prevent the plugin's iframe from receiving the response. If you receive a CORS error, we recommend signing up for Cloudflare and setting up a Cloudflare Worker that can proxy requests from your plugin users to the target site & back. Here is the Cloudflare worker implementation that Amplenote uses to proxy requests for services with CORS restrictions.
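To illustrate the proxy approach, here is a sketch of calling a target API through a worker. Both the worker URL and the `apiurl` query parameter are placeholders: your own worker determines the URL scheme it actually accepts.

```javascript
// Build the proxied URL (kept pure so it can be unit tested).
// `apiurl` is a hypothetical query parameter; adjust to match your worker.
function proxiedUrl(proxyBase, targetUrl) {
  return `${ proxyBase }/?apiurl=${ encodeURIComponent(targetUrl) }`;
}

// Hedged sketch: route a request through a hypothetical Cloudflare Worker
// deployed at a placeholder URL. The worker forwards the request to the
// target and returns the response with permissive CORS headers.
async function proxiedFetch(targetUrl, options = {}) {
  const proxyBase = "https://my-proxy.example.workers.dev"; // placeholder: your worker's URL
  const response = await fetch(proxiedUrl(proxyBase, targetUrl), options);
  if (!response.ok) throw new Error(`Proxy request failed: ${ response.status }`);
  return response;
}
```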


If you are writing an agent, or a long-running plugin that should not prevent the user from working concurrently, consider using the app.openEmbed() call to write your progress to an ephemeral section available to users on desktop or mobile. Here are additional details on updating the contents of your embed as incremental progress occurs.


Calling an LLM (OpenAI, Gemini, Anthropic) from a plugin

The AmpleAI plugin offers a live prototype to see how an AI response is requested and rendered. The challenges & decisions to make when you're building a plugin that will interface with AI include "will the response be streamed?" (usually advisable) and, if so, "how will the plugin show streamed progress?" Here is the general idea:


// --------------------------------------------------------------------------
// Make a single request attempt to the AI provider
// From https://github.com/alloy-org/ai-plugin/blob/main/lib/providers/fetch-ai-provider.js
//
// @param {object} app - The plugin's app interface, used to read settings
// @param {Array} messages - Array of message objects, e.g. { role, content }
// @param {string} model - Model name
// @param {number} attemptNumber - The attempt number (0-indexed)
// @param {string} promptKey - Key identifying the prompt type
// @param {boolean} stream - Whether streaming is enabled
// @param {number} timeoutSeconds - Request timeout in seconds
// @returns {Promise<Response>} The fetch response
async function makeRequest(app, messages, model, { attemptNumber = 0, promptKey = null,
    stream = null, timeoutSeconds = 10 } = {}) {
  const providerEm = "openai"; // See a `providerFromModel(model)` implementation in AmpleAI
  if (attemptNumber > 0) console.debug(`Attempt #${ attemptNumber }: Trying ${ model } with ${ promptKey || "no promptKey" }`);

  const apiKey = app.settings["OpenAI Key"]; // From table of settings for model
  const body = { model, messages, response_format: { type: "json_object" } }; // See `requestBodyForProvider(messages, model, stream, tools, { promptKey })` in AmpleAI for a live example of building a request body for various providers
  const endpoint = "https://api.openai.com/v1/chat/completions"; // See `providerEndpointUrl(model, apiKey)` implementation in AmpleAI for provider-specific URLs
  console.debug(`Calling ${ providerEm } at ${ endpoint } with body ${ JSON.stringify(body) } at ${ new Date() }`);
  const headers = { "Content-Type": "application/json", "Authorization": `Bearer ${ apiKey }` }; // See `headersForProvider(providerEm, apiKey)` in AmpleAI

  // Use Promise.race to implement a request timeout: the fetch and a timeout promise race,
  // and if the timeout settles first, the request is rejected with a timeout error
  const fetchResponse = await Promise.race([
    fetch(endpoint, {
      method: "POST", headers, body: JSON.stringify(body),
    }),
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error("Timeout")), timeoutSeconds * 1000)
    )
  ]);

  if (!fetchResponse.ok) {
    const err = new Error(`Request failed with status ${ fetchResponse.status }`);
    err.response = fetchResponse;
    throw err;
  }

  return fetchResponse;
}
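If you request a streamed response, the Response body arrives incrementally. Here is a sketch of accumulating the raw chunks with the standard ReadableStream reader; readStreamedText is a hypothetical helper, not part of the Amplenote API, and a real plugin would typically also parse the provider's server-sent events rather than concatenating raw text.

```javascript
// Hedged sketch: read a fetch Response body chunk by chunk using the
// standard ReadableStream reader, accumulating the decoded text.
async function readStreamedText(fetchResponse) {
  const reader = fetchResponse.body.getReader();
  const decoder = new TextDecoder();
  let text = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    text += decoder.decode(value, { stream: true });
    // A plugin could update an app.openEmbed() panel here as chunks arrive
  }
  return text;
}
```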