sup.ai
The sup.ai package provides access to the runtime’s basic AI capabilities, including text generation, text analysis, and embeddings. For image operations, see the sup.ai.image package.
sup.ai.prompt()
Simple prompting
sup.ai.prompt(prompt: string) → string
Simple prompting takes text input and returns a text response.
```javascript
const response = sup.ai.prompt("Write a haiku about frogs");
```

Multi-modal prompting
sup.ai.prompt(...strings or images...) → string
Multi-modal prompting can take multiple inputs, including images, in any order or combination.
```javascript
// Multiple inputs
const response = sup.ai.prompt(
  "Write a haiku about this image",
  sup.input.image
);
```
```javascript
// With options
const response = sup.ai.prompt("List three colors", { temperature: 0.9 });
```

Advanced prompting
sup.ai.prompt(...strings, images, or options...) → string | object
The sup.ai.prompt() function also takes an options object that can adjust temperature or provide a response schema. As mentioned above, you can mix multiple inputs, media types, and options or schemas in any order or combination you like.
When providing an options object, the following properties are available:
```typescript
{
  temperature?: number;                  // Randomness of output (0-1)
  schema?: StructuredResponseSchema;     // Schema for structured output
  reasoning?: boolean | {                // Enable reasoning tokens (works with compatible models)
    effort?: 'high' | 'medium' | 'low';  // Reasoning effort level
  };
  tools?: Record<string, {               // Function calling tools
    description: string;
    parameters?: Record<string, {
      type: 'string' | 'number' | 'boolean' | 'object' | 'array';
      description?: string;
      required?: boolean;
    }>;
    callback: (args: any) => any;
  }>;
}
```

When using the schema property, you can define the structure of the AI’s response using a simplified JSON Schema syntax. Instead of wrapping your schema in { type: "object", properties: {...} }, you define properties directly at the root level:
```typescript
type StructuredResponseSchema = {
  [key: string]: {
    type: "string" | "number" | "boolean" | "array" | "object";
    description?: string;

    // String-specific
    enum?: string[];
    minLength?: number;
    maxLength?: number;
    pattern?: string;
    format?: string;

    // Number-specific
    minimum?: number;
    maximum?: number;
    exclusiveMinimum?: number;
    exclusiveMaximum?: number;
    multipleOf?: number;

    // Array-specific
    items?: StructuredResponseSchemaObject[];
    minItems?: number;
    maxItems?: number;
    uniqueItems?: boolean;

    // Object-specific
    properties?: StructuredResponseSchemaObject;
  };
};
```

The AI will format its response to match the provided schema, making it easy to work with the response programmatically. Structured responses can also be useful for getting multiple pieces of information from a single prompt.
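The simplified syntax is shorthand for standard JSON Schema: conceptually, the runtime wraps your root-level properties in an object schema before sending them to the model. A minimal sketch of that expansion (the `expandSchema` helper is illustrative, not part of sup.ai, and treating every root property as required is an assumption):

```javascript
// Hypothetical illustration: wrap a simplified StructuredResponseSchema
// into the equivalent standard JSON Schema object.
function expandSchema(simplified) {
  return {
    type: "object",
    properties: simplified,
    // Assumption: every root-level property is expected in the response.
    required: Object.keys(simplified),
  };
}

const expanded = expandSchema({
  temperature: { type: "number", description: "Temperature in Fahrenheit" },
  conditions: { type: "string", enum: ["sunny", "cloudy", "rainy", "snowy"] },
});

console.log(expanded.type);                    // -> "object"
console.log(Object.keys(expanded.properties)); // -> ["temperature", "conditions"]
```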
```javascript
const response = sup.ai.prompt("What's the weather like?", {
  schema: {
    temperature: {
      type: "number",
      minimum: -100,
      maximum: 150,
      description: "Temperature in Fahrenheit",
    },
    conditions: {
      type: "string",
      enum: ["sunny", "cloudy", "rainy", "snowy"],
    },
    forecast: {
      type: "string",
      maxLength: 100,
      description: "Brief weather forecast",
    },
  },
});

console.log(response);
// {
//   temperature: 22,
//   conditions: "sunny",
//   forecast: "Clear skies with light breeze"
// }
```

Function calling with tools
The tools property enables AI function calling, allowing the AI to execute custom functions during the conversation. This is useful for giving the AI access to external data, APIs, or custom logic.
```javascript
const result = sup.ai.prompt("What's the weather in Paris and calculate a 20% tip on $50?", {
  tools: {
    get_weather: {
      description: "Get current weather for a city",
      parameters: {
        city: { type: "string", description: "City name" },
        units: {
          type: "string",
          description: "Temperature units (celsius or fahrenheit)",
          required: false
        }
      },
      callback: (args) => {
        // This function gets called when AI decides to use this tool
        const temp = args.city === "Paris" ? "22°C" : "20°C";
        return `Weather in ${args.city}: ${temp}, sunny`;
      }
    },
    calculate_tip: {
      description: "Calculate tip amount on a bill",
      parameters: {
        amount: { type: "number", description: "Bill amount in dollars" },
        percentage: { type: "number", description: "Tip percentage (e.g. 20 for 20%)" }
      },
      callback: (args) => {
        const tip = args.amount * (args.percentage / 100);
        return `${args.percentage}% tip on $${args.amount} = $${tip.toFixed(2)}`;
      }
    }
  }
});

console.log(result);
// "The weather in Paris is 22°C and sunny. A 20% tip on $50 would be $10.00."
```

Tool Definition:
- description: Clear explanation of what the tool does (required)
- parameters: Object defining expected parameters (optional)
- callback: Function to execute when the AI calls this tool (required)
Parameter Types:
- string: Text values
- number: Numeric values
- boolean: True/false values
- object: Complex objects
- array: Lists of values
Each parameter can specify:
- description: What the parameter represents
- required: Whether it’s mandatory (defaults to true)
The AI automatically decides when to call your tools and handles the conversation flow.
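Under the hood, tool use is a loop: the model emits tool calls, the runtime invokes the matching registered callback, and the results are fed back until the model produces a final answer. A rough sketch of the dispatch step (the call shapes here are illustrative assumptions, not sup.ai internals):

```javascript
// Illustrative only: how a runtime might dispatch the model's tool calls
// to the callbacks registered in the `tools` option.
function dispatchToolCalls(toolCalls, tools) {
  return toolCalls.map(({ name, args }) => {
    const tool = tools[name];
    if (!tool) throw new Error(`Unknown tool: ${name}`);
    // Run the user-provided callback with the model's arguments.
    return { name, result: tool.callback(args) };
  });
}

const tools = {
  calculate_tip: {
    description: "Calculate tip amount on a bill",
    callback: (args) => args.amount * (args.percentage / 100),
  },
};

const results = dispatchToolCalls(
  [{ name: "calculate_tip", args: { amount: 50, percentage: 20 } }],
  tools
);
console.log(results); // -> [{ name: "calculate_tip", result: 10 }]
```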
sup.ai.promptFull()
(...strings, images, or options...) → SupAICompletion
```javascript
const completion = sup.ai.promptFull("Explain quantum physics", {
  temperature: 0.8,
  reasoning: true
});

console.log(completion.content);       // The main response
console.log(completion.reasoning);     // AI's step-by-step reasoning (if enabled)
console.log(completion.finish_reason); // "stop", "length", "tool_calls", etc.
console.log(completion.tool_calls);    // Tool calls made during response (if any)
```

sup.ai.promptFull() works exactly like sup.ai.prompt() but returns a detailed completion object instead of just the content. This gives you access to additional metadata about the AI’s response.
Returns: A SupAICompletion object with these properties:
- content (string | object): The main AI response (same as sup.ai.prompt())
- reasoning (string | undefined): Step-by-step reasoning if reasoning mode is enabled
- finish_reason (string): Why the response ended ("stop", "length", "tool_calls", etc.)
- tool_calls (array | undefined): Details of any tool calls made during the response
Reasoning Mode:
```javascript
const completion = sup.ai.promptFull("Solve this math problem: 127 × 43", {
  reasoning: true // or { effort: "high" }
});

console.log(completion.reasoning);
// "Let me break this down step by step:
// 127 × 43 = 127 × (40 + 3) = (127 × 40) + (127 × 3)..."

console.log(completion.content);
// "127 × 43 = 5,461"
```

Use reasoning: true or reasoning: { effort: "high" | "medium" | "low" } to get the AI’s thought process.
sup.ai.promptWithContext()
(...strings, images, or options...) → string | object
```javascript
// this code
function main() {
  return sup.ai.promptWithContext("Write a haiku about the input");
}

// is roughly similar to this code
function main() {
  return sup.ai.prompt(`Write a haiku about ${sup.input.text}`);
}
```

This is similar to sup.ai.prompt(), but automatically includes relevant context about the current message, chat, and user. This helps the AI understand the conversation context without needing to manually include things like sup.input.text in your prompt.
The context includes:
- Current date and time
- User information (name, username, bio)
- Chat information (name)
- Message information (input, attachments)
- Reply information (when replying to a message)
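Conceptually, you can think of this as the runtime prepending a context preamble to your prompt. A hedged sketch of what that assembly might look like (`buildContext` and its field names are illustrative assumptions, not the actual sup.ai format):

```javascript
// Illustrative sketch: assemble a context preamble like the one
// promptWithContext conceptually prepends. Field names are assumptions.
function buildContext(ctx) {
  const lines = [
    `Date: ${ctx.date}`,
    `User: ${ctx.user.name} (@${ctx.user.username})`,
    `Chat: ${ctx.chat.name}`,
    `Message: ${ctx.message.text}`,
  ];
  // Reply information is only included when replying to a message.
  if (ctx.reply) lines.push(`Replying to: ${ctx.reply.text}`);
  return lines.join("\n");
}

const preamble = buildContext({
  date: "2024-06-01T12:00:00Z",
  user: { name: "Ada", username: "ada" },
  chat: { name: "general" },
  message: { text: "write me a haiku" },
});
console.log(preamble.split("\n").length); // -> 4 (no reply line)
```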
sup.ai.tts()
(...strings, audio, or options...) → SupAudio
```javascript
// Basic text-to-speech
const speech = sup.ai.tts("sup world");
```

```javascript
// With voice cloning using sample audio
const clonedSpeech = sup.ai.tts("sup world", sampleAudio);
```

```javascript
// With generation options
const speech = sup.ai.tts("sup world", {
  temperature: 0.8,  // Voice variation (0-1)
  exaggeration: 1.2, // Emphasis level
  cfg: 3.0           // Classifier-free guidance
});
```

```javascript
// Multiple text parts
const speech = sup.ai.tts("sup", "world", "sup sup");
```

Generates speech audio from text using AI text-to-speech. Can optionally clone a voice by providing sample audio.
Parameters:
- Text strings to convert to speech
- Optional SupAudio for voice cloning
- Optional configuration object with:
  - temperature: Controls voice variation (0-1)
  - exaggeration: Controls emphasis and expression level
  - cfg: Classifier-free guidance scale
sup.ai.embedding.embed()
(input: string | SupImage | SupAudio | SupVideo, options?: SupAIEmbeddingOptions) → SupEmbedding
```javascript
const embedding = sup.ai.embedding.embed("sup world");
// Returns SupEmbedding { value: [0.123, -0.456, ...], model: "gemini-embedding-2-preview" }

const embedding2 = sup.ai.embedding.embed("sup world", { model: "text-embedding-3-small" });
// Uses text-embedding-3-small

const imageEmbedding = sup.ai.embedding.embed(myImage);
// Embed an image (uses default gemini-embedding-2-preview)

embedding.value; // number[] — the raw vector
embedding.model; // the model used to create this embedding
```

Generates a vector embedding for the provided input. Returns a SupEmbedding object containing the vector and the model used.
Options:
- model — The embedding model to use. Defaults to gemini-embedding-2-preview. Other models like text-embedding-3-small and text-embedding-3-large are also supported.
- dimensions — Number of dimensions in the output vector. If not specified, uses the model’s default.
sup.ai.embedding.distance()
(a: SupEmbedding, b: SupEmbedding | string | SupImage | SupAudio | SupVideo) → number
```javascript
const a = sup.ai.embedding.embed("hello");
const b = sup.ai.embedding.embed("hi there");

const dist = sup.ai.embedding.distance(a, b);
// Returns 0 (identical) to 1 (opposite)

// Auto-embeds strings using the same model as the first argument
const dist2 = sup.ai.embedding.distance(a, "goodbye");
```

Computes the cosine distance between two embeddings. If the second argument is not a SupEmbedding, it will be automatically embedded using the same model as the first argument.
Both embeddings must use the same model — an error is thrown if models don’t match.
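Cosine distance compares direction rather than magnitude. A small sketch of the computation on plain arrays: raw cosine distance (1 minus cosine similarity) ranges 0-2, so the 0-1 range reported above suggests the value is halved; that scaling is an assumption in this sketch, and the exact normalization sup.ai uses is not specified here.

```javascript
// Illustrative: cosine distance between two plain vectors, scaled to 0-1.
// The /2 normalization is an assumption to match the documented range.
function cosineDistance(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  const cos = dot / (Math.sqrt(normA) * Math.sqrt(normB));
  return (1 - cos) / 2;
}

console.log(cosineDistance([1, 0], [1, 0]));  // -> 0   (identical direction)
console.log(cosineDistance([1, 0], [0, 1]));  // -> 0.5 (orthogonal)
console.log(cosineDistance([1, 0], [-1, 0])); // -> 1   (opposite)
```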
sup.ai.embedding.nearest()
(target: SupEmbedding, candidates: (SupEmbedding | string | SupImage | SupAudio | SupVideo)[]) → { item: SupEmbedding, distance: number, index: number }
```javascript
const query = sup.ai.embedding.embed("programming languages");
const result = sup.ai.embedding.nearest(query, ["Python", "JavaScript", "cooking recipes", "TypeScript"]);

result.item;     // SupEmbedding of the closest match
result.distance; // cosine distance to the closest match
result.index;    // index of the closest match in the candidates array
```

Finds the nearest candidate to a target embedding. Candidates that are not already SupEmbedding objects will be auto-embedded using the target’s model.
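The selection itself is just an argmin over per-candidate distances. A plain-JS sketch of that logic, using a caller-supplied distance function and toy 1-D "embeddings" instead of real vectors (the `nearestBy` helper is illustrative, not part of sup.ai):

```javascript
// Illustrative: find the candidate closest to a target under some
// distance function, returning { item, distance, index } like nearest().
function nearestBy(target, candidates, distanceFn) {
  let best = { item: null, distance: Infinity, index: -1 };
  candidates.forEach((item, index) => {
    const distance = distanceFn(target, item);
    if (distance < best.distance) best = { item, distance, index };
  });
  return best;
}

// Toy 1-D "embeddings": distance is just absolute difference.
const result = nearestBy(5, [1, 4, 9], (a, b) => Math.abs(a - b));
console.log(result); // -> { item: 4, distance: 1, index: 1 }
```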
Intelligent Expressions
Intelligent expressions are a runtime addition that lets you use “micro prompts” for conditionals, data reasoning, and more during the execution of your program.
Type coercion
Use intelligent expressions to coerce inferred values into specific primitive types.
Standard syntax
```javascript
sup.ai.int("The number of Rs in strawberry");
// -> 3

sup.ai.float("A number between 0 and 1");
// -> 0.7

sup.ai.bool("Wednesday is after Tuesday");
// -> true

sup.ai.array("yellow red green", "the colors");
// -> ['yellow', 'red', 'green']
```

Contextual coercion
Use intelligent expressions on existing variables to infer values based on their context.
Standard syntax
Standard syntax for contextual coercion.
```javascript
const foo = "strawberry";
foo.ai.int("The number of Rs");
// -> 3

const bar = "Wednesday";
bar.ai.bool("Comes after Tuesday");
// -> true

const baz = "yellow red green";
baz.ai.array("the colors");
// -> ['yellow', 'red', 'green']
```

```javascript
const foo = "encyclopedia";
foo.ai.text("hyphen between each syllable");
// -> `en-cyc-lo-pe-di-a`
```

Array operations
Perform intelligent operations like filtering and mapping on arrays.
anArray.ai.filter()
Standard syntax for filtering arrays:
```javascript
const words = ["yellow", "red", "sunday", "monday", "green"];
words.ai.filter("just the colors");
// -> ['yellow', 'red', 'green']
```

anArray.ai.map()
Standard syntax for mapping array elements.
```javascript
const words = ["foo", "bar", "baz"];
words.ai.map("capitalize each word");
// -> ['Foo', 'Bar', 'Baz']
```

Reducing arrays
This can often be accomplished using contextual coercion, for example:
```javascript
words.ai.bool('all words are a color')
// -> true
```