BoxLang: A New JVM Dynamic Language. Learn More...

| BoxLang |
|:------------------------------------------------------:|
| Dynamic : Modular : Productive |

Copyright Since 2023 by Ortus Solutions, Corp
ai.boxlang.io | www.boxlang.io
ai.ortussolutions.com | www.ortussolutions.com
Welcome to the BoxLang AI Module, the official AI library for BoxLang. It provides a unified, fluent API to orchestrate multi-model workflows, autonomous agents, RAG pipelines, and AI-powered applications. One API, unlimited AI power!
BoxLang AI eliminates vendor lock-in and simplifies AI integration by providing a single, consistent interface across 16+ AI providers. Whether you're using OpenAI, Claude, Gemini, Grok, DeepSeek, MiniMax, Ollama, or Perplexity, your code stays the same. Switch providers, combine models, and orchestrate complex workflows with simple configuration changes.
New in this release: runAsync() on every runnable, and aiParallel() for concurrent parallel pipelines.
BoxLang is open source and licensed under the Apache 2 license. You can also get a professionally supported version with enterprise features and support via our BoxLang +/++ Plans (www.boxlang.io/plans).
You can use BoxLang AI in operating system applications, AWS Lambda functions, and web applications. For OS applications, use the module installer to install the module globally. For AWS Lambda and web applications, use the module installer to install it locally in your project, or use CommandBox as the package manager, which is our preferred method for web applications.
New to AI concepts? Check out our Key Concepts Guide for terminology and fundamentals, or browse our FAQ for quick answers to common questions. We also offer a Quick Start Guide and our intense AI BootCamp.
You can easily get started with BoxLang AI by using the module installer for building operating system applications:
install-bx-module bx-ai
This will install the latest version of the BoxLang AI module in your
BoxLang environment. Once installed, configure your default AI
provider and API key in boxlang.json:
{
"modules": {
"bxai": {
"settings": {
"provider": "openai",
"apiKey": "${OPENAI_API_KEY}"
}
}
}
}
Tip: Use environment variable placeholders like ${OPENAI_API_KEY} so you never commit secrets to source control. Each provider also auto-detects its own env var (e.g. OPENAI_API_KEY, CLAUDE_API_KEY, GEMINI_API_KEY).
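For example, assuming a POSIX shell, you can export the key before running your script (the key value is a placeholder):

export OPENAI_API_KEY="sk-your-key-here"
boxlang chat.bxs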
Below is the full reference of every setting you can place under
settings in boxlang.json:
{
"modules": {
"bxai": {
"settings": {
"provider": "openai",
"apiKey": "${OPENAI_API_KEY}",
"defaultParams": {
"model": "gpt-4o",
"temperature": 0.7,
"max_tokens": 2000
},
"memory": {
"provider": "window",
"config": {
"maxMessages": 20
}
},
"providers": {
"openai": {
"params": { "model": "gpt-4o", "temperature": 0.7 },
"options": { "timeout": 60 }
},
"claude": {
"params": { "model": "claude-3-5-sonnet-20241022" }
},
"ollama": {
"params": { "model": "qwen2.5:0.5b-instruct" }
}
},
"timeout": 45,
"logRequest": false,
"logRequestToConsole": false,
"logResponse": false,
"logResponseToConsole": false,
"returnFormat": "single",
"skillsDirectory": "/.ai/skills",
"autoLoadSkills": true,
"globalSkills": []
}
}
}
}
| Setting | Type | Default | Description |
|---|---|---|---|
| provider | string | "openai" | Default AI provider to use for all requests |
| apiKey | string | "" | Default API key; each provider also reads its own env var (e.g. OPENAI_API_KEY) |
| defaultParams | struct | {} | Default request parameters sent to every provider (e.g. model, temperature, max_tokens) |
| memory.provider | string | "window" | Default memory type: window, cache, file, session, summary, jdbc, hybrid, or any vector provider |
| memory.config | struct | {} | Provider-specific memory configuration (e.g. maxMessages, cacheName) |
| providers | struct | {} | Per-provider overrides; keys are provider names, values have params and options structs |
| timeout | numeric | 45 | Default HTTP request timeout in seconds |
| logRequest | boolean | false | Log outgoing AI requests to ai.log |
| logRequestToConsole | boolean | false | Print outgoing AI requests to the console (useful for debugging) |
| logResponse | boolean | false | Log AI responses to ai.log |
| logResponseToConsole | boolean | false | Print AI responses to the console (useful for debugging) |
| returnFormat | string | "single" | Default response format: single, all, raw, json, xml, or structuredOutput |
| skillsDirectory | string | "/.ai/skills" | Directory scanned for SKILL.md files at startup. Set to "" to disable auto-discovery |
| autoLoadSkills | boolean | true | When true, skills found in skillsDirectory are auto-loaded and injected into every aiAgent() as global skills |
| globalSkills | array | [] | Internal; populated at startup with auto-discovered skills; access via aiGlobalSkills() |
After that you can leverage the global functions (BIFs) in your BoxLang code. Here is a simple example:
// chat.bxs
answer = aiChat( "How amazing is BoxLang?" )
println( answer )
You can then run your BoxLang script like this:
boxlang chat.bxs
To build AWS Lambda functions with BoxLang AI for serverless AI agents and applications, use the BoxLang AWS Runtime and our AWS Lambda Starter Template. You will also use install-bx-module to install the module locally, via the --local flag, into the resources folder of your project:
cd src/resources
install-bx-module bx-ai --local
Or you can use CommandBox and store your dependencies in the box.json descriptor:
box install bx-ai resources/modules/
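As a sketch, a box.json that records the dependency and its install path might look like this (the version range is illustrative):

{
    "dependencies": {
        "bx-ai": "^2.0.0"
    },
    "installPaths": {
        "bx-ai": "resources/modules/bx-ai/"
    }
}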
To use BoxLang AI in your web applications, use CommandBox as the package manager to install the module locally in your project. Run the following command in your project root:
box install bx-ai
Just make sure you already have a BoxLang server set up. You can check our Getting Started with BoxLang Web Applications guide for more details on how to get started with BoxLang web applications.
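For example, a typical flow with CommandBox might look like this (assuming CommandBox's BoxLang engine support; the engine slug is illustrative):

box install bx-ai
box server start cfengine=boxlang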
The following are the AI providers supported by this module. Please note that in order to interact with these providers you will need an account with them and an API key.
Here is a matrix of the providers and their feature support. Please keep checking back, as we will be adding more providers and features to this module.
| Provider | Chat & Streaming | Real-time Tools | Embeddings | TTS (Speech) | STT (Transcription) |
|---|---|---|---|---|---|
| AWS Bedrock | ✅ | ✅ | ✅ | ❌ | ❌ |
| Claude | ✅ | ✅ | ❌ | ❌ | ❌ |
| Cohere | ✅ | ✅ | ✅ | ❌ | ❌ |
| DeepSeek | ✅ | ✅ | ❌ | ❌ | ❌ |
| Docker Model Runner | ✅ | ✅ | ✅ | ❌ | ❌ |
| ElevenLabs | ❌ | ❌ | ❌ | ✅ (Premium) | ✅ (Scribe v1) |
| Gemini | ✅ | Coming Soon | ✅ | ❌ | ❌ |
| Grok | ✅ | ✅ | ❌ | ❌ | ❌ |
| Groq | ✅ | ✅ | ❌ | ❌ | ✅ (Whisper) |
| HuggingFace | ✅ | ✅ | ❌ | ❌ | ❌ |
| Mistral | ✅ | ✅ | ✅ | ✅ (Voxtral) | ✅ (Voxtral) |
| MiniMax | ✅ | ✅ | ❌ | ❌ | ❌ |
| Ollama | ✅ | ✅ | ✅ | ❌ | ❌ |
| OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ (Whisper) |
| OpenAI-Compatible | ✅ | ✅ | ✅ | ❌ | ❌ |
| OpenRouter | ✅ | ✅ | ❌ | ❌ | ❌ |
| Perplexity | ✅ | ❌ | ❌ | ❌ | ❌ |
| Voyage | ❌ | ❌ | ✅ (Specialized) | ❌ | ❌ |
Every provider exposes a runtime capability API, so you can introspect what it supports without consulting documentation, and without risking cryptic errors when you call an unsupported operation.
// Get all capabilities a provider supports
var provider = aiService( "openai" );
var caps = provider.getCapabilities();
// โ [ "chat", "stream", "embeddings" ]
// Check a specific capability before using it
if ( provider.hasCapability( "embeddings" ) ) {
var embedding = aiEmbed( "Hello world" );
}
// Voyage is embeddings-only; getCapabilities() reflects this
var voyage = aiService( "voyage" );
voyage.getCapabilities(); // => [ "embeddings" ]
voyage.hasCapability( "chat" ); // => false
The built-in BIFs (aiChat, aiChatStream,
aiEmbed) automatically use this system and throw a clear
UnsupportedCapability exception when the selected
provider does not implement the required capability:
// This will throw UnsupportedCapability: Voyage has no chat capability
aiChat( "Hello?", provider: "voyage" );
// This will throw UnsupportedCapability: Claude has no embeddings capability
aiEmbed( "some text", provider: "claude" );
Capabilities map to the following capability
interfaces (in models/providers/capabilities/):
| Capability String | Interface | Methods Provided |
|---|---|---|
| chat, stream | IAiChatService | chat(), chatStream() |
| embeddings | IAiEmbeddingsService | embeddings() |
| speech | IAiSpeechService | speak() |
| transcription | IAiTranscriptionService | transcribe(), translate() |
BoxLang AI not only makes it extremely easy to interact with multiple AI providers, it also gives you the flexibility to choose how responses are returned to you. You can specify the return format using the returnFormat option in your AI calls. Here are the available formats:
| Format | Description |
|---|---|
| single | Returns a single message as a string (the content of the first choice). This is the default format for BIFs. |
| all | Returns an array of all choice messages. Each message is a struct with role and content keys. |
| json | Returns the parsed JSON object from the content string. Automatically parses JSON responses. |
| xml | Returns the parsed XML document from the content string. Automatically parses XML responses. |
| raw | Returns the full raw response from the AI provider. Useful for debugging or when you need the full response structure with metadata. This is the default for pipelines. |
| structuredOutput | Used internally when .structuredOutput() is called. Returns a populated class/struct based on the schema. |
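As a quick sketch, you can override the format per call; this assumes the options struct accepts a returnFormat key, mirroring the module setting of the same name:

// Ask for parsed JSON instead of a plain string (returnFormat key assumed per the module setting)
data = aiChat(
    "Return a JSON array of 3 JVM languages",
    { model: "gpt-4o-mini" },
    { returnFormat: "json" }
)
println( data ) // Already parsed into a native array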
In the following sections, we provide a quick overview of the main components of BoxLang AI, including Chats, Pipelines, Middleware, Agents, Structured Output, Memory Systems, Document Loaders & RAG, and MCP Client/Server. Each section includes quick examples and links to more detailed documentation. For further details, please refer to the official documentation; this is just a high-level overview to get you started quickly.
Interact with AI models through simple and powerful chat interfaces supporting both one-shot responses and streaming conversations. BoxLang AI provides fluent APIs for building everything from basic Q&A to complex multi-turn dialogues with system prompts, message history, and structured outputs.
Key BIFs: aiChat(), aiChatStream()
Simple One-Shot Chat:
// Quick question-answer
response = aiChat( "What is BoxLang?" )
println( response )
// With custom model and options
response = aiChat(
messages: "Explain quantum computing",
params: { model: "gpt-4", temperature: 0.7, max_tokens: 500 }
)
Multi-Turn Conversation with Memory:
// Create agent with memory
agent = aiAgent(
name: "Assistant",
memory: aiMemory( "window", config: { maxMessages: 10 } )
)
// First turn
response = agent.run( "My name is Luis" )
// Second turn - Agent remembers context
response = agent.run( "What's my name?" )
println( response ) // "Your name is Luis"
Streaming Chat:
// Stream tokens as they arrive
aiChatStream(
messages: "Write a short story about a robot",
callback: chunk => {
writeOutput( chunk.choices?.first()?.delta?.content ?: "" )
bx:flush;
},
params: { model: "claude-3-5-sonnet-20241022" }
)
Fluent Message Builder:
// Build complex message chains
messages = aiMessage()
.system( "You are a helpful coding assistant" )
.user( "How do I create a REST API in BoxLang?" )
.image( "diagram.png" )
response = aiChat(
messages: messages,
params: { model: "gpt-4o", temperature: 0.7 }
)
Build composable AI workflows using BoxLang AI's powerful runnable pipeline system. Chain models, transformers, tools, and custom logic into reusable, testable components that flow data through processing stages. Perfect for complex AI workflows, data transformations, and multi-step reasoning.
Key method: .to()
Simple Transformation Pipeline:
// Create a pipeline with model and transformers
pipeline = aiModel( provider: "openai" )
.transform( data => data.toUpperCase() )
.transform( data => data.trim() )
// Run input through the pipeline
result = pipeline.run( "hello world" )
println( result ) // "HELLO WORLD"
Multi-Stage AI Pipeline:
// Define transformation stages as closures
summarizer = ( text ) => {
return aiChatAsync(
aiMessage().system( "Summarize in one sentence" ).user( text ),
{ model: "gpt-4o-mini" }
)
}
translator = ( summary ) => {
return aiChatAsync(
aiMessage().system( "Translate to Spanish" ).user( summary ),
{ model: "gpt-4o" }
)
}
formatter = ( translatedText ) => {
return {
summary: translatedText,
timestamp: now()
}
}
// Compose pipeline using async futures
result = summarizer( "Long article text here..." )
.then( summary => translator( summary ) )
.then( translated => formatter( translated ) )
.get()
println( result.summary ) // Spanish summary
Streaming Pipeline:
// Stream through entire pipeline
pipeline = aiModel( provider: "claude", params: { model: "claude-3-5-sonnet-20241022" } )
.transform( chunk => chunk.toUpperCase() )
.stream(
onChunk: ( chunk ) => writeOutput( chunk ),
input: "Tell me a story"
)
Custom Runnable Class:
// Implement IAiRunnable for custom logic
class implements="IAiRunnable" {
function run( input, params = {} ) {
// Custom processing
return processedData;
}
function stream( onChunk, input, params = {} ) {
// Streaming support
onChunk( processedChunk );
}
function to( nextRunnable ) {
// Chain to next stage
return createPipeline( this, nextRunnable );
}
}
// Use in pipeline
customStage = new CustomRunnable()
pipeline = aiModel( provider: "openai", params: { model: "gpt-4o" } )
.to( customStage )
Parallel Pipelines:
Run multiple runnables concurrently with the same input and receive a named struct of results. This mirrors LangChain's RunnableParallel: parallelism is a developer/framework concern, not something the LLM decides.
// Fan out to multiple agents/models in parallel
results = aiParallel({
summary: summaryAgent,
analysis: analysisAgent,
keywords: keywordModel
}).run( "Some long document..." )
// results.summary, results.analysis, results.keywords all ran concurrently
// Compose in a pipeline: parallel branch, then merge
pipeline = aiMessage( "Analyze: ${text}" )
.to( aiParallel({ researcher: researchAgent, writer: writerAgent }) )
.transform( r => "Research: #r.researcher#\nDraft: #r.writer#" )
// Or dispatch the same agent for multiple independent inputs asynchronously
futures = [
researchAgent.runAsync( "Topic A" ),
researchAgent.runAsync( "Topic B" ),
researchAgent.runAsync( "Topic C" )
]
results = futures.map( f => f.get() )
See examples/pipelines/ for complete examples.
Add cross-cutting behavior around model and agent execution without changing your business logic. Use middleware for observability, reliability, safety, approvals, and deterministic replay in testing.
| Middleware | Purpose |
|---|---|
| LoggingMiddleware | Logs agent/model lifecycle activity for observability and troubleshooting. |
| RetryMiddleware | Retries transient LLM/tool failures with configurable backoff. |
| MaxToolCallsMiddleware | Enforces a per-run cap on total tool calls to prevent runaway execution. |
| GuardrailMiddleware | Blocks disallowed tools and rejects risky tool arguments by pattern rules. |
| HumanInTheLoopMiddleware | Requires human approval for selected tool calls (CLI or suspend/resume flow). |
| FlightRecorderMiddleware | Records and replays LLM/tool interactions for deterministic testing and CI. |
Middleware can be attached to agents,
models, or both. They are executed in the order
registered; after* hooks fire in reverse order (cleanup order).
Via aiAgent() / aiModel() BIF parameter:
import bxModules.bxai.models.middleware.core.LoggingMiddleware;
import bxModules.bxai.models.middleware.core.RetryMiddleware;
import bxModules.bxai.models.middleware.core.GuardrailMiddleware;
agent = aiAgent(
name: "Safe Assistant",
middleware: [
new LoggingMiddleware( logToConsole: true ),
new RetryMiddleware( maxRetries: 2, initialDelay: 250 ),
new GuardrailMiddleware( blockedTools: [ "deleteRecord" ] )
]
)
Via withMiddleware() on a runnable (fluent API):
agent
.withMiddleware( new LoggingMiddleware() )
.withMiddleware( new RetryMiddleware( maxRetries: 3 ) )
// Or pass an array; it is flattened automatically
agent.withMiddleware( [ mw1, mw2, mw3 ] )
Management methods (available on AiAgent
and AiModel):
| Method | Returns | Description |
|---|---|---|
| withMiddleware( any middleware ) | this | Add one or more middleware (instance, struct, or array) |
| clearMiddleware() | this | Remove all registered middleware |
| listMiddleware() | array | Return an array of { name, description } for all middleware |
When an agent runs, its middleware is prepended to any middleware already on the model, so agent-level hooks always fire first.
Middleware exposes two hook styles:
Sequential hooks: called in order; each returns an AiMiddlewareResult. The chain stops if any hook returns a terminal result.
| Hook | Fires | Direction |
|---|---|---|
| beforeAgentRun( context ) | Before agent starts | Forward |
| afterAgentRun( context ) | After agent completes | Reverse |
| beforeLLMCall( context ) | Before each LLM provider call | Forward |
| afterLLMCall( context ) | After each LLM provider call | Reverse |
| beforeToolCall( context ) | Before each tool is invoked | Forward |
| afterToolCall( context ) | After each tool returns | Reverse |
| onError( context ) | When any hook throws an exception | n/a |
Wrap hooks: called as nested closures; call handler() to proceed and return a value.
| Hook | Purpose |
|---|---|
| wrapLLMCall( context, handler ) | Surround each LLM provider call (retry, caching, tracing) |
| wrapToolCall( context, handler ) | Surround each tool invocation (retry, mocking, sandboxing) |
For wrap hooks, the first registered middleware is the outermost wrapper:
mw1.wrapLLMCall( ctx, () =>
mw2.wrapLLMCall( ctx, () =>
actualProviderCall()
)
)
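As a sketch, a minimal timing middleware could use a wrap hook, assuming a plain class exposing just the hooks it needs is accepted:

// Hypothetical timing middleware: wraps every LLM call and prints its duration
class {
    function wrapLLMCall( context, handler ) {
        var start  = getTickCount()
        var result = handler() // proceed to the next wrapper or the actual provider call
        println( "LLM call took #getTickCount() - start# ms" )
        return result
    }
}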
Every sequential hook must return an AiMiddlewareResult.
Use the static factory methods:
import bxModules.bxai.models.middleware.AiMiddlewareResult;
// Continue the chain normally
return AiMiddlewareResult.continue()
// Stop the chain immediately (terminal)
return AiMiddlewareResult.cancel( "Too many sensitive operations" )
// Human approved (HITL)
return AiMiddlewareResult.approve()
// Human rejected (terminal)
return AiMiddlewareResult.reject( "Operator rejected this action" )
// Human edited the tool arguments (passes modified args to tool)
return AiMiddlewareResult.edit( { correctedArgs: { query: "safe query" } } )
// Suspend for async human review (terminal; web mode HITL)
return AiMiddlewareResult.suspend( { toolName: "deleteRecord", args: toolArgs } )
Checking results:
| Predicate | Meaning |
|---|---|
| isContinue() | Chain proceeds normally |
| isCancelled() | Chain was stopped (terminal) |
| isApproved() | Human approved |
| isRejected() | Human rejected (terminal) |
| isEdit() | Arguments were modified |
| isSuspended() | Waiting for async human input (terminal) |
| isTerminal() | cancel, reject, or suspend; stops the chain |
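As a sketch, a custom sequential hook that vetoes a specific tool could look like the following; the context.toolName key is an assumption for illustration:

// Hypothetical guard middleware: cancels the chain before a disallowed tool runs
import bxModules.bxai.models.middleware.AiMiddlewareResult;

class {
    function beforeToolCall( context ) {
        // toolName key on the context is assumed for illustration
        if ( context.toolName == "deleteRecord" ) {
            return AiMiddlewareResult.cancel( "deleteRecord is not allowed" )
        }
        return AiMiddlewareResult.continue()
    }
}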
See examples/agents/ and examples/pipelines/ for complete examples.
The AI Tool Registry is a global singleton that stores named ITool instances. Register tools once (at module load, on application start, or anywhere in your code) and then reference them by string name wherever tools are accepted. This decouples tool definitions from call sites and makes it easy to share tools across agents, models, and pipelines.
"now@bxai" as a string instead of a live objecttoolName@moduleName to avoid collisionsITool instances right before each LLM request@AITool and call scan() to register them
all at oncenow@bxai (current
date/time), speak@bxai (text-to-speech),
transcribe@bxai (speech-to-text), and
translate@bxai (audio-to-English) are registered
automatically on module loadUsing the registry:
// Register a tool once (e.g., in Application.bx or a module's onLoad)
aiToolRegistry().register(
name : "searchProducts",
description : "Search the product catalog",
callback : ( required string query ) => productService.search( query )
)
// Later, reference by name; no object needed
result = aiChat(
"Find me wireless headphones under $50",
{ tools: [ "searchProducts" ] }
)
Module-namespaced tools:
// Namespaced registration avoids collisions across modules
aiToolRegistry().register(
name : "lookup",
description : "Look up a customer by ID",
callback : id => customerService.find( id ),
module : "my-app"
)
// Retrieve by full key or bare name (auto-resolved if unambiguous)
var tool = aiToolRegistry().get( "lookup@my-app" )
var tool = aiToolRegistry().get( "lookup" ) // works if only one "lookup" exists
Scanning a class for @AITool annotations:
// Annotate methods in a class
class WeatherTools {
@AITool( "Get current weather for a city" )
public string function getWeather( required string city ) {
return weatherAPI.fetch( city )
}
@AITool( "Get a 7-day forecast for a city" )
public string function getForecast( required string city ) {
return weatherAPI.forecast( city )
}
}
// Register all annotated methods in one call
aiToolRegistry().scan( new WeatherTools(), "my-module" )
// Registered as: getWeather@my-module, getForecast@my-module
Using the built-in now@bxai tool:
// Auto-registered on module load; just reference it by name
result = aiChat(
"What should I have for dinner tonight?",
{ tools: [ "now@bxai" ] }
)
// AI knows the current date/time without any extra wiring
Using the built-in audio tools (speak@bxai,
transcribe@bxai, translate@bxai):
// All three are auto-registered on module load; opt in by name
var agent = aiAgent(
name : "VoiceAssistant",
instructions : "You are a helpful voice assistant. Speak responses aloud.",
tools : [ "now@bxai", "speak@bxai", "transcribe@bxai", "translate@bxai" ]
)
// Agent can now convert text to speech, transcribe audio files, or translate audio to English
agent.run( "Say hello to the user and tell them today's date" )
// => The AI calls speak@bxai with the greeting text and returns the saved audio file path
// Standalone transcription: the agent calls transcribe@bxai automatically
agent.run( "Please transcribe the file at /recordings/meeting.mp3" )
// Translation: the agent calls translate@bxai for non-English audio
agent.run( "Translate the Spanish audio at /audio/mensaje.mp3 to English" )
Opt-in httpGet tool (NOT auto-registered):
// Register explicitly when your application needs web access
import bxModules.bxai.models.tools.core.CoreTools;
aiToolRegistry().scan( new CoreTools(), "bxai" ) // registers httpGet@bxai too
Opt-in FileSystemTools (NOT auto-registered; supply allowedPaths for safety):
import bxModules.bxai.models.tools.filesystem.FileSystemTools;
// Restrict the AI to specific directories (strongly recommended)
aiToolRegistry().scanClass(
new FileSystemTools( allowedPaths: [ "/workspace/data", "/tmp/ai-output" ] ),
"bxai"
)
// Give a coding agent full filesystem capabilities within a project directory
agent = aiAgent(
name : "CodingAssistant",
instructions : "You are a coding assistant. You can read, write, search, and organize files.",
tools : [
// Read / write
"readFile@bxai",
"readMultipleFiles@bxai",
"writeFile@bxai",
"appendFile@bxai",
"editFile@bxai",
// File management
"fileMetadata@bxai",
"pathExists@bxai",
"deleteFile@bxai",
"moveFile@bxai",
"copyFile@bxai",
// Search & utilities
"searchFiles@bxai",
"listAllowedDirectories@bxai",
// Directories
"listDirectory@bxai",
"directoryTree@bxai",
"createDirectory@bxai",
"deleteDirectory@bxai",
// Zip / archive
"zipFiles@bxai",
"unzipFile@bxai",
"checkZipFile@bxai"
]
)
agent.run( "Read src/main/bx/App.bx and add a header comment to it" )
agent.run( "List all .bx files under src/ recursively" )
agent.run( "Create a directory reports/ and write a summary.txt file there" )
agent.run( "Search src/ for all files containing the word 'deprecated'" )
agent.run( "Zip the entire src/ directory to /tmp/src-backup.zip" )
Security note: FileSystemTools validates every path against the allowedPaths list using canonical path resolution, preventing directory-traversal attacks. Leave allowedPaths empty only in fully-trusted environments.
For more complex tools, ones that need their own state, unit tests, or a reusable class structure, extend BaseTool directly instead of using a closure. You only need to implement two abstract methods:
| Method | Purpose |
|---|---|
| doInvoke( required struct args, AiChatRequest chatRequest ) | The tool logic. Return any value; serialization is handled automatically. |
| generateSchema() | Return the OpenAI function-calling schema struct. Called by getSchema() unless a manual schema override has been set. |
invoke() is final on BaseTool: it fires the beforeAIToolExecute / afterAIToolExecute events and serializes the result of your doInvoke(), so you never need to wire those up yourself.
// MySearchTool.bx
class extends="bxModules.bxai.models.tools.BaseTool" {
property name="searchClient";
function init( required any searchClient ) {
variables.name = "searchProducts"
variables.description = "Search the product catalog and return matching items"
variables.searchClient = arguments.searchClient
return this
}
/**
* Core tool logic; return any type and BaseTool serializes it automatically.
*/
public any function doInvoke( required struct args, AiChatRequest chatRequest ) {
return variables.searchClient.search(
query : args.query,
maxResults : args.maxResults ?: 5
)
}
/**
* OpenAI function-calling schema for this tool.
*/
public struct function generateSchema() {
return {
"type": "function",
"function": {
"name" : variables.name,
"description": variables.description,
"parameters" : {
"type" : "object",
"properties": {
"query" : { "type": "string", "description": "Search query text" },
"maxResults": { "type": "integer", "description": "Maximum number of results to return" }
},
"required": [ "query" ]
}
}
}
}
}
Register and use it like any other tool:
// Register in the global registry
aiToolRegistry().register( new MySearchTool( searchClient ), "my-app" )
// Reference by key name anywhere tools are accepted
result = aiChat( "Find wireless headphones", { tools: [ "searchProducts@my-app" ] } )
Fluent schema helpers (inherited from
BaseTool) let you skip writing
generateSchema() manually when ClosureTool's
auto-introspection isn't available:
tool = new MySearchTool( client )
.describeFunction( "Search the product catalog" ) // sets description
.describeQuery( "Search term to look up" ) // describeArg( "query", "..." )
.describeMaxResults( "Max items to return" ) // describeArg( "maxResults", "..." )
Or supply a fully hand-crafted schema with setSchema( schemaStruct ); when set, it takes precedence over generateSchema().
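For example, a hand-crafted override might reuse the schema shape from the class above (an illustrative sketch; the override bypasses generateSchema() entirely):

// Manual schema override takes precedence over generateSchema()
tool = new MySearchTool( client ).setSchema( {
    "type": "function",
    "function": {
        "name"       : "searchProducts",
        "description": "Search the product catalog",
        "parameters" : {
            "type"      : "object",
            "properties": { "query": { "type": "string", "description": "Search query text" } },
            "required"  : [ "query" ]
        }
    }
} )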
See examples/advanced/ for complete registry examples.
Build autonomous AI agents that can use tools, maintain memory, and orchestrate complex workflows. BoxLang AI agents combine LLMs with function calling, memory systems, and orchestration patterns to create intelligent assistants that can interact with external systems and solve complex tasks.
Simple Agent with Tools:
// Define tools the agent can use
weatherTool = aiTool(
name: "get_weather",
description: "Get current weather for a location",
callable: ( location ) => {
return { temp: 72, condition: "sunny", location: location };
}
)
// Create agent with memory
agent = aiAgent(
name: "Weather Assistant",
description: "Helpful weather assistant",
tools: [ weatherTool ],
memory: aiMemory( "window" )
)
// Agent decides when to call tools
response = agent.run( "What's the weather in Miami?" )
println( response ) // Agent calls get_weather tool and responds
Autonomous Agent with Multiple Tools:
// Agent with database and email tools
agent = aiAgent(
name: "Customer Support Agent",
tools: [
aiTool( name: "query_orders", description: "Query customer orders", callable: orderQueryFunction ),
aiTool( name: "send_email", description: "Send email to customer", callable: emailFunction ),
aiTool( name: "create_ticket", description: "Create support ticket", callable: ticketFunction )
],
memory: aiMemory( "session" ),
params: { max_iterations: 5 }
)
// Agent orchestrates multiple tool calls
agent.run( "Find order #12345, email the customer with status, and create a ticket if there's an issue" )
Multi-Agent Hierarchy (Sub-Agents):
// Create specialist sub-agents
researchAgent = aiAgent(
name: "researcher",
description: "Researches topics in depth",
instructions: "Provide thorough research summaries"
)
writerAgent = aiAgent(
name: "writer",
description: "Writes polished content",
instructions: "Turn research into engaging articles"
)
// Coordinator automatically registers sub-agents as callable tools
coordinator = aiAgent(
name: "coordinator",
description: "Orchestrates research and writing",
subAgents: [ researchAgent, writerAgent ]
)
// Coordinator decides when to delegate
coordinator.run( "Write an article about BoxLang AI" )
// Inspect the hierarchy
writeln( researchAgent.getAgentPath() ) // /coordinator/researcher
writeln( researchAgent.getAgentDepth() ) // 1
writeln( researchAgent.isRootAgent() ) // false
writeln( coordinator.getRootAgent().getAgentName() ) // coordinator
See examples/agents/ for complete working examples.
Give agents and models reusable, composable knowledge blocks that can be injected into the system message at runtime. Skills follow the Claude Agent Skills open standard: a description field tells the LLM when to apply the skill, and the body contains the full instructions.
- Author SKILL.md files in your project and commit them alongside your code
- Drop a SKILL.md into .ai/skills/my-skill/ and it's available automatically

Skills live in named subdirectories under .ai/skills/:
.ai/skills/
sql-optimizer/
SKILL.md
boxlang-expert/
SKILL.md
customer-tone/
SKILL.md
Each SKILL.md file uses optional YAML frontmatter and a
Markdown body:
---
description: Optimise SQL queries for maximum performance. Apply when writing or reviewing database queries.
---
## SQL Optimisation Rules
- Always use indexed columns in WHERE clauses
- Prefer JOINs over subqueries for large datasets
- Use EXPLAIN to verify query plans before deploying
- Avoid SELECT * in production queries
Tip: If you omit the frontmatter, the first paragraph of the body is used as the
description.
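For example, an illustrative frontmatter-less SKILL.md whose first paragraph doubles as the description:

Apply the company writing style when drafting customer-facing copy.

## Style Rules
- Short sentences and active voice
- No jargon without a definition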
Inline skill on a model:
// Create an inline skill (no files needed)
sqlSkill = aiSkill(
name : "sql-optimizer",
description: "Apply SQL optimisation rules when writing or reviewing queries",
content : "Always use indexed columns. Prefer JOINs over subqueries."
)
// Always-on: injected into every call
model = aiModel( "openai" ).withSkills( [ sqlSkill ] )
response = model.run( "Write a query to get all orders" )
Load skills from the filesystem:
// Load all SKILL.md files from .ai/skills/ (recursive by default)
skills = aiSkill( ".ai/skills" )
// Or load a single skill file
sqlSkill = aiSkill( ".ai/skills/sql-optimizer/SKILL.md" )
// Seed an agent with all discovered skills
agent = aiAgent(
name : "data-assistant",
availableSkills: skills // Lazy pool: the LLM loads skills on demand
)
Always-on vs lazy skills:
// Always-on: full content injected every call (small, universal skills)
coreSkill = aiSkill( ".ai/skills/writing-style/SKILL.md" )
agent.withSkills( [ coreSkill ] )
// Lazy pool: only a compact index is included; LLM calls loadSkill() as needed
bigLibrary = aiSkill( ".ai/skills" ) // Hundreds of skills
agent.withAvailableSkills( bigLibrary )
// activateSkill() promotes a lazy skill to always-on mid-session
agent.activateSkill( "sql-optimizer" )
Global skills auto-injected into every agent:
// In ModuleConfig.bx settings: all agents get these automatically
settings = {
globalSkills: aiSkill( expandPath( ".ai/skills" ) )
}
// Or register programmatically via the BIF
globalSkillPool = aiGlobalSkills() // returns the current global pool
Inspect skill state:
config = agent.getConfig()
writeln( config.activeSkillCount ) // always-on skills count
writeln( config.availableSkillCount ) // lazy skills count
// Render the full skill system-message block for debugging
writeln( agent.buildSkillsContent() )
All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
Audio Support: Text-to-Speech, Transcription, and Translation:
- aiSpeak( text, params, options ) BIF: Convert text to speech using any provider that supports TTS. Returns an AiSpeechResponse (with hasAudio(), saveToFile(), getBase64(), getMimeType(), getSize()) or saves directly to a file via options.outputFile.
- aiTranscribe( audio, params, options ) BIF: Transcribe audio (file path, URL, or binary) to text. Returns the transcript string by default, or a full AiTranscriptionResponse when options.returnFormat = "response".
- aiTranslate( audio, params, options ) BIF: Translate non-English audio to English text using supported providers.
- IAiSpeechService interface: Implemented by providers that support TTS (speak()).
- IAiTranscriptionService interface: Implemented by providers that support STT (transcribe() + translate()).
- ElevenLabsService: New provider supporting high-quality TTS via eleven_multilingual_v2 and STT via scribe_v1. Use aiService( "elevenlabs", apiKey ).
- New interception events: beforeAISpeech, afterAISpeech, beforeAITranscription, afterAITranscription, beforeAITranslation, afterAITranslation.
- New audio settings block in module config: defaultVoice, defaultOutputFormat, defaultSpeechModel, defaultTranscriptionModel.
- Audio Agent Tools (speak@bxai, transcribe@bxai, translate@bxai): New AudioTools class (models/tools/audio/AudioTools.bx) auto-registered in the global tool registry at module startup. speak@bxai converts text to speech and returns the saved file path (auto-generates a temp file when no outputFile is supplied). transcribe@bxai transcribes a local file or URL to plain text. translate@bxai translates any-language audio to English text. Opt in by name: aiAgent( tools: [ "speak@bxai", "transcribe@bxai", "translate@bxai" ] ).
- FileSystem Agent Tools: New FileSystemTools class (models/tools/filesystem/FileSystemTools.bx) with 19 @AITool-annotated methods covering the full filesystem lifecycle. NOT auto-registered; opt-in only via aiToolRegistry().scanClass(), so agents never get filesystem access unless explicitly granted. Supports a path-guard constructor (allowedPaths: [...]) that canonicalizes and validates every path argument before execution, blocking directory-traversal attacks. Tool keys: readFile@bxai, readMultipleFiles@bxai, writeFile@bxai, appendFile@bxai, editFile@bxai, fileMetadata@bxai, pathExists@bxai, deleteFile@bxai, moveFile@bxai, copyFile@bxai, searchFiles@bxai, listAllowedDirectories@bxai, listDirectory@bxai, directoryTree@bxai, createDirectory@bxai, deleteDirectory@bxai, zipFiles@bxai, unzipFile@bxai, checkZipFile@bxai.
Async Runnables and Parallel Execution:
- runAsync() on all runnables (IAiRunnable, AiBaseRunnable): Every runnable now has a non-blocking runAsync( input, params, options ) method that dispatches execution to the io-tasks virtual thread pool and returns a BoxFuture. Mirrors the existing aiChatAsync, loadAsync(), and seedAsync() patterns throughout the module.
- AiRunnableParallel class (models/runnables/AiRunnableParallel.bx): New runnable that accepts a named struct of runnables, fans them out concurrently via runAsync(), and returns a { name: result } struct once all futures complete. Mirrors LangChain's RunnableParallel: a structural parallel composition primitive that integrates cleanly into the existing pipeline system via .to(), .run(), and .runAsync().
- aiParallel() BIF: Creates an AiRunnableParallel from a named struct of runnables. aiParallel({ summary: summaryAgent, analysis: analysisAgent }).run("document") runs both concurrently and returns { summary: "...", analysis: "..." }.
- Fixed: chatStream() across all providers never fired the onAITokenCount event, making streaming calls completely invisible to usage tracking, billing, and monitoring. The non-streaming chat() path fired it correctly.
- AiModel.stream(): inject agent and model middleware into the chatRequest, matching the existing pattern in run().
- DockerModelRunnerService: capture arguments into local vars before the retryOnModelLoading closure to prevent an ArgumentsScope resolution failure.
- OpenAIService.chat(): capture chatRequest before nested .each() closures for tool calling.
- OpenAIService.chatStream(): scope callback and chatRequest for the sendStreamRequest call and the tool-calling .each() closure.
- CohereService.chat(): capture chatRequest before the .map() tool closure.
- Normalize the onAITokenCount event and add the missing event on the following services: BedrockService, ClaudeService, CohereService, GeminiService.
- scan() and scanClass() were not working correctly with all cases and permutations.
- In the aiAgent() BIF, skills and availableSkills can now be an array or a single skill; we normalize it to an array internally. This allows for more flexible agent construction with a single skill without needing to wrap it in an array.
- ModuleConfig.bx now listens to onRuntimeStart() in order to set up skills and more, so caches and other things are properly loaded before the modules.
- _input System Variable: Auto-inject previous stage output into message templates via ${_input}. For struct outputs, individual fields are flattened as ${_input_fieldName} for template access. Enables clean, composable multi-stage AI pipelines without manual transformation steps.
- aiTransform() needed to process instances of AiTransformRunnable and BaseTransformer classes, allowing for more flexible and reusable transformation logic.
- config on all BaseTransformer classes was missing.
- When the aiTransform() BIF was called with a non-string or closure, the throw() was invalid.
- Agent Skills (aiSkill() BIF + withSkills() / withAvailableSkills() APIs on AiModel and AiAgent): Composable, reusable knowledge blocks, following the Claude Agent Skills open standard, that can be injected into any model or agent system message at runtime.
- aiSkill( path | name, description, content, recurse ): Creates or discovers AiSkill instances. Pass a file path to load a single SKILL.md, a directory path to auto-discover all skills recursively, or name/description/content for inline definitions with no files needed.
- aiGlobalSkills(): Returns the globally shared pool of skills auto-injected into every new agent's availableSkills pool. Populated via ModuleConfig.bx settings.globalSkills.
- Always-on skills (withSkills() / addSkill()): Full skill content is injected into the system message on every call. Best for small, universally relevant guidance.
- Lazy skills (withAvailableSkills() / addAvailableSkill()): Only a compact index (name + description) is included in the system message. The LLM calls the auto-registered loadSkill( name ) tool to fetch full content on demand. Best for large or rarely needed skill libraries.
- activateSkill( name ): Moves a skill from the lazy pool to always-on, promoting it for the rest of the session.
- buildSkillsContent(): Renders the combined skills system-message block for inspection or custom injection.
- SKILL.md discovery under .ai/skills/: The file is Markdown with an optional YAML frontmatter block containing description. The body is the instruction content. If frontmatter is absent, the first paragraph of body text is used as the description.
- AiModel and AiAgent getConfig() now include activeSkillCount, availableSkillCount, and skills (a struct with activeSkills and availableSkills name/description arrays) for full introspection.
- The aiAgent() BIF gains skills: [] and availableSkills: [] construction-time parameters. Global skills from aiGlobalSkills() are automatically prepended to every new agent's available pool.
- The aiModel() BIF gains a skills: [] construction-time parameter.
- MCP server tools are discovered via listTools() and registered as MCPTool instances; no manual Tool construction required.
- MCPTool class (models/tools/MCPTool.bx): Implements ITool by proxying a single MCP server tool. It converts the MCP inputSchema to the OpenAI function-calling schema format and forwards invocations to the server via MCPClient.send().
- withMCPServer( server, config ) fluent method on AiAgent and AiModel: Accepts a URL string or a pre-configured MCPClient instance. The optional config struct supports token, timeout, headers, user, and password.
- withMCPServers( servers ) fluent method on AiAgent and AiModel for seeding from multiple servers in one call. Each entry can be a URL string, a config struct { url, token, timeout, ... }, or a pre-configured MCPClient.
- listMcpServers() method on AiAgent and AiModel: Returns the list of currently connected MCP servers with their exposed tools for introspection and debugging.
- The aiAgent() and aiModel() BIFs gain an array mcpServers = [] parameter so servers can be provided at construction time.
- AiAgent now tracks connected MCP servers in an mcpServers property ([{ url, toolNames }]). This list is automatically injected into the system prompt so the LLM can correctly answer questions like "what MCP servers are you connected to?" and "which tools came from which server?"
- listTools() method on AiAgent: Returns [{ name, description }] for all registered tools; useful for programmatic introspection.
- AiAgent|AiModel.getConfig() now includes tools (full name/description list) and mcpServers (server URL + tool-name list) alongside the existing toolCount.
- AIToolRegistry (accessible via the aiToolRegistry() BIF): Provides a module-scoped registry for AI tools. Tools can be registered by name with optional module namespacing (e.g. now@bxai), discovered at runtime by bare name or full key, and resolved lazily before LLM requests via aiToolRegistry().resolveTools(). This means tools can be referenced by string name in params.tools arrays and resolved automatically rather than requiring live object references.
- BaseTool abstract base class: All tool implementations now extend BaseTool, which provides the shared invocation lifecycle (firing the beforeAIToolExecute and afterAIToolExecute interception events), result serialization (primitives pass through, complex values serialize to JSON), and the fluent describeArg() / describe[ArgName]() schema annotation syntax.
- ClosureTool class: Replaces the retired Tool.bx. A BaseTool subclass backed by any closure or lambda. Auto-introspects the callable's parameter metadata to generate an OpenAI-compatible function schema. Receives the originating AiChatRequest as _chatRequest for context-aware closures.
- CoreTools built-in tools: Ships two tools out of the box. now (registered automatically as now@bxai on module load) returns the current date/time in ISO 8601, ideal for giving the AI temporal awareness. httpGet (opt-in only, not auto-registered for security) fetches any URL via HTTP GET. Register it explicitly if your application requires web access.
- params.tools arrays in aiChat(), aiModel().run(), and aiAgent().run() now accept string registry keys alongside live ITool instances. AIToolRegistry::resolveTools() converts any string keys to their registered ITool before the request is sent.
- New interception events: onAIToolRegistryRegister and onAIToolRegistryUnregister.
- Tools now receive the AiChatRequest object during invocation, allowing for more complex and context-aware tool behavior. They receive a _chatRequest argument that includes all the properties of the original request, such as messages, params, options, and more. This enables tools to make informed decisions based on the full conversation context and request configuration.
- Middleware can be registered on AiModel and AiAgent, with agent middleware prepended ahead of model middleware.
- preRequest() and postResponse() hooks for any custom logic before and after requests, to change the shape of the request or response, log additional data, etc. These hooks are provider-specific and allow for custom behavior without needing to override the entire sendChatRequest() method.
- add(), getAll(), clear(), trim(), seed(), and related methods on every IAiMemory and IVectorMemory implementation now accept optional userId and conversationId arguments. This follows the Spring AI ChatMemory pattern: a single memory instance can safely serve multiple tenants without creating a new instance per user. Construction-time values remain as fallbacks.
- New models/providers/capabilities/ package introduces IAiChatService and IAiEmbeddingsService: scoped interfaces that let providers declare exactly which operations they support at the type level rather than through runtime throws.
- getCapabilities() / hasCapability() on all providers: Every provider now exposes getCapabilities() (returns ["chat", "stream", "embeddings", ...]) and hasCapability( "chat" ) for clean, self-documenting runtime introspection. These are backed by isInstanceOf() checks and stay automatically in sync with the implements declarations on each provider; no maintenance required.
- AiAgent parent-child hierarchy: AiAgent now tracks its position in a multi-agent tree through a parentAgent property and a full set of hierarchy helpers:
  - setParentAgent( parent ): assign a parent with self-reference and cycle-detection guards
  - clearParentAgent(): detach from a parent
  - hasParentAgent(): returns true if the agent has a parent
  - isRootAgent(): returns true for top-level agents
  - getRootAgent(): walks up the tree and returns the root agent
  - getAgentDepth(): returns the nesting depth (0 = root, 1 = direct child, ...)
  - getAgentPath(): returns a slash-delimited path string, e.g. /coordinator/researcher
  - getAncestors(): returns an ordered array [ immediateParent, ..., root ]
  - addSubAgent() now automatically calls setParentAgent( this ) on the sub-agent
  - setSubAgents() now calls clearParentAgent() on replaced sub-agents before replacing them
  - getConfig() now includes parentAgent (name string), agentDepth, and agentPath
- Moved AiModel, AiAgent, and AiMessage into the runnables folder. This better reflects their purpose as executable entities that can be run with different inputs, and allows a cleaner separation between the core service logic and the runnable wrappers.
- Refactored BaseService to be truly a base and moved all OpenAI-specific logic to OpenAIService, which now serves as the default provider implementation. This allows cleaner implementations of other providers that don't need to override every method.
- AiAgent is now fully stateless: userId and conversationId are resolved per-call from the options argument passed to run() and stream(), eliminating shared-state concurrency bugs in multi-user deployments. Seeding a memory with userId and conversationId is still supported, but these values will be overridden by any values passed in at call time.
- resume() and resumeStream() now require threadId as an explicit required string argument instead of defaulting to the former instance property.
- IAiService contract trimmed: The base interface now declares only identity/configuration/capability-discovery methods (getName(), configure(), getCapabilities(), hasCapability()). The operation methods (invoke(), invokeStream(), embeddings()) have moved to their respective capability interfaces where they belong.
- VoyageService now extends BaseService directly and implements only IAiEmbeddingsService; it no longer extends OpenAIService with stubbed-out chat methods that threw at runtime. The type system now enforces the embeddings-only constraint at compile time.
- aiChat(), aiChatStream(), and aiEmbed() BIF guards: Each BIF now checks that the provider implements the required capability interface before attempting the call and throws a clear UnsupportedCapability exception instead of a cryptic provider error. Zero breaking changes to public BIF signatures.
- Renamed BaseService.sendRequest() to sendChatRequest().
- MiniMax: fires the onAITokenCount event, and API errors (base_resp.status_code != 0) now surface correctly.
- OllamaService stale postEmbeddingResponse() hook: The old hook was never wired to the current BaseService lifecycle and silently did nothing. Replaced with the proper postResponse( aiRequest, dataPacket, result, operation ) override that guards on operation != "embeddings", identical to how every other dual-capability provider handles this.
- New MiniMax provider: use the minimax provider name and set your API key via the MINIMAX_API_KEY environment variable.
- Updated getConfig() to not show sensitive info.
- _input System Variable: Auto-inject previous stage output into message templates via ${_input}. For struct outputs, individual fields are flattened as ${_input_fieldName} for template access. Enables clean, composable multi-stage AI pipelines without manual transformation steps.
- aiTransform() needed to process instances of AiTransformRunnable and BaseTransformer classes, allowing for more flexible and reusable transformation logic.
- config on all BaseTransformer classes was missing.
- When the aiTransform() BIF was called with a non-string or closure, the throw() was invalid.
- Wrong variable named request in the aiChatStream() BIF; it should have been chatRequest.
- Option merging in the aiChat() and aiChatStream() BIFs was incorrect, causing default options to override user-provided options. It now merges in the correct order (user options, then module settings, then default options), allowing for proper overrides.
- The aiService() BIF was not correctly applying convention-based API key detection when options.apiKey was already set but empty. It now checks if options.apiKey is empty before applying the convention key, allowing for proper fallback to environment variables or module settings.

What's New: https://ai.ortusbooks.com/readme/release-history/2.1.0
- New interception event onMissingAiProvider to handle cases where a requested provider is not found.
- The aiModel() BIF now accepts an additional options struct to seed services.
- New module setting providers so you can predefine multiple providers in the module config, with default params and options:

"providers" : {
"openai" : {
"params" : {
"model" : "gpt-4"
},
"options" : {
"apiKey" : "my-openai-api-key"
}
},
"ollama" : {
"params" : {
"model" : "qwen2.5:0.5b-instruct"
},
"options" : {
"baseUrl" : "http://my-ollama-server:11434/"
}
}
}
- Ability to seed a custom base URL via the options.baseUrl parameter.
- AiBaseRequest.mergeServiceParams() and AiBaseRequest.mergeServiceHeaders() methods now accept an override boolean argument to control whether existing values should be overwritten when merging.
- Added the nomic-embed-text model for embeddings support.
- tenantId option for attributing AI usage to specific tenants
- usageMetadata option for custom tracking data (cost center, project, userId, etc.)
- onAITokenCount events with tenant context for interceptor-based billing
- providerOptions struct for provider-specific settings
- providerOptions option for passing provider-specific configuration (e.g., inferenceProfileArn for Bedrock)
- getProviderOption( key, defaultValue ) method on requests for retrieving provider options
- embeddingOptions configuration in BaseVectorMemory for passing options to the embedding provider
- embeddingOptions.baseURL for custom OpenAI-compatible embedding service URLs
- Streaming support via the InvokeModelWithResponseStream API endpoint
- All providers now implement the IAiService interface, ensuring consistent behavior across providers.
- The IAiService.configure() method now accepts a generic options argument instead of apiKey, to better reflect its purpose and support more configuration options.
- The AiRequest class was renamed to AiChatRequest for clarity and multi-modality support.
- New interception events: onAIChatRequest, onAIChatRequestCreate, and onAIChatResponse.
- The aiChat and aiChatStream BIFs were not passing headers to the AiChatRequest.
- The aiChat, aiChatStream, and aiChatAsync BIFs were not using aiChatRequest() to build the request, but were building it manually.
- Miscellaneous fixes to the aiChat() and aiChatStream() BIFs.
- chr() --> char() in SSE formatting in MCPRequestProcessor and HTTPTransport.
- AiModel.getModel() was not returning the model name correctly when using predefined providers from config.
- Fixed a url parameter conflict in OpenSearchVectorMemory by using requestUrl for HTTP requests.

What's New: https://ai.ortusbooks.com/readme/release-history/2.0.0
One of our biggest library updates yet! This release introduces a powerful new document loading system, comprehensive security features for MCP servers, and full support for several major AI providers including Mistral, HuggingFace, Groq, OpenRouter, and Ollama. Additionally, we have implemented complete embeddings functionality and made numerous enhancements and fixes across the board.
- aiDocuments() BIF for loading documents with automatic type detection
- aiDocumentLoader() BIF for creating loader instances with advanced configuration
- aiDocumentLoaders() BIF for retrieving all registered loaders with metadata
- aiMemoryIngest() BIF for ingesting documents into memory with comprehensive reporting:
  - aiChunk() integration
  - aiTokens() integration
- Document class for standardized document representation with content and metadata
- IDocumentLoader interface and BaseDocumentLoader abstract class for custom loaders
- TextLoader: Plain text files (.txt, .text)
- MarkdownLoader: Markdown files with header splitting, code block removal
- HTMLLoader: HTML files and URLs with script/style removal, tag extraction
- CSVLoader: CSV files with row-as-document mode, column filtering
- JSONLoader: JSON files with field extraction, array-as-documents mode
- DirectoryLoader: Batch loading from directories with recursive scanning
- loadTo() method and aiMemoryIngest() BIF for loading documents into memory
- Documentation: docs/main-components/document-loaders.md
- MCP server CORS support: withCors( origins ) to configure allowed origins (string or array), addCorsOrigin( origin ) to add an origin dynamically, getCorsAllowedOrigins() to get the configured origins array, and isCorsAllowed( origin ) to check if an origin is allowed, with wildcard matching (*.example.com, *) and the Access-Control-Allow-Origin header in responses
- MCP server body limits: withBodyLimit( maxBytes ) to set the maximum request body size in bytes, and getMaxRequestBodySize() to get the current limit (0 = unlimited)
- MCP server API key validation: withApiKeyProvider( provider ) to set a custom API key validation callback, hasApiKeyProvider() to check if a provider is configured, and verifyApiKey( apiKey, requestData ) for manual key validation; supports the X-API-Key header and Authorization: Bearer tokens
- MCP server security headers: X-Content-Type-Options: nosniff, X-Frame-Options: DENY, X-XSS-Protection: 1; mode=block, Referrer-Policy: strict-origin-when-cross-origin, Content-Security-Policy: default-src 'none'; frame-ancestors 'none', Strict-Transport-Security: max-age=31536000; includeSubDomains, and Permissions-Policy: geolocation=(), microphone=(), camera=()
- Documentation: docs/advanced/mcp-server.md with examples
- MistralService provider class with an OpenAI-compatible API: embeddings via the mistral-embed model, default model mistral-small-latest, and the MISTRAL_API_KEY environment variable
- HuggingFaceService provider class extending BaseService: endpoint router.huggingface.co/v1, default model Qwen/Qwen2.5-72B-Instruct, and HUGGINGFACE_API_KEY
- Groq support: endpoint api.groq.com, default model llama-3.3-70b-versatile, and GROQ_API_KEY
- aiEmbedding() BIF for generating text embeddings
- AiEmbeddingRequest class to model embedding requests
- embeddings() method in the IAiService interface
- OpenAI embeddings: text-embedding-3-small and text-embedding-3-large models
- Gemini embeddings: text-embedding-004 model
- New interception events: onAIEmbeddingRequest, onAIEmbeddingResponse, beforeAIEmbedding, afterAIEmbedding
- examples/embeddings-example.bx demonstrating practical use cases
- Message formatting: format( bindings ) formats messages with provided bindings, render() renders messages using stored bindings, bind( bindings ) binds variables to be used in message formatting, and getBindings() / setBindings( bindings ) are the getters and setters for bindings
- AIService() BIF: convention-based API key detection via <PROVIDER>_API_KEY from system settings
- Tool.getArgumentsSchema() method to retrieve the arguments schema for use by any provider
- New settings: logRequestToConsole, logResponseToConsole
- ChatMessage helper method: getNonSystemMessages() to retrieve all messages except the system message
- ChatRequest now has the original ChatMessage as a property, so you can access the original message in the request
- The Claude provider now uses claude-sonnet-4-0 as its default
- New global settings: logRequest, logResponse, timeout, returnFormat, so you can control the behavior of the services globally
- New onAIResponse event
- The version was set to 1.0.0 in the box.json file by accident
- Settings were not being read correctly from settings in the module config
box install bx-ai