BoxLang - A New JVM Dynamic Language

| ⚡️ B o x L a n g ⚡️ |
|:------------------------------------------------------:|
| Dynamic : Modular : Productive |
Copyright Since 2023 by Ortus Solutions, Corp
ai.boxlang.io | www.boxlang.io
ai.ortussolutions.com | www.ortussolutions.com
Welcome to the BoxLang AI Module, the official AI library for BoxLang that provides a unified, fluent API to orchestrate multi-model workflows, autonomous agents, RAG pipelines, and AI-powered applications. One API, unlimited AI power!
BoxLang AI eliminates vendor lock-in and simplifies AI integration by providing a single, consistent interface across 12+ AI providers. Whether you're using OpenAI, Claude, Gemini, Grok, DeepSeek, Ollama, or Perplexity, your code stays the same. Switch providers, combine models, and orchestrate complex workflows with simple configuration changes.
BoxLang is open source and licensed under the Apache 2 license. You can also get a professionally supported version with enterprise features and support via our BoxLang +/++ Plans (www.boxlang.io/plans).
You can use BoxLang AI in operating system applications, AWS Lambda functions, and web applications. For OS applications, use the module installer to install the module globally. For AWS Lambda and web applications, you can either use the module installer to install it locally in your project, or use CommandBox as the package manager, which is our preferred method for web applications.
New to AI concepts? Check out our Key Concepts Guide for terminology and fundamentals, or browse our FAQ for quick answers to common questions. A Quick Start Guide and our intensive AI BootCamp are also available.
To get started building operating system applications, install the module globally with the module installer:
install-bx-module bx-ai
This will install the latest version of the BoxLang AI module in your BoxLang environment. Once installed, make sure you set up any of the supported AI providers and their API keys in your boxlang.json configuration file or via environment variables. After that, you can leverage the global functions (BIFs) in your BoxLang code. Here is a simple example:
// chat.bxs
answer = aiChat( "How amazing is BoxLang?" )
println( answer )
You can then run your BoxLang script like this:
boxlang chat.bxs
To build AWS Lambda functions with BoxLang AI for serverless AI agents and applications, use the BoxLang AWS Runtime and our AWS Lambda Starter Template. Install the module locally with install-bx-module and the --local flag in the resources folder of your project:
cd src/resources
install-bx-module bx-ai --local
Alternatively, you can use CommandBox and store your dependencies in the box.json descriptor:
box install bx-ai resources/modules/
To use BoxLang AI in your web applications, use CommandBox as the package manager to install the module locally. Run the following command in your project root:
box install bx-ai
Just make sure you already have a server set up with BoxLang. You can check our Getting Started with BoxLang Web Applications guide for more details on how to get started with BoxLang web applications.
The following AI providers are supported by this module. Please note that in order to interact with these providers you will need an account with them and an API key.

Here is a matrix of the providers and their feature support. Keep checking back, as we will be adding more providers and features to this module.
| Provider | Real-time Tools | Embeddings | Structured Output |
|---|---|---|---|
| Claude | ✅ | ❌ | ✅ |
| Cohere | ✅ | ✅ | ✅ |
| DeepSeek | ✅ | ❌ | ✅ |
| Gemini | Coming Soon | ✅ | ✅ |
| Grok | ✅ | ❌ | ✅ |
| Groq | ✅ | ❌ | ✅ |
| HuggingFace | ✅ | ✅ | ✅ |
| Mistral | ✅ | ✅ | ✅ |
| Ollama | ✅ | ✅ | ✅ |
| OpenAI | ✅ | ✅ | ✅ (Native) |
| OpenRouter | ✅ | ❌ | ✅ |
| Perplexity | ❌ | ❌ | ✅ |
| Voyage | ❌ | ✅ (Specialized) | ❌ |
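To illustrate the provider-agnostic design, here is a minimal sketch of switching providers per call. The `provider` option key mirrors the module's `provider` setting shown in the configuration section below; treat the exact key placement as an assumption for your setup.

```
// One question, two providers: only the options change
question = "Summarize BoxLang in one sentence"

fromOpenAI = aiChat( messages: question, options: { provider: "openai" } )
fromClaude = aiChat( messages: question, options: { provider: "claude" } )

println( fromOpenAI )
println( fromClaude )
```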
BoxLang AI not only makes it extremely easy to interact with multiple AI providers, it also gives you the flexibility to choose how responses are returned to you. You can specify the return format using the returnFormat option in your AI calls. Here are the available formats:
| Format | Description |
|---|---|
| `single` | Returns a single message as a string (the content from the first choice). This is the default format for BIFs. |
| `all` | Returns an array of all choice messages. Each message is a struct with `role` and `content` keys. |
| `json` | Returns the parsed JSON object from the content string. Automatically parses JSON responses. |
| `xml` | Returns the parsed XML document from the content string. Automatically parses XML responses. |
| `raw` | Returns the full raw response from the AI provider. Useful for debugging or when you need the full response structure with metadata. This is the default for pipelines. |
| `structuredOutput` | Used internally when `.structuredOutput()` is called. Returns a populated class/struct based on the schema. |
In the following sections, we provide a quick overview of the main components of BoxLang AI, including Chats, Pipelines, Agents, Structured Output, Memory Systems, Document Loaders & RAG, and MCP Client/Server. Each section includes quick examples and links to more detailed documentation. This is just a high-level overview to get you started quickly; for further details, please refer to the official documentation.
Interact with AI models through simple and powerful chat interfaces supporting both one-shot responses and streaming conversations. BoxLang AI provides fluent APIs for building everything from basic Q&A to complex multi-turn dialogues with system prompts, message history, and structured outputs.
Key BIFs:

- `aiChat()`
- `aiChatStream()`
Simple One-Shot Chat:
// Quick question-answer
response = aiChat( "What is BoxLang?" )
println( response )
// With custom model and options
response = aiChat(
messages: "Explain quantum computing",
params: { model: "gpt-4", temperature: 0.7, max_tokens: 500 }
)
Multi-Turn Conversation with Memory:
// Create agent with memory
agent = aiAgent(
name: "Assistant",
memory: aiMemory( "window", config: { maxMessages: 10 } )
)
// First turn
response = agent.run( "My name is Luis" )
// Second turn - Agent remembers context
response = agent.run( "What's my name?" )
println( response ) // "Your name is Luis"
Streaming Chat:
// Stream tokens as they arrive
aiChatStream(
messages: "Write a short story about a robot",
callback: chunk => {
writeOutput( chunk.choices?.first()?.delta?.content ?: "" )
bx:flush;
},
params: { model: "claude-3-5-sonnet-20241022" }
)
Fluent Message Builder:
// Build complex message chains
messages = aiMessage()
.system( "You are a helpful coding assistant" )
.user( "How do I create a REST API in BoxLang?" )
.image( "diagram.png" )
response = aiChat(
messages: messages,
params: { model: "gpt-4o", temperature: 0.7 }
)
Build composable AI workflows using BoxLang AI's powerful runnable pipeline system. Chain models, transformers, tools, and custom logic into reusable, testable components that flow data through processing stages. Perfect for complex AI workflows, data transformations, and multi-step reasoning.
Key method:

- `.to()`
Simple Transformation Pipeline:
// Create a pipeline with model and transformers
pipeline = aiModel( provider: "openai" )
.transform( data => data.toUpperCase() )
.transform( data => data.trim() )
// Run input through the pipeline
result = pipeline.run( "hello world" )
println( result ) // "HELLO WORLD"
Multi-Stage AI Pipeline:
// Define transformation stages as closures
summarizer = ( text ) => {
return aiChatAsync(
aiMessage().system( "Summarize in one sentence" ).user( text ),
{ model: "gpt-4o-mini" }
)
}
translator = ( summary ) => {
return aiChatAsync(
aiMessage().system( "Translate to Spanish" ).user( summary ),
{ model: "gpt-4o" }
)
}
formatter = ( translatedText ) => {
return {
summary: translatedText,
timestamp: now()
}
}
// Compose pipeline using async futures
result = summarizer( "Long article text here..." )
.then( summary => translator( summary ) )
.then( translated => formatter( translated ) )
.get()
println( result.summary ) // Spanish summary
Streaming Pipeline:
// Stream through entire pipeline
pipeline = aiModel( provider: "claude", params: { model: "claude-3-5-sonnet-20241022" } )
.transform( chunk => chunk.toUpperCase() )
.stream(
onChunk: ( chunk ) => writeOutput( chunk ),
input: "Tell me a story"
)
Custom Runnable Class:
// Implement IAiRunnable for custom logic
class implements="IAiRunnable" {
function run( input, params = {} ) {
// Custom processing
return processedData;
}
function stream( onChunk, input, params = {} ) {
// Streaming support
onChunk( processedChunk );
}
function to( nextRunnable ) {
// Chain to next stage
return createPipeline( this, nextRunnable );
}
}
// Use in pipeline
customStage = new CustomRunnable()
pipeline = aiModel( provider: "openai", params: { model: "gpt-4o" } )
.to( customStage )
See examples/pipelines/ for complete examples.

Build autonomous AI agents that can use tools, maintain memory, and orchestrate complex workflows. BoxLang AI agents combine LLMs with function calling, memory systems, and orchestration patterns to create intelligent assistants that can interact with external systems and solve complex tasks.
Simple Agent with Tools:
// Define tools the agent can use
weatherTool = aiTool(
name: "get_weather",
description: "Get current weather for a location",
callable: ( location ) => {
return { temp: 72, condition: "sunny", location: location };
}
)
// Create agent with memory
agent = aiAgent(
name: "Weather Assistant",
description: "Helpful weather assistant",
tools: [ weatherTool ],
memory: aiMemory( "window" )
)
// Agent decides when to call tools
response = agent.run( "What's the weather in Miami?" )
println( response ) // Agent calls get_weather tool and responds
Autonomous Agent with Multiple Tools:
// Agent with database and email tools
agent = aiAgent(
name: "Customer Support Agent",
tools: [
aiTool( name: "query_orders", description: "Query customer orders", callable: orderQueryFunction ),
aiTool( name: "send_email", description: "Send email to customer", callable: emailFunction ),
aiTool( name: "create_ticket", description: "Create support ticket", callable: ticketFunction )
],
memory: aiMemory( "session" ),
params: { max_iterations: 5 }
)
// Agent orchestrates multiple tool calls
agent.run( "Find order #12345, email the customer with status, and create a ticket if there's an issue" )
See examples/agents/ for complete working examples.

Get type-safe, validated responses from AI providers by defining expected output schemas using BoxLang classes, structs, or JSON schemas. The module automatically converts AI responses into properly typed objects, eliminating manual parsing and validation.
Use `aiPopulate()` to create objects from JSON for tests or cached responses.

Using a Class:
class Person {
property name="name" type="string";
property name="age" type="numeric";
property name="email" type="string";
}
result = aiChat(
messages: "Extract person info: John Doe, 30, [email protected]",
options: { returnFormat: new Person() }
)
writeOutput( "Name: #result.getName()#, Age: #result.getAge()#" );
Using a Struct Template:
template = {
"title": "",
"summary": "",
"tags": [],
"sentiment": ""
};
result = aiChat(
messages: "Analyze this article: [long text]",
options: { returnFormat: template }
)
writeOutput( "Tags: #result.tags.toList()#" );
Extracting Arrays:
class Task {
property name="title" type="string";
property name="priority" type="string";
property name="dueDate" type="string";
}
tasks = aiChat(
messages: "Extract tasks from: Finish report by Friday (high priority), Review code tomorrow",
options: { returnFormat: [ new Task() ] }
)
for( task in tasks ) {
writeOutput( "#task.getTitle()# - Priority: #task.getPriority()#<br>" );
}
Multiple Schemas (Extract Different Types Simultaneously):
result = aiChat(
messages: "Extract person and company: John Doe, 30 works at Acme Corp, founded 2020",
options: {
returnFormat: {
"person": new Person(),
"company": new Company()
}
}
)
writeOutput( "Person: #result.person.getName()#<br>" );
writeOutput( "Company: #result.company.getName()#<br>" );
Convert JSON responses or cached data into typed objects without making AI calls:
// From JSON string
jsonData = '{"name":"John Doe","age":30,"email":"[email protected]"}';
person = aiPopulate( new Person(), jsonData );
// From struct
data = { name: "Jane", age: 25, email: "[email protected]" };
person = aiPopulate( new Person(), data );
// Populate array
tasksJson = '[{"title":"Task 1","priority":"high"},{"title":"Task 2","priority":"low"}]';
tasks = aiPopulate( [ new Task() ], tasksJson );
All providers support structured output! OpenAI offers native structured output with strict validation, while others use JSON mode with schema guidance (which works excellently in practice).
See examples/structured/ for complete working examples.

Build stateful, context-aware AI applications with flexible memory systems that maintain conversation history, enable semantic search, and preserve context across interactions. BoxLang AI provides both traditional conversation memory and advanced vector-based memory for semantic understanding.
Standard Memory (Conversation History):
| Type | Description | Best For |
|---|---|---|
| Windowed | Keeps last N messages | Quick chats, cost-conscious apps |
| Summary | Auto-summarizes old messages | Long conversations, context preservation |
| Session | Web session persistence | Multi-page web applications |
| File | File-based storage | Audit trails, long-term storage |
| Cache | CacheBox-backed | Distributed applications |
| JDBC | Database storage | Enterprise apps, multi-user systems |
Vector Memory (Semantic Search):
| Type | Description | Best For |
|---|---|---|
| BoxVector | In-memory vectors | Development, testing, small datasets |
| Hybrid | Recent + semantic | Best of both worlds approach |
| Chroma | ChromaDB integration | Python-based infrastructure |
| Postgres | PostgreSQL pgvector | Existing PostgreSQL deployments |
| MySQL | MySQL 9 native vectors | Existing MySQL infrastructure |
| TypeSense | Fast typo-tolerant search | Low-latency search, autocomplete |
| Pinecone | Cloud vector database | Production, scalable semantic search |
| Qdrant | High-performance vectors | Large-scale deployments |
| Weaviate | GraphQL vector database | Complex queries, knowledge graphs |
| Milvus | Enterprise vector DB | Massive datasets, high throughput |
Windowed Memory (Multi-Tenant):
// Automatic per-user isolation
memory = aiMemory(
memory: "window",
key: createUUID(),
userId: "user123",
config: { maxMessages: 10 }
)
agent = aiAgent( name: "Assistant", memory: memory )
agent.run( "My name is John" )
agent.run( "What's my name?" ) // "Your name is John"
Summary Memory (Preserves Full Context):
memory = aiMemory( "summary", config: {
maxMessages: 30,
summaryThreshold: 15,
summaryModel: "gpt-4o-mini"
} )
agent = aiAgent( name: "Support", memory: memory )
// Long conversation - older messages summarized automatically
Vector Memory (Semantic Search + Multi-Tenant):
memory = aiMemory(
memory: "chroma",
key: createUUID(),
userId: "user123",
conversationId: "support",
config: {
collection: "customer_support",
embeddingProvider: "openai",
embeddingModel: "text-embedding-3-small"
}
)
// Retrieves semantically relevant past conversations
// Automatically filtered by userId/conversationId
Hybrid Memory (Recent + Semantic):
memory = aiMemory( "hybrid", config: {
recentLimit: 5, // Keep last 5 messages
semanticLimit: 5, // Add 5 semantic matches
vectorProvider: "chroma"
} )
// Combines recency with relevance
See examples/advanced/ and examples/vector-memory/ for complete examples.

BoxLang AI provides 12+ built-in document loaders for ingesting content from files, databases, web sources, and more. These loaders integrate seamlessly with vector memory systems to enable Retrieval-Augmented Generation (RAG) workflows.
```mermaid
graph LR
    LOAD[Load Documents] --> CHUNK[Chunk Text]
    CHUNK --> EMBED[Generate Embeddings]
    EMBED --> STORE[Store in Vector Memory]
    STORE --> QUERY[User Query]
    QUERY --> RETRIEVE[Retrieve Relevant Docs]
    RETRIEVE --> INJECT[Inject into Context]
    INJECT --> AI[AI Response]
    style LOAD fill:#4A90E2
    style EMBED fill:#BD10E0
    style STORE fill:#50E3C2
    style RETRIEVE fill:#F5A623
    style AI fill:#7ED321
```
| Loader | Type | Use Case | Example |
|---|---|---|---|
| TextLoader | `text` | Plain text files | `.txt`, `.log` |
| MarkdownLoader | `markdown` | Markdown files | `.md` documents |
| CSVLoader | `csv` | CSV files | Data files, exports |
| JSONLoader | `json` | JSON files | Configuration, data |
| XMLLoader | `xml` | XML files | Config, structured data |
| PDFLoader | `pdf` | PDF documents | Reports, documentation |
| LogLoader | `log` | Log files | Application logs |
| HTTPLoader | `http` | Web pages | Documentation, articles |
| FeedLoader | `feed` | RSS/Atom feeds | News, blogs |
| SQLLoader | `sql` | Database queries | Query results |
| DirectoryLoader | `directory` | File directories | Batch processing |
| WebCrawlerLoader | `webcrawler` | Website crawling | Multi-page docs |
Load a Single Document:
// Load a PDF document
docs = aiDocuments(
source: "/path/to/document.pdf",
config: { type: "pdf" }
).load()
println( "#docs.len()# documents loaded" )
// Load with configuration
docs = aiDocuments(
source: "/path/to/document.pdf",
config: {
type: "pdf",
sortByPosition: true,
addMoreFormatting: true,
startPage: 1,
endPage: 10
}
).load()
Load Multiple Documents:
// Load all markdown files from a directory
docs = aiDocuments(
source: "/knowledge-base",
config: {
type: "directory",
recursive: true,
extensions: ["md", "txt"],
excludePatterns: ["node_modules", ".git"]
}
).load()
Ingest into Vector Memory:
// Create vector memory
vectorMemory = aiMemory( "chroma", config: {
collection: "docs",
embeddingProvider: "openai",
embeddingModel: "text-embedding-3-small"
} )
// Ingest documents with chunking and embedding
result = aiDocuments(
source: "/knowledge-base",
config: {
type: "directory",
recursive: true,
extensions: ["md", "txt", "pdf"]
}
).toMemory(
memory: vectorMemory,
options: { chunkSize: 1000, overlap: 200 }
)
println( "โ
Loaded #result.documentsIn# docs as #result.chunksOut# chunks" )
println( "๐ฐ Estimated cost: $#result.estimatedCost#" )
RAG with Agent:
// Create agent with vector memory
agent = aiAgent(
name: "KnowledgeAssistant",
description: "AI assistant with access to knowledge base",
memory: vectorMemory
)
// Query automatically retrieves relevant documents
response = agent.run( "What is BoxLang?" )
println( response )
See examples/loaders/ and examples/rag/ for complete examples.

Connect to Model Context Protocol (MCP) servers and use their tools, prompts, and resources in your AI applications. BoxLang AI's MCP client provides seamless integration with the growing MCP ecosystem, allowing your agents to access databases, APIs, filesystems, and more through standardized interfaces.
Connect to MCP Server:
// Connect to MCP server via HTTP
mcpClient = MCP( "http://localhost:3000" )
.withTimeout( 5000 )
// List available tools
tools = mcpClient.listTools()
println( tools ) // Returns available MCP tools
Use MCP Tools in Agent:
// Connect to MCP servers
filesystemMcp = MCP( "http://localhost:3001" ).withTimeout( 5000 )
databaseMcp = MCP( "http://localhost:3002" ).withTimeout( 5000 )
// Create agent (MCP integration depends on agent implementation)
agent = aiAgent(
name: "Data Assistant",
description: "Assistant with MCP tool access"
)
// Agent automatically discovers and uses MCP tools
response = agent.run( "Read config.json and update the database with its contents" )
Access MCP Resources:
// List available resources
resources = mcpClient.listResources()
// Read resource content
content = mcpClient.readResource( "file:///docs/readme.md" )
println( content )
// Use prompts from server
prompts = mcpClient.listPrompts()
prompt = mcpClient.getPrompt( "code-review", { language: "BoxLang" } )
See examples/mcp/ for complete examples.

Expose your BoxLang functions and data as MCP tools for use by AI agents and applications. Build custom MCP servers that provide tools, prompts, and resources through the standardized Model Context Protocol, making your functionality accessible to any MCP client.
Simple MCP Server:
// Create server with tools
server = mcpServer(
name: "my-tools",
description: "Custom BoxLang tools"
)
// Register tool
server.registerTool(
aiTool(
name: "calculate_tax",
description: "Calculate tax for a given amount",
callable: ( amount, rate = 0.08 ) => {
return amount * rate;
}
)
)
// Start server
server.start() // Listens on stdio by default
Advanced Server with Resources:
// Create server with tools, prompts, and resources
server = mcpServer(
name: "enterprise-api",
description: "Internal enterprise tools"
)
// Register multiple tools
server.registerTool( aiTool(
name: "query_orders",
description: "Query customer orders",
callable: queryOrdersFunction
) )
server.registerTool( aiTool(
name: "create_invoice",
description: "Create customer invoice",
callable: createInvoiceFunction
) )
server.registerTool( aiTool(
name: "send_notification",
description: "Send customer notification",
callable: notifyFunction
) )
// Provide templates as prompts
server.registerPrompt(
name: "customer-email",
description: "Generate customer email",
template: ( orderNumber ) => {
return "Write a professional email about order ##orderNumber#";
}
)
// Expose data resources
server.registerResource(
uri: "config://database",
description: "Database configuration",
getData: () => {
return fileRead( "/config/database.json" );
}
)
// Start with custom transport
server.start( transport: "http", port: 3000 )
Integration with BoxLang Web App:
// In your BoxLang app's Application.bx
class {
function onApplicationStart() {
// Start MCP server on app startup
application.mcpServer = mcpServer( "myapp-api" )
	.registerTool( aiTool( name: "search", callable: variables.searchFunction ) )
	.registerTool( aiTool( name: "create", callable: variables.createFunction ) )
	.start( background: true )
}
function onApplicationEnd() {
application.mcpServer.stop()
}
}
See examples/mcp/server/ for complete examples.

Here are the settings you can place in your boxlang.json file:
{
"modules" : {
"bxai" : {
"settings": {
// The default provider to use: openai, claude, deepseek, gemini, grok, mistral, ollama, openrouter, perplexity
"provider" : "openai",
// The default API Key for the provider
"apiKey" : "",
// The default request params to use when calling a provider
// Ex: { temperature: 0.5, max_tokens: 100, model: "gpt-3.5-turbo" }
"defaultParams" : {
// model: "gpt-3.5-turbo"
},
// The default timeout of the ai requests
"timeout" : 30,
// If true, log request to the ai.log
"logRequest" : false,
// If true, log request to the console
"logRequestToConsole" : false,
// If true, log the response to the ai.log
"logResponse" : false,
// If true, log the response to the console
"logResponseToConsole" : false,
// The default return format of the AI response: single, all, raw
"returnFormat" : "single"
}
}
}
}
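Most of these settings can also be overridden per call through the options argument. A minimal sketch; the option keys mirror the settings above, and the `CLAUDE_API_KEY` name simply follows the `<PROVIDER>_API_KEY` convention noted in the changelog, so treat both as assumptions for your environment:

```
// Override the configured provider and key for a single call
response = aiChat(
	messages: "Hello there!",
	options : {
		provider    : "claude",
		apiKey      : getSystemSetting( "CLAUDE_API_KEY", "" ),
		returnFormat: "single"
	}
)
println( response )
```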
Ollama allows you to run AI models locally on your machine. It's perfect for privacy, offline use, and cost savings.
To get set up:

1. Pull a model: `ollama pull llama3.2` (or any supported model)
2. The Ollama API listens on `http://localhost:11434` by default

Then configure the module in your boxlang.json:

{
"modules": {
"bxai": {
"settings": {
"provider": "ollama",
"apiKey": "", // Optional: for remote/secured Ollama instances
"chatURL": "http://localhost:11434", // Default local instance
"defaultParams": {
"model": "llama3.2" // Any Ollama model you have pulled
}
}
}
}
}
Popular models to try:

- `llama3.2` - Latest Llama model (recommended)
- `llama3.2:1b` - Smaller, faster model
- `codellama` - Code-focused model
- `mistral` - High-quality general model
- `phi3` - Microsoft's efficient model
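Once configured, the same BIFs target your local instance. A minimal sketch (the model names assume you have pulled them first):

```
// Uses the provider and default model from boxlang.json
answer = aiChat( "Why is local inference useful?" )
println( answer )

// Or pin a different pulled model per call
review = aiChat(
	messages: "Review this function for bugs: function add(a,b){ return a - b; }",
	params  : { model: "codellama" }
)
println( review )
```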
The module registers the following global functions (BIFs):

| Function | Purpose | Parameters | Return Type | Async Support |
|---|---|---|---|---|
| `aiAgent()` | Create autonomous AI agent | `name`, `description`, `instructions`, `model`, `memory`, `tools`, `subAgents`, `params`, `options` | AiAgent Object | ❌ |
| `aiChat()` | Chat with AI provider | `messages`, `params={}`, `options={}` | String/Array/Struct | ❌ |
| `aiChatAsync()` | Async chat with AI provider | `messages`, `params={}`, `options={}` | BoxLang Future | ✅ |
| `aiChatRequest()` | Compose raw chat request | `messages`, `params`, `options`, `headers` | AiRequest Object | N/A |
| `aiChatStream()` | Stream chat responses from AI provider | `messages`, `callback`, `params={}`, `options={}` | void | N/A |
| `aiChunk()` | Split text into chunks | `text`, `options={}` | Array of Strings | N/A |
| `aiDocuments()` | Create fluent document loader | `source`, `config={}` | IDocumentLoader Object | N/A |
| `aiEmbed()` | Generate embeddings | `input`, `params={}`, `options={}` | Array/Struct | N/A |
| `aiMemory()` | Create memory instance | `memory`, `key`, `userId`, `conversationId`, `config={}` | IAiMemory Object | N/A |
| `aiMessage()` | Build message object | `message` | ChatMessage Object | N/A |
| `aiModel()` | Create AI model wrapper | `provider`, `apiKey`, `tools` | AiModel Object | N/A |
| `aiPopulate()` | Populate class/struct from JSON | `target`, `data` | Populated Object | N/A |
| `aiService()` | Create AI service provider | `provider`, `apiKey` | IService Object | N/A |
| `aiTokens()` | Estimate token count | `text`, `options={}` | Numeric | N/A |
| `aiTool()` | Create tool for real-time processing | `name`, `description`, `callable` | Tool Object | N/A |
| `aiTransform()` | Create data transformer | `transformer`, `config={}` | Transformer Runnable | N/A |
| `MCP()` | Create MCP client for Model Context Protocol servers | `baseURL` | MCPClient Object | N/A |
| `mcpServer()` | Get or create MCP server for exposing tools | `name="default"`, `description`, `version`, `cors`, `statsEnabled`, `force` | MCPServer Object | N/A |
Note on Return Formats: When using pipelines (runnable chains), the default return format is `raw` (the full API response), giving you access to all metadata. Use `.singleMessage()`, `.allMessages()`, or `.withFormat()` to extract specific data. The `aiChat()` BIF defaults to the `single` format (content string) for convenience. See the Pipeline Return Formats documentation for details.
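A quick sketch of the difference, using the methods named above (treat the exact fluent placement as an assumption):

```
model = aiModel( provider: "openai", params: { model: "gpt-4o-mini" } )

// Pipelines return the raw provider response by default...
raw = model.run( "Say hi" )

// ...or can be narrowed to just the first message's content
greeting = model.singleMessage().run( "Say hi" )
println( greeting )
```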
The BoxLang AI module emits several events throughout the AI processing lifecycle that allow you to intercept, modify, or extend functionality. These events are useful for logging, debugging, custom providers, and response processing.
Read more about Events in BoxLang AI.
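These events are standard BoxLang interception points, so you can listen to them from an interceptor or an inline closure. A minimal sketch, assuming `boxRegisterInterceptor()` and the `onAITokenCount` event data documented in the table below:

```
// Track token usage for cost reporting on every AI call
boxRegisterInterceptor(
	( data ) => {
		writeLog(
			text: "#data.model#: #data.totalTokens# tokens (#data.promptTokens# prompt / #data.completionTokens# completion)",
			log : "ai"
		)
	},
	"onAITokenCount"
)
```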
| Event | When Fired | Data Emitted | Use Cases |
|---|---|---|---|
| `afterAIAgentRun` | After agent completes execution | `agent`, `response` | Agent monitoring, result tracking |
| `afterAIEmbed` | After generating embeddings | `embeddingRequest`, `service`, `result` | Result processing, caching |
| `afterAIModelInvoke` | After model invocation completes | `model`, `aiRequest`, `results` | Performance tracking, validation |
| `afterAIPipelineRun` | After pipeline execution completes | `sequence`, `result`, `executionTime` | Pipeline monitoring, metrics |
| `afterAIToolExecute` | After tool execution completes | `tool`, `results`, `executionTime` | Tool performance tracking |
| `beforeAIAgentRun` | Before agent starts execution | `agent`, `input`, `messages`, `params` | Agent validation, preprocessing |
| `beforeAIEmbed` | Before generating embeddings | `embeddingRequest`, `service` | Request validation, preprocessing |
| `beforeAIModelInvoke` | Before model invocation starts | `model`, `aiRequest` | Request validation, cost estimation |
| `beforeAIPipelineRun` | Before pipeline execution starts | `sequence`, `stepCount`, `steps`, `input` | Pipeline validation, tracking |
| `beforeAIToolExecute` | Before tool execution starts | `tool`, `name`, `arguments` | Permission checks, validation |
| `onAIAgentCreate` | When agent is created | `agent` | Agent registration, configuration |
| `onAIEmbedRequest` | Before sending embedding request | `dataPacket`, `embeddingRequest`, `provider` | Request logging, modification |
| `onAIEmbedResponse` | After receiving embedding response | `embeddingRequest`, `response`, `provider` | Response processing, caching |
| `onAIError` | When AI operation error occurs | `error`, `errorMessage`, `provider`, `operation`, `canRetry` | Error handling, retry logic, alerts |
| `onAiMemoryCreate` | When memory instance is created | `memory`, `type`, `config` | Memory configuration, tracking |
| `onAIMessageCreate` | When message is created | `message` | Message validation, formatting |
| `onAIModelCreate` | When model wrapper is created | `model`, `service` | Model configuration, tracking |
| `onAIProviderCreate` | After provider is created | `provider` | Provider initialization, configuration |
| `onAIProviderRequest` | When provider is requested | `provider`, `apiKey`, `service` | Custom provider registration |
| `onAIRateLimitHit` | When rate limit (429) is encountered | `provider`, `statusCode`, `retryAfter` | Rate limit handling, provider switching |
| `onAIRequest` | Before sending HTTP request | `dataPacket`, `aiRequest`, `provider` | Request logging, modification, authentication |
| `onAIRequestCreate` | When request object is created | `aiRequest` | Request validation, modification |
| `onAIResponse` | After receiving HTTP response | `aiRequest`, `response`, `rawResponse`, `provider` | Response processing, logging, caching |
| `onAITokenCount` | When token usage data is available | `provider`, `model`, `promptTokens`, `completionTokens`, `totalTokens` | Cost tracking, budget enforcement |
| `onAIToolCreate` | When tool is created | `tool`, `name`, `description` | Tool registration, validation |
| `onAITransformerCreate` | When transformer is created | `transform` | Transform configuration, tracking |
Visit the GitHub repository for release notes. You can also file a bug report or improvement suggestion via GitHub Issues.
This module includes tests for all AI providers. To run the tests:
./gradlew test
For Ollama provider tests, you need to start the test Ollama service first:
# Start the Ollama test service
docker-compose up -d ollama-test
# Wait for it to be ready (this may take a few minutes for the first run)
# The service will automatically pull the qwen2.5:0.5b model
# Run the tests
./gradlew test --tests "ortus.boxlang.ai.providers.OllamaTest"
# Clean up when done
docker-compose down -v
You can also use the provided test script:
./test-ollama.sh
This will start the service, verify it's working, and run a basic test.
Note: The first time you run this, it will download
the qwen2.5:0.5b model (~500MB), so it may take several minutes.
BoxLang is a professional open-source project and it is completely funded by the community and Ortus Solutions, Corp. Ortus Patreons get many benefits like a cfcasts account, a FORGEBOX Pro account and so much more. If you are interested in becoming a sponsor, please visit our patronage page: https://patreon.com/ortussolutions
"I am the way, and the truth, and the life; no one comes to the Father, but by me (JESUS)" Jn 14:1-12
All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
One of our biggest library updates yet! This release introduces a powerful new document loading system, comprehensive security features for MCP servers, and full support for several major AI providers including Mistral, HuggingFace, Groq, OpenRouter, and Ollama. Additionally, we have implemented complete embeddings functionality and made numerous enhancements and fixes across the board.
**Added**

Document loading system:

- `aiDocuments()` BIF for loading documents with automatic type detection
- `aiDocumentLoader()` BIF for creating loader instances with advanced configuration
- `aiDocumentLoaders()` BIF for retrieving all registered loaders with metadata
- `aiMemoryIngest()` BIF for ingesting documents into memory with comprehensive reporting:
  - `aiChunk()` integration
  - `aiTokens()` integration
- `Document` class for standardized document representation with content and metadata
- `IDocumentLoader` interface and `BaseDocumentLoader` abstract class for custom loaders
- `TextLoader`: Plain text files (`.txt`, `.text`)
- `MarkdownLoader`: Markdown files with header splitting, code block removal
- `HTMLLoader`: HTML files and URLs with script/style removal, tag extraction
- `CSVLoader`: CSV files with row-as-document mode, column filtering
- `JSONLoader`: JSON files with field extraction, array-as-documents mode
- `DirectoryLoader`: Batch loading from directories with recursive scanning
- Memory ingestion via the `loadTo()` method and the `aiMemoryIngest()` BIF
- Documentation in `docs/main-components/document-loaders.md`

MCP server security:

- CORS support:
  - `withCors(origins)` - Configure allowed origins (string or array)
  - `addCorsOrigin(origin)` - Add origin dynamically
  - `getCorsAllowedOrigins()` - Get configured origins array
  - `isCorsAllowed(origin)` - Check if origin is allowed, with wildcard matching (e.g. `*.example.com`, `*`)
  - `Access-Control-Allow-Origin` header in responses
- Request body limits:
  - `withBodyLimit(maxBytes)` - Set maximum request body size in bytes
  - `getMaxRequestBodySize()` - Get current limit (0 = unlimited)
- API key authentication:
  - `withApiKeyProvider(provider)` - Set custom API key validation callback
  - `hasApiKeyProvider()` - Check if provider is configured
  - `verifyApiKey(apiKey, requestData)` - Manual key validation
  - Keys accepted via the `X-API-Key` header and `Authorization: Bearer` token
- Security headers:
  - `X-Content-Type-Options: nosniff`
  - `X-Frame-Options: DENY`
  - `X-XSS-Protection: 1; mode=block`
  - `Referrer-Policy: strict-origin-when-cross-origin`
  - `Content-Security-Policy: default-src 'none'; frame-ancestors 'none'`
  - `Strict-Transport-Security: max-age=31536000; includeSubDomains`
  - `Permissions-Policy: geolocation=(), microphone=(), camera=()`
- Documentation in `docs/advanced/mcp-server.md` with examples

New providers:

- Mistral: `MistralService` provider class with OpenAI-compatible API, embeddings via the `mistral-embed` model, default model `mistral-small-latest`, configured via the `MISTRAL_API_KEY` environment variable
- HuggingFace: `HuggingFaceService` provider class extending `BaseService`, routed through `router.huggingface.co/v1`, default model `Qwen/Qwen2.5-72B-Instruct`, configured via `HUGGINGFACE_API_KEY`
- Groq: routed through `api.groq.com`, default model `llama-3.3-70b-versatile`, configured via `GROQ_API_KEY`

Embeddings:

- `aiEmbedding()` BIF for generating text embeddings
- `AiEmbeddingRequest` class to model embedding requests
- `embeddings()` method in the `IAiService` interface
- OpenAI `text-embedding-3-small` and `text-embedding-3-large` models
- Gemini `text-embedding-004` model
- New events: `onAIEmbeddingRequest`, `onAIEmbeddingResponse`, `beforeAIEmbedding`, `afterAIEmbedding`
- `examples/embeddings-example.bx` demonstrating practical use cases

Message builder:

- `format(bindings)` - Formats messages with provided bindings
- `render()` - Renders messages using stored bindings
- `bind( bindings )` - Binds variables to be used in message formatting
- `getBindings()`, `setBindings( bindings )` - Getters and setters for bindings

Other additions and improvements:

- `AIService()` BIF: API key resolution via `<PROVIDER>_API_KEY` from system settings
- `Tool.getArgumentsSchema()` method to retrieve the arguments schema for use by any provider
- New logging settings: `logRequestToConsole`, `logResponseToConsole`
- `ChatMessage` helper method `getNonSystemMessages()` to retrieve all messages except the system message
- `ChatRequest` now has the original `ChatMessage` as a property, so you can access the original message in the request
- The Claude provider now uses `claude-sonnet-4-0` as its default model
- Global service settings: `logRequest`, `logResponse`, `timeout`, `returnFormat`, so you can control the behavior of the services globally
- `onAIResponse` event

**Fixed**

- Version pinned to `1.0.0` in the `box.json` file by accident
- `settings` in the module config