Limeblock Docs

Exporting Limeblock AI

This guide explains how to integrate Limeblock AI responses into custom interfaces. You'll learn to make direct API requests and display formatted responses anywhere in your application.

Summary:
Limeblock uses a headless implementation for AI endpoints. This means you control the entire UI and UX of your AI interface, including how user prompts are collected.

All you do is send a request to our API endpoint -
https://limeblockbackend.onrender.com/api/ai_action/

with the necessary parameters, and we handle the AI processing. You can use any frontend framework or even plain JavaScript to implement this.

Core Integration Workflow

  1. Obtain your API keys and endpoint identifiers
  2. Construct the API request payload
  3. Send request to ai_action endpoint
  4. Handle and display the response

Step 1: API Configuration

First, get your credentials from the Limeblock dashboard:

  • API Key: Found in Settings → API Keys
  • Endpoint ID: From your endpoint tree (click ID icon)
  • Folder ID: Parent folder of your endpoint

javascript
const API_KEY = process.env.NEXT_PUBLIC_LIMEBLOCK_API_KEY || "lime_YOUR_API_KEY";

Security Note: Always store API keys in environment variables. Never hardcode in client-side applications.

Step 2: Constructing the Payload

The payload defines what you send to the AI endpoint. Here's the complete structure:

javascript
const payload = {
  prompt: "User's question or command",      // User input
  endpoint_id: "endpoint_1745333462205",     // From your endpoint tree
  folder_id: "folder_1741747825504",         // Parent folder ID
  api_key: API_KEY,                          // Your Limeblock API key
  formatting_needed: true,                   // Request formatted response
  context: {                                 // Additional parameters
    user_id: "user_12345",
    board_id: "board_67890",
    // Add custom context here
  }
};

Payload Properties Explained

  • prompt (required): User's input text that triggers the AI action
  • endpoint_id (required): Specific endpoint ID from your dashboard
  • folder_id (required): Parent folder containing the endpoint
  • api_key (required): Your authentication key
  • formatting_needed (optional): Set to true when displaying responses. Returns a formatted string in formatted_response
  • context (optional): Additional parameters your endpoints require. Must match your backend's expected structure exactly.

Step 3: Sending the Request

Use this function to communicate with the AI endpoint:

javascript
const handleAIAction = async (userPrompt) => {
  const payload = {
    prompt: userPrompt,
    endpoint_id: "endpoint_1745333462205",
    folder_id: "folder_1741747825504",
    api_key: API_KEY,
    formatting_needed: true,   // Request formatted string response
    context: { 
      user_id: "current_user_id",
      board_id: "your_board_id"
    },
  };

  try {
    const response = await fetch("https://limeblockbackend.onrender.com/api/ai_action/", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(payload),
    });

    // Treat non-2xx statuses as errors instead of parsing them as success
    if (!response.ok) {
      throw new Error(`Request failed with status ${response.status}`);
    }

    return await response.json();
  } catch (error) {
    console.error("Request failed:", error);
  }
};
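
For example, you can call this helper directly and log whichever response field comes back (the prompt text here is purely illustrative):

javascript
// Example usage: the prompt text is illustrative
handleAIAction("Summarize the tasks on my board").then((result) => {
  console.log(result?.formatted_response || result?.response);
});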

Step 4: Handling the Response

The API returns JSON with these key properties:

javascript
// Successful response structure
{
  "response": "Raw AI response text",
  "formatted_response": "Formatted string response (Markdown or plain text)",
  "context": {
    "user_id": "current_user_id",
    "board_id": "your_board_id"
  },
  "endpoint_id": "endpoint_1745333462205"
}

Key Response Properties

  • response: Raw AI-generated text (always present)
  • formatted_response: Formatted string response (only present when formatting_needed: true)
  • context: Echoes back the context you sent
  • endpoint_id: Confirms which endpoint processed the request

Display Tip: The formatted_response is a string, not HTML. Use <pre> with whitespace-pre-wrap for proper formatting, or convert Markdown to HTML using libraries like react-markdown.
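
For instance, if your formatted responses contain Markdown, a minimal rendering sketch using the react-markdown library (assuming it is installed in your project) could look like this:

jsx
import ReactMarkdown from "react-markdown";

// Render the formatted string as Markdown instead of plain preformatted text
const FormattedResponse = ({ text }) => <ReactMarkdown>{text}</ReactMarkdown>;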

Another Tip: You don't need formatted_response at all if you only use the AI to commit an action and never display the response. Formatted responses are most useful for GET requests where the AI acts as a search or summarization tool.

Note: Generating formatted_response consumes tokens, so large AI responses will increase your usage costs.

Complete React Implementation

jsx
import React, { useState } from 'react';

const AIAssistant = () => {
  const [prompt, setPrompt] = useState('');
  const [response, setResponse] = useState(null);
  
  const handleSubmit = async () => {
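    // handleAIAction is the request helper defined in Step 3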
    const result = await handleAIAction(prompt);
    setResponse(result);
  };

  return (
    <div className="assistant-container">
      <input 
        value={prompt} 
        onChange={(e) => setPrompt(e.target.value)}
        placeholder="Ask me anything..."
      />
      <button onClick={handleSubmit}>Submit</button>
      
      {response && (
        <div className="ai-response">
          {/* formatted_response is a string - render as text */}
          <pre className="whitespace-pre-wrap">
            {response.formatted_response || response.response}
          </pre>
        </div>
      )}
    </div>
  );
};

Context Parameters Deep Dive

The context object is critical for personalizing responses. Follow these rules:

  • Use exact same property names as defined in your endpoint schema
  • Maintain consistent data types (string, number, boolean)
  • Include all required parameters your backend expects
  • Nested objects must match your endpoint's expected structure

Example Context: If your endpoint requires { user: { id: string }, product: { sku: string } }, ensure your payload matches this exact structure.
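
As a sketch, a payload whose context matches that hypothetical schema (the ID and SKU values below are illustrative) would look like this:

javascript
// Context shaped to match the hypothetical { user: { id }, product: { sku } } schema
const payload = {
  prompt: "What's the status of this product?",
  endpoint_id: "endpoint_1745333462205",
  folder_id: "folder_1741747825504",
  api_key: API_KEY,
  context: {
    user: { id: "user_12345" },     // nested object, exact property name
    product: { sku: "SKU-001" },    // string value, as the schema expects
  },
};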

Security Best Practices

  • Always serve requests from your backend in production (see the proxy sketch after this list)
  • Validate and sanitize all user inputs before sending
  • Use short-lived tokens for sensitive operations
  • Implement rate limiting on your endpoints
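
As a minimal sketch of the first point above, assuming a Next.js pages/ API route and a server-only LIMEBLOCK_API_KEY environment variable (adapt the pattern to whatever server framework you use), a proxy that keeps the key out of the browser could look like this:

javascript
// pages/api/ai-action.js: hypothetical proxy route that keeps the API key server-side
export default async function handler(req, res) {
  if (req.method !== "POST") {
    return res.status(405).json({ error: "Method not allowed" });
  }

  const { prompt, context } = req.body;

  const payload = {
    prompt,
    endpoint_id: "endpoint_1745333462205",
    folder_id: "folder_1741747825504",
    api_key: process.env.LIMEBLOCK_API_KEY, // server-only, never exposed to the client
    formatting_needed: true,
    context,
  };

  const response = await fetch("https://limeblockbackend.onrender.com/api/ai_action/", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });

  const data = await response.json();
  return res.status(response.status).json(data);
}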

Troubleshooting

Missing formatted_response

Ensure you set formatting_needed: true in the payload.

Unexpected string formatting

Response strings may contain Markdown. Use a Markdown parser if needed

Endpoint errors

Verify endpoint_id and folder_id match your dashboard exactly

Context mismatches

Check property names and structure match endpoint requirements