# FlowEditor Executor Documentation
Welcome to the FlowEditor Executor documentation! FlowEditor Executor is a Python backend that compiles and executes workflows with LangGraph, providing flexible, scalable workflow orchestration.
## What is FlowEditor Executor?
FlowEditor Executor is a workflow engine that allows you to create, compile, and execute complex workflows with various node types including Python code execution, LLM calls, conditional branching, human-in-the-loop interactions, and more.
## Key Features
- 🔄 Workflow Engine - LangGraph-powered execution with edge-based flow control
- 🧩 Multiple Node Types - Python code, LLM calls, REST APIs, switches, subworkflows, and more
- 🐛 Visual Debugging - Breakpoint-based debugging with step-through execution
- 👤 Human-in-the-Loop - Pause workflows for human input with typed form fields
- 🚀 Streaming Execution - Real-time updates via Server-Sent Events and gRPC
- 💾 State Management - Persistent state with user and global storage
- 📊 Observability - Comprehensive execution tracking and analytics
- 🔐 Authentication - JWT, Keycloak, and simple auth support
- ☁️ Cloud Ready - Docker and Kubernetes deployment with Helm charts
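The streaming feature above delivers real-time updates over Server-Sent Events. As a minimal sketch of how a client might consume such a stream, here is a small stdlib-only SSE frame parser; the event names and payload shape are illustrative assumptions, not the executor's documented event schema:

```python
import json

def parse_sse(stream_lines):
    """Parse Server-Sent Events lines into (event, data) pairs.

    SSE frames consist of 'event:' and 'data:' lines terminated
    by a blank line, per the HTML living standard.
    """
    event, data = "message", []
    for line in stream_lines:
        line = line.rstrip("\n")
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and data:
            yield event, "\n".join(data)
            event, data = "message", []

# Hypothetical frames, shaped as an executor *might* emit them.
sample = [
    "event: node_started\n",
    'data: {"node_id": "process"}\n',
    "\n",
    "event: node_finished\n",
    'data: {"node_id": "process", "status": "ok"}\n',
    "\n",
]
for event, payload in parse_sse(sample):
    print(event, json.loads(payload)["node_id"])
```

In a real client, `stream_lines` would be the line iterator of an HTTP response held open by the server.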
## Quick Start
Get started with FlowEditor Executor in minutes:
```bash
# Clone the repository
git clone https://github.com/your-org/floweditor-executor.git
cd floweditor-executor

# Install dependencies
pip install -r requirements.txt

# Run the server
python -m app.start_servers
```
Visit our Installation Guide for detailed setup instructions.
## Documentation Structure
- **Getting Started**: Quick installation, first workflow creation, and core concepts
- **Architecture**: System design, workflow engine internals, and technical concepts
- **Nodes**: Complete reference for all node types and their capabilities
- **API Reference**: REST and gRPC API documentation with authentication
- **Features**: Debug mode, HITL, LLM history, templating, and more
- **Deployment**: Docker, Kubernetes, cloud providers, and configuration
- **Development**: Contributing, testing, code standards, and debugging
- **Integrations**: Telegram, MCP tools, OpenSearch, LLM providers
## Example Workflow

Here's a simple workflow that uses an LLM to process user input:

```json
{
  "id": "simple-llm-workflow",
  "name": "Simple LLM Workflow",
  "start_node": "trigger",
  "nodes": [
    {
      "id": "trigger",
      "name": "Start",
      "type": "ManualTrigger"
    },
    {
      "id": "process",
      "name": "Process with LLM",
      "type": "CallLLM",
      "config": {
        "system_prompt": "You are a helpful assistant.",
        "user_prompt": "Analyze this: {{data.user_input}}"
      }
    }
  ],
  "edges": [
    {
      "id": "e1",
      "source": "trigger",
      "target": "process"
    }
  ]
}
```
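A definition like this can be sanity-checked on the client side before submission, and the `{{data.user_input}}` placeholder illustrates the templating syntax. The following stdlib-only sketch checks that edges and `start_node` reference defined nodes and resolves `{{data.*}}` placeholders against a state dict; the checks and the substitution rule are illustrative assumptions, not the engine's actual validation or template engine:

```python
import re

# Condensed version of the example workflow above.
wf = {
    "id": "simple-llm-workflow",
    "start_node": "trigger",
    "nodes": [{"id": "trigger"}, {"id": "process"}],
    "edges": [{"id": "e1", "source": "trigger", "target": "process"}],
}

def validate_workflow(wf):
    """Return a list of referential-integrity errors (empty if OK)."""
    node_ids = {n["id"] for n in wf["nodes"]}
    errors = []
    if wf["start_node"] not in node_ids:
        errors.append(f"start_node {wf['start_node']!r} is not a defined node")
    for edge in wf["edges"]:
        for end in ("source", "target"):
            if edge[end] not in node_ids:
                errors.append(f"edge {edge['id']!r}: unknown {end} {edge[end]!r}")
    return errors

def render_prompt(template, state):
    """Resolve {{data.key}} placeholders against a state dict."""
    return re.sub(
        r"\{\{data\.(\w+)\}\}",
        lambda m: str(state.get(m.group(1), m.group(0))),
        template,
    )

print(validate_workflow(wf))  # -> [] (no errors)
print(render_prompt("Analyze this: {{data.user_input}}",
                    {"user_input": "quarterly sales"}))
```

Unknown placeholders are left untouched here; a real engine might instead raise an error for them.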
Learn more in our First Workflow tutorial.
## Quick Links
- Installation Guide - Get FlowEditor Executor running
- Core Concepts - Understand workflows, nodes, and state
- Node Reference - Complete node type documentation
- API Reference - REST and gRPC APIs
- Debug Mode - Visual workflow debugging
- Human-in-the-Loop - Interactive workflows
- Docker Deployment - Deploy with Docker
- Kubernetes Deployment - Deploy to Kubernetes
## Recent Updates
Check our Changelog for the latest features and improvements.
Ready to build workflows? Start with our Getting Started Guide →