by aditya vadaganadam
This n8n template turns chat questions into structured financial reports using Gemini and posts them to a Discord channel via webhook. Ask about tickers, sectors, or theses (e.g., "NVDA long‑term outlook?" or "Gold ETF short‑term drivers?") and receive a concise, shareable report.

## Good to know

- Not financial advice: use for insights only and verify independently.
- Model availability can vary by region. If you see "model not found," it may be geo‑restricted.
- Costs depend on model and tokens. Check current Gemini pricing for updates.
- Discord messages are limited to ~2000 characters per post; long reports may need splitting.
- Rate limits: Discord webhooks are rate‑limited; add short waits for bursts.

## How it works

- **Chat Trigger** collects the user's question (public chat supported when the workflow is activated).
- **Conversation Memory** keeps a short window of recent messages to maintain context.
- **Connect Gemini** provides the LLM (e.g., gemini‑2.5‑flash‑lite) and parameters (temperature, tokens).
- **Agent (agent1)** applies a financial analysis System Message to produce structured insights.
- **Structured Output Parser** enforces a simple JSON schema: idea (one‑line thesis) + analysis (Markdown sections).
- **Code** formats a Discord‑ready Markdown report (title, question, executive summary, sections, disclaimer).
- **Edit Fields** maps the formatted report to a clean content field.
- **Discord Webhook** posts the final report to your channel.

## How to use

- Start with the built‑in Chat Trigger: click Open chat, ask a question, and verify the Discord post.
- Replace or augment with a Cron or Webhook trigger for scheduled or programmatic runs.
- For richer context, add HTTP Request nodes (prices, news, filings) and pass summaries to the agent.

## Requirements

- n8n instance with internet access
- Google AI (Gemini) API key
- Discord server with a webhook URL

## Customising this workflow

- **System Message:** Adjust tone, depth, risk profile, and required sections (Summary, Drivers, Risks, Metrics, Next Steps, Takeaway).
- **Model settings:** Switch models or tune temperature/tokens in Connect Gemini.
- **Schema:** Extend the parser and formatter with fields like drivers[], risks[], or metrics{}.
- **Formatting:** Edit the Code node to change headings, emojis, disclaimers, or add timestamps.
- **Operations:** Add retries, message splitting for long outputs, and rate‑limit handling for Discord.
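A minimal sketch of what the report-formatting and message-splitting step could look like in a Code node — the `idea` and `analysis` fields follow the schema above, while the upstream node name (`Chat Trigger`) and the splitting helper are illustrative assumptions, not the template's exact code:

```javascript
// n8n Code node (JavaScript) – illustrative sketch, not the template's exact code.
// Assumes the agent output provides `idea` (one-line thesis) and `analysis` (Markdown).
const { idea, analysis } = $input.first().json.output;
const question = $('Chat Trigger').first().json.chatInput; // hypothetical upstream node name

const report = [
  `**📊 Financial Report**`,
  `**Question:** ${question}`,
  `**Executive summary:** ${idea}`,
  ``,
  analysis,
  ``,
  `_Not financial advice. Verify independently._`,
].join('\n');

// Discord webhooks reject messages over ~2000 characters, so split on line breaks.
const LIMIT = 1900; // leave headroom below the 2000-character cap
const chunks = [];
let current = '';
for (const line of report.split('\n')) {
  if ((current + '\n' + line).length > LIMIT) {
    chunks.push(current);
    current = line;
  } else {
    current = current ? current + '\n' + line : line;
  }
}
if (current) chunks.push(current);

// One item per Discord message; a downstream node posts each `content` field.
return chunks.map(content => ({ json: { content } }));
```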
by DIGITAL BIZ TECH
# AI-Powered Website Chatbot with Google Drive Knowledge Base

## Overview

This workflow combines website chatbot intelligence with automated document ingestion and vectorization — enabling live Q&A from both chat input and processed Google Drive files. It uses Mistral AI for OCR + embeddings, and Qdrant for vector search.

## Chatbot Flow

- **Trigger:** When chat message received, or a webhook, depending on how the chatbot is deployed
- **Model:** OpenAI gpt-4.1-mini
- **Memory:** Simple Memory (Buffer Window)
- **Vector Search Tool:** Qdrant Vector Store
- **Embeddings:** Mistral Cloud
- **Agent:** website chat agent
  - Responds based on chatdbtai Supabase content
  - Enforces brand tone and informative, structured responses
  - Integrates with both the embedded chat UI and the webhook

## Document → Knowledge Base Pipeline

Triggered manually to keep the vector store up-to-date.

**Steps**
1. **Google Drive (brand folder)** → Fetch files from folder Website kb (ID: 1o3DK9Ceka5Lqb8irvFSfEeB8SVGG_OL7)
2. **Loop Over Items** → For each file:
   - Set metadata
   - Download file
   - Upload to Mistral for OCR
   - Get Signed URL
   - Run OCR extraction (mistral-ocr-latest)
   - If OCR succeeds → pass to chunking pipeline; else → skip and continue
3. **Chunking Logic (Code node)**
   - Splits the document into 1,000-character JSON chunks
   - Adds metadata (source, char positions, file ID)
4. **Default Data Loader + Text Splitter** → Prepares chunks for embedding
5. **Embeddings (Mistral Cloud)** → Generates embeddings for text chunks
6. **Qdrant Vector Store (Insert mode)** → Saves embeddings into the docragtestkb collection
7. **Wait** → Optional delay between batches

## Integrations Used

| Service | Purpose | Credential |
|----------|----------|------------|
| Google Drive | File source | Google Drive account 6 rn dbt |
| Mistral Cloud | OCR + embeddings | Mistral Cloud account 2 dbt rn |
| Qdrant | Vector storage | QdrantApi account |
| OpenAI | Chat model | OpenAi account 8 dbt digi |

## Agent System Prompt Summary

> "You are the official AI assistant for this website. Use chatdbtai only as your knowledge source. Respond conversationally, list offerings clearly, link blogs, and say 'I couldn't find that on this site' if no match."

## Key Features

✅ Automated OCR + chunking → vectorization
✅ Persistent memory for chat sessions
✅ Multi-channel (Webhook + Embedded Chat)
✅ Fully brand-guided, structured responses
✅ Live data retrieval from Qdrant vector store

## Summary

> A unified workflow that turns brand files + web content into a knowledge base powering an intelligent chatbot — capable of responding to visitors in real time, powered by Mistral, OpenAI, and Qdrant.

## Need Help or More Workflows?

Want to customize this workflow for your business or integrate it with your existing tools? Our team at Digital Biz Tech can tailor it precisely to your use case, from automation logic to AI-powered enhancements.

💡 We can help you set it up for free — from connecting credentials to deploying it live.

Contact: shilpa.raju@digitalbiz.tech
Website: https://www.digitalbiz.tech
LinkedIn: https://www.linkedin.com/company/digital-biz-tech/
You can also DM us on LinkedIn for any help.
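A minimal sketch of the 1,000-character chunking logic described above, assuming the OCR step hands the Code node an extracted `text` field plus file metadata (`fileId`, `fileName` are assumed names, not the template's exact fields):

```javascript
// n8n Code node (JavaScript) – illustrative chunking sketch.
// Assumes each input item carries the OCR text as `text` and file metadata
// as `fileId` / `fileName`; the real field names may differ.
const CHUNK_SIZE = 1000;
const results = [];

for (const item of $input.all()) {
  const text = item.json.text || '';
  const { fileId, fileName } = item.json; // assumed metadata fields
  for (let start = 0; start < text.length; start += CHUNK_SIZE) {
    const end = Math.min(start + CHUNK_SIZE, text.length);
    results.push({
      json: {
        pageContent: text.slice(start, end),
        metadata: {
          source: fileName,
          file_id: fileId,
          char_start: start,
          char_end: end,
        },
      },
    });
  }
}

return results;
```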
by Trung Tran
# Create AI-Powered Chatbot for Candidate Evaluation on Slack

> This workflow connects a Slack chatbot with AI agents and Google Sheets to automate candidate resume evaluation. It extracts resume details, identifies the applied job from the message, fetches the correct job description, and provides a summarized evaluation via Slack and a tracking sheet. Perfect for HR teams using Slack.

## Who's it for

This workflow is designed for:
- **HR teams, recruiters, and hiring managers**
- Working in software or tech companies using Slack, Google Sheets, and n8n
- Who want to automate candidate evaluation based on uploaded profiles and applied job positions

## How it works / What it does

This workflow is triggered when a Slack user mentions the HR bot and attaches a candidate profile PDF. The workflow performs the following steps:

1. **Trigger from Slack Mention** — A user mentions the bot in Slack with a message like: `@HRBot Please evaluate this candidate for the AI Engineer role.` (with PDF attached)
2. **Input Validation** — If no file is attached, the bot replies: "Please upload the candidate profile file before sending the message."
3. **Extract Candidate Profile** — Downloads the attached PDF from Slack and uses Extract from File to parse the resume into text
4. **Profile Analysis (AI Agent)** — Sends the resume text and message to the Profile Analyzer Agent, which identifies the candidate name, email, and summary, the applied position (from the message), and looks up the Job Description PDF URL using Google Sheets
5. **Job Description Retrieval** — Downloads and parses the matching JD PDF
6. **HR Evaluation (AI Agent)** — Sends both the candidate profile and job description to the HR Expert Agent and receives a summarized fit evaluation and insights
7. **Output and Logging** — Sends the evaluation result back to Slack in the original thread and updates a Google Sheet with evaluation data for tracking

## How to set up

1. **Slack Setup**
   - Create a Slack bot and install it into your workspace
   - Enable the app_mention event and generate a bot token
   - Connect Slack to n8n using Slack Bot credentials
2. **Google Sheets Setup**
   - Create a sheet mapping Position Title → Job Description URL
   - Create another sheet for logging evaluation results
3. **n8n Setup**
   - Add a Webhook Trigger for Slack mentions
   - Connect Slack, Google Sheets, and GPT-4 credentials
   - Set up agents (Profile Analyzer Agent, HR Expert Agent) with appropriate prompts
4. **Deploy & Test**
   - Mention your bot in Slack with a message and file
   - Confirm the reply and the entry in the evaluation tracking sheet

## Requirements

- n8n (self-hosted or cloud)
- Slack App with Bot Token
- OpenAI or Azure OpenAI account (for GPT-4)
- Google Sheets (2 sheets: job mapping + evaluation log)
- Candidate profiles in PDF format
- Defined job titles and descriptions

## How to customize the workflow

You can easily adapt this workflow to your team's needs:

| Customization Area | How to Customize |
|--------------------------|----------------------------------------------------------------------------------|
| Job Mapping Source | Replace Google Sheet with Airtable or Notion DB |
| JD Format | Use Markdown or inline descriptions instead of PDF |
| Evaluation Output Format | Change from Slack message to Email or Notion update |
| HR Agent Prompt | Customize to match your company tone or include scoring rubrics |
| Language Support | Add support for bilingual input/output (e.g., Vietnamese & English) |
| Workflow Trigger | Trigger from slash command or form instead of @mention |
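A hedged sketch of the input-validation step, assuming the Slack app_mention event reaches the webhook as `body.event` and that attachments appear under an optional `files` array — the exact payload shape depends on your Slack app and trigger configuration:

```javascript
// n8n Code node (JavaScript) – illustrative input-validation sketch.
// Assumes the Slack event arrives as `body.event` and that attached files,
// when present, appear under `event.files`; adjust to your trigger's payload.
const event = $input.first().json.body?.event ?? {};
const files = event.files ?? [];
const pdf = files.find(f => (f.mimetype || '').includes('pdf'));

return [{
  json: {
    channel: event.channel,
    thread_ts: event.thread_ts || event.ts,
    text: event.text,
    hasPdf: Boolean(pdf),
    // A downstream IF node branches on `hasPdf`; the false branch replies with
    // "Please upload the candidate profile file before sending the message."
    fileUrl: pdf ? pdf.url_private : null,
  },
}];
```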
by Guillaume Duvernay
This template introduces a revolutionary approach to automated web research. Instead of a rigid workflow that can only find one type of information, this system uses a "thinker" and "doer" AI architecture. It dynamically interprets your plain-English research request, designs a custom spreadsheet (CSV) with the perfect columns for your goal, and then deploys a web-scraping AI to fill it out. It's like having an expert research assistant who not only finds the data you need but also builds the perfect container for it on the fly. Whether you're looking for sales leads, competitor data, or market trends, this workflow adapts to your request and delivers a perfectly structured, ready-to-use dataset every time.

## Who is this for?

- **Sales & marketing teams:** Generate targeted lead lists, compile competitor analysis, or gather market intelligence with a simple text prompt.
- **Researchers & analysts:** Quickly gather and structure data from the web for any topic without needing to write custom scrapers.
- **Entrepreneurs & business owners:** Perform rapid market research to validate ideas, find suppliers, or identify opportunities.
- **Anyone who needs structured data:** Transform unstructured, natural language requests into clean, organized spreadsheets.

## What problem does this solve?

- **Eliminates rigid, single-purpose workflows:** This workflow isn't hardcoded to find just one thing. It dynamically adapts its entire research plan and data structure based on your request.
- **Automates the entire research process:** It handles everything from understanding the goal and planning the research to executing the web search and structuring the final data.
- **Bridges the gap between questions and data:** It translates your high-level goal (e.g., "I need sales leads") into a concrete, structured spreadsheet with all the necessary columns (Company Name, Website, Key Contacts, etc.).
- **Optimizes for cost and efficiency:** It intelligently uses a combination of deep-dive and standard web searches from Linkup.so to gather high-quality initial results and then enrich them cost-effectively.

## How it works (The "Thinker & Doer" Method)

The process is cleverly split into two main phases (a hedged example of the "thinker" plan is sketched after this section):

1. **The "Thinker" (AI Planner)**
   - You submit a research request via the built-in form (e.g., "Find 50 US-based fashion companies for a sales outreach campaign").
   - The first AI node acts as the "thinker." It analyzes your request and determines the optimal structure for your final spreadsheet.
   - It dynamically generates a plan, which includes a discoveryQuery to find the initial list, an enrichmentQuery to get details for each item, and the JSON schemas that define the exact columns for your CSV.
2. **The "Doer" (AI Researcher)** — The rest of the workflow is the "doer," which executes the plan.
   - **Discovery:** It uses a powerful "deep search" with Linkup.so to execute the discoveryQuery and find the initial list of items (e.g., the 50 fashion companies).
   - **Enrichment:** It then loops through each item in the list. For each one, it performs a fast and cost-effective "standard search" with Linkup to execute the enrichmentQuery, filling in all the detailed columns defined by the "thinker."
   - **Final Output:** The workflow consolidates all the enriched data and converts it into a final CSV file, ready for download or further processing.

## Setup

1. **Connect your AI provider:** In the OpenAI Chat Model node, add your AI provider's credentials.
2. **Connect your Linkup account:** In the two Linkup (HTTP Request) nodes, add your Linkup API key (free account at linkup.so). We recommend creating a "Generic Credential" of type "Bearer Token" for this. Linkup offers €5 of free credits monthly, which is enough for 1k standard searches or 100 deep queries.
3. **Activate the workflow:** Toggle the workflow to "Active." You can now use the form to submit your first research request!

## Taking it further

- **Add a custom dashboard:** Replace the form trigger and final CSV output with a more polished user experience. For example, build a simple web app where users can submit requests and download their completed research files.
- **Make it company-aware:** Modify the "thinker" AI's prompt to include context about your company. This will allow it to generate research plans that are automatically tailored to finding leads or data relevant to your specific products and services.
- **Add an AI summary layer:** After the CSV is generated, add a final AI node to read the entire file and produce a high-level summary, such as "Here are the top 5 leads to contact first and why," turning the raw data into an instant, actionable report.
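A hedged illustration of the kind of plan object the "thinker" might emit — only `discoveryQuery` and `enrichmentQuery` are named in the description above; the schema keys shown here are assumptions about the CSV column definitions, not the template's exact output:

```javascript
// Illustrative "thinker" output shape – an assumed schema, not the template's exact one.
const plan = {
  discoveryQuery: "List 50 US-based fashion companies suitable for B2B sales outreach",
  enrichmentQuery: "For {{companyName}}, find the official website, headquarters city, employee count, and a key sales contact",
  discoverySchema: {
    type: "object",
    properties: {
      companies: {
        type: "array",
        items: { type: "object", properties: { companyName: { type: "string" } } },
      },
    },
  },
  enrichmentSchema: {
    type: "object",
    properties: {
      website: { type: "string" },
      headquarters: { type: "string" },
      employeeCount: { type: "string" },
      keyContact: { type: "string" },
    },
  },
};

return [{ json: plan }];
```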
by Cheng Siong Chin
## How It Works

This workflow automates financial transaction surveillance by monitoring multiple payment systems, analyzing transaction patterns with AI, and triggering instant fraud alerts. Designed for finance teams, compliance officers, and fintech operations, it solves the challenge of real-time fraud detection across high-volume transaction streams without manual oversight.

The system continuously fetches transactions from banking APIs and payment gateways via scheduled triggers or webhooks. Each transaction flows through validation layers checking for irregular amounts, velocity patterns, and geolocation anomalies. AI models analyze transaction metadata against historical patterns to calculate fraud risk scores. High-risk transactions trigger immediate alerts to designated teams via Gmail and Slack, while audit trails are logged to Google Sheets for compliance documentation. Approved transactions proceed to reconciliation, aggregating financial reports automatically. This eliminates delayed fraud discovery, reduces false positives through intelligent scoring, and ensures regulatory compliance through comprehensive audit logging.

## Setup Steps

1. Configure banking API credentials for transaction access
2. Set up webhook endpoints for real-time transaction notifications
3. Add OpenAI API key for fraud pattern analysis and risk scoring
4. Configure NVIDIA NIM API for advanced anomaly detection models
5. Set Gmail OAuth credentials for automated fraud alert delivery
6. Connect Slack workspace and specify alert channels for urgent notifications
7. Link Google Sheets for transaction logging and compliance audit trails

## Prerequisites

- Active accounts for payment processors (Stripe, PayPal) or banking APIs (Plaid)

## Use Cases

- Real-time credit card transaction monitoring with instant fraud blocks

## Customization

- Adjust fraud risk scoring thresholds based on business risk tolerance

## Benefits

- Reduces fraud detection time from hours to seconds through real-time monitoring.
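A minimal sketch of how a rule-based pre-scoring step could combine the amount, velocity, and geolocation checks described above before the AI scoring stage — the thresholds and field names are assumptions, not values taken from the workflow:

```javascript
// n8n Code node (JavaScript) – illustrative rule-based pre-scoring sketch.
// Field names (amount, country, homeCountry, txCountLastHour) and thresholds
// are assumptions; tune them to your own risk tolerance.
return $input.all().map(item => {
  const tx = item.json;
  let score = 0;
  const reasons = [];

  if (tx.amount > 10000) { score += 40; reasons.push('irregular amount'); }
  if (tx.txCountLastHour > 10) { score += 30; reasons.push('high velocity'); }
  if (tx.country && tx.homeCountry && tx.country !== tx.homeCountry) {
    score += 30; reasons.push('geolocation anomaly');
  }

  return {
    json: {
      ...tx,
      ruleScore: Math.min(score, 100),
      ruleReasons: reasons,
      // Downstream: an IF node routes high ruleScore items to Gmail/Slack alerts,
      // everything else continues to AI scoring and reconciliation.
    },
  };
});
```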
by Growth AI
# N8N UGC Video Generator - Setup Instructions

## Transform Product Images into Professional UGC Videos with AI

This powerful n8n workflow automatically converts product images into professional User-Generated Content (UGC) videos using cutting-edge AI technologies including Gemini 2.5 Flash, Claude 4 Sonnet, and VEO3 Fast.

## Who's it for

- **Content creators** looking to scale video production
- **E-commerce businesses** needing authentic product videos
- **Marketing agencies** creating UGC campaigns for clients
- **Social media managers** requiring quick video content

## How it works

The workflow operates in 4 distinct phases:

- **Phase 0: Setup** - Configure all required API credentials and services
- **Phase 1: Image Enhancement** - AI analyzes and optimizes your product image
- **Phase 2: Script Generation** - Creates authentic dialogue scripts based on your input
- **Phase 3: Video Production** - Generates and merges professional video segments

## Requirements

### Essential Services & APIs
- **Telegram Bot Token** (create via @BotFather)
- **OpenRouter API** with Gemini 2.5 Flash access
- **Anthropic API** for Claude 4 Sonnet
- **KIE.AI Account** with VEO3 Fast access
- **N8N Instance** (cloud or self-hosted)

### Technical Prerequisites
- Basic understanding of n8n workflows
- API key management experience
- Telegram bot creation knowledge

## How to set up

### Step 1: Service Configuration
1. **Create Telegram Bot**
   - Message @BotFather on Telegram
   - Use the /newbot command and follow instructions
   - Save the bot token for later use
2. **OpenRouter Setup**
   - Sign up at openrouter.ai
   - Purchase credits for Gemini 2.5 Flash access
   - Generate and save API key
3. **Anthropic Configuration**
   - Create an account at console.anthropic.com
   - Add credits to your account
   - Generate Claude API key
4. **KIE.AI Access**
   - Register at kie.ai
   - Subscribe to the VEO3 Fast plan
   - Obtain bearer token

### Step 2: N8N Credential Setup
Configure these credentials in your n8n instance:
- **Telegram API** — Credential Name: telegramApi, Bot Token: your Telegram bot token
- **OpenRouter API** — Credential Name: openRouterApi, API Key: your OpenRouter key
- **Anthropic API** — Credential Name: anthropicApi, API Key: your Anthropic key
- **HTTP Bearer Auth** — Credential Name: httpBearerAuth, Token: your KIE.AI bearer token

### Step 3: Workflow Configuration
1. **Import the Workflow** — Copy the provided JSON workflow and import it into your n8n instance
2. **Update Telegram Token** — Locate the "Edit Fields" node and replace "Your Telegram Token" with your actual bot token
3. **Configure Webhook URLs** — Ensure all Telegram nodes have proper webhook configurations and test webhook connectivity

### Step 4: Testing & Validation
1. **Test Individual Nodes** — Verify each API connection, check credential configurations, and confirm node responses
2. **End-to-End Testing** — Send a test image to your Telegram bot, follow the complete workflow process, and verify the final video output

## How to customize the workflow

### Modify Image Enhancement Prompts
- Edit the HTTP Request node for Gemini
- Adjust the prompt text to match your style preferences
- Test different aspect ratios (current: 1:1 square format)

### Customize Script Generation
- Modify the Basic LLM Chain node prompt
- Adjust video segment duration (current: 7-8 seconds each)
- Change dialogue style and tone requirements

### Video Generation Settings
- Update VEO3 API parameters in the HTTP Request1 node
- Modify aspect ratio (current: 16:9)
- Adjust model settings and seeds for consistency

### Output Customization
- Change the final video format in the MediaFX node
- Modify Telegram message templates
- Add additional processing steps before delivery

## Workflow Operation

### Phase 1: Image Reception and Enhancement
1. User sends a product image via Telegram
2. System prompts for enhancement instructions
3. Gemini AI analyzes and optimizes the image
4. Enhanced square-format image returned

### Phase 2: Analysis and Script Creation
1. System requests a dialogue concept from the user
2. AI analyzes image details and environment
3. Claude generates a realistic 2-segment script
4. Scripts respect the physical constraints of the original image

### Phase 3: Video Generation
1. Two separate videos generated using VEO3
2. System monitors generation status
3. Videos merged into a single flowing sequence
4. Final video delivered via Telegram

## Troubleshooting

### Common Issues
- **API Rate Limits**: Implement delays between requests
- **Webhook Failures**: Verify URL configurations and SSL certificates
- **Video Generation Timeouts**: Increase wait node duration
- **Credential Errors**: Double-check all API keys and permissions

### Error Handling
The workflow includes automatic error detection:
- Failed video generation triggers an error message
- Status checking prevents infinite loops
- Alternative outputs for different scenarios

## Advanced Features

### Batch Processing
- Modify the trigger to handle multiple images
- Add queue management for high-volume usage
- Implement user session tracking

### Custom Branding
- Add watermarks or logos to generated videos
- Customize color schemes and styling
- Include brand-specific dialogue templates

### Analytics Integration
- Track usage metrics and success rates
- Monitor API costs and optimization opportunities
- Implement user behavior analytics

## Cost Optimization

### API Usage Management
- Monitor token consumption across services
- Implement caching for repeated requests
- Use lower-cost models for testing phases

### Efficiency Improvements
- Optimize image sizes before processing
- Implement smart retry mechanisms
- Use batch processing where possible

This workflow transforms static product images into engaging, professional UGC videos automatically, saving hours of manual video creation while maintaining high-quality output perfect for social media platforms.
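A hedged sketch of what the Phase 3 status-checking step could look like — the response fields (`status`, `videoUrl`) and the retry cap are assumptions, not the actual KIE.AI/VEO3 API contract:

```javascript
// n8n Code node (JavaScript) – illustrative status-check sketch for Phase 3.
// Assumes the preceding HTTP Request polls the video job and returns something like
// { status: "pending" | "processing" | "success" | "failed", videoUrl?: string }.
// Real KIE.AI / VEO3 field names may differ – check the provider's docs.
const job = $input.first().json;
const attempts = (job.attempts ?? 0) + 1;
const MAX_ATTEMPTS = 20; // guards against infinite polling loops

let route;
if (job.status === 'success' && job.videoUrl) {
  route = 'done';
} else if (job.status === 'failed' || attempts >= MAX_ATTEMPTS) {
  route = 'error'; // a downstream node sends the failure message to Telegram
} else {
  route = 'wait'; // a downstream Wait node pauses, then the loop polls again
}

return [{ json: { ...job, attempts, route } }];
```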
by Jameson Kanakulya
# Automated Content Page Generator with AI, Tavily Research, and Supabase Storage

> ⚠️ Self-Hosted Disclaimer: This template requires a self-hosted n8n installation and external service credentials (OpenAI, Tavily, Google Drive, NextCloud, Supabase). It cannot run on n8n Cloud due to dependency requirements.

## Overview

Transform simple topic inputs into professional, multi-platform content automatically. This workflow combines AI-powered content generation with intelligent research and seamless storage integration to create website content, blog articles, and landing pages optimized for different audiences.

## Key Features

- **Automated Research**: Uses Tavily's advanced search to gather relevant, up-to-date information
- **Multi-Platform Content**: Generates optimized content for websites, blogs, and landing pages
- **Image Management**: Downloads from Google Drive and uploads to NextCloud with public URL generation
- **Database Integration**: Stores all content in Supabase for easy retrieval
- **Error Handling**: Built-in error management workflow for reliability
- **Content Optimization**: AI-driven content strategy with trend analysis and SEO optimization

## Required Services & APIs

Core Services:
- **n8n**: Self-hosted instance (required)
- **OpenAI**: GPT-4 API access for content generation
- **Tavily**: Research API for content discovery
- **Google Drive**: Image storage and retrieval
- **Google Sheets**: Content input and workflow triggering
- **NextCloud**: Image hosting and public URL generation
- **Supabase**: Database storage for generated content

## Setup Instructions

### Prerequisites
Before setting up this workflow, ensure you have:
- Self-hosted n8n installation
- API credentials for all required services
- Database table created in Supabase

### Step 1: Service Account Configuration

**OpenAI Setup**
1. Create an OpenAI account at platform.openai.com
2. Generate an API key from the API Keys section
3. In n8n, create new OpenAI credentials using your API key
4. Test the connection to ensure GPT-4 access

**Tavily Research Setup**
1. Sign up at tavily.com
2. Get your API key from the dashboard
3. Add Tavily credentials in n8n
4. Configure search depth to "advanced" for best results

**Google Services Setup**
1. Create a Google Cloud Project
2. Enable the Google Drive API and Google Sheets API
3. Create OAuth2 credentials
4. Configure Google Drive and Google Sheets credentials in n8n
5. Share your input spreadsheet with the service account

**NextCloud Setup**
1. Install NextCloud or use a hosted solution
2. Create an application password for API access
3. Configure NextCloud credentials in n8n
4. Create an /images/ folder for content storage

**Supabase Setup**
1. Create a Supabase project at supabase.com
2. Create a table with the following structure:

```sql
CREATE TABLE works (
  id SERIAL PRIMARY KEY,
  title TEXT NOT NULL,
  content TEXT NOT NULL,
  image_url TEXT,
  category TEXT,
  created_at TIMESTAMP DEFAULT NOW()
);
```

3. Get the project URL and service key from settings
4. Configure Supabase credentials in n8n

### Step 2: Google Sheets Input Setup

Create a Google Sheets document with the following columns:
- **TITLE**: Topic or title for content generation
- **IMAGE_URL**: Google Drive sharing URL for the associated image

Example format:

```
TITLE                         | IMAGE_URL
AI Chatbot Implementation     | https://drive.google.com/file/d/your-file-id/view
Digital Marketing Trends 2024 | https://drive.google.com/file/d/another-file-id/view
```

### Step 3: Workflow Import and Configuration
1. Import the workflow JSON into your n8n instance
2. Configure all credential connections:
   - Link OpenAI credentials to the "OpenAI_GPT4_Model" node
   - Link Tavily credentials to the "Tavily_Research_Agent" node
   - Link Google credentials to the "Google_Sheets_Trigger" and "Google_Drive_Image_Downloader" nodes
   - Link NextCloud credentials to the "NextCloud_Image_Uploader" and "NextCloud_Public_URL_Generator" nodes
   - Link Supabase credentials to the "Supabase_Content_Storage" node
3. Update the Google Sheets Trigger node:
   - Set your spreadsheet ID in the documentId field
   - Configure polling frequency (default: every minute)
4. Test each node connection individually before activating

### Step 4: Error Handler Setup (Optional)
The workflow references an error handler workflow (GWQ4UI1i3Z0jp3GF). Either:
- Create a simple error notification workflow with this ID
- Remove the error handling references if not needed
- Update the workflow ID to match your error handler

### Step 5: Workflow Activation
1. Save all node configurations
2. Test the workflow with a sample row in your Google Sheet
3. Verify content generation and storage in Supabase
4. Activate the workflow for continuous monitoring

## How It Works

### Workflow Process
1. **Trigger**: Google Sheets monitors for new rows with content topics
2. **Research**: Tavily searches for 3 relevant articles about the topic
3. **Content Generation**: AI agent creates multi-platform content (website, blog, landing page)
4. **Content Cleaning**: Text processing removes formatting artifacts
5. **Image Processing**: Downloads the image from Google Drive, uploads it to NextCloud
6. **URL Generation**: Creates public sharing links for images
7. **Storage**: Saves the final content package to the Supabase database

### Content Output Structure
Each execution generates:
- **Optimized Title**: SEO-friendly, platform-appropriate headline
- **Multi-Platform Content**:
  - Website content (professional, authority-building)
  - Blog content (educational, SEO-optimized)
  - Landing page content (conversion-focused)
- **Category Classification**: Automated content categorization
- **Image Assets**: Processed and publicly accessible images

## Customization Options

### Content Strategy Modification
- Edit the AI agent's system message to change content style
- Adjust character limits for different platform requirements
- Modify category classifications for your industry

### Research Parameters
- Change Tavily search depth (basic, advanced)
- Adjust the number of research sources (1-10)
- Modify search topic focus

### Storage Configuration
- Update the Supabase table structure for additional fields
- Change NextCloud folder organization
- Modify image naming conventions

## Troubleshooting

### Common Issues
**Workflow not triggering:**
- Check Google Sheets permissions
- Verify polling frequency settings
- Ensure the spreadsheet format matches requirements

**Content generation errors:**
- Verify OpenAI API key and credits
- Check GPT-4 model access
- Review system message formatting

**Image processing failures:**
- Confirm Google Drive sharing permissions
- Check NextCloud storage space and permissions
- Verify file formats are supported

**Database storage issues:**
- Validate the Supabase table structure
- Check API key permissions
- Review field mapping in the storage node

### Performance Optimization
- Adjust polling frequency based on your content volume
- Monitor API usage to stay within limits
- Consider batch processing for high-volume scenarios

## Support and Updates

This template is designed for self-hosted n8n environments and requires technical setup. For issues:
- Check n8n community forums
- Review service-specific documentation
- Test individual nodes in isolation
- Monitor execution logs for detailed error information
by Trung Tran
# 📘 Code of Conduct Q&A Slack Chatbot with RAG Powered

> Empower employees to instantly access and understand the company's Code of Conduct via a Slack chatbot, powered by Retrieval-Augmented Generation (RAG) and LLMs.

## 🧑‍💼 Who's it for

This workflow is designed for:
- **HR and compliance teams** to automate policy-related inquiries
- **Employees** who want quick answers to Code of Conduct questions directly inside Slack
- **Startups or enterprises** that need internal compliance self-service tools powered by AI

## ⚙️ How it works / What it does

This RAG-powered Slack chatbot answers user questions based on your uploaded Code of Conduct PDF using GPT-4 and embedded document chunks. Here's the flow:

1. **Receive Message from Slack**: A webhook triggers when a message is posted in Slack.
2. **Check if it's a valid query**: Filters out non-user messages (e.g., bot mentions).
3. **Run Agent with RAG**: Uses GPT-4 with the Query Data Tool to retrieve relevant document chunks and returns a well-formatted, context-aware answer.
4. **Send Response to Slack**: Fetches user info and posts the answer back in the same channel.
5. **Document Upload Flow**: HR can upload the PDF Code of Conduct file. It's parsed, chunked, embedded using OpenAI, and stored for future query retrieval. A backup copy is saved to Google Drive.

## 🛠️ How to set up

1. **Prepare your environment:**
   - Slack Bot token & webhook configured (sample Slack app manifest: https://wisestackai.s3.ap-southeast-1.amazonaws.com/slack_bot_manifest.json)
   - OpenAI API key (for GPT-4 & embedding)
   - Google Drive credentials (optional, for backup)
2. **Upload the Code of Conduct PDF:**
   - Use the designated node to upload your document (sample file: https://wisestackai.s3.ap-southeast-1.amazonaws.com/20220419-ingrs-code-of-conduct-policy-en.pdf)
   - This triggers chunking → embedding → data store.
3. **Deploy the chatbot:**
   - Host the webhook and connect it to your Slack app.
   - Share the command format with employees (e.g., @CodeBot Can I accept gifts from partners?)
4. **Monitor and iterate:**
   - Improve chunk size or the embedding model if queries aren't accurate.
   - Review unanswered queries to enhance coverage.

## 📋 Requirements

- n8n (self-hosted or Cloud)
- Slack App (with chat:write, users:read, commands)
- OpenAI account (embedding + GPT-4 access)
- Google Drive integration (for backups)
- Uploaded Code of Conduct in PDF format

## 🧩 How to customize the workflow

| What to Customize | How to Do It |
|-----------------------------|------------------------------------------------------------------------------|
| 🔤 Prompt style | Edit the System & User prompts inside the Code Of Conduct Agent node |
| 📄 Document types | Upload additional policy PDFs and tag them differently in metadata |
| 🤖 Agent behavior | Tune GPT temperature or replace with a different LLM |
| 💬 Slack interaction | Customize message formats or trigger phrases |
| 📁 Data Store engine | Swap to Pinecone, Weaviate, Supabase, etc. depending on use case |
| 🌐 Multilingual support | Preprocess text and support locale detection via Slack metadata |
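A hedged sketch of the "valid query" check in step 2 above, assuming the Slack event reaches the webhook as `body.event` — exact field names depend on your Slack app and trigger configuration:

```javascript
// n8n Code node (JavaScript) – illustrative validity check.
// Assumes the Slack event arrives as `body.event`; adjust to your trigger's payload.
const event = $input.first().json.body?.event ?? {};

const isUserMessage =
  !event.bot_id &&                      // ignore messages posted by bots
  !event.subtype &&                     // ignore edits, joins, and other subtypes
  typeof event.text === 'string' &&
  event.text.trim().length > 0;

return [{
  json: {
    isUserMessage,
    question: (event.text || '').replace(/<@[^>]+>/g, '').trim(), // strip the bot mention
    channel: event.channel,
    user: event.user,
  },
}];
```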
by Trung Tran
# Multi-Agent Architecture Free Bootstrap Template for Beginners

Free template to learn and reuse a multi-agent architecture in n8n. The company metaphor: a CEO (orchestrator) delegates to Marketing, Operations, and Finance to produce a short sales-season plan, export it to PDF, and share it.

## Who's it for

- Builders who want a clear, minimal pattern for multi-agent orchestration in n8n.
- Teams demoing/teaching agent collaboration with one coordinator + three specialists.
- Anyone needing a repeatable template to generate plans from multiple "departments".

## How it works / What it does

1. **Trigger (Manual)** — Click Execute workflow to start.
2. **Edit Fields** — Provide brief inputs (company, products, dates, constraints, channels, goals).
3. **CEO Agent (Orchestrator)** — Reads the brief, calls 3 tool agents once, merges results, resolves conflicts.
4. **Marketing Agent** — Proposes top campaigns + channels + content calendar.
5. **Operations Agent** — Outlines inventory/staffing readiness, fulfillment steps, risks.
6. **Finance Agent** — Suggests pricing/discounts, budget split, targets.
7. **Compose Document** — CEO produces Markdown; a node converts it to a Google Doc → PDF.
8. **Share** — Upload the PDF to Slack (or Drive) for review.

## Outputs

- **Markdown plan** with sections (Summary, Timeline, Marketing, Ops, Pricing, Risks, Next Actions).
- **Compact JSON** for automation (campaigns, budget, dates, actions) — a hedged example is sketched below.
- **PDF** file for stakeholders.

## How to set up

1. **Add credentials**
   - OpenAI (or your LLM provider) for all agents.
   - Google (Drive/Docs) to create the document and export the PDF.
   - Slack (optional) to upload/share the PDF.
2. **Map nodes (suggested)**
   - When clicking 'Execute workflow' → Edit Fields (form with: company, products, audience, start_date, end_date, channels, constraints, metrics).
   - CEO Agent (AI Tool Node) → calls Marketing Agent, Operations Agent, Finance Agent (AI Tool Nodes).
   - Configure metadata (doc title from company + window).
   - Create document file (Google Docs API) with the CEO's Markdown.
   - Convert to PDF (export).
   - Upload a file (Slack) to share.
3. **Prompts (drop-in)**
   - CEO (system): orchestrate 3 tools; request concise JSON + Markdown; merge & resolve; output sections + JSON.
   - Marketing / Operations / Finance (system): each returns a small JSON per its scope (campaigns/calendar; staffing/steps/risks; discounts/budget/targets).
4. **Test** — Run once; verify the PDF and Slack message.

## Requirements

- n8n (current version with AI Tool Node).
- LLM credentials (e.g., OpenAI).
- Google credentials for Docs/Drive (to create & export).
- Optional Slack bot token for file uploads.

## How to customize the workflow

- **Swap roles**: Replace departments (e.g., Product, Legal, Support) or add more tool agents.
- **Change outputs**: Export to DOCX/HTML/Notion; add a cover page; attach brand styles.
- **Approval step**: Insert Slack "Send & Wait" before PDF generation for review/edits.
- **Data grounding**: Add RAG (Sheets/DB/Docs) so agents cite inventory, pricing, or past campaign KPIs.
- **Automation JSON**: Extend the schema to match your CRM/PM tool and push next_actions into Jira/Asana.
- **Scheduling**: Replace the manual trigger with a cron (weekly/monthly planning).
- **Localization**: Add a Translation agent or set language via an input field.
- **Guardrails**: Add length limits, cost caps (max tokens), and validation on agent JSON.
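A hedged illustration of the compact automation JSON the CEO agent could be asked to emit — only campaigns, budget, dates, and actions are named in the description above; everything else here (keys, sample values) is an assumed shape, not the template's exact schema:

```javascript
// Illustrative CEO-agent output – an assumed schema, not the template's exact one.
const plan = {
  dates: { start: "2024-11-15", end: "2024-12-31" },
  campaigns: [
    { name: "Holiday Early-Bird", channel: "email", budget_share: 0.3 },
    { name: "Gift Guide Ads", channel: "paid_social", budget_share: 0.5 },
  ],
  budget: { total: 20000, currency: "USD" },
  actions: [
    { owner: "Marketing", task: "Finalize content calendar", due: "2024-11-10" },
    { owner: "Operations", task: "Confirm warehouse staffing", due: "2024-11-12" },
    { owner: "Finance", task: "Approve discount tiers", due: "2024-11-08" },
  ],
};

return [{ json: plan }];
```

Keeping the JSON this flat makes it easy to push `actions` into a project tool (Jira/Asana) as suggested in the customization list above.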
by Dzaky Jaya
This n8n workflow demonstrates how to configure an AI Agent for financial research, especially for IDX data through the Sectors App API.

## Use cases

- Research the stock market in Indonesia.
- Analyze the performance of companies belonging to certain subsectors.
- Compare financial metrics between companies, e.g., BBCA and BBRI.
- Provide technical analysis for a certain ticker's stock movement.
- And many more — all from a conversational agent chat UI.

## Main components

- **Input-n8nChatNative**: handles and processes input from the native n8n chat UI
- **Input-TelegramBot**: handles and processes input from a Telegram Bot
- **Input-WebUI (Webhook)**: handles and processes input from a hosted Web UI through a webhook
- **Main Agent**: processes raw user queries and delegates tasks to specialized agents if needed
- **Spec Agent - Sectors App**: makes requests to the Sectors App API to get real-time financial data listed on IDX from the available endpoints
- **Spec Agent - Web Search**: performs web searches via Google Grounding (Gemini API) and Tavily Search
- **Vector Document Processing**: processes documents uploaded by the user into embeddings and a vector store

## How it works

1. User queries may be received from multiple platforms (we use three here: Telegram, hosted Web UI, and the native n8n chat UI).
2. If the user also uploads a document, it is processed and stored in the vector store.
3. The request is sent to the Main Agent to process the query.
4. The Main Agent decides which tasks to delegate to a Specialized Agent if needed.
5. The result is then sent back to the user on the originating platform.

## How to use

You need these APIs:
- Gemini API: get it free from https://aistudio.google.com/
- Tavily API: get it free from https://www.tavily.com/
- Sectors App API: get it from https://sectors.app/api/

You can optionally change the model or add a fallback model to handle token limits, because the workflow may consume quite a lot of tokens.
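A minimal sketch of how the three input channels could be normalized into one payload before the Main Agent — the Web UI field names are assumptions, while the Telegram and n8n chat fields follow their usual trigger payloads:

```javascript
// n8n Code node (JavaScript) – illustrative input-normalization sketch.
// Whichever trigger fired (Telegram, Web UI webhook, or native chat), downstream
// nodes only see { sessionId, text, platform }. Web UI field names are assumptions.
const item = $input.first().json;

let platform, text, sessionId;
if (item.message?.chat) {            // Telegram Trigger payload
  platform = 'telegram';
  text = item.message.text;
  sessionId = String(item.message.chat.id);
} else if (item.body?.message) {     // hosted Web UI webhook (assumed shape)
  platform = 'web';
  text = item.body.message;
  sessionId = item.body.sessionId;
} else {                             // native n8n chat UI
  platform = 'n8n-chat';
  text = item.chatInput;
  sessionId = item.sessionId;
}

return [{ json: { platform, text, sessionId } }];
```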
by Cheng Siong Chin
## How It Works

A scheduled trigger initiates automated retrieval of TOTO/4D data, including both current and historical records. The datasets are merged and validated to ensure structural consistency before branching into parallel analytical pipelines. One track performs pattern mining and anomaly detection, while the other generates statistical and time-series forecasts. Results are then routed to an AI agent that integrates multi-model insights, evaluates prediction confidence, and synthesizes the final output. The system formats the results and delivers them through the selected export channel.

## Setup Instructions

1. **Scheduler Config**: Adjust the trigger frequency (daily or weekly).
2. **Data Sources**: Configure API endpoints or database connectors for TOTO/4D retrieval.
3. **Data Mapping**: Align and map column structures for both the TOTO and 4D datasets in the merge nodes.
4. **AI Integration**: Insert the OpenAI API key and connect the required model nodes.
5. **Export Paths**: Select and configure output channels (email, Google Sheets, webhook, or API).

## Prerequisites

- TOTO/4D historical data source with API access
- OpenAI API key (GPT-4 recommended)
- n8n environment with HTTP/database connectivity
- Basic time series analysis knowledge

## Use Cases

- **Traders**: Pattern recognition for draw prediction with confidence scoring
- **Analysts**: Multivariate forecasting across cycles with validation

## Customization

- **Data**: Swap TOTO/4D with stock prices, crypto, sensors, or any time series
- **Models**: Replace OpenAI with Claude, local LLMs, or HuggingFace models

## Benefits

- **Automation**: Runs 24/7 without manual intervention
- **Intelligence**: Ensemble approach prevents overfitting and single-model bias
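A minimal sketch of the kind of pattern-mining step the first pipeline could perform — a simple number-frequency count over the merged history; the workflow's actual statistical methods are not specified, so the field names and logic here are illustrative only:

```javascript
// n8n Code node (JavaScript) – illustrative frequency analysis for the pattern-mining track.
// Assumes each input item carries a draw as { drawDate, numbers: [...] };
// the real dataset columns may differ.
const counts = {};
let totalDraws = 0;

for (const item of $input.all()) {
  const numbers = item.json.numbers ?? [];
  totalDraws += 1;
  for (const n of numbers) {
    counts[n] = (counts[n] ?? 0) + 1;
  }
}

// Rank numbers by how often they have appeared across the merged history.
const ranked = Object.entries(counts)
  .map(([number, count]) => ({ number: Number(number), count, frequency: count / totalDraws }))
  .sort((a, b) => b.count - a.count);

return [{ json: { totalDraws, hot: ranked.slice(0, 10), cold: ranked.slice(-10) } }];
```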
by AlphaInsider
# AlphaInsider Telegram Chat Bot

Automate trading on AlphaInsider by monitoring Telegram messages. Uses AI to analyze signals and execute trades, create posts, or answer questions.

## How It Works

**Message Flow:**
Telegram → Route (DM/Channel) → Detect Type (Text/Voice) → Transcribe (if voice) → Global Settings → Fetch Positions → AI Analysis → Route Action (Trade/Post/Q&A) → Execute → Reply (if DM)

**Three AI Actions:**
- **Trade** - Executes orders on AlphaInsider based on clear trading signals
- **Post** - Creates audience posts on AlphaInsider from commentary or analysis
- **Q&A** - Responds to direct questions about positions or trading

The AI agent analyzes messages with current portfolio context, determines intent, and routes to the appropriate action.

## Prerequisites

- **n8n instance** (self-hosted or cloud)
- **AlphaInsider account** with API access - Sign up
- **OpenAI API key** - Get key
- **Telegram Bot** - Create via @BotFather

## Quick Setup

### 1. Configure Credentials

**Telegram API:**
- Create a bot with @BotFather and copy the auth token
- Add the token to the Telegram Channel Listener node
- (Optional) Add the bot as admin to a channel with no permissions

**OpenAI API:**
- Get an API key from the OpenAI Platform
- Add it to the OpenAI Model and Transcribe Voice Message nodes

**AlphaInsider API:**
- Go to Developer Settings
- Click the n8n button and copy the API key
- Add it to the Get Positions, Search Stocks, and Create Orders nodes

### 2. Configure Workflow

**Global Settings Node:**
- **strategy_id** (required): Copy from the AlphaInsider strategy URL (e.g. URL: https://alphainsider.com/strategy/niAlE-cMI8TdsYQllZLmf, Strategy ID: niAlE-cMI8TdsYQllZLmf)
- **whitelist** (optional): Restrict trading to specific securities using ["SYMBOL:EXCHANGE"] format, or leave as [] for all

**Channel Check Node (optional):**
- Set channel_id to monitor a specific channel (e.g., -1001234567890)
- Leave the default to monitor all messages

### 3. Activate

Toggle Active in the workflow editor. Your bot is now monitoring Telegram.

## Features

- **Real-time monitoring** via webhook for instant processing
- **Dual input modes**: Direct messages and channel posts
- **Voice message support**: Automatic transcription with OpenAI Whisper
- **Portfolio context**: Considers existing positions before trading
- **Leverage support**: Up to 2x (200%) portfolio leverage
- **Security whitelist**: Optional restriction to approved securities
- **Interactive replies**: Responds to DMs with confirmations or answers

## Trading Logic

**Trade Signals (action = trade):**
- **Long**: buy, bullish, loading up, moon, strong buy
- **Short**: sell, dumping, overvalued, bearish, exiting
- **Close**: take profit, exiting, holding cash, getting out

**Posts (action = post):**
- Broadcasting information or analysis to followers
- No explicit trading intention

**Q&A (action = none):**
- Position queries and trading questions
- Pure observation or technical analysis
- Weak/ambiguous signals

**Allocation Rules** (a hedged validation sketch appears after the examples below):
- Percentages: 0 to 2.0 (1.5 = 150% leverage)
- Total must not exceed 2.0
- Cannot mix stocks and crypto
- Conservative approach - only acts on clear signals

## Example Usage

**Buy Signal:**
- Message: "Extremely bullish on Nvidia. Adding more NVDA"
- Current: 50% TSLA
- Result: 150% NVDA + 50% TSLA = 200%

**Reallocation:**
- Message: "Selling Tesla to buy Microsoft"
- Current: 100% TSLA
- Result: 100% MSFT (TSLA closed)

**Post to Audience:**
- Message: "Bitcoin breaking key resistance at $45k!"
- Result: Post created on AlphaInsider

**Position Query:**
- Message: "What are my current positions?"
- Reply: "Your current positions are: TSLA:NASDAQ long 100%, MSFT:NASDAQ long 50%."
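A hedged sketch of the kind of validation the Parse Stock Allocations node could apply — the rules themselves (each weight 0–2.0, total ≤ 2.0, no mixing stocks and crypto) come from the description above, but the field names and implementation are assumptions:

```javascript
// n8n Code node (JavaScript) – illustrative allocation-validation sketch.
// Field names like `allocations` and `exchange` are assumptions, not the workflow's exact code.
const { allocations = [] } = $input.first().json; // e.g. [{ symbol: 'NVDA', exchange: 'NASDAQ', weight: 1.5 }]

const errors = [];
const total = allocations.reduce((sum, a) => sum + a.weight, 0);

if (allocations.some(a => a.weight < 0 || a.weight > 2.0)) {
  errors.push('Each allocation must be between 0 and 2.0 (200%).');
}
if (total > 2.0) {
  errors.push(`Total allocation ${total.toFixed(2)} exceeds the 2.0 leverage cap.`);
}
const hasCrypto = allocations.some(a => a.exchange === 'COINBASE');
const hasStocks = allocations.some(a => a.exchange === 'NYSE' || a.exchange === 'NASDAQ');
if (hasCrypto && hasStocks) {
  errors.push('Stocks and crypto cannot be mixed in one strategy.');
}

return [{ json: { allocations, total, valid: errors.length === 0, errors } }];
```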
## Supported Exchanges

- **COINBASE** - Cryptocurrencies
- **NYSE** - New York Stock Exchange
- **NASDAQ** - NASDAQ Stock Market

## Troubleshooting

| Issue | Solution |
|-------|----------|
| Webhook not triggering | Verify bot is admin in channel with correct permissions |
| AI not trading | Ensure messages contain clear action words (buying, selling, closing) |
| Trades not executing | Verify AlphaInsider credentials and strategy_id |
| Voice messages failing | Check OpenAI API key in Transcribe Voice Message node |
| Wrong channel messages | Update channel_id in Channel Check node |
| Not replying to DMs | Verify Telegram credentials in User Reply node |

## Customization

Open the Parse Stock Allocations node to modify trading logic:
- Signal recognition keywords
- Leverage limits (default: 2.0)
- Allocation strategies
- Risk filters

## Disclaimer

Trading involves risk. Always test thoroughly before using with real money. Start with small allocations and security whitelists. Users are responsible for their own trading decisions and risk management.