by David Ashby
Complete MCP server exposing 3 Background Removal API operations to AI agents.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add Background Removal API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the Background Removal API into an MCP-compatible interface for AI agents.

• **MCP Trigger**: Serves as your server endpoint for AI agent requests
• **HTTP Request Nodes**: Handle API calls to https://api.remove.bg/v1.0
• **AI Expressions**: Automatically populate parameters via $fromAI() placeholders (see the sketch at the end of this description)
• **Native Integration**: Returns responses directly to the AI agent

Available Operations (3 total)

Account (1 endpoint)
• GET /account: Fetch Account Balance

Improve (1 endpoint)
• POST /improve: Submit Image for Improvement

Removebg (1 endpoint)
• POST /removebg: Remove Image Background

🤖 AI Integration

**Parameter Handling**: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

**Response Format**: Native Background Removal API responses with full data structure
**Error Handling**: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• **Claude Desktop**: Add the MCP server URL to its configuration
• **Cursor**: Add the MCP server SSE URL to its configuration
• **Custom AI Apps**: Use the MCP URL as a tool endpoint
• **API Integration**: Make direct HTTP calls to the MCP endpoints

✨ Benefits

• **Zero Setup**: No parameter mapping or configuration needed
• **AI-Ready**: Built-in $fromAI() expressions for all parameters
• **Production Ready**: Native n8n HTTP request handling and logging
• **Extensible**: Easily modify or add custom logic

> Free for community use! Ready to deploy in under 2 minutes.
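To make the $fromAI() mechanism concrete, here is a minimal sketch of how a parameter field inside one of the HTTP Request nodes can be populated; the parameter name image_url and its description are hypothetical examples for illustration, not values taken from this workflow:

```javascript
// Contents of an HTTP Request tool parameter field (an n8n expression).
// When an AI agent invokes the tool over MCP, the model supplies a value
// matching the key, description, and type declared here.
// 'image_url' is a hypothetical parameter name, not from this workflow.
{{ $fromAI('image_url', 'Publicly reachable URL of the image to process', 'string') }}
```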
by RedOne
AI Audio Assistant with Voice-to-Voice Response

**Who is this for?**

Businesses, customer service teams, content creators, and organizations that want to provide intelligent voice-based interactions through Telegram. Perfect for accessibility-focused services, multilingual support, or hands-free customer assistance.

**What problem does this solve?**

• Enables natural voice conversations with AI
• Breaks down language and accessibility barriers
• Provides instant voice responses to customer queries
• Reduces typing requirements for users
• Offers 24/7 voice-based customer support
• Maintains conversation context across voice interactions

**What this workflow does:**

1. Receives voice messages via a Telegram bot
2. Transcribes the audio using Deepgram's advanced speech-to-text
3. Processes the transcribed text through an AI agent with knowledge base access
4. Generates intelligent responses based on conversation context
5. Converts the AI response to natural-sounding speech using Deepgram TTS
6. Sends the audio response back to the user via Telegram
7. Maintains conversation memory for contextual interactions

🔧 Technical Architecture

**Core Components:**
• **Telegram Bot**: Receives and sends voice messages
• **Deepgram STT**: Transcribes voice to text with high accuracy
• **OpenAI GPT**: Processes queries and generates responses
• **Supabase Knowledge Base**: Stores and retrieves business information
• **Memory Management**: Maintains conversation context
• **Deepgram TTS**: Converts text responses to natural speech

**Data Flow:**
1. Voice Message → Telegram API → File Download
2. Audio File → Deepgram STT → Transcript
3. Transcript → AI Agent → Response Generation
4. Response → Deepgram TTS → Audio File
5. Audio Response → Telegram → User

🛠️ Setup Instructions

**Prerequisites**

1. Telegram Bot Token
   • Create a bot via @BotFather
   • Get the bot token and configure the webhook
2. Deepgram API Key
   • Sign up at deepgram.com
   • Get an API key for the STT and TTS services
   • Note: Currently hardcoded in the workflow
3. OpenAI API Key
   • OpenAI account with API access
   • Configure it in the OpenAI Chat Model node
4. Supabase Database
   • Create a Supabase project
   • Set up the knowledge_base table
   • Configure API credentials

**Step-by-Step Setup**

1. Configure Telegram Bot
   • Update telegramToken in the "Prepare Voice Message Data" node
   • Set the correct bot token in the Telegram nodes
   • Test bot connectivity
2. Set Up Deepgram Integration
   • Replace the API key in the "Transcribe with Deepgram" node
   • Update the TTS endpoint in the "HTTP Request" node (a request sketch appears at the end of this description)
   • Test voice transcription accuracy
3. Configure Knowledge Base

```sql
-- Create knowledge_base table in Supabase
CREATE TABLE knowledge_base (
  id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
  question TEXT NOT NULL,
  answer TEXT NOT NULL,
  category VARCHAR(100),
  keywords TEXT[],
  created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);
```

4. Customize AI Prompts
   • Update the system message in the "Telegram AI Agent" node
   • Adjust temperature and max tokens in the OpenAI model
   • Configure memory session keys
5. Test End-to-End Flow
   • Send a test voice message to the bot
   • Verify transcription accuracy
   • Check AI response quality
   • Validate audio output clarity

Configuration Options

**Voice Recognition Settings**
• **Model**: nova-2 (Deepgram's latest model)
• **Language**: English (en), can be changed
• **Smart Format**: Enabled for better punctuation

**AI Response Settings**
• **Temperature**: 0.3 (conservative responses)
• **Max Tokens**: 100 (adjust based on needs)
• **Memory**: Session-based conversation context

**Text-to-Speech Settings**
• **Model**: aura-2-thalia-en (natural female voice)
• **Alternative voices**: Available in the Deepgram TTS API
• **Audio Format**: Optimized for Telegram

Security Considerations

**API Key Management**

```javascript
// Current implementation has hardcoded tokens
// Recommended: Use environment variables
const telegramToken = process.env.TELEGRAM_BOT_TOKEN;
const deepgramKey = process.env.DEEPGRAM_API_KEY;
```

**Data Privacy**
• Voice messages are processed by external APIs
• Consider data retention policies
• Implement user consent mechanisms
• Ensure GDPR compliance if applicable

Monitoring & Analytics

**Key Metrics to Track**
• Voice message processing time
• Transcription accuracy rates
• AI response quality scores
• User engagement metrics
• Error rates and failure points

**Recommended Logging**

```javascript
// Add to workflow for monitoring
console.log({
  timestamp: new Date().toISOString(),
  user_id: userData.user_id,
  transcript_confidence: transcriptData.confidence,
  response_length: aiResponse.length,
  processing_time: processingTime
});
```

Customization Ideas

**Enhanced Features**
1. Multi-language Support
   • Add language detection
   • Support multiple TTS voices
   • Translate responses
2. Voice Commands
   • Implement wake words
   • Add voice shortcuts
   • Create voice menus
3. Advanced AI Features
   • Sentiment analysis
   • Intent classification
   • Escalation triggers
4. Integration Expansions
   • Connect to CRM systems
   • Add calendar scheduling
   • Integrate with help desk tools

**Performance Optimizations**
• Implement audio preprocessing
• Add response caching
• Optimize API call sequences
• Implement retry mechanisms

Troubleshooting

**Common Issues**

1. Voice Not Transcribing
   • Check Deepgram API key validity
   • Verify audio format compatibility
   • Test with shorter voice messages
2. Poor Audio Quality
   • Adjust TTS model settings
   • Check network connectivity
   • Verify Telegram audio limits
3. AI Responses Too Generic
   • Improve knowledge base content
   • Adjust system prompts
   • Increase the context window
4. Memory Not Working
   • Check session key configuration
   • Verify user ID extraction
   • Test conversation continuity

💡 Best Practices

**Voice Interface Design**
• Keep responses concise and clear
• Use natural speech patterns
• Avoid technical jargon
• Provide clear next steps

**Knowledge Base Management**
• Regular content updates
• Clear categorization
• Keyword optimization
• Quality assurance testing

**User Experience**
• Fast response times (<5 seconds)
• Consistent voice personality
• Graceful error handling
• Clear capability communication

Success Metrics

**Technical KPIs**
• Response time: <3 seconds average
• Transcription accuracy: >95%
• User satisfaction: >4.5/5
• Uptime: >99.5%

**Business KPIs**
• Customer query resolution rate
• Support ticket reduction
• User engagement increase
• Cost per interaction decrease

Maintenance Schedule

**Daily**
• Monitor error logs
• Check API rate limits
• Verify service uptime

**Weekly**
• Review conversation quality
• Update the knowledge base
• Analyze usage patterns

**Monthly**
• Performance optimization
• Security audit
• Feature updates
• User feedback review

Additional Resources

**Documentation Links**
• Deepgram STT API
• Deepgram TTS API
• Telegram Bot API
• OpenAI API
• Supabase Documentation

**Community Support**
• n8n Community Forum
• Telegram Bot Developers Group
• Deepgram Developer Discord
• OpenAI Developer Community

Note: This template requires active API subscriptions for the Deepgram and OpenAI services. Costs may apply based on usage volume.
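As a companion to the Deepgram setup steps above, here is a minimal sketch of the kind of HTTP call the TTS node makes, assuming Deepgram's /v1/speak endpoint and the aura-2-thalia-en model named in the settings; the sample text and the environment variable are placeholders to adapt to your node configuration:

```javascript
// Minimal sketch: request synthesized speech from Deepgram TTS.
// Assumes the DEEPGRAM_API_KEY environment variable is set (see above).
const response = await fetch(
  'https://api.deepgram.com/v1/speak?model=aura-2-thalia-en',
  {
    method: 'POST',
    headers: {
      Authorization: `Token ${process.env.DEEPGRAM_API_KEY}`,
      'Content-Type': 'application/json',
    },
    // Text to synthesize; the response body is binary audio.
    body: JSON.stringify({ text: 'Hello! How can I help you today?' }),
  }
);
const audioBuffer = Buffer.from(await response.arrayBuffer());
// audioBuffer can then be sent back to the user via the Telegram node.
```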
by DataMinex
Real-Time Flight Data Analytics Bot with Dynamic Chart Generation via Telegram

Template Overview

This advanced n8n workflow creates an intelligent Telegram bot that transforms raw CSV flight data into stunning, interactive visualizations. Users can generate professional charts on demand through a conversational interface, making data analytics accessible to anyone via messaging.

Key Innovation: Combines real-time data processing, the Chart.js visualization engine, and Telegram's messaging platform to deliver instant business intelligence insights.

What This Template Does

Transform your flight booking data into actionable insights with four powerful visualization types:

• **Bar Charts**: Top 10 busiest airlines by flight volume
• 🥧 **Pie Charts**: Flight duration distribution (short/medium/long-haul)
• 🍩 **Doughnut Charts**: Price range segmentation with average pricing
• **Line Charts**: Price trend analysis across flight durations

Each chart includes auto-generated insights, percentages, and key business metrics delivered instantly to users' phones.

Technical Architecture

**Core Components**
1. Telegram Webhook Trigger: Captures user interactions and button clicks
2. Smart Routing Engine: Conditional logic for command detection and chart selection
3. CSV Data Pipeline: File reading → parsing → JSON transformation
4. Chart Generation Engine: JavaScript-powered data processing with Chart.js
5. Image Rendering Service: QuickChart API for high-quality PNG generation (see the rendering sketch at the end of this description)
6. Response Delivery: Binary image transmission back to Telegram

**Data Flow Architecture**

User Input → Command Detection → CSV Processing → Data Aggregation → Chart Configuration → Image Generation → Telegram Delivery

🛠️ Setup Requirements

**Prerequisites**
• **n8n instance** (self-hosted or cloud)
• **Telegram Bot Token** from @BotFather
• **CSV dataset** with flight information
• **Internet connectivity** for the QuickChart API

**Dataset Source**

This template uses the Airlines Flights Data dataset from GitHub: Airlines Flights Data by Rohit Grewal

**Required Data Schema**

Your CSV file should contain these columns:

airline,flight,source_city,departure_time,arrival_time,duration,price,class,destination_city,stops

**File Structure**

/data/
└── flights.csv (download from the GitHub dataset above)

⚙️ Configuration Steps

**1. Telegram Bot Setup**
1. Create a new bot via @BotFather on Telegram
2. Copy your bot token
3. Configure the Telegram Trigger node with your token
4. Set the webhook URL in your n8n instance

**2. Data Preparation**
1. Download the dataset from Airlines Flights Data
2. Upload the CSV file to /data/flights.csv in your n8n instance
3. Ensure UTF-8 encoding
4. Verify the column headers match the dataset schema
5. Test file accessibility from n8n

**3. Workflow Activation**
1. Import the workflow JSON
2. Configure all Telegram nodes with your bot token
3. Test the /start command
4. Activate the workflow

🔧 Technical Implementation Details

**Chart Generation Process**

Bar Chart Logic:

```javascript
// Aggregate airline counts
const airlineCounts = {};
flights.forEach(flight => {
  const airline = flight.airline || 'Unknown';
  airlineCounts[airline] = (airlineCounts[airline] || 0) + 1;
});

// Generate Chart.js configuration
const chartConfig = {
  type: 'bar',
  data: { labels, datasets },
  options: { responsive: true, plugins: {...} }
};
```

Dynamic Color Schemes:
• Bar Charts: Professional blue gradient palette
• Pie Charts: Duration-based color coding (light → dark blue)
• Doughnut Charts: Price-tier specific colors (green → purple)
• Line Charts: Trend-focused red gradient with smooth curves

**Performance Optimizations**
• Efficient Data Processing: Single-pass aggregations with O(n) complexity
• Smart Caching: QuickChart handles image caching automatically
• Minimal Memory Usage: Stream processing for large datasets
• Error Handling: Graceful fallbacks for missing data fields

**Advanced Features**

Auto-Generated Insights:
• Statistical calculations (percentages, averages, totals)
• Trend analysis and pattern detection
• Business intelligence summaries
• Contextual recommendations

User Experience Enhancements:
• Reply keyboards for easy navigation
• Visual progress indicators
• Error recovery mechanisms
• Mobile-optimized chart dimensions (800x600px)

Use Cases & Business Applications

**Airlines & Travel Companies**
• **Fleet Analysis**: Monitor airline performance and market share
• **Pricing Strategy**: Analyze competitor pricing across routes
• **Operational Insights**: Track duration patterns and efficiency

**Data Analytics Teams**
• **Self-Service BI**: Enable non-technical users to generate reports
• **Mobile Dashboards**: Access insights anywhere via Telegram
• **Rapid Prototyping**: Quick data exploration without complex tools

**Business Intelligence**
• **Executive Reporting**: Instant charts for presentations
• **Market Research**: Compare industry trends and benchmarks
• **Performance Monitoring**: Track KPIs in real time

Customization Options

**Adding New Chart Types**
1. Create a new Switch condition
2. Add a corresponding data processing node
3. Configure the Chart.js options
4. Update the user interface menu

**Data Source Extensions**
• Replace CSV with database connections
• Add real-time API integrations
• Implement data refresh mechanisms
• Support multiple file formats

**Visual Customizations**

```javascript
// Custom color palette
backgroundColor: ['#your-colors'],

// Advanced styling
borderRadius: 8,
borderSkipped: false,

// Animation effects
animation: { duration: 2000, easing: 'easeInOutQuart' }
```

Security & Best Practices

**Data Protection**
• Validate CSV input format
• Sanitize user inputs
• Implement rate limiting
• Secure file access permissions

**Error Handling**
• Graceful degradation for API failures
• User-friendly error messages
• Automatic retry mechanisms
• Comprehensive logging

Expected Outputs

**Sample Generated Insights**
• "✈️ Vistara leads with 350+ flights, capturing 23.4% market share"
• "Long-haul flights dominate at 61.1% of total bookings"
• "💰 Budget category (₹0-10K) represents 47.5% of all bookings"
• "Average prices peak at ₹14K for 6-8 hour duration flights"

**Performance Metrics**
• **Response Time**: <3 seconds for chart generation
• **Image Quality**: 800x600px high-resolution PNG
• **Data Capacity**: Handles 10K+ records efficiently
• **Concurrent Users**: Scales with n8n instance capacity

Getting Started

1. Download the workflow JSON
2. Import it into your n8n instance
3. Configure your Telegram bot credentials
4. Upload your flight data CSV
5. Test with the /start command
6. Deploy and share with your team

💡 Pro Tips

• **Data Quality**: Clean data produces better insights
• **Mobile First**: Charts are optimized for mobile viewing
• **Batch Processing**: Handles large datasets efficiently
• **Extensible Design**: Easy to add new visualization types

Ready to transform your data into actionable insights? Import this template and start generating professional charts in minutes!
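To illustrate the image-rendering step referenced above, here is a minimal sketch of posting a Chart.js configuration to QuickChart and receiving a PNG, assuming the public quickchart.io endpoint; the airline labels and counts are placeholders:

```javascript
// Minimal sketch: render a Chart.js config as a PNG via QuickChart.
const chartConfig = {
  type: 'bar',
  data: {
    labels: ['Vistara', 'Air India', 'IndiGo'],            // placeholder airlines
    datasets: [{ label: 'Flights', data: [350, 300, 280] }], // placeholder counts
  },
};

const res = await fetch('https://quickchart.io/chart', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  // Width/height match the mobile-optimized dimensions mentioned above.
  body: JSON.stringify({ chart: chartConfig, width: 800, height: 600, format: 'png' }),
});
const png = Buffer.from(await res.arrayBuffer()); // send as binary to Telegram
```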
by Davide
This workflow optimizes the management of inquiries received through a contact form (Contact Form 7 - CF7 plugin) on a WordPress site, automating the process of classification, response drafting, and data storage. It is particularly useful for businesses that receive multiple daily inquiries and want to improve their efficiency in managing customer communications.

Benefits:

✅ **Automation & Speed**: Reduces the time needed to handle inquiries manually.
✅ **Better Email Management**: Ensures every message receives a timely and accurate response.
✅ **Customization**: The generated draft can be edited before sending, maintaining a personal touch.
✅ **Inquiry History**: Storing data in Google Sheets allows for easy tracking of customer interactions.
✅ **Easy Integration**: Works seamlessly with Contact Form 7 without complex configurations.

How It Works

1. **Form Submission Handling**: The workflow starts with a WordPress form submission captured via a webhook. The form data (first name, last name, email, phone, and message) is extracted and structured using the "Set Fields" node.
2. **Message Classification**: The submitted message is classified into predefined categories (e.g., "Product Info," "Order Info," or "Other") using the "Message Classifier" node, powered by Google Gemini.
3. **Automated Email Drafting**: Based on the classification, the workflow generates a professional email draft using one of three "Email Writer" nodes (for Product, Order, or Other requests). Each node uses Google Gemini to craft a personalized response with a structured format (subject and body).
4. **Email Draft Creation**: The drafted email is saved as a Gmail draft addressed to the appropriate department, including the original form data for context.
5. **Data Logging**: All submissions, along with their classifications and email drafts, are logged in a Google Sheets spreadsheet for record-keeping and further action.

Set Up Steps

1. **Install WordPress Plugin**: Install the "CF7 to Webhook" plugin on WordPress and configure it to send form submissions to the n8n webhook URL.
2. **Configure Webhook in n8n**: Set up the "From Wordpress" webhook node in n8n to receive POST requests from the WordPress form.
3. **Google Gemini Integration**: Ensure the Google Gemini nodes are properly authenticated with the correct API credentials.
4. **Gmail and Google Sheets Setup**: Authenticate the Gmail and Google Sheets nodes with the appropriate OAuth2 credentials and specify the target spreadsheet and sheet name.
5. **Customize Classification Categories**: Adjust the categories in the "Message Classifier" node to match your business needs.
6. **Test the Workflow**: Trigger a test form submission to verify that the workflow processes the data correctly, classifies the message, generates an email draft, and logs the data.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
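For reference, the POST body n8n receives from the CF7 to Webhook plugin follows your form's field names; the sketch below uses Contact Form 7's default naming convention (your-name, your-email, your-message) plus two assumed custom fields, so verify the exact keys against a test submission before mapping them in the "Set Fields" node:

```javascript
// Hypothetical example of a CF7 webhook payload; keys depend on your form.
const sampleSubmission = {
  'your-name': 'Jane',                  // CF7 default field name
  'your-lastname': 'Doe',               // assumed custom field
  'your-email': 'jane@example.com',     // CF7 default field name
  'your-phone': '+39 333 1234567',      // assumed custom field
  'your-message': 'Hi, I would like more information about product X.',
};
```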
by Ranjan Dailata
Who this is for

The Crunchbase B2B Lead Discovery Pipeline is designed for sales teams, B2B marketers, business analysts, and data operations teams who need a reliable way to extract, structure, and summarize company information from Crunchbase to fuel lead generation and market intelligence.

This workflow is ideal for:
• **Sales Development Reps (SDRs)**: Needing structured leads from Crunchbase
• **Marketing Analysts**: Generating segmented outreach lists
• **Growth Teams**: Identifying trending B2B startups
• **RevOps Teams**: Automating company research pipelines
• **Data Teams**: Consolidating insights into Google Sheets for dashboards

What problem is this workflow solving?

Manual extraction of company data from Crunchbase is time-consuming, inconsistent, and often lacks the contextual summary required for sales enablement or growth targeting. This workflow automates the extraction, transformation, summarization, and delivery of Crunchbase company data into structured formats, making it instantly usable for B2B targeting and analysis.

It solves:
• The difficulty of scaling lead discovery from Crunchbase
• The need to summarize raw textual content for quick insights
• The lack of integration between web scraping, LLM processing, and storage

What this workflow does

• **Markdown to Textual Data Extractor**: Takes raw scraped markdown from Crunchbase and converts it into readable plain text using a basic LLM chain
• **Structured Data Extraction**: Applies a parsing model (OpenAI) to extract structured fields such as company name, funding rounds, industry tags, location, and founding year
• **Summarization Chain**: Generates an executive summary from the raw Crunchbase text using a summarization prompt template
• **Send to Google Sheets**: Adds the structured data and summary to a Google Sheet for team access and further processing
• **Persist to Disk**: Saves both raw and structured data files locally for archiving or further use
• **Webhook Notification**: Sends a structured payload to a webhook endpoint (e.g., Slack, CRM, internal tools) with lead insights

Pre-conditions

• You need a Bright Data account and the setup described in the "Setup" section below.
• You need an OpenAI account.

Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token.
4. In n8n, configure the Google Sheets credentials with your own account. Follow this documentation: Set Google Sheet Credential
5. In n8n, configure the OpenAI account credentials.
6. Ensure the URL and Bright Data zone name are correctly set in the Set URL, Filename and Bright Data Zone node.
7. Set the desired local path in the Write a file to disk node to save the responses.

How to customize this workflow to your needs

**LLM Prompt Customization**
• Modify the extraction prompt to include additional fields such as revenue, social links, or leadership team
• Adjust the summarization tone (e.g., executive summary, sales-focused snapshot, or marketing digest)

**File Persistence**
• Store the raw markdown, extracted JSON, and summary text separately for audit/debug purposes

**Webhook Notification**
• Connect to a CRM (e.g., HubSpot, Salesforce) via webhook to automatically create leads
• Send Slack notifications to alert sales reps when a new high-potential company is discovered
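As an illustration of how the scraping request works under the hood, here is a minimal sketch of a Bright Data Web Unlocker API call, assuming the /request endpoint and a zone named like the one configured in the Set URL, Filename and Bright Data Zone node; the token, zone name, and target URL are placeholders:

```javascript
// Minimal sketch: fetch a Crunchbase page through Bright Data Web Unlocker.
// BRIGHT_DATA_TOKEN, the zone name, and the URL are placeholder assumptions.
const res = await fetch('https://api.brightdata.com/request', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.BRIGHT_DATA_TOKEN}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    zone: 'web_unlocker1',                                   // your Web Unlocker zone
    url: 'https://www.crunchbase.com/organization/example',  // target page
    format: 'raw',                                           // raw HTML response
  }),
});
const html = await res.text(); // fed into the markdown/text extraction chain
```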
by Oneclick AI Squad
Transform your meetings into actionable insights automatically! This workflow captures meeting audio, transcribes conversations, generates AI summaries, and emails the results to participants, all without manual intervention.

What's the Goal?

• **Auto-record meetings** when they start and stop when they end
• **Transcribe audio** to text using the Vexa Bot integration
• **Generate intelligent summaries** with AI-powered analysis
• **Email summaries** to meeting participants automatically
• **Eliminate manual note-taking** and post-meeting admin work
• **Never miss important discussions** or action items again

Why Does It Matter?

• **Save 90% of Post-Meeting Time**: No more manual transcription or summary writing
• **Never Lose Key Information**: Automatic capture ensures nothing falls through the cracks
• **Improve Team Productivity**: Focus on discussions, not note-taking
• **Perfect Meeting Records**: Searchable transcripts and summaries for future reference
• **Instant Distribution**: Summaries reach all participants immediately after meetings

How It Works

**Step 1: Meeting Detection & Recording**
• **Start Meeting Trigger**: Detects when a meeting begins via a Google Meet webhook
• **Launch Vexa Bot**: Automatically joins the meeting and starts recording
• **End Meeting Trigger**: Detects the meeting end and stops recording

**Step 2: Audio Processing & Transcription**
• **Stop Vexa Bot**: Ends the recording and retrieves the audio file
• **Fetch Meeting Audio**: Downloads the recorded audio from Vexa Bot
• **Transcribe Audio**: Converts speech to text using AI transcription

**Step 3: AI Summary Generation**
• **Prepare Transcript**: Formats the transcribed text for AI processing (a prompt-construction sketch appears at the end of this description)
• **Generate Summary**: The AI model creates a concise meeting summary with:
  • Key discussion points
  • Decisions made
  • Action items assigned
  • Next steps identified

**Step 4: Distribution**
• **Send Email**: Automatically emails the summary to all meeting participants

Setup Requirements

1. **Google Meet Integration**:
   • Configure the Google Meet webhook and API credentials
   • Set up meeting detection triggers
   • Test with a sample meeting
2. **Vexa Bot Configuration**:
   • Add Vexa Bot API credentials for recording
   • Configure audio file retrieval settings
   • Set recording quality and format preferences
3. **AI Model Setup**:
   • Configure an AI transcription service (e.g., OpenAI Whisper, Google Speech-to-Text)
   • Set up AI summary generation with custom prompts
   • Define summary format and length preferences
4. **Email Configuration**:
   • Set up SMTP credentials for email distribution
   • Create email templates for meeting summaries
   • Configure participant list extraction from meeting metadata

Import Instructions

1. **Get Workflow JSON**: Copy the workflow JSON code
2. **Open n8n Editor**: Navigate to your n8n dashboard
3. **Import Workflow**: Click the menu (⋯) → "Import from Clipboard" → paste the JSON → Import
4. **Configure Credentials**: Add API keys for Google Meet, Vexa Bot, the AI services, and SMTP
5. **Test Workflow**: Run a test meeting to verify end-to-end functionality

Your meetings will now automatically transform into actionable summaries delivered to your inbox!
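To make the "Prepare Transcript" step concrete, here is a minimal sketch of how an n8n Code node might assemble the summary prompt before the model call; the section headings mirror the summary contents listed in Step 3, and the transcript field name is an assumption to match against your transcription node's output:

```javascript
// Minimal sketch for an n8n Code node ("Run Once for All Items" mode).
// Assumes the upstream transcription node outputs a `transcript` field.
const transcript = $input.first().json.transcript;

const prompt = `Summarize the following meeting transcript.
Include these sections:
- Key discussion points
- Decisions made
- Action items assigned (with owners if mentioned)
- Next steps

Transcript:
${transcript}`;

// Pass the assembled prompt to the AI summary node.
return [{ json: { prompt } }];
```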
by David Olusola
Learn Customer Onboarding Automation with n8n

How It Works

This smart onboarding automation handles new customer signups by:

1. Receiving signup data via webhook
2. Validating the required customer info
3. Creating a contact in HubSpot CRM
4. Sending a personalized welcome email
5. Delivering onboarding documents after 2 hours
6. Sending a personal check-in email after 1 day
7. Sending a Week 1 success guide after 3 days
8. Updating the CRM status and notifying the team at each milestone

It's designed for professional onboarding, with built-in timing, CRM integration, and smart notifications to improve engagement and retention.

🛠️ Setup Steps

1. **Create Webhook**: Add a Webhook node in n8n with the POST method; this triggers when a new customer signs up.
2. **Validate Customer Data**: Add an IF node to check that email and customerName are present (see the sketch after this list).
3. **Create CRM Contact**: Use a HubSpot node to create a new contact and map fields (e.g., split the name into first/last).
4. **Send Notifications**: Add a Telegram or Slack node to alert your team instantly.
5. **Send Welcome Email**: Use an Email Send node for a warm welcome, customized with customer details.
6. **Wait 2 Hours**: Add a Wait node to delay the next steps and avoid overwhelming the customer.
7. **Send Onboarding Documents**: Use another Email Send node to deliver helpful PDFs or guides.
8. **Wait 1 Day & Send Check-in**: Add another Wait node, followed by a personal check-in email using the customer's name.
9. **Wait 2 More Days & Send Success Guide**: Deliver Week 1 content via email to reinforce engagement.
10. **Update CRM & Notify Team**: Use HubSpot to update the status and Telegram/Slack to notify your team of completion.
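For step 2, here is a minimal sketch of the validation as a Code-node equivalent of the IF check; the field names email and customerName come from the steps above, while the payload shape and the first/last name split are assumptions to adapt to your webhook:

```javascript
// Minimal sketch for an n8n Code node: validate required signup fields.
// Assumes the webhook delivers the fields in the item's JSON (or its body).
const data = $input.first().json.body ?? $input.first().json;
const { email, customerName } = data;

// Basic presence and shape checks; extend as needed.
const isValid =
  typeof email === 'string' && email.includes('@') &&
  typeof customerName === 'string' && customerName.trim().length > 0;

if (!isValid) {
  // Route invalid signups away from the onboarding sequence.
  throw new Error('Missing or invalid email/customerName in signup payload');
}

// Split the name for HubSpot's first/last name fields (step 3).
const [firstName, ...rest] = customerName.trim().split(/\s+/);
return [{ json: { email, firstName, lastName: rest.join(' ') } }];
```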
by Derek Cheung
How it works:

Using a crew of AI agents (Senior Researcher, Visionary, and Senior Editor), this crew automatically determines the right questions to ask to produce a detailed fundamental stock analysis.

This application has two components: a front-end and a Stock Q&A engine. The front-end is the team of agents automatically figuring out the questions to ask, and the back-end is the ability to answer those questions with the SEC 10-K data. This template implements the Stock Q&A engine.

For the front-end of the application, you can choose one of two options:
• CrewAI with the Replit environment (code approach)
• a fully visual approach with the n8n template (AI-powered automated stock analysis)

Setup steps:

1. Use the first workflow in the template to upsert a company annual report PDF (such as from an SEC 10-K filing)
2. Get the URL for the Webhook in the second workflow template

CrewAI front-end:
1. YouTube overview video
2. Fork this AI Agent environment: Crew Agent Environment
3. Set the webhook URL in the N8N_WEBHOOK_URL variable
4. Set the OpenAI_API_KEY variable
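Once N8N_WEBHOOK_URL is set, each question the crew generates is posed to the Stock Q&A engine over that webhook; a minimal sketch of such a call, assuming a JSON body with a single question field (verify the exact shape against the workflow's webhook node), looks like:

```javascript
// Minimal sketch: ask the Stock Q&A engine a question via the n8n webhook.
// The `question` field name is an assumption to verify in the webhook node.
const res = await fetch(process.env.N8N_WEBHOOK_URL, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    question: 'What were the main revenue drivers in the latest 10-K?',
  }),
});
console.log(await res.json()); // answer grounded in the upserted 10-K PDF
```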
by scrapeless official
AI-Powered Web Data Pipeline with n8n

How It Works

This n8n workflow builds an AI-powered web data pipeline that automates the entire process of:

• **Extraction**
• **Structuring**
• **Vectorization**
• **Storage**

It integrates multiple advanced tools to transform messy web pages into clean, searchable vector databases.

Integrated Tools

• **Scrapeless**: Bypasses JavaScript-heavy websites and anti-bot protections to reliably extract HTML content.
• **Claude AI**: Uses LLMs to analyze unstructured HTML and generate clean, structured JSON data.
• **Ollama Embeddings**: Generates local vector embeddings from structured text using the all-minilm model.
• **Qdrant Vector DB**: Stores semantic vector data for fast and meaningful search capabilities.
• **Webhook Notifications**: Sends real-time updates when workflows complete or errors occur.

From messy webpages to structured vector data: this pipeline is perfect for building intelligent agents, knowledge bases, or research automation tools.

Setup Steps

**1. Install n8n**

> Requires Node.js v18 / v20 / v22

```bash
npm install -g n8n
n8n
```

After installation, access the n8n interface at http://localhost:5678

**2. Set Up Scrapeless**

1. Register at Scrapeless
2. Copy your API token
3. Paste the token into the HTTP Request node labeled "Scrapeless Web Request"

**3. Set Up Claude API (Anthropic)**

1. Sign up at the Anthropic Console
2. Generate your Claude API key
3. Add the API key to the following nodes:
   • Claude Extractor
   • AI Data Checker
   • Claude AI Agent

**4. Install and Run Ollama**

macOS:

```bash
brew install ollama
```

Linux:

```bash
curl -fsSL https://ollama.com/install.sh | sh
```

Windows: download the installer from https://ollama.com

Start the Ollama server:

```bash
ollama serve
```

Pull the embedding model:

```bash
ollama pull all-minilm
```

**5. Install Qdrant (via Docker)**

```bash
docker pull qdrant/qdrant
docker run -d \
  --name qdrant-server \
  -p 6333:6333 -p 6334:6334 \
  -v $(pwd)/qdrant_storage:/qdrant/storage \
  qdrant/qdrant
```

Test whether Qdrant is running:

```bash
curl http://localhost:6333/healthz
```

**6. Configure the n8n Workflow**

1. Modify the trigger (manual or scheduled)
2. Input your target URLs and collection name in the designated nodes
3. Paste all required API tokens/keys into their corresponding nodes
4. Ensure your Qdrant and Ollama services are running

Ideal Use Cases

• Custom AI Chatbots
• Private Search Engines
• Research Tools
• Internal Knowledge Bases
• Content Monitoring Pipelines
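Before the first run, you can create the Qdrant collection explicitly; here is a minimal sketch against the local instance from step 5, assuming 384-dimensional vectors (the dimension produced by the all-minilm model) and a placeholder collection name web_content:

```javascript
// Minimal sketch: create a Qdrant collection for all-minilm embeddings.
// 'web_content' is a placeholder name; all-minilm vectors are 384-dimensional.
const res = await fetch('http://localhost:6333/collections/web_content', {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    vectors: { size: 384, distance: 'Cosine' },
  }),
});
console.log(await res.json()); // expect { result: true, status: 'ok', ... }
```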
by Eduard
This workflow demonstrates three distinct approaches to chaining LLM operations using Claude 3.7 Sonnet. Connect to any section to experience the differences in implementation, performance, and capabilities.

What you'll find:

**1️⃣ Naive Sequential Chaining**
The simplest but least efficient approach: connecting LLM nodes in a direct sequence. Easy to set up for beginners but becomes unwieldy and slow as your chain grows.

**2️⃣ Agent-Based Processing with Memory**
Process a list of instructions through a single AI Agent that maintains conversation history. This structured approach provides better context management while keeping your workflow organized.

**3️⃣ Parallel Processing for Maximum Speed**
Split your prompts and process them simultaneously for much faster results. Ideal when you need to run multiple independent tasks without shared context (see the sketch after the setup instructions).

Setup Instructions:

1. **API Credentials**: Configure your Anthropic API key in the credentials manager. This workflow uses Claude 3.7 Sonnet, but you can modify the model in each Anthropic Chat Model node, or pick an entirely different LLM.
2. **For Cloud Users**: If using the parallel processing method (section 3), replace {{ $env.WEBHOOK_URL }} in the "LLM steps - parallel" HTTP Request node with your n8n instance URL.
3. **Test Data**: The workflow fetches content from the n8n blog by default. You can modify this part to use different content or another data source.
4. **Customization**: Each section contains a set of example prompts. Modify the "Initial prompts" nodes to change the questions asked to the LLM.

Compare these methods to understand the trade-offs between simplicity, speed, and context management in your AI workflows!

Follow me on LinkedIn for more tips on AI automation and n8n workflows!
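To show what the parallel method (section 3) does under the hood, here is a minimal sketch of firing several independent prompts at a webhook-wrapped LLM step simultaneously and collecting the results; the webhook URL and payload shape are placeholders to match your own sub-workflow:

```javascript
// Minimal sketch: run independent prompts in parallel against a webhook
// that wraps a single LLM call. URL and body shape are assumptions.
const prompts = [
  'Summarize the article in one paragraph.',
  'List three key takeaways.',
  'Suggest a title under ten words.',
];

const responses = await Promise.all(
  prompts.map((prompt) =>
    fetch(process.env.WEBHOOK_URL, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ prompt }),
    }).then((r) => r.json())
  )
);

// Each response arrives as soon as its own LLM call finishes,
// instead of waiting for the previous step as in sequential chaining.
console.log(responses);
```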
by Trung Tran
IT Voice Support Automation Bot: Telegram Voice Message to JIRA Ticket with OpenAI Whisper

> Automatically process IT support requests submitted via Telegram voice messages by transcribing, extracting structured data, creating a JIRA ticket, and notifying relevant parties.

🧑‍💼 Who's it for

• Internal teams that handle IT support but want to streamline voice-based requests.
• Employees who prefer using mobile/voice to report incidents or ask for support.
• Organizations aiming to integrate conversational AI into existing support workflows.

⚙️ How it works / What it does

1. A user sends a voice message to a Telegram bot.
2. The system checks whether it's an audio message.
3. If valid, the audio is:
   • Downloaded
   • Transcribed via OpenAI Whisper
   • Backed up to Google Drive
4. The transcription and file metadata are merged.
5. The merged content is processed through an AI Agent (GPT) to extract structured request info.
6. A JIRA ticket is created using the extracted data.
7. The IT team is notified via Slack (or other channels).
8. The requester receives a Telegram confirmation message with the JIRA ticket link.
9. If the input is not audio, a polite rejection message is sent.

Key Features

• Supports voice-based ticket creation
• Accurate transcription using Whisper
• Context-aware request parsing using GPT-4.1 mini
• Fully automated ticket creation in JIRA
• Notifies both IT and the original requester
• Cloud backup of original voice messages (Google Drive)

🛠️ Setup Instructions

**Prerequisites**

| Component | Required |
|-----------|----------|
| Telegram Bot & API Key | ✅ |
| OpenAI Whisper / Transcription Model | ✅ |
| Google Drive Credentials (OAuth2) | ✅ |
| Google Sheets or other storage (optional) | ⬜ |
| JIRA Cloud API Access | ✅ |
| Slack Bot or Webhook | ✅ |

**Workflow Steps**

1. **Telegram Voice Message Trigger**: Starts the flow when a user sends a voice message.
2. **Is Audio Message?**: If false, reply "only voice is supported".
3. **Download Audio**: Download the .oga file from Telegram.
4. **Transcribe Audio**: Use OpenAI Whisper to get the text transcript.
5. **Backup to Google Drive**: Upload the original voice file with metadata.
6. **Merge Results**: Combine the transcript and metadata.
7. **Pre-process Output**: Clean formatting before AI extraction.
8. **Transcript Processing Agent**: A GPT-based agent extracts:
   • Requester name and department
   • Request title & description
   • Priority & request type
9. **Submit JIRA Request Ticket**: Create a ticket from the AI-extracted data (a request sketch appears at the end of this description).
10. **Setup Slack / Email / Manual Steps**: Optional internal routing or approvals.
11. **Inform Reporter via Telegram**: Sends a confirmation message with the JIRA ticket link.

🔧 How to Customize

• Replace JIRA with Zendesk, GitHub Issues, or other ticketing tools.
• Change Slack to Microsoft Teams or Email.
• Add Notion/Airtable logging.
• Enhance the agent to extract the department from the user ID or metadata.

📦 Requirements

| Integration | Notes |
|-------------|-------|
| Telegram Bot | Used for input/output |
| Google Drive | Audio backup |
| OpenAI GPT + Whisper | Transcription & extraction |
| JIRA | Ticketing platform |
| Slack | Team notification |

Built with ❤️ using n8n
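For step 9, the ticket boils down to a single REST call; here is a minimal sketch against JIRA Cloud's create-issue endpoint, where the base URL, credentials, project key ITSUP, issue type, and field values are all placeholders standing in for the AI-extracted data:

```javascript
// Minimal sketch: create a JIRA Cloud issue from AI-extracted fields.
// JIRA_BASE_URL, JIRA_EMAIL, JIRA_API_TOKEN and 'ITSUP' are placeholders.
const auth = Buffer.from(
  `${process.env.JIRA_EMAIL}:${process.env.JIRA_API_TOKEN}`
).toString('base64');

const res = await fetch(`${process.env.JIRA_BASE_URL}/rest/api/3/issue`, {
  method: 'POST',
  headers: {
    Authorization: `Basic ${auth}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    fields: {
      project: { key: 'ITSUP' },                 // placeholder project key
      issuetype: { name: 'Service Request' },    // placeholder issue type
      summary: 'Laptop will not connect to VPN', // from the transcript agent
      // JIRA Cloud API v3 expects descriptions in Atlassian Document Format.
      description: {
        type: 'doc',
        version: 1,
        content: [{
          type: 'paragraph',
          content: [{ type: 'text', text: 'Transcribed request details go here.' }],
        }],
      },
    },
  }),
});
console.log((await res.json()).key); // e.g. ITSUP-123, sent back via Telegram
```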
by vinci-king-01
Smart Supplier Health Monitor with ScrapeGraphAI Risk Detection and Multi-Channel Alerts

🎯 Target Audience

• Procurement managers and directors
• Supply chain risk analysts
• CFOs and financial controllers
• Vendor management teams
• Enterprise risk managers
• Operations managers
• Contract administrators
• Business continuity planners

Problem Statement

Manual supplier monitoring is reactive and time-consuming, often missing early warning signs of financial distress that could disrupt your supply chain. This template solves the challenge of proactive supplier health surveillance by automatically monitoring financial indicators, news sentiment, and market conditions to predict supplier risks before they impact your business operations.

🔧 How it Works

This workflow automatically monitors your critical suppliers' financial health using AI-powered web scraping, analyzes multiple risk factors, identifies alternative suppliers when needed, and sends intelligent alerts through multiple channels so your procurement team can act quickly on emerging risks.

**Key Components**

1. **Weekly Health Check Scheduler**: Automated trigger based on supplier criticality levels
2. **Supplier Database Loader**: Dynamic supplier portfolio management with risk-based monitoring frequency
3. **ScrapeGraphAI Website Analyzer**: AI-powered extraction of financial health indicators from company websites
4. **Financial News Scraper**: Intelligent monitoring of financial news and sentiment analysis
5. **Advanced Risk Scorer**: Industry-adjusted risk calculation with failure probability modeling
6. **Alternative Supplier Finder**: Automated identification and ranking of backup suppliers
7. **Multi-Channel Alert System**: Email, Slack, and API notifications with escalation rules

Risk Analysis Specifications

The template performs comprehensive financial health analysis with the following parameters:

| Risk Factor | Weight | Score Impact | Description |
|-------------|--------|--------------|-------------|
| Financial Issues | 40% | +0-24 points | Revenue decline, debt levels, cash flow problems |
| Operational Risks | 30% | +0-18 points | Management changes, restructuring, capacity issues |
| Market Risks | 20% | +0-12 points | Industry disruption, regulatory changes, competition |
| Reputational Risks | 10% | +0-6 points | Negative news, legal issues, public sentiment |

**Industry Risk Multipliers:**
• Technology: 1.1x (higher volatility)
• Manufacturing: 1.0x (baseline)
• Energy: 1.2x (regulatory risks)
• Financial: 1.3x (market sensitivity)
• Logistics: 0.9x (generally stable)

**Risk Levels & Actions** (a scoring sketch appears at the end of this description):
• **Critical Risk**: Score ≥ 75 (CEO/CFO escalation, immediate transition planning)
• **High Risk**: Score ≥ 55 (procurement director escalation, backup activation)
• **Medium Risk**: Score ≥ 35 (manager review, increased monitoring)
• **Low Risk**: Score < 35 (standard monitoring)

🏢 Supplier Management Features

| Feature | Critical Suppliers | High Priority | Medium Priority |
|---------|-------------------|---------------|-----------------|
| Monitoring Frequency | Weekly | Bi-weekly | Monthly |
| Risk Threshold | 35+ points | 40+ points | 50+ points |
| Alert Recipients | C-Level + Directors | Directors + Managers | Managers only |
| Alternative Suppliers | 3+ pre-qualified | 2+ identified | 1+ researched |
| Transition Timeline | 24-48 hours | 1-2 weeks | 1-3 months |

🛠️ Setup Instructions

Estimated setup time: 25-30 minutes

**Prerequisites**
• n8n instance with community nodes enabled
• ScrapeGraphAI API account and credentials
• Gmail account for email alerts (or an alternative email service)
• Slack workspace with a webhook or bot token
• Supplier database or CRM system API access
• Basic understanding of procurement processes

**Step-by-Step Configuration**

1. **Configure ScrapeGraphAI Credentials**
   • Sign up for a ScrapeGraphAI API account
   • Navigate to Credentials in your n8n instance
   • Add new ScrapeGraphAI API credentials with your API key
   • Test the connection to ensure proper functionality
2. **Set up Email Integration**
   • Add Gmail OAuth2 credentials in n8n
   • Configure the sender email and authentication
   • Test email delivery with a sample message
   • Set up email templates for the different risk levels
3. **Configure Slack Integration**
   • Create a Slack webhook URL or bot token
   • Add the Slack credentials to n8n
   • Configure target channels for the different alert types
   • Customize Slack message formatting and buttons
4. **Load Supplier Database**
   • Update the "Supplier Database Loader" node with your supplier data
   • Configure supplier categories, contract values, and criticality levels
   • Set monitoring frequencies based on supplier importance
   • Add supplier website URLs and contact information
5. **Customize Risk Parameters**
   • Adjust industry risk multipliers for your business context
   • Modify risk scoring thresholds based on your risk tolerance
   • Configure economic factor adjustments
   • Set failure probability calculation parameters
6. **Configure Alternative Supplier Database**
   • Populate the alternative supplier database in the "Alternative Supplier Finder" node
   • Add supplier ratings, capacities, and specialties
   • Configure geographic coverage and certification requirements
   • Set suitability scoring parameters
7. **Set up Procurement System Integration**
   • Configure the procurement system webhook endpoint
   • Add API authentication credentials
   • Test webhook payload delivery
   • Set up automated data synchronization
8. **Test and Validate**
   • Run test scenarios with sample supplier data
   • Verify ScrapeGraphAI extraction accuracy
   • Check the risk scoring calculations and thresholds
   • Confirm all alert channels are working properly
   • Test the alternative supplier recommendations

Workflow Customization Options

**Modify Risk Analysis**
• Add custom risk indicators specific to your industry
• Implement sector-specific economic adjustments
• Configure contract-specific risk factors
• Add ESG (Environmental, Social, Governance) scoring

**Extend Data Sources**
• Integrate credit rating agency APIs (Dun & Bradstreet, Experian)
• Add financial database connections (Bloomberg, Reuters)
• Include social media sentiment analysis
• Connect to government regulatory databases

**Enhance Alternative Supplier Management**
• Add automated supplier qualification workflows
• Implement dynamic pricing comparison
• Create supplier performance scorecards
• Add geographic risk assessment

**Advanced Analytics**
• Implement predictive failure modeling
• Add supplier portfolio optimization
• Create supply chain risk heatmaps
• Generate automated compliance reports

Use Cases

• **Supply Chain Risk Management**: Proactive monitoring of supplier financial stability
• **Procurement Optimization**: Data-driven supplier selection and management
• **Business Continuity Planning**: Automated backup supplier identification
• **Financial Risk Assessment**: Early warning system for supplier defaults
• **Contract Management**: Risk-based contract renewal and negotiation
• **Vendor Diversification**: Strategic supplier portfolio management

🚨 Important Notes

• Respect ScrapeGraphAI API rate limits and terms of service
• Implement appropriate delays between supplier assessments
• Keep all API credentials secure and rotate them regularly
• Monitor API usage to manage costs effectively
• Ensure compliance with data privacy regulations (GDPR, CCPA)
• Regularly update supplier databases and contact information
• Review and adjust risk parameters based on market conditions
• Maintain confidentiality of supplier financial information

🔧 Troubleshooting

**Common Issues:**
• ScrapeGraphAI extraction errors: Check API key validity and rate limits
• Email delivery failures: Verify Gmail credentials and permissions
• Slack notification failures: Check the webhook URL and channel permissions
• False positive alerts: Adjust risk scoring thresholds and industry multipliers
• Missing supplier data: Verify website URLs and accessibility
• Alternative supplier errors: Check supplier database completeness

**Monitoring Best Practices:**
• Set up workflow execution monitoring and error alerts
• Regularly review and update supplier information
• Monitor API usage and costs across all integrations
• Validate risk scoring accuracy against historical data
• Test disaster recovery and backup procedures

**Support Resources:**
• ScrapeGraphAI documentation and API reference
• n8n community forums for workflow assistance
• Procurement best practices and industry standards
• Financial risk assessment methodologies
• Supply chain management resources and tools
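As an illustration of the scoring logic described in the risk tables above, here is a minimal sketch of the weighted, industry-adjusted calculation; the component severities would come from the ScrapeGraphAI and news-analysis nodes, and the example input values are placeholders, so treat this as an interpretation of the published weights rather than the workflow's exact implementation:

```javascript
// Minimal sketch: industry-adjusted supplier risk score.
// Max points per factor follow the 40/30/20/10 weighting table above;
// component severities (0-1) are placeholders for upstream analysis output.
const maxPoints = { financial: 24, operational: 18, market: 12, reputational: 6 };
const industryMultiplier = {
  technology: 1.1, manufacturing: 1.0, energy: 1.2, financial: 1.3, logistics: 0.9,
};

function riskScore(severity, industry) {
  const base =
    severity.financial * maxPoints.financial +
    severity.operational * maxPoints.operational +
    severity.market * maxPoints.market +
    severity.reputational * maxPoints.reputational;
  return base * (industryMultiplier[industry] ?? 1.0);
}

// Thresholds as listed in "Risk Levels & Actions".
function riskLevel(score) {
  if (score >= 75) return 'critical';
  if (score >= 55) return 'high';
  if (score >= 35) return 'medium';
  return 'low';
}

// Example: a technology supplier showing elevated financial stress.
const score = riskScore(
  { financial: 0.8, operational: 0.5, market: 0.4, reputational: 0.2 },
  'technology'
);
console.log(score.toFixed(1), riskLevel(score)); // "37.6 medium"
```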