by Avkash Kakdiya
🔁 What This Workflow Does

This automation fetches daily AI-related articles from trusted RSS feeds, summarizes them using OpenAI (GPT), and generates a ready-to-post LinkedIn update in your writing style. It then emails the post to you every morning for review and publishing.

High-Level Steps:
- Triggers every morning via Cron.
- Fetches the latest AI news from multiple RSS sources.
- Filters recent articles (last 24 hrs, sketched below).
- Summarizes each article using OpenAI (ChatGPT).
- Generates a LinkedIn-style post using your tone.
- Sends the post to your Gmail for review.

⚙️ Setup Steps

Estimated setup time: 15–30 minutes

You'll need:
- OpenAI API key
- Gmail account connected in n8n
- RSS feed URLs (defaults are provided)

Add your email in the Gmail node to receive daily posts. Add your tone/style prompt in the ChatGPT nodes (instructions inside workflow).
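As a rough illustration of the 24-hour filter step, here is a minimal n8n Code-node sketch. The `pubDate` field name follows the typical output of n8n's RSS Read node; adjust it to match what your feeds actually return.

```javascript
// n8n Code node: keep only articles published in the last 24 hours.
// Assumes each incoming item has a `pubDate` field, as produced by
// the RSS Read node; adjust the field name to match your feeds.
const DAY_MS = 24 * 60 * 60 * 1000;
const cutoff = Date.now() - DAY_MS;

return items.filter((item) => {
  const published = new Date(item.json.pubDate).getTime();
  // Drop items with unparseable dates as well as stale ones.
  return !Number.isNaN(published) && published >= cutoff;
});
```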
by Xiaoyuan Zhang
Description

This workflow creates a sophisticated bilingual dictionary that provides literary-style definitions and examples for English and German words. The system automatically detects the input language, generates comprehensive definitions in Chinese, creates three literary-style example sentences with translations, and stores everything in a Supabase database for future reference.

Who Is This For?

- Language Learners & Students: Perfect for those studying English or German who want to understand words in literary contexts with Chinese translations.
- Writers & Content Creators: Ideal for bilingual writers working with English, German, and Chinese who need rich, literary examples for their work.
- Educators & Translators: Excellent tool for language teachers and professional translators who need comprehensive word definitions with contextual examples.
- Literary Enthusiasts: Great for readers of literature who encounter unfamiliar words and want to understand their poetic or literary usage.

What Problem Does This Workflow Solve?

Traditional dictionaries often provide basic definitions without literary context or cross-language examples. This workflow addresses several key challenges:

- Limited Literary Context: Most dictionaries lack poetic, expressive, or literary-style examples that help understand how words are used in sophisticated writing.
- Cross-Language Learning: Provides seamless translation between English/German and Chinese with culturally appropriate examples.
- Data Persistence: Automatically saves all lookups to a database, creating a personalized vocabulary collection over time.
- API Accessibility: Provides a clean webhook interface that can be integrated into apps, websites, or other tools.

How It Works

Main Dictionary Lookup Flow
1. Input Processing: Receives a word via webhook POST request and automatically detects whether it is English or German.
2. AI Analysis: Uses OpenAI GPT-4o-mini to generate comprehensive definitions with literary context.
3. Response Formatting: Processes the AI response to extract structured data (word, meaning, examples).
4. Quality Control: Validates the response and handles unclear or invalid inputs gracefully.
5. Database Storage: Saves the word, Chinese meaning, and examples to Supabase for future reference.
6. API Response: Returns formatted JSON with the complete dictionary entry.

Data Storage Flow
- Parallel Processing: Simultaneously returns the dictionary data to the user and saves it to the database.
- Structured Storage: Organizes data in Supabase with fields for words, Chinese meanings, and example arrays.
- Success Confirmation: Provides confirmation when data is successfully stored.

Setup Instructions

Prerequisites & Accounts. You'll need accounts and API access for:
- n8n (Cloud or self-hosted)
- OpenAI (API key required)
- Supabase (Database and API credentials)

Webhook Configuration
- The workflow uses two webhook endpoints with the same path for different operations.
- Note the webhook URL provided by n8n for API integration.
- Test the webhook endpoints to ensure they're accessible (an example call follows this section).

Customization Options
- Extend to support additional input languages by modifying the AI prompt.
- Add support for other target languages beyond Chinese.
- Customize the literary style for different cultural contexts.

This workflow transforms simple word lookups into rich, contextual learning experiences while building a personalized vocabulary database over time.
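To illustrate the webhook interface, here is a hedged client sketch; the URL path and the `word` field name are placeholders, so use the webhook URL and body structure configured in your own n8n instance.

```javascript
// Hypothetical client call to the dictionary webhook. Replace the
// URL with the one shown on your n8n Webhook node; the `word` field
// name is an assumption about the expected request body.
// Run inside an ES module or other async context.
const response = await fetch("https://your-n8n-host/webhook/dictionary", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ word: "Fernweh" }),
});

// Per the description, the reply contains the word, its Chinese
// meaning, and three literary-style examples with translations.
const entry = await response.json();
console.log(entry);
```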
by John Alejandro Silva
🤖📨 Telegram AI Assistant with Multi-File Media Group Handling, Smart File Processing & PostgreSQL Integration

> AI-powered Telegram bot for text, voice, video, documents & media — with database-driven grouping and Telegram-safe formatting.

📋 Description

This n8n template creates a next-generation Telegram AI assistant 🧠💬 capable of handling text messages, media files, and documents with advanced processing, PostgreSQL integration, and AI-powered responses. It is designed to solve Telegram's media group challenge 📦 — when multiple files are sent together, they are stored, processed, and combined into one coherent AI-generated reply.

✨ Key Features

- 📂 Multi-file media group management with PostgreSQL tables: media_group, media_queue, chat_histories
- 📑 Document parsing for CSV, HTML, ICS, JSON, ODS, PDF (with AI fallback), RTF, TXT, XML, and spreadsheets.
- 🎤 Voice & video transcription for AI analysis.
- 🖼️ Image, audio, and video description for richer AI context.
- 🛡️ Telegram-safe MarkdownV2 formatting with auto-splitting for messages over 4096 chars (see the sketch after this section).
- ⚠️ Error fallback for unsupported file types.

💡 Acknowledgment

A huge thank you to Ezema Gingsley Chibuzo 🙌 for the inspiration of the first version of this workflow: Create a Multi-Modal Telegram Support Bot with GPT-4 and Supabase RAG. Your pioneering work laid the foundation for this improved, database-powered multi-modal assistant 🚀

🏷 Tags

telegram ai-assistant postgresql multi-file media-group file-processing voice-transcription document-parser pdf-extraction markdown-formatting n8n-template

💼 Use Case

Use this template if you need an AI-powered Telegram bot that can:
- 📦 Handle multiple files sent in a single message (albums, multiple PDFs, etc.).
- 🧾 Extract & analyze content from many file formats.
- 🎙️ Transcribe voice and video messages.
- 🗂️ Maintain chat memory for contextual AI answers.
- 🛡️ Avoid Telegram formatting errors and length-limit issues.

This workflow automates the full chain: Receive → Process → AI Analysis → Telegram-safe Reply.

💬 Example User Interactions

- 📄 **Multiple PDFs with a caption** → AI extracts and summarizes all PDFs in one combined reply.
- 🎤 **Voice message** → AI transcribes and replies with a contextual answer.
- 📊 **CSV or spreadsheet file** → AI parses and summarizes the data.
- 🖼️ **Multiple images** → AI describes each image and replies in a single message.

🔑 Required Credentials

- **Telegram Bot API** (Bot Token)
- **PostgreSQL** (Connection credentials)
- **AI Provider API** (OpenAI, Google Gemini, or compatible LLM)

⚙️ Setup Instructions

1. 🗄️ Create the PostgreSQL tables (gray section SQL): media_group, media_queue, chat_histories
2. 🔌 Configure the Telegram Trigger with your bot token.
3. 🤖 Connect your AI provider credentials.
4. 🗂️ Set up PostgreSQL credentials in the database nodes.
5. ▶️ Deploy the workflow in n8n.
6. 🎯 Start sending messages and files to your bot.

📌 Extra Notes

- ✅ Green section ensures only one trigger per media group.
- 📌 Yellow section guarantees captions and files are stored in the correct sequence.
- ✨ Purple section formats AI output to be Telegram-safe and splits it if needed.
- 🧠 The AI prompt is not fixed, allowing full customization.

💡 Need Assistance?

If you'd like help customizing or extending this workflow, feel free to reach out:
📧 Email: johnsilva11031@gmail.com
🔗 LinkedIn: John Alejandro Silva Rodríguez
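For the length-limit handling, here is a minimal sketch of the splitting step, assuming a plain-text chunking strategy on an assumed `reply` field; the real workflow's purple section also escapes MarkdownV2, which is omitted here.

```javascript
// n8n Code node sketch: split a long AI reply into Telegram-safe
// chunks. Telegram rejects messages over 4096 characters.
// Note: a single line longer than the limit would still need a
// hard character split, omitted here for brevity.
const LIMIT = 4096;
const out = [];
for (const item of items) {
  const text = item.json.reply ?? ""; // assumed field holding the AI output
  let current = "";
  for (const line of text.split("\n")) {
    const candidate = current ? current + "\n" + line : line;
    if (candidate.length > LIMIT && current) {
      out.push({ json: { chunk: current } });
      current = line;
    } else {
      current = candidate;
    }
  }
  if (current) out.push({ json: { chunk: current } });
}
return out;
```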
by Niklas Hatje
Use Case

When trying to maximize your outreach, website visitors are often an overlooked source of qualified new leads. This workflow allows you to track and enrich new website visitors, and saves them to a Google Sheet once they meet pre-defined criteria.

What this workflow does

This workflow fires once a day and gets all your leads saved in Leadfeeder. It then takes the leads that meet a pre-defined engagement criterion, e.g. that they visited your site 3 times, and enriches them additionally with Clearbit. From there it filters the leads again by a criterion on the company, e.g. a minimum employee count, and saves matching leads into a Google Sheet document (a filtering sketch follows this section).

Setup

- Add your Leadfeeder credentials. The name should be Authorization and the value Token token=yourapitoken. You can find your token via Settings -> Personal -> API-Token
- Add your Google Sheets credentials
- Save the Leadfeeder account names you want to use in the Setup node
- Copy the Google Sheets Template and add its URL to the Setup node

How to adjust this to your needs

- Adjust and/or remove the engagement and company criteria
- Add more ways to enrich a company

Potential ideas to enhance the use of this workflow

- Automatically reach out to users that meet the criteria / that get added to the sheet
- Create a workflow that finds the right employee in companies that are identified by this workflow
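As a rough sketch of the two filter steps, here is a hypothetical n8n Code node; the `visits` and `metrics.employees` field names are assumptions standing in for whatever Leadfeeder and Clearbit actually return in your setup.

```javascript
// Hypothetical filter: keep leads with enough engagement and a
// company large enough to be worth outreach. Field names are
// illustrative; map them to your Leadfeeder/Clearbit responses.
const MIN_VISITS = 3;
const MIN_EMPLOYEES = 50;

return items.filter((item) => {
  const { visits, metrics } = item.json;
  return (visits ?? 0) >= MIN_VISITS &&
         (metrics?.employees ?? 0) >= MIN_EMPLOYEES;
});
```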
by Robert Breen
This n8n workflow finds experts on any topic, scrapes their websites, and pulls out contact emails automatically. Core services used: SerpAPI (Google search) · Apify (website crawler) · OpenAI (GPT-4o email extraction).

🛠️ Step-by-Step Setup & Execution

1️⃣ Run Workflow (Manual Trigger)

| Node | Type | Purpose |
|------|------|---------|
| Run Workflow | Manual Trigger | Start the workflow on demand while you test. |

2️⃣ Set Your Topic

| Node | Type | How to configure |
|------|------|------------------|
| Set Topic | Set | Add a string field Topic – e.g. "n8n". This keyword drives every subsequent step. |

3️⃣ Search Google (Results 1-10)

| Node | Type | API Credential |
|------|------|----------------|
| Search Google (top 10) | SerpAPI | Create SerpAPI credential: 1. Sign up → copy API key → n8n → Credentials → New → SerpAPI → paste. 2. Select the credential in this node. |

Key Params:
- q: ={{ $json.Topic }} Expert
- location: Region code (e.g. 585069efee19ad271e9c9b36)
- additionalFields.start: "10" (Google position 1-10)

4️⃣ Search Google (Results 11-20)

| Node | Type | Notes |
|------|------|-------|
| Search Google (11-20) | SerpAPI (same credential) | Remove start or set to 20+ to fetch the next page. |

5️⃣ Extract URL Lists

| Node | Type | Script Purpose |
|------|------|----------------|
| Extract Url & Extract Url 2 | Code | Loop data.organic_results → output { title, link, displayed_link } for each result (see the sketch after this section). |

6️⃣ Combine Both Result Sets

| Node | Type | Details |
|------|------|---------|
| Append Results | Merge (combineAll) | Merges arrays from steps 3 & 4 into a single list for processing. |

7️⃣ Loop Over Every URL

| Node | Type | Configuration |
|------|------|---------------|
| Loop Over Items1 | Split In Batches | Default batch = 1 (process one page at a time). onError = continueRegularOutput keeps the loop alive on failures. |

8️⃣ Scrape Webpage Content (Apify)

| Node | Type | API Credential |
|------|------|----------------|
| Scrape URL with apify | HTTP Request | Create Apify credential: 1. Sign up at https://console.apify.com 2. Account → API tokens → copy. 3. n8n → Credentials → New → HTTP Query Auth → set query param token=YOUR_TOKEN. |

Request Details:
- Method: POST
- URL: https://api.apify.com/v2/acts/6sigmag~fast-website-content-crawler/run-sync-get-dataset-items
- JSON Body:

9️⃣ Extract Email with OpenAI

| Node | Type | API Credential |
|------|------|----------------|
| Extract Email from webpage | LangChain Agent | Create OpenAI credential: 1. Generate key at https://platform.openai.com/account/api-keys 2. n8n → Credentials → New → OpenAI API → paste key. |

- Prompt (system):
- Output Parser: Structured Output Parser2 expects → { "email": "address OR null" }

🔟 Loop Continues & Final Data

The extracted result returns to Loop Over Items1 until every URL is processed.

Typical final item JSON:

    {
      "title": "How to Build n8n Workflows",
      "link": "https://example.com",
      "email": "info@example.com"
    }

💡 Optional Enhancements

| Idea | How |
|------|-----|
| Save Leads | Add a Google Sheets or Airtable node after the loop. |
| Validate Emails | Chain a ZeroBounce / Hunter.io verification API before saving. |
| Parallel Crawling | Increase SplitInBatches size (watch Apify rate limits). |

🙋‍♂️ Need More Help?

Robert Breen – Automation Consultant & n8n Expert
📧 robert.j.breen@gmail.com
🔗 https://www.linkedin.com/in/robert-breen-29429625/
🌐 https://ynteractive.com
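Based on the script purpose described in step 5, a minimal version of the Extract Url Code node might look like this; it assumes the SerpAPI response sits on the incoming item roughly as the description states.

```javascript
// n8n Code node sketch for "Extract Url": flatten SerpAPI's
// organic_results into one item per search hit. The description
// references data.organic_results; adjust the path to match the
// actual shape of your SerpAPI node output.
const out = [];
for (const item of items) {
  const results =
    item.json.data?.organic_results ?? item.json.organic_results ?? [];
  for (const r of results) {
    out.push({
      json: {
        title: r.title,
        link: r.link,
        displayed_link: r.displayed_link,
      },
    });
  }
}
return out;
```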
by Davide
This workflow streamlines the process of publishing posts (image or video) to multiple social media platforms using a unified form and a third-party API service called Upload-Post. The automation starts with a form trigger, allowing users to submit content (text and media) through a simple frontend interface. Users select the platform (Instagram, LinkedIn, Facebook, X, TikTok, Threads), choose the profile name, write a caption, and upload a photo or video.

How It Works

Automates cross-platform social media posting via Upload-Post, handling both images (JPEG) and videos (MP4). Here's the process:

- **Trigger**: A form submission captures user inputs: platform (Instagram, LinkedIn, Facebook, X, TikTok, Threads), account (pre-configured profile name), caption and file (image/video), plus an optional Facebook Page ID for targeted posting.
- **Routing**: The "Video or Photo?" Switch node checks the file's MIME type (see the sketch after this section). Images route to the "Post photo" HTTP node (uploads via the upload_photos API); videos route to the "Post video" HTTP node (uploads via the upload API).
- **API Integration**: Both nodes send data to Upload-Post.com's API, including caption, account name, platform, file binary, and Facebook ID (if provided).
- **Success/Failure Handling**: The "Result Photo/Video" nodes parse the API response.

Setup Steps

Prerequisites:
- **Upload-Post.com API Key**: Get it from the API Keys dashboard. The free tier allows 10 uploads/month.

Configuration:
- API Authentication: In the HTTP Request nodes (Post photo / Post video), set the Authorization header (Name: Authorization, Value: Apikey YOUR_API_KEY_HERE).
- Form Customization: Adjust the "On form submission" node to add/remove platforms (e.g., YouTube when approved) and modify file type restrictions (default: .jpg, .mp4).
- Account Mapping: Ensure the "Account" field matches profiles configured in Upload-Post.com (e.g., test1, test2).
- Facebook Page Integration (optional): Add a Facebook Page ID field for page-specific posts.

Testing:
- Submit test forms with images/videos.
- Verify API responses and success/failure messages.

Optional Enhancements:
- Add error logging (e.g., save failed attempts to a database).
- Extend to YouTube once API support is confirmed.

Key Features:
- **Multi-Platform**: Post to 6+ social networks simultaneously.
- **User-Friendly**: Simple form interface for non-technical users.
- **Error Handling**: Clear feedback for success/failure cases.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
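To make the routing step concrete, here is a hedged sketch of the MIME-type decision; in the actual workflow this lives in a Switch node expression, and the binary property name `data` is an assumption about how the form upload is stored.

```javascript
// Sketch of the "Video or Photo?" decision, written as an n8n Code
// node for clarity. `data` is the assumed binary property name of
// the uploaded file; check your form trigger's output to confirm.
return items.map((item) => {
  const mimeType = item.binary?.data?.mimeType ?? "";
  let route = "unsupported";
  if (mimeType.startsWith("image/")) route = "photo"; // → upload_photos API
  else if (mimeType.startsWith("video/")) route = "video"; // → upload API
  return { json: { ...item.json, route, mimeType }, binary: item.binary };
});
```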
by Avkash Kakdiya
How it works

This workflow enhances contact intelligence by retrieving new or updated contact data, enriching it using AI and external APIs, and then updating your CRM or contact management system with intelligent insights. It automates the process of gathering, enriching, and organizing contact information to improve targeting, personalization, and engagement.

Step-by-step

1. Trigger & Input
- The workflow is triggered by a scheduler or webhook event.
- It reads a new contact entry (or an updated one) from your source, such as a spreadsheet or form.
- Basic fields like name, email, and company are used as the starting point for enrichment.

2. Contact Lookup & Parsing
- The contact's domain or company is extracted and used to perform a lookup via an external data source (see the sketch after this section).
- Data such as company details, job title, or LinkedIn profile is retrieved.
- Results are parsed and cleaned to remove duplicates, missing values, or invalid entries.

3. AI Enrichment
- The enriched contact is passed through an AI model (such as GPT or another NLP service).
- The model analyzes job role, seniority, and inferred interests based on available data.
- Insights like intent, persona category, or engagement score are generated.

4. Validation & Tagging
- The AI-enriched data is validated to ensure consistency and accuracy.
- Tags and segments (e.g., "Decision Maker", "Technical Buyer") are assigned based on rules or AI inference.
- This enables smart filtering, targeting, and routing later in your CRM or campaigns.

5. Output & Integration
- The final enriched and validated contact is written back to your CRM, sheet, or marketing platform.
- The system also sends a Slack/Email alert with a summary, updates the original contact entry with a "Processed" or "Enriched" status, and triggers next steps, such as personalized outreach or nurture sequences.

Benefits

- Enhances contact profiles with AI-generated insights and third-party data.
- Improves segmentation & targeting through smart tags and persona classification.
- Automates manual research, saving time and improving accuracy.
- Easily extendable by adding more AI models, data sources, or CRM integrations.
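A minimal sketch of the domain-extraction step in the lookup phase, assuming the contact's email is available on the item; the `email` and `companyDomain` field names are illustrative.

```javascript
// Extract a company domain from the contact's email address.
// Free-mail domains are skipped since they don't identify a company.
const FREE_MAIL = new Set([
  "gmail.com", "yahoo.com", "outlook.com", "hotmail.com",
]);

return items.map((item) => {
  const email = (item.json.email ?? "").toLowerCase();
  const domain = email.includes("@") ? email.split("@").pop() : null;
  item.json.companyDomain =
    domain && !FREE_MAIL.has(domain) ? domain : null;
  return item;
});
```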
by Matt Chong
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Who is this for?

This workflow is for anyone who receives invoices by email and wants to stay on top of payment deadlines without manual tracking.

What problem is this workflow solving?

Invoices often get buried in your inbox. This workflow uses AI to find them, extracts key details, and adds a task to remind you to pay before it's overdue. No more missed payments. No more manual tracking.

How it works?

- This workflow is triggered on a schedule (by default, every hour).
- It checks your Gmail inbox for unread messages.
- Each email is passed to an AI agent (using OpenAI), which decides whether it's an invoice.
- If an invoice is found: a task is created in your Google Tasks with the payment reminder and due date, and the email is labeled (for tracking) and marked as read.
- If not an invoice: the email is skipped (no action taken).

How to set up?

1. Connect these services in your n8n credentials: Gmail (OAuth2), OpenAI, Google Tasks.
2. Create a Gmail label: go to Gmail and create a label named Invoice. This label will be applied to processed invoice emails.
3. Choose your Google Tasks list: in the task creation node, select the correct task list for your reminders.
4. Set the schedule: in the Schedule Trigger node, choose how often it should check your inbox.

How to customize this workflow to your needs?

- **Change the Gmail label**: Update the label applied to emails after they are processed.
- **Edit the AI prompt**: Adjust the system prompt in the OpenAI node if your invoices follow a unique format.
- **Update the task format**: Modify the task title and notes to suit how you like your reminders to look (see the sketch after this section).
- **Adjust the schedule**: Run it more or less frequently based on how many invoices you receive.
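As a rough sketch of customizing the task format, here is a hypothetical Code node that maps the AI agent's output onto Google Tasks fields; the `vendor`, `amount`, and `dueDate` field names are assumptions, so align them with whatever your AI prompt actually extracts.

```javascript
// Hypothetical mapping from the AI agent's output to Google Tasks
// fields. vendor/amount/dueDate are assumed field names.
return items.map((item) => {
  const { vendor, amount, dueDate } = item.json;
  return {
    json: {
      title: `Pay invoice: ${vendor} (${amount})`,
      notes: `Found in Gmail by n8n. Original due date: ${dueDate}.`,
      due: new Date(dueDate).toISOString(), // Google Tasks expects RFC 3339
    },
  };
});
```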
by Emmanuel Bernard
This workflow illustrates how to use Perplexity AI in your n8n workflow. Perplexity is an AI-powered answer engine that provides accurate, trusted, and real-time answers to any question.

Credentials Setup

1. Go to the Perplexity dashboard, purchase some credits, and create an API key: https://www.perplexity.ai/settings/api
2. In the Perplexity Request node, use Generic Credentials with Header Auth. For the name, use "Authorization"; for the value, use "Bearer pplx-e4...59ea" (your Perplexity API key). The equivalent raw request is sketched after this section.

AI Model

Sonar Pro is the current top model offered by Perplexity. If you want to use a different one, check this page: https://docs.perplexity.ai/guides/model-cards
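For reference, here is roughly the raw request the HTTP Request node performs, following Perplexity's chat-completions API; the question string is just an example.

```javascript
// Raw equivalent of the Perplexity Request node (endpoint and model
// per Perplexity's API documentation). Run in an async context.
const response = await fetch("https://api.perplexity.ai/chat/completions", {
  method: "POST",
  headers: {
    Authorization: "Bearer pplx-...", // your Perplexity API key
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "sonar-pro",
    messages: [{ role: "user", content: "What happened in AI today?" }],
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);
```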
by Yaron Been
Automated monitoring system that tracks startup activities, funding events, and company updates in real time, providing valuable market intelligence.

🚀 What It Does
- Real-time monitoring of startup activities
- Funding alerts and updates
- Competitor tracking
- Industry trend analysis
- Customizable watchlists

🎯 Perfect For
- Venture capitalists
- Startup founders
- Business development teams
- Market researchers
- Investment analysts

⚙️ Key Benefits
✅ Stay ahead of market movements
✅ Never miss important funding rounds
✅ Track competitor activities
✅ Identify emerging trends
✅ Save hours of manual research

🔧 What You Need
- Crunchbase API access
- n8n instance
- Notification preferences (email/Slack/Teams)

📊 Data Points Tracked
- New funding rounds
- Company updates
- Leadership changes
- Product launches
- Market expansions

🛠️ Setup & Support

Quick Setup: deploy in 20 minutes with our step-by-step configuration guide.
📺 Watch Tutorial
💼 Get Expert Support
📧 Direct Help

Stay informed about the startup ecosystem with automated monitoring and alerts. Make data-driven decisions with timely, relevant information.
by Robert Breen
This no-code n8n workflow finds recent Instagram posts by hashtag, scrapes profile data, and uses an AI agent to evaluate whether each account is a good collaboration lead. The workflow filters based on the number of followers and the content of their bio, and outputs structured reasoning for outreach decisions. Perfect for creators, marketers, or business developers looking to automate influencer or community partnership prospecting—especially in niche ecosystems like n8n.

✅ Key Features

- 🔍 **Hashtag Discovery**: Finds recent Instagram posts from a specified hashtag (e.g., #n8n)
- 👤 **Account Scraping**: Retrieves profile details such as follower count and biography
- 🧠 **AI Evaluation**: Uses OpenAI and LangChain to determine if the profile is a good fit for outreach
- 📦 **Structured Output**: Returns a JSON object with "Yes/No" lead status and reasoning
- 🛠️ **Manual Execution**: Run on demand using the manual trigger

🧰 What You'll Need

| Tool / API | Purpose | Setup Steps |
|-------------------------|------------------------------------------|-------------|
| Apify Account | To access Instagram scraping actors | Create account → Generate API Token → Use in httpQueryAuth credential in n8n |
| OpenAI API Key | To power the AI decision-making agent | Sign up at OpenAI → Create API key → Paste into OpenAI credential in n8n |
| LangChain Plugin for n8n | AI orchestration with system message | Install LangChain nodes from Community Nodes (already installed in this workflow) |

🔧 Step-by-Step Setup

1️⃣ Manual Trigger
- **Node**: When clicking 'Execute workflow'
- **Use**: Allows you to run the workflow manually while testing.

2️⃣ Define Hashtag
- **Node**: Create Search Term
- **Value**: Sets "n8n" as the default Instagram hashtag to scan. You can edit this to any other hashtag you'd like.

3️⃣ Find Recent Posts
- **Node**: Find Recent Posts
- **API**: Apify Instagram Hashtag Scraper
- **Auth Setup**: Go to your Apify Console, click "Create new token", then in n8n create a new HTTP Query Auth credential, set the token in the token query param (e.g., ?token=yourTokenHere), and choose the credential in this node.

4️⃣ Scrape Each Profile
- **Node**: Scrape Accounts
- **API**: Apify Instagram Profile Scraper
- **Body**: JSON with usernames from the hashtag search
- **Note**: Uses the same httpQueryAuth credential as the previous node.

5️⃣ Extract Fields
- **Node**: Set bio and follower count
- **What it does**: Extracts biography and followersCount from the profile JSON and stores them in clean variables for AI input (see the sketch after this section).

6️⃣ AI Lead Scoring
- **Node**: AI Agent
- **Purpose**: Uses GPT-4o-mini to analyze the bio and follower count
- **Prompt Details**:

7️⃣ AI Model
- **Node**: OpenAI Chat Model
- **Model**: gpt-4o-mini
- **Credential**: Connect your OpenAI account via API key. Go to OpenAI API Keys, copy your key, and create a new OpenAI API credential in n8n.

8️⃣ Output Parser
- **Node**: Structured Output Parser
- **What it does**: Parses the response from the AI into structured JSON for further use (e.g., storing leads, sending to Airtable, etc.)

🧪 Sample Output

    {
      "lead status": "Yes",
      "Reasoning": "The user has 3.5k followers and their bio shows they build automations with n8n."
    }

📬 Need More Help?

If you'd like assistance setting this up, customizing it to your niche, or expanding it to score and store leads automatically — I can help!

👤 Robert Breen
Automation Consultant | AI Workflow Designer | n8n Expert
📧 robert@ynteractive.com
🌐 ynteractive.com
🔗 LinkedIn
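A plausible version of the "Set bio and follower count" step written as a Code node, assuming the Apify profile scraper returns `biography` and `followersCount` fields as the description states:

```javascript
// n8n Code node sketch: pull the two fields the AI agent needs
// from each scraped Instagram profile item.
return items.map((item) => ({
  json: {
    username: item.json.username,
    bio: item.json.biography ?? "",
    followersCount: item.json.followersCount ?? 0,
  },
}));
```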
by explorium
Explorium Prospects Search Chatbot Template

Download the following JSON file and import it into a new n8n workflow: mcp_to_prospects_to_csv.json

Overview

This n8n workflow creates a chatbot that understands natural language requests for finding business prospects and automatically:
1. Interprets your query using AI (Claude Sonnet 3.7)
2. Converts it to proper Explorium API filters
3. Validates the API request structure
4. Fetches prospect data from Explorium
5. Exports results as a downloadable CSV file

Perfect for sales teams, recruiters, and business development professionals who need to quickly find and export targeted prospect lists without learning complex API syntax.

Key Features

- **Natural Language Interface**: Simply describe who you're looking for in plain English
- **Smart Query Translation**: AI converts your request to valid API parameters
- **Built-in Validation**: Ensures API calls meet Explorium's requirements
- **Error Recovery**: Automatically retries with corrections if validation fails
- **Pagination Support**: Handles large result sets automatically
- **CSV Export**: Clean, formatted output ready for CRM import
- **Conversation Memory**: Maintains context for follow-up queries

Example Queries

The chatbot understands queries like:
- "Find marketing directors at SaaS companies in New York with 50-200 employees"
- "Get me CTOs from fintech startups in California"
- "Show me sales managers at healthcare companies with revenue over $10M"
- "Find engineers at Microsoft with 3-5 years experience"
- "Get customer service leads from e-commerce companies in Europe"

Prerequisites

Before setting up this workflow, ensure you have:
- n8n instance with chat interface enabled
- Anthropic API key for Claude
- Explorium API credentials (Bearer token) - Get Explorium API key
- Basic understanding of n8n chat workflows

Supported Filters

The chatbot can search using these criteria:

Company Filters
- **Size**: 1-10, 11-50, 51-200, 201-500, 501-1000, 1001-5000, 5001-10000, 10001+ employees
- **Revenue**: Ranges from $0-500K up to $10T+
- **Age**: 0-3, 3-6, 6-10, 10-20, 20+ years
- **Location**: Countries, regions, cities
- **Industry**: Google categories, NAICS codes, LinkedIn categories
- **Name**: Specific company names

Prospect Filters
- **Job Level**: CXO, VP, Director, Manager, Senior, Entry, etc.
- **Department**: Sales, Marketing, Engineering, Finance, HR, etc.
- **Experience**: Total months and current role duration
- **Location**: Country and region codes
- **Contact Info**: Filter by email/phone availability

Installation & Setup

Step 1: Import the Workflow
1. Copy the workflow JSON from the template
2. In n8n: Workflows → Add Workflow → Import from File
3. Paste the JSON and click Import

Step 2: Configure Anthropic Credentials
1. Click on the Anthropic Chat Model1 node
2. Under Credentials, click Create New
3. Add your Anthropic API key
4. Name: "Anthropic API"
5. Save credentials

Step 3: Configure Explorium Credentials

You'll need to set up Explorium credentials in two places:

For MCP Client:
1. Click on the MCP Client node
2. Under Credentials, create new Header Auth
3. Add your authentication header (usually Authorization: Bearer YOUR_TOKEN)
4. Save credentials

For API Calls:
1. Click on the Prospects API Call node
2. Use the same Header Auth credentials created above
3. Verify the API endpoint is correct

Step 4: Activate the Workflow
1. Save the workflow
2. Click the Active toggle to enable it
3. The chat interface will now be available

Step 5: Access the Chat Interface
1. Click on the When chat message received node
2. Copy the webhook URL
3. Access this URL in your browser to start chatting

How It Works

Workflow Architecture
1. Chat Trigger: Receives natural language queries from users
2. Memory Buffer: Maintains conversation context
3. AI Agent: Interprets queries and generates API parameters
4. Validation: Checks API structure against Explorium requirements
5. API Call: Fetches prospect data with pagination
6. Data Processing: Formats results for CSV export
7. File Conversion: Creates downloadable CSV file

Processing Flow

    User Query → AI Interpretation → Validation → API Call → CSV Export
         ↑                               ↓
         └──── Error Correction Loop ←───┘

Validation Rules

The workflow validates that (a minimal sketch of such a check appears after the Usage Guide below):
- Filter keys are allowed by the Explorium API
- Values match expected formats (e.g., valid country codes)
- Range filters have proper gte/lte values
- There are no duplicate values in arrays
- The required structure is maintained

Usage Guide

Basic Conversation Flow
1. Start with your query: "Find me VPs of Sales at software companies in the US"
2. Bot processes and responds: generates API filters, validates the structure, fetches data, and returns a CSV download link
3. Refine if needed: "Can you also include directors and filter for companies with 100+ employees?"
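Here is a minimal sketch of one validation pass, assuming a simple filters object of the form { key: { values: [...] } } or { key: { gte, lte } }; the allowed-key list and filter shape are illustrative assumptions, since the exact schema Explorium expects is defined inside the workflow itself.

```javascript
// Hypothetical validator for AI-generated filter objects.
const ALLOWED_KEYS = new Set([
  "company_size", "company_revenue", "company_age",
  "job_level", "job_department", "country_code",
]);

function validateFilters(filters) {
  const errors = [];
  for (const [key, value] of Object.entries(filters)) {
    if (!ALLOWED_KEYS.has(key)) {
      errors.push(`Unsupported filter key: ${key}`);
    } else if (Array.isArray(value.values)) {
      // Reject duplicate entries within a single filter array.
      if (new Set(value.values).size !== value.values.length) {
        errors.push(`Duplicate values in ${key}`);
      }
    } else if (value.gte !== undefined && value.lte !== undefined) {
      if (value.gte > value.lte) errors.push(`${key}: gte exceeds lte`);
    }
  }
  return errors; // empty array means the request passes validation
}
```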
Query Tips
- **Be specific**: Include job titles, departments, company details
- **Use standard terms**: "CTO" instead of "Chief Technology Officer"
- **Specify locations**: Use country names or standard codes
- **Include size/revenue**: Helps narrow results effectively

Advanced Queries

Combine multiple criteria: "Find engineering managers and senior engineers at B2B SaaS companies in New York and California with 50-500 employees and revenue over $5M who have been in their role for at least 1 year"

Output Format

The CSV file includes:
- Prospect ID
- Name (first, last, full)
- Location (country, region, city)
- LinkedIn profile
- Experience summary
- Skills and interests
- Company details
- Job information
- Business ID

Troubleshooting

Common Issues

"Validation failed" errors
- Check that your query uses supported filter values
- Ensure location names are spelled correctly
- Verify company sizes/revenues match allowed ranges

No results returned
- Broaden your search criteria
- Check if the company exists in Explorium's database
- Verify filter combinations aren't too restrictive

Chat not responding
- Ensure the workflow is activated
- Check all credentials are properly configured
- Verify the webhook URL is accessible

Large result sets timing out
- Try adding more specific filters
- Limit results by location or company size
- Use the size parameter (max 10,000)

Error Messages

The bot provides clear feedback:
- **Invalid filters**: Shows which filters aren't supported
- **Value errors**: Lists correct options for each field
- **API failures**: Explains connection or authentication issues

Performance Optimization

Best Practices
1. Start broad, then narrow: Begin with basic criteria and add filters
2. Use business IDs: When targeting specific companies
3. Limit by contact info: Add has_email: true for actionable leads
4. Batch by location: Process regions separately for large searches

API Limits
- Maximum 10,000 results per search
- Pagination handles up to 100 records per page
- Rate limits apply based on your Explorium subscription

Customization Options

Modify AI Behavior: edit the AI Agent system message to:
- Change response format
- Add custom filters
- Adjust interpretation logic
- Include additional instructions

Extend Functionality: add nodes to:
- Send results via email
- Import directly to CRM
- Schedule recurring searches
- Create custom reports

Integration Ideas
- Connect to Slack for team queries
- Add to CRM workflows
- Create lead scoring systems
- Build automated outreach campaigns

Security Considerations
- API credentials are stored securely in n8n
- Chat sessions are isolated
- No prospect data is stored permanently
- CSV files are generated on-demand

Support Resources

For issues with:
- **n8n platform**: Check n8n documentation
- **Explorium API**: Contact Explorium support
- **Anthropic/Claude**: Refer to Anthropic docs
- **Workflow logic**: Review node configurations