by Yaron Been
# Zsxkib Canary Qwen 2.5b Text Generator

## Description
🎤 The best open-source speech-to-text model as of July 2025, transcribing audio with a record 5.63% WER and enabling AI tasks like summarization directly from speech ✨

## Overview
This n8n workflow integrates with the Replicate API to use the zsxkib/canary-qwen-2.5b model. This powerful AI model can generate high-quality text content based on your inputs.

## Features
- Easy integration with the Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

## Parameters

### Required Parameters
- **audio** (string): Audio file to transcribe

### Optional Parameters
- **llm_prompt** (string, default: None): Optional LLM analysis prompt
- **show_confidence** (boolean, default: False): Show AI reasoning in analysis
- **include_timestamps** (boolean, default: True): Include timestamps in transcript

## How to Use
1. Set up your Replicate API key in the workflow
2. Configure the required parameters for your use case
3. Run the workflow to generate text content
4. Access the generated output from the final node

## API Reference
- Model: zsxkib/canary-qwen-2.5b
- API Endpoint: https://api.replicate.com/v1/predictions

## Requirements
- Replicate API key
- n8n instance
- Basic understanding of text generation parameters
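For reference, a minimal sketch of the Replicate call pattern this workflow wraps (create a prediction, then poll for completion), assuming Node 18+ `fetch` and the current Replicate REST API; the model version hash is a placeholder you would look up on the model's Replicate page:

```javascript
// Sketch of the create-then-poll prediction flow, not the workflow's exact nodes.
const REPLICATE_TOKEN = process.env.REPLICATE_API_TOKEN;
const MODEL_VERSION = '<canary-qwen-2.5b-version-hash>'; // placeholder, look it up on Replicate

async function transcribe(audioUrl) {
  // 1. Create the prediction
  const createRes = await fetch('https://api.replicate.com/v1/predictions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${REPLICATE_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      version: MODEL_VERSION,
      input: { audio: audioUrl, include_timestamps: true },
    }),
  });
  let prediction = await createRes.json();

  // 2. Poll until it finishes (this is the "automated status checking" the workflow does)
  while (!['succeeded', 'failed', 'canceled'].includes(prediction.status)) {
    await new Promise((resolve) => setTimeout(resolve, 2000));
    const pollRes = await fetch(prediction.urls.get, {
      headers: { Authorization: `Bearer ${REPLICATE_TOKEN}` },
    });
    prediction = await pollRes.json();
  }
  return prediction.output; // transcript (plus analysis, if llm_prompt was set)
}

transcribe('https://example.com/sample.wav').then(console.log);
```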
by Varritech
# Workflow: Auto-Ticket Maker

## ⚡ About the Creators
This workflow was created by Varritech Technologies, an innovative agency that leverages AI to engineer, design, and deliver software development projects 500% faster than traditional agencies. Based in New York City, we specialize in custom software development, web applications, and digital transformation solutions. If you need assistance implementing this workflow or have questions about content management solutions, please reach out to our team.

## 🏗️ Architecture Overview
This workflow transforms your Slack conversations into complete project tickets, effectively replacing the need for a dedicated PM for task creation:

1. **Slack Webhook** → Captures team conversation
2. **Code Transformation** → Parses Slack message structure
3. **AI PM Agent** → Analyzes requirements and creates complete tickets
4. **Memory Buffer** → Maintains conversation context
5. **Slack Output** → Returns formatted tickets to your channel

Say goodbye to endless PM meetings just to create tickets! Simply describe what you need in Slack, and our AI PM handles the rest, breaking down complex projects into structured epics and tasks with all the necessary details.

## 📦 Node-by-Node Breakdown

```mermaid
flowchart LR
    A[Webhook: Slack Trigger] --> B[Code: Parse Message]
    B --> C[AI PM Agent]
    C --> D[Slack: Post Tickets]
    E[Memory Buffer] --> C
    F[OpenAI Model] --> C
```

### Webhook: Slack Trigger
- Type: HTTP Webhook (POST /slack-ticket-maker)
- Purpose: Captures messages from your designated Slack channel.

### Code Transformation
- Function: Parses the complex Slack payload structure (see the sketch after this section)
- Extracts: User ID, channel, message text, timestamp, thread information

### AI PM Agent
- Inputs: Parsed Slack message
- Process:
  - Evaluates project complexity
  - Requests project name if needed
  - Asks clarifying questions (up to 2 rounds)
  - Breaks down into epics and tasks
  - Formats with comprehensive structure
- Ticket Structure:
  - Title
  - Description
  - Objectives/Goals
  - Definition of Done
  - Requirements/Acceptance Criteria
  - Implementation Details
  - Risks & Challenges
  - Testing & Validation
  - Timeline & Milestones
  - Related Notes & References
  - Open Questions

### Memory Buffer
- Type: Window Buffer Memory
- Purpose: Maintains context across conversation

### Slack Output
- Posts fully-formatted tickets back to your channel
- Uses markdown for clean, structured presentation

## 🔍 Design Rationale & Best Practices
- **Replace Your PM's Ticket Creation Time**: Let your PM focus on strategy while AI handles the documentation. Cut ticket creation time by 90%.
- **Standardized Quality**: Every ticket follows best practices with consistent structure, detail level, and formatting.
- **No Training Required**: Describe your needs conversationally; the AI adapts to your communication style.
- **Seamless Integration**: Works within your existing Slack workflow; no new tools to learn.
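As referenced in the Code Transformation node above, a minimal n8n Code node sketch of the Slack payload parsing, assuming a standard Slack Events API message delivered to the webhook (adjust the field paths to your actual payload):

```javascript
// Hypothetical "Code: Parse Message" node body.
// The webhook node exposes the POST body under item.json.body.
return $input.all().map((item) => {
  const event = item.json.body?.event ?? {};
  return {
    json: {
      userId: event.user,
      channel: event.channel,
      messageText: event.text,
      timestamp: event.ts,
      threadTs: event.thread_ts ?? event.ts, // fall back to the message itself
    },
  };
});
```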
by scrapeless official
# Brief Overview
This automation template helps you track the latest real estate listings from the LoopNet platform. By using Scrapeless to scrape property listings, n8n to orchestrate the workflow, and Google Sheets to store the results, you can build a real estate data pipeline that runs automatically on a weekly schedule.

## How It Works
- **Trigger on a Schedule:** The workflow runs automatically every week (can be adjusted to every 6 hours, daily, etc.).
- **Scrape Property Listings:** Scrapeless crawls the LoopNet real estate website and returns structured Markdown data.
- **Extract & Parse Content:** JavaScript nodes use regex to parse property titles, links, sizes, and year built from the Markdown (see the sketch at the end of this section).
- **Flatten Data:** Each property listing becomes a single row with structured fields.
- **Save to Google Sheets:** Property data is appended to your Google Sheet for easy analysis, sharing, and reporting.

## Features
- No-code, automated real estate listing scraper.
- Scrapes and structures the latest commercial property listings (for sale or lease).
- Saves structured listing data directly to Google Sheets.
- Fully automated, scheduled scraping; no manual scraping is required.
- Extensible: add filters, deduplication, Slack/Email notifications, or multi-city scraping.

## Requirements
- **Scrapeless API Key:** Sign up on the Scrapeless Dashboard. Go to Settings → API Key Management → Create API Key, then copy the generated key.
- **n8n Instance:** Self-hosted or n8n.cloud account.
- **Google Account:** For Google Sheets API access.
- **Target Site:** This template is configured for LoopNet real estate listings but can be adapted for other property platforms like Crexi.

## Installation
1. Deploy n8n on your preferred platform.
2. Install the Scrapeless node from the community marketplace.
3. Import this workflow JSON file into your n8n workspace.
4. Create and add your Scrapeless API Key in n8n's credential manager.
5. Connect your Google Sheets account in n8n.
6. Update the target LoopNet URL and Google Sheet details.

## Usage
This automated real estate scraper is ideal for:

| Industry / Role | Use Case |
| --- | --- |
| Real Estate Agencies | Monitor new commercial properties and streamline lead generation. |
| Market Research Teams | Track market dynamics and property availability in real time. |
| BI/Data Analysts | Automate data collection for dashboards and market insights. |
| Investors | Keep tabs on the latest commercial property opportunities. |
| Automation Enthusiasts | Example use case for learning web scraping + automation. |

## Output Example
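As referenced in the Extract & Parse step, a minimal Code node sketch of the regex parsing, assuming the scraper output arrives in a `markdown` field and that listings appear as Markdown links; the patterns are illustrative and need tuning to the real LoopNet Markdown:

```javascript
// Hypothetical "Parse Listings" Code node body.
const markdown = $input.first().json.markdown ?? '';

// Markdown links look like "[1234 Main St - Office for Lease](https://www.loopnet.com/...)"
const linkPattern = /\[([^\]]+)\]\((https?:\/\/[^)]+)\)/g;

const listings = [];
let match;
while ((match = linkPattern.exec(markdown)) !== null) {
  const [, title, url] = match;
  // Look for a size like "12,500 SF" in the text near the link (illustrative only)
  const nearby = markdown.slice(match.index, match.index + 400);
  const sizeMatch = nearby.match(/([\d,]+)\s*SF/i);
  listings.push({
    json: {
      title: title.trim(),
      url,
      size: sizeMatch ? sizeMatch[1] : null,
    },
  });
}

return listings; // one item per property row, ready for Google Sheets
```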
by Hemanth Arety
# Generate AEO strategy from brand input using AI competitor analysis

This workflow automatically creates a comprehensive Answer Engine Optimization (AEO) strategy by identifying your top competitors, analyzing their positioning, and generating custom recommendations to help your brand rank in AI-powered search engines like ChatGPT, Perplexity, and Google SGE.

## Who it's for
This template is perfect for:
- **Digital marketing agencies** offering AEO services to clients
- **In-house marketers** optimizing content for AI search engines
- **Brand strategists** analyzing competitive positioning
- **Content teams** creating AI-optimized content strategies
- **SEO professionals** expanding into Answer Engine Optimization

## What it does
The workflow automates the entire AEO research and strategy process in 6 steps:
1. Collects brand information via a user-friendly web form (brand name, website, niche, product type, email)
2. Identifies the top 3 competitors using Google Gemini AI based on product overlap, market position, digital presence, and geographic factors
3. Scrapes the target brand website with Firecrawl to extract value propositions, features, and content themes
4. Scrapes competitor websites in parallel to gather competitive intelligence
5. Generates a comprehensive AEO strategy using OpenAI GPT-4 with 15+ actionable recommendations
6. Delivers a formatted report via email with executive summary, competitive analysis, and implementation roadmap

The entire process runs automatically and takes approximately 5-7 minutes to complete.

## How to set up

### Requirements
You'll need API credentials for:
- **Google Gemini API** (for competitor analysis) - Get API key
- **OpenAI API** (for strategy generation) - Get API key
- **Firecrawl API** (for web scraping) - Get API key
- **Gmail account** (for email delivery) - Use OAuth2 authentication

### Setup Steps
1. Import the workflow into your n8n instance
2. Configure credentials:
   - Add your Google Gemini API key to the "Google Gemini Chat Model" node
   - Add your OpenAI API key to the "OpenAI Chat Model" node
   - Add your Firecrawl API key as HTTP Header Auth credentials
   - Connect your Gmail account using OAuth2
3. Activate the workflow and copy the form webhook URL
4. Test the workflow by submitting a real brand through the form
5. Check your email for the generated AEO strategy report

### Credentials Setup Tips
- For Firecrawl: Create HTTP Header Auth credentials with header name Authorization and value Bearer YOUR_API_KEY
- For Gmail: Use OAuth2 to avoid authentication issues with 2FA
- Test each API credential individually before running the full workflow

## How it works

### Competitor Identification
The Google Gemini AI agent analyzes your brand based on 4 weighted criteria: product/service overlap (40%), market position (30%), digital presence (20%), and geographic overlap (10%). It returns structured JSON data with competitor names, URLs, overlap percentages, and detailed reasoning.

### Web Scraping
Firecrawl extracts structured data from websites using custom schemas. For each site, it captures: company name, products/services, value proposition, target audience, key features, pricing info, and content themes. This runs asynchronously with 60-second waits to allow for complete extraction.
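For orientation, a hedged sketch of the kind of Firecrawl extraction request the HTTP Request nodes make. The endpoint and body shape follow Firecrawl's v1 scrape API as commonly documented, and the schema fields simply mirror the list above; verify both against the current Firecrawl docs and the workflow's actual node configuration:

```javascript
// Illustrative Firecrawl structured-extraction call (Node 18+).
const res = await fetch('https://api.firecrawl.dev/v1/scrape', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.FIRECRAWL_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    url: 'https://competitor-example.com',
    formats: ['extract'],
    extract: {
      schema: {
        type: 'object',
        properties: {
          company_name: { type: 'string' },
          value_proposition: { type: 'string' },
          target_audience: { type: 'string' },
          key_features: { type: 'array', items: { type: 'string' } },
          pricing_info: { type: 'string' },
          content_themes: { type: 'array', items: { type: 'string' } },
        },
      },
    },
  }),
});
console.log(await res.json());
```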
### Strategy Generation
OpenAI GPT-4 analyzes the combined brand and competitor data to generate a comprehensive report including: executive summary, competitive analysis, 15+ specific AEO tactics across 4 categories (content optimization, structural improvements, authority building, answer engine targeting), a content priority matrix with 10 ranked topics, and a detailed implementation roadmap.

### Email Delivery
The strategy is formatted as a professional HTML email with clear sections, visual hierarchy, and actionable next steps. Recipients get an immediately implementable roadmap for improving their AEO performance.

## How to customize the workflow

### Change AI Models
- **Replace Google Gemini** with Claude, GPT-4, or another LLM in the competitor analysis node
- **Replace OpenAI** with Anthropic Claude or Google Gemini in the strategy generation node
- Both use LangChain agent nodes, making model swapping straightforward

### Modify Competitor Analysis
- **Find more competitors**: Edit the AI prompt to request 5 or 10 competitors instead of 3
- **Add filtering criteria**: Include factors like company size, funding stage, or geographic focus
- **Change ranking weights**: Adjust the 40/30/20/10 weighting in the prompt

### Enhance Data Collection
- **Add social media scraping**: Include LinkedIn, Twitter/X, or Facebook page analysis
- **Pull review data**: Integrate G2, Capterra, or Trustpilot APIs for customer sentiment
- **Include traffic data**: Add SimilarWeb or Semrush API calls for competitive metrics

### Change Output Format
- **Export to Google Docs**: Replace Gmail with a Google Docs node to create shareable documents
- **Send to Slack/Discord**: Post strategy summaries to team channels for collaboration
- **Save to database**: Store results in Airtable, PostgreSQL, or MongoDB for tracking
- **Create presentations**: Generate PowerPoint slides using automation tools

### Add More Features
- **Schedule periodic analysis**: Run monthly competitive audits for specific brands
- **A/B test strategies**: Generate multiple strategies and compare results over time
- **Multi-language support**: Add translation nodes for international brands
- **Custom branding**: Modify email templates with your agency's logo and colors

### Adjust Scraping Behavior
- **Change the Firecrawl schema**: Customize extracted data fields based on industry needs
- **Add timeout handling**: Implement retry logic for failed scraping attempts
- **Scrape more pages**: Extend beyond the homepage to include blog, pricing, and about pages
- **Use different scrapers**: Replace Firecrawl with Apify, Browserless, or custom solutions

## Tips for best results
- **Provide clear brand information**: The more specific the product type and niche, the better the competitor identification
- **Ensure websites are accessible**: Some sites block scrapers; consider adding user agents or rotating IPs
- **Monitor API costs**: Firecrawl and OpenAI charges can add up; set usage limits
- **Review generated strategies**: AI recommendations should be reviewed and customized for your specific context
- **Iterate on prompts**: Fine-tune the AI prompts based on output quality over multiple runs

## Common use cases
- **Client onboarding** for marketing agencies - Generate initial AEO assessments
- **Content strategy planning** - Identify topics and angles competitors are missing
- **Quarterly audits** - Track competitive positioning changes over time
- **Product launches** - Understand the competitive landscape before entering a market
- **Sales enablement** - Equip sales teams with competitive intelligence

Note: This workflow uses community and AI nodes that require external API access. Make sure your n8n instance can make outbound HTTP requests and has the necessary LangChain nodes installed.
by Marth
# Automated AI-Driven Competitor & Market Intelligence System

**Problem Solved:** Small and Medium-sized IT companies often struggle to stay ahead in a rapidly evolving market. Manually tracking competitor moves, pricing changes, product updates, and emerging market trends is time-consuming, inconsistent, and often too slow for agile sales strategies. This leads to missed sales opportunities, ineffective pitches, and a reactive rather than proactive market approach.

**Solution Overview:** This n8n workflow automates the continuous collection and AI-powered analysis of competitor data and market trends. By leveraging web scraping, RSS feeds, and advanced AI models, it transforms raw data into actionable insights for your sales and marketing teams. The system generates structured reports, notifies relevant stakeholders, and stores intelligence in your database, empowering your team with real-time, strategic information.

**For Whom:** This high-value workflow is perfect for:
- **IT Solution Providers & SaaS Companies:** To maintain a competitive edge and tailor sales pitches based on competitor weaknesses and market opportunities.
- **Sales & Marketing Leaders:** To gain comprehensive, automated market intelligence without extensive manual research.
- **Product Development Teams:** To identify market gaps and validate new feature development based on competitive landscapes and customer sentiment.
- **Business Strategists:** To inform strategic planning with data-driven insights into industry trends and competitive threats.

## How It Works (Scope of the Workflow) ⚙️
This system establishes a powerful, automated pipeline for market and competitor intelligence:

1. **Scheduled Data Collection:** The workflow runs automatically at predefined intervals (e.g., weekly), initiating data retrieval from various online sources.
2. **Diverse Information Gathering:** It pulls data from competitor websites (pricing, features, blogs via web scraping services), industry news and blogs (via RSS feeds), and potentially other sources.
3. **Intelligent Data Preparation:** Collected data is aggregated, cleaned, and pre-processed using custom code to ensure it's in an optimal format for AI analysis, removing noise and extracting relevant text.
4. **AI-Powered Analysis:** An advanced AI model (like OpenAI's GPT-4o) performs in-depth analysis on the cleaned data. It identifies competitor strengths, weaknesses, new offerings, pricing changes, customer sentiment from reviews, and emerging market trends, and suggests specific opportunities and threats for your company.
5. **Automated Report Generation:** The AI's structured insights are automatically populated into a professional Google Docs report using a predefined template, making the intelligence easily digestible for your team.
6. **Team Notification:** Stakeholders (sales leads, marketing managers) receive automated notifications via Slack (or email), alerting them to the new report and key insights.
7. **Strategic Data Storage & Utilization:** All analyzed insights are stored in a central database (e.g., PostgreSQL). This builds a historical record for long-term trend analysis and can optionally trigger sub-workflows to generate personalized sales talking points directly relevant to ongoing deals or specific prospects.
## Setup Steps 🛠️ (Building the Workflow)
To implement this sophisticated workflow in your n8n instance, follow these detailed steps:

### 1. Prepare Your Digital Assets & Accounts
- **Google Sheet (Optional, if using for CRM data):** For simpler CRM, create a sheet with CompetitorName, LastAnalyzedDate, Strengths, Weaknesses, Opportunities, Threats, SalesTalkingPoints.
- **API Keys & Credentials:**
  - **OpenAI API Key:** Essential for the AI analysis.
  - **Web Scraping Service API Key:** For services like Apify, Crawlbase, or similar (e.g., Bright Data, ScraperAPI).
  - **Database Access:** Credentials for your PostgreSQL/MySQL database. Ensure you've created the necessary tables (competitor_profiles, market_trends) with appropriate columns.
  - **Google Docs Credential:** To link n8n to your Google Drive for report generation. Create a template Google Doc with placeholders (e.g., {{competitorName}}, {{strengths}}).
  - **Slack Credential:** For sending team notifications to specific channels.
  - **CRM API Key (Optional):** If directly integrating with HubSpot, Salesforce, or a custom CRM via API.

### 2. Identify Data Sources for Intelligence
- Compile a list of competitor website URLs you want to monitor (e.g., pricing pages, blog sections, news).
- Identify relevant online review platforms (e.g., G2, Capterra) for competitor products.
- Gather RSS feed URLs from key industry news sources, tech blogs, and competitors' own blogs.
- Define keywords for general market trends or competitor mentions, if using tools that provide RSS feeds (like Google Alerts).

### 3. Build the n8n Workflow (10 Key Nodes)
Start a new workflow in n8n and add the following nodes, configuring their parameters and connections carefully:

1. **Cron (Scheduled Analysis Trigger):** Set this to trigger daily or weekly at a specific time (e.g., Every Week, At Hour: 0, At Minute: 0).
2. **HTTP Request (Fetch Competitor Web Data):** Configure this to call your chosen web scraping service's API. Set Method to POST, URL to the service's API endpoint, and build the JSON/Raw Body with the startUrls (competitor websites, review sites) for scraping, including your API Key in Authentication (e.g., Header Auth).
3. **RSS Feed (Fetch News & Blog RSS):** Add the URLs of competitor blogs and industry news RSS feeds.
4. **Merge (Combine Data Sources):** Connect inputs from both Fetch Competitor Web Data and Fetch News & Blog RSS. Use Merge By Position.
5. **Code (Pre-process Data for AI):** Write JavaScript code to iterate through merged items, extract relevant text content, perform basic cleaning (e.g., HTML stripping), and limit text length for AI input. Output should be an array of objects with content, title, url, and source (see the sketch at the end of this section).
6. **OpenAI (AI Analysis & Competitor Insights):** Select your OpenAI credential. Set Resource to Chat Completion and Model to gpt-4o. In Messages, create a System message defining the AI's role and a User message containing the dynamic prompt (referencing {{ $json.map(item => ... ).join('\\n\\n') }} for content, title, url, source) and requesting a structured JSON output for analysis. Set Output to Raw Data.
7. **Google Docs (Generate Market Intelligence Report):** Select your Google Docs credential. Set Operation to Create document from template. Provide your Template Document ID and map the Values from the parsed AI output (using JSON.parse($json.choices[0].message.content).PropertyName) to your template placeholders.
8. **Slack (Sales & Marketing Team Notification):** Select your Slack credential. Set Chat ID to your team's Slack channel ID. Compose the Text message, referencing the report link ({{ $json.documentUrl }}) and key AI insights (e.g., {{ JSON.parse($json.choices[0].message.content).Competitor_Name }}).
9. **PostgreSQL (Store Insights to Database):** Select your PostgreSQL credential. Set Operation to Execute Query. Write an INSERT ... ON CONFLICT DO UPDATE SQL query to store the AI insights into your competitor_profiles or market_trends table, mapping values from the parsed AI output.
10. **OpenAI (Generate Personalized Sales Talking Points - Optional Branch):** This node can be part of the main workflow or a separate, manually triggered workflow. Configure it similarly to the main AI node, but with a prompt tailored to generate sales talking points based on a specific sales context and the stored insights.

### 4. Final Testing & Activation
- **Run a Test:** Before going live, manually trigger the workflow from the first node. Carefully review the data at each stage to ensure correct processing and output. Verify that reports are generated, notifications are sent, and data is stored correctly.
- **Activate Workflow:** Once testing is complete and successful, activate the workflow in n8n.

This system will empower your IT company's sales team with invaluable, data-driven intelligence, enabling them to close more deals and stay ahead in the market.
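As referenced in node 5 above, a minimal sketch of the Pre-process Data for AI Code node. The input field names and the length cap are assumptions; adapt them to what your scraping service and RSS nodes actually return:

```javascript
// Hypothetical "Pre-process Data for AI" Code node body.
const MAX_CHARS = 6000; // keep each item comfortably inside the model's context window

return $input.all().map((item) => {
  const raw = item.json.content ?? item.json.contentSnippet ?? item.json.text ?? '';
  const cleaned = String(raw)
    .replace(/<[^>]+>/g, ' ') // crude HTML stripping
    .replace(/\s+/g, ' ')     // collapse whitespace
    .trim()
    .slice(0, MAX_CHARS);

  return {
    json: {
      content: cleaned,
      title: item.json.title ?? '',
      url: item.json.link ?? item.json.url ?? '',
      source: item.json.source ?? 'rss-or-scrape',
    },
  };
});
```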
by franck fambou
# Overview
This advanced automation workflow enables deep web scraping combined with Retrieval-Augmented Generation (RAG) to transform websites into intelligent, queryable knowledge bases. The system recursively crawls target websites, extracts content, and indexes all data in a vector database for AI conversational access.

## How the system works

### Intelligent Web Scraping and RAG Pipeline
1. **Recursive Web Scraper** - Automatically crawls every accessible page of a target website
2. **Data Extraction** - Collects text, metadata, emails, links, and PDF documents
3. **Supabase Integration** - Stores content in PostgreSQL tables for scalability
4. **RAG Vectorization** - Generates embeddings and stores them for semantic search
5. **AI Query Layer** - Connects embeddings to an AI chat engine with citations
6. **Error Handling** - Automatically retriggers failed queries

## Setup Instructions
Estimated setup time: 30-45 minutes

### Prerequisites
- Self-hosted n8n instance (v0.200.0 or higher)
- Supabase account and project (PostgreSQL enabled)
- OpenAI/Gemini/Claude API key for embeddings and chat
- Optional: External vector database (Pinecone, Qdrant)

### Detailed configuration steps

**Step 1: Supabase configuration**
- **Project creation**: New Supabase project with PostgreSQL enabled
- **Generating credentials**: API keys (anon key and service_role key) and connection string
- **Security configuration**: RLS policies according to your access requirements

**Step 2: Connect Supabase to n8n**
- **Configure Supabase node**: Add credentials to n8n Credentials
- **Test connection**: Verify with a simple query
- **Configure PostgreSQL**: Direct connection for advanced operations

**Step 3: Preparing the database**
- **Main tables**:
  - pages: URLs, content, metadata, scraping statuses
  - documents: Extracted and processed PDF files
  - embeddings: Vectors for semantic search
  - links: Link graph for navigation
- **Management functions**: Scripts to reactivate failed URLs and manage retries

**Step 4: Configuring automation**
- **Recursive scraper**: Starting URL, crawling depth, CSS selectors
- **HTTP extraction**: User-Agent, headers, timeouts, and retry policies
- **Supabase backup**: Batch insertion, data validation, duplicate management

**Step 5: Error handling and re-executions**
- **Failure monitoring**: Automatic detection of failed URLs
- **Manual triggers**: Selective re-execution by domain or date
- **Recovery sub-streams**: Retry logic with exponential backoff

**Step 6: RAG processing**
- **Embedding generation**: Text-embedding models with intelligent chunking (see the chunking sketch at the end of this section)
- **Vector storage**: Supabase pgvector or an external database
- **Conversational engine**: Connection to chat models with source citations

## Data structure

### Main Supabase tables

| Table | Content | Usage |
|-------|---------|-------|
| pages | URLs, HTML content, metadata | Main storage for scraped content |
| documents | PDF files, extracted text | Downloaded and processed documents |
| embeddings | Vectors, text chunks | Semantic search and RAG |
| links | Link graph, navigation | Relationships between pages |

## Use cases

### Business and enterprise
- Competitive intelligence with conversational querying
- Market research from complex web domains
- Compliance monitoring and regulatory watch

### Research and academia
- Literature extraction with semantic search
- Building datasets from fragmented sources

### Legal and technical
- Scraping legal repositories with intelligent queries
- Technical documentation transformed into a conversational assistant

## Key features

### Advanced scraping
- Recursive crawling with automatic link discovery
- Multi-format extraction (HTML, PDF, emails)
- Intelligent error handling and retry

### Intelligent RAG
- Contextual embeddings for semantic search
- Multi-document queries with citations
- Intuitive conversational interface

### Performance and scalability
- Processing of thousands of pages per execution
- Embedding cache for fast responses
- Scalable architecture with Supabase

## Technical Architecture
- Main flow: Target URL → Recursive scraping → Content extraction → Supabase storage → Vectorization → Conversational interface
- Supported types: HTML pages, PDF documents, metadata, links, emails

## Performance specifications
- **Capacity**: 10,000+ pages per run
- **Response time**: < 5 seconds for RAG queries
- **Accuracy**: > 90% relevance for specific domains
- **Scalability**: Distributed architecture via Supabase

## Advanced configuration

### Customization
- Crawling depth and scope controls
- Domain and content type filters
- Chunking settings to optimize RAG

### Monitoring
- Real-time monitoring in Supabase
- Cost and performance metrics
- Detailed conversation logs
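As referenced in Step 6, a minimal chunking sketch for the vectorization stage. The chunk size, overlap, and column names are assumptions to illustrate the idea; tune them to your content and to the embeddings table you created:

```javascript
// Split a scraped page into overlapping chunks before embedding.
function chunkText(text, chunkSize = 1000, overlap = 150) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
  }
  return chunks;
}

// Example: prepare one page for the embedding model and the embeddings table.
// page_id / chunk_index are illustrative column names, not the template's exact schema.
const page = { id: 42, content: 'Long scraped page text ...' };
const rows = chunkText(page.content).map((chunk, i) => ({
  page_id: page.id,
  chunk_index: i,
  chunk_text: chunk,
}));
console.log(rows.length, 'chunks ready for embedding');
```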
by Akshay
# Overview
This project is an AI-powered hotel receptionist built using n8n, designed to handle guest queries automatically through WhatsApp. It integrates Google Gemini, Redis, MySQL, and Google Sheets via LangChain to create an intelligent conversational system that understands and answers booking-related questions in real time.

A standout feature of this workflow is its AI model-switching system — it dynamically assigns users to different Gemini models, balancing traffic, improving performance, and reducing API costs.

## How It Works

### WhatsApp Trigger
The workflow starts when a hotel guest sends a message through WhatsApp. The system captures the message text, contact details, and session information for further processing.

### Redis-Based Model Management
The workflow checks Redis for a saved record of the user's previously assigned AI model. If no record exists, a Model Decider node assigns a new model (e.g., Gemini 1 or Gemini 2). Redis then stores this model assignment for an hour, ensuring consistent routing and controlled traffic distribution (a sketch of this decider logic appears at the end of this section).

### Model Selector
The Model Selector routes each user's request to the correct Gemini instance, enabling parallel execution across multiple AI models for faster response times and cost optimization.

### AI Agent Logic
The LangChain AI Agent serves as the system's reasoning core. It:
- Interprets guest questions such as:
  - "Who checked in today?"
  - "Show me tomorrow's bookings."
  - "What's the price for a deluxe suite for two nights?"
- Generates safe, read-only SQL SELECT queries.
- Fetches the requested data from the MySQL database.
- Combines this with dynamic pricing or promotions from Google Sheets, if available.

### Response Delivery
Once the AI Agent formulates an answer, it sends a natural-sounding message back to the guest via WhatsApp, completing the interaction loop.

## Setup & Requirements

### Prerequisites
Before deploying this workflow, ensure the following:
- **n8n Instance** (local or hosted)
- **WhatsApp Cloud API** with messaging permissions
- **Google Gemini API Key** (for both models)
- **Redis Database** for user session and model routing
- **MySQL Database** for hotel booking and guest data
- **Google Sheets Account** (optional, for pricing or offer data)

### Step-by-Step Setup
1. **Configure Credentials:** Add all API credentials in n8n → Settings → Credentials (WhatsApp, Redis, MySQL, Google).
2. **Prepare Databases:** MySQL tables example: bookings(id, guest_name, room_type, check_in, check_out), rooms(id, type, rate, status). Ensure the MySQL user has read-only permissions.
3. **Set Up Redis:** Create Redis keys for each user: llm-user:<whatsapp_id> = { "modelIndex": 0 }, TTL: 3600 seconds (1 hour).
4. **Connect Google Sheets (Optional):** Add your sheet under Google Sheets OAuth2. Use it to manage room rates, discounts, or seasonal offers dynamically.
5. **WhatsApp Webhook Configuration:** In Meta's Developer Console, set the webhook URL to your n8n instance. Select message updates to trigger the workflow.
6. **Testing the Workflow:** Send messages like "Who booked today?" or a voice message. Confirm responses include real data from MySQL and contextual replies.

## Key Features
- **Text & voice support** for guest interactions
- **Automatic AI model-switching** using Redis
- **Session memory** for context-aware conversations
- **Read-only SQL query generation** for database safety
- **Google Sheets integration** for live pricing and availability
- **Scalable design** supporting multiple LLM instances

## Example Guest Queries

| Guest Query | AI Response Example |
|--------------|--------------------|
| "Who checked in today?" | "Two guests have checked in today: Mr. Ahmed (Room 203) and Ms. Priya (Room 410)." |
| "How much is a deluxe room for two nights?" | "A deluxe room costs $120 per night. The total for two nights is $240." |
| "Do you have any discounts this week?" | "Yes! We're offering a 10% weekend discount on all deluxe and suite rooms." |
| "Show me tomorrow's check-outs." | "Three check-outs are scheduled tomorrow: Mr. Khan (101), Ms. Lee (207), and Mr. Singh (309)." |

## Customization Options

### 🧩 Model Assignment Logic
You can modify the Model Decider node to:
- Assign models based on user load, region, or priority level.
- Increase or decrease the TTL in Redis for longer model persistence.

### 🧠 AI Agent Prompt
Adjust the system prompt to control tone and response behavior — for example:
- Add multilingual support.
- Include upselling or booking confirmation messages.

### 🗂️ Database Expansion
Extend MySQL to include:
- Staff schedules
- Maintenance records
- Restaurant reservations

Then link new queries in the AI Agent node for richer responses.

## Tech Stack
- **n8n** – Workflow automation & orchestration
- **Google Gemini (PaLM)** – LLM for reasoning & generation
- **Redis** – Model assignment & session management
- **MySQL** – Booking & guest data storage
- **Google Sheets** – Dynamic pricing reference
- **WhatsApp Cloud API** – Messaging interface

## Outcome
This workflow demonstrates how AI automation can transform hotel operations by combining WhatsApp communication, database intelligence, and multi-model AI reasoning. It's a production-ready foundation for scalable, cost-optimized, AI-driven hospitality solutions that deliver fast, accurate, and personalized guest interactions.
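As referenced in the Redis-Based Model Management section, a hedged sketch of what the Model Decider Code node could look like. The path to the WhatsApp ID and the even/odd split are assumptions; the actual node may use a different assignment strategy:

```javascript
// Hypothetical "Model Decider" Code node body. Runs only when the Redis
// lookup for llm-user:<whatsapp_id> comes back empty; a following Redis node
// stores the result with a 3600 s TTL.
const whatsappId = $input.first().json.contact?.wa_id ?? 'unknown';

// Derive a stable 0/1 index so the same guest keeps hitting the same model
const hash = [...String(whatsappId)].reduce((sum, ch) => sum + ch.charCodeAt(0), 0);
const modelIndex = hash % 2; // 0 -> Gemini instance 1, 1 -> Gemini instance 2

return [
  {
    json: {
      key: `llm-user:${whatsappId}`,
      value: JSON.stringify({ modelIndex }),
      ttl: 3600,
    },
  },
];
```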
by Michael A Putra
# Demo Personalized Email
This n8n workflow is built for AI and automation agencies to promote their workflows through an interactive demo that prospects can try themselves. The featured system is a deeply personalized email demo.

## 🔄 How It Works

### Prospect Interaction
A prospect starts the demo via Telegram. The Telegram bot (created with BotFather) connects directly to your n8n instance.

### Demo Guidance
The RAG agent and instructor guide the user step-by-step through the demo. Instructions and responses are dynamically generated based on user input.

### Workflow Execution
When the user triggers an action (e.g., testing the email demo), n8n runs the workflow. The workflow collects website data using Crawl4AI or standard HTTP requests.

### Email Demo
The system personalizes and sends a demo email through SparkPost, showing the automation's capability.

### Logging and Control
Each user interaction is logged in your database using their name and id. The workflow checks limits to prevent misuse or spam.

### Error Handling
If a low-CPU scraping method fails, the workflow automatically escalates to a higher-CPU method.

## ⚙️ Requirements
Before setting up, make sure you have the following:
- **n8n** — Automation platform to run the workflow
- **Docker** — Required to run Crawl4AI
- **Crawl4AI** — For intelligent website crawling
- **Telegram Account** — To create your Telegram bot via BotFather
- **SparkPost Account** — To send personalized demo emails
- **A database (e.g., PostgreSQL, MySQL, or SQLite)** — To store log data such as user name and ID

## 🚀 Features
- **Telegram interface** using the BotFather API
- **Instructor and RAG agent** to guide prospects through the demo
- **Flow generation limits per user ID** to prevent abuse
- **Low-cost yet powerful web scraping**, escalating from low- to high-CPU flows if earlier ones fail

## 💡 Development Ideas
- Replace the RAG logic with your own query-answering and guidance method
- Remove the flow limit if you're confident the demo can't be misused
- Swap the personalized email demo with any other workflow you want to showcase

## 🧠 Technical Notes
- **Telegram bot** created with BotFather
- **Website crawl process** (sketched at the end of this section):
  - Extract sub-links via /sitemap.xml, sitemap_index.xml, or standard HTTP requests
  - Fall back to Crawl4AI if normal requests fail
  - Fetch sub-link content via HTTPS, or Crawl4AI as a backup
- **SparkPost** used for sending demo emails

## ⚙️ Setup Instructions

### 1. Create a Telegram Bot
Use BotFather on Telegram to create your bot and get the API token. This token will be used to connect your n8n workflow to Telegram.

### 2. Create a Log Data Table
In your database, create a table to store user logs. The table must include at least the following columns:
- name — to store the user's name or Telegram username.
- id — to store the user's unique identifier.

### 3. Install Crawl4AI with Docker
Follow the installation guide from the official repository:
👉 https://github.com/unclecode/crawl4ai

**Crawl4AI** will handle website crawling and content extraction in your workflow.

## 📦 Notes
This setup is optimized for low cost, easy scalability, and real-time interaction with prospects. You can customize each component — Telegram bot behavior, RAG logic, scraping strategy, and email workflow — to fit your agency's demo needs.

👉 You can try the live demo here: @email_demo_bot
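As referenced in the Technical Notes, a minimal sketch of the low-CPU crawl path: try the sitemap endpoints first and only escalate to Crawl4AI when nothing usable comes back. The URLs and the fallback signal are illustrative:

```javascript
// Try cheap sitemap fetches before escalating to the Crawl4AI branch.
async function getSubLinks(baseUrl) {
  for (const path of ['/sitemap.xml', '/sitemap_index.xml']) {
    try {
      const res = await fetch(new URL(path, baseUrl));
      if (!res.ok) continue;
      const xml = await res.text();
      const links = [...xml.matchAll(/<loc>(.*?)<\/loc>/g)].map((m) => m[1]);
      if (links.length) return { method: 'sitemap', links };
    } catch {
      // network error; try the next candidate path
    }
  }
  // Nothing usable: signal the workflow to escalate to Crawl4AI
  return { method: 'crawl4ai-fallback', links: [] };
}

getSubLinks('https://example.com').then(console.log);
```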
by Vivekanand M
# Self-learning feedback loop for AI customer support email drafts with Gmail, OpenAI and PostgreSQL

Automatically compare AI-generated email drafts against what your support team actually sent, learn from the differences, and improve future drafts over time — without any model fine-tuning.

## What this workflow does
This is the second workflow in a two-part customer support automation system. The first workflow generates AI draft replies for incoming support emails. This workflow closes the loop — it runs every 3 hours, checks which drafts were reviewed and sent, compares them against the original AI output, and stores the human-edited versions as training examples.

The more this workflow runs, the smarter the first workflow becomes. When generating future drafts, the similarity search surfaces past human-approved responses — so the AI progressively learns what good answers look like for your specific support context.

## How it works

### Step 1 — Watermark and scheduling
Every run starts by fetching the last_processed_sent_at timestamp from the previous completed run. Only Gmail Sent emails newer than this timestamp are fetched, so nothing gets processed twice. On the first-ever run it defaults to 7 days ago.

### Step 2 — Fetch and loop
Sent emails are fetched from Gmail and processed one at a time. For each email, the full message body is retrieved via the Gmail API (the list endpoint only returns a preview snippet). The sent email's thread ID is matched against the ai_drafts table to find the corresponding AI draft.

### Step 3 — Match and skip logic
Three things skip an email without processing: no matching AI draft found (the team sent something manually), the draft was already processed in a previous run, or the fetch returns no results. Only genuine unprocessed matches continue.

### Step 4 — AI comparison
GPT-4o-mini compares the AI draft text against the human-sent text and returns a structured analysis: whether it was approved unchanged, the type of edit made (minor edits vs major rewrite), a plain-English summary of what changed, and whether the edit implies missing or incorrect information in the knowledge base.

### Step 5 — Store the correction
If the human made any edits, the pair (original email + human response) is embedded using OpenAI text-embedding-3-small and saved to the corrections table. This table is what the first workflow searches using vector cosine similarity when assembling future draft prompts.

### Step 6 — KB auto-update
If the AI comparison flags that the human edit contained new information, the most relevant knowledge base entry for that category is fetched and rewritten by GPT-4o-mini to incorporate the new information. The previous answer is preserved in the previous_answer column for auditing.

### Step 7 — Run log
Each run is logged to feedback_run_log with counts of emails checked, corrections saved, KB updates made, and any errors. This log also serves as the watermark source for the next run.

## Setup steps

### Prerequisites
- Gmail account (same support inbox used by the main email workflow)
- OpenAI API key
- PostgreSQL database with the pgvector extension and the full schema from Workflow 1 already applied
- The main email automation workflow (Workflow 1) must be active and generating drafts
### 1. Apply the DB migration
Run the following against your existing database to add the columns this workflow needs:

```sql
ALTER TABLE ai_drafts
  ADD COLUMN IF NOT EXISTS email_embedding vector(1536),
  ADD COLUMN IF NOT EXISTS feedback_processed_at TIMESTAMPTZ,
  ADD COLUMN IF NOT EXISTS was_approved_as_is BOOLEAN DEFAULT FALSE;

ALTER TABLE corrections
  ADD COLUMN IF NOT EXISTS source TEXT DEFAULT 'feedback_loop',
  ADD COLUMN IF NOT EXISTS kb_updated BOOLEAN DEFAULT FALSE;

ALTER TABLE kb_data
  ADD COLUMN IF NOT EXISTS updated_by TEXT DEFAULT 'manual',
  ADD COLUMN IF NOT EXISTS previous_answer TEXT;

CREATE TABLE IF NOT EXISTS feedback_run_log (
  id SERIAL PRIMARY KEY,
  run_started_at TIMESTAMPTZ DEFAULT NOW(),
  run_completed_at TIMESTAMPTZ,
  last_processed_sent_at TIMESTAMPTZ,
  emails_checked INTEGER DEFAULT 0,
  approved_as_is INTEGER DEFAULT 0,
  corrections_saved INTEGER DEFAULT 0,
  kb_updates INTEGER DEFAULT 0,
  errors INTEGER DEFAULT 0,
  status TEXT DEFAULT 'running'
);
```

### 2. Configure credentials

| Node | Credential needed |
|---|---|
| Gmail - Fetch Sent Emails | Gmail OAuth2 |
| Gmail - Fetch Full Message | Gmail OAuth2 (HTTP Request with OAuth) |
| All DB nodes | PostgreSQL |
| OpenAI Chat Model - Compare | OpenAI API |
| AI - Rewrite KB Answer | OpenAI API |
| Generate Embedding - Human Sent | OpenAI API |

### 3. Check node connections
The splitInBatches loop node has two outputs — make sure they are connected correctly:
- **Output 0 (loop)** → DB - Match Thread ID
- **Output 1 (done)** → DB - Complete Run Log

All branch dead-ends (approved as-is, no KB update, KB updated) should feed back into the loop node's input to advance to the next item.

### 4. Activate
Toggle the workflow to active. It will run automatically on the 3-hour schedule. You can also trigger it manually to test.

## How it connects to Workflow 1
Once corrections start accumulating in the corrections table, Workflow 1's similarity search (which queries this table using vector cosine distance) will begin surfacing relevant past human-approved responses when assembling draft prompts. No changes to Workflow 1 are needed — it queries the same table this workflow writes to.

## Tech stack
- **n8n** — workflow automation and scheduling
- **Gmail API** — Sent folder monitoring and full message fetch
- **OpenAI GPT-4o-mini** — draft comparison and KB rewriting
- **OpenAI text-embedding-3-small** — vector embedding for similarity search
- **PostgreSQL + pgvector** — storing corrections and running cosine similarity queries

## Who this is for
- Teams already running an AI email draft workflow who want it to improve over time
- Support operations that want human edits to automatically become training data
- Anyone who wants a self-improving system without model fine-tuning or external ML infrastructure
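For orientation, a hedged sketch of the similarity lookup Workflow 1 performs against the corrections table this workflow fills. The column names (embedding, original_email, human_response) are assumptions based on the description above; match them to your actual Workflow 1 schema:

```javascript
// Illustrative pgvector cosine-distance query using node-postgres.
import pg from 'pg';

const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL });

async function findSimilarCorrections(queryEmbedding, limit = 3) {
  // pgvector's <=> operator is cosine distance: smaller means more similar
  const { rows } = await pool.query(
    `SELECT original_email, human_response,
            embedding <=> $1::vector AS distance
       FROM corrections
      ORDER BY distance
      LIMIT $2`,
    [`[${queryEmbedding.join(',')}]`, limit]
  );
  return rows;
}
```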
by KPendic
This n8n flow demonstrates a basic DevOps task: DNS record management. An AI agent with a light, basic prompt acts as a getter and setter for DNS records. In this case we are managing a remote DNS zone via API calls handled on the Cloudflare platform side.

This flow can be used standalone, or you can chain it into your pipeline to build powerful infrastructure flows for your needs.

## How it works
- We created a basic agent and gave it a prompt that teaches it about one tool: cf_tool, a sub-routine (pointing back to this flow itself, or it can be a separate dedicated one).
- The prompt defines the arguments that must be passed when calling the agent, for each action specifically.
- The tool itself has a basic if/switch that, based on the action called, calls the corresponding Cloudflare API endpoint and passes down the arguments from the tool (see the sketch at the end of this section).

## Requirements
For storing and processing data in this flow you will need:
- A Cloudflare.com API key/token for retrieving your data (https://dash.cloudflare.com/?to=/:account/api-tokens)
- OpenAI credentials (or any other LLM provider) saved, for the agent chat
- (Optional) A Postgres table for saving chat history

## Official Cloudflare API documentation
For full details and specifications please use the API documentation at: https://developers.cloudflare.com/api/

## LinkedIn post
Let me know if you found this flow useful on my LinkedIn post > here.

tags: #cloudflare, #dns, #domain
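As referenced in the How it works section, a minimal sketch of the two Cloudflare API calls the cf_tool switch routes between (list records as the getter, create a record as the setter). The zone ID and record values are placeholders:

```javascript
// Getter/setter sketch against the Cloudflare DNS records API (Node 18+).
const CF_TOKEN = process.env.CLOUDFLARE_API_TOKEN;
const ZONE_ID = '<your-zone-id>'; // placeholder
const BASE = `https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records`;
const headers = { Authorization: `Bearer ${CF_TOKEN}`, 'Content-Type': 'application/json' };

// Getter: list existing DNS records
const list = await (await fetch(BASE, { headers })).json();
console.log(list.result);

// Setter: create an A record
const created = await (
  await fetch(BASE, {
    method: 'POST',
    headers,
    body: JSON.stringify({
      type: 'A',
      name: 'demo.example.com',
      content: '203.0.113.10',
      ttl: 3600,
      proxied: false,
    }),
  })
).json();
console.log(created.success, created.result?.id);
```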
by Muhammadumar
This is the core AI agent used for isra36.com. Don't trust complex AI-generated SQL queries without double-checking them in a safe environment. That's where isra36 comes in. It automatically creates a test environment with the necessary data, generates code for your task, runs it to double-check for correctness, and handles errors if necessary. If you enable auto-fixing, isra36 will detect and fix issues on its own. If not, it will ask for your permission before making changes during debugging. In the end, you get thoroughly verified code along with full details about the environment it ran in.

## Setup
It is an embedded chat for the website, but you can pin input data and run it on your own n8n instance.

### Input data
An example first-request payload is sketched at the end of this section.

- **sessionId**: uuid_v4. Required to handle ongoing conversations and to create table names (used as a prefix).
- **threadId**: string | nullable. If aiProvider is openai, conversation history is managed on OpenAI's side. This is not needed in the first request—it will start a new conversation. For ongoing conversations, you must provide this value. You can get it from the OpenAIMainBrain node output after the first run. If you want to start a new conversation, just leave it as null.
- **apiKey**: string. Your API key for the selected aiProvider.
- **aiProvider**: string. Currently supported values: openai, openrouter.
- **model**: string. The AI model key (e.g., gpt-4.1, o3-mini, or any supported model key from OpenRouter).
- **autoErrorFixing**: boolean. If true, it will automatically fix errors encountered when running code in the environment. If false, it will ask for your permission before attempting a fix.
- **chatInput**: string. The user's prompt or message.
- **currentDbSchemaWithData**: string. A JSON representation of the database schema with sample data. Used to inform the AI about the current database structure during an ongoing conversation. Please use the '[]' value in the first request. Example string for a filled DB structure: '{"users":[{"id":1,"name":"John Doe","email":"john.d@example.com"},{"id":2,"name":"Jane Smith","email":"jane.s@example.com"}],"products":[{"product_id":101,"product_name":"Laptop","price":999.99}]}'

Make sure to fill in your credentials:
- Your OpenAI or OpenRouter API key
- Access to a local PostgreSQL database for test execution

You can view your generated tables using your preferred PostgreSQL GUI. We recommend DBeaver. Alternatively, you can activate the "Deactivated DB Visualization" nodes below. To use them, connect each to the most recent successful Set node and manually adjust the output. However, the easiest and most efficient method is to use a GUI.

## Workflow Explanation
- We store all input values in the localVariables node. Please use this node to get the necessary data.
- OpenAI has a built-in assistant that manages chat history on their side. For OpenRouter, we handle chat history locally. That's why we use separate nodes like ifOpenAi and isOpenAi. Note that if logic can also be used inside nodes.
- The AutoErrorFixing loop will run only a limited number of times, as defined by the isMaxAutoErrorReached node. This prevents infinite loops.
- The Execute_AI_result node connects to the PostgreSQL test database used to execute queries.

## Guidance on customization
This setup is built for PostgreSQL, but it can be adapted to any programming language, and the logic can be extended to any programming framework. To customize the logic for other programming languages:
1. Change the instruction parameter in the localVariables node.
2. Replace the Execute_AI_result PostgreSQL node with another executable node.
   For example, you can use the HTTP Request node.
3. Update the GenerateErrorPrompt node's prompt parameter to generate code specific to your target language or framework.

Any workflows built on top of this must credit the original author and be released under an open-source license.
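As referenced in the Input data section, a hypothetical first-request payload built from the documented parameters; values other than the parameter names are placeholders:

```javascript
// Example pinned input for the first turn of a conversation.
const firstRequest = {
  sessionId: 'a3f1c2e4-5b6d-4e7f-8a9b-0c1d2e3f4a5b', // any uuid_v4
  threadId: null,                                    // null starts a new conversation
  apiKey: 'sk-...',                                  // your OpenAI or OpenRouter key
  aiProvider: 'openai',
  model: 'gpt-4.1',
  autoErrorFixing: true,
  chatInput: 'Write a query that lists users who ordered in the last 30 days.',
  currentDbSchemaWithData: '[]',                     // '[]' on the first request
};
console.log(JSON.stringify(firstRequest, null, 2));
```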
by Ronnie Craig
# AI Email Assistant - Smart Email Processing & Response 🤖

A sophisticated n8n workflow that transforms your email management with AI-powered classification, automatic responses, and intelligent organization.

## 🎯 What This Workflow Does
This advanced AI email assistant automatically:
- **Analyzes** incoming emails using intelligent classification
- **Categorizes** messages by priority, urgency, and type
- **Generates** context-aware draft responses in your voice
- **Organizes** emails with smart labeling and filing
- **Alerts** you to urgent messages instantly
- **Manages** attachments with cloud storage integration

Perfect for busy professionals, customer service teams, and anyone drowning in email!

## ✨ Key Features

### 🧠 Intelligent Email Analysis
- **Context-Aware Processing**: Understands email threads and conversation history
- **Smart Classification**: Automatically categorizes by priority, urgency, and required actions
- **Multi-Criteria Assessment**: Evaluates response needs, follow-up requirements, team involvement
- **Dynamic Label Management**: Syncs with your Gmail labels for consistent organization

### 📝 AI-Powered Response Generation
- **Professional Draft Creation**: Generates contextually appropriate responses
- **Tone Matching**: Mirrors the formality and style of incoming emails
- **Multiple Response Options**: Provides alternatives for complex inquiries
- **Customizable Voice**: Adapts to your business communication style

### 🔔 Smart Notification System
- **Urgent Email Alerts**: Instant notifications for high-priority messages
- **Telegram/Slack Integration**: Get alerts where you work
- **Smart Filtering**: Only notifies when truly urgent
- **Quick Action Links**: Direct links to Gmail for immediate response

### 📎 Advanced Attachment Management
- **Automatic Cloud Upload**: Saves attachments to Google Drive
- **Smart File Naming**: Organized by date, sender, and content
- **Duplicate Detection**: Prevents redundant uploads
- **File Type Filtering**: Optional filtering for security

### 🏷️ Intelligent Organization
- **Auto-Labeling**: Applies relevant Gmail labels automatically
- **Progress Tracking**: Marks emails as "processed" or "digested"
- **Priority Indicators**: Visual priority levels in your inbox
- **Category-Based Sorting**: Groups similar emails together

## 🛠️ Setup Instructions

### Prerequisites
- n8n instance (cloud or self-hosted)
- Gmail account with API access
- OpenAI API key (or compatible AI service)
- Google Drive account (for attachments)
- Telegram bot (optional, for alerts)

### Step 1: Import the Workflow
1. Download AI_Email_Assistant_Community_Template.json
2. In n8n, navigate to Templates → Import from File
3. Select the downloaded JSON file
4. The workflow will import as inactive

### Step 2: Configure Credentials

**Gmail Setup:**
1. Create Gmail OAuth2 credentials in n8n
2. Configure the following nodes:
   - Email_Trigger
   - Get Conversation Thread
   - Get Latest Message Content
   - Create Draft Response
   - Assign Classification Label
   - Mark as Processed
   - Get All Gmail Labels
3. Test connections to ensure proper authentication

**AI Model Setup:**
1. Configure the AI Language Model node
2. Options include:
   - OpenAI (GPT-4, GPT-3.5-turbo)
   - Anthropic Claude (recommended)
   - Local LLMs via Ollama
3. Add your API credentials
4. Test the connection

**Google Drive Setup (Optional):**
1. Create Google Drive OAuth2 credentials
2. Configure nodes: Upload to Google Drive, Check Existing Attachments
3. Replace YOUR_GOOGLE_DRIVE_FOLDER_ID with your folder ID
4. Create a dedicated folder for email attachments

**Telegram Alerts (Optional):**
1. Create a Telegram bot via @BotFather
2. Get your chat ID
3. Configure the Send Urgent Alert node
4. Replace YOUR_TELEGRAM_CHAT_ID with your actual chat ID

### Step 3: Customize AI Instructions
**Email Classification (AI Email Classifier node):**
- Review the classification criteria in the system message
- Adjust urgency keywords for your business
- Modify priority levels based on your needs
- Customize category definitions

**Response Generation (AI Response Generator node):**
- Update the response guidelines
- Replace [YOUR NAME] with your actual name
- Adjust tone and style preferences
- Add company-specific response templates

### Step 4: Configure Gmail Labels

**Create Custom Labels in Gmail:**
- High Priority
- Medium Priority
- Low Priority
- Needs Response
- Urgent
- Follow Up Required
- Processed (or use existing labels)

**Update Label IDs:**
- Run the workflow once to get label IDs
- Replace YOUR_PROCESSED_LABEL_ID in the "Mark as Processed" node
- Update any hardcoded label references

### Step 5: Test and Deploy

**Testing Process:**
1. Send yourself a test email
2. Monitor the workflow execution
3. Verify classification accuracy
4. Check draft response quality
5. Confirm labeling works correctly
6. Test urgent alert functionality

**Fine-Tuning:**
- Adjust AI prompts based on test results
- Refine classification criteria
- Update response templates
- Modify notification preferences

**Go Live:**
- Activate the workflow
- Monitor initial performance
- Adjust settings as needed

## 📊 Email Classification System

### Priority Levels
- **High**: Urgent matters requiring immediate attention
- **Medium**: Important but not time-critical
- **Low**: Routine or informational messages

### Classification Categories
- **toReply**: Direct questions or requests requiring a response
- **urgent**: Immediate business impact or crisis situations
- **dateRelated**: Time-sensitive events or deadlines
- **attachmentsToUpload**: Financial docs or important files
- **requiresFollowUp**: Multi-step processes or ongoing projects
- **forwardToTeam**: Cross-departmental or collaborative items

### Response Generation Guidelines
- **Professional Tone**: Business casual, warm but professional
- **Context Awareness**: Considers email thread history
- **Structured Responses**: Clear paragraphs with actionable next steps
- **Placeholder System**: Uses [PLACEHOLDER] for missing information
- **Alternative Options**: Provides multiple response choices for complex inquiries

## 🔧 Advanced Customization

### File Type Filtering

```javascript
// In the Get Specific File Types node, modify the condition:
if (
  mimeType === 'application/pdf' ||
  mimeType === 'text/xml' ||
  mimeType === 'image/jpeg'
) {
  // Process file
}
```

### Custom Urgency Keywords
Update the AI classifier prompt with your business-specific urgent terms:
- Keywords: "URGENT", "EMERGENCY", "CRITICAL", "ASAP", "IMMEDIATE"
- Custom terms: "CLIENT ESCALATION", "SYSTEM DOWN", "LEGAL DEADLINE"

### Response Templates
Customize the response generator with your company voice:
- Greeting style: "Hi [Name]" vs "Dear [Name]"
- Closing: "Best Regards" vs "Thank you" vs "Cheers"
- Company-specific phrases and terminology

### Integration Options
- **CRM Systems**: Add nodes to create tasks in your CRM
- **Project Management**: Auto-create tickets in Jira, Asana, etc.
- **Calendar Integration**: Schedule follow-ups automatically
- **Slack/Teams**: Alternative notification channels

## 🚨 Troubleshooting

### Common Issues

**1. Gmail Authentication Errors**
- Verify OAuth2 credentials are active
- Check Gmail API quotas
- Ensure proper scopes are configured

**2. AI Classification Inconsistency**
- Review and refine classification prompts
- Add more specific examples
- Adjust confidence thresholds

**3. Response Generation Problems**
- Validate AI model configuration
- Check API key and quotas
- Test with simpler email examples
**4. Attachment Upload Failures**
- Verify Google Drive permissions
- Check folder ID configuration
- Ensure sufficient storage space

**5. Missing Notifications**
- Test Telegram bot configuration
- Verify chat ID is correct
- Check urgency classification logic

### Performance Optimization
- **Rate Limiting**: Gmail has API quotas - monitor usage
- **Batch Processing**: Workflow processes one email at a time
- **Error Handling**: Built-in retry logic for reliability
- **Resource Management**: Monitor AI API costs and usage

## 📈 Best Practices

### 1. Email Management
- **Regular Monitoring**: Review classifications weekly
- **Label Hygiene**: Keep Gmail labels organized
- **Feedback Loop**: Manually correct misclassifications
- **Archive Strategy**: Set up auto-archiving for processed emails

### 2. AI Optimization
- **Prompt Engineering**: Continuously refine AI instructions
- **Example Training**: Add specific examples for your business
- **Context Limits**: Monitor token usage and costs
- **Model Selection**: Choose an appropriate AI model for your needs

### 3. Security Considerations
- **Credential Management**: Regularly rotate API keys
- **Data Privacy**: Review what data is sent to AI services
- **Access Control**: Limit workflow access to authorized users
- **Audit Logging**: Monitor workflow executions

### 4. Workflow Maintenance
- **Regular Updates**: Keep n8n and node versions current
- **Backup Strategy**: Export workflow configurations regularly
- **Documentation**: Keep setup notes and customizations documented
- **Testing**: Test major changes in a development environment first

## 🤝 Contributing to the Community
This workflow template demonstrates:
- **Comprehensive AI Integration**: Multiple AI touchpoints working together
- **Production-Ready Architecture**: Error handling, retry logic, and monitoring
- **Extensive Documentation**: Clear setup and customization guidance
- **Flexible Configuration**: Adaptable to different business needs
- **Best Practice Examples**: Security, performance, and maintenance considerations

## 📄 License & Support
This workflow is provided free to the n8n community under the MIT License.

**Community Resources:**
- n8n Community Forum for questions
- GitHub Issues for bug reports
- Documentation updates welcome

**Professional Support:**
For enterprise deployments or custom modifications, consider:
- n8n Cloud for managed hosting
- Professional services for complex integrations
- Custom AI model training for specific use cases

Transform your email workflow today! 🚀 This AI Email Assistant reduces email processing time by up to 90% while ensuring no important message goes unnoticed. Perfect for busy professionals who want to stay responsive without being overwhelmed by their inbox.