by Anderson Adelino
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## Build an intelligent AI chatbot with RAG and Cohere Reranker

### Who is it for?

This template is perfect for developers, businesses, and automation enthusiasts who want to create intelligent chatbots that answer questions based on their own documents. Whether you're building customer support systems, internal knowledge bases, or educational assistants, this workflow provides a solid foundation for document-based AI conversations.

### How it works

This workflow creates an intelligent AI assistant that combines RAG (Retrieval-Augmented Generation) with Cohere's reranking technology for more accurate responses:

- **Chat Interface**: Users interact with the AI through a chat interface
- **Document Processing**: PDFs from Google Drive are automatically extracted and converted into searchable vectors
- **Smart Search**: When users ask questions, the system searches through the vectorized documents using semantic search
- **Reranking**: Cohere's reranker ensures the most relevant information is prioritized
- **AI Response**: OpenAI generates contextual answers based on the retrieved information
- **Memory**: Conversation history is maintained for context-aware interactions

### Setup steps

**Prerequisites**

- n8n instance (self-hosted or cloud)
- OpenAI API key
- Supabase account with the vector extension enabled
- Google Drive access
- Cohere API key

**1. Configure Supabase Vector Store**

First, create a table in Supabase with vector support:

```sql
CREATE TABLE cafeina (
  id SERIAL PRIMARY KEY,
  content TEXT,
  metadata JSONB,
  embedding VECTOR(1536)
);

-- Create a function for similarity search
CREATE OR REPLACE FUNCTION match_cafeina(
  query_embedding VECTOR(1536),
  match_count INT DEFAULT 10
)
RETURNS TABLE(
  id INT,
  content TEXT,
  metadata JSONB,
  similarity FLOAT
)
LANGUAGE plpgsql
AS $$
BEGIN
  RETURN QUERY
  SELECT
    cafeina.id,
    cafeina.content,
    cafeina.metadata,
    1 - (cafeina.embedding <=> query_embedding) AS similarity
  FROM cafeina
  ORDER BY cafeina.embedding <=> query_embedding
  LIMIT match_count;
END;
$$;
```

**2. Set up credentials**

Add the following credentials in n8n:

- **OpenAI**: Add your OpenAI API key
- **Supabase**: Add your Supabase URL and service role key
- **Google Drive**: Connect your Google account
- **Cohere**: Add your Cohere API key

**3. Configure the workflow**

- In the "Download file" node, replace `URL DO ARQUIVO` with your Google Drive file URL
- Adjust the table name in both Supabase Vector Store nodes if needed
- Customize the agent's tool description in the "searchCafeina" node

**4. Load your documents**

- Execute the bottom workflow (starting with "When clicking 'Execute workflow'")
- This will download your PDF, extract text, and store it in Supabase
- You can repeat this process for multiple documents

**5. Start chatting**

Once documents are loaded, activate the main workflow and start chatting with your AI assistant through the chat interface.
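To verify the vector store outside n8n, the `match_cafeina` function can be called directly from any TypeScript client. Here is a minimal sketch using the official supabase-js library, assuming you already have a 1536-dimension embedding for the query:

```typescript
// Minimal sketch: calling the match_cafeina similarity function from
// outside n8n with the official supabase-js client. Assumes the
// SUPABASE_URL / SUPABASE_SERVICE_ROLE_KEY env vars and an embedding
// produced by the same OpenAI model used for indexing.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

async function searchCafeina(queryEmbedding: number[], matchCount = 10) {
  // rpc() invokes the Postgres function created in step 1
  const { data, error } = await supabase.rpc("match_cafeina", {
    query_embedding: queryEmbedding,
    match_count: matchCount,
  });
  if (error) throw error;
  // Each row: { id, content, metadata, similarity }
  return data;
}
```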
### How to customize

- **Different document types**: Replace the Google Drive node with other sources (Dropbox, S3, local files)
- **Multiple knowledge bases**: Create separate vector stores for different topics
- **Custom prompts**: Modify the agent's system message for specific use cases
- **Language models**: Switch between different OpenAI models or use other LLM providers
- **Reranking settings**: Adjust the top-k parameter for more or fewer search results
- **Memory window**: Configure the conversation memory buffer size

### Tips for best results

- Use high-quality, well-structured documents for better search accuracy
- Keep document chunks reasonably sized for optimal retrieval (see the chunking sketch below)
- Regularly update your vector store with new information
- Monitor token usage to optimize costs
- Test different reranking thresholds for your use case

### Common use cases

- **Customer Support**: Create bots that answer questions from product documentation
- **HR Assistant**: Build assistants that help employees find information in company policies
- **Educational Tutor**: Develop tutors that answer questions from course materials
- **Research Assistant**: Create tools that help researchers find relevant information in papers
- **Legal Helper**: Build assistants that search through legal documents and contracts
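On chunk sizing: a sketch of the splitting strategy behind n8n's Recursive Character Text Splitter node, using LangChain's TypeScript splitter. The 1000/200 values are illustrative assumptions, not settings from this template:

```typescript
// Illustrative chunking sketch using LangChain's recursive character
// splitter (the same strategy as n8n's Recursive Character Text
// Splitter node). chunkSize/chunkOverlap are example values.
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

async function chunkDocument(documentText: string): Promise<string[]> {
  const splitter = new RecursiveCharacterTextSplitter({
    chunkSize: 1000,   // characters per chunk
    chunkOverlap: 200, // overlap preserves context across chunk borders
  });
  // Each returned chunk is embedded and stored as one row
  // in the `cafeina` table.
  return splitter.splitText(documentText);
}
```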
by Nick Saraev
## AI Facebook Ad Spy Tool with Apify, OpenAI, Gemini & Google Sheets

**Categories**: Competitive Intelligence, Marketing Automation, AI Analysis

This workflow creates a comprehensive Facebook ad spy tool that scrapes competitor ads from Facebook's Ad Library and generates detailed analysis with rewritten versions. The system processes text, image, and video ads using different AI models, providing strategic intelligence for PPC agencies and marketers.

Built to be sold as a premium service for $2,000+, this tool combines web scraping, multi-modal AI analysis, and competitor intelligence into one powerful automation.

### Benefits

- **Complete Competitive Intelligence** - Analyze competitor strategies across all ad formats (text, image, video)
- **Multi-Modal AI Analysis** - Uses GPT-4 Vision for images and Gemini for video content understanding
- **Automated Ad Rewriting** - Generates inspired variations of successful competitor ads
- **Quality Filtering** - Targets high-performing advertisers with significant page likes
- **Scalable Processing** - Handles hundreds of competitor ads with detailed strategic analysis
- **Premium Service Potential** - Easily sold to agencies and marketers for $2,000+ implementations

### How It Works

**Facebook Ad Library Scraping:**
- Connects to Facebook's public Ad Library through Apify's specialized scraper
- Searches for active ads using customizable keywords and targeting parameters
- Extracts comprehensive ad data including creative assets, targeting info, and engagement metrics
- Filters results to focus on high-quality advertisers with substantial page followings

**Intelligent Content Routing** (a sketch of this logic appears after this section):
- Automatically categorizes ads into text-only, image-based, or video content types
- Routes each ad type to a specialized processing pipeline optimized for that content format
- Ensures appropriate AI models are used for each type of creative analysis
- Maintains data integrity while processing different content formats simultaneously

**Advanced Video Analysis Pipeline:**
- Downloads video ads directly from Facebook's content delivery network
- Uploads videos to Google Drive for temporary storage and processing
- Initiates Gemini AI video upload sessions for multi-modal analysis
- Uses Gemini's advanced video understanding to generate detailed content descriptions
- Processes video narrative, visual elements, messaging strategy, and target audience insights

**Image and Text Processing:**
- Analyzes image ads using GPT-4 Vision for comprehensive visual content understanding
- Processes text-only ads using GPT-4 for messaging strategy and copywriting analysis
- Identifies key persuasion techniques, target demographics, and messaging frameworks
- Generates detailed competitive intelligence reports for each ad format

**Strategic Intelligence Generation:**
- Creates comprehensive summaries analyzing competitor messaging strategies and target audiences
- Generates rewritten ad copy that captures successful elements while avoiding direct copying
- Produces recreation prompts for images and videos that can be used with AI generation tools
- Organizes all insights in a structured Google Sheets database for easy analysis and reporting

### Required Setup Configuration

**Apify Integration:**
- Sign up for an Apify account and obtain an API key
- Replace `<your-apify-api-key-here>` in the "Run Ad Library Scraper" node
- Customize the Facebook Ad Library search URLs with your target keywords and regions

**AI Service Configuration:**
- **OpenAI API**: Set up for text analysis and image understanding with GPT-4 Vision
- **Gemini API**: Configure for advanced video content analysis and description; replace `<your-gemini-api-key-here>` in all Gemini-related nodes

**Google Services Setup:**
- **Google Drive**: Configure OAuth for temporary video storage during Gemini processing
- **Google Sheets**: Create a results database with the proper column structure for ad intelligence storage

**Facebook Ad Library Search Configuration:**
- Customize the search parameters in the Apify scraper

**Google Sheets Database Structure** - create a sheet with these columns:
- `ad_archive_id` - Unique Facebook ad identifier
- `page_id` - Advertiser's Facebook page ID
- `page_name` - Advertiser's business name
- `page_url` - Link to the advertiser's Facebook page
- `type` - Ad format (text, image, or video)
- `date_added` - When the ad was analyzed
- `summary` - Detailed competitive intelligence analysis
- `rewritten_ad_copy` - AI-generated inspired version
- `image_prompt` - Description for recreating image ads
- `video_prompt` - Description for recreating video ads

### Business Use Cases

- **PPC Agencies** - Offer comprehensive competitor analysis services to clients for strategic advantage
- **Marketing Teams** - Research competitor strategies and messaging before launching new campaigns
- **E-commerce Businesses** - Analyze successful ads in your industry for creative inspiration
- **SaaS Companies** - Study how competitors position their products and target audiences
- **Course Creators** - Research educational content marketing approaches and messaging strategies
- **Affiliate Marketers** - Identify successful promotional strategies and high-converting ad formats

**Difficulty Level**: Advanced
**Estimated Build Time**: 3-4 hours
**Monthly Operating Cost**: ~$200 (Apify + OpenAI + Gemini + Google Workspace APIs)

### Watch My Complete Build Process

Want to see exactly how I built this entire Facebook ad spy system from scratch? I walk through the complete development process live, including API integrations, multi-modal AI setup, error handling, and the exact business strategy for selling this as a premium service.

🎥 Watch My Live Build: "Build A Facebook Ads Spy Tool With N8N (Sell for $2k+)"

This comprehensive tutorial shows the real development process - including complex API orchestration, multi-modal AI integration, and proven strategies for monetizing competitive intelligence systems.
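The routing and quality-filtering steps are simple to reason about in code. Below is a hedged sketch of the logic behind the Switch and "Filter For Likes" nodes; the field names (`videos`, `images`, `page_like_count`) are assumptions about the Apify scraper's output shape, not its documented schema:

```typescript
// Hedged sketch of the content routing (Switch node) and quality
// filtering ("Filter For Likes" node). Field names are assumptions
// about the Apify output, not a documented schema.
type ScrapedAd = {
  ad_archive_id: string;
  body?: string;
  images?: string[];
  videos?: string[];
  page_like_count?: number;
};

function classifyAd(ad: ScrapedAd): "video" | "image" | "text" {
  if (ad.videos && ad.videos.length > 0) return "video"; // Gemini branch
  if (ad.images && ad.images.length > 0) return "image"; // GPT-4 Vision branch
  return "text"; // GPT-4 text-only branch
}

// Quality filter: keep only advertisers with substantial followings.
const MIN_PAGE_LIKES = 1000; // threshold suggested in the setup steps
const keepAd = (ad: ScrapedAd) =>
  (ad.page_like_count ?? 0) >= MIN_PAGE_LIKES;
```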
### Set Up Steps

**Apify Scraper Configuration:**
- Set up an Apify account and configure the Facebook Ad Library scraper
- Customize search parameters for your target industries and regions
- Configure result limits and filtering parameters for quality control
- Test the scraper with sample searches to verify data quality

**Multi-Modal AI Setup:**
- Configure OpenAI API credentials for text and image analysis
- Set up Gemini API access for advanced video content understanding
- Configure appropriate rate limits and error handling for API stability
- Test AI analysis with sample ads to optimize prompt quality

**Google Services Integration:**
- Set up Google Drive OAuth for temporary video storage during processing
- Create the Google Sheets database with the proper column structure for intelligence storage
- Configure sharing permissions and access controls for team collaboration
- Test the complete data flow from scraping to final intelligence reports

**Quality Control and Filtering:**
- Configure the page likes threshold in the "Filter For Likes" node (1,000+ recommended for quality)
- Adjust the content routing logic in the Switch node based on your analysis needs
- Set up error handling and retry logic for reliable large-scale processing
- Test the complete workflow with various ad types to ensure proper routing

**Advanced Customization:**
- Customize AI prompts for your specific industry analysis needs
- Configure additional filtering criteria beyond page likes
- Set up automated scheduling for regular competitor monitoring
- Add custom fields to the database for tracking specific competitive metrics

### Advanced Features

Scale the system with additional capabilities:

- **Industry-Specific Analysis** - Customize prompts and filters for different verticals
- **Trend Tracking** - Monitor messaging changes over time for strategic insights
- **Performance Correlation** - Cross-reference ad engagement with business outcomes
- **Alert Systems** - Notify when competitors launch new campaign types
- **Custom Reporting** - Generate client-ready intelligence reports automatically
- **Integration Extensions** - Connect to CRM and marketing platforms for a strategic workflow

### Important Considerations

- **API Rate Limits** - Built-in delays and error handling prevent service interruptions
- **Content Rights** - The system generates inspired variations, not direct copies, for legal compliance
- **Data Storage** - Organize the intelligence database for easy client reporting and analysis
- **Scalability** - Batch processing handles hundreds of ads efficiently without blocking
- **Quality Assurance** - Filtering logic ensures analysis focuses on successful, high-quality advertisers

### Why This System Works

The competitive advantage lies in comprehensive multi-modal analysis:

- **Complete format coverage** - analyzes text, image, and video ads with appropriate AI models
- **Strategic depth** - goes beyond basic scraping to provide actionable intelligence
- **Automation scale** - processes competitor research that would take weeks manually
- **Premium positioning** - advanced AI analysis justifies higher service pricing
- **Immediate value** - clients receive actionable insights within hours of setup

### Check Out My Channel

For more advanced automation systems that generate real business results and premium service opportunities, explore my YouTube channel where I share proven strategies for building profitable automation businesses.
by Mohan Gopal
This workflow automates the process of reading EDI files generated by Sabre, parsing them with an AI Agent, and producing structured accounting reports such as:

- 📌 Accounts Receivable (AR) Summary
- 📌 Tax and Surcharges Report

It also uses Retrieval-Augmented Generation (RAG) to vectorize the Sabre Interface User Record (IUR), a 154-page technical document, so that the AI agent can reference it when clarification is needed while generating reports.

### ⚙️ Tools & Integrations Used

| Component | Tool/Service | Purpose |
|---|---|---|
| Workflow Engine | n8n | Automation & orchestration |
| LLM Model | OpenAI GPT-4 / Chat Model | Natural language understanding and parsing |
| Embeddings Model | OpenAI Embeddings | Convert text into semantic vector format |
| Vector Database | Pinecone | Store and retrieve document chunks semantically |
| Storage | Google Drive | Source of raw EDI text files and PDF documentation |
| DataLoader + Splitter | n8n Node + Recursive Splitter | Loads and prepares documents for embedding |
| AI Agents | n8n AI Agent Node | Runs context-aware prompts and parses reports |

### 🧱 Workflow Breakdown

**🧠 1. Vectorizing the Sabre IUR Document (RAG Setup)**

📘 Objective: Enable the AI Agent to refer to the IUR document (154 pages) for detailed explanations of EDI terms, formats, and rules.

Flow steps:

1. **Google Drive Search + Download** - Find and pull the IUR PDF file.
2. **Default Data Loader** - Load the file and preprocess it for semantic splitting.
3. **Recursive Character Splitter** - Break down large pages into meaningful chunks.
4. **OpenAI Embeddings** - Vectorize each chunk.
5. **Pinecone Vector Store** - Save into a Pinecone namespace for future retrieval.

✅ Result: The IUR is now searchable via semantic queries from the AI Agent (see the retrieval sketch below).

**📁 2. Reading and Extracting Data from EDI Files**

📘 Objective: Parse raw EDI files for financial records and summaries.

Flow steps:

1. **Trigger** - Manual or scheduled execution of the workflow.
2. **Google Drive Search** - Finds all new `.edi` or `.txt` files.
3. **Download File Contents** - Loads the content of each file into memory.
4. **Extract from File** - Raw text extraction.

**📊 3. Report Generation Using AI Agents**

📘 Objective: AI Agents parse the extracted data to generate structured accounting reports.

a. **Accounts Receivable Report Agent**
- The extracted text is passed to an AI Agent.
- The model is connected to the OpenAI Chat Model (LLM) and the Pinecone Vector DB (IUR reference).
- Outputs a structured AR Summary Report.

b. **Tax and Surcharges Report Agent**
- Same steps as above, with prompts adjusted to extract taxes, fees, surcharges, and amounts.

✅ Output format: can be mapped to columns and inserted into a Google Sheet, or exported as CSV/JSON.

### 📑 Sample Reports You Can Build

Already implemented:
- ✅ Accounts Receivable (AR) Summary Report
- ✅ Tax and Surcharges Report

Can be extended to:
- Accounts Payable (AP)
- Passenger Revenue
- Daily Sales
- Commission Report
- Net Profit Margin (if supplier cost + commission is available)

### 💡 Key Advantages

- ✅ No-code automation with n8n
- ✅ Semantic reasoning using AI + Vector DB (RAG)
- ✅ Works with various Sabre outputs without manual parsing
- ✅ Modular: easy to add new report types
- ✅ Cloud-integrated (Drive, Pinecone, OpenAI)

### 🧪 Potential Improvements

| Area | Suggestions |
|---|---|
| Testing | Add a "Preview" step to validate extracted data before writing |
| Scalability | Batch mode + Google Sheet batching for multiple reports |
| Audit Trail | Log every file name, timestamp, and report type in a Google Sheet |
| Notification | Send a Slack/Email alert when a new report is generated |
| Multi-model support | Add a Claude/Gemini fallback if the OpenAI usage limit is hit |
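To make the retrieval step concrete, here is a hedged sketch of how the vectorized IUR might be queried with the official Pinecone TypeScript client. The index name, namespace, and embedding model are illustrative placeholders, not values from this template:

```typescript
// Hedged sketch: querying the vectorized IUR document from Pinecone.
// Index name ("sabre-iur"), namespace, and embedding model are
// illustrative placeholders.
import { Pinecone } from "@pinecone-database/pinecone";
import OpenAI from "openai";

const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
const openai = new OpenAI();

async function lookupIur(question: string) {
  // Embed the agent's clarification question with the same model
  // that was used when the IUR chunks were stored.
  const emb = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: question,
  });

  const index = pc.index("sabre-iur");
  const res = await index.namespace("iur").query({
    vector: emb.data[0].embedding,
    topK: 5,
    includeMetadata: true, // chunk text is typically kept in metadata
  });
  return res.matches;
}
```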
by Yaron Been
Automated solution to extract and organize contact information from Upwork job postings, enabling direct outreach to potential clients who post jobs matching your expertise.

### 🚀 What It Does

- Scrapes job postings for contact information
- Extracts email addresses and social profiles
- Organizes leads in a structured format
- Enables direct outreach campaigns
- Tracks response rates

### 🎯 Perfect For

- Freelancers looking to expand their client base
- Agencies targeting specific industries
- Sales professionals in the gig economy
- Recruiters sourcing clients
- Digital marketing agencies

### ⚙️ Key Benefits

- ✅ Access to hidden contact information
- ✅ Expand your client base
- ✅ Beat the competition to opportunities
- ✅ Targeted outreach campaigns
- ✅ Higher response rates

### 🔧 What You Need

- Upwork account
- n8n instance
- Email service (for outreach)
- CRM (optional)

### 📊 Features

- Email pattern detection (see the sketch below)
- Social media profile extraction
- Company website discovery
- Lead scoring system
- Outreach tracking

### 🛠️ Setup & Support

**Quick Setup**: Start collecting leads in 20 minutes with our step-by-step guide.

📺 Watch Tutorial | 💼 Get Expert Support | 📧 Direct Help

Take control of your freelance career with direct access to potential clients. Transform how you find and secure projects on Upwork.
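The email-pattern-detection step boils down to regular-expression extraction over scraped text. A minimal sketch of what that might look like in a Code node; the regex is a pragmatic approximation rather than a full RFC 5322 parser:

```typescript
// Illustrative sketch of email-pattern detection over posting text.
// The regex is a pragmatic approximation, not a full RFC 5322 parser.
const EMAIL_RE = /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g;

function extractEmails(postingText: string): string[] {
  const matches = postingText.match(EMAIL_RE) ?? [];
  // Normalize case and deduplicate before handing off to the CRM
  return [...new Set(matches.map((m) => m.toLowerCase()))];
}

// Example:
// extractEmails("Contact jane.doe@acme.io or hiring@acme.io")
//   -> ["jane.doe@acme.io", "hiring@acme.io"]
```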
by Jimleuk
This n8n workflow demonstrates a simple multi-agent setup that performs the task of competitor research. It showcases how using the HTTP Request tool can reduce the number of nodes needed to achieve a workflow like this.

### How it works

For this template, a source company is defined by the user and sent to Exa.ai to find competitors. Each competitor is then funnelled through 3 AI agents that go out onto the internet and retrieve specific datapoints about the competitor: company overview, product offering, and customer reviews. Once the agents are finished, the results are compiled into a report which is inserted into a Notion database.

Check out an example output here: https://jimleuk.notion.site/2d1c3c726e8e42f3aecec6338fd24333?v=de020fa196f34cdeb676daaeae44e110&pvs=4

### Requirements

- An OpenAI account for the LLM.
- An Exa.ai account for access to their AI search engine.
- A SerpAPI account for Google search.
- A Firecrawl.dev account for web scraping.
- A Notion.com account for the database to save the final reports.

### Customising the workflow

- Add additional agents to gather more datapoints such as SEO keywords and metrics.
- Not using Notion? Feel free to swap this out for your own database.
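As a rough sketch of the kind of HTTP request the workflow sends to Exa.ai, here is the equivalent of the HTTP Request tool as plain TypeScript. The endpoint and payload follow Exa's public findSimilar API as I understand it; verify the shape against Exa's current documentation before relying on it:

```typescript
// Hedged sketch of the Exa.ai competitor lookup as a plain HTTP
// request (the n8n HTTP Request tool equivalent). Endpoint and
// payload reflect Exa's public findSimilar API; verify against
// current docs.
async function findCompetitors(companyUrl: string) {
  const res = await fetch("https://api.exa.ai/findSimilar", {
    method: "POST",
    headers: {
      "content-type": "application/json",
      "x-api-key": process.env.EXA_API_KEY!, // from your Exa.ai account
    },
    body: JSON.stringify({ url: companyUrl, numResults: 10 }),
  });
  if (!res.ok) throw new Error(`Exa request failed: ${res.status}`);
  const { results } = await res.json();
  // Each result carries the URL/title of a company similar to the
  // source; these feed the 3 downstream research agents.
  return results;
}
```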
by Behram
Automated n8n workflow: receives videos via a form, dubs/translates them into the selected languages, and, upon completion, uploads them to multiple social media channels and cloud drives, including Box, Dropbox, YouTube, Telegram, and Postiz (Facebook, Instagram, TikTok, Reddit, etc.)

### Workflows

1. Via the n8n form, select files to dub into the desired languages.
2. Listen on a webhook and, whenever dubbing finishes, upload to the desired platforms.

### Used Stacks

- DubLab App (API key and webhook setup required)
- Optional (upload targets):
  - Telegram (token required)
  - Box (OAuth2 required)
  - Dropbox (OAuth2 required)
  - YouTube (OAuth2 required)
  - Postiz (API key required)
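A purely illustrative sketch of the webhook-handling step follows. The payload fields (`jobId`, `status`, `resultUrl`, `language`) are assumptions made for illustration only; the real schema must be taken from DubLab's documentation:

```typescript
// Purely illustrative handler for the "dubbing finished" webhook.
// All payload fields here are assumed, not DubLab's documented schema.
type DubbingWebhook = {
  jobId: string;
  status: "completed" | "failed";
  language: string;
  resultUrl?: string; // URL of the dubbed video, if completed
};

function handleDubbingEvent(event: DubbingWebhook): string | null {
  if (event.status !== "completed" || !event.resultUrl) {
    return null; // nothing to upload; optionally notify of the failure
  }
  // In the workflow, this URL feeds the Box/Dropbox/YouTube/Telegram/
  // Postiz upload branches.
  return event.resultUrl;
}
```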
by Jimleuk
This n8n workflow is a proof-of-concept template exploring how we might work with multimodal LLMs and their multi-image analysis capabilities.

In this demo, we compare 2 screenshots of a webpage taken at different timestamps and pass both to our multimodal LLM for a visual comparison of differences. Handling multiple binary inputs (i.e. images) in an AI request is supported by n8n's basic LLM node.

### How it works

This template is intended to run as 2 parts: first to generate the base screenshots, and next to run the visual regression test, which captures fresh screenshots.

- Starting with a list of webpages captured in a Google Sheet, base screenshots are captured for each using an external web scraping service called Apify.com (I prefer Apify but feel free to use whichever web scraping service is available to you).
- These base screenshots are uploaded to Google Drive and will be referenced later when we run our testing.
- In phase 2 of the workflow, we'll use a scheduled trigger to fire sometime in the future, which will reuse our web scraping service to generate fresh screenshots of our desired webpages.
- Next, we re-download our base screenshots in parallel, and with both old and new captures, we pass these to our LLM node.
- In the LLM node's options, we define 2 "user message" inputs with the type of binary (data) for our images.
- Finally, we prompt our LLM with our testing criteria and capture the regressions detected. Note, results will vary depending on which LLM you use.
- A final report can be generated using the LLM's output and is uploaded to Linear.

### Requirements

- Apify.com API key for the web screenshotting service
- Google Drive and Sheets access to store the list of webpages and captures

### Customising this workflow

- Have your own preferred web screenshotting service? Feel free to swap out Apify for your service of choice.
- If the web screenshot is too large, it may prove difficult for the LLM to spot differences with precision. Try splitting up captures into smaller images instead.
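Outside n8n, the same two-image comparison can be expressed directly against a multimodal chat API. A sketch assuming the OpenAI SDK and two base64-encoded PNG captures; the model choice and prompt wording are illustrative:

```typescript
// Sketch of the multi-image comparison outside n8n, assuming the
// OpenAI SDK and two base64-encoded PNG screenshots. Model and
// prompt are illustrative.
import OpenAI from "openai";

const openai = new OpenAI();

async function compareScreenshots(baseB64: string, freshB64: string) {
  const res = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "user",
        // One text part (the testing criteria) plus two image parts,
        // mirroring the 2 "user message" binary inputs in the LLM node.
        content: [
          {
            type: "text",
            text:
              "Compare these two screenshots of the same webpage and " +
              "list any visual regressions (layout shifts, missing " +
              "elements, changed text).",
          },
          { type: "image_url", image_url: { url: `data:image/png;base64,${baseB64}` } },
          { type: "image_url", image_url: { url: `data:image/png;base64,${freshB64}` } },
        ],
      },
    ],
  });
  return res.choices[0].message.content;
}
```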
by Victor Gold
**Telegram Bot Starter template workflow + n8n AI Agent Chatbot** provides a foundational setup for creating powerful Telegram bots with n8n. It handles incoming messages, photos, files, and voice notes, making it an excellent starting point for developers looking to create bots for customer engagement, support, or interactive services.

Sign up to n8n now → and try it!

### Key Features

- **Dynamic Message Handling**: Respond to text messages, photos, files, and more.
- **Modular Design**: Easily integrate additional workflows such as user registration, payment modules, or custom commands.
- **Error Handling**: Ensure the bot gracefully manages errors and user inputs.
- **Extensibility**: This workflow is the base for building any Telegram bot. Additional modules, such as a user registration module, payment integration, and user profile management, are available for easy connection to expand the bot's functionality.

✍🏻 Use the Telegram user registration workflow →
💵 Use the Telegram Payment, Invoicing and Refund Workflow for Stars →

### Who Can Use This Workflow?

- Developers looking for a quick way to build and customize Telegram bots.
- Businesses and service providers who need customer interaction automation.

### Setup Instructions

1. Replace the Telegram credentials with your own API credentials.
2. Customize responses for different message types (text, photo, file).
3. If integrating with external services (like Google Sheets), update the necessary credentials and links.

### UPDATES

🔥 Get the most up-to-date and expanded version →

- **June 25**: New! AI Agent + setup instructions. Simple setup instructions and examples are included inside the workflow as sticky notes.
- **Sep 24**: Improved message handler: updated the logic to handle various types of messages using a Switch node (text, photo, file, voice, and callback). Payment processing: added new nodes for sending invoices and handling payments via Telegram.
- **Aug 24**: Changed the processing of system events: "new user" and "user who blocked bot" events.

Please reach out to Victor if you need further assistance with your n8n workflows and automations!

Sign up to n8n →, you have to try it!
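The Switch node's routing can be reasoned about as a simple inspection of the incoming Telegram update. A sketch using the real field names from the Telegram Bot API's `Update` object; the routing order is an illustrative choice:

```typescript
// Sketch of the Switch node's routing over a Telegram update.
// Field names (message.text, photo, document, voice, callback_query)
// come from the Telegram Bot API's Update object.
type TelegramUpdate = {
  message?: {
    text?: string;
    photo?: unknown[];   // array of PhotoSize
    document?: unknown;  // file attachments
    voice?: unknown;     // voice notes
  };
  callback_query?: unknown; // inline-keyboard button presses
};

function routeUpdate(
  u: TelegramUpdate
): "text" | "photo" | "file" | "voice" | "callback" | "other" {
  if (u.callback_query) return "callback";
  if (u.message?.voice) return "voice";
  if (u.message?.photo?.length) return "photo";
  if (u.message?.document) return "file";
  if (u.message?.text) return "text";
  return "other"; // stickers, locations, etc. fall through here
}
```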
by Arnaud MARIE
## Monthly Spotify Track Archiving and Playlist Classification

This n8n workflow allows you to automatically archive your monthly Spotify liked tracks in a Google Sheet, along with playlist details and descriptions. Based on this data, Claude 3.5 is used to classify each track into multiple playlists and add them in bulk.

### Who is this template for?

This workflow template is perfect for Spotify users who want to systematically archive their listening history and organize their tracks into custom playlists.

### What problem does this workflow solve?

It automates the monthly process of tracking, storing, and categorizing Spotify tracks into relevant playlists, helping users maintain well-organized music collections and keep a historical record of their listening habits.

### Workflow Overview

- **Trigger Options**: Can be initiated manually or on a set schedule.
- **Spotify Playlists Retrieval**: Fetches the current playlists and filters them by owner.
- **Track Details Collection**: Retrieves information such as track ID and popularity from the user's library.
- **Audio Features Fetching**: Uses Spotify's API to get audio features for each track (see the sketch below).
- **Data Merging**: Combines track information with their audio features.
- **Duplicate Checking**: Filters out tracks that have already been logged in Google Sheets.
- **Data Logging**: Archives new tracks into a Google Sheet.
- **AI Classification**: Uses an AI model to classify tracks into suitable playlists.
- **Playlist Updates**: Adds classified tracks to the corresponding playlists.

### Setup Instructions

1. **Credentials Setup**: Make sure you have valid Spotify OAuth2 and Google Sheets access credentials.
2. **Trigger Configuration**: Choose between manual or scheduled triggers to start the workflow.
3. **Google Sheets Preparation**: Set up a Google Sheet with the necessary structure for logging track details.
4. **Spotify Playlists Setup**: Have a diverse range of playlists with exhaustive descriptions (see the examples below) ready to accommodate different music genres and moods.

### Customization Options

- **Adjust Playlist Conditions**: Modify the AI model's classification criteria to align with your personal music preferences.
- **Enhance Track Analysis**: Incorporate additional audio features or external data sources for more refined track categorization.
- **Personalize Data Logging**: Customize which track attributes to log in Google Sheets based on your archival preferences.
- **Configure Scheduling**: Set a preferred schedule for periodic track archiving, e.g., monthly or weekly.

### Cost Estimate

For 300 tracks, token usage amounts to approximately 60,000 tokens (58,000 for input and 2,000 for completion), costing around 20 cents with Claude 3.5 Sonnet (as of October 2024).

### Playlist Description Examples

| Playlist Name | Playlist Description |
|---|---|
| Classique | Indulge in the timeless beauty of classical music with this refined playlist. From baroque to romantic periods, this collection showcases renowned compositions. |
| Poi | Find your flow with this dynamic playlist tailored for poi, staff, and ball juggling. Featuring rhythmic tracks that complement your movements. |
| Pro Sound | Boost your productivity and focus with this carefully selected mix of concentration-enhancing music. Ideal for work or study sessions. |
| ChillySleep | Drift off to dreamland with this soothing playlist of sleep-inducing tracks. Gentle melodies and ambient sounds create a peaceful atmosphere for restful sleep. |
| To Sing | Warm up your vocal cords and sing your heart out with karaoke-friendly tracks. Featuring popular songs, perfect for solo performances or group sing-alongs. |
| 1990s | Relive the diverse musical landscape of the 90s with this eclectic mix. From grunge to pop, hip-hop to electronic, this playlist showcases defining genres. |
| 1980s | Take a nostalgic trip back to the era of big hair and neon with this 80s playlist. Packed with iconic hits and forgotten gems, capturing the energy of the decade. |
| Groove Up | Elevate your mood and energy with this upbeat playlist. Featuring a mix of feel-good tracks across various genres to lift your spirits and get you moving. |
| Reggae & Dub | Relax and unwind with the laid-back vibes of reggae and dub. This playlist combines classic reggae tunes with deep, spacious dub tracks for a chilled-out vibe. |
| Psytrance | Embark on a mind-bending journey with this collection of psychedelic trance tracks. Ideal for late-night dance sessions or intense focus. |
| Cumbia | Sway to the infectious rhythms of Cumbia with this lively playlist. Blending traditional Latin American sounds with modern interpretations for a danceable mix. |
| Funky Groove | Get your body moving with this collection of funk and disco tracks. Featuring irresistible basslines and catchy rhythms, perfect for dance parties. |
| French Chanson | Experience the romance and charm of France with this mix of classic and modern French songs, capturing the essence of French musical culture. |
| Workout Motivation | Push your limits and power through your exercise routine with this high-energy playlist. From warm-up to cool-down, these tracks will keep you motivated. |
| Cinematic Instrumentals | Immerse yourself in a world of atmospheric sounds with this collection of cinematic instrumental tracks, perfect for focus, relaxation, or contemplation. |
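A hedged sketch of the audio-features step against Spotify's Web API (note that Spotify began restricting this endpoint for newly created apps in late 2024, so check availability for your credentials):

```typescript
// Hedged sketch of the "Audio Features Fetching" step.
// GET /v1/audio-features accepts up to 100 track IDs per call and
// requires a valid OAuth2 access token.
async function getAudioFeatures(accessToken: string, trackIds: string[]) {
  const batch = trackIds.slice(0, 100).join(",");
  const res = await fetch(
    `https://api.spotify.com/v1/audio-features?ids=${batch}`,
    { headers: { Authorization: `Bearer ${accessToken}` } }
  );
  if (!res.ok) throw new Error(`Spotify API error: ${res.status}`);
  const { audio_features } = await res.json();
  // Each entry exposes danceability, energy, tempo, valence, etc.,
  // which enrich the AI classification prompt.
  return audio_features;
}
```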
by Yulia
This workflow is a modification of the previous template on how to create an SQL agent with LangChain and SQLite.

The key difference: the agent has access only to the database schema, not to the actual data. To achieve this, SQL queries are executed outside the AI Agent node, and the results are never passed back to the agent.

This approach allows the agent to generate SQL queries based on the structure of tables and their relationships, without having to access the actual data. This makes the process more secure and efficient, especially in cases where data confidentiality is crucial.

### 🚀 Setup

To get started with this workflow, you'll need to set up a free MySQL server and import your database (check steps 1 and 2 in this tutorial). Of course, you can switch MySQL for another SQL database such as PostgreSQL; the principle remains the same. The key is to download the schema once and save it locally to avoid repeated remote connections (a sketch of this one-time schema dump appears at the end of this section).

Run the top part of the workflow once to download and store the MySQL chinook database schema file on the server. With this approach, we avoid the need to repeatedly connect to the remote db4free database and fetch the schema every time. As a result, we reach greater processing speed and efficiency.

### 🗣️ Chat with your data

1. Start a chat: send a message in the chat window.
2. The workflow loads the locally saved MySQL database schema, without having the ability to touch the actual data. The file contains the full structure of your MySQL database for analysis.
3. The LangChain AI Agent receives the schema and your input and begins to work.
4. The AI Agent generates SQL queries and brief comments based solely on the schema and the user's message.
5. An IF node checks whether the AI Agent has generated a query:
   - **Yes**: the AI Agent passes the SQL query to the next MySQL node for execution.
   - **No**: you get a direct answer from the Agent without further action.
6. The workflow formats the results of the SQL query, ensuring they are convenient to read and easy to understand.
7. Once formatted, you get both the Agent's answer and the query result in the chat window.

### 🌟 Example queries

Try these sample queries to see the schema-driven AI Agent in action:

- Would you please list me all customers from Germany?
- What are the music genres in the database?
- What tables are available in the database?
- Please describe the relationships between tables. (In this example, the AI Agent does not need to create an SQL query.)

And if you prefer to keep the data private, you can manually execute the generated SQL query in your own environment using any database client or tool you trust. 🗄️

💭 The AI Agent memory node does not store the actual data, as we run SQL queries outside the agent. It contains the database schema, user questions, and the initial Agent reply. Actual SQL query results are passed to the chat window, but the values are not stored in the Agent memory.
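The one-time schema dump mentioned in the setup can be reproduced outside n8n. A sketch using the mysql2 client against `information_schema`; the connection details are placeholders, and the query itself never touches table data:

```typescript
// Sketch of the one-time schema dump, assuming the mysql2 client.
// Connection details are placeholders; the query reads only
// information_schema, never the table data.
import mysql from "mysql2/promise";
import { writeFile } from "node:fs/promises";

async function dumpSchema(database: string) {
  const conn = await mysql.createConnection({
    host: "db4free.net", // or your own MySQL host
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database,
  });
  const [rows] = await conn.execute(
    `SELECT table_name, column_name, data_type, column_key
     FROM information_schema.columns
     WHERE table_schema = ?
     ORDER BY table_name, ordinal_position`,
    [database]
  );
  await conn.end();
  // Save locally so the agent never needs a live DB connection.
  await writeFile("schema.json", JSON.stringify(rows, null, 2));
}
```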
by Oriol Seguí
This n8n workflow allows you to automatically monitor the status of multiple URLs in a simple and efficient way. You just need to enter the URLs you want to scan and run the workflow (either manually or scheduled).

For each URL, an availability check is performed. The results are logged in a Google Sheet, clearly distinguishing between successful checks and failures (downtime). If any URL fails, the system filters these errors and automatically sends an email alert notifying you of the detected outages.

The workflow includes help messages in both English and Spanish, integrates with Google Sheets and Gmail, and is suitable for both one-off tasks and scheduled monitoring.

### For Who?

- Webmasters
- SEO & Marketing Teams
- SysAdmins
- Anyone needing automated website uptime monitoring

### How it works?

1. Enter the URLs to scan in the "URLs" field.
2. Trigger the workflow manually or schedule it to run automatically.
3. For each URL, the workflow:
   - Checks if the URL is online or down.
   - Logs the status (success or error) in a Google Sheet.
4. At the end, it filters the failed URLs and sends an email alert listing the detected outages.
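The per-URL availability check reduces to an HTTP request with a timeout. A minimal sketch; the 10-second timeout and the "2xx after redirects counts as up" rule are illustrative choices:

```typescript
// Minimal sketch of the per-URL availability check. The timeout and
// the "2xx after redirects counts as up" rule are illustrative.
async function checkUrl(url: string) {
  try {
    const res = await fetch(url, {
      method: "GET",
      signal: AbortSignal.timeout(10_000), // 10 s timeout
      redirect: "follow",
    });
    return { url, status: res.status, up: res.ok };
  } catch {
    // DNS failure, timeout, TLS error, etc. all count as downtime
    return { url, status: 0, up: false };
  }
}

// Rows like these are appended to the Google Sheet; entries with
// up === false are filtered into the alert email.
```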
by Oskar
This workflow uses an OpenAI Assistant to compose draft replies for labeled email messages and automatically attaches the drafts to the corresponding Gmail threads.

💡 You can add a knowledge base to your OpenAI Assistant and make your reply drafts highly customized (e.g. compose a response with product information in reply to an inquiry from a customer).

🎬 See this workflow in action in my YouTube video about automating Gmail.

### How it works?

1. The workflow is triggered at regular intervals (default: every 1 minute; you can change this value) to check for messages with a specific label (e.g., "AI").
2. The content of the retrieved email message is forwarded to the OpenAI Assistant node, and a reply draft is generated.
3. Next, the response from the Assistant is converted to HTML, and a raw message in the RFC standard is composed (see the sketch below). 💡 You can learn more about composing drafts with the Gmail API in the official Google documentation.
4. The raw email message (reply draft) is encoded and attached to the original thread ID.
5. Finally, the trigger label (in this case: "AI") is removed to prevent the workflow from looping.

### Set up steps

1. Set credentials for Gmail and OpenAI.
2. Add a new label in your Gmail account for messages that should be handled by the workflow (e.g. name it "AI").
3. Select this label in the first and last Gmail nodes in the workflow.
4. Create and configure your OpenAI Assistant.
5. Select your assistant in the "OpenAI Assistant" node.
6. Optionally: change the trigger interval (by default, the interval is 1 minute).

If you like this workflow, please subscribe to my YouTube channel and/or my newsletter.
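Steps 3 and 4 can be sketched as follows: compose an RFC 2822 message, base64url-encode it, and attach it to the original thread via the Gmail API's `drafts.create` method. Header values are placeholders, and for correct threading Gmail also expects a matching Subject and, ideally, In-Reply-To/References headers:

```typescript
// Sketch of steps 3-4: compose an RFC 2822 reply, base64url-encode
// it, and link it to the original thread via the Gmail API.
// Header values are placeholders; In-Reply-To/References headers
// would further improve threading.
function buildRawReply(to: string, subject: string, html: string): string {
  const mime = [
    `To: ${to}`,
    `Subject: Re: ${subject}`,
    "Content-Type: text/html; charset=UTF-8",
    "",
    html,
  ].join("\r\n");
  // Gmail expects web-safe base64 ("base64url") without padding
  return Buffer.from(mime)
    .toString("base64")
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}

// POST https://gmail.googleapis.com/gmail/v1/users/me/drafts
// body: { message: { raw: buildRawReply(...), threadId: originalThreadId } }
// Supplying threadId is what attaches the draft to the existing thread.
```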