by Joseph LePage
# Empower Your AI Chatbot with Long-Term Memory and Dynamic Tool Routing

This n8n workflow equips your AI agent with long-term memory and a dynamic tools router, enabling it to provide intelligent, context-aware responses while managing tasks across multiple tools. By combining persistent memory and modular task routing, this workflow makes your AI smarter, more efficient, and highly adaptable.

## Who Is This For?

- **AI Developers & Automation Enthusiasts:** Integrate advanced AI features like long-term memory and task routing without coding expertise.
- **Businesses & Teams:** Automate tasks while maintaining personalized, context-aware interactions.
- **Customer Support Teams:** Improve user experience with chatbots that remember past interactions.
- **Marketers & Content Creators:** Streamline communication across platforms like Gmail and Telegram.
- **AI Researchers:** Experiment with persistent memory and multi-tool integration.

## What Problem Does This Solve?

This workflow simplifies the creation of intelligent AI systems that retain memory, manage tasks dynamically, and automate notifications across tools like Gmail and Telegram, saving time and improving efficiency.

## What This Workflow Does

- **Save & Retrieve Memories:** Uses Google Docs for long-term storage to recall past interactions or user preferences.
- **Dynamic Task Routing:** Routes tasks to the right tools (e.g., saving/retrieving memories or sending notifications).
- **AI-Powered Context Understanding:** Combines OpenAI GPT-based short-term memory with long-term memory for smarter responses.
- **Multi-Channel Notifications:** Sends updates via Gmail or Telegram.

## Setup

1. **API Credentials:** Connect to OpenAI (AI processing), Google Docs (memory storage), and Gmail/Telegram (notifications).
2. **Customize Parameters:** Adjust the AI agent's system message for your use case, and define task-routing rules in the tools router node.
3. **Test & Deploy:** Verify memory saving/retrieval, task routing, and notification delivery.
## How to Customize

- Modify the system message in the OpenAI node to tailor your agent's behavior.
- Add or adjust routing rules for additional tools.
- Update notification settings to match your communication preferences.
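To make the routing rules concrete, here is a minimal sketch of what the tools router's classification logic could look like inside an n8n Code node. The intent names (`save_memory`, `get_memory`, `notify`, `chat`) and the keyword rules are illustrative assumptions, not the template's actual node configuration; in the real workflow the AI agent decides which tool to invoke.

```javascript
// Hypothetical routing sketch: map an incoming message to a tool branch.
// Intent names and keyword patterns are assumptions for illustration only.
function routeTask(message) {
  const text = message.toLowerCase();
  if (/\b(remember|save|note)\b/.test(text)) return 'save_memory';
  if (/\b(recall|what did|last time)\b/.test(text)) return 'get_memory';
  if (/\b(email|notify|telegram)\b/.test(text)) return 'notify';
  return 'chat'; // fall through to a plain LLM response
}
```

In the actual template the router node would feed each branch into the matching Google Docs or Gmail/Telegram node.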
by Khairul Muhtadin
Effortlessly track your expenses with MoneyMate, an n8n workflow that transforms receipts into organized financial insights. Upload a photo or text via Telegram, and let MoneyMate extract key details (store info, transaction dates, items, and totals) using Google Vision OCR and AI-powered parsing via OpenRouter. It categorizes expenses (e.g., Food & Beverages, Transport, Household) and delivers a clean, emoji-rich summary back to your Telegram chat. It handles zero-total errors with a friendly nudge to double-check inputs. Perfect for freelancers, small business owners, or anyone seeking hassle-free expense management. No database required, ensuring privacy and simplicity. Deploy MoneyMate and take control of your finances today!

## Key Features

- **Telegram Integration:** Input via photo or text, receive summaries instantly.
- **Receipt Scanning:** Converts receipt images to text using the Google Vision API.
- **AI Parsing:** Categorizes transactions with OpenRouter's AI analysis.
- **Privacy-First:** Processes data on the fly without storage.
- **Smart Error Handling:** Catches zero totals with user-friendly prompts.
- **Flexible Categories:** Supports Income/Expense and custom expense types.

## Ideal For

- **Budget-conscious individuals** managing personal finances.
- **Entrepreneurs** tracking business expenses.
- **Teams** needing quick, automated expense reporting.

## Pre-Requirements

- **n8n Instance:** A running n8n instance (cloud or self-hosted).
- **Credentials:**
  - Telegram: A bot token and webhook setup (obtained via BotFather). For more information, please refer to Telegram bots creation.
  - Google Cloud: A service account with the Google Vision API enabled and an API key. For more information, please refer to Google Cloud Vision.
  - OpenRouter: An account with API access for AI language model usage.
- **Telegram Bot:** A configured Telegram bot to receive inputs and send summaries.
## Setup Instructions

1. **Import Workflow:** Copy the MoneyMate workflow JSON and import it into your n8n instance using the "Import Workflow" option.
2. **Set Up Telegram Bot:** Create a bot via BotFather on Telegram to get a token and set up a webhook. For detailed steps, refer to n8n's Telegram setup guide.
3. **Configure Credentials:**
   - In the Telegram Trigger, Send Error Message, and Send Expense Summary nodes, add Telegram API credentials with your bot token.
   - In the Get Telegram File and Download Image nodes, ensure Telegram API credentials are linked.
   - In the Google Vision OCR node, add Google Cloud credentials with Google Vision API access.
   - In the OpenRouter AI Model node, set up OpenRouter API credentials.
4. **Test the Workflow:** Send a test receipt photo or text (e.g., "Lunch 50,000 IDR") via Telegram and verify the summary in your chat.
5. **Activate:** Enable the workflow in n8n to run automatically for each input.

## Customization Options

- **Add Categories:** Modify the AI Categorizer node to include new expense types (e.g., Entertainment).
- **Change Output Format:** Adjust the Format Summary Message node to include more details like taxes or payment methods.
- **Switch AI Model:** In the OpenRouter AI Model node, select a different OpenRouter model for better parsing.
- **Store Data:** Add a Google Sheets node after Parse Receipt Data to save expense records.
- **Enhance Errors:** Include an email notification node after Check Invalid Input for failed inputs.

## Why Choose MoneyMate?

Save time, reduce manual entry, and gain clarity on your spending with MoneyMate's AI-driven workflow. Ready to streamline your finances? Get MoneyMate now!

Made by: khmuhtadin. Need a custom workflow? Contact me on LinkedIn or Web.
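The categorization and zero-total guard described above could be sketched in an n8n Code node like this. In the actual workflow an OpenRouter model performs the classification; the category names and keyword lists below are illustrative assumptions showing only the shape of the logic and output.

```javascript
// Keyword-based fallback categorizer plus the zero-total check.
// Categories and keywords are assumptions, not the template's AI prompt.
const CATEGORIES = {
  'Food & Beverages': ['lunch', 'coffee', 'restaurant', 'snack'],
  Transport: ['taxi', 'fuel', 'parking', 'train'],
  Household: ['detergent', 'soap', 'electricity'],
};

function categorize(itemName) {
  const name = itemName.toLowerCase();
  for (const [category, words] of Object.entries(CATEGORIES)) {
    if (words.some((w) => name.includes(w))) return category;
  }
  return 'Other';
}

function buildSummary(receipt) {
  // Zero-total guard: nudge the user instead of emitting an empty report.
  if (!receipt.total || receipt.total <= 0) {
    return { ok: false, message: 'Total is zero - please double-check your input.' };
  }
  return {
    ok: true,
    store: receipt.store,
    total: receipt.total,
    items: receipt.items.map((i) => ({ ...i, category: categorize(i.name) })),
  };
}
```

The `ok: false` branch is what would feed the Send Error Message node; the `ok: true` branch feeds the summary formatter.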
by Dina Lev
# Automate Legal Document Generation with n8n, Apify, Google Drive, and AI

This tutorial details an end-to-end automation solution for streamlining the lien filing process for Homeowners Associations (HOAs) using an n8n workflow. It significantly reduces manual effort and potential errors for legal professionals by automating document retrieval, information extraction, and document generation.

## Who's it for

This template is ideal for legal professionals, law firms, and property management companies that frequently handle lien filings for Homeowners Associations. If you're looking to reduce manual document processing time, minimize errors, and improve efficiency in your legal operations, this workflow is for you.

## The Problem

Legal professionals often allocate a significant portion of their time, up to 40%, to manual document processing tasks. The traditional process for filing a lien is particularly time-consuming (e.g., 15 minutes per case) and error-prone, involving steps like manual searching, downloading, extracting, and populating legal documents.

## The Automation Solution Overview

This automation leverages an n8n workflow in conjunction with external services like Playwright (via Apify), Google Drive, Google Sheets, Gmail, and the Gemini API. The primary objective is to automate the legal document generation process, from initial data submission to final document generation and notification.

## Requirements

Before importing and running the n8n workflow, you need the following:

- **n8n Instance:** A running n8n instance (self-hosted or cloud).
- **Google Account:** With access to Google Sheets, Google Drive, and Gmail.
- **Google Sheets:**
  - An Input Sheet to receive form responses (e.g., "Legal Automation Input Form (Responses)").
  - An Output/Review Sheet for extracted data and approval (e.g., "Automation Output data Sheet") with specific columns like "Timestamp", "Legal Description", "Association Name", "Debt", "Parcel", "Owner", "Doc link", "Approval", and "Created".
- **Google Drive:**
  - A main folder for n8n outputs (e.g., "N8N Folder").
  - A Google Docs Lien Template with placeholders (e.g., {{ASSOCIATION}}, {{DEBT}}, {{PROPERTY}}, {{MONTH}}, {{YEAR}}, {{DAY}}, {{PARCEL}}, {{OWNER}}).
- **Google Gemini API Key:** For text and image processing.
- **Apify Account & Playwright Actor:** An Apify account with access to a Playwright actor capable of scraping property information from your target county's website.

## Setup Steps

1. **n8n Credentials:**
   - Add Google Sheets, Google Drive, and Gmail credentials in your n8n instance.
   - Add an HTTP Query Auth credential for your Gemini API key (named "Query Auth account" in the template).
   - Ensure your Apify API token is configured within the Apify Playwright script to find property info node.
2. **Google Sheets Configuration:**
   - Link the Google Sheets Trigger node to your Input Sheet.
   - Link the Google Sheets node (for appending data) and the Intermediate data received trigger to your Output/Review Sheet.
3. **Google Drive Configuration:**
   - Update the Create folder to output node with the ID of your "N8N Folder".
   - Update the Make Copy of Template node with the ID of your Google Docs Lien Template.
4. **Email Addresses:** Update the recipient email addresses in the Approve Through Email and Notify complete nodes to your desired notification email.

## Detailed Tutorial Steps and n8n Workflow Breakdown Summary

This n8n workflow, "Legal Document Generator E2E", automates the process of generating legal lien documents, from initial data input to final document creation and notification.

- **Initiate Workflow:** The workflow starts with a Google Sheets Trigger node, which listens for new lien requests submitted via a form that populates a Google Sheet.
- **Gather Property Data:** An Apify Playwright script to find property info node fetches property details from county websites, and a Get file for property node downloads associated legal documents.
- **Process and Store Document:** The downloaded document is transformed to base64 using Transform to base64 and then uploaded to Google Drive via Upload legal doc for storage and further processing.
- **Extract Information with AI:** The Call Gemini API for legal desc and Property metadata nodes leverage the Gemini API to extract the precise legal description, parcel number, and owner's name from the document. This extracted data is then structured by the Property Information Extractor.
- **Review and Approve:** The extracted information is appended to an intermediate Google Sheet by the first Google Sheets node, and an email is sent via Approve Through Email to the user for review and approval.
- **Generate Documents on Approval:** A second Intermediate data received Google Sheets Trigger node monitors the approval status in the sheet. Once "Approved", an If node allows the workflow to proceed.
- **Create and Populate Documents:** A new client-specific folder is created in Google Drive using Create folder to output. A blank lien template is copied (Make Copy of Template), and its custom variables are populated with the extracted data using Change Custom Variables.
- **Finalize and Store Output:** The populated document is converted to PDF (Generate PDF), and both the new PDF (Add PDF To Drive) and the original source document (Move file in Google Drive) are saved to the client's new folder.
- **Update Records and Notify:** The Update Creation Google Sheets node marks the document as "Created" in the tracking sheet and updates the document link. Finally, Notify complete sends a notification email about the completion.

## How to Customize the Workflow

- **Adjust Input Form Fields:** Modify the column names in your initial Google Sheet and update the expressions in the Google Sheets Trigger and Apify Playwright script to find property info nodes to match your form.
- **Change County Website/Scraper:** If you need to fetch data from a different county or property database, you will need to modify the Apify Playwright script to find property info node to call a different Apify actor, or configure a new HTTP Request node to interact with your chosen data source.
- **Customize Document Template:** Update the placeholders in your Google Docs Lien Template to match your specific document needs. Ensure the corresponding replaceAll actions are updated in the Change Custom Variables node.
- **Modify AI Prompts:** Refine the prompts within the Call Gemini API for legal desc and Property metadata nodes to improve the accuracy of information extraction based on your document types.
- **Notification Preferences:** Adjust the sendTo email addresses and subject/message content in the Approve Through Email and Notify complete nodes.

## Benefits of this Automation

This automation offers significant advantages for legal professionals:

- **Streamlined Organization:** Ensures all relevant documents (original source files, editable templates, and final PDFs) are systematically organized, tracked, and easily accessible within Google Drive.
- **Time-Saving and Efficiency:** Documents are quickly generated and ready for client sharing, leading to faster turnaround times and improved service delivery.
- **Scalability:** Provides a scalable solution for handling a higher volume of document processing tasks without a proportional increase in manual effort.

Learn more about Chill Labs and our services on our website: Chill Labs
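The variable substitution performed by the Change Custom Variables node can be pictured as simple placeholder replacement. The real workflow issues Google Docs replaceAll requests; this sketch shows the equivalent string logic using the template's placeholder syntax.

```javascript
// Illustrative equivalent of the "Change Custom Variables" substitution step.
// Matches {{NAME}} placeholders and fills them from an extracted-data object.
const PLACEHOLDER = /\{\{(\w+)\}\}/g;

function fillTemplate(template, data) {
  return template.replace(PLACEHOLDER, (match, key) =>
    key in data ? String(data[key]) : match // leave unknown placeholders intact
  );
}
```

Leaving unknown placeholders intact (rather than replacing them with an empty string) makes missing extraction fields easy to spot during the review-and-approve step.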
by Muhammad Nouman
## How it works

This workflow turns a Google Drive folder into a fully automated YouTube publishing pipeline. Whenever a new video file is added to the folder, the workflow generates all YouTube metadata using AI, uploads the video to your YouTube channel, deletes the original file from Drive, sends a Telegram confirmation, and can optionally post to Instagram and Facebook using permanent system tokens.

High-level flow:

1. Detects new video uploads in a specific Google Drive folder.
2. Downloads the file and uses AI to generate:
   - a polished first-person YouTube description
   - an SEO-optimized YouTube title
   - high-ranking YouTube tags
3. Uploads the video to YouTube with the generated metadata.
4. Deletes the original Drive file after upload.
5. Sends a Telegram notification with video details.
6. (Optional) Posts to Instagram & Facebook using permanent system user tokens.

## Set up steps

Setup usually takes a few minutes.

1. Add Google Drive OAuth2 credentials for the trigger and download/delete nodes.
2. Add your OpenAI (or Gemini) API credentials for title/description/tag generation.
3. Add YouTube OAuth2 credentials in the YouTube Upload node.
4. Add Facebook/Instagram Graph API credentials if enabling cross-posting.
5. Replace placeholder IDs (Drive folder ID, Page ID, IG media endpoint).
6. Review the sticky notes in the workflow; they contain setup guidance and token info.
7. Activate the Google Drive trigger to start automated uploads.
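Because AI-generated metadata can exceed YouTube's field limits, a small clamping step between the AI node and the YouTube Upload node is a sensible addition. This is a hedged sketch: the limits below reflect YouTube's documented caps as I understand them (title 100 characters, description 5,000 characters, roughly 500 characters of tags combined), and the function itself is not part of the original template, so verify the numbers against the current YouTube Data API documentation before relying on them.

```javascript
// Hypothetical post-processing for AI-generated metadata before upload.
// Limits are assumptions based on YouTube's documented caps; verify them.
function sanitizeMetadata({ title, description, tags }) {
  const trimmedTags = [];
  let tagBudget = 500;
  for (const tag of tags) {
    if (tag.length + 1 > tagBudget) break; // stop once the combined cap is hit
    trimmedTags.push(tag);
    tagBudget -= tag.length + 1;
  }
  return {
    title: title.slice(0, 100),
    description: description.slice(0, 5000),
    tags: trimmedTags,
  };
}
```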
by Fahmi Oktafian
This n8n workflow is a Telegram bot that allows users to either:

- Generate AI images using the Pollinations API, or
- Generate blog articles using Gemini AI

Users simply type `image your prompt` or `blog your title`, and the bot responds with either an AI-generated image or an article.

## Who's it for

This template is ideal for:

- Content creators and marketers who want to generate visual and written content quickly
- Telegram bot developers looking for real-world AI integration
- Educators or students automating content workflows
- Anyone managing content pipelines using Google Sheets

## What it does / How it works

**Telegram Interaction**

- Trigger Telegram Message: Listens for new messages or button clicks via Telegram
- Classify Telegram Input: JavaScript logic to classify input as /start, /help, normal text, or callback
- Switch Input Type: Directs the flow based on the classification

**Menu & Help**

- Send Main Menu to User: Shows "Generate Image", "Blog Article", and "Help" options
- Switch Callback Selection: Routes based on the button pressed (image, blog, or help)
- Send Help Instructions: Sends markdown instructions on how to use the bot

**Input Validation**

- Validate Command Format: Ensures input starts with `image` or `blog`
- Notify Invalid Input Format: If validation fails, informs the user of the correct format

**Image Generator**

- Prompt User for Image Description: When the user clicks Generate Image
- Detect Text-Based Input Type: Detects whether the text is an image or blog request
- Switch Text Command Type: Directs whether to generate an image or an article
- Show Typing for Image Generation: Sends "uploading photo..." typing status
- Build Image Generation URL: Constructs the Pollinations API image URL from the prompt
- Download AI Image: Makes an HTTP request to get the image
- Send Image Result to Telegram: Sends the image to the user via Telegram
- Log Image Prompt to Google Sheets: Logs the prompt, image URL, date, and user ID
- Upload Image to Google Drive: Saves the image to a Google Drive folder

**Blog Article Generator**

- Prompt User for Blog Title: When the user clicks Blog Article
- Store Blog Prompt: Saves the prompt for later use
- Log Blog Prompt to Google Sheets: Writes the title and user ID to Google Sheets
- Send Article Style Options: Offers Formal, Casual, or News style
- Store Selected Article Style: Updates the row with the chosen style in Google Sheets
- Fetch Last User Prompt: Finds the latest prompt submitted by this user
- Extract Last Blog Prompt: Extracts the row for use in the AI request
- Gemini Chat Wrapper: Handles input into the LangChain node for AI processing
- Generate Article with Gemini: Calls Gemini to create a three-paragraph blog post
- Parse Gemini Response: Parses the JSON string to extract the title and content
- Send Article to Telegram: Sends the blog article result back to the user
- Log Final Article to Google Sheets: Updates the row with the final content and timestamp

## Requirements

- Telegram bot (via @BotFather)
- Pollinations API (free and public endpoint)
- Google Sheets & Drive (OAuth credential setup in n8n)
- Google Gemini / PaLM API key via LangChain
- Self-hosted or cloud n8n setup

## Setup Instructions

1. Clone the workflow and import it into your n8n instance.
2. Set credentials: Telegram API, Google Sheets OAuth, Google Drive OAuth, and Gemini (via LangChain).
3. Replace: the Sheet ID with your own Google Sheet, the Folder ID on Google Drive, and chat_id placeholders if needed (use expressions instead).
4. Deploy and send /start in your Telegram bot.

## Customization Tips

- Edit the Gemini prompt to adjust article length or tone
- Add extra style buttons like "SEO", "Story", or "Academic"
- Add image post-processing (e.g., compression, renaming)
- Add error-catching logic (e.g., if the Pollinations image fails)
- Store images with filenames based on timestamp/user

## Security Considerations

- Use n8n credentials for all tokens (Telegram, Gemini, Sheets, Drive)
- Never hardcode your token inside HTTP nodes
- Do not expose real Google Sheet or Drive links in the shared version
- Use a Set node to collect all editable variables (like folder ID and sheet name)
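The Classify Telegram Input step described above could look something like this in an n8n Code node. The field names on the returned object and the exact rules are illustrative assumptions; the template's own JavaScript may differ.

```javascript
// Sketch of the "Classify Telegram Input" logic: decide which branch of the
// Switch Input Type node an update should take. Output shape is assumed.
function classifyInput(update) {
  if (update.callback_query) {
    return { type: 'callback', action: update.callback_query.data };
  }
  const text = (update.message?.text || '').trim();
  if (text === '/start') return { type: 'start' };
  if (text === '/help') return { type: 'help' };
  const match = text.match(/^(image|blog)\s+(.+)/i);
  if (match) {
    return { type: match[1].toLowerCase(), prompt: match[2] };
  }
  return { type: 'invalid' }; // routes to "Notify Invalid Input Format"
}
```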
by scrapeless official
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## How it works

This n8n workflow helps you build a fully automated SEO content engine using Scrapeless and AI. It's designed for teams running international websites, such as SaaS products, e-commerce platforms, or content-driven businesses, who want to grow targeted search traffic through high-conversion content without relying on manual research or hit-or-miss topics.

The flow runs in three key phases:

**Phase 1: Topic Discovery**
Automatically find high-potential long-tail keywords based on a seed keyword using Google Trends via Scrapeless. Each keyword is analyzed for trend strength and categorized by priority (P0–P3) with the help of an AI agent.

**Phase 2: Competitor Research**
For each P0–P2 keyword, the flow performs a Google Search (via Deep SerpAPI) and extracts the top 3 organic results. Scrapeless then crawls each result to extract the full article content in clean Markdown. This gives you a structured, comparable view of how competitors are writing about each topic.

**Phase 3: AI Article Generation**
Using AI (OpenAI or another LLM), the workflow generates a complete SEO article draft, including:

- SEO title
- Slug
- Meta description
- Trend-based strategy summary
- Structured JSON-based article body with H2/H3 blocks

Finally, the article is stored in Supabase (or any other supported DB), making it ready for review, API-based publishing, or further automation.

## Set up steps

This flow requires intermediate familiarity with n8n and API key setup. Full configuration may take 30–60 minutes.

**Prerequisites**

- **Scrapeless** account (for Google Trends and web crawling)
- **LLM provider** (e.g., OpenAI or Claude)
- **Supabase** or **Google Sheets** (to store keywords & article output)

**Required Credentials in n8n**

- Scrapeless API Key
- OpenAI (or other LLM) credentials
- Supabase or Google Sheets credentials

**Setup Instructions (Simplified)**

1. **Input Seed Keyword:** Edit the "Set Seed Keyword" node to define your niche, e.g., "project management".
2. **Google Trends via Scrapeless:** Use Scrapeless to retrieve "related queries" and their interest-over-time data.
3. **Trend Analysis with AI Agent:** The AI evaluates each keyword's trend strength and assigns a priority (P0–P3).
4. **Filter & Store Keyword Data:** Group and sort keywords by priority, then store them in Google Sheets.
5. **Competitor Research:** Use Deep SerpAPI to get the top 3 Google results, then crawl each using Scrapeless.
6. **AI Content Generation:** Feed competitor content and trend data into the AI, and output a structured SEO blog article.
7. **Store Final Article:** Save the full article JSON (title, meta, slug, content) to Supabase.
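The "Filter & Store Keyword Data" step (keep P0–P2 keywords for competitor research, ordered by priority) could be sketched as follows. The item shape (`keyword`, `priority` fields) is an assumption about how the AI agent's output is structured, not taken from the template.

```javascript
// Sketch: select the keywords eligible for competitor research (P0-P2)
// and sort them so the highest-priority topics are processed first.
function selectKeywords(items) {
  return items
    .filter((k) => ['P0', 'P1', 'P2'].includes(k.priority))
    .sort((a, b) => a.priority.localeCompare(b.priority));
}
```

P3 keywords stay in the sheet for reference but are skipped by the research phase.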
by Markhah
## Overview

This workflow generates automated revenue and expense comparison reports from a structured Google Sheet. It enables users to compare financial data across the current period, last month, and last year, then uses an AI agent to analyze and summarize the results for business reporting.

### 1. Prerequisites

- A connected Google Sheets OAuth2 credential.
- A valid DeepSeek AI API key (or replace it with another chat model).
- A sub-workflow (child workflow) that handles the processing logic.
- Properly structured Google Sheets data (see below).

### 2. Required Google Sheet Structure

- Column headers must include at least: Date, Amount, Type.
- The Date column must use the dd/MM/yyyy or dd-MM-yyyy format.
- Entries should span multiple time periods (e.g., current month, last month, last year).

### 3. Setup Steps

1. Import the workflow into your n8n instance.
2. Connect your Google Sheets and DeepSeek API credentials.
3. Update:
   - The Sheet ID and Tab Name (already embedded in the Get revenual from google sheet node).
   - The custom sub-workflow ID (in the Call n8n Workflow Tool node).
   - Optionally, the chatbot webhook in the When chat message received node.

### 4. What the Workflow Does

- Accepts date inputs via the AI chat interface (ChatTrigger + AI Agent).
- Fetches raw transaction data from Google Sheets.
- Segments and pivots revenue by classification for the current period, last month, and last year.
- Aggregates totals and applies custom titles for comparison.
- Merges all summaries into a final unified JSON report.

### 5. Customization Options

- Replace DeepSeek with OpenAI or other LLMs.
- Change the date fields or cycle comparisons (e.g., quarterly, weekly).
- Add more AI analysis steps such as sentiment scoring or forecasting.
- Modify the pivot logic to suit specific KPI tags or labels.

### 6. Troubleshooting Tips

- If the Google Sheets fetch fails: ensure the document is shared with your n8n Google credential.
- If parsing errors occur: verify that all dates follow the expected format.
- The sub-workflow must be active and configured to accept the correct inputs (6 dates).

### 7. SEO Keywords

google sheets report, AI financial report, compare revenue by month, expense analysis automation, chatbot n8n report generator, n8n Google Sheet integration
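The date handling behind the period comparison can be sketched like this: parse the sheet's dd/MM/yyyy (or dd-MM-yyyy) strings and bucket each row into the current period, last month, or the same month last year. This is an illustrative assumption about how the sub-workflow segments rows, with `now` passed in so the logic stays testable.

```javascript
// Parse dd/MM/yyyy or dd-MM-yyyy strings into Date objects.
function parseDate(value) {
  const [d, m, y] = value.split(/[/-]/).map(Number);
  return new Date(y, m - 1, d);
}

// Bucket a row's date relative to "now" for the comparison report.
function bucketFor(value, now) {
  const date = parseDate(value);
  const monthDiff =
    (now.getFullYear() - date.getFullYear()) * 12 + (now.getMonth() - date.getMonth());
  if (monthDiff === 0) return 'current';
  if (monthDiff === 1) return 'lastMonth';
  if (monthDiff === 12) return 'lastYear'; // same month, one year earlier
  return 'other';
}
```

Note that JavaScript's `new Date('10/06/2024')` would read the string as MM/DD, which is exactly why the explicit dd/MM parse above (and the troubleshooting tip about date formats) matters.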
by Onur
# Automated AI Content Creation & Instagram Publishing from Google Sheets

This n8n workflow automates the creation and publishing of social media content directly to Instagram, using ideas stored in a Google Sheet. It leverages AI (Google Gemini and Replicate Flux) to generate concepts, image prompts, captions, and the final image, turning your content plan into reality with minimal manual intervention.

Think of this as the execution engine for your content strategy. It assumes you have a separate process (whether manual entry, another workflow, or a different tool) for populating the Google Sheet with initial post ideas (including Topic, Audience, Voice, and Platform). This workflow takes those ideas and handles the rest, from AI generation to final publication.

## What does this workflow do?

This workflow streamlines the content execution process by:

- **Automatically fetching** unprocessed content ideas from a designated Google Sheet based on a schedule.
- Using Google Gemini to generate a platform-specific content concept (specifically for a 'Single Image' format).
- Generating two distinct AI image prompt options based on the concept using Gemini.
- Writing an engaging, platform-tailored caption (including hashtags) using Gemini, based on the first prompt option.
- Generating a visual image using the first prompt option via the Replicate API (using the Flux model).
- **Publishing** the generated image and caption directly to a connected **Instagram Business account**.
- **Updating the status** in the Google Sheet to mark the idea as completed, preventing reprocessing.

## Who is this for?

- **Social Media Managers & Agencies:** Automate the execution of your content calendar stored in Google Sheets.
- **Marketing Teams:** Streamline content production from planned ideas and ensure consistent posting schedules.
- **Content Creators & Solopreneurs:** Save significant time by automating the generation and publishing process based on your pre-defined ideas.
- **Anyone** using Google Sheets to plan social media content and wanting to automate the creative generation and posting steps with AI.

## Benefits

- **Full Automation:** From fetching planned ideas to Instagram publishing, automate the entire content execution pipeline.
- **AI-Powered Generation:** Leverage Google Gemini for creative concepts, prompts, and captions, and Replicate for image generation based on your initial topic.
- **Content Calendar Execution:** Directly turn your Google Sheet plan into published posts.
- **Time Savings:** Drastically reduce the manual effort involved in creating visuals and text for each planned post.
- **Consistency:** Maintain a regular posting schedule by automatically processing your queue of ideas.
- **Platform-Specific Content:** AI prompts are designed to tailor concepts, prompts, and captions for the platform specified in your sheet (e.g., Instagram or LinkedIn).

## How it Works

1. **Scheduled Trigger:** The workflow starts automatically based on the schedule you set (e.g., every hour, daily).
2. **Fetch Idea:** Reads the next row from your Google Sheet where the 'Status' column indicates it's pending (e.g., '0'). It only fetches one idea per run.
3. **Prepare Inputs:** Extracts Topic, Audience, Voice, and Platform from the sheet data.
4. **AI Concept Generation (Gemini):** Creates a single content concept suitable for a 'Single Image' post on the target platform.
5. **AI Prompt Generation (Gemini):** Develops two detailed, distinct image prompt options based on the concept.
6. **AI Caption Generation (Gemini):** Writes a caption tailored to the platform, using the first image prompt and other context.
7. **Image Generation (Replicate):** Sends the first prompt to the Replicate API (Flux model) to generate the image.
8. **Prepare for Instagram:** Formats the generated image URL and caption.
9. **Publish to Instagram:** Uses the Facebook Graph API in three steps:
   - Creates a media container by uploading the image URL and caption.
   - Waits for Instagram to process the container.
   - Publishes the processed container to your feed.
10. **Update Sheet:** Changes the 'Status' in the Google Sheet for the processed row (e.g., to '1') to mark it as complete.

## n8n Nodes Used

- Schedule Trigger
- Google Sheets (Read & Update operations)
- Set (multiple instances for data preparation)
- Langchain Chain - LLM (multiple instances for Gemini calls)
- Langchain Chat Model - Google Gemini (multiple instances)
- Langchain Output Parser - Structured (multiple instances)
- HTTP Request (for the Replicate API call)
- Wait
- Facebook Graph API (multiple instances for the Instagram publishing steps)

## Prerequisites

- Active n8n instance (Cloud or Self-Hosted).
- **Google Account** with access to Google Sheets.
- **Google Sheets API Credentials (OAuth2):** Configured in n8n.
- A **Google Sheet** structured with columns like Topic, Audience, Voice, Platform, Status (or similar). Ensure your 'pending' and 'completed' statuses are defined (e.g., '0' and '1').
- **Google Cloud Project** with the Vertex AI API enabled.
- **Google Gemini API Credentials:** Configured in n8n (usually via Google Vertex AI credentials).
- **Replicate Account** and API Token.
- **Replicate API Credentials (Header Auth):** Configured in n8n.
- **Facebook Developer Account**.
- **Instagram Business Account** connected to a Facebook Page.
- **Facebook App** with the necessary permissions: instagram_basic, instagram_content_publish, pages_read_engagement, pages_show_list.
- **Facebook Graph API Credentials (OAuth2):** Configured in n8n with the required permissions.

## Setup

1. Import the workflow JSON into your n8n instance.
2. **Configure Schedule Trigger:** Set the desired frequency (e.g., every 30 minutes, every 4 hours) for checking new ideas in the sheet.
3. **Configure Google Sheets Nodes:**
   - Select your Google Sheets OAuth2 credentials for both Google Sheets nodes.
   - In 1. Get Next Post Idea..., enter your Spreadsheet ID and Sheet Name. Verify the Status filter matches your 'pending' value (e.g., 0).
   - In 7. Update Post Status..., enter the same Spreadsheet ID and Sheet Name. Ensure the Matching Columns (e.g., Topic) and the Status value to update match your 'completed' value (e.g., 1).
4. **Configure Google Gemini Nodes:** Select your configured Google Vertex AI / Gemini credentials in all Google Gemini Chat Model nodes.
5. **Configure Replicate Node (4. Generate Image...):** Select your Replicate Header Auth credentials. The workflow uses black-forest-labs/flux-1.1-pro-ultra by default; you can change this if needed.
6. **Configure Facebook Graph API Nodes (6a, 6c):** Select your Facebook Graph API OAuth2 credentials. Crucially, update the Instagram Account ID in the Node parameter of both Facebook Graph API nodes (6a and 6c). The template uses a placeholder (17841473009917118); replace this with your actual Instagram Business Account ID.
7. **Adjust Wait Node (6b):** The default wait time might be sufficient, but if you encounter errors during publishing (especially with larger images/videos in the future), you might need to increase the wait duration.
8. Activate the workflow.
9. **Populate your Google Sheet:** Ensure you have rows with your content ideas and the correct 'pending' status (e.g., '0'). The workflow will pick them up on its next scheduled run.

This workflow transforms your Google Sheet content plan into a fully automated AI-powered Instagram publishing engine. Start automating your social media presence today!
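The three-step Instagram publish flow (create container, wait, publish) rests on two Graph API calls: `POST /{ig-user-id}/media` and `POST /{ig-user-id}/media_publish`. The sketch below builds the request descriptions as plain objects, the way the two Facebook Graph API nodes are configured; the API version string and `igUserId` value are placeholders, so check the current Graph API changelog before hard-coding a version.

```javascript
// Request builders for the two Graph API calls behind nodes 6a and 6c.
// No network calls here; these only show the endpoint and parameter shape.
const GRAPH = 'https://graph.facebook.com/v19.0'; // version is a placeholder

function buildContainerRequest(igUserId, imageUrl, caption) {
  return {
    method: 'POST',
    url: `${GRAPH}/${igUserId}/media`,
    params: { image_url: imageUrl, caption },
  };
}

function buildPublishRequest(igUserId, creationId) {
  return {
    method: 'POST',
    url: `${GRAPH}/${igUserId}/media_publish`,
    params: { creation_id: creationId },
  };
}
```

The `creation_id` passed to the publish call is the container ID returned by the first call, which is why the Wait node sits between them: Instagram needs time to fetch and process the image before the container can be published.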
by NanaB
## What it does

This n8n workflow creates a cutting-edge, multi-modal AI Memory Assistant designed to capture, understand, and intelligently recall your personal or business information from diverse sources. It automatically processes voice notes, images, documents (like PDFs), and text messages sent via Telegram. Leveraging GPT-4o for advanced AI processing (including visual analysis, document parsing, transcription, and semantic understanding) and MongoDB Atlas Vector Search for persistent and lightning-fast recall, this assistant acts as an external brain. Furthermore, it integrates with Gmail, allowing the AI to send and search emails as part of its memory and response capabilities. This end-to-end solution blueprint provides a powerful starting point for personal knowledge management and intelligent automation.

## How it works

**1. Multi-Modal Input Ingestion**

Your memories begin when you send a voice note, an image, a document (e.g., PDF), or a text message to your Telegram bot. The workflow immediately identifies the input type.

**2. Advanced AI Content Processing**

Each input type undergoes specialized AI processing by GPT-4o:

- Voice notes are transcribed into text using OpenAI Whisper.
- Images are visually analyzed by GPT-4o Vision, generating detailed textual descriptions.
- Documents (PDFs) are processed for text extraction, leveraging GPT-4o for robust parsing and understanding of content and structure. Unsupported document types are gracefully handled with a user notification.
- Text messages are directly forwarded for further processing.

This phase transforms all disparate input formats into a unified, rich textual representation.

**3. Intelligent Memory Chunking & Vectorization**

The processed content (transcriptions, image descriptions, extracted document text, or direct text) is then fed back into GPT-4o. The AI intelligently chunks the information into smaller, semantically coherent pieces, extracts relevant keywords and tags, and generates concise summaries. Each of these enhanced memory chunks is then converted into a high-dimensional vector embedding using OpenAI Embeddings.

**4. Persistent Storage & Recall (MongoDB Atlas Vector Search)**

These vector embeddings, along with their original content, metadata, and tags, are stored in your MongoDB Atlas cluster, which is configured with Atlas Vector Search. This allows for highly efficient and semantically relevant retrieval of memories based on user queries, forming the core of your "smart recall" system.

**5. AI Agent & External Tools (Gmail Integration)**

When you ask a question, the AI Agent (powered by GPT-4o) acts as the central intelligence. It uses the MongoDB Chat Memory to maintain conversational context and, crucially, queries the MongoDB Atlas Vector Search store to retrieve relevant past memories. The agent also has access to Gmail tools, enabling it to send emails on your behalf or search your past emails to find information or context that might not be in your personal memory store.

**6. Smart Response Generation & Delivery**

Finally, using the retrieved context from MongoDB and the conversational history, GPT-4o synthesizes a concise, accurate, and contextually aware answer. This response is then delivered back to you via your Telegram bot.

## How to set it up (~20 minutes)

Getting this powerful workflow running requires a few key configurations and external service dependencies.

1. **Telegram Bot Setup:** Use BotFather in Telegram to create a new bot and obtain its API Token. In your n8n instance, add a new Telegram API credential, give it a clear name (e.g., "My AI Memory Bot"), and paste your API Token.
2. **OpenAI API Key Setup:** Log in to your OpenAI account and generate a new API key. Within n8n, create a new OpenAI API credential.
Name it appropriately (e.g., "My OpenAI Key for GPT-4o") and paste your API key. This credential is used by the OpenAI Chat Model (GPT-4o for processing, chunking, and RAG), Analyze Image, and Transcribe Audio nodes.

MongoDB Atlas Setup: If you don't have one, create a free-tier or paid cluster on MongoDB Atlas. Create a database and a collection within your cluster to store your memory chunks and their vector embeddings. Crucially, configure an Atlas Vector Search index on your chosen collection. This index goes on the field containing your embeddings (e.g., an embedding field of type knnVector); refer to the MongoDB Atlas documentation for detailed instructions on creating vector search indexes. In n8n, add a new MongoDB credential: provide your MongoDB Atlas connection string (ensure it includes your username, password, and database name) and give it a clear name (e.g., "My Atlas DB"). This credential is used by the MongoDB Chat Memory node and by any custom HTTP requests you might use for Atlas Vector Search insertion/querying.

Gmail Account Setup: Go to the Google Cloud Console, enable the Gmail API for your project, and configure your OAuth consent screen. Create an OAuth 2.0 Client ID for a Desktop app (or Web application, depending on your n8n setup and redirect URI) and download the JSON credentials. In n8n, add a new Gmail OAuth2 API credential, configure it using your Google Client ID and Client Secret, and authenticate with your Gmail account, ensuring it has sufficient permissions to send and search emails.

External API Services: If your Extract from File node relies on an external service for robust PDF/DOCX text extraction, ensure you have an API key and the service is operational. The current flow uses ConvertAPI; add the necessary credential (e.g., ConvertAPI) in n8n.
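For the custom HTTP requests mentioned above, recall queries against the collection use Atlas's $vectorSearch aggregation stage. A minimal sketch of the pipeline; the index name (memory_index), embedding field name, and 1536-dimension size are example assumptions to match against your own index definition:

```python
def vector_search_stage(query_embedding, index_name="memory_index",
                        path="embedding", k=5):
    """Atlas $vectorSearch stage; index and field names here are assumptions."""
    return {
        "$vectorSearch": {
            "index": index_name,
            "path": path,
            "queryVector": query_embedding,
            "numCandidates": k * 20,  # oversample candidates, then keep top k
            "limit": k,
        }
    }


# Full pipeline: vector search, then project content, tags, and the score.
pipeline = [
    vector_search_stage([0.0] * 1536),  # 1536 dims: a common OpenAI embedding size
    {"$project": {"content": 1, "tags": 1,
                  "score": {"$meta": "vectorSearchScore"}}},
]
# Running collection.aggregate(pipeline) via a MongoDB driver (or an HTTP
# request to Atlas) returns the k most semantically similar memory chunks.
```

The $vectorSearch stage must be the first stage in the pipeline, and it only works on a collection that has the Atlas Vector Search index described in the setup step.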
How you could enhance it

This workflow offers numerous avenues for advanced customization and expansion:

- Expanded document type support: Enhance the "Document Processing" section to handle a wider range of document types beyond PDFs (e.g., .docx, .xlsx, .pptx, Markdown, CSV) by integrating additional conversion APIs or specialized parsing libraries (e.g., a custom Code node, or dedicated third-party services like Apache Tika or Unstructured.io).
- Fine-tuned memory chunks & metadata: Implement more sophisticated chunking strategies for very long documents, perhaps based on semantic breaks or document structure (headings, sections), to improve recall accuracy. Add more metadata fields (e.g., original author, document date, custom categories) to your MongoDB entries for richer filtering and context.
- Advanced AI prompting: Allow users to dynamically set parameters for their memory inputs (e.g., "This is a high-priority meeting note," "This image contains sensitive information"), which can influence how GPT-4o processes, tags, and stores the memory, or how it is retrieved later.
- n8n tool expansion for proactive actions: Significantly expand the AI Agent's capabilities by giving it access to a wider range of n8n tools, moving beyond information retrieval and email.
- External data source integration (APIs): Expand the AI Agent's tools to query other external APIs (e.g., weather, stock prices, news, CRM systems) so it can provide real-time information relevant to your memories.

Getting Assistance & More Resources

Need assistance setting this up, adapting it to a unique use case, or exploring more advanced customizations? Don't hesitate to reach out! You can contact me directly at nanabrownsnr@gmail.com. Also, feel free to check out my YouTube channel, where I discuss other n8n templates as well as innovation and automation solutions.
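The structure-aware chunking enhancement above can be prototyped in a Code node. A minimal sketch, assuming Markdown-style headings mark the semantic breaks, with a fixed-size fallback for oversized sections:

```python
import re


def chunk_by_headings(text: str, max_chars: int = 1000):
    """Split text at Markdown headings, then cap each chunk's size."""
    # Zero-width split: each heading starts a new section but stays attached
    # to its own body text.
    sections = re.split(r"(?m)^(?=#{1,6} )", text)
    chunks = []
    for section in sections:
        section = section.strip()
        if not section:
            continue
        # Fall back to fixed-size splits for sections longer than max_chars.
        for start in range(0, len(section), max_chars):
            chunks.append(section[start:start + max_chars])
    return chunks
```

Keeping each heading with its body text tends to make the resulting embeddings more self-contained, which helps recall accuracy.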
by Adnan Tariq
CyberScan: AI-Powered Vulnerability Scanner with Nessus, OpenAI, and Google Sheets

Who's it for

Security teams, DevOps engineers, vulnerability analysts, and automation builders who want to eliminate repetitive Nessus scan parsing, AI-based risk triage, and manual reporting. Designed for organizations following NIST CSF or CISA KEV compliance guidelines.

How it works / What it does

- Runs scheduled or manual scans via the Nessus API.
- Processes scan results and extracts asset and vulnerability data.
- Uses a custom AI-based risk metric (LEV) to triage findings into: expert review, self-healing, or monitoring.
- Automatically sends email alerts for critical CVEs.
- Exports daily summaries to Google Sheets (or your own BI system).
- Maps to NIST CSF (Identify, Protect, Detect, Respond, Recover).

How to set up

- Nessus: Add your Nessus API credentials and instance URL.
- Google Sheets: Authenticate your Google account.
- OpenAI / LLM: Use your API key if adding LLM triage or rewrite prompts.
- Email: Update SMTP credentials and the alert recipient address.
- Set your targets: Adjust asset ranges or scan UUIDs as needed.

All setup steps are explained in sticky notes inside the workflow.

Requirements

- Nessus Essentials (free) or Nessus Pro with API access.
- An SMTP service (e.g., Gmail, Mailgun, SendGrid).
- Google Sheets OAuth2 credentials.
- Optional: OpenAI or another LLM provider for LEV scoring and CVE insights.

How to customize the workflow

- Swap Google Sheets for Airtable, Supabase, or PostgreSQL.
- Change the scan logic or asset list to fit your internal network scope.
- Adjust the AI scoring logic to match internal CVSS thresholds or KEV tags.
- Expand the alerting logic to include Slack, Discord, or webhook triggers.

No sensitive data is included. All credentials and sheet links are placeholders.
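The LEV-based triage described above boils down to routing each finding into one of three buckets. An illustrative Python sketch; the 0.8/0.4 thresholds and the KEV override are assumptions to adapt to your own risk policy, not the template's actual scoring logic:

```python
def triage(lev_score: float, kev_listed: bool) -> str:
    """Route a finding by LEV score; thresholds here are illustrative only."""
    if kev_listed or lev_score >= 0.8:
        return "expert_review"   # critical: human analyst + email alert
    if lev_score >= 0.4:
        return "self_healing"    # known fix: trigger automated remediation
    return "monitoring"          # low risk: log to the daily summary sheet


# Example findings with hypothetical LEV scores and KEV flags.
findings = [
    {"cve": "CVE-2024-0001", "lev": 0.91, "kev": False},
    {"cve": "CVE-2023-9999", "lev": 0.55, "kev": False},
    {"cve": "CVE-2022-1234", "lev": 0.10, "kev": True},
]
routed = {f["cve"]: triage(f["lev"], f["kev"]) for f in findings}
```

Treating KEV-listed CVEs as an automatic escalation, regardless of score, mirrors the CISA KEV guidance the workflow is designed around.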
by Hardikkumar
This workflow automates the entire process of creating SEO-optimized meta titles and descriptions. It analyzes your webpage, spies on top-ranking competitors for the same keywords, and then uses a multi-step AI process to generate compelling, length-constrained meta tags.

How It Works

This workflow operates in three phases for each URL you provide:

Phase 1: Self-Analysis. When you add a URL to a Google Sheet with the status "New", the workflow scrapes your page's content. The first AI then performs a deep analysis to identify the page's primary keyword, semantic keyword cluster, search intent, and target audience.

Phase 2: Competitor Intelligence. The workflow takes your primary keyword and performs a live Google search. A custom code block intelligently filters the search results to identify true competitors, and a second AI analyzes their meta titles and descriptions to find common patterns and successful strategies.

Phase 3: Master Generation & Update. The final AI synthesizes all gathered intelligence, both your page's data and the competitors' winning patterns, to generate a new, optimized meta title and description. It then writes this new data back to your Google Sheet and updates the status to "Generated".

Setup Instructions

You should be able to set up this workflow in about 10-15 minutes.

Prerequisites

You will need the following accounts and API keys:

- A Google account with access to Google Sheets.
- A Google AI / Gemini API key.
- A SerpApi key for Google search data.
- A ScrapingDog API key for reliable website scraping.

Configuration

Google Sheet Setup: Create a new Google Sheet with the following columns: URL, Status, Current Meta Title, Current Meta Description, Generated Meta Title, Generated Meta Description, and Ranking Factor.

Add Credentials:

- Google Sheets nodes: Connect your Google account credentials to the Google Sheets Trigger & Google Sheets nodes.
- Google Gemini nodes: Add your Google Gemini API key to the credentials for all three Google Gemini Chat Model nodes.
- Scrape Website node: In this HTTP Request node, go to Query Parameters and replace <your-api-key> with your ScrapingDog API key.
- Google SERP node: In this HTTP Request node, go to Query Parameters and replace <your-api-key> with your SerpApi API key.

Configure Google Sheets Nodes: Copy the Document ID from your Google Sheet's URL and paste it into the "Document ID" field of the following nodes: Google Sheets Trigger, Get row(s) in sheet1, and Update row in sheet. In each of those nodes, select the correct sheet name from the "Sheet Name" dropdown.

Activate Workflow: Save and activate the workflow. To run it, simply add a new row to your Google Sheet containing the URL you want to process and set the "Status" column to New.
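The "custom code block" in Phase 2 is essentially a domain filter over the SERP results. A minimal sketch of one way to write it, assuming SerpApi-style organic results with a link field; the excluded-domain list is an example, not the template's actual logic:

```python
from urllib.parse import urlparse

# Sites that rank everywhere but are not meta-tag competitors (example list).
EXCLUDED = {"youtube.com", "wikipedia.org", "reddit.com", "amazon.com"}


def filter_competitors(serp_results, own_url, limit=5):
    """Keep organic results that are neither our own domain nor excluded sites.

    serp_results: list of dicts shaped like SerpApi organic_results entries,
    each with at least a "link" key.
    """
    own_domain = urlparse(own_url).netloc.removeprefix("www.")
    competitors = []
    for result in serp_results:
        domain = urlparse(result["link"]).netloc.removeprefix("www.")
        if domain == own_domain or domain in EXCLUDED:
            continue
        competitors.append(result)
        if len(competitors) == limit:
            break
    return competitors
```

In the workflow, the filtered list feeds the second AI, which looks only at the surviving competitors' titles and descriptions.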
by Jez
Summary

This n8n workflow implements an AI-powered agent that intelligently uses the Brave Search API (via an external MCP service like Smithery) to perform both web and local searches. It understands natural language queries, selects the appropriate search tool, and exposes this enhanced capability as a single, callable MCP tool.

Key Features

- Intelligent tool selection: The AI agent decides between Brave's web search and local search tools based on user query context.
- MCP microservice: Exposes complex search logic as a single, easy-to-integrate MCP tool (call_brave_search_agent).
- Powered by Google Gemini: Utilizes the gemini-2.5-flash-preview-05-20 LLM for advanced reasoning.
- Conversational memory: Remembers context within a single execution flow.
- Customizable system prompt: Tailor the AI's behavior and responses.
- Modular design: Connects to external Brave Search MCP tools (e.g., from Smithery).

Benefits

- Simplified integration: Easily add advanced, AI-driven search capabilities to other applications or agent systems.
- Reduced client-side LLM costs: Offloads complex prompting and tool orchestration to n8n, minimizing token usage for client-side LLMs.
- Centralized logic: Manage and update search strategies and AI behavior in one place.
- Extensible: Can be adapted to use other search tools or incorporate more complex decision-making.

Nodes Used

- @n8n/n8n-nodes-langchain.mcpTrigger (MCP Server Trigger)
- @n8n/n8n-nodes-langchain.toolWorkflow
- @n8n/n8n-nodes-langchain.agent (AI Agent)
- @n8n/n8n-nodes-langchain.lmChatGoogleGemini (Google Gemini Chat Model)
- n8n-nodes-mcp.mcpClientTool (MCP Client Tool, for Brave Search)
- @n8n/n8n-nodes-langchain.memoryBufferWindow (Simple Memory)
- n8n-nodes-base.executeWorkflowTrigger (Workflow Start, for direct execution/testing)

Prerequisites

- An active n8n instance (v1.22.5+ recommended).
- A Google AI API key for using the Gemini LLM.
- Access to an external MCP service that provides Brave Search tools (e.g., a Smithery account configured with their Brave Search MCP). This includes the MCP endpoint URL and any necessary authentication (such as an API key for Smithery).

Setup Instructions

1. Import workflow: Download the Brave_Search_Smithery_AI_Agent_MCP_Server.json file and import it into your n8n instance.
2. Configure the LLM credential: Locate the 'Google Gemini Chat Model' node. Select or create an n8n credential for "Google Palm API" (used for Gemini), providing your Google AI API key.
3. Configure the Brave Search MCP credential: Locate the 'brave_web_search' and 'brave_local_search' (MCP Client) nodes. Create a new n8n credential of type "MCP Client HTTP API": name it (e.g., Smithery Brave Search Access), set the Base URL to your Brave Search MCP endpoint from your provider (e.g., https://server.smithery.ai/@YOUR_PROFILE/brave-search/mcp), and, if your MCP provider requires an API key, select "Header Auth" and add the header name (e.g., X-API-Key) and value provided by your MCP service. Assign this credential to both the 'brave_web_search' and 'brave_local_search' nodes.
4. Note the MCP trigger path: Open the 'Brave Search MCP Server Trigger' node and copy its unique 'Path' (e.g., /cc8cc827-3e72-4029-8a9d-76519d1c136d). Combine this with your n8n instance's base URL to get the full endpoint URL for clients.

How to Use

This workflow exposes an MCP tool named call_brave_search_agent. External clients can call this tool via the URL derived from the 'Brave Search MCP Server Trigger'.

Example client MCP configuration (e.g., for Roo Code):

"n8n-brave-search-agent": {
  "url": "https://YOUR_N8N_INSTANCE/mcp/cc8cc827-3e72-4029-8a9d-76519d1c136d/sse",
  "alwaysAllow": [ "call_brave_search_agent" ]
}

Replace YOUR_N8N_INSTANCE with your n8n's public URL and ensure the path matches your trigger node.
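The trigger endpoint is plain HTTP, so any client can exercise it directly. A minimal Python sketch that builds the POST request; the instance placeholder and trigger path are the examples from the setup notes, and the body shape matches the example request:

```python
import json
import urllib.request

N8N_BASE = "https://YOUR_N8N_INSTANCE"  # placeholder, as in the docs above
TRIGGER_PATH = "/mcp/cc8cc827-3e72-4029-8a9d-76519d1c136d"  # example path


def build_request(query: str) -> urllib.request.Request:
    """Build the POST request that calls the call_brave_search_agent tool."""
    body = json.dumps({"input": {"query": query}}).encode()
    return urllib.request.Request(
        N8N_BASE + TRIGGER_PATH,
        data=body,
        headers={"Content-Type": "application/json"},
    )


# urllib.request.urlopen(build_request("best coffee shops in London"))
# would stream back the agent's summarized search results.
```

Substitute your own instance URL and trigger path before sending; the placeholder host is not resolvable.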
Example Request: Send a POST request to the trigger URL with a JSON body:

{ "input": { "query": "best coffee shops in London" } }

The agent will stream its response, including the summarized search results.

Customization

- AI behavior: Modify the system prompt within the 'Brave Search AI Agent' node to fine-tune its decision-making, response style, or how it uses the search tools.
- LLM choice: Replace the 'Google Gemini Chat Model' node with any other compatible LLM node supported by n8n.
- Search tools: Adapt the workflow to use different or additional search tools by modifying the MCP Client nodes and updating the AI Agent's system prompt and tool definitions.

Further Information

GitHub Repository: https://github.com/jezweb/n8n

The workflow includes extensive sticky notes for in-canvas documentation.

Author: Jeremy Dawes (Jezweb)