by Jadai kongolo
# 🚀 n8n Local AI Agentic RAG Template

Author: Jadai kongolo

## What is this?

This template provides an entirely local implementation of an Agentic RAG (Retrieval Augmented Generation) system in n8n that can be easily extended for your specific use case and knowledge base. Unlike standard RAG, which only performs simple lookups, this agent can reason about your knowledge base, self-improve retrieval, and dynamically switch between different tools based on the specific question.

## Why Agentic RAG?

Standard RAG has significant limitations:

- Poor analysis of numerical/tabular data
- Missing context due to document chunking
- Inability to connect information across documents
- No dynamic tool selection based on question type

What makes this template powerful:

- **Intelligent tool selection**: Switches between RAG lookups, SQL queries, or full document retrieval based on the question
- **Complete document context**: Accesses entire documents when needed instead of just chunks
- **Accurate numerical analysis**: Uses SQL for precise calculations on spreadsheet/tabular data
- **Cross-document insights**: Connects information across your entire knowledge base
- **Multi-file processing**: Handles multiple documents in a single workflow loop
- **Efficient storage**: Uses JSONB in Supabase to store tabular data without creating a new table for each CSV (see the sketch at the end of this description)

## Getting Started

1. Run the table creation nodes first to set up your database tables in Supabase.
2. Upload your documents to the folder on your computer that is mounted to /data/shared in the n8n container. By default this is the "shared" folder in the local AI package.
3. The agent will process them automatically (chunking text, storing tabular data in Supabase).
4. Start asking questions that leverage the agent's multiple reasoning approaches.

## Customization

This template provides a solid foundation that you can extend by:

- Tuning the system prompt for your specific use case
- Adding document metadata like summaries
- Implementing more advanced RAG techniques
- Optimizing for larger knowledge bases

The non-local ("cloud") version of this Agentic RAG agent can be found here.
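To make the JSONB storage idea above concrete, here is a minimal sketch of a Code node that could sit before a Supabase/Postgres insert, collapsing a parsed CSV into a single JSONB-ready record. The field names and upstream node shape are illustrative assumptions, not the template's actual schema.

```javascript
// Hypothetical sketch: collapse the parsed CSV rows coming out of an
// "Extract from File" style node into one JSONB-ready record, so every
// spreadsheet lands in the same table regardless of its columns.
const rows = $input.all().map(item => item.json);

return [{
  json: {
    file_name: 'example.csv',            // assumed metadata field
    schema: Object.keys(rows[0] ?? {}),  // column names, queryable later
    row_data: rows,                      // the whole sheet, stored as JSONB
  },
}];
```

Because every file shares one table, the agent's SQL tool can aggregate over `row_data` with Postgres JSONB operators instead of needing a new table per upload.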
by Atik
Automate video transcription and Q&A with async VLM processing that scales from short clips to long recordings.

## What this workflow does

- Monitors Google Drive for new files in a specific folder and grabs the file ID on creation
- Automatically downloads the binary to hand off for processing
- Sends the video to VLM Run for async transcription with a callback URL that posts results back to n8n
- Receives the transcript JSON via Webhook and appends a row in Google Sheets with the video identifier and transcript data
- Enables chat Q&A through the Chat Trigger + AI Agent. The agent fetches relevant rows from Sheets and answers only from those segments using the connected chat model

## Setup

**Prerequisites:** Google Drive and Google Sheets accounts, VLM Run API credentials, OpenAI (or another supported) chat model credentials, and an n8n instance.

Install the verified VLM Run node by searching for VLM Run in the nodes list, then click Install. You can also confirm on npm if needed. After install, it integrates directly for robust async transcription.

### Quick Setup

1. **Google Drive folder watch**: Add a Google Drive Trigger and choose Specific folder. Set polling to every minute and the event to File Created. Connect Drive OAuth2.
2. **Download the new file**: Add a Google Drive node with the Download operation. Map {{$json.id}} and save the binary as data.
3. **Async transcription with VLM Run**: Add a VLM Run node. Operation: video. Domain: video.transcription. Enable Process Asynchronously and set the Callback URL to your Webhook path (for example /transcript-video). Add your VLM Run API key.
4. **Webhook to receive results**: Add a Webhook node with method POST and path /transcript-video. This is the endpoint VLM Run calls when the job completes. Use When Last Node Finishes, or respond via a Respond node if you prefer.
5. **Append to Google Sheets**: Add a Google Sheets node with the Append operation. Point to your spreadsheet and sheet. Map Video Name → the video identifier from the webhook payload, and Data → the transcript text or JSON from the webhook payload. Connect Google Sheets OAuth2. (A sketch of this mapping follows this description.)
6. **Chat entry point and Agent**: Add a Chat Trigger to receive user questions. Add an AI Agent and connect a Chat Model (for example OpenAI Chat Model) and the Google Sheets Tool to read relevant rows. In the Agent system message, instruct it to: use the Sheets tool to fetch transcript rows matching the question; answer only from those rows; cite or reference row context as needed.
7. **Test and activate**: Upload a sample video to the watched Drive folder. Wait for the callback to populate your sheet. Ask a question through the Chat Trigger and confirm the agent quotes only from the retrieved rows. Activate your template and let it automate the task.

## How to take this further

- **Team memory:** Ask "What did we decide on pricing last week?" and get the exact clip and answer.
- **Study helper:** Drop classes in, then ask for key points or formulas by topic.
- **Customer FAQ builder:** Turn real support calls into answers your team can reuse.
- **Podcast highlights:** Find quotes, tips, and standout moments from each episode.
- **Meeting catch-up:** Get decisions and action items from any recording, fast.
- **Marketing snippets:** Pull short, social-ready lines from long demos or webinars.
- **Team learning hub:** Grow a searchable video brain that remembers everything.

This workflow uses the VLM Run node for scalable, async video transcription and the AI Agent for grounded Q&A from Sheets, giving you a durable pipeline from upload to searchable answers with minimal upkeep.
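As a rough illustration of step 5, here is what a small Code node between the Webhook and the Sheets append might look like. VLM Run's callback payload shape is an assumption here; inspect a real execution and adjust the paths before relying on this mapping.

```javascript
// Hypothetical mapping node: pull the video identifier and transcript out
// of the callback body so the Google Sheets node can map them directly.
const body = $json.body ?? $json;

return [{
  json: {
    videoName: body.id ?? body.file_id ?? 'unknown',   // job/video identifier (assumed fields)
    transcript: typeof body.response === 'string'
      ? body.response
      : JSON.stringify(body.response ?? body),         // transcript text or raw JSON
  },
}];
```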
by Anna Bui
Automatically analyze n8n workflow errors with AI, create support tickets, and send detailed Slack notifications. Perfect for development teams and businesses that need intelligent error handling with automated support workflows. Never miss critical workflow failures again!

## How it works

1. **Error Trigger** captures any workflow failure in your n8n instance
2. **AI Debugger** analyzes the error using structured reasoning to identify root causes
3. **Clean Data** transforms the AI analysis into organized, actionable information (see the sketch at the end of this description)
4. **Create Support Ticket** automatically generates a detailed ticket in FreshDesk
5. **Merge** combines ticket data with the AI analysis for comprehensive reporting
6. **Generate Slack Alert** creates rich, formatted notifications with all context
7. **Send to Team** delivers instant alerts to your designated Slack channel

## How to use

- Replace the FreshDesk credentials with your helpdesk system's API
- Configure the Slack channel for your team notifications
- Customize the AI analysis prompts for your specific error types
- Set it up as the global error handler for all your critical workflows

## Requirements

- FreshDesk account (or compatible ticketing system)
- Slack workspace with bot permissions
- OpenAI API access for AI analysis
- n8n Cloud or self-hosted with AI nodes enabled

## Good to know

- OpenAI API calls cost approximately $0.01-0.03 per error analysis
- Works with any ticketing system that supports a REST API
- Can be triggered by webhooks from external monitoring tools
- Slack messages use rich formatting for mobile-friendly alerts

Need help? Join the Discord or ask in the Forum! Happy monitoring!
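For orientation, a minimal sketch of what the "Clean Data" step could look like as a Code node. The Error Trigger fields are standard n8n output, but the analysis field names (root_cause, severity, suggested_fix) are assumptions about the AI debugger's schema, not the template's exact structure.

```javascript
// Illustrative "Clean Data" node: flatten the AI debugger's structured
// output into flat fields the FreshDesk and Slack nodes can map directly.
const analysis = $json.output ?? $json;         // LangChain nodes often emit under "output"
const error = $('Error Trigger').first().json;  // n8n Error Trigger payload

return [{
  json: {
    workflowName: error.workflow?.name ?? 'unknown',
    failedNode: error.execution?.lastNodeExecuted ?? 'unknown',
    executionUrl: error.execution?.url ?? '',
    rootCause: analysis.root_cause ?? 'not identified', // assumed field
    severity: analysis.severity ?? 'medium',            // assumed field
    suggestedFix: analysis.suggested_fix ?? '',         // assumed field
  },
}];
```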
by David Olusola
# 🎥 Auto-Save Zoom Recordings to Google Drive + Log Meetings in Airtable

This workflow automatically saves Zoom meeting recordings to Google Drive and logs all the important details into Airtable for easy tracking. Perfect for teams that want a searchable meeting archive.

## ⚙️ How It Works

1. **Zoom Recording Webhook**: Listens for recording.completed events from Zoom and captures metadata (Meeting ID, Topic, Host, File Type, File Size, etc.).
2. **Normalize Recording Data**: A Code node extracts and formats the Zoom payload into clean JSON (see the sketch at the end of this description).
3. **Download Recording**: Uses an HTTP Request to download the recording file.
4. **Upload to Google Drive**: Saves the recording into your chosen Google Drive folder and returns the file ID and share link.
5. **Log Result**: Combines the Zoom metadata with the Google Drive file info.
6. **Save to Airtable**: Logs all details into your Meeting Logs table: Meeting ID, Topic, Host, File Type, File Size, Google Drive Saved (Yes/No), Drive Link, Timestamp.

## 🛠️ Setup Steps

### 1. Zoom

- Create a Zoom App → enable the recording.completed event.
- Add the workflow's Webhook URL as your Zoom Event Subscription endpoint.

### 2. Google Drive

- Connect OAuth in n8n.
- Replace YOUR_FOLDER_ID with your destination Drive folder.

### 3. Airtable

- Create a base with a table named Meeting Logs.
- Add columns: Meeting ID, Topic, Host, File Type, File Size, Google Drive Saved, Drive Link, Timestamp.
- Replace YOUR_AIRTABLE_BASE_ID in the node.

## 📊 Example Airtable Output

| Meeting ID | Topic | Host | File Type | File Size | Google Drive Saved | Drive Link | Timestamp |
|------------|-------|------|-----------|-----------|--------------------|------------|---------------------|
| 987654321 | Team Sync | host@email.com | MP4 | 104 MB | Yes | 🔗 Link | 2025-08-30 15:02:10 |

⚡ With this workflow, every Zoom recording is safely archived in Google Drive and logged in Airtable for quick search, reporting, and compliance tracking.
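A minimal sketch of the "Normalize Recording Data" step, assuming Zoom's standard recording.completed webhook shape (files nested under payload.object.recording_files). The exact fields you keep may differ from the template's output.

```javascript
// Pick the MP4 recording (or the first file) and flatten the metadata
// the later Drive and Airtable nodes need.
const obj = $json.body?.payload?.object ?? {};
const files = obj.recording_files ?? [];
const file = files.find(f => f.file_type === 'MP4') ?? files[0] ?? {};

return [{
  json: {
    meetingId: obj.id,
    topic: obj.topic,
    host: obj.host_email,
    fileType: file.file_type,
    fileSize: file.file_size
      ? Math.round(file.file_size / (1024 * 1024)) + ' MB'
      : null,
    downloadUrl: file.download_url,   // used by the HTTP Request download step
  },
}];
```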
by Shayan Ali Bakhsh
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## Try It Out!

Automatically generate a LinkedIn carousel and upload it to LinkedIn.

**Use case:** LinkedIn content creation, specifically carousels, but it could be adjusted for many other kinds of content as well.

## How it works

1. It runs automatically every day at 6:00 AM.
2. Gets the latest news from TechRadar.
3. Parses it into readable JSON (see the sketch at the end of this description).
4. AI decides which news item resonates with your profile.
5. The title and description of that news item are used to generate the final LinkedIn carousel content. This step can also be triggered by the Form trigger.
6. After carousel generation, the content is handed to Post Nitro to create images; Post Nitro returns a PDF file.
7. The PDF file is uploaded to LinkedIn and the returned file ID is used in the next step.
8. Finally, the post description is created and the post is published to LinkedIn.

## How to use

- It runs every day at 6:00 AM automatically; just make it live.
- Or submit the form with a correct title and description (there is no input validation, so make sure they are correct 😅).

## Requirements

- Install the Post Nitro community node: @postnitro/n8n-nodes-postnitro-ai
- The following API keys are needed to make it work:
  - Google Gemini (for Gemini 2.5 Flash usage) - Docs: Google Gemini key
  - Post Nitro credentials (API key + template ID + brand ID) - Docs: Post Nitro
  - LinkedIn API key - Docs: LinkedIn API

## Need Help?

Message me on LinkedIn. Happy automation!
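To illustrate step 3 above, a small Code node could reshape the RSS items like this. Field names follow n8n's RSS Read node output (title, link, contentSnippet, isoDate); the trimming is an illustrative choice, not the template's exact logic.

```javascript
// Reduce each TechRadar RSS item to the fields the AI selection step needs.
return $input.all().map(item => ({
  json: {
    title: item.json.title,
    description: (item.json.contentSnippet ?? item.json.content ?? '').slice(0, 500),
    link: item.json.link,
    published: item.json.isoDate,
  },
}));
```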
by Anurag Patil
# Geekhack Discord Updater

## How It Works

This n8n workflow automatically monitors GeekHack forum RSS feeds every hour for new keyboard posts in the Interest Checks and Group Buys sections. When it finds a new thread (not a reply), it:

1. **Monitors RSS Feeds**: Checks two GeekHack RSS feeds for new posts (50 items each)
2. **Filters New Threads**: Removes reply posts by checking for the "Re:" prefix in titles
3. **Prevents Duplicates**: Queries a PostgreSQL database to skip already-processed threads
4. **Scrapes Content**: Fetches the full thread page and extracts the original post
5. **Extracts Images**: Uses a regex to find all images in the post content
6. **Creates Discord Embed**: Formats the post data into a rich Discord embed with up to 4 images
7. **Sends to Multiple Webhooks**: Retrieves all webhook URLs from the database and sends to each one
8. **Logs Processing**: Records the thread as processed to prevent duplicates

(A sketch of the filter and image-extraction logic appears at the end of this description.)

The workflow includes a webhook management system with a web form to add/remove Discord webhooks dynamically, allowing you to send notifications to multiple Discord servers or channels.

## Steps to Set Up

### Prerequisites

- n8n instance running
- PostgreSQL database
- Discord webhook URL(s)

### 1. Database Setup

Create the PostgreSQL tables.

Processed threads table:

```sql
CREATE TABLE processed_threads (
  topic_id VARCHAR PRIMARY KEY,
  title TEXT,
  processed_at TIMESTAMP DEFAULT NOW()
);
```

Webhooks table:

```sql
CREATE TABLE webhooks (
  id SERIAL PRIMARY KEY,
  url TEXT NOT NULL,
  created_at TIMESTAMP DEFAULT NOW()
);
```

### 2. n8n Configuration

**Import Workflow**

1. Copy the workflow JSON
2. Go to n8n → Workflows → Import from JSON
3. Paste the JSON and import

**Configure Credentials**

- PostgreSQL: Create a new PostgreSQL credential with your database connection details
- All PostgreSQL nodes should use the same credential

### 3. Node Configuration

**Schedule Trigger**

- Already configured for 1-hour intervals
- Modify if different timing is needed

**PostgreSQL Nodes**

Ensure all PostgreSQL nodes use your PostgreSQL credential: "Check if Processed", "Update entry", "Insert rows in a table", "Select rows from a table". The database schema should be "public" and the table names "processed_threads" and "webhooks".

**RSS Feed Limits**

- Both RSS feeds are set to limit=50 items
- Adjust if you need more/fewer items per check

### 4. Webhook Management

**Adding Webhooks via the Web Form**

- The workflow creates a form trigger for adding webhooks
- Access the form URL from the "On form submission" node
- Submit Discord webhook URLs through the form
- Webhooks are automatically stored in the database

**Manual Webhook Addition**

Alternatively, insert webhooks directly into the database:

```sql
INSERT INTO webhooks (url) VALUES ('https://discord.com/api/webhooks/YOUR_WEBHOOK_URL');
```

### 5. Testing

**Test the Main Workflow**

1. Ensure you have at least one webhook in the database
2. Activate the workflow
3. Use "Execute Workflow" to test manually
4. Check the Discord channels for test messages

**Test the Webhook Form**

1. Get the form URL from the "On form submission" node
2. Submit a test webhook URL
3. Verify it appears in the webhooks table

### 6. Monitoring

- Check the execution history for errors
- Monitor both database tables for entries
- Verify all registered webhooks receive notifications
- Adjust the schedule timing if needed

### 7. Managing Webhooks

- Use the web form to add new webhook URLs
- Remove webhooks by deleting from the database:

```sql
DELETE FROM webhooks WHERE url = 'webhook_url_to_remove';
```

The workflow will now automatically post new GeekHack threads to all registered Discord webhooks every hour, with the ability to dynamically manage webhook destinations through the web form interface.
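An illustrative sketch of the "Re:" filter and image extraction described above, written as a Code node. The topic-id pattern assumes GeekHack's ?topic=12345 URL format, and the real workflow scrapes the full thread page rather than the feed body.

```javascript
// Drop reply posts, then pull image URLs and the topic id from each item.
const threads = $input.all().filter(item => !(item.json.title ?? '').startsWith('Re:'));

return threads.map(item => {
  const html = item.json.content ?? '';
  const images = [...html.matchAll(/<img[^>]+src="([^"]+)"/gi)].map(m => m[1]);
  const topicId = (item.json.link?.match(/topic=(\d+)/) ?? [])[1];
  return {
    json: { ...item.json, topicId, images: images.slice(0, 4) }, // Discord embed cap of 4
  };
});
```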
by Margo Rey
# AI-Powered Email Generation with MadKudu, Sent via Outreach.io

This workflow researches prospects using MadKudu MCP, generates personalized emails with OpenAI, and syncs them to Outreach with automatic sequence enrollment. It's for SDRs and sales teams who want to scale personalized outreach by automating research and email generation while maintaining quality.

## ✨ Who it's for

- Sales Development Representatives (SDRs) doing cold outreach
- Business Development teams needing personalized emails at scale
- RevOps teams wanting to automate prospect research workflows
- Sales teams using Outreach for email sequences

## 🔧 How it works

1. **Input Email & Research**: Enter the prospect's email via the chat trigger. The workflow extracts the email and generates a comprehensive account brief using MadKudu MCP account-brief-instructions.
2. **Deep Research & Email Generation**: The AI Agent performs 6 research steps using MadKudu MCP tools:
   - Account details (hiring, partnerships, tech stack, sales motion, risk)
   - Top users in the account (for name-dropping opportunities)
   - Contact details (role, persona, engagement)
   - Contact web search (personal interests, activities)
   - Contact picture web search (LinkedIn profile insights)
   - Company value prop research
   The AI generates 5 different email angles and selects the best one based on relevance.
3. **Outreach Integration**: Checks whether the prospect exists in Outreach by email. If they exist, it updates a custom field (custom49) with the generated email; if new, it creates the prospect with the email in that custom field. It then enrolls the prospect in the specified email sequence (ID 781) using the mailbox (ID 51), waits 30 seconds, and verifies successful enrollment. (A sketch of the prospect-update request body follows this description.)

## 📋 How to set up

1. **Set your OpenAI credentials**: required for AI research and email generation.
2. **Create an n8n Variable named madkudu_api_key** to store your MadKudu API key: used by the MadKudu MCP tool to access account research capabilities.
3. **Create an n8n Variable named my_company_domain** to store your company domain: used for context in email generation and value prop research.
4. **Create an OAuth2 API credential to connect your Outreach account**: used to create/update prospects and enroll them in sequences.
5. **Configure Outreach settings**: Update the Outreach Mailbox ID (currently set to 51) and the Outreach Sequence ID (currently set to 781) in the "Configure Outreach Settings" node. Adjust the custom field name if you use a field other than custom49.

## 🔑 How to connect Outreach

1. In n8n, add a new OAuth2 API credential and copy the callback URL.
2. Go to the Outreach developer portal.
3. Click "Add" to create a new app.
4. In Feature selection, add Outreach API (OAuth).
5. In API Access (OAuth), set the redirect URI to the n8n callback.
6. Select the following scopes: accounts.read, accounts.write, prospects.read, prospects.write, sequences.read.
7. Save in Outreach, then enter the Outreach Application ID into the n8n Client ID and the Outreach Application Secret into the n8n Client Secret.
8. Save in n8n and connect your Outreach account via OAuth.

## ✅ Requirements

- MadKudu account with access to an API key
- Outreach admin permissions to create an app
- OpenAI API key

## 🛠 How to customize the workflow

- **Change the research steps**: Modify the AI Agent prompt to adjust the 6 research steps or add additional MadKudu MCP tools.
- **Update the Outreach configuration**: Change the Mailbox ID (51) and Sequence ID (781) in the "Configure Outreach Settings" node. Update the custom field mapping if you use a field other than custom49.
- **Modify email generation**: Adjust the prompt guidelines, tone, or angle priorities in the "AI Email Generator" node.
- **Change the trigger**: Swap the chat trigger for a Schedule, Webhook, or integrate with your CRM to automate prospect input.
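For orientation, here is a hedged sketch of the body an HTTP Request node could send to PATCH an existing prospect's custom49 field. The upstream node names and field paths are assumptions; only the JSON:API envelope ({ data: { type, id, attributes } }) follows Outreach's documented request format.

```javascript
// Build the JSON:API body for updating the prospect's custom field.
const prospectId = $('Check Prospect Exists').first().json.data[0].id; // assumed node name
const generatedEmail = $('AI Email Generator').first().json.output;    // assumed field path

return [{
  json: {
    data: {
      type: 'prospect',
      id: prospectId,
      attributes: { custom49: generatedEmail },
    },
  },
}];
```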
by Sandeep Patharkar | ai-solutions.agency
# Build an AI HR Assistant to Screen Resumes and Send Telegram Alerts

A step-by-step guide to creating a fully automated recruitment pipeline that screens candidates, generates interview questions, and notifies your team.

This template provides a complete, step-by-step guide to building an AI-powered HR assistant from scratch in n8n. You will learn how to connect a web form to an intelligent screening agent that reads resumes, evaluates candidates against your job criteria, and prepares unique interview questions for the most promising applicants.

| Services Used | Features |
| :--- | :--- |
| 🤖 OpenAI / LangChain | Uses AI Agents to screen, score, and analyze candidates. |
| 📄 Google Drive & Google Sheets | Stores resumes and manages a database of open positions and applicants. |
| 📥 n8n Form Trigger | Provides a public-facing web form to capture applications. |
| 💬 Telegram | Sends real-time alerts to the hiring team for qualified candidates. |

## How It Works ⚙️

1. 📥 **Application Submitted**: The workflow starts when a candidate fills out the n8n Form Trigger with their details and uploads their CV.
2. 📂 **File Processing**: The CV is automatically uploaded to a specific Google Drive folder for record-keeping, and the Extract from File node reads its text content.
3. 🧠 **AI Screening Agent**: A LangChain Agent analyzes the resume text. It uses the Google Sheets Tool to look up the requirements for the applied role, then scores the candidate and decides if they should be shortlisted.
4. 📊 **Log Results**: The agent's decision (name, score, shortlisted status) is logged in your master "Applications" Google Sheet.
5. ✅ **Qualification Check**: An IF node checks if the candidate was shortlisted.
6. ❓ **AI Question Generator**: If shortlisted, a second LangChain Agent generates three unique, relevant interview questions based on the candidate's resume and the job description.
7. ✍️ **Update Sheet**: The generated questions are added to the candidate's row in the Google Sheet.
8. 🔔 **Notify Team**: A final alert is sent via Telegram to notify the HR team that a new candidate has been qualified and is ready for review.

## 🛠️ How to Build This Workflow

Follow these steps to build the recruitment assistant from a blank canvas.

### Step 1: Set Up the Application Intake

1. Add a Form Trigger node. Configure it with fields for Name, Email, Phone Number, a File Upload for the CV, and a Dropdown for the "Job Role".
2. Connect a Google Drive node. Set the Operation to Upload and connect your credentials. Set it to upload the CV file from the Form Trigger into a specific folder.
3. Add an Extract from File node. Set it to extract text from the PDF CV file provided by the trigger.

### Step 2: Build the AI Screening Agent

1. Add a LangChain Agent node. This will be your main screening agent.
2. In its prompt, instruct the AI to act as a resume screener. Tell it to use the input text from the Extract from File node and the tools you will provide to score and shortlist candidates.
3. Add an OpenAI Chat Model node and connect it to the Agent's Language Model input.
4. Add a Google Sheets Tool node. Point it to a sheet with your open positions and their requirements. Connect this to the Agent's Tool input.
5. Add a Structured Output Parser node and define the JSON structure you want the agent to return (e.g., candidate_name, score, shortlisted); an example schema follows this guide. Connect this to the Agent's Output Parser input.

### Step 3: Log Results & Check for a Match

1. Connect a Google Sheets node after the Agent. Set its operation to Append or Update. Use it to add the structured output from the agent into your main "Applications" sheet.
2. Add an IF node. Set the condition to continue only if the shortlisted field equals "yes".

### Step 4: Generate Interview Questions

1. On the 'true' path of the IF node, add a second LangChain Agent node.
2. Write a prompt telling this agent to generate 3 interview questions based on the candidate's resume and the job requirements.
3. Connect the same OpenAI Model and Google Sheets Tool to this agent.
4. Add another Google Sheets node. Set it to Update the existing row for the candidate, adding the newly generated questions.

## 💬 Need Help or Want to Learn More?

Join my Skool community for n8n + AI automation tutorials, live Q&A sessions, and exclusive workflows:
👉 https://www.skool.com/n8n-ai-automation-champions

Template Author: Sandeep Patharkar
Category: Website Chatbots / AI Automation
Difficulty: Beginner
Estimated Setup Time: ⏱️ 15 minutes
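For Step 2, point 5, one possible JSON schema for the Structured Output Parser, mirroring the fields suggested above. The score range and the yes/no enum are assumptions you can adjust; keeping shortlisted as "yes"/"no" matches the IF node condition in Step 3.

```json
{
  "type": "object",
  "properties": {
    "candidate_name": { "type": "string" },
    "score": { "type": "number", "description": "Assumed 0-100 fit score" },
    "shortlisted": { "type": "string", "enum": ["yes", "no"] }
  },
  "required": ["candidate_name", "score", "shortlisted"]
}
```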
by Pedro Entringer
# 🧠 Export Tawk.to Help Center Articles to Google Drive as Markdown Files

Transform the way you manage your knowledge base with this fully automated n8n workflow! This automation connects directly to your Tawk.to Help Center, reads all published categories and articles, converts them to Markdown (.md) format, and uploads each file to Google Drive.

## 🔹 Key Benefits

- 🚀 **Complete Extraction**: Automatically captures all categories and articles from your Tawk.to Help Center, even without a direct API integration.
- 🧩 **Automatic Conversion**: Transforms HTML content into clean Markdown files — perfect for editing, version control, or migration to another CMS (see the conversion sketch at the end of this description).
- ☁️ **Native Google Drive Integration**: Saves each article with a structured filename, avoids duplicates, and organizes them by category.
- 🔁 **Fully Customizable**: Easily adapt the workflow to export to Notion, GitHub, Dropbox, or any other platform supported by n8n.

## 💡 Ideal Use Cases

- Migrating your Tawk.to Help Center
- Creating automated content backups
- Integrating documentation across multiple systems

## ⚙️ Prerequisites

Before running this workflow, make sure you have:

- An active Tawk.to account with access to your Help Center.
- A Google Drive account (personal or workspace).
- Access to n8n (self-hosted or cloud).

## 🧰 Setup Instructions

1. **Import the Workflow**: Download the JSON file from the provided link or your n8n community instance. In n8n, click Import Workflow and upload the file.
2. **Authenticate Google Drive**: Open the Google Drive node, click Connect, choose your Google account, and allow access.
3. **Configure the Output Folder**: Choose or create a target folder in your Google Drive where the articles will be saved.
4. **Run the Workflow**: Click Execute Workflow. The automation will read all Help Center articles, convert them to Markdown, and save them to your Drive.
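To make the HTML-to-Markdown step concrete, here is a very rough Code node sketch. It is shown only for illustration; the actual workflow may use a dedicated node or a library such as turndown (on self-hosted instances) instead of hand-rolled regex, and the article field names (title, body) are assumptions about the Tawk.to data shape.

```javascript
// Naive HTML-to-Markdown conversion covering the most common tags.
function htmlToMarkdown(html) {
  return html
    .replace(/<h([1-6])[^>]*>(.*?)<\/h\1>/gis, (_, n, t) => '#'.repeat(+n) + ' ' + t + '\n\n')
    .replace(/<(strong|b)>(.*?)<\/\1>/gis, '**$2**')
    .replace(/<(em|i)>(.*?)<\/\1>/gis, '*$2*')
    .replace(/<a[^>]*href="([^"]*)"[^>]*>(.*?)<\/a>/gis, '[$2]($1)')
    .replace(/<li[^>]*>(.*?)<\/li>/gis, '- $1\n')
    .replace(/<\/?p[^>]*>/gi, '\n')
    .replace(/<[^>]+>/g, '')          // strip any remaining tags
    .replace(/\n{3,}/g, '\n\n')
    .trim();
}

return $input.all().map(item => ({
  json: {
    fileName: `${item.json.title}.md`,                 // assumed article field
    content: htmlToMarkdown(item.json.body ?? ''),     // assumed article field
  },
}));
```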
by Khairul Muhtadin
This AI-powered workflow transforms n8n workflow JSON files into publication-ready, SEO-optimized markdown posts for the n8n community. Simply upload your workflow's JSON, and let Google Gemini 2.5 Pro, guided by a LlamaIndex-powered knowledge base of best practices, automatically generate compelling content.

## Why Use This Workflow?

- **Time Savings**: Reduces the time to create a detailed workflow post from over an hour of manual writing to under 2 minutes.
- **Cost Reduction**: Eliminates the need for separate AI content subscriptions or outsourcing content creation tasks.
- **Error Prevention**: Enforces content quality and structural consistency by using a knowledge base of n8n's official guidelines, minimizing formatting errors.

## Ideal For

- **n8n Workflow Creators**: To quickly document and share their creations on the community platform without the tedious, time-consuming writing process.
- **Developer Advocates**: To standardize and accelerate the production of technical tutorials and workflow showcases.
- **Content & Marketing Teams**: To streamline the content pipeline for n8n-related blog posts, tutorials, and community engagement initiatives.

## How It Works

1. **Trigger**: The process starts when you upload an n8n workflow JSON file via a simple web form.
2. **Data Extraction**: The workflow automatically extracts the JSON content from the uploaded file (a sketch of this step follows this description).
3. **Intelligence Layer**: An advanced AI agent, powered by Google Gemini 2.5 Pro, analyzes the structure, nodes, and metadata of your workflow.
4. **Knowledge Retrieval**: The agent consults a specialized, in-memory knowledge base built from n8n's content guidelines. This knowledge base is created by parsing documents with LlamaIndex and refined with a Cohere Reranker for maximum accuracy.
5. **Content Generation**: The AI agent synthesizes the technical details from your JSON with the best practices from the knowledge base to write a complete, benefit-driven markdown post.
6. **Output & Delivery**: The final, polished markdown content is generated as the workflow's output, ready to be copied and pasted into the n8n community platform.

## Setup Guide

### Prerequisites

| Requirement | Type | Purpose |
|-------------|------|---------|
| n8n instance | Essential | Workflow execution platform |
| Google Gemini API Key | Essential | Powers the core AI content generation |
| LlamaIndex Cloud API Key | Essential | Parses documents for the knowledge base |
| Cohere API Key | Optional | Improves knowledge base search results |
| Google Drive Account | Optional | For automatically updating the knowledge base from a Google Doc |

### Installation Steps

1. Import the JSON file to your n8n instance.
2. Configure credentials:
   - **Google Gemini**: In the "GEmini 2.5 pro" node, create and add your Google Gemini API credential.
   - **LlamaIndex**: In the three HTTP Request nodes named "Parse Document...", "Monitor Document...", and "Retrieve Parsed...", create an HTTP Header Auth credential. The header name is Authorization and the value is Bearer YOUR_LLAMA_INDEX_API_KEY.
   - **Cohere** (optional): In the "Reranker Cohere" node, create and add your Cohere API credential.
   - **Google Drive** (optional): If you plan to auto-update the knowledge base, configure Google Drive OAuth2 credentials for the "Knowledge Base Updated Trigger" and "Download Knowledge Document" nodes.
3. Update environment-specific values: To use the knowledge base auto-update feature, go to the "Knowledge Base Updated Trigger" node and select the Google Drive file containing your content guidelines.
4. Customize settings: The primary system prompt in the "n8ncreator" agent node can be modified to adjust the tone, style, or structure of the generated content.
5. Test execution: Run the workflow manually and use the form to upload a sample n8n workflow JSON file to verify that all connections work correctly.

## Technical Details

### Core Nodes

| Node | Purpose | Key Configuration |
|------|---------|-------------------|
| Form Trigger | Initiates the workflow via a file upload. | Set the "Input Json Workflow" field to required. |
| Langchain Agent | Orchestrates the entire content creation process. | The system prompt contains all instructions for the AI. |
| ChatGoogleGemini | Provides the core generative AI capabilities. | Select your Gemini model of choice (e.g., gemini-2.5-pro). |
| VectorStoreInMemory | Acts as the agent's knowledge base tool. | Configured to use embeddings from a Google Gemini model. |
| HTTPRequest | Interacts with the LlamaIndex API to parse documents. | Set up with the LlamaIndex API endpoint and authentication. |

### Customization Options

**Basic Adjustments:**

- **Change AI Model**: Replace the ChatGoogleGemini node with another LLM node (e.g., OpenAI, Anthropic) to use a different provider.
- **Adjust System Prompt**: Modify the prompt in the "n8ncreator" node to tailor the output for different platforms (e.g., blog, internal wiki) or change the writing style.

**Advanced Enhancements:**

- **Automated Publishing**: Connect the output of the "n8ncreator" node to a Ghost, WordPress, or GitHub node to automatically publish the generated post.
- **Add Web Search**: Equip the Langchain Agent with a web search tool to allow it to fetch live information about new n8n nodes or services.
- **Batch Processing**: Replace the Form Trigger with a Read Binary Files node to process an entire folder of workflow JSON files in a single run.

### Performance & Optimization

| Metric | Expected Performance | Optimization Tips |
|--------|---------------------|-------------------|
| Execution time | ~1 minute per run | Largely dependent on the Gemini API response time. |
| API calls | 1 LLM call per post | Knowledge base updates trigger LlamaIndex/Google calls separately. |
| Error handling | Built-in retry logic for document parsing | Add an error workflow path after the "n8ncreator" node to handle AI generation failures. |

### Troubleshooting

Common issues:

| Problem | Cause | Solution |
|---------|-------|----------|
| AI output is generic or incomplete | The input JSON file is invalid or lacks key information (e.g., no node names). | Ensure you are uploading a valid, exported n8n workflow JSON. Verify the workflow has been saved with descriptive node names. |
| LlamaIndex parsing fails | The LlamaIndex API key is incorrect or the source document is inaccessible. | Double-check your LlamaIndex API credential. Ensure the Google Doc sharing settings allow access. |
| Credential error | API keys are missing or incorrect for Gemini, LlamaIndex, or Cohere. | Go to the specified nodes and verify that the correct credentials have been created and selected. |

Created by: khaisa Studio
Category: AI
Tags: AI, Content Generation, Google Gemini, LlamaIndex, Automation

Need custom workflows? Contact us.
Connect with the creator: Portfolio • Workflows • LinkedIn • Medium • Threads
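A minimal sketch of the "Data Extraction" step as a Code node, assuming the form's file arrives as the binary property "data" (adjust to your form field name); the summary fields pulled out here are illustrative.

```javascript
// Read the uploaded workflow JSON from the form's binary field and expose
// a few summary fields alongside the raw definition for the agent.
const buffer = await this.helpers.getBinaryDataBuffer(0, 'data');
const workflow = JSON.parse(buffer.toString('utf8'));

return [{
  json: {
    name: workflow.name,
    nodeCount: (workflow.nodes ?? []).length,
    nodeTypes: [...new Set((workflow.nodes ?? []).map(n => n.type))],
    raw: workflow,   // handed to the agent for full analysis
  },
}];
```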
by Meak
# Auto-Call Leads from Google Sheets with VAPI → Log Results + Book Calendar

This workflow calls new leads from a Google Sheet using VAPI, saves the call results, and (if there's a booking request) creates a Google Calendar event automatically.

## Benefits

- Auto-call each new lead from your call list
- Save full call outcomes back to Google Sheets
- Parse "today/tomorrow + time" into a real datetime (IST)
- Auto-create calendar events for bookings/deliveries
- Batch-friendly to avoid rate limits

## How It Works

1. **Trigger**: A new row appears in Google Sheets (call_list).
2. **Prepare**: Normalize the phone number (adds +), then process in batches.
3. **Call**: Send the number to VAPI (/call) with your assistantId + phoneNumberId.
4. **Receive**: VAPI posts the results to your Webhook.
5. **Store**: Append/update the Google Sheet with name, role, company, phone, email, interest level, objections, next step, notes, etc.
6. **Parse Time**: Convert today/tomorrow + HH:MM AM/PM to start/end in IST (+1 hour); see the sketch at the end of this description.
7. **Book**: Create a Google Calendar event with the parsed times.
8. **Respond**: Send a response back to VAPI to complete the cycle.

## Who Is This For

- Real estate / local service teams running outbound calls
- Agencies doing voice outreach and appointment setting
- Ops teams that want call logs + auto-booking in one place

## Setup

- **Google Sheets Trigger**: select your spreadsheet Vapi_real-estate and tab call_list.
- **VAPI Call**: set assistantId, phoneNumberId, and add a Bearer token.
- **Webhook**: copy the n8n webhook URL into VAPI so results post back.
- **Google Calendar**: set the calendar ID (e.g., you@domain.com).
- **Timezone**: the booking parser formats times to **Asia/Kolkata (IST)**.
- **Batching**: adjust the SplitInBatches size to control pace.

## ROI & Monetization

- Save 2-4 hours/week on manual dialing + data entry
- Faster follow-ups with instant booking creation
- Package it as an "AI Caller + Auto-Booking" service ($1k-$3k/month)

## Strategy Insights

In the full walkthrough, I show how to:

- Map VAPI tool-call JSON safely into Sheets fields
- Handle missing/invalid times and default to safe slots
- Add no-answer / retry logic and opt-out handling
- Extend it to send Slack/email alerts for hot leads

## Check Out My Channel

For more voice automation workflows that turn leads into booked calls, check out my YouTube channel, where I share the exact setups I use to win clients and scale to $20k+ monthly revenue.
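A hedged sketch of the booking-time parser described in step 6: it turns inputs like day = "tomorrow", time = "3:30 PM" into ISO start/end strings with the Asia/Kolkata (+05:30) offset and a one-hour duration. The input field names are assumptions; the template's own parsing may differ.

```javascript
// Parse "today/tomorrow + HH:MM AM/PM" into IST start/end timestamps.
const { day, time } = $json;
const match = (time ?? '').match(/(\d{1,2}):(\d{2})\s*(AM|PM)/i);
if (!match) return [{ json: { error: 'unparseable time', day, time } }];

const [, hh, mm, ap] = match;
const hours = (+hh % 12) + (/pm/i.test(ap) ? 12 : 0);

// Shift the clock by +5:30 so the UTC getters/setters read as IST wall time.
const IST_OFFSET_MS = 5.5 * 60 * 60 * 1000;
const ist = new Date(Date.now() + IST_OFFSET_MS);
if (/tomorrow/i.test(day ?? '')) ist.setUTCDate(ist.getUTCDate() + 1);
ist.setUTCHours(hours, +mm, 0, 0);

// Render the wall-clock value with an explicit +05:30 offset.
const toIst = d => d.toISOString().replace('Z', '+05:30');
const end = new Date(ist.getTime() + 60 * 60 * 1000); // +1 hour duration

return [{ json: { start: toIst(ist), end: toIst(end) } }];
```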
by Jay Emp0
# 🐱 MemeCoin Art Generator - using Gemini Flash NanoBanana & upload to Twitter

Automatically generates memecoin art and posts it to Twitter (X), powered by Google Gemini, NanoBanana image generation, and n8n automation.

## 🧩 Overview

This workflow creates viral-style memecoin images (like Popcat) and posts them directly to Twitter with a witty, Gen Z-style tweet. It combines text-to-image AI, scheduled triggers, and social publishing, all in one seamless flow.

Workflow flow:

1. Define your memecoin mascot (name, description, and base image URL).
2. Generate an AI image prompt and a meme tweet.
3. Feed the base mascot image into the Gemini Image Generation API.
4. Render a futuristic memecoin artwork using NanoBanana.
5. Upload the final image and tweet automatically to Twitter.

## ⚙️ Key Components

| Node | Function |
|------|-----------|
| Schedule Trigger | Runs automatically at chosen intervals to start meme generation. |
| Define Memecoin | Defines the mascot name, description, and base image URL. |
| AI Agent | Generates the tweet text and creative image prompt using Google Gemini. |
| Google Gemini Chat Model | Provides trending topic context and meme phrasing. |
| Get Source Image | Fetches the original mascot image (e.g., Popcat). |
| Convert Source Image to Base64 | Prepares the image for AI-based remixing. |
| Generate Image using NanoBanana | Sends the prompt and base image to the Gemini Image API for art generation. |
| Convert Base64 to PNG | Converts the AI output to an image file. |
| Upload to Twitter | Uploads the generated image to Twitter via the media upload API. |
| Create Tweet | Publishes the tweet with the attached image. |

## 🪄 How It Works

1. **Schedule Trigger** - starts the automation (e.g., hourly or daily).
2. **Define Memecoin** - stores your mascot metadata:
   - memecoin_name: popcat
   - mascot_description: cat with open mouth
   - mascot_image: https://i.pinimg.com/736x/9d/05/6b/9d056b5b97c0513a4fc9d9cd93304a05.jpg
3. **AI Agent** - prompts Gemini to write a short 100-character tweet in Gen Z slang and create an image generation prompt inspired by current meme trends.
4. **NanoBanana API** - applies your base image + AI prompt to create the art (see the request-body sketch at the end of this description).
5. **Upload & Tweet** - the final image gets uploaded and posted automatically.

## 🧠 Example Output

Example tweet text:

> Popcat's about to go absolutely wild, gonna moon harder than my last test score! 🚀📈 We up! #Popcat #Memecoin

## 🧩 Setup Tutorial

### 1️⃣ Prerequisites

| Tool | Purpose |
|------|----------|
| n8n (Cloud or self-hosted) | Workflow automation platform |
| Google Gemini API Key | For generating tweet and image prompts |
| Twitter (X) API OAuth1 + OAuth2 | For uploading and posting tweets |

### 2️⃣ Import the Workflow

1. Download memecoin art generator.json.
2. In n8n, click Import Workflow → From File.
3. Set up and connect credentials: Google Gemini API, Twitter OAuth.
4. (Optional) Adjust the Schedule Trigger frequency to your desired posting interval.

### 3️⃣ Customize Your MemeCoin

In the Define Memecoin node, edit these fields to change your meme theme:

- memecoin_name: "doggo"
- mascot_description: "shiba inu in astronaut suit"
- mascot_image: "https://example.com/shiba.jpg"

That's it - the next cycle will generate your new meme and post it.

### 4️⃣ API Notes

- **Gemini Image Generation API docs**: https://ai.google.dev/gemini-api/docs/image-generation#gemini-image-editing
- **API key portal**: https://aistudio.google.com/api-keys
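As a rough sketch of step 4, here is the payload a Code node could assemble for the Gemini image-editing endpoint: the AI prompt paired with the base64 mascot image as inline_data, following the Gemini image docs linked above. The prompt field name and binary property are assumptions about this workflow's internals.

```javascript
// Build the generateContent request body: text prompt + base64 source image.
const imageBuffer = await this.helpers.getBinaryDataBuffer(0, 'data');

return [{
  json: {
    contents: [{
      parts: [
        { text: $json.image_prompt },   // prompt produced by the AI Agent (assumed field)
        {
          inline_data: {
            mime_type: 'image/jpeg',
            data: imageBuffer.toString('base64'),
          },
        },
      ],
    }],
  },
}];
```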