by Matt Chong
# Automatically Rename Gmail Attachments with AI and Save to Google Drive

## Who is this for?
This workflow is for anyone who regularly receives important email attachments, such as reports, invoices, or PDFs, and wants them:
- Renamed with clean, AI-generated filenames
- Automatically saved to a specific Google Drive folder
- Neatly organized without manual work

It is ideal for freelancers, business owners, accountants, and productivity enthusiasts.

## What does it solve?
Manually naming and organizing email attachments takes time and often leads to messy files. This workflow solves that by:
- Automatically downloading unread Gmail attachments
- Using AI to understand the content and generate clean, consistent filenames
- Saving the renamed files to your chosen Google Drive folder
- Marking emails as read after processing

No more confusing filenames like "Attachment 1.pdf".

## How it works
1. The workflow runs on a scheduled interval (every hour by default).
2. It checks Gmail for unread emails with attachments.
3. For each email, it:
   - Downloads the attachments
   - Extracts and reads the PDF content
   - Uses AI to generate a new filename in the format `YYYYMMDD-keyword-summary.pdf`
   - Saves the file to Google Drive under the new name
   - Marks the email as read to avoid duplicate processing

## How to set up
1. Connect these accounts in your n8n credentials: Gmail (OAuth2), Google Drive (OAuth2), and OpenAI (API key).
2. Update the folder URL in the Google Drive node to your target folder.
3. Optional: adjust the trigger interval if you want it to run more or less often.

## How to customize this workflow to your needs
- Change the AI prompt to create different naming rules, such as including the sender or topic.
- Dynamically set Drive folders based on email sender or subject.
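As a rough illustration of the naming step, here is how a Code node might assemble the `YYYYMMDD-keyword-summary.pdf` pattern from fields an AI extraction could return. The function and field names are hypothetical, not the template's actual code:

```javascript
// Hypothetical sketch: build a clean YYYYMMDD-keyword-summary.pdf filename
// from fields an AI extraction step might return. Not the template's code.
function buildFilename(date, keyword, summary, ext = 'pdf') {
  const stamp = date.replace(/-/g, '');               // "2024-01-15" -> "20240115"
  const slug = (s) => s.toLowerCase().trim()
    .replace(/[^a-z0-9]+/g, '-')                      // collapse non-alphanumerics
    .replace(/(^-|-$)/g, '');                         // trim stray dashes
  return `${stamp}-${slug(keyword)}-${slug(summary)}.${ext}`;
}

const name = buildFilename('2024-01-15', 'Invoice', 'Acme Corp March');
// -> "20240115-invoice-acme-corp-march.pdf"
```

Slugging the keyword and summary keeps the filenames safe for Drive and consistent across emails.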
by panyanyany
## Overview
This n8n workflow automatically converts and enhances multiple photos into professional ID-style portraits using Gemini AI (Nano Banana). It processes images in batch from Google Drive, applies professional ID photo standards (proper framing, neutral background, professional attire), and outputs the enhanced photos back to Google Drive.

- Input: Google Drive folder with photos
- Output: Professional ID-style portraits in a Google Drive output folder

The workflow uses a simple form interface where users provide Google Drive folder URLs and an optional custom prompt. It automatically fetches all images from the input folder, processes each through the Defapi API with Google's nano-banana model, monitors generation status, and uploads finished photos to the output folder. Perfect for HR departments, recruitment agencies, or anyone needing professional ID photos in bulk.

## Prerequisites
- A Defapi account and API key (Bearer token configured in n8n credentials): sign up at Defapi.org
- An active n8n instance with Google Drive integration
- A Google Drive account with two public folders:
  - Input folder: contains the photos to be processed (must be set to public/anyone with the link)
  - Output folder: where enhanced photos will be saved (must be set to public/anyone with the link)
- Photos with clear faces (headshots or upper-body shots work best)

## Setup Instructions

### 1. Prepare Google Drive Folders
1. Create two Google Drive folders:
   - One for input photos (e.g., https://drive.google.com/drive/folders/xxxxxxx)
   - One for output photos (e.g., https://drive.google.com/drive/folders/yyyyyy)
2. Important: make both folders **public** (set sharing to "Anyone with the link can view"): right-click the folder → Share → change "Restricted" to "Anyone with the link".
3. Upload photos to the input folder (supported formats: .jpg, .jpeg, .png, .webp).

### 2. Configure n8n Credentials
- **Defapi API**: Add an HTTP Bearer Auth credential with your Defapi API token (credential name: "Defapi account").
- **Google Drive**: Connect your Google Drive OAuth2 account (credential name: "Google Drive account"). See https://docs.n8n.io/integrations/builtin/credentials/google/oauth-generic/

### 3. Run the Workflow
1. Execute the workflow in n8n.
2. Access the form submission URL.
3. Fill in the form:
   - Google Drive - Input Folder URL: paste your input folder URL
   - Google Drive - Output Folder URL: paste your output folder URL
   - Prompt (optional): customize the AI generation prompt, or leave blank to use the default

### 4. Monitor Progress
The workflow will:
1. Fetch all images from the input folder
2. Process each image through the AI model
3. Wait for generation to complete (checking every 10 seconds)
4. Download and upload the enhanced photos to the output folder

## Workflow Structure
The workflow consists of the following nodes:
1. On form submission (Form Trigger) - Collects Google Drive folder URLs and an optional prompt
2. Search files and folders (Google Drive) - Retrieves all files from the input folder
3. Code in JavaScript (Code Node) - Prepares the image data and prompt for the API request
4. Send Image Generation Request to Defapi.org API (HTTP Request) - Submits a generation request for each image
5. Wait for Image Processing Completion (Wait Node) - Waits 10 seconds before checking status
6. Obtain the generated status (HTTP Request) - Polls the API for completion status
7. Check if Image Generation is Complete (IF Node) - Checks whether the status is no longer "pending"
8. Format and Display Image Results (Set Node) - Formats the result with markdown and the image URL
9. HTTP Request (HTTP Request) - Downloads the generated image file
10. Upload file (Google Drive) - Uploads the enhanced photo to the output folder

## Default Prompt
The workflow uses this professional ID photo generation prompt by default:

Create a professional portrait suitable for ID documentation with proper spacing and composition.
**Framing:** Include the full head, complete shoulder area, and upper torso. Maintain generous margins around the subject without excessive cropping.

**Outfit:** Transform the existing attire into light business-casual clothing appropriate for the individual's demographics and modern style standards. Ensure the replacement garment appears natural, properly tailored, and complements the subject's overall presentation (such as a professional shirt, refined blouse, contemporary blazer, or sophisticated layered separates).

**Pose & Gaze:** Position shoulders square to the camera, maintaining perfect frontal alignment. Direct the gaze straight ahead into the lens at identical eye height, avoiding any angular deviation in vertical or horizontal planes.

**Expression:** Display a professional neutral demeanor or subtle closed-lip smile that conveys confidence and authenticity.

**Background:** Utilize a solid, consistent light gray photographic background (color code: #d9d9d9) without any pattern, texture, or tonal variation.

**Lighting & Quality:** Apply balanced studio-quality illumination eliminating harsh contrast or reflective artifacts. Deliver maximum resolution imagery with precise focus and accurate natural skin color reproduction.

## Customization Tips for Different ID Photo Types
Based on the default prompt structure, here are specific customization points for different use cases:

### 1. Passport & Visa Photos
Key Requirements: Most countries require a white or light-colored background, a neutral expression, and no smile.
Prompt Modifications:
- **Background**: Change to "Plain white background (#ffffff)" or "Light cream background (#f5f5f5)"
- **Expression**: Change to "Completely neutral expression, no smile, mouth closed, serious but not tense"
- **Framing**: Add "Head size should be 70-80% of the frame height. Top of head to chin should be prominent"
- **Outfit**: Change to "Replace with dark formal suit jacket and white collared shirt" or "Navy blue blazer with light shirt"
- **Additional**: Add "No glasses glare, ears must be visible, no hair covering the face"

### 2. Corporate Employee ID / Work Badge
Key Requirements: Professional but approachable, company-appropriate attire.
Prompt Modifications:
- **Background**: Use a company color, or a standard #e6f2ff (light blue) or #f0f0f0 (light gray)
- **Expression**: Keep "Soft closed-mouth smile — confident and approachable"
- **Outfit**: Change to a specific dress code:
  - Corporate: Dark business suit with tie for men, blazer with blouse for women
  - Tech/Startup: Smart casual polo shirt or button-down shirt without tie
  - Creative: Clean, professional casual clothing that reflects company culture
- **Framing**: Use the default, or add "Upper chest visible with company badge area clear"

### 3. University/School Student ID
Key Requirements: Friendly, youthful, appropriate for an educational setting.
Prompt Modifications:
- **Background**: Use school colors, "Light blue (#e3f2fd)", or "Soft gray (#f5f5f5)"
- **Expression**: Change to "Friendly natural smile or pleasant neutral expression"
- **Outfit**: Change to "Replace with clean casual clothing — collared shirt, polo, or neat sweater. No logos or graphics"
- **Framing**: Keep the default
- **Additional**: Add "Youthful, fresh appearance suitable for educational environment"

### 4. Driver's License / Government ID
Key Requirements: Strict standards, neutral expression, specific background colors.
Prompt Modifications:
- **Background**: Check local requirements — often "White (#ffffff)", "Light gray (#d9d9d9)", or "Light blue (#e6f2ff)"
- **Expression**: Change to "Neutral expression, no smile, mouth closed, eyes fully open"
- **Outfit**: Use "Replace with everyday casual or business casual clothing — collared shirt or neat top"
- **Framing**: Add "Head centered, face taking up 70-80% of frame, ears visible"
- **Additional**: Add "No glasses (or non-reflective lenses), no headwear except religious purposes, natural hair"

### 5. Professional LinkedIn / Resume Photo
Key Requirements: Polished, confident, approachable.
Prompt Modifications:
- **Background**: Use "Soft gray (#d9d9d9)" or "Professional blue gradient (#e3f2fd to #bbdefb)"
- **Expression**: Keep "Confident, warm smile — professional yet approachable"
- **Outfit**: Change to:
  - Executive: Premium business suit, crisp white shirt, tie optional
  - Professional: Tailored blazer over collared shirt or elegant blouse
  - Creative: Smart business casual with modern, well-fitted clothing
- **Framing**: Change to "Show head, full shoulders, and upper chest. Slightly more relaxed framing than strict ID photo"
- **Lighting**: Add "Soft professional lighting with slight catchlight in eyes to appear engaging"

### 6. Medical/Healthcare Professional Badge
Key Requirements: Clean, trustworthy, professional medical appearance.
Prompt Modifications:
- **Background**: Use "Clinical white (#ffffff)" or "Soft medical blue (#e3f2fd)"
- **Expression**: Change to "Calm, reassuring expression with gentle smile"
- **Outfit**: Change to "Replace with clean white lab coat over professional attire" or "Medical scrubs in appropriate color (navy, ceil blue, or teal)"
- **Additional**: Add "Hair neatly pulled back if long, clean professional appearance, no flashy jewelry"

### 7. Gym/Fitness Membership Card
Key Requirements: Casual, recognizable, suitable for an athletic environment.
Prompt Modifications:
- **Background**: Use "Bright white (#ffffff)" or a gym brand color
- **Expression**: Change to "Natural friendly smile or neutral athletic expression"
- **Outfit**: Change to "Replace with athletic wear — sports polo, performance t-shirt, or athletic jacket in solid colors"
- **Framing**: Keep the default
- **Additional**: Add "Casual athletic appearance, hair neat"

## General Customization Parameters

Background Color Options:
- White: #ffffff (passport, visa, formal government IDs)
- Light gray: #d9d9d9 (default, versatile for most purposes)
- Light blue: #e6f2ff (corporate, professional)
- Cream: #f5f5dc (warm professional)
- Soft blue-gray: #eceff1 (modern corporate)

Expression Variations:
- **Strict Neutral**: "Completely neutral expression, no smile, mouth closed, serious but relaxed"
- **Soft Smile**: "Very soft closed-mouth smile — confident and natural" (default)
- **Friendly Smile**: "Warm natural smile with slight teeth showing — approachable and professional"
- **Calm Professional**: "Calm, composed expression with slight pleasant demeanor"

Clothing Formality Levels:
- **Formal**: "Dark suit, white dress shirt, tie for men / tailored suit or blazer with professional blouse for women"
- **Business Casual** (default): "Light business-casual outfit — clean shirt/blouse, lightweight blazer, or smart layers"
- **Smart Casual**: "Collared shirt, polo, or neat sweater in solid professional colors"
- **Casual**: "Clean, neat casual top — solid color t-shirt, casual button-down, or simple blouse"

Framing Adjustments:
- **Tight Crop**: "Head and shoulders only, face fills 80% of frame" (passport style)
- **Standard Crop** (default): "Entire head, full shoulders, and upper chest with balanced space"
- **Relaxed Crop**: "Head, shoulders, and chest visible, with more background space for professional portraits"
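To keep these variations manageable, the prompt pieces can be assembled programmatically, for example in an n8n Code node. This is an illustrative sketch; the preset names and exact wording are assumptions drawn from the tips above:

```javascript
// Illustrative helper that assembles a generation prompt from a few of the
// customization parameters above. The template edits the prompt text
// directly, so this is just one way to keep the variations organized.
const PRESETS = {
  passport: {
    background: 'Plain white background (#ffffff)',
    expression: 'Completely neutral expression, no smile, mouth closed',
    outfit: 'Dark formal suit jacket and white collared shirt',
  },
  linkedin: {
    background: 'Soft gray (#d9d9d9)',
    expression: 'Confident, warm smile',
    outfit: 'Tailored blazer over collared shirt or elegant blouse',
  },
};

function buildPrompt(kind) {
  const o = PRESETS[kind];
  return `Background: ${o.background}. Expression: ${o.expression}. Outfit: ${o.outfit}.`;
}

const passportPrompt = buildPrompt('passport');
```

Adding a new ID-photo type then only requires a new preset entry rather than hand-editing the full prompt each time.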
by Hyrum Hurst
## Who's it for
Property management companies, building managers, and inspection teams who want to automate recurring property inspections, improve issue tracking, and streamline reporting.

## How it works / What it does
This n8n workflow schedules periodic property inspections using a Cron trigger. AI generates customized inspection checklists for each property, which are sent to the assigned inspectors. Inspectors submit photos and notes via a connected form or mobile app. AI analyzes these submissions to flag issues by priority (high, medium, low). High-priority issues are routed to managers via Slack/email, while routine notes are logged for reporting. The workflow also generates weekly or monthly summary reports and can optionally notify tenants of resolved issues.

## How to set up
1. Configure the Cron trigger with your desired inspection frequency.
2. Connect Google Sheets or your CRM to fetch property and tenant data.
3. Set up the OpenAI node with your API key and checklist-generation prompts.
4. Configure email/SMS notifications for inspectors.
5. Connect a form or mobile app via Webhook to collect inspection data.
6. Set up Slack/email notifications for managers.
7. Log all inspection results, photos, and flagged issues to Google Sheets.
8. Configure the summary report email recipients.

## Requirements
- n8n account with Google Sheets, Email, Slack, Webhook, and OpenAI nodes
- Property and tenant data stored in Google Sheets or a CRM
- OpenAI API credentials for AI checklist generation and note analysis

## How to customize the workflow
- Adjust the Cron frequency to match your inspection schedule.
- Customize the AI prompts for property-specific checklist items.
- Add or remove branches for issue severity (high/medium/low).
- Include additional notification channels if needed (Teams, SMS, etc.).

## Workflow Use Case
Automates property inspections for property management teams: no inspections are missed, AI-generated checklists standardize the process, and potential issues are flagged and routed efficiently. Saves time, improves compliance, and increases tenant satisfaction.

Created by QuarterSmart | Hyrum Hurst
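The severity branch described above can be sketched as a small routing function. This is a hypothetical sketch of the IF/Switch logic; the field and channel names are assumptions, not the template's actual configuration:

```javascript
// Hypothetical routing sketch for the severity branch: high-priority issues
// notify managers immediately, the rest are logged for reporting.
function routeIssue(issue) {
  if (issue.priority === 'high') {
    return { channel: 'slack', notifyManager: true };   // immediate alert
  }
  // medium and low issues are logged to the sheet for the summary report
  return { channel: 'sheet', notifyManager: false };
}

const route = routeIssue({ priority: 'high', note: 'Water leak in unit 4B' });
```

In n8n this would typically be a Switch node on the AI's priority field rather than a Code node, but the branching logic is the same.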
by tsushima ryuto
# Invoice Automation Kit: AI-Powered Invoice Processing and Weekly Reports

This n8n workflow is designed to automate invoice processing and streamline financial management. It leverages AI to extract key invoice data, validate it, and store it in Airtable. It also generates and emails weekly spending reports.

## Who is it for?
This template is for small businesses, freelancers, or individuals looking to save time on manual invoice processing. It's ideal for anyone who wants to improve the accuracy of their financial data and maintain a clear overview of their spending.

## How it Works / What it Does
This workflow consists of two main parts:

1. Invoice Data Extraction and Storage
   - Invoice Upload Form: Upload your invoices (PDF, PNG, JPG) via an n8n form.
   - AI-Powered Data Extraction: AI extracts key information such as vendor name, invoice date, total amount, currency, and line items (description, quantity, unit price, total) from the uploaded invoice.
   - Data Validation: The extracted data is validated to ensure it is complete and accurate.
   - Store in Airtable: Validated invoice data is saved in a structured format to your specified Airtable base and table.

2. Weekly Spending Report Generation and Email
   - Weekly Report Schedule: Automatically triggers every Sunday at 6 PM.
   - Fetch Weekly Invoices: Retrieves all invoices stored in Airtable within the last 7 days.
   - AI-Powered Spending Report Generation: Based on the retrieved invoice data, AI generates a comprehensive spending report, including the week's total spending, a breakdown by vendor, the top 5 expenses, spending trends, and any notable observations.
   - Send Weekly Report Email: The generated report is sent in a professional format to the configured recipient email address.

## How to Set Up
1. Update the Workflow Configuration node:
   - Replace airtableBaseId with your Airtable Base ID.
   - Replace airtableTableId with your Airtable Table ID.
   - Replace reportRecipientEmail with the email address that should receive the weekly reports.
2. Airtable credentials: Set up your Airtable Personal Access Token credentials in the Airtable nodes.
3. OpenAI credentials: Set up your OpenAI API key credentials in the OpenAI Chat Model nodes.
4. Email credentials: Configure your email sending service (e.g., SMTP) credentials in the "Send Weekly Report Email" node and update the fromEmail.
5. Airtable table setup: Ensure your Airtable base has a table with appropriate columns to store invoice data, such as "Vendor", "Invoice Date", "Total Amount", "Currency", and "Line Items".

## Requirements
- An n8n instance
- An OpenAI account and API key
- An Airtable account and Personal Access Token
- An email sending service (e.g., an SMTP server)

## How to Customize the Workflow
- **Adjust Information Extraction**: Edit the prompt in the "Extract Invoice Data" node to include additional information you wish to extract.
- **Customize the Report**: Adjust the prompt in the "Generate Spending Report" node to change the analyses or formatting included in the report.
- **Add Notifications**: Incorporate notification nodes for other services like Slack or Microsoft Teams to be alerted when an invoice is uploaded or a report is ready.
- **Modify Validation Rules**: Edit the conditions in the "Validate Invoice Data" node to implement additional validation rules.

Here's a visual representation of the workflow.
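A minimal sketch of the kind of conditions the "Validate Invoice Data" node might apply, assuming the field names listed above; the template's actual rules may differ:

```javascript
// Minimal sketch of invoice validation rules like those in the
// "Validate Invoice Data" node. Field names follow the columns mentioned
// above; treat them as assumptions, not the template's exact schema.
function validateInvoice(inv) {
  const errors = [];
  if (!inv.vendor) errors.push('missing vendor');
  if (!inv.invoiceDate || isNaN(Date.parse(inv.invoiceDate))) errors.push('bad invoice date');
  if (typeof inv.totalAmount !== 'number' || inv.totalAmount <= 0) errors.push('bad total amount');
  if (!inv.currency) errors.push('missing currency');
  return { valid: errors.length === 0, errors };
}

const result = validateInvoice({
  vendor: 'Acme', invoiceDate: '2024-03-01', totalAmount: 120.5, currency: 'USD',
});
```

Additional rules (line-item totals summing to the invoice total, allowed currency codes, and so on) slot naturally into the same errors list.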
by Sahil Sunny
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow extracts sitemap links using the ScrapingBee API. It only needs the domain name (e.g., www.example.com) and automatically checks robots.txt and sitemap.xml to find the links. It is also designed to run recursively when new .xml links are found while scraping the sitemap.

## How It Works
1. Trigger: The workflow waits for a webhook request that contains domain=www.example.com.
2. It then looks for the robots.txt file; if none is found, it falls back to sitemap.xml.
3. Once it finds XML links, it scrapes them recursively to extract the website links.
4. For each XML file, it first checks whether the response is a binary file and whether it is compressed XML:
   - If it's a text response, it directly runs one Code node that extracts normal website links and another that extracts XML links.
   - If it's an uncompressed binary, it extracts the text from the binary and then extracts the website and XML links.
   - If it's a compressed binary, it first decompresses it, then extracts the text, and then the website and XML links.
5. After extracting website links, it appends them directly to a sheet.
6. After extracting XML links, it scrapes them recursively until all website links are found.

When the workflow finishes, the output appears in the links column of the Google Sheet added to the workflow.

## Set Up Steps
1. Get your ScrapingBee API key here.
2. Create a new Google Sheet with an empty column named links.
3. Connect to the sheet by signing in with your Google credential and add the link to your sheet.
4. Copy the webhook URL and send a GET request with domain as a query parameter. Example: curl "https://webhook_link?domain=scrapingbee.com"

## Customisation Options
- If the website you are scraping blocks your requests, try using a premium or stealth proxy in the Scrape robots.txt file, Scrape sitemap.xml file, and Scrape xml file nodes.
- If you wish to store the data in a different app/tool, or store it as a file, simply replace the Append links to sheet node with a relevant node.

## Next Steps
If you wish to scrape the pages behind the extracted links, implement a new workflow that reads the sheet or file produced by this workflow and, for each link, sends a request to ScrapingBee's HTML API and saves the returned data.

NOTE: Some heavy sitemaps can crash the workflow if it consumes more memory than is available in your n8n plan or self-hosted system. If this happens, we recommend either upgrading your plan or using a self-hosted instance with more memory.
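The extraction step can be sketched with a simple regex over the sitemap's `<loc>` entries. This is an illustrative approximation of what the workflow's two Code nodes do, not their exact source:

```javascript
// Rough sketch of the extraction step: pull page links and nested sitemap
// (.xml) links out of a sitemap document. Real sitemaps may need namespace
// handling; this illustrates the split between the two link types.
function extractLinks(xml) {
  const locs = [...xml.matchAll(/<loc>\s*([^<\s]+)\s*<\/loc>/g)].map((m) => m[1]);
  return {
    xmlLinks: locs.filter((u) => u.endsWith('.xml')),    // scraped recursively
    pageLinks: locs.filter((u) => !u.endsWith('.xml')),  // appended to the sheet
  };
}

const sample =
  '<urlset><url><loc>https://example.com/a</loc></url>' +
  '<sitemap><loc>https://example.com/more.xml</loc></sitemap></urlset>';
const links = extractLinks(sample);
```

Any `.xml` entries go back through the same scrape step, which is what makes the workflow recursive.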
by IranServer.com
# Monitor VPS security with AI analysis via SSH and Telegram alerts

This n8n template automatically monitors your VPS for suspicious processes and network connections using AI analysis. It connects to your server via SSH, analyzes running processes, and sends Telegram alerts when potential security threats are detected.

## Who's it for
- System administrators managing VPS/dedicated servers
- DevOps teams monitoring production environments
- Security-conscious users who want automated threat detection
- Anyone running services on Linux servers who wants proactive monitoring

## How it works
The workflow runs on a schedule and performs the following steps:
1. SSH Connection: Connects to your VPS via SSH and executes system commands to gather process and network information.
2. Data Collection: Runs `ps aux --sort=-%cpu,-%mem && ss -tulpn` to capture running processes sorted by CPU/memory usage, plus active network connections.
3. AI Analysis: Uses OpenAI's language model to analyze the collected data for suspicious patterns, malware signatures, unusual network connections, or abnormal resource usage.
4. Structured Output: Parses AI responses into structured data identifying malicious and suspicious activities, with explanations.
5. Alert System: Sends immediate Telegram notifications when malicious processes are detected.

## Requirements
- **SSH access** to your VPS with valid credentials
- **OpenAI API key** for AI analysis (uses the GPT-4 mini model)
- **Telegram bot** and chat ID for receiving alerts
- A Linux-based VPS or server to monitor

## How to set up
1. Configure SSH credentials: Set up the SSH connection to your VPS in the "Execute a command" node.
2. Add your OpenAI API key: Configure your OpenAI credentials in the "OpenAI Chat Model" node.
3. Set up the Telegram bot:
   - Create a Telegram bot and get the API token.
   - Get your Telegram chat ID.
   - Update admin_telegram_id in the "Edit Fields" node with your chat ID.
   - Configure Telegram credentials in the "Send a text message" node.
4. Adjust the schedule: Modify the "Schedule Trigger" to set your preferred monitoring frequency.
5. Test the workflow: Run a manual execution to ensure all connections work properly.

## How to customize the workflow
- **Change monitoring frequency**: Adjust the schedule trigger interval (hourly, daily, etc.).
- **Modify analysis criteria**: Update the AI prompt in "Basic LLM Chain" to focus on specific security concerns.
- **Add more commands**: Extend the SSH command to include additional system information, such as disk usage, log entries, or specific service status.
- **Multiple servers**: Duplicate the SSH execution nodes to monitor multiple VPS instances.
- **Different alert channels**: Replace or supplement Telegram with email, Slack, or Discord notifications.
- **Custom filtering**: Add conditions to filter out known-safe processes or focus on specific suspicious patterns.

## Good to know
- The AI model analyzes both running processes and network connections for comprehensive monitoring.
- Each analysis request costs approximately $0.001-0.01 USD depending on system activity.
- The workflow only sends alerts when malicious or suspicious activity is detected, reducing notification noise.
- The SSH commands require appropriate permissions on the target server.
- Consider running this workflow from a secure, always-on n8n instance for continuous monitoring.
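If you want to pre-filter the SSH output before it reaches the model (for example, to cap token usage), a Code node could surface only the top CPU consumers from `ps aux`. A sketch, assuming the standard `ps aux` column order; the template itself sends the raw output to the AI:

```javascript
// Illustrative pre-filter: parse `ps aux` output and keep the top CPU
// consumers. Column positions assume standard ps aux output.
function topCpu(psOutput, limit = 3) {
  return psOutput.trim().split('\n').slice(1)            // drop the header row
    .map((line) => {
      const cols = line.trim().split(/\s+/);
      return {
        user: cols[0],
        pid: cols[1],
        cpu: parseFloat(cols[2]),
        cmd: cols.slice(10).join(' '),                   // COMMAND may contain spaces
      };
    })
    .sort((a, b) => b.cpu - a.cpu)
    .slice(0, limit);
}

const sample = [
  'USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND',
  'root 101 93.0 1.2 1000 500 ? R 10:00 1:00 ./xmrig',
  'www  202  2.0 0.8  900 400 ? S 10:01 0:01 nginx',
].join('\n');
const top = topCpu(sample, 1);
```

Feeding only the top offenders (plus the `ss -tulpn` output) keeps each analysis request small without hiding the processes most likely to matter.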
by Matt Chong
## Who is this for?
If you're overwhelmed with incoming email but only want to be notified about the essentials, this workflow is for you. Perfect for busy professionals who want a short AI summary of new emails delivered directly to Slack.

## What does it solve?
Reading every email wastes time. This workflow filters out the noise by:
- Automatically summarizing each unread Gmail email using AI
- Sending just the sender and a short summary to Slack
- Helping you stay focused without missing key information

## How it works
1. Every minute, the workflow checks Gmail for unread emails.
2. When it finds one, it:
   - Extracts the email content
   - Sends it to OpenAI's GPT model for a 250-character summary
   - Delivers the message directly to Slack

## How to set up
1. Connect your accounts: Gmail (OAuth2), OpenAI (API key or connected account), and Slack (OAuth2).
2. Edit the Slack node: choose the Slack user/channel to send alerts to.
3. Optional: adjust the AI prompt in the "AI Agent" node to modify the summary style.
4. Optional: change the polling frequency in the Gmail Trigger node.

## How to customize this workflow to your needs
- Edit the AI prompt to highlight urgency, include specific keywords, or extend or reduce the summary length.
- Modify the Slack message format (add emojis, tags, or links).
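If you want a hard guarantee on message length rather than relying on the prompt alone, a small Code node could clip the summary before it reaches Slack. An optional sketch, not part of the template:

```javascript
// Optional sketch: enforce the ~250-character budget in code instead of
// trusting the model's prompt-based limit.
function clip(text, max = 250) {
  if (text.length <= max) return text;
  return text.slice(0, max - 1).trimEnd() + '…';
}

const short = clip('Quick status update from the team.');
const long = clip('x'.repeat(300));
```

This keeps Slack messages predictable even when the model occasionally runs past the requested length.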
by Matt Chong
## Who is this for?
If you're going on vacation or away from work and want your Gmail to respond to emails intelligently while you're out, this workflow is for you. It's perfect for freelancers, professionals, and teams who want a smarter, more personal out-of-office reply powered by AI.

## What does it solve?
No more generic autoresponders or missed urgent emails. This AI-powered workflow:
- Writes short, polite, and personalized replies while you're away.
- Skips replying to newsletters, bots, or spam.
- Helps senders move forward by offering an alternate contact.
- Works around your specific time zone and schedule.

## How it works
1. The workflow runs on a schedule (e.g., every 15 minutes).
2. It checks whether you are currently out of office (based on your defined start and end dates).
3. If you are, it looks for unread Gmail messages.
4. For each email:
   - It uses AI to decide whether a reply is needed.
   - If yes, it generates a short, friendly out-of-office reply using your settings.
   - It sends the reply and labels the email to avoid duplicate replies.

## How to set up
1. In the Set node:
   - Define your out-of-office start and end times in ISO 8601 format (e.g., 2025-08-19T07:00:00+02:00).
   - Set your timezone (e.g., Europe/Madrid).
   - Add your backup contact's name and email.
2. In the Gmail nodes:
   - Connect your Gmail account using OAuth2 credentials.
   - Replace the label ID in the final Gmail node with your own label (e.g., "Auto-Replied").
3. In the Schedule Trigger node: set how often the workflow should check for new emails (e.g., every 15 minutes).

## How to customize this workflow to your needs
- Adjust the prompt in the AI Agent node to change the tone or add more rules.
- Switch to a different timezone or update the return dates as needed.

This workflow keeps you professional even while you're offline, and saves you from coming back to an email mess.
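The out-of-office check reduces to a date-window comparison. A sketch of the logic the Set/IF nodes implement, using the ISO 8601 values from the setup step:

```javascript
// Sketch of the out-of-office window check. ISO 8601 strings with UTC
// offsets compare correctly once parsed into Date objects, so the
// workflow's timezone setting mainly matters for the reply text itself.
function isOutOfOffice(startIso, endIso, now = new Date()) {
  const start = new Date(startIso);
  const end = new Date(endIso);
  return now >= start && now <= end;
}

const away = isOutOfOffice(
  '2025-08-19T07:00:00+02:00',
  '2025-08-29T18:00:00+02:00',
  new Date('2025-08-20T12:00:00+02:00'),  // a moment inside the window
);
```

When the check returns false, the workflow simply skips the Gmail steps until the next scheduled run.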
by Nguyen Thieu Toan
# Monitor n8n Workflow Errors with AI Diagnosis & Instant Telegram Alerts

This n8n template automatically catches errors from any workflow on your instance, analyzes them with Google Gemini AI, and delivers a structured diagnostic report directly to your Telegram, including error classification, root-cause analysis, and specific fix steps. If you manage multiple n8n workflows in production and want to stop manually debugging failures, this workflow is your always-on error watch.

## How it works
- **Error Trigger:** Fires automatically whenever any workflow on the instance encounters a failure, capturing the full error context, including the failed node name, error message, and stack trace.
- **Set Context:** Extracts all error data and holds your 3 configuration values. This is the **only** node you ever need to edit, making the workflow easy to adapt and redeploy.
- **Get Workflow Content:** Fetches the full workflow JSON definition via the **n8n REST API**, giving the AI meaningful context about what the failed workflow was actually trying to do.
- **AI Agent (Gemini):** Classifies the error type (Authentication, Rate Limit, Credential, Connection, etc.), identifies the root cause, and generates a **Telegram HTML-formatted** report with 2–3 actionable fix steps.
- **Send Telegram Notification:** Delivers the formatted report to your configured chat with proper HTML rendering: bold labels, code blocks for error messages, and a direct link to the failed execution.

## How to use
1. Connect credentials: Add your Google Gemini (googlePalmApi) credential to the Google Gemini Chat Model node, and your Telegram Bot credential to the Send Telegram Notification node.
2. Configure Set Context: Open the Set Context node and update n8n_instance_url (your public n8n URL), n8n_api_key (from n8n → Settings → API), and telegram_chat_id.
3. Activate this workflow.
4. Link it to other workflows: In each workflow you want to monitor, go to Settings → Error Workflow and select this workflow.
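For reference, the kind of mapping the Set Context node performs can be sketched as follows. The payload shape and execution-URL format here are illustrative assumptions, not n8n's exact error-trigger schema:

```javascript
// Hypothetical sketch of what Set Context pulls from the Error Trigger
// payload. Treat the property names and URL pattern as assumptions.
function buildContext(errorData, config) {
  return {
    workflowName: errorData.workflow.name,
    failedNode: errorData.execution.lastNodeExecuted,
    errorMessage: errorData.execution.error.message,
    // direct link to the failed execution, built from the instance URL
    executionUrl: `${config.n8n_instance_url}/workflow/${errorData.workflow.id}/executions/${errorData.execution.id}`,
  };
}

const ctx = buildContext(
  {
    workflow: { id: '12', name: 'Sync CRM' },
    execution: { id: '99', lastNodeExecuted: 'HTTP Request', error: { message: '401 Unauthorized' } },
  },
  { n8n_instance_url: 'https://n8n.example.com' },
);
```

Keeping this extraction in one node is what lets you redeploy the template by editing only the three configuration values.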
## Requirements
- **n8n version:** Built and tested on **n8n 2.9.4+** (it is highly recommended to update to the latest n8n version).
- **Google Gemini** API key (googlePalmApi credential type).
- **Telegram Bot** token and a chat ID to receive notifications.
- **n8n API key** with read access to workflows (Settings → API → Create API Key).
- Your n8n instance must be accessible via a public URL for the API call.

## Customizing this workflow
- **Different AI model:** Swap the **Google Gemini Chat Model** sub-node for OpenAI, Anthropic, or any other LLM; no other changes are needed.
- **Different notification channel:** Replace the **Telegram** node with Slack, Discord, or Zoho Mail to fit your team's tooling.
- **Report language:** Change the language instruction at the end of the AI Agent's system prompt from Vietnamese to English or any other language.
- **Filter specific workflows:** Add an **If** node after the **Error Trigger** to only process errors from high-priority workflows based on workflow.name.

## About the Author
Created by: Nguyễn Thiệu Toàn (Jay Nguyen)
Email: me@nguyenthieutoan.com
Website: nguyenthieutoan.com
Company: GenStaff (genstaff.net)
Socials (Facebook / X / LinkedIn): @nguyenthieutoan
More templates: n8n.io/creators/nguyenthieutoan
by Automate With Marc
# AI Agent MCP for Email & News Research

Build a chat-first, MCP-powered research and outreach agent. This workflow lets you ask questions in an n8n chat; the agent then researches news (via Tavily + Perplexity through an MCP server) and drafts emails (via Gmail through a separate MCP server). It uses OpenAI for reasoning and short-term memory for coherent, multi-turn conversations.

Watch build-along videos for workflows like these at: www.youtube.com/@automatewithmarc

## What this template does
- Chat-native trigger: Start a conversation and ask for research or an email draft.
- MCP client tools: The agent talks to two MCP servers, one for email work and one for news research.
- News research stack: Uses Tavily (search) and Perplexity (LLM retrieval/answers) behind a News MCP server.
- Email stack: Uses the Gmail Tool to generate and send messages via an Email MCP server.
- Reasoning + memory: OpenAI Chat Model + Simple Memory for context-aware, multi-step outputs.

## How it works (node map)
1. When chat message received → collects your prompt and routes it to the agent.
2. AI Agent (system prompt = "helpful email assistant") → orchestrates tools via the MCP Clients.
3. OpenAI Chat Model → reasoning/planning for research or email drafting.
4. Simple Memory → keeps recent chat context for follow-ups.
5. News MCP Server exposes the Tavily Tool (Search) and Perplexity Tool (Ask) for up-to-date findings.
6. Email MCP Server exposes the Gmail Tool (To, Subject, Message via AI fields) to send or draft emails.

The MCP Clients (News/Email) plug into the Agent, so a single chat prompt can research and then draft/send emails in one flow.

## Requirements
- n8n (Cloud or self-hosted)
- OpenAI API key for the Chat Model (set on the node)
- Tavily, Perplexity, and Gmail credentials (connected on their respective tool nodes)
- Publicly reachable MCP server endpoints (provided in the MCP Client nodes)

## Setup (quick start)
1. Import the template and open it in the editor.
2. Connect credentials on the OpenAI, Tavily, Perplexity, and Gmail tool nodes.
- Confirm the MCP endpoints in both MCP Client nodes (News/Email) and leave the transport as httpStreamable unless you have special requirements.
- Run the workflow. In chat, try:
  - "Find today's top stories on Kubernetes security and draft an intro email to Acme."
  - "Summarize the latest AI infra trends and email a 3-bullet update to my team."

**Inputs & outputs**
- Input: natural-language prompt via the chat trigger.
- Tools used: News MCP (Tavily + Perplexity), Email MCP (Gmail).
- Output: a researched summary and/or a drafted/sent email, returned in the chat and executed via Gmail when requested.

**Why teams will love it**
- One prompt → research + outreach: no tab-hopping between tools.
- Up-to-date answers: pulls current info through Tavily/Perplexity.
- Email finalization: converts findings into send-ready drafts via Gmail.
- Context-aware: memory keeps threads coherent across follow-ups.

**Pro tips**
- Use clear verbs in your prompt: "Research X, then email Y with Z takeaways."
- For safer runs, point Gmail to a test inbox first (or disable send and only draft).
- Add guardrails in the Agent's system message to match your voice and tone.
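To make the email side more concrete, here is a minimal sketch of how the agent's research output could be shaped into the Gmail Tool's AI-populated fields (To, Subject, Message). The function name, recipient, and field names are illustrative assumptions, not the template's actual internals:

```javascript
// Hypothetical helper: turn a list of research findings into the
// To/Subject/Message fields the Email MCP's Gmail Tool expects.
// The agent normally fills these via AI; this shows the target shape.
function buildGmailDraft(recipient, topic, findings) {
  const bullets = findings.map(f => `- ${f}`).join("\n");
  return {
    to: recipient,
    subject: `Research update: ${topic}`,
    message: `Hi,\n\nHere are the key findings on ${topic}:\n\n${bullets}\n\nBest,\nYour AI agent`
  };
}

const draft = buildGmailDraft("team@example.com", "AI infra trends", [
  "GPU capacity remains the main bottleneck",
  "Serverless inference adoption is growing",
  "Vector databases are consolidating"
]);
```

In the template itself, the agent decides these values from your chat prompt, so you only need a sketch like this if you want to post-process or validate drafts before sending.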
by Br1
**Who’s it for**

This workflow is designed for developers, data engineers, and AI teams who need to migrate a Pinecone Cloud index into a Weaviate Cloud class index without recalculating the vectors (embeddings). It’s especially useful if you are consolidating vector databases, moving from Pinecone to Weaviate for hybrid search, or preparing to deprecate Pinecone.

⚠️ Note: the dimensions of the two indexes must match.

**How it works**

The workflow automates the migration by batching, formatting, and transferring vectors along with their metadata:

1. Initialization: uses Airtable to store the pagination token. The token starts with a record initialized as INIT (Name=INIT, Number=0).
2. Pagination handling: reads batches of vector IDs from the Pinecone index using /vectors/list, resuming from the last stored token.
3. Vector fetching: for each batch, retrieves embeddings and metadata fields from Pinecone via /vectors/fetch.
4. Data transformation: two Code nodes (Prepare Fetch Body and Format2Weaviate) structure the body of each HTTP request and map metadata into Weaviate-compatible objects.
5. Data loading: inserts embeddings and metadata into the target Weaviate class through its REST API.
6. State persistence: updates the pagination token in Airtable, ensuring the next run resumes from the correct point.
7. Scheduling: the workflow runs on a defined schedule (e.g., every 15 seconds) until all data has been migrated.

**How to set up**

Airtable setup:
- Create a Base (e.g., Cycle) and a Table (e.g., NextPage). The table should have two columns:
  - Name (text) → stores the pagination token.
  - Number (number) → stores the row ID to update.
- Initialize the first and only row with (INIT, 0).

Source and target configuration:
- Make sure you have a Pinecone index and namespace with embeddings.
- Manually create a target Weaviate cluster and a target Weaviate class with the same vector dimensions.
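The data-transformation step can be sketched roughly as follows. This is a minimal, hypothetical version of what a Format2Weaviate Code node might do: map the vectors returned by Pinecone's /vectors/fetch response into objects for Weaviate's batch-objects REST endpoint. The exact property names used by the template's node are assumptions:

```javascript
// Sketch: convert a Pinecone /vectors/fetch response (shape:
// { vectors: { "<id>": { id, values, metadata } } }) into Weaviate
// batch objects. The embedding is reused as-is, never recalculated.
function format2Weaviate(fetchResponse, weaviateClass) {
  return Object.values(fetchResponse.vectors).map(v => ({
    class: weaviateClass,
    vector: v.values,
    // Keep the original metadata and preserve the Pinecone ID
    // (property name "pineconeId" is an illustrative choice).
    properties: { ...v.metadata, pineconeId: v.id }
  }));
}

const sample = {
  vectors: {
    "doc-1": { id: "doc-1", values: [0.1, 0.2, 0.3], metadata: { title: "Intro" } }
  }
};
const objects = format2Weaviate(sample, "MyClass");
```

The resulting array would then be wrapped as `{ "objects": [...] }` and posted to the Weaviate cluster's batch endpoint by the HTTP Request node.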
In the Parameters node of the workflow, configure the following values:

| Parameter | Description | Example Value |
|-----------|-------------|---------------|
| pineconeIndex | The name of your Pinecone index to read vectors from. | my-index |
| pineconeNamespace | The namespace inside the Pinecone index (leave empty if unused). | default |
| batchlimit | Number of records fetched per iteration. Higher = faster migration but heavier API calls. | 100 |
| weaviateCluster | REST endpoint of your Weaviate Cloud instance. | https://dbbqrc9itXXXXXXXXX.c0.europe-west3.gcp.weaviate.cloud |
| weaviateClass | Target class name in Weaviate where objects will be inserted. | MyClass |

Credentials:
- Configure Pinecone API credentials.
- Configure the Weaviate Bearer token.
- Configure the Airtable API key.

Activate:
Import the workflow into n8n, update the parameters, and start the schedule trigger.

**Requirements**
- Pinecone Cloud account with a configured index and namespace.
- Weaviate Cloud cluster with a class defined and matching vector dimensions.
- Airtable account and base to store pagination state.
- n8n instance with credentials for Pinecone, Weaviate, and Airtable.

**How to customize the workflow**
- Adjust the batchlimit parameter to control performance (higher values = fewer API calls, but heavier requests).
- Adapt the Format2Weaviate Code node if you want to change or expand the metadata stored.
- Replace Airtable with another persistence store (e.g., Google Sheets, PostgreSQL) if preferred.
- Extend the workflow to send migration progress updates via Slack, email, or another channel.
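The pagination-resume logic described above can be sketched as follows. This is an illustrative helper, assuming a Pinecone serverless-style index host; the "INIT" sentinel matches the Airtable initialization described in the setup, but the function and host names are hypothetical:

```javascript
// Sketch: build the next Pinecone /vectors/list request URL from the
// token stored in Airtable. "INIT" means no token yet (first run).
function buildListRequest(indexHost, namespace, batchLimit, storedToken) {
  const params = new URLSearchParams({ limit: String(batchLimit) });
  if (namespace) params.set("namespace", namespace);
  if (storedToken && storedToken !== "INIT") {
    // Resume listing from where the previous run stopped.
    params.set("paginationToken", storedToken);
  }
  return `https://${indexHost}/vectors/list?${params.toString()}`;
}

const first = buildListRequest("my-index-abc123.svc.pinecone.io", "default", 100, "INIT");
const next  = buildListRequest("my-index-abc123.svc.pinecone.io", "default", 100, "tok42");
```

After each batch, the workflow writes the `pagination.next` token from the list response back to Airtable, so a crash or restart simply resumes from the last stored token.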
by Amit Kumar
**Overview**

This workflow automatically generates short-form AI videos using both OpenAI Sora 2 Pro and Google Veo 3.1, enhances your idea with Google Gemini, and publishes content across multiple platforms through Blotato. It's perfect for creators, brands, UGC teams, and anyone building a high-frequency AI video pipeline. You can turn a single text idea into fully rendered videos, compare outputs from multiple AI models, and publish everywhere in one automated flow.

**Good to know**
- Generating Sora or Veo videos may incur API costs depending on your provider.
- Video rendering time varies by prompt complexity.
- Sora and Veo availability depends on region and account access.
- Blotato must be connected to your social accounts before publishing.
- The workflow includes toggles so you can turn Sora, Veo, or platforms on/off easily.

**How it works**
1. Your text idea enters through the Chat Trigger.
2. Google Gemini rewrites your idea into a detailed, high-quality video prompt.
3. The workflow splits into two branches:
   - Sora branch: generates a video via OpenAI Sora 2 Pro, downloads the MP4, and uploads/publishes to YouTube, TikTok, and Instagram.
   - Veo branch: generates a video using Google Veo 3.1 (via Wavespeed), retrieves the output link, emails it to you, and optionally uploads it to Blotato for publishing.
4. A Config – Toggles node lets you enable or disable models and platforms.
5. Optional Google Sheets logging can store video history and metadata.

**How to use**
- Send a message to the Chat Trigger to start the workflow.
- Adjust toggles to choose whether you want Sora, Veo, or both.
- Add or remove publishing platforms inside the Blotato nodes.
- Check your email for Veo results or monitor uploads on your social accounts.

Ideal for automation, batch content creation, and AI-powered video workflows.
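The toggle mechanism can be sketched like this: a minimal, hypothetical version of what the Config – Toggles node might return, with downstream IF nodes reading the flags to enable or skip each branch. The flag names are assumptions, not the template's actual field names:

```javascript
// Sketch: flags a Config-style node could emit. Downstream IF nodes
// check these to decide which model branches and platforms to run.
function getConfig() {
  return {
    enableSora: true,
    enableVeo: true,
    platforms: { youtube: true, tiktok: true, instagram: false }
  };
}

// Derive the list of active generation branches from the config.
function activeBranches(config) {
  const branches = [];
  if (config.enableSora) branches.push("sora");
  if (config.enableVeo) branches.push("veo");
  return branches;
}

const cfg = getConfig();
```

Keeping all switches in one node like this means you can disable an expensive model (or a platform upload) by flipping a single value instead of rewiring the workflow.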
**Requirements**
- **Google Gemini** API key (for prompt enhancement)
- **OpenAI Sora 2** API key
- **Wavespeed (Veo 3.1)** API key
- **Blotato** account + connected YouTube/TikTok/Instagram channels
- **Gmail OAuth2** (for sending video result emails)
- **Google Sheets** (optional logging)

**Customizing this workflow**
- Add a title/description generator for YouTube Shorts.
- Insert a thumbnail generator (image AI model).
- Extend logging with Sheets or a database.
- Add additional platforms supported by Blotato.
- Use different prompt strategies for cinematic, viral, or niche content styles.