by panyanyany
## Overview

This n8n workflow automatically converts and enhances multiple photos into professional ID-style portraits using Gemini AI (Nano Banana). It processes images in batch from Google Drive, applies professional ID photo standards (proper framing, neutral background, professional attire), and outputs the enhanced photos back to Google Drive.

- **Input:** Google Drive folder with photos
- **Output:** Professional ID-style portraits in a Google Drive output folder

The workflow uses a simple form interface where users provide Google Drive folder URLs and an optional custom prompt. It automatically fetches all images from the input folder, processes each through the Defapi API with Google's nano-banana model, monitors generation status, and uploads finished photos to the output folder. Perfect for HR departments, recruitment agencies, or anyone needing professional ID photos in bulk.

## Prerequisites

- A Defapi account and API key (Bearer token configured in n8n credentials): sign up at Defapi.org
- An active n8n instance with Google Drive integration
- A Google Drive account with two public folders:
  - Input folder: contains photos to be processed (must be set to public/anyone with the link)
  - Output folder: where enhanced photos will be saved (must be set to public/anyone with the link)
- Photos with clear faces (headshots or upper-body shots work best)

## Setup Instructions

### 1. Prepare Google Drive Folders

- Create two Google Drive folders:
  - One for input photos (e.g., https://drive.google.com/drive/folders/xxxxxxx)
  - One for output photos (e.g., https://drive.google.com/drive/folders/yyyyyy)
- Important: make both folders **public** (set sharing to "Anyone with the link can view"): right-click the folder → Share → change "Restricted" to "Anyone with the link"
- Upload photos to the input folder (supported formats: .jpg, .jpeg, .png, .webp)

### 2. Configure n8n Credentials

- **Defapi API**: Add an HTTP Bearer Auth credential with your Defapi API token (credential name: "Defapi account")
- **Google Drive**: Connect your Google Drive OAuth2 account (credential name: "Google Drive account"). See https://docs.n8n.io/integrations/builtin/credentials/google/oauth-generic/

### 3. Run the Workflow

- Execute the workflow in n8n and access the form submission URL
- Fill in the form:
  - **Google Drive - Input Folder URL**: paste your input folder URL
  - **Google Drive - Output Folder URL**: paste your output folder URL
  - **Prompt (optional)**: customize the AI generation prompt or leave blank to use the default
### 4. Monitor Progress

The workflow will:

1. Fetch all images from the input folder
2. Process each image through the AI model
3. Wait for generation to complete (checking every 10 seconds)
4. Download the enhanced photos and upload them to the output folder

## Workflow Structure

The workflow consists of the following nodes:

- **On form submission** (Form Trigger) - collects the Google Drive folder URLs and optional prompt
- **Search files and folders** (Google Drive) - retrieves all files from the input folder
- **Code in JavaScript** (Code Node) - prepares the image data and prompt for the API request
- **Send Image Generation Request to Defapi.org API** (HTTP Request) - submits a generation request for each image
- **Wait for Image Processing Completion** (Wait Node) - waits 10 seconds before checking status
- **Obtain the generated status** (HTTP Request) - polls the API for completion status
- **Check if Image Generation is Complete** (IF Node) - checks whether the status is no longer "pending"
- **Format and Display Image Results** (Set Node) - formats the result with markdown and the image URL
- **HTTP Request** (HTTP Request) - downloads the generated image file
- **Upload file** (Google Drive) - uploads the enhanced photo to the output folder

## Default Prompt

The workflow uses this professional ID photo generation prompt by default:

> Create a professional portrait suitable for ID documentation with proper spacing and composition.
> Framing: Include the full head, complete shoulder area, and upper torso. Maintain generous margins around the subject without excessive cropping.
> Outfit: Transform the existing attire into light business-casual clothing appropriate for the individual's demographics and modern style standards. Ensure the replacement garment appears natural, properly tailored, and complements the subject's overall presentation (such as professional shirt, refined blouse, contemporary blazer, or sophisticated layered separates).
> Pose & Gaze: Position shoulders square to the camera, maintaining perfect frontal alignment. Direct the gaze straight ahead into the lens at identical eye height, avoiding any angular deviation in vertical or horizontal planes.
> Expression: Display a professional neutral demeanor or subtle closed-lip smile that conveys confidence and authenticity.
> Background: Utilize a solid, consistent light gray photographic background (color code: #d9d9d9) without any pattern, texture, or tonal variation.
> Lighting & Quality: Apply balanced studio-quality illumination eliminating harsh contrast or reflective artifacts. Deliver maximum resolution imagery with precise focus and accurate natural skin color reproduction.
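For reference, here is a minimal sketch of what the "Code in JavaScript" node might do: pairing each Google Drive file with the form's prompt (or the default above) before the Defapi request. The `image_url` field name and the direct-download URL pattern are illustrative assumptions, not the template's exact implementation:

```javascript
// Minimal sketch of the "Code in JavaScript" node (assumed field names).
// Builds one request item per Drive file: a public image URL plus the prompt.
const DEFAULT_PROMPT =
  'Create a professional portrait suitable for ID documentation...'; // truncated; use the full default prompt above

const form = $('On form submission').first().json;
const prompt = (form.Prompt || '').trim() || DEFAULT_PROMPT;

return $input.all().map(item => {
  const fileId = item.json.id; // file ID from "Search files and folders"
  return {
    json: {
      prompt,
      // The direct-download URL works because the folder is shared publicly.
      image_url: `https://drive.google.com/uc?export=download&id=${fileId}`,
    },
  };
});
```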
## Customization Tips for Different ID Photo Types

Based on the default prompt structure, here are specific customization points for different use cases:

### 1. Passport & Visa Photos

Key requirements: most countries require a white or light-colored background and a neutral expression with no smile.

Prompt modifications:

- **Background**: change to "Plain white background (#ffffff)" or "Light cream background (#f5f5f5)"
- **Expression**: change to "Completely neutral expression, no smile, mouth closed, serious but not tense"
- **Framing**: add "Head size should be 70-80% of the frame height. Top of head to chin should be prominent"
- **Outfit**: change to "Replace with dark formal suit jacket and white collared shirt" or "Navy blue blazer with light shirt"
- **Additional**: add "No glasses glare, ears must be visible, no hair covering the face"

### 2. Corporate Employee ID / Work Badge

Key requirements: professional but approachable, company-appropriate attire.

Prompt modifications:

- **Background**: use a company color or standard #e6f2ff (light blue) / #f0f0f0 (light gray)
- **Expression**: keep "Soft closed-mouth smile — confident and approachable"
- **Outfit**: change to a specific dress code:
  - Corporate: dark business suit with tie for men, blazer with blouse for women
  - Tech/Startup: smart-casual polo shirt or button-down shirt without tie
  - Creative: clean, professional casual clothing that reflects company culture
- **Framing**: use the default or add "Upper chest visible with company badge area clear"

### 3. University/School Student ID

Key requirements: friendly, youthful, appropriate for an educational setting.

Prompt modifications:

- **Background**: use school colors or "Light blue (#e3f2fd)" / "Soft gray (#f5f5f5)"
- **Expression**: change to "Friendly natural smile or pleasant neutral expression"
- **Outfit**: change to "Replace with clean casual clothing — collared shirt, polo, or neat sweater. No logos or graphics"
- **Framing**: keep the default
- **Additional**: add "Youthful, fresh appearance suitable for educational environment"

### 4. Driver's License / Government ID

Key requirements: strict standards, neutral expression, specific background colors.

Prompt modifications:

- **Background**: check local requirements — often "White (#ffffff)", "Light gray (#d9d9d9)", or "Light blue (#e6f2ff)"
- **Expression**: change to "Neutral expression, no smile, mouth closed, eyes fully open"
- **Outfit**: use "Replace with everyday casual or business casual clothing — collared shirt or neat top"
- **Framing**: add "Head centered, face taking up 70-80% of frame, ears visible"
- **Additional**: add "No glasses (or non-reflective lenses), no headwear except for religious purposes, natural hair"

### 5. Professional LinkedIn / Resume Photo

Key requirements: polished, confident, approachable.

Prompt modifications:

- **Background**: use "Soft gray (#d9d9d9)" or "Professional blue gradient (#e3f2fd to #bbdefb)"
- **Expression**: keep "Confident, warm smile — professional yet approachable"
- **Outfit**: change to:
  - Executive: premium business suit, crisp white shirt, tie optional
  - Professional: tailored blazer over a collared shirt or elegant blouse
  - Creative: smart business casual with modern, well-fitted clothing
- **Framing**: change to "Show head, full shoulders, and upper chest. Slightly more relaxed framing than a strict ID photo"
- **Lighting**: add "Soft professional lighting with slight catchlight in eyes to appear engaging"

### 6. Medical/Healthcare Professional Badge

Key requirements: clean, trustworthy, professional medical appearance.

Prompt modifications:

- **Background**: use "Clinical white (#ffffff)" or "Soft medical blue (#e3f2fd)"
- **Expression**: change to "Calm, reassuring expression with gentle smile"
- **Outfit**: change to "Replace with clean white lab coat over professional attire" or "Medical scrubs in appropriate color (navy, ceil blue, or teal)"
- **Additional**: add "Hair neatly pulled back if long, clean professional appearance, no flashy jewelry"

### 7. Gym/Fitness Membership Card

Key requirements: casual, recognizable, suitable for an athletic environment.

Prompt modifications:

- **Background**: use "Bright white (#ffffff)" or a gym brand color
- **Expression**: change to "Natural friendly smile or neutral athletic expression"
- **Outfit**: change to "Replace with athletic wear — sports polo, performance t-shirt, or athletic jacket in solid colors"
- **Framing**: keep the default
- **Additional**: add "Casual athletic appearance, hair neat"
## General Customization Parameters

Background color options:

- White: #ffffff (passport, visa, formal government IDs)
- Light gray: #d9d9d9 (default, versatile for most purposes)
- Light blue: #e6f2ff (corporate, professional)
- Cream: #f5f5dc (warm professional)
- Soft blue-gray: #eceff1 (modern corporate)

Expression variations:

- **Strict Neutral**: "Completely neutral expression, no smile, mouth closed, serious but relaxed"
- **Soft Smile**: "Very soft closed-mouth smile — confident and natural" (default)
- **Friendly Smile**: "Warm natural smile with slight teeth showing — approachable and professional"
- **Calm Professional**: "Calm, composed expression with slight pleasant demeanor"

Clothing formality levels:

- **Formal**: "Dark suit, white dress shirt, tie for men / tailored suit or blazer with professional blouse for women"
- **Business Casual** (default): "Light business-casual outfit — clean shirt/blouse, lightweight blazer, or smart layers"
- **Smart Casual**: "Collared shirt, polo, or neat sweater in solid professional colors"
- **Casual**: "Clean, neat casual top — solid color t-shirt, casual button-down, or simple blouse"

Framing adjustments:

- **Tight Crop**: "Head and shoulders only, face fills 80% of frame" (passport style)
- **Standard Crop** (default): "Entire head, full shoulders, and upper chest with balanced space"
- **Relaxed Crop**: "Head, shoulders, and chest visible, with more background space for professional portraits"
by Yang
## Who's it for

This template is perfect for digital agencies, SDRs, lead generators, or outreach teams that want to automatically convert LinkedIn company profiles into high-quality cold emails. If you spend too much time researching and writing outreach messages, this workflow does the heavy lifting for you.

## What it does

Once a LinkedIn company profile URL is submitted via a web form, the workflow:

1. Scrapes detailed company data using Dumpling AI
2. Enriches the contact (email, name, country) using Dropcontact
3. Sends the company data and contact info to GPT-4, which generates:
   - A personalized subject line (max 8 words)
   - A short HTML cold email (4–6 sentences)
4. Sends the cold email via Gmail
5. Logs the lead details to Airtable for tracking

All AI-generated content follows strict formatting and tone guidelines, ensuring it's professional, human, and clean.

## How it works

- **Form Trigger**: collects the LinkedIn URL
- **Dumpling AI**: extracts the company name, description, size, location, website, etc.
- **Dropcontact**: finds the contact's email and name based on the enriched company details
- **GPT-4**: writes a structured cold email and subject line in JSON format
- **Gmail**: sends the personalized email to a fixed recipient
- **Airtable**: logs the lead into a specified base/table for follow-up

## Requirements

✅ Dumpling AI API key (stored in HTTP header credentials)
✅ Dropcontact API key
✅ OpenAI GPT-4 credentials
✅ Gmail account (OAuth2)
✅ Airtable base & table set up with at least these fields: Name, LinkedIn Company URL, People, website

## How to customize

- Modify the GPT prompt to reflect your brand tone or service offering
- Replace Gmail with Slack, Outlook, or another communication tool
- Add a "review and approve" step before sending emails
- Add logic to avoid duplicates (e.g., check Airtable first)

> This workflow lets you go from LinkedIn profile to inbox-ready cold email in less than a minute—with full AI support.
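Since GPT-4 returns its result as JSON, downstream nodes need to read the subject line and HTML body out of that output. A minimal sketch of what that could look like in an n8n Code node; the `subject`/`html` key names are illustrative assumptions, not the template's exact schema:

```javascript
// Sketch of parsing the GPT-4 JSON output (assumed key names).
const raw = $input.first().json.message?.content ?? '';

// Models sometimes wrap JSON in ```json fences; strip them before parsing.
const cleaned = raw.replace(/```json|```/g, '').trim();
const email = JSON.parse(cleaned);

if (!email.subject || !email.html) {
  throw new Error('GPT output is missing "subject" or "html"');
}

return [{ json: { subject: email.subject, html: email.html } }];
```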
by Sahil Sunny
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow allows users to extract sitemap links using the ScrapingBee API. It only needs the domain name (e.g., www.example.com) and automatically checks robots.txt and sitemap.xml to find the links. It is also designed to run recursively when new .xml links are found while scraping the sitemap.

## How It Works

- **Trigger**: the workflow waits for a webhook request that contains domain=www.example.com
- It then looks for the robots.txt file; if it is not found, it checks sitemap.xml
- Once it finds xml links, it scrapes them recursively to extract the website links
- For each xml file, it first checks whether the response is a binary file and whether it is a compressed xml:
  - If it's a text response, it directly runs one code snippet that extracts normal website links and another that extracts xml links
  - If it's an uncompressed binary, it extracts the text from the binary and then extracts the website links and xml links
  - If it's a compressed binary, it first decompresses it, then extracts the text, and from that the website and xml links
- After extracting website links, it appends those links directly to a sheet
- After extracting xml links, it scrapes them recursively until it has found all website links

When the workflow finishes, you will see the output in the links column of the Google Sheet added to the workflow.

## Set Up Steps

1. Get your ScrapingBee API key here
2. Create a new Google Sheet with an empty column named links. Connect to the sheet by signing in with your Google credential and add the link to your sheet.
3. Copy the webhook URL and send a GET request with domain as a query parameter. Example: curl "https://webhook_link?domain=scrapingbee.com"

## Customisation Options

- If the website you are scraping blocks your requests, try using a premium or stealth proxy in the Scrape robots.txt file, Scrape sitemap.xml file, and Scrape xml file nodes.
- If you wish to store the data in a different app/tool or store it as a file, just replace the Append links to sheet node with a relevant node.

## Next Steps

If you wish to scrape the pages behind the extracted links, you can implement a new workflow that reads the sheet or file (the output generated by this workflow), and for each link sends a request to ScrapingBee's HTML API and saves the returned data.

NOTE: Some heavy sitemaps could cause a crash if the workflow consumes more memory than is available on your n8n plan or self-hosted system. If this happens, we recommend either upgrading your plan or using a self-hosted instance with more memory.
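As a reference for the two extraction snippets, here is a minimal sketch of pulling page links and nested sitemap links out of sitemap XML in an n8n Code node; the exact regex and field names in the template may differ:

```javascript
// Sketch of the two extraction steps: pull <loc> URLs out of sitemap XML,
// then split them into nested .xml sitemaps and regular page links.
const xml = $input.first().json.data ?? '';

const locs = [...xml.matchAll(/<loc>\s*(.*?)\s*<\/loc>/g)].map(m => m[1]);

const xmlLinks = locs.filter(url => /\.xml(\.gz)?(\?|$)/i.test(url)); // scraped recursively
const pageLinks = locs.filter(url => !xmlLinks.includes(url));        // appended to the sheet

return [{ json: { pageLinks, xmlLinks } }];
```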
by IranServer.com
# Monitor VPS security with AI analysis via SSH and Telegram alerts

This n8n template automatically monitors your VPS for suspicious processes and network connections using AI analysis. It connects to your server via SSH, analyzes running processes, and sends Telegram alerts when potential security threats are detected.

## Who's it for

- System administrators managing VPS/dedicated servers
- DevOps teams monitoring production environments
- Security-conscious users who want automated threat detection
- Anyone running services on Linux servers who wants proactive monitoring

## How it works

The workflow runs on a schedule and performs the following steps:

1. **SSH Connection**: connects to your VPS via SSH and executes system commands to gather process and network information
2. **Data Collection**: runs ps aux --sort=-%cpu,-%mem && ss -tulpn to capture running processes sorted by CPU/memory usage, plus active network connections
3. **AI Analysis**: uses OpenAI's language model to analyze the collected data for suspicious patterns, malware signatures, unusual network connections, or abnormal resource usage
4. **Structured Output**: parses AI responses into structured data identifying malicious and suspicious activities with explanations
5. **Alert System**: sends immediate Telegram notifications when malicious processes are detected

## Requirements

- **SSH access** to your VPS with valid credentials
- **OpenAI API key** for AI analysis (uses the GPT-4 mini model)
- **Telegram bot** and chat ID for receiving alerts
- A Linux-based VPS or server to monitor

## How to set up

1. **Configure SSH credentials**: set up the SSH connection to your VPS in the "Execute a command" node
2. **Add an OpenAI API key**: configure your OpenAI credentials in the "OpenAI Chat Model" node
3. **Set up the Telegram bot**:
   - Create a Telegram bot and get the API token
   - Get your Telegram chat ID
   - Update admin_telegram_id in the "Edit Fields" node with your chat ID
   - Configure Telegram credentials in the "Send a text message" node
4. **Adjust the schedule**: modify the "Schedule Trigger" to set your preferred monitoring frequency
5. **Test the workflow**: run a manual execution to ensure all connections work properly

## How to customize the workflow

- **Change monitoring frequency**: adjust the schedule trigger interval (hourly, daily, etc.)
- **Modify analysis criteria**: update the AI prompt in the "Basic LLM Chain" to focus on specific security concerns
- **Add more commands**: extend the SSH command to include additional system information such as disk usage, log entries, or specific service status
- **Multiple servers**: duplicate the SSH execution nodes to monitor multiple VPS instances
- **Different alert channels**: replace or supplement Telegram with email, Slack, or Discord notifications
- **Custom filtering**: add conditions to filter out known-safe processes or focus on specific suspicious patterns

## Good to know

- The AI model analyzes both running processes and network connections for comprehensive monitoring
- Each analysis request costs approximately $0.001-0.01 USD depending on system activity
- The workflow only sends alerts when malicious or suspicious activity is detected, reducing notification noise
- The SSH commands require appropriate permissions on the target server
- Consider running this workflow from a secure, always-on n8n instance for continuous monitoring
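To give a concrete picture of the structured-output step, here is a hedged example of the kind of shape the parser might enforce on the AI's analysis; the field names are assumptions, and the template's actual schema may differ:

```javascript
// Illustrative example of a structured-output shape for the AI analysis
// (assumed field names; the template's parser schema may differ).
const exampleAnalysis = {
  malicious: [
    { process: 'xmrig', pid: 4123, reason: 'Known cryptominer binary consuming 95% CPU' },
  ],
  suspicious: [
    { process: 'nc -lvp 4444', pid: 5210, reason: 'Netcat listener on an unusual port' },
  ],
};

// The alert logic only needs to fire when something malicious was found:
const shouldAlert = exampleAnalysis.malicious.length > 0;
console.log(shouldAlert ? 'Send Telegram alert' : 'No alert needed');
```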
by Javier Rieiro
## Short description

Automates the collection, technical extraction, and automatic generation of Nuclei templates from public CVE PoCs. Converts verified PoCs into reproducible detection templates ready for testing and distribution.

## Purpose

- Provide a reliable pipeline that turns public proof-of-concept data into usable detection artifacts.
- Reduce the manual work involved in finding PoCs, extracting exploit details, validating sources, and building Nuclei templates.

## How it works (technical summary)

1. Runs a scheduled SSH job that executes vulnx with filters for recent, high-severity PoCs.
2. Parses the raw vulnx output and splits it into individual CVE entries.
3. Extracts structured fields: CVE ID, severity, title, summary, risk, remediation, affected products, PoCs, and references.
4. Extracts URLs from the PoC sections using a regex.
5. Validates each URL with HTTP requests; invalid or unreachable links are logged and skipped.
6. Uses an AI agent (OpenAI via LangChain) to extract technical artifacts: exploit steps, payloads, endpoints, raw HTTP requests/responses, parameters, and reproduction notes. The prompt forces technical-only output.
7. Sends the extracted technical content to the ProjectDiscovery Cloud API to generate Nuclei templates.
8. Validates the AI and API responses. Accepted templates are saved to a configured Google Drive folder.
9. Produces JSON records and logs for each processed CVE and URL.

## Output

- Nuclei templates in ProjectDiscovery format (YAML), stored in Google Drive.
- Structured JSON per CVE with metadata and extracted technical details.
- Validation logs for URL checks, AI extraction, and template generation.

## Intended audience

- Bug bounty hunters
- Security researchers and threat-intel teams
- Automation engineers who need reproducible detection templates

## Setup & requirements

- n8n instance with the workflow imported.
- SSH access to a host with vulnx installed.
- OpenAI API key for technical extraction.
- ProjectDiscovery API key for template generation.
- Google Drive OAuth2 credentials for storing templates.
- Configure the schedule trigger and the target Google Drive folder ID.

## Security and usage notes

- Performs static extraction and validation only. No active exploitation.
- Processes only PoCs that meet the configured filters (e.g., CVSS > 6).
- Use responsibly. Do not target systems you do not own or have explicit permission to test.
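As an illustration of step 4, here is a minimal sketch of extracting and de-duplicating URLs from a PoC section in an n8n Code node; the template's actual regex may be stricter, and the `pocText` field name is an assumption:

```javascript
// Sketch of the URL-extraction step: pull http(s) links out of the PoC text
// and de-duplicate them before validation (assumed field name `pocText`).
const pocText = $input.first().json.pocText ?? '';

const urlRegex = /https?:\/\/[^\s"'<>)\]]+/g;
const urls = [...new Set(pocText.match(urlRegex) ?? [])];

// One item per URL so each can be validated with its own HTTP request.
return urls.map(url => ({ json: { url } }));
```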
by Julian Reich
This n8n template demonstrates how to automatically convert voice messages from Telegram into structured, searchable notes in Google Docs using AI transcription and intelligent tagging.

Use cases are many: try capturing ideas on the go while walking, recording meeting insights hands-free, creating voice journals, or building a personal knowledge base from spoken thoughts!

## Good to know

- OpenAI Whisper transcription costs approximately $0.006 per minute of audio
- ChatGPT tagging adds roughly $0.001-0.003 per message depending on length
- The workflow supports both German and English voice recognition
- Text messages are also supported; they bypass transcription and go directly to AI tagging
- Perfect companion: combine with the **Weekly AI Review** workflow for automated weekly summaries of all your notes!

## How it works

1. Telegram receives your voice message or text and triggers the workflow
2. An IF node detects whether you sent audio or text content
3. For voice messages: Telegram downloads the audio file and OpenAI Whisper transcribes it to text
4. For text messages: the content is passed directly to the next step
5. ChatGPT analyzes the content and generates up to 3 relevant keywords (Work, Ideas, Private, Health, etc.)
6. A function node formats everything with Swiss timestamps, message-type indicators, and a clean structure
7. The formatted entry is automatically inserted into your Google Doc with the date, keywords, and full content
8. Telegram sends you a confirmation with the transcribed/original text so you can verify accuracy

## How to use

- Simply send a voice message or text to your Telegram bot; the workflow handles everything automatically
- Manual execution can be used for testing, but in production this runs on every message
- Voice messages work best with clear speech in quiet environments for optimal transcription

## Requirements

- Telegram bot token and a configured webhook
- OpenAI API account for Whisper transcription and ChatGPT tagging
- Google Docs API access for document writing
- A dedicated Google Doc where all notes will be collected

## Customising this workflow

- Adjust the AI prompt to use different tagging categories relevant to your workflow (e.g., project names, priorities, emotions)
- Add multiple Google Docs for different contexts (work vs. private notes)
- Include additional processing like sentiment analysis or automatic task extraction
- Connect to other apps like Notion, Obsidian, or your preferred note-taking system

And don't forget to also implement the complementary workflow Weekly AI Review!
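A minimal sketch of what the formatting function node (step 6) might do, using the Swiss locale for timestamps; the indicator emoji, field names, and layout are illustrative assumptions:

```javascript
// Sketch of the formatting step (assumed layout): Swiss timestamp,
// message-type indicator, keywords, and the note content.
const { text, keywords, isVoice } = $input.first().json;

const timestamp = new Date().toLocaleString('de-CH', {
  timeZone: 'Europe/Zurich',
  dateStyle: 'short',
  timeStyle: 'short',
});

const entry = [
  `${timestamp} ${isVoice ? '🎙️ Voice' : '✍️ Text'}`,
  `Keywords: ${(keywords ?? []).join(', ')}`,
  text,
].join('\n');

return [{ json: { entry } }];
```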
by Matt Chong
## Who is this for?

If you're overwhelmed with incoming emails but only want to be notified about the essentials, this workflow is for you. It's perfect for busy professionals who want a short AI summary of new emails delivered directly to Slack.

## What does it solve?

Reading every email wastes time. This workflow filters out the noise by:

- Automatically summarizing each unread Gmail email using AI
- Sending you just the sender and a short summary in Slack
- Helping you stay focused without missing key information

## How it works

Every minute, the workflow checks Gmail for unread emails. When it finds one, it:

1. Extracts the email content
2. Sends it to OpenAI's GPT model for a 250-character summary
3. Delivers the message directly to Slack

## How to set up

1. Connect your accounts:
   - Gmail (OAuth2)
   - OpenAI (API key or connected account)
   - Slack (OAuth2)
2. Edit the Slack node: choose the Slack user/channel to send alerts to
3. Optional: adjust the AI prompt in the "AI Agent" node to modify the summary style
4. Optional: change the polling frequency in the Gmail Trigger node

## How to customize this workflow to your needs

- Edit the AI prompt to:
  - Highlight urgency
  - Include specific keywords
  - Extend or reduce the summary length
- Modify the Slack message format (add emojis, tags, or links)
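For orientation when customizing the message format, a minimal sketch of composing the Slack text from the sender and summary in a Code node; the 250-character cap follows the description above, while the input field names are assumptions:

```javascript
// Sketch of building the Slack alert text (assumed input field names).
const { sender, summary } = $input.first().json;

// Keep the summary within the 250-character target described above.
const short = summary.length > 250 ? summary.slice(0, 247) + '...' : summary;

const slackText = `*New email from ${sender}*\n${short}`;
return [{ json: { slackText } }];
```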
by Matt Chong
## Who is this for?

If you're going on vacation or away from work and want your Gmail to respond to emails intelligently while you're out, this workflow is for you. It's perfect for freelancers, professionals, and teams who want a smarter, more personal out-of-office reply powered by AI.

## What does it solve?

No more generic autoresponders or missed urgent emails. This AI-powered workflow:

- Writes short, polite, and personalized replies while you're away
- Skips replying to newsletters, bots, or spam
- Helps senders move forward by offering an alternate contact
- Works around your specific time zone and schedule

## How it works

1. The workflow runs on a schedule (e.g., every 15 minutes)
2. It checks whether you are currently out of office (based on your defined start and end dates)
3. If you are, it looks for unread Gmail messages
4. For each email:
   - It uses AI to decide whether a reply is needed
   - If yes, it generates a short, friendly out-of-office reply using your settings
   - It sends the reply and labels the email to avoid duplicate replies

## How to set up

1. In the Set node:
   - Define your out-of-office start and end times in ISO 8601 format (e.g., 2025-08-19T07:00:00+02:00)
   - Set your timezone (e.g., Europe/Madrid)
   - Add your backup contact's name and email
2. In the Gmail nodes:
   - Connect your Gmail account using OAuth2 credentials
   - Replace the label ID in the final Gmail node with your own label (e.g., "Auto-Replied")
3. In the Schedule Trigger node: set how often the workflow should check for new emails (e.g., every 15 minutes)

## How to customize this workflow to your needs

- Adjust the prompt in the AI Agent node to change the tone or add more rules
- Switch to a different timezone or update the return dates as needed

This workflow keeps you professional even while you're offline and saves you from coming back to an email mess.
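The date check in step 2 is straightforward to express in a Code or IF node. A minimal sketch, assuming the Set node exposes `oooStart` and `oooEnd` in the ISO 8601 form described above:

```javascript
// Sketch of the out-of-office window check (assumed field names
// `oooStart`/`oooEnd`, set in ISO 8601 format with a UTC offset).
const { oooStart, oooEnd } = $input.first().json;

const now = Date.now();
const isOutOfOffice =
  now >= Date.parse(oooStart) && now <= Date.parse(oooEnd);

// A downstream IF node can branch on this flag.
return [{ json: { isOutOfOffice } }];
```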
by Automate With Marc
# AI Agent MCP for Email & News Research

Build a chat-first, MCP-powered research and outreach agent. This workflow lets you ask questions in an n8n chat, then the agent researches news (via Tavily + Perplexity through an MCP server) and drafts emails (via Gmail through a separate MCP server). It uses OpenAI for reasoning and short-term memory for coherent, multi-turn conversations.

Watch build-along videos for workflows like these at: www.youtube.com/@automatewithmarc

## What this template does

- **Chat-native trigger**: start a conversation and ask for research or an email draft.
- **MCP client tools**: the agent talks to two MCP servers — one for email work, one for news research.
- **News research stack**: uses Tavily (search) and Perplexity (LLM retrieval/answers) behind a News MCP server.
- **Email stack**: uses the Gmail Tool to generate and send messages via an Email MCP server.
- **Reasoning + memory**: OpenAI Chat Model + Simple Memory for context-aware, multi-step outputs.

## How it works (node map)

1. **When chat message received** → collects your prompt and routes it to the agent.
2. **AI Agent** (system prompt = "helpful email assistant") → orchestrates tools via the MCP Clients.
3. **OpenAI Chat Model** → reasoning/planning for research or email drafting.
4. **Simple Memory** → keeps recent chat context for follow-ups.
5. **News MCP Server** exposes the Tavily Tool (Search) and Perplexity Tool (Ask) for up-to-date findings.
6. **Email MCP Server** exposes the Gmail Tool (To, Subject, Message via AI fields) to send or draft emails.

The MCP Clients (News/Email) plug into the Agent, so a single chat prompt can research and then draft/send emails in one flow.

## Requirements

- n8n (Cloud or self-hosted)
- OpenAI API key for the Chat Model (set on the node)
- Tavily, Perplexity, and Gmail credentials (connected on their respective tool nodes)
- Publicly reachable MCP server endpoints (provided in the MCP Client nodes)

## Setup (quick start)

1. Import the template and open it in the editor.
2. Connect credentials on the OpenAI, Tavily, Perplexity, and Gmail tool nodes.
3. Confirm the MCP endpoints in both MCP Client nodes (News/Email) and leave the transport as httpStreamable unless you have special requirements.
4. Run the workflow. In chat, try:
   - "Find today's top stories on Kubernetes security and draft an intro email to Acme."
   - "Summarize the latest AI infra trends and email a 3-bullet update to my team."

## Inputs & outputs

- **Input**: a natural-language prompt via the chat trigger.
- **Tools used**: News MCP (Tavily + Perplexity), Email MCP (Gmail).
- **Output**: a researched summary and/or a drafted/sent email, returned in the chat and executed via Gmail when requested.

## Why teams will love it

- **One prompt → research + outreach**: no tab-hopping between tools.
- **Up-to-date answers**: pulls current info through Tavily/Perplexity.
- **Email finalization**: converts findings into send-ready drafts via Gmail.
- **Context-aware**: memory keeps threads coherent across follow-ups.

## Pro tips

- Use clear verbs in your prompt: "Research X, then email Y with Z takeaways."
- For safer runs, point Gmail to a test inbox first (or disable send and only draft).
- Add guardrails in the Agent's system message to match your voice/tone.
by Br1
## Who's it for

This workflow is designed for developers, data engineers, and AI teams who need to migrate a Pinecone Cloud index into a Weaviate Cloud class index without recalculating the vectors (embeddings). It's especially useful if you are consolidating vector databases, moving from Pinecone to Weaviate for hybrid search, or preparing to deprecate Pinecone.

⚠️ Note: the dimensions of the two indexes must match.

## How it works

The workflow automates the migration by batching, formatting, and transferring vectors along with their metadata:

1. **Initialization** – uses Airtable to store the pagination token. The token starts with a record initialized as INIT (Name=INIT, Number=0).
2. **Pagination handling** – reads batches of vector IDs from the Pinecone index using /vectors/list, resuming from the last stored token.
3. **Vector fetching** – for each batch, retrieves embeddings and metadata fields from Pinecone via /vectors/fetch.
4. **Data transformation** – two Code nodes (Prepare Fetch Body and Format2Weaviate) correctly structure the body of each HTTP request and map metadata into Weaviate-compatible objects.
5. **Data loading** – inserts embeddings and metadata into the target Weaviate class through its REST API.
6. **State persistence** – updates the pagination token in Airtable, ensuring the next run resumes from the correct point.
7. **Scheduling** – the workflow runs on a defined schedule (e.g., every 15 seconds) until all data has been migrated.

## How to set up

### Airtable setup

- Create a Base (e.g., Cycle) and a Table (e.g., NextPage). The table should have two columns:
  - Name (text) → stores the pagination token.
  - Number (number) → stores the row ID to update.
- Initialize the first and only row with (INIT, 0).

### Source and target configuration

- Make sure you have a Pinecone index and namespace with embeddings.
- Manually create a target Weaviate cluster and a target Weaviate class with the same vector dimensions.
- In the Parameters node of the workflow, configure the following values:

| Parameter | Description | Example Value |
|---------------------|----------------------------------------------------------------------------------------------|---------------|
| pineconeIndex | The name of your Pinecone index to read vectors from. | my-index |
| pineconeNamespace | The namespace inside the Pinecone index (leave empty if unused). | default |
| batchlimit | Number of records fetched per iteration. Higher = faster migration but heavier API calls. | 100 |
| weaviateCluster | REST endpoint of your Weaviate Cloud instance. | https://dbbqrc9itXXXXXXXXX.c0.europe-west3.gcp.weaviate.cloud |
| weaviateClass | Target class name in Weaviate where objects will be inserted. | MyClass |

### Credentials

- Configure Pinecone API credentials.
- Configure the Weaviate Bearer token.
- Configure the Airtable API key.

### Activate

Import the workflow into n8n, update the parameters, and start the schedule trigger.

## Requirements

- Pinecone Cloud account with a configured index and namespace.
- Weaviate Cloud cluster with a class defined and matching vector dimensions.
- Airtable account and base to store pagination state.
- n8n instance with credentials for Pinecone, Weaviate, and Airtable.

## How to customize the workflow

- Adjust the batchlimit parameter to control performance (higher values = fewer API calls, but heavier requests).
- Adapt the Format2Weaviate Code node if you want to change or expand the metadata stored (a minimal sketch of this mapping appears below).
- Replace Airtable with another persistence store (e.g., Google Sheets, PostgreSQL) if preferred.
- Extend the workflow to send migration progress updates via Slack, email, or another channel.
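For orientation when adapting it, here is a minimal sketch of what a Format2Weaviate-style mapping can look like: turning a Pinecone /vectors/fetch response into Weaviate object payloads. The exact shapes in the template's node may differ:

```javascript
// Sketch of a Format2Weaviate-style mapping (assumed shapes): convert a
// Pinecone /vectors/fetch response into Weaviate /v1/objects payloads.
const weaviateClass = $('Parameters').first().json.weaviateClass;
const fetched = $input.first().json.vectors; // { "<id>": { id, values, metadata } }

return Object.values(fetched).map(v => ({
  json: {
    class: weaviateClass,
    // Reuse the Pinecone vector as-is (no re-embedding needed).
    vector: v.values,
    // Metadata fields become Weaviate object properties.
    properties: v.metadata ?? {},
  },
}));
```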
by WhySoSerious
## What it is

This workflow listens for new tickets in HaloPSA via webhook, generates a professional AI-powered summary of the issue using Gemini (or another LLM), and posts it back into the ticket as a private note.

It's designed for MSPs using HaloPSA who want to reduce triage time and give engineers a clear head start on each support case.

⸻

✨ Features
• 🔔 Webhook trigger from HaloPSA on new ticket creation
• 🚧 Optional team filter (skip Sales or other queues)
• 📦 Extracts the ticket subject, details, and ID
• 🧠 Builds a structured AI prompt with MSP context (NinjaOne, M365, CIPP)
• 🤖 Processes via Gemini or another LLM
• 📑 Cleans & parses the JSON output (summary, next step, troubleshooting)
• 🧱 Generates a branded HTML private note (logo + styled sections)
• 🌐 Posts the note back into HaloPSA via API

⸻

🔧 Setup
1. Webhook • Replace WEBHOOK_PATH and paste the generated Production URL into your HaloPSA webhook.
2. Guard filter (optional) • Change teamName or teamId to skip tickets from specific queues.
3. Branding • Replace YOUR_LOGO_URL and Your MSP Brand in the HTML note builder.
4. HaloPSA API • In the HTTP node, replace YOUR_HALO_DOMAIN and add your Halo API token (Bearer auth).
5. LLM credentials • Set your API key in the Gemini / OpenAI node credentials section.
6. (Optional) Adjust the AI prompt with your own tools or processes.

⸻

✅ Requirements
• HaloPSA account with API enabled
• Gemini / OpenAI (or other LLM) API key
• SMTP (optional) if you want to extend the workflow with notifications

⸻

⚡ Workflow overview

`🔔 Webhook → 🚧 Guard → 📦 Extract Ticket → 🧠 Build AI Prompt → 🤖 AI Agent (Gemini) → 📑 Parse JSON → 🧱 Build HTML Note → 🌐 Post to HaloPSA`
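A minimal sketch of the 🧱 HTML note builder, assuming the parsed JSON carries summary / next step / troubleshooting fields as in the feature list; the real template's markup and placeholders (YOUR_LOGO_URL, Your MSP Brand) are more elaborate:

```javascript
// Sketch of the 🧱 Build HTML Note step (assumed field names).
const { summary, nextStep, troubleshooting } = $input.first().json;

const html = `
<div style="font-family:Arial,sans-serif">
  <img src="YOUR_LOGO_URL" alt="Your MSP Brand" height="32"/>
  <h3>AI Triage Summary</h3>
  <p>${summary}</p>
  <h4>Suggested Next Step</h4>
  <p>${nextStep}</p>
  <h4>Troubleshooting</h4>
  <p>${troubleshooting}</p>
</div>`.trim();

return [{ json: { noteHtml: html } }];
```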
by Alex Huy
## How it works

This workflow automatically curates and sends a daily AI/tech news digest by aggregating articles from premium tech publications and using AI to select the most relevant and trending stories.

### 🔄 Automated News Pipeline

1. **RSS Feed Collection** - fetches articles from 14 premium tech news sources (TechCrunch, MIT Tech Review, The Verge, Wired, etc.)
2. **Smart Article Filtering** - limits articles per source to ensure diverse coverage and prevent single-source domination
3. **Data Standardization** - cleans and structures article data (title, summary, link, date) for AI processing
4. **AI-Powered Curation** - uses Google Vertex AI to analyze articles and select the top 10 most relevant/trending stories
5. **Newsletter Generation** - creates a professional HTML newsletter with summaries and direct links
6. **Email Delivery** - automatically sends the formatted digest via Gmail

### 🎯 Key Features

- **Premium sources** - curates from 14 top-tier tech publications
- **AI quality control** - intelligent article selection and summarization
- **Balanced coverage** - prevents source bias with smart filtering
- **Professional format** - clean HTML newsletter design
- **Scheduled automation** - daily delivery at customizable times
- **Error resilience** - continues processing even if some feeds fail

## Setup Steps

### 1. 🔑 Required API Access

- **Google Cloud project** with the Vertex AI API enabled
- **Google service account** with the AI Platform Developer role
- **Gmail API** enabled for email sending

### 2. ☁️ Google Cloud Setup

1. Create or select a Google Cloud project
2. Enable the Vertex AI API
3. Create a service account with these permissions:
   - AI Platform Developer
   - Service Account User
4. Download the service account JSON key
5. Enable the Gmail API for the same project

### 3. 🔐 n8n Credentials Configuration

Add these credentials to your n8n instance:

- **Google Service Account** (for Vertex AI): upload your service account JSON key and name it descriptively (e.g., "Vertex AI Service Account")
- **Gmail OAuth2**: use your Google account credentials and authorize Gmail API access (required scope: gmail.send)

### 4. ⚙️ Workflow Configuration

1. Import the workflow into your n8n instance
2. Update the node configurations:
   - **Google Vertex AI Model**: set your Google Cloud project ID
   - **Send Newsletter Email**: update the recipient email address
   - **Daily Newsletter Trigger**: adjust the schedule time if needed
3. Verify that credentials are properly connected to their respective nodes

### 5. 📰 RSS Sources Customization (Optional)

The workflow includes 14 premium tech news sources:

- TechCrunch (AI & Startups)
- The Verge (AI section)
- MIT Technology Review
- Wired (AI/Science)
- VentureBeat (AI)
- ZDNet (AI topics)
- AI Trends
- Nature (Machine Learning)
- Towards Data Science
- NY Times Technology
- The Guardian Technology
- BBC Technology
- Nikkei Asia Technology

To customize sources: edit the "Configure RSS Sources" node, add/remove RSS feed URLs as needed, and ensure the feeds are active and properly formatted.

### 6. 🚀 Testing & Deployment

1. **Manual test**: execute the workflow manually to verify the setup
2. **Check email**: confirm the newsletter arrives with proper formatting
3. **Verify AI output**: ensure the articles are relevant and well summarized
4. **Schedule activation**: enable the daily trigger for automated operation
## 💡 Customization Options

- **Newsletter timing**:
  - Default: 8:00 AM UTC daily
  - Modify "triggerAtHour" in the Schedule Trigger node
  - Add multiple daily sends if desired
- **Content focus**:
  - Adjust the AI prompt in the "AI Tech News Curator" node
  - Specify different topics (e.g., focus on startups, enterprise AI, etc.)
  - Change the output language or format
- **Email recipients**:
  - Update the single recipient in the Gmail node
  - Or modify the workflow to send to multiple addresses
  - Integrate with mailing-list services
- **Article limits**:
  - Current: max 5 articles per source
  - Modify the filtering logic in the "Filter & Balance Articles" node (see the sketch below)
  - Adjust the total article count in the AI prompt

## 🔧 Troubleshooting

Common issues:

- **RSS feed failures**: individual feed failures won't stop the workflow
- **AI rate limits**: Vertex AI has generous limits, but monitor usage
- **Gmail sending**: ensure the sender email is authorized in your Gmail settings
- **Missing articles**: some RSS feeds may be inactive - check the source URLs

Performance tips:

- Monitor execution times during peak RSS activity
- Consider adding delays if you hit rate limits
- Archive old newsletters for reference

This workflow transforms daily news consumption from manual browsing into curated, AI-powered intelligence delivered automatically to your inbox.
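For reference when adjusting the article limits, here is a minimal sketch of per-source balancing in the style of the "Filter & Balance Articles" Code node; the `source` field name is an assumption:

```javascript
// Sketch of per-source balancing (assumed `source` field on each article):
// keep at most 5 articles from any single feed to prevent domination.
const MAX_PER_SOURCE = 5;
const perSourceCount = {};

return $input.all().filter(item => {
  const source = item.json.source ?? 'unknown';
  perSourceCount[source] = (perSourceCount[source] ?? 0) + 1;
  return perSourceCount[source] <= MAX_PER_SOURCE;
});
```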