by Mariela Slavenova
This template enriches a lead list by analyzing each contact's LinkedIn activity and auto-generating a single personalized opening line for cold outreach. Drop a spreadsheet into a Google Drive folder → the workflow parses rows, fetches LinkedIn content (recent post or profile), uses an LLM to craft a one-liner, writes the result back to Google Sheets, and sends a Telegram summary.

Good to know
• Works with two paths:
  • Recent post found → personalize from the latest LinkedIn post.
  • No recent post → personalize from profile fields (headline, about, current role).
• Requires valid Apify credentials for the LinkedIn scrapers and LLM keys (Anthropic and/or OpenAI).
• Costs depend on the LLM(s) you choose and on scraping usage.
• Replace all placeholders like [put your token here] and [put your Telegram Bot Chat ID here] before running.
• Respect the target platform's terms of service when scraping LinkedIn data.

What this workflow does
1. Trigger (Google Drive) – Watches a specific folder for newly uploaded lead spreadsheets.
2. Download & Parse – Downloads the file and converts it to structured items (first name, last name, company, LinkedIn URL, email, website).
3. Batch Loop – Processes each row individually.
4. Fetch Activity – Calls Apify LinkedIn Profile Posts (latest post) and records the current date for recency checks.
5. Recency Check (LLM) – An OpenAI node returns true/false for "post is from the current year."
6. Branching:
   • If TRUE → an AI Agent (Anthropic) crafts a single, natural reference line based on the recent post.
   • If FALSE → Apify LinkedIn Profile → an AI Agent (Anthropic) crafts a one-liner from profile data (headline/about/current role).
7. Write Back (Google Sheets) – Updates the original sheet by matching on email and writing the personalization field.
8. Notify (Telegram) – Sends a brief completion summary with the sheet name and link.
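The recency check in step 5 uses an LLM prompt, but the same true/false decision can be made deterministically. The sketch below is a hypothetical n8n Code-node helper, assuming the Apify scraper returns an ISO date for the latest post (the `postedAt` field name is an assumption — adapt it to your actor's output):

```javascript
// Hypothetical deterministic alternative to the LLM recency check.
// Returns true only when the post date parses and falls in the current year.
function isFromCurrentYear(postedAt, now = new Date()) {
  const posted = new Date(postedAt);
  return !Number.isNaN(posted.getTime()) &&
         posted.getFullYear() === now.getFullYear();
}

// Inside an n8n Code node you might use it like this (sketch):
// return $input.all().map(item => ({
//   json: { ...item.json, recentPost: isFromCurrentYear(item.json.postedAt) }
// }));
```

A code check costs nothing per row and never hallucinates; the LLM variant remains useful when the scraped date format is inconsistent.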
Requirements
• Google Drive & Google Sheets connections
• Apify account + token for the LinkedIn scrapers
• LLM keys: Anthropic (Claude) and/or OpenAI (you can use one or both)
• Telegram bot for notifications (bot token + chat ID)

Setup
1. Connect credentials – Google Drive/Sheets, Apify, OpenAI and/or Anthropic, Telegram.
2. Configure the Drive trigger – Select the folder where you'll drop your lead sheets.
3. Map columns – Ensure your sheet has the expected headers: First Name, Last Name, Company Name for Emails, Person Linkedin Url, Email, Website.
4. Replace placeholders – In the HTTP nodes: Bearer [put your token here]. In the Telegram node: [put your Telegram Bot Chat ID here].
5. (Optional) Adjust the recency rule – The current logic checks for current-year posts; change the prompt if you prefer a 30-day window.

How to use
1. Upload a test spreadsheet to the watched Drive folder.
2. Execute the workflow once to validate the flow.
3. Open your Google Sheet to see the new personalization column populated.
4. Check Telegram for the completion summary.

Customizing this template
• Data sources: Add company news, website content, or X/Twitter as fallback signals.
• LLM choices: Use only Anthropic or only OpenAI; tweak temperature for tone.
• Destinations: Write to a CRM (HubSpot/Salesforce/Airtable) instead of Sheets.
• Notifications: Swap Telegram for Slack/Email/Discord.
Who it's for
• Sales & SDR teams needing authentic, scalable personalization for cold outreach.
• Lead gen agencies enriching spreadsheets with ready-to-use openers.
• Marketing & growth teams improving reply rates by referencing real prospect activity.

Limitations & compliance
• LinkedIn scraping may be rate-limited or blocked; follow platform ToS and local laws.
• Costs vary with scraping volume and LLM usage.

Need help customizing? Contact me for consulting and support: LinkedIn
by Marco Venturi
How it works
This workflow sources news from news websites. Each article is passed to an LLM, which processes the article's content. An editor approves or rejects the article. If approved, the article is first published on the WordPress site and then on the LinkedIn page.

Setup Instructions

1. Credentials
You'll need to add credentials for the following services in your n8n instance:
• **News API**: A credential for your chosen news provider.
• **LLM**: Your API key for the LLM you want to use.
• **Google OAuth**: For both Gmail and Google Sheets.
• **WordPress OAuth2**: To publish articles via the API. See the WordPress Developer Docs.
• **LinkedIn OAuth2**: To share the post on a company page.

2. Node Configuration
Don't forget to:
• **Fetch News (HTTP Request)**: Set the query parameters (keywords, language, etc.) for your news source.
• **Basic LLM Chain**: Review and customize the prompt to match your desired tone, language, and style.
• **Approval request (Gmail)**: Set your email address in the Send To field.
• **HTTP Request WP - Push article**: Replace <site_Id> in the URL with your WordPress Site ID.
• **getImageId (Code Node)**: Update the array with your image IDs from the WordPress Media Library.
• **Create a post (LinkedIn)**: Enter your LinkedIn Organization ID.
• **Append row in sheet (Google Sheets)**: Select your Google Sheet file and the target sheet.
• **All Email Nodes**: Make sure the Send To field is your email address.
by AOE Agent Lab
This n8n template demonstrates how to audit your brand's visibility across multiple AI systems and automatically log the results to Google Sheets. It sends the same prompt to OpenAI, Perplexity, and (optionally) a ChatGPT web actor, then runs sentiment and brand-hierarchy analysis on the responses.

Use cases are many: benchmark how often (and how positively) your brand appears in AI answers, compare responses across models, and build a repeatable "AI visibility" report for marketing and comms teams.

💡 Good to know
• You'll bring your own API keys for OpenAI and Perplexity. Usage costs depend on your providers' pricing.
• The optional Apify actor automates the ChatGPT web UI and may violate its terms of service. Use strictly at your own risk.

⁉ How it works
1. A Manual Trigger starts the workflow (you can replace it with any trigger).
2. Input prompts are read from a Google Sheet (or you can use the included "manual input" node).
3. The prompt is sent to three tools:
   -- OpenAI (via API) to check baseline LLM knowledge.
   -- Perplexity (API) to retrieve an answer with citations.
   -- Optionally, an Apify actor that scrapes a ChatGPT response (web interface).
4. Responses are normalized and mapped (including citations where available).
5. An LLM-powered sentiment pass classifies each response into:
   -- Basic Polarity: Positive, Neutral, or Negative
   -- Emotion Category: Joy, Sadness, Anger, Fear, Disgust, or Surprise
   -- Brand Hierarchy: an ordered list such as Nike>Adidas>Puma
6. The consolidated record (Prompt, LLM, Response, Brand mentioned flag, Brand Hierarchy, Basic Polarity, Emotion Category, Source 1–4) is appended to your "Output many models" Google Sheet.
7. A simplified branch shows how to take a single response and push it to a separate sheet.
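The normalization step (step 4) maps every model's answer into the same sheet-row shape. A sketch of what that mapping could look like in a Code node — the input shape and the simple substring check for "Brand mentioned" are assumptions, not the template's exact logic:

```javascript
// Sketch: normalize one model response into the output-sheet row shape.
// `analysis` is assumed to be the sentiment pass's parsed output.
function toOutputRow(prompt, llm, response, brand, analysis, citations = []) {
  return {
    'Prompt': prompt,
    'LLM': llm,
    'Response': response,
    // naive brand-mention flag: case-insensitive substring match
    'Brand mentioned': response.toLowerCase().includes(brand.toLowerCase()),
    'Brand Hierarchy': analysis.hierarchy,  // e.g. "Nike>Adidas>Puma"
    'Basic Polarity': analysis.polarity,    // Positive | Neutral | Negative
    'Emotion Category': analysis.emotion,   // Joy, Sadness, Anger, ...
    'Source 1': citations[0] ?? '',
    'Source 2': citations[1] ?? '',
    'Source 3': citations[2] ?? '',
    'Source 4': citations[3] ?? '',
  };
}
```

Keeping all providers in one record shape is what lets a single Append row node serve every branch.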
🗺️ How to use
1. Connect your Google Sheets OAuth and create two tabs:
   -- Input: a single "Prompt" column
   -- Output: columns for Prompt, LLM, Response, Brand mentioned, Brand Hierarchy, Basic Polarity, Emotion Category, Source 1, Source 2, Source 3, Source 4
2. Add your OpenAI and Perplexity credentials.
3. (Optional) Add an Apify credential (Query Auth with token) if you want the ChatGPT web actor path.
4. Run the Manual Trigger to process prompts in batches and write results to Sheets.
5. Adjust the included "Limit for testing" node, or remove it to process more rows.

⚒️ Requirements
• OpenAI API access (e.g., GPT-4.1-mini / GPT-5 as configured in the template)
• Perplexity API access (model: sonar)
• Google Sheets account with OAuth connected in n8n
• (Optional) Apify account/token for the ChatGPT web actor

🎨 Customising this workflow
• Swap the Manual Trigger for a webhook or schedule to run audits automatically.
• Extend the sentiment analyzer instructions to include brand-specific rules or compliance checks.
• Track more sources (e.g., additional models or vertical search tools) by duplicating the request→map→append pattern.
• Add scoring (e.g., a "visibility score" per prompt) and charts by pointing the output sheet into Looker Studio or a BI tool.
by Daniel Agrici
This workflow automates business intelligence. Submit one URL, and it scrapes the website, uses AI to perform a comprehensive analysis, and generates a professional report in Google Docs and PDF format. It's perfect for agencies, freelancers, and consultants who need to streamline client research or competitive analysis.

How It Works
1. Trigger: The workflow is triggered by a form input, where you provide a single website URL.
2. Scrape: It uses Firecrawl to scrape the sitemap and get the full content from the target website.
3. Analyze: The main workflow calls a Tools workflow (included below) which uses Google Gemini and Perplexity AI agents to analyze the scraped content and extract key business information.
4. Generate & Deliver: All the extracted data is formatted and used to populate a template in Google Docs. The final report is saved to Google Drive and delivered via Gmail.

What It Generates
The final report is a comprehensive business analysis, including:
• Business Overview: A full company description.
• Target Audience Personas: Defines the demographic and psychographic profiles of ideal customers.
• Brand & UVP: Extracts the brand's personality matrix and its Unique Value Proposition (UVP).
• Customer Journey: Maps the typical customer journey from Awareness to Loyalty.

Required Tools
This workflow requires n8n and API keys/credentials for the following services:
• Firecrawl (for scraping)
• Perplexity (for AI analysis)
• Google Gemini (for AI analysis)
• Google Services (for Docs, Drive, and Gmail)

⚠️ Required: Tools Workflow
This workflow will not work without its "Tools" sub-workflow. Please create a new, separate workflow in n8n, name it (e.g., "Business Analysis Tools"), and paste the following JSON into it.
{ "name": "Business Analysis Workflow Tools", "nodes": [ { "parameters": { "workflowInputs": { "values": [ { "name": "function" }, { "name": "keyword" }, { "name": "url" }, { "name": "location_code" }, { "name": "language_code" } ] } }, "type": "n8n-nodes-base.executeWorkflowTrigger", "typeVersion": 1.1, "position": [ -448, 800 ], "id": "e79e0605-f9ac-4166-894c-e5aa9bd75bac", "name": "When Executed by Another Workflow" }, { "parameters": { "rules": { "values": [ { "conditions": { "options": { "caseSensitive": true, "leftValue": "", "typeValidation": "strict", "version": 2 }, "conditions": [ { "id": "8d7d3035-3a57-47ee-b1d1-dd7bfcab9114", "leftValue": "serp_search", "rightValue": "={{ $json.function }}", "operator": { "type": "string", "operation": "equals", "name": "filter.operator.equals" } } ], "combinator": "and" }, "renameOutput": true, "outputKey": "serp_search" }, { "conditions": { "options": { "caseSensitive": true, "leftValue": "", "typeValidation": "strict", "version": 2 }, "conditions": [ { "id": "bb2c23eb-862d-4582-961e-5a8d8338842c", "leftValue": "ai_mode", "rightValue": "={{ $json.function }}", "operator": { "type": "string", "operation": "equals", "name": "filter.operator.equals" } } ], "combinator": "and" }, "renameOutput": true, "outputKey": "ai_mode" }, { "conditions": { "options": { "caseSensitive": true, "leftValue": "", "typeValidation": "strict", "version": 2 }, "conditions": [ { "id": "4603eee1-3888-4e32-b3b9-4f299dfd6df3", "leftValue": "internal_links", "rightValue": "={{ $json.function }}", "operator": { "type": "string", "operation": "equals", "name": "filter.operator.equals" } } ], "combinator": "and" }, "renameOutput": true, "outputKey": "internal_links" } ] }, "options": {} }, "type": "n8n-nodes-base.switch", "typeVersion": 3.2, "position": [ -208, 784 ], "id": "72c37890-7054-48d8-a508-47ed981551d6", "name": "Switch" }, { "parameters": { "method": "POST", "url": "https://api.dataforseo.com/v3/serp/google/organic/live/advanced", 
"authentication": "genericCredentialType", "genericAuthType": "httpBasicAuth", "sendBody": true, "specifyBody": "json", "jsonBody": "=[\n {\n \"keyword\": \"{{ $json.keyword.replace(/[:'\"\\\\/]/g, '') }}\",\n \"location_code\": {{ $json.location_code }},\n \"language_code\": \"{{ $json.language_code }}\",\n \"depth\": 10,\n \"group_organic_results\": true,\n \"load_async_ai_overview\": true,\n \"people_also_ask_click_depth\": 1\n }\n]", "options": { "redirect": { "redirect": { "followRedirects": false } } } }, "type": "n8n-nodes-base.httpRequest", "typeVersion": 4.2, "position": [ 384, 512 ], "id": "6203f722-b590-4a25-8953-8753a44eb3cb", "name": "SERP Google", "credentials": { "httpBasicAuth": { "id": "n5o00CCWcmHFeI1p", "name": "DataForSEO" } } }, { "parameters": { "content": "## SERP Google", "height": 272, "width": 688, "color": 4 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 288, 432 ], "id": "81593217-034f-466d-9055-03ab6b2d7d08", "name": "Sticky Note3" }, { "parameters": { "assignments": { "assignments": [ { "id": "97ef7ee0-bc97-4089-bc37-c0545e28ed9f", "name": "platform", "value": "={{ $json.tasks[0].data.se }}", "type": "string" }, { "id": "9299e6bb-bd36-4691-bc6c-655795a6226e", "name": "type", "value": "={{ $json.tasks[0].data.se_type }}", "type": "string" }, { "id": "2dc26c8e-713c-4a59-a353-9d9259109e74", "name": "keyword", "value": "={{ $json.tasks[0].data.keyword }}", "type": "string" }, { "id": "84c9be31-8f1d-4a67-9d13-897910d7ec18", "name": "results", "value": "={{ $json.tasks[0].result }}", "type": "array" } ] }, "options": {} }, "type": "n8n-nodes-base.set", "typeVersion": 3.4, "position": [ 592, 512 ], "id": "a916551a-009b-403f-b02e-3951d54d2407", "name": "Prepare SERP output" }, { "parameters": { "content": "# Google Organic Search API\n\nThis API lets you retrieve real-time Google search results with a wide range of parameters and custom settings. 
\nThe response includes structured data for all available SERP features, along with a direct URL to the search results page. \n\n👉 Documentation\n", "height": 272, "width": 496, "color": 4 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 976, 432 ], "id": "87672b01-7477-4b43-9ccc-523ef8d91c64", "name": "Sticky Note17" }, { "parameters": { "method": "POST", "url": "https://api.dataforseo.com/v3/serp/google/ai_mode/live/advanced", "authentication": "genericCredentialType", "genericAuthType": "httpBasicAuth", "sendBody": true, "specifyBody": "json", "jsonBody": "=[\n {\n \"keyword\": \"{{ $json.keyword }}\",\n \"location_code\": {{ $json.location_code }},\n \"language_code\": \"{{ $json.language_code }}\",\n \"device\": \"mobile\",\n \"os\": \"android\"\n }\n]", "options": { "redirect": { "redirect": {} } } }, "type": "n8n-nodes-base.httpRequest", "typeVersion": 4.2, "position": [ 384, 800 ], "id": "fb0001c4-d590-45b3-a3d0-cac7174741d3", "name": "AI Mode", "credentials": { "httpBasicAuth": { "id": "n5o00CCWcmHFeI1p", "name": "DataForSEO" } } }, { "parameters": { "content": "## AI Mode", "height": 272, "width": 512, "color": 6 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 288, 720 ], "id": "2cea3312-31f8-4ff0-b385-5b76b836274c", "name": "Sticky Note11" }, { "parameters": { "assignments": { "assignments": [ { "id": "b822f458-ebf2-4a37-9906-b6a2606e6106", "name": "keyword", "value": "={{ $json.tasks[0].data.keyword }}", "type": "string" }, { "id": "10484675-b107-4157-bc7e-b942d8cdb5d2", "name": "result", "value": "={{ $json.tasks[0].result[0].items }}", "type": "array" } ] }, "options": {} }, "type": "n8n-nodes-base.set", "typeVersion": 3.4, "position": [ 592, 800 ], "id": "6b1e7239-ee2b-4457-8acb-17ce87415729", "name": "Prepare AI Mode Output" }, { "parameters": { "content": "# Google AI Mode API\n\nThis API provides AI-generated search result summaries and insights from Google. 
\nIt returns detailed explanations, overviews, and related information based on search queries, with parameters to customize the AI overview. \n\n👉 Documentation\n", "height": 272, "width": 496, "color": 6 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 800, 720 ], "id": "d761dc57-e35d-4052-a360-71170a155f7b", "name": "Sticky Note18" }, { "parameters": { "content": "## Input", "height": 384, "width": 544, "color": 7 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ -528, 672 ], "id": "db90385e-f921-4a9c-89f3-53fc5825b207", "name": "Sticky Note" }, { "parameters": { "assignments": { "assignments": [ { "id": "b865f4a0-b4c3-4dde-bf18-3da933ab21af", "name": "platform", "value": "={{ $json.platform }}", "type": "string" }, { "id": "476e07ca-ccf6-43d4-acb4-4cc905464314", "name": "type", "value": "={{ $json.type }}", "type": "string" }, { "id": "f1a14eb8-9f10-4198-bbc7-17091532b38e", "name": "keyword", "value": "={{ $json.keyword }}", "type": "string" }, { "id": "181791a0-1d88-481c-8d98-a86242bb2135", "name": "results", "value": "={{ $json.results[0].items }}", "type": "array" } ] }, "options": {} }, "type": "n8n-nodes-base.set", "typeVersion": 3.4, "position": [ 800, 512 ], "id": "83fef061-5e0b-417c-b1f6-d34eb712fac6", "name": "Sort Results" }, { "parameters": { "content": "## Internal Links", "height": 272, "width": 272, "color": 5 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 288, 1024 ], "id": "9246601a-f133-4ca3-aac8-989cb45e6cd2", "name": "Sticky Note7" }, { "parameters": { "method": "POST", "url": "https://api.firecrawl.dev/v2/map", "sendHeaders": true, "headerParameters": { "parameters": [ { "name": "Authorization", "value": "Bearer your-firecrawl-apikey" } ] }, "sendBody": true, "specifyBody": "json", "jsonBody": "={\n \"url\": \"https://{{ $json.url }}\",\n \"limit\": 400,\n \"includeSubdomains\": false,\n \"sitemap\": \"include\"\n }", "options": {} }, "type": "n8n-nodes-base.httpRequest", 
"typeVersion": 4.2, "position": [ 368, 1104 ], "id": "fd6a33ae-6fb3-4331-ab6a-994048659116", "name": "Get Internal Links" }, { "parameters": { "content": "# Firecrawl Map API\n\nThis endpoint maps a website from a single URL and returns the list of discovered URLs (titles and descriptions when available) — extremely fast and useful for selecting which pages to scrape or for quickly enumerating site links. (Firecrawl)\n\nIt supports a search parameter to find relevant pages inside a site, location/languages options to emulate country/language (uses proxies when available), and SDK + cURL examples in the docs,\n\n👉 Documentation\n\n[1]: https://docs.firecrawl.dev/features/map \"Map | Firecrawl\"\n", "height": 272, "width": 624, "color": 5 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 560, 1024 ], "id": "08457204-93ff-4586-a76e-03907118be3c", "name": "Sticky Note24" } ], "pinData": { "When Executed by Another Workflow": [ { "json": { "function": "serp_search", "keyword": "villanyszerelő Largo Florida", "url": null, "location_code": 2840, "language_code": "hu" } } ] }, "connections": { "When Executed by Another Workflow": { "main": [ [ { "node": "Switch", "type": "main", "index": 0 } ] ] }, "Switch": { "main": [ [ { "node": "SERP Google", "type": "main", "index": 0 } ], [ { "node": "AI Mode", "type": "main", "index": 0 } ], [ { "node": "Get Internal Links", "type": "main", "index": 0 } ] ] }, "SERP Google": { "main": [ [ { "node": "Prepare SERP output", "type": "main", "index": 0 } ] ] }, "AI Mode": { "main": [ [ { "node": "Prepare AI Mode Output", "type": "main", "index": 0 } ] ] }, "Prepare SERP output": { "main": [ [ { "node": "Sort Results", "type": "main", "index": 0 } ] ] }, "Sort Results": { "main": [ [] ] } }, "active": false, "settings": { "executionOrder": "v1" }, "versionId": "6fce16d1-aa28-4939-9c2d-930d11c1e17f", "meta": { "instanceId": "1ee7b11b3a4bb285563e32fdddf3fbac26379ada529b942ee7cda230735046a1" }, "id": "VjpOW2V2aNV9HpQJ", 
"tags": [] }
by Jitesh Dugar
👤 Who's it for
This workflow is designed for employees who need to submit expense claims for business trips. It automates the process of extracting data from receipts/invoices, logging it to a Google Sheet, and notifying the finance team via email.

Ideal users:
• Employees submitting business trip expense claims
• HR or Admins reviewing travel-related reimbursements
• Finance teams responsible for processing claims

⚙️ How it works / What it does
1. An employee submits a form with trip information (name, department, purpose, dates) and uploads one or more receipts/invoices (PDF).
2. Uploaded files are saved to Google Drive for record-keeping.
3. Each PDF is passed to a DocClaim Assistant agent, which uses GPT-4o and a structured parser to extract structured invoice data.
4. The data is transformed and formatted into a standard JSON structure.
5. Two parallel paths follow:
   • Invoice records are appended to a Google Sheet for centralized tracking.
   • A detailed HTML email summarizing the trip and expenses is generated and sent to the finance department for claim processing.

🛠 How to set up
1. Create a form to capture:
   • Employee Name
   • Department
   • Trip Purpose
   • From Date / To Date
   • Receipt/Invoice File Upload (multiple PDFs)
2. Configure the file upload node to store files in a specific Google Drive folder.
3. Set up the DocClaim Agent using:
   • GPT-4o or any LLM with document analysis capability
   • An output parser for standardizing extracted receipt data (e.g., vendor, total, tax, date)
4. Transform the extracted data into a structured claim record (Code node).
5. Path 1: Save records to a Google Sheet (one row per expense).
6. Path 2: Format the employee + claim data into a dynamic HTML email and use a Send Email node to notify the finance department (e.g., finance@yourcompany.com).

✅ Requirements
• Jotform account with the expense form set up (sign up for free here)
• n8n running with access to:
   • Google Drive API (for file uploads)
   • Google Sheets API (for logging expenses)
   • An email node (SMTP or Gmail for sending)
   • GPT-4o or an equivalent LLM with document parsing ability
• PDF invoices with clear formatting
• A shared Google Sheet for claim tracking
• Optional: a shared inbox for the finance team

🧩 How to customize the workflow
• **Add approval steps**: route the email to a manager before finance.
• **Attach original PDFs**: include uploaded files in the email as attachments.
• **Localize for other languages**: adapt form labels, email content, or parser prompts.
• **Sync to an ERP or accounting system**: replace the Google Sheet with QuickBooks, Xero, etc.
• **Set limits/validation**: enforce a max claim per trip or required fields before submission.
• **Auto-tag expenses**: add categories (e.g., travel, accommodation) for better reporting.
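The transform step (Code node, step 4 of the setup) flattens each parsed invoice into one sheet row. A minimal sketch, assuming the parser outputs `vendor`, `date`, `tax`, and `total` fields — adapt the names to your output parser's schema:

```javascript
// Sketch of the transform Code node: merge form data and one parsed
// invoice into a flat claim record (one row per expense).
// Field names on `invoice` are assumptions about the parser output.
function toClaimRecord(employee, invoice) {
  return {
    employeeName: employee.name,
    department: employee.department,
    tripPurpose: employee.purpose,
    vendor: invoice.vendor,
    invoiceDate: invoice.date,
    tax: Number(invoice.tax ?? 0),    // coerce string amounts to numbers
    total: Number(invoice.total),
  };
}
```

Coercing amounts to numbers here keeps the Google Sheet sortable and lets you sum totals without cleanup later.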
by Michael Taleb
How it works
1. Watches a Google Drive folder for new (scanned) invoices. Each new file automatically triggers the workflow.
2. Downloads and processes each invoice through OCR.Space to extract the text.
3. Extracts the company name (e.g. from the "billed to" field) and uses an AI agent to cross-reference it against a database in Google Sheets.
4. If a match is found, retrieves the correct recipient email and sends the invoice as an attachment.
5. If no match is found or an error occurs, the workflow alerts an operator by email for manual review.

Setting up the workflow

Connect Google Drive
• In n8n, connect your Google Drive account.
• Create or select a folder where you will upload scanned invoices.

Connect Gmail (or another email service)
• Add your Gmail account as a credential in n8n.
• This will be used to send the processed invoice to the correct recipient.

Set up OCR.Space
• Create a free OCR.Space account: https://ocr.space
• In n8n, create a Generic Credential (Header Auth).
• Use apikey as the name and your OCR API key as the value.

Connect the AI Agent
• Add your OpenAI API key as a credential in n8n.
• The AI Agent will extract the company name from the invoice text and match it against your database.
• If a match is found, it retrieves the correct email.

Prepare the Google Sheet database
• Make a copy of the database sheet: Google Sheet Template
• Fill it with company names and recipient emails.
• Connect your Google account to n8n and link this sheet to the workflow.

Run the workflow
When a new invoice is uploaded to your Google Drive folder, the workflow will:
• Extract the text with OCR.Space
• Use the AI Agent to identify the company name
• Cross-reference it with your Google Sheet database
• Send the invoice automatically to the correct recipient via Gmail
• If no match is found, send an error email to you for manual review
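The cross-reference step here is handled by the AI agent, but the final lookup against the sheet rows can be sketched as plain code. The `{ company, email }` row shape is an assumption based on the database described above:

```javascript
// Sketch: case-insensitive lookup of the extracted company name against
// rows loaded from the Google Sheet database. Returns null when there is
// no match, which is the branch that triggers the error email.
function findRecipient(companyName, rows) {
  const needle = companyName.trim().toLowerCase();
  const hit = rows.find(r => r.company.trim().toLowerCase() === needle);
  return hit ? hit.email : null;
}
```

An exact normalized match keeps false positives out; the AI agent's job is to get the name clean enough for this lookup to succeed.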
by Razvan Bara
How it works:
This n8n workflow automates communication with meeting invitees to decrease no-show rates by sending timely email and WhatsApp reminders, plus a clarification request if more information is needed to prepare for the meeting.

Step-by-step:
1. The workflow is triggered by an incoming email notification from Calendly about a newly scheduled meeting.
2. It uses AI to extract key meeting data from the email content.
3. It checks whether the invitee provided sufficient information and, if more is needed, sends a clarification request email.
4. It calculates the waiting time required for the 24-hour and 1-hour reminders.
5. It uses an If node to determine the correct waiting path based on the meeting time.
6. It uses Wait nodes to time the reminders correctly.
7. Finally, it sends a reminder email and a WhatsApp reminder before the meeting.

Customization Options:
• Replace Google Gemini with your preferred LLM (though Gemini works on the free tier).
• Tailor the email and WhatsApp messages to speak your brand's language.
• Replace the Twilio node with the WhatsApp node for a completely free flow.
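The waiting-time calculation in step 4 can be sketched as date arithmetic — a hypothetical Code-node helper, not the template's exact expressions:

```javascript
// Sketch: compute Wait-node delays (in minutes) for the 24-hour and
// 1-hour reminders, clamped to zero when the meeting is closer than that.
function reminderDelays(meetingStartIso, now = new Date()) {
  const msLeft = new Date(meetingStartIso).getTime() - now.getTime();
  const minutesLeft = Math.floor(msLeft / 60000);
  return {
    until24hReminder: Math.max(0, minutesLeft - 24 * 60),
    until1hReminder: Math.max(0, minutesLeft - 60),
  };
}
```

The `Math.max(0, …)` clamp mirrors what the If node decides: meetings booked less than 24 hours out skip straight to the 1-hour path.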
by Rahul Joshi
Description
Automatically generate polished, n8n-ready template descriptions from your saved JSON workflows in Google Drive. This AI-powered automation processes workflow files, drafts compliant descriptions, and delivers Markdown and HTML outputs directly to your inbox. 🚀💌📊💬

What This Template Does
1. Manually triggers the workflow to start processing.
2. Searches a specified Google Drive folder for JSON workflow files.
3. Iterates through each JSON file found in that folder.
4. Downloads each file and prepares it for data extraction.
5. Parses workflow data from the downloaded JSON content.
6. Uses Azure OpenAI GPT-4 to generate concise titles and detailed descriptions.
7. Converts the AI output into structured Markdown for n8n template publishing.
8. Creates an HTML version of the description for email delivery.
9. Logs generated details into a Google Sheet for record-keeping.
10. Sends an email containing the Markdown and HTML descriptions to the target recipient.

Key Benefits
✅ Fully automates n8n template description creation.
✅ Ensures consistency with official n8n publishing guidelines.
✅ Saves time while eliminating human writing errors.
✅ Provides dual Markdown + HTML outputs for flexibility.
✅ Centralizes workflow metadata in Google Sheets.
✅ Simplifies collaboration and version tracking via email delivery.

Features
• Manual workflow trigger for controlled execution.
• Integration with Google Drive for locating and downloading JSON files.
• Intelligent parsing of workflow data from the JSON structure.
• GPT-4-powered AI for title and description generation.
• Automatic Markdown + HTML formatting for n8n publishing.
• Google Sheets integration for persistent record-keeping.
• Automated Gmail delivery of generated documentation.

Requirements
• n8n instance (cloud or self-hosted).
• Google Drive OAuth2 credentials with file read permissions.
• Google Sheets OAuth2 credentials with edit permissions.
• Azure OpenAI GPT-4 API key for AI text generation.
• Gmail OAuth2 credentials for email sending.
Target Audience
• n8n content creators documenting workflows. 👩💼
• Automation teams handling multiple template deployments. 🔄
• Agencies and freelancers managing workflow documentation. 🏢
• Developers leveraging AI for faster template creation. 🌐
• Technical writers ensuring polished, standardized outputs. 📊

Step-by-Step Setup Instructions
1. Connect your Google Drive account and specify the folder containing JSON workflows. 🔑
2. Authorize Google Sheets and confirm access to the tracking spreadsheet. ⚙️
3. Add Azure OpenAI GPT-4 API credentials for AI-powered text generation. 🧠
4. Connect Gmail credentials for automated email delivery. 📧
5. Run the workflow manually using a test JSON file to validate all nodes. ✅
6. Enable the workflow to automatically generate and send descriptions as needed. 🚀
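The "parses workflow data" step feeds the GPT-4 prompt with facts about each workflow. A sketch of what that extraction could look like, relying only on the standard n8n export shape (`name` plus a `nodes` array with `type` fields):

```javascript
// Sketch: pull basic metadata out of a downloaded n8n workflow JSON so the
// AI prompt can reference the workflow's name, size, and node types.
function summarizeWorkflow(workflowJson) {
  const wf = JSON.parse(workflowJson);
  const nodes = wf.nodes ?? [];
  return {
    name: wf.name ?? 'Untitled workflow',
    nodeCount: nodes.length,
    nodeTypes: [...new Set(nodes.map(n => n.type))], // deduplicated
  };
}
```

Grounding the prompt in real node types (rather than the raw JSON blob) keeps token usage down and the generated description accurate.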
by Cheng Siong Chin
Introduction
Automates travel planning by aggregating flights, hotels, activities, and weather via APIs, then uses AI to generate professional itineraries delivered through Gmail and Slack.

How It Works
A webhook receives requests; the workflow searches APIs (Skyscanner, Booking.com, Kiwi, Viator, weather) in parallel, merges the data, uses AI to build itineraries, scores the options, generates HTML emails, and delivers them via Gmail/Slack.

Workflow Template
Webhook → Extract → Parallel Searches (Flights/Hotels/Activities/Weather) → Merge → Build Itinerary → AI Processing → Score → Generate HTML → Gmail → Slack → Response

Workflow Steps
1. Trigger & Extract: Receives destination, dates, and preferences, and extracts parameters.
2. Data Gathering: Parallel APIs fetch flights, hotels, activities, and weather; responses are merged.
3. AI Processing: Analyzes the data, creates the itinerary, and ranks recommendations.
4. Delivery: Generates the HTML email, sends it via Gmail/Slack, and confirms completion.

Setup Instructions
• API Configuration: Add keys for Skyscanner, Booking.com, Kiwi, Viator, OpenWeatherMap, and OpenRouter.
• Communication: Connect Gmail OAuth2 and a Slack webhook.
• Customization: Adjust endpoints, AI prompts, the HTML template, and scoring criteria.

Prerequisites
• API keys: Skyscanner, Booking.com, Kiwi, Viator, OpenWeatherMap, OpenRouter
• Gmail account
• Slack workspace
• n8n instance

Use Cases
• Corporate travel planning
• Vacation itinerary generation
• Group trip coordination

Customization
• Add sources (Airbnb, TripAdvisor)
• Filter by budget preferences
• Add PDF generation
• Customize the Slack format

Benefits
• Saves 3–5 hours per trip
• Real-time pricing aggregation
• AI-powered personalization
• Automated multi-channel delivery
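The Score step ranks merged options before the itinerary is assembled. A hypothetical scoring sketch — the 60/40 price/rating weighting and the `price`/`rating` field names are assumptions you would tune to your own criteria:

```javascript
// Sketch of a scoring step: rank options by a weighted mix of price and
// rating, both normalized to 0..1. Weights are illustrative only.
function scoreOption(opt, maxPrice) {
  const priceScore = 1 - opt.price / maxPrice; // cheaper is better
  const ratingScore = opt.rating / 5;          // 5-star scale assumed
  return 0.6 * priceScore + 0.4 * ratingScore;
}

function rankOptions(options) {
  const maxPrice = Math.max(...options.map(o => o.price));
  return [...options].sort(
    (a, b) => scoreOption(b, maxPrice) - scoreOption(a, maxPrice)
  );
}
```

Exposing the weights as workflow parameters is an easy way to support the "filter by budget preferences" customization listed above.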
by Trung Tran
Automated SSL/TLS Certificate Expiry Report for AWS

> Automatically generates a weekly report of all AWS ACM certificates, including status, expiry dates, and renewal eligibility. The workflow formats the data into both Markdown (for PDF export to Slack) and HTML (for an email summary), helping teams stay on top of certificate compliance and expiration risks.

Who's it for
This workflow is designed for DevOps engineers, cloud administrators, and compliance teams who manage AWS infrastructure and need automated weekly visibility into the status of their SSL/TLS certificates in AWS Certificate Manager (ACM). It's ideal for teams that want to reduce the risk of expired certs, track renewal eligibility, and maintain reporting for audit or operational purposes.

How it works / What it does
This n8n workflow performs the following actions on a weekly schedule:
1. Trigger: Automatically runs once a week using the Weekly schedule trigger.
2. Fetch Certificates: Uses the Get many certificates action from AWS Certificate Manager to retrieve all certificate records.
3. Parse Data: Processes and reformats certificate data (dates, booleans, SANs, etc.) into a clean JSON object.
4. Generate Reports:
   📄 Markdown Report: Uses the Certificate Summary Markdown Agent (OpenAI) to generate a Markdown report for PDF export.
   🌐 HTML Report: Uses the Certificate Summary HTML Agent to generate a styled HTML report for email.
5. Deliver Reports:
   • Converts the Markdown to PDF and sends it to Slack as a file.
   • Sends the HTML content as a formatted email.

How to set up
1. Configure AWS credentials in n8n to allow access to AWS ACM.
Create a new workflow and use the following nodes in sequence:
Schedule Trigger: Weekly (e.g., every Monday at 08:00 UTC)
AWS ACM → Get many certificates
Function Node → Parse ACM Data: Converts and summarizes certificate metadata
OpenAI Chat Node (Markdown Agent) with a system/user prompt to generate Markdown
Configure Metadata → Define file name and MIME type (.md)
Create document file → Converts Markdown to a document stream
Convert to PDF
Slack Node → Upload the PDF to a channel
(Optional) Add a second OpenAI Chat Node for generating HTML and sending it via email
Connect Output:
Markdown report → Slack file upload
HTML report → Email node with embedded HTML

Requirements
🟩 n8n instance (self-hosted or cloud)
🟦 AWS account with access to ACM
🟨 OpenAI API key (for the ChatGPT Agent)
🟥 Slack webhook or OAuth credentials (for file upload)
📧 Email integration (e.g., SMTP or SendGrid)
📝 Permissions to write documents (Google Drive / file node)

How to customize the workflow
**Change report frequency**: Adjust the Weekly schedule trigger to daily or monthly as needed.
**Filter certificates**: Modify the function node to only include EXPIRED, IN_USE, or INELIGIBLE certs. Add tags or domains to include/exclude.
**Add visuals**: Enhance the HTML version with colored rows, icons, or company branding.
**Change delivery channels**: Replace Slack with Microsoft Teams, Discord, or Telegram. Send the Markdown as an email attachment instead of a PDF.
**Integrate ticketing**: Create a JIRA/GitHub issue for each certificate that is EXPIRED or INELIGIBLE.
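The Parse ACM Data function node could reshape each certificate along these lines. The field names follow the AWS ACM `DescribeCertificate` response shape (`DomainName`, `Status`, `NotAfter`, `InUseBy`, `RenewalEligibility`, `SubjectAlternativeNames`); the exact payload your node receives may differ, so treat this as a sketch:

```javascript
// Summarize one ACM certificate record into a flat report row.
// Field names mirror the ACM DescribeCertificate response (assumption).
function summarizeCertificate(cert) {
  const expiry = cert.NotAfter ? new Date(cert.NotAfter) : null;
  const daysLeft = expiry
    ? Math.ceil((expiry.getTime() - Date.now()) / 86400000) // ms per day
    : null;
  return {
    domain: cert.DomainName,
    status: cert.Status,                                    // e.g. ISSUED, EXPIRED
    expires: expiry ? expiry.toISOString().slice(0, 10) : "n/a",
    daysLeft,
    inUse: Array.isArray(cert.InUseBy) && cert.InUseBy.length > 0,
    renewalEligible: cert.RenewalEligibility === "ELIGIBLE",
    sans: (cert.SubjectAlternativeNames || []).join(", "),
  };
}
```

In the function node you would map this over the items returned by the Get many certificates step before handing the clean JSON to the report agents.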
by Raymond Camden
How It Works
This n8n template demonstrates using Foxit's Extraction API to pull information from an incoming document, then using Diffbot's APIs to turn the text into a list of organizations mentioned in the document and a summary.

Steps:
Listen for a new file added to a Google Drive folder.
When executed, the file bits are downloaded.
Upload the bits to Foxit.
Call the Extract API to get the text contents of the document.
Poll the API to see if it's done, and when it is, grab the text.
Send the text to the Diffbot API to get a list of entities mentioned in the doc as well as the summary.
Use a code step to filter the entities returned from Diffbot to those that are organizations, keeping only results with a high confidence score.
Use another code step to build an HTML string from the previous data.
Email it using the Gmail node.

Requirements
A Google account for Google Drive and Gmail
Foxit developer account (https://developer-api.foxit.com)
Diffbot developer account (https://app.diffbot.com/get-started)

Next Steps
This workflow assumes PDF input, but Foxit has APIs to convert Office docs to PDF, and that flow could be added before the Extract API is called. Diffbot returns an incredibly rich set of information, and more of it could be used in the email. Instead of emailing, you could sort documents into new folders by organization.
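The entity-filtering code step could look roughly like this. The entity field names (`name`, `type`, `confidence`) and the 0.9 threshold are illustrative assumptions about the Diffbot response, not taken from the template:

```javascript
// Keep only high-confidence organization entities from a Diffbot-style
// entity list. Field names and threshold are assumptions for illustration.
function filterOrganizations(entities, minConfidence = 0.9) {
  return entities
    .filter(e => e.type === "organization" && e.confidence >= minConfidence)
    .map(e => e.name);
}

// Example input shaped like a Diffbot entity list (illustrative data).
const entities = [
  { name: "Foxit", type: "organization", confidence: 0.97 },
  { name: "Raymond", type: "person", confidence: 0.95 },
  { name: "Acme Corp", type: "organization", confidence: 0.42 },
];

console.log(filterOrganizations(entities)); // ["Foxit"]
```

The second code step would then join these names into an HTML list for the Gmail node.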
by Alex
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This n8n template automatically parses bank transaction emails (HDFC, Indian Bank, Indian Overseas Bank, and UPI apps such as Google Pay and Paytm; the sender email for each bank/UPI app is configurable), classifies them using Gemini AI, and logs them into a structured Google Sheets budget tracker. It helps you consolidate expenses, compare them against monthly budgets, and get real-time alerts when limits are exceeded.

📝 Problem Statement
Tracking expenses manually across different bank emails and UPI apps is frustrating, time-consuming, and error-prone. Small transactions often slip through, making budget control difficult. This workflow solves that by:
Automatically extracting financial data from Gmail.
Categorizing expenses using AI parsing.
Saving all data into Google Sheets in a structured way.
Comparing against monthly budgets and raising alerts.

Target Audience:
Individuals who want personal budget automation.
Families managing shared household spending.
Small teams looking for a lightweight financial log.

⚙️ Setup Prerequisites
An n8n instance (self-hosted or cloud).
A Google account with Gmail + Google Sheets enabled.
A pre-created Google Sheets file with 2 tabs: Expenses, Budgets.
A configured Gemini API connection in n8n.

📊 Google Sheets Template
Expenses tab (columns in order):
Timestamp | Date | Account | From | To | Type | Category | Description | Amount | Currency | Source | MessageId | Status
Budget tab (columns in order):
Month | Category | Budget Amount | Notes | UpdatedAt
Yearly Summary tab (auto-calculated):
Year | Month | Category | Total Expense | Budget | Variance | Alert
Variance = Budget - Total Expense
Alert = ⚠️ Over Budget when spending > budget

🚀 How It Works
Gmail:
The Gmail Trigger captures new bank/UPI emails.
The Gemini AI Parser extracts structured details (date, amount, category, etc.).
A Filter node ensures only valid financial transactions are logged.
The Information Extractor pulls out fields such as date, account, transaction type (Credit/Debit), description, currency, status, messageId, from email, to email, and category, then checks whether the transaction is Credit or Debit and appends the details to the corresponding Google Sheet. The Budget Validator checks against monthly allocations; if an expense exceeds the budget, it raises an alert and sends an email to the connected account.

For sending the email I wrote a Google Sheets Apps Script:

```javascript
// Wrapper function name reconstructed; the original snippet ended with an
// unmatched closing brace, implying a surrounding function declaration.
function checkBudgetAndReset() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var monthly = ss.getSheetByName("MonthlySummary");
  var yearly = ss.getSheetByName("YearlySummary");

  // Get values from Monthly Summary
  var totalExpense = monthly.getRange("D2").getValue();
  var budget = monthly.getRange("E2").getValue();

  // Get current date info
  var now = new Date();
  var month = Utilities.formatDate(now, "GMT+5:30", "MM");
  var year = Utilities.formatDate(now, "GMT+5:30", "yyyy");

  var status = (totalExpense > budget) ? "Alert" : "";

  // Append to Yearly Summary
  yearly.appendRow([year, month, totalExpense, status]);

  // If budget exceeded, send alert email
  if (status === "Alert") {
    var emailAddress = "YOUR EMAIL";
    var subject = "⚠️ Budget Exceeded - " + month + "/" + year;
    var body = "Your total expenses this month (" + totalExpense + ") have exceeded your budget of " + budget + ".\n\n" +
      "Please review your spending.";
    MailApp.sendEmail(emailAddress, subject, body);
  }

  // 🔄 Reset Monthly Summary
  var lastRow = monthly.getLastRow();
  if (lastRow > 3) { // assuming headers in first 2-3 rows
    monthly.getRange("A4:C" + lastRow).clearContent();
  }

  // Reset total in D2
  monthly.getRange("D2").setValue(0);
}
```

The Monthly Summary auto-calculates expenses, updating the total every month along with budgets (summing all budgets if there is more than one). The Yearly Summary auto-updates and raises over-budget alerts.

Telegram:
Input also comes from a Telegram bot connected to the workflow's Telegram trigger.
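The Budget Validator comparison can be sketched as a small function that matches each Budget-tab row against the month's category totals, following the template's own formula (Variance = Budget - Total Expense, alert when spending exceeds budget). The object keys below are illustrative assumptions about how the sheet rows arrive in n8n:

```javascript
// Compare monthly category totals against Budget-tab rows and flag overruns.
// Input shapes are assumptions: expenseTotals maps category -> spent amount,
// budgetRows come from the Budgets tab with Category and BudgetAmount keys.
function checkBudgets(expenseTotals, budgetRows) {
  return budgetRows.map(b => {
    const spent = expenseTotals[b.Category] || 0;
    const variance = b.BudgetAmount - spent; // Variance = Budget - Total Expense
    return {
      category: b.Category,
      spent,
      budget: b.BudgetAmount,
      variance,
      alert: spent > b.BudgetAmount ? "⚠️ Over Budget" : "",
    };
  });
}

// Example: groceries over budget, travel untouched.
const rows = checkBudgets({ Groceries: 1200 }, [
  { Category: "Groceries", BudgetAmount: 1000 },
  { Category: "Travel", BudgetAmount: 300 },
]);
console.log(rows[0].alert); // "⚠️ Over Budget"
```

The rows with a non-empty alert are the ones that would trigger the email/Telegram notification.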
The Gemini AI Parser extracts structured details (date, amount, category, etc.). The workflow then checks whether the manually specified entry is a 'budget' or an 'expense', splits the data, parses it, checks again whether it is 'Budget' or 'Expense', and appends the structured data to the corresponding Google Sheet. The Monthly Summary auto-calculates expenses and updates them every month along with budgets (summing all budgets if there is more than one). The Yearly Summary auto-updates and raises over-budget alerts.

🔧 Customization
Add support for more banks/UPI apps by extending the parser schema:

```javascript
const senderEmail = $input.first().json.From || "";
// The original snippet did not define emailBody; it presumably holds the
// message body/snippet, e.g.:
const emailBody = $input.first().json.snippet || "";

// Account detection — you can modify the bank names and UPI names here
let account = "";
if (/alerts@hdfcbank\.net/i.test(senderEmail)) account = "HDFC Bank";
else if (/ealerts@iobnet\.co\.in/i.test(senderEmail)) account = "Indian Overseas Bank";
else if (/alerts@indianbank\.in/i.test(senderEmail)) account = "Indian Bank";
else if (/@upi|@okhdfcbank|@okaxis|@okicici/i.test(emailBody)) {
  if (/gpay|google pay/i.test(emailBody)) account = "Google Pay";
  else if (/phonepe/i.test(emailBody)) account = "PhonePe";
  else if (/paytm/i.test(emailBody)) account = "Paytm";
  else account = "UPI";
} else {
  account = "Other";
}

// If account is "Other", skip output
if (account === "Other") {
  return [];
}

// Output
return [{
  account,
  from: senderEmail, // exact Gmail "From" metadata
  snippet: emailBody,
  messageId: $input.first().json.id || ""
}];
```

Create custom categories (e.g., Travel, Groceries, Subscriptions).
Send real-time alerts via Telegram/Slack/Email using n8n nodes.
Share the Google Sheet with family or team for collaborative use.

📌 Usage
The workflow runs automatically on every new Gmail transaction email and on financial input from the Telegram bot.
At the end of each month, totals are calculated in the Yearly Summary tab.
Users only need to maintain the Budget tab with updated monthly allocations.