by WhySoSerious
What it is

This workflow listens for new tickets in HaloPSA via webhook, generates a professional AI-powered summary of the issue using Gemini (or another LLM), and posts it back into the ticket as a private note. It’s designed for MSPs using HaloPSA who want to reduce triage time and give engineers a clear head start on each support case.

⸻

✨ Features

• 🔔 Webhook trigger from HaloPSA on new ticket creation
• 🚧 Optional team filter (skip Sales or other queues)
• 📦 Extracts ticket subject, details, and ID
• 🧠 Builds a structured AI prompt with MSP context (NinjaOne, M365, CIPP)
• 🤖 Processes via Gemini or other LLM
• 📑 Cleans & parses JSON output (summary, next step, troubleshooting) — see the sketch at the end of this entry
• 🧱 Generates a branded HTML private note (logo + styled sections)
• 🌐 Posts the note back into HaloPSA via API

⸻

🔧 Setup

Webhook
• Replace WEBHOOK_PATH and paste the generated Production URL into your HaloPSA webhook.

Guard filter (optional)
• Change teamName or teamId to skip tickets from specific queues.

Branding
• Replace YOUR_LOGO_URL and Your MSP Brand in the HTML note builder.

HaloPSA API
• In the HTTP node, replace YOUR_HALO_DOMAIN and add your Halo API token (Bearer auth).

LLM credentials
• Set your API key in the Gemini / OpenAI node credentials section.
• (Optional) Adjust the AI prompt with your own tools or processes.

⸻

✅ Requirements

• HaloPSA account with API enabled
• Gemini / OpenAI (or other LLM) API key
• SMTP (optional) if you want to extend with notifications

⸻

⚡ Workflow overview

`🔔 Webhook → 🚧 Guard → 📦 Extract Ticket → 🧠 Build AI Prompt → 🤖 AI Agent (Gemini) → 📑 Parse JSON → 🧱 Build HTML Note → 🌐 Post to HaloPSA`
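As a rough illustration of the 📑 Parse JSON step, here is a minimal Code-node sketch that strips markdown fences from the model's output and parses the expected fields. The summary, next_step, and troubleshooting keys are assumptions about the prompt's output schema, not the template's actual code:

```javascript
// Hypothetical Code node: clean and parse the LLM's JSON reply.
// Field names (summary, next_step, troubleshooting) are assumed, not
// taken from the template; adjust them to match your actual prompt.
const raw = $input.first().json.output ?? '';

// Models often wrap JSON in ```json ... ``` fences; strip them first.
const cleaned = raw.replace(/```json|```/g, '').trim();

let parsed;
try {
  parsed = JSON.parse(cleaned);
} catch (err) {
  // Fall back to a safe structure so the HTML note builder never crashes.
  parsed = { summary: cleaned, next_step: '', troubleshooting: '' };
}

return [{ json: parsed }];
```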
by Avkash Kakdiya
How it works

This workflow creates a Slack-based CRM assistant that allows users to query HubSpot data using natural language. When a user mentions the bot in Slack, the message is cleaned and processed to remove Slack-specific formatting. The workflow then retrieves and filters relevant data from HubSpot (deals, companies, and contacts). Finally, an AI agent formats the response and sends a structured reply back to Slack.

Step-by-step

Trigger on Slack mention
- Slack Trigger – Listens for app mentions in Slack channels.
- Code in JavaScript – Cleans the message by removing Slack IDs and formatting (see the sketch below).

Fetch and filter CRM data
- Get Deals – Retrieves deals from HubSpot.
- Filter Deals – Filters deals based on the user query.
- Get many companies – Fetches company records from HubSpot.
- Filter Companies – Matches companies against the query.
- Get Contacts – Retrieves contact data from HubSpot.
- Filter Contacts – Filters contacts using name-based matching.
- Merge – Combines filtered deals, companies, and contacts into one dataset.

Generate and send AI response
- AI Agent – Uses AI to format and structure the CRM data into a readable response.
- Google Gemini Chat Model – Provides the language model for the AI agent.
- Send a message – Sends the final response back to the Slack channel.

Why use this?

- Enables instant CRM access directly from Slack without logging into HubSpot
- Simplifies data lookup using natural language queries
- Combines multiple CRM objects into a single intelligent response
- Improves team productivity with faster decision-making
- Easily customizable for additional fields, filters, or AI formatting
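The "Code in JavaScript" cleaning step might look something like the following sketch; the exact field names and regexes are assumptions, since the template's own code isn't shown here:

```javascript
// Hypothetical cleanup for a Slack app-mention event.
// $json.text is assumed to hold the raw message text.
const raw = $json.text ?? '';

const cleaned = raw
  .replace(/<@[A-Z0-9]+>/g, '')                      // user/bot mentions like <@U12345>
  .replace(/<#[A-Z0-9]+\|([^>]*)>/g, '$1')           // channel links -> channel name
  .replace(/<(https?:\/\/[^|>]+)(\|[^>]*)?>/g, '$1') // <url|label> -> url
  .replace(/\s+/g, ' ')
  .trim();

return [{ json: { query: cleaned } }];
```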
by Alex Huy
How it works

This workflow automatically curates and sends a daily AI/Tech news digest by aggregating articles from premium tech publications and using AI to select the most relevant and trending stories.

🔄 Automated News Pipeline

- RSS Feed Collection - Fetches articles from 14 premium tech news sources (TechCrunch, MIT Tech Review, The Verge, Wired, etc.)
- Smart Article Filtering - Limits articles per source to ensure diverse coverage and prevent single-source domination
- Data Standardization - Cleans and structures article data (title, summary, link, date) for AI processing
- AI-Powered Curation - Uses Google Vertex AI to analyze articles and select the top 10 most relevant/trending stories
- Newsletter Generation - Creates a professional HTML newsletter with summaries and direct links
- Email Delivery - Automatically sends the formatted digest via Gmail

🎯 Key Features

- **Premium Sources** - Curates from 14 top-tier tech publications
- **AI Quality Control** - Intelligent article selection and summarization
- **Balanced Coverage** - Prevents source bias with smart filtering
- **Professional Format** - Clean HTML newsletter design
- **Scheduled Automation** - Daily delivery at customizable times
- **Error Resilience** - Continues processing even if some feeds fail

Setup Steps

1. 🔑 Required API Access
- **Google Cloud Project** with Vertex AI API enabled
- **Google Service Account** with AI Platform Developer role
- **Gmail API** enabled for email sending

2. ☁️ Google Cloud Setup
- Create or select a Google Cloud Project
- Enable the Vertex AI API
- Create a service account with these permissions: AI Platform Developer, Service Account User
- Download the service account JSON key
- Enable the Gmail API for the same project

3. 🔐 n8n Credentials Configuration
Add these credentials to your n8n instance:
- Google Service Account (for Vertex AI): upload your service account JSON key and name it descriptively (e.g., "Vertex AI Service Account")
- Gmail OAuth2: use your Google account credentials and authorize Gmail API access; required scopes: gmail.send

4. ⚙️ Workflow Configuration
- Import the workflow into your n8n instance
- Update node configurations:
  - Google Vertex AI Model: set your Google Cloud Project ID
  - Send Newsletter Email: update the recipient email address
  - Daily Newsletter Trigger: adjust the schedule time if needed
- Verify credentials are properly connected to the respective nodes

5. 📰 RSS Sources Customization (Optional)
The workflow includes 14 premium tech news sources:
- TechCrunch (AI & Startups)
- The Verge (AI section)
- MIT Technology Review
- Wired (AI/Science)
- VentureBeat (AI)
- ZDNet (AI topics)
- AI Trends
- Nature (Machine Learning)
- Towards Data Science
- NY Times Technology
- The Guardian Technology
- BBC Technology
- Nikkei Asia Technology

To customize sources:
- Edit the "Configure RSS Sources" node
- Add/remove RSS feed URLs as needed
- Ensure feeds are active and properly formatted

6. 🚀 Testing & Deployment
- Manual Test: execute the workflow manually to verify setup
- Check Email: confirm the newsletter arrives with proper formatting
- Verify AI Output: ensure articles are relevant and well summarized
- Schedule Activation: enable the daily trigger for automated operation

💡 Customization Options

Newsletter Timing:
- Default: 8:00 AM UTC daily
- Modify "triggerAtHour" in the Schedule Trigger node
- Add multiple daily sends if desired

Content Focus:
- Adjust the AI prompt in the "AI Tech News Curator" node
- Specify different topics (e.g., focus on startups, enterprise AI, etc.)
- Change the output language or format

Email Recipients:
- Update the single recipient in the Gmail node
- Or modify to send to multiple addresses
- Integrate with mailing list services

Article Limits:
- Current: max 5 articles per source
- Modify the filtering logic in the "Filter & Balance Articles" node (see the sketch after this section)
- Adjust the total article count in the AI prompt

🔧 Troubleshooting

Common Issues:
- **RSS Feed Failures**: Individual feed failures won't stop the workflow
- **AI Rate Limits**: Vertex AI has generous limits, but monitor usage
- **Gmail Sending**: Ensure the sender email is authorized in Gmail settings
- **Missing Articles**: Some RSS feeds may be inactive - check source URLs

Performance Tips:
- Monitor execution times during peak RSS activity
- Consider adding delays if hitting rate limits
- Archive old newsletters for reference

This workflow transforms daily news consumption from manual browsing into curated, AI-powered intelligence delivered automatically to your inbox.
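For the per-source cap, the "Filter & Balance Articles" logic could be approximated by a Code-node sketch like the one below; the field name (source) and the limit of 5 are assumptions based on the description above:

```javascript
// Hypothetical per-source balancing: cap articles per feed so no
// single source dominates the digest. Field names are assumed.
const MAX_PER_SOURCE = 5;
const counts = {};
const balanced = [];

for (const item of $input.all()) {
  const source = item.json.source ?? 'unknown';
  counts[source] = (counts[source] ?? 0) + 1;
  if (counts[source] <= MAX_PER_SOURCE) {
    balanced.push(item);
  }
}

return balanced;
```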
by Matt Chong
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Gmail Auto-Reply with AI

Automatically draft smart email replies using ChatGPT. Reclaim the time you spend typing the same responses again and again.

Who is this for?

If you're overwhelmed with emails and constantly repeating yourself in replies, this workflow is for you. Whether you're a freelancer, business owner, or team lead, it saves you time by handling email triage and drafting replies for you.

What does it solve?

This workflow reads your unread Gmail messages and uses AI to:
- Decide whether the email needs a response
- Automatically draft a short, polite reply when appropriate
- Skip spam, newsletters, or irrelevant emails
- Save the AI-generated reply as a Gmail draft (you can edit it before sending)

It takes email fatigue off your plate and keeps your inbox moving.

How it works

1. Trigger on New Email: Watches your Gmail inbox for unread messages.
2. AI Agent Review: Analyzes the content to decide if a reply is needed (see the sketch below).
3. OpenAI ChatGPT: Drafts a short, polite reply (under 120 words).
4. Create Gmail Draft: Saves the response as a draft for you to review.
5. Label It: Applies a custom label like Action so you can easily find AI-handled emails.

How to set up?

1. Connect credentials:
   - Gmail (OAuth2)
   - OpenAI (API key)
2. Create the Gmail label: In your Gmail, create a label named Action (case-sensitive).

How to customize this workflow to your needs

- **Change the AI prompt**: Add company tone, extra context, or different reply rules.
- **Label more intelligently**: Add conditions or labels for "Newsletter," "Meeting," etc.
- **Adjust frequency**: Change how often the Gmail Trigger polls your inbox.
- **Add manual review**: Route drafts to a team member before sending.
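One way to make the "reply needed" decision machine-readable is to ask the agent for structured output and gate the draft step on it. This is a sketch under the assumption that the agent returns a JSON object with needs_reply and draft fields; the template's actual schema may differ:

```javascript
// Hypothetical Code node between the AI agent and the Gmail draft node.
// Assumes the agent was prompted to answer with JSON like:
//   { "needs_reply": true, "draft": "Hi ..." }
const out = $input.first().json.output ?? '{}';

let decision;
try {
  decision = typeof out === 'string' ? JSON.parse(out) : out;
} catch {
  decision = { needs_reply: false, draft: '' };
}

// Only pass items onward when a reply is actually warranted;
// downstream, an IF node could route on json.needs_reply instead.
return decision.needs_reply ? [{ json: decision }] : [];
```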
by WeblineIndia
Send Automated Recruitment Rejection Emails with Google Sheets and Gmail at End-of-Day

Automatically reads a “Candidate Status” tab in Google Sheets every day at 18:00 Asia/Kolkata, filters rows with exact (case-sensitive) rejection statuses, and sends one personalized rejection email per candidate via SMTP (Gmail). It rate-limits sends, supports DRY_RUN previews, and writes a timestamp back to rejection_sent_at to avoid duplicates.

Who’s it for

- Recruiters needing consistent, respectful closure at day end.
- Teams tracking hiring outcomes in Google Sheets.
- Coordinators who prefer a scheduled, hands-off workflow with safeguards.

How it works

1. Cron (18:00 IST) triggers daily
2. Google Sheets Read → loads the Candidate Status tab
3. Filter → keeps rows whose status exactly matches a value in REJECT_STATUS_CSV, with a valid candidate_email and an empty rejection_sent_at
4. DRY_RUN? If true → output preview only; if false → proceed
5. Rate limit → wait RATE_LIMIT_SECONDS (default 10s) between emails
6. SMTP (Gmail) → send a personalized email per row using templates
7. Mark as sent → write the current timestamp to rejection_sent_at

How to set up

- **Sheet & Columns**: Create a “Candidate Status” tab with: candidate_name, candidate_email, role, status, recruiter_name, recruiter_email, company_name, interview_feedback (optional), template_variant (optional), language (optional), rejection_sent_at
- **Credentials**: Connect Google Sheets (OAuth) and SMTP (Gmail) in n8n (use an App Password if 2FA is enabled)
- **Config (Set node)**:
  - SPREADSHEET_ID
  - SOURCE_SHEET = Candidate Status
  - TIMEZONE = Asia/Kolkata
  - REJECT_STATUS_CSV = e.g., Rejected
  - SMTP_FROM = e.g., careers@company.com
  - SUBJECT_TEMPLATE = Regarding your application for {{role}} at {{company_name}}
  - HTML_TEMPLATE / TEXT_TEMPLATE
  - RATE_LIMIT_SECONDS = 10
  - INCLUDE_WEEKENDS = true
  - DRY_RUN = false
- **Activate**: Enable the workflow

Requirements

- Google Sheet with the “Candidate Status” tab and columns above.
- SMTP (Gmail) account for sending.
- n8n (cloud or self-hosted) with Google Sheets + SMTP credentials.

How to customize

- **Statuses**: REJECT_STATUS_CSV supports comma-separated exact values (e.g., Rejected,Not Selected)
- **Templates**: Edit SUBJECT_TEMPLATE, HTML_TEMPLATE, TEXT_TEMPLATE (see the sketch after this section)
- **Variables**: {{candidate_name}}, {{role}}, {{company_name}}, {{recruiter_name}}, and optional {{feedback_text}}/{{feedback_html}} from interview_feedback
- **Schedule**: Change the Cron time from 18:00 to your preferred hour
- **Rate limit**: Tune RATE_LIMIT_SECONDS for your SMTP policy
- **Preview**: Set DRY_RUN=true for a safe, no-send preview

Add-ons

- **Dynamic Reply-To** per recruiter_email
- **Localization/Variants** via language or template_variant columns
- **Daily summary** email: sent/skip/error counts
- **Validation & logging**: log invalid emails to another tab
- **Gmail API**: swap SMTP for Gmail nodes if preferred

Use Case Examples

1. **Daily round-up**: 18:00 IST closure emails for all candidates marked Rejected today
2. **Multi-brand hiring**: Switch company_name per row and personalize subject lines
3. **Compliance/logging**: DRY_RUN each afternoon, review, then flip to live sends

Common troubleshooting

- **No emails sent**: Ensure status exactly matches REJECT_STATUS_CSV (case-sensitive) and candidate_email is present
- **Duplicates**: Verify rejection_sent_at is blank before the run; the workflow sets it after sending
- **Blank variables**: Fill candidate_name, role, company_name, recruiter_name in the sheet
- **SMTP errors**: Check credentials, sender permissions, and daily limits
- **Timing**: Confirm the workflow timezone is Asia/Kolkata and the Cron fires at 18:00

Need Help?
Want us to tailor the template, add a summary report or wire up company-based variants? Contact our n8n automation engineers at WeblineIndia and we’ll plug it in.
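As an illustration of how the {{variable}} templates above could be rendered per row, here is a minimal Code-node sketch; the substitution helper and sample body text are hypothetical, not the template's actual implementation:

```javascript
// Hypothetical template renderer for one Candidate Status row.
// Replaces {{placeholders}} with the matching sheet columns.
function render(template, row) {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) => row[key] ?? '');
}

const row = $json; // one sheet row
const subject = render(
  'Regarding your application for {{role}} at {{company_name}}',
  row
);
const text = render(
  'Dear {{candidate_name}},\n\nThank you for interviewing for the ' +
  '{{role}} position at {{company_name}}.\n\nBest regards,\n{{recruiter_name}}',
  row
);

return [{ json: { ...row, subject, text } }];
```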
by WeblineIndia
(Retail) Social Review Monitoring

This workflow automatically monitors WooCommerce product reviews, detects low-rated and approved reviews, checks whether the review already exists in Google Sheets, updates or inserts records accordingly, and sends a clear Slack alert generated using OpenAI for new low-rated reviews.

The workflow runs on a schedule, fetches WooCommerce product reviews, filters only approved reviews with low ratings (≤ 2 stars), checks if the review already exists in Google Sheets, and then:
- **Updates** the existing record if the review is already stored
- **Generates a Slack alert** using OpenAI and **adds a new row** if the review is new

You receive:
- **Automated monitoring of customer reviews**
- **Centralized Google Sheets tracking** for low-rated reviews
- **Instant Slack alerts** for new negative feedback

Ideal for support, product, and operations teams who want fast visibility into unhappy customer feedback without manual checks.

What It Does

This workflow automates negative review detection and response:
1. Runs automatically every 5 hours
2. Fetches product reviews from WooCommerce
3. Processes reviews one by one
4. Filters only approved reviews
5. Identifies low-rated reviews (≤ 2 stars)
6. Checks if the review already exists in Google Sheets
7. Updates existing records or inserts new ones
8. Sends a professional Slack alert for new reviews

This ensures no duplicate alerts and keeps your review data up to date.

Who’s It For

This workflow is ideal for:
- E-commerce teams
- Customer support teams
- Product managers
- QA and operations teams
- Store owners monitoring customer satisfaction

Requirements to Use This Workflow

To run this workflow, you need:
- **n8n instance** (cloud or self-hosted)
- **WooCommerce REST API credentials**
- **Google Sheets account** with edit access
- **Slack workspace** with API permissions
- **OpenAI API key**

How It Works

1. Scheduled Trigger – Workflow runs automatically every 5 hours
2. Fetch Reviews – Pulls product reviews from WooCommerce
3. Normalize Data – Extracts required fields
4. Loop Reviews – Processes reviews one by one
5. Approval Check – Allows only approved reviews
6. Rating Check – Filters reviews rated 2 stars or lower
7. Sheet Lookup – Checks if the review ID already exists
8. Decision Logic – Routes based on review existence
9. AI Message Creation – Generates a Slack alert for new reviews
10. Slack Notification – Sends the alert to the configured channel
11. Sheet Update – Updates or appends review data

Setup Steps

1. Import the provided n8n workflow JSON file
2. Configure the WooCommerce HTTP Request node with your credentials
3. Connect your Google Sheets account and select the correct sheet
4. Connect your Slack account and choose a channel
5. Add your OpenAI API key to the OpenAI node
6. Verify column names match your Google Sheet
7. Activate the workflow

How To Customize Nodes

Change Rating Threshold
Modify the Check Low Rating IF node (see the sketch after this section):
- Adjust the rating value (e.g., ≤ 3 stars)
- Add additional conditions if needed

Customize Slack Message
Edit the OpenAI prompt to:
- Change tone
- Add mentions
- Include product links

Customize Google Sheet
You can add extra columns such as:
- Response status
- Assigned team member
- Resolution notes

Add-Ons (Optional Enhancements)

You can extend this workflow to:
- Auto-create support tickets
- Send email alerts
- Detect repeated negative reviewers
- Add sentiment analysis
- Generate daily or weekly summary reports

Use Case Examples

1. Customer Support Alerts – Notify support teams instantly about negative feedback.
2. Product Quality Tracking – Identify recurring product issues early.
3. Review Auditing – Maintain a clean, duplicate-free review log.

Troubleshooting Guide

| Issue | Possible Cause | Solution |
|----------------------|---------------------------|-------------------------|
| No Slack alert | Slack credentials missing | Reconnect Slack API |
| Duplicate rows | Review ID mismatch | Verify lookup column |
| Sheet update fails | Column name mismatch | Match sheet headers |
| Workflow not running | Trigger disabled | Enable Schedule Trigger |

Need Help?

If you need help customizing or extending this workflow with advanced features like adding ticketing, dashboards or analytics, then our n8n workflow developers at WeblineIndia will be happy to assist.
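To make the threshold logic concrete, here is a minimal sketch of what the approval and rating checks could look like if implemented in a Code node instead of the IF node; the field names (status, rating) follow WooCommerce's review schema, but verify them against your actual API response:

```javascript
// Hypothetical Code-node equivalent of the approval + low-rating checks.
const RATING_THRESHOLD = 2; // alert on reviews rated 2 stars or lower

const flagged = $input.all().filter((item) => {
  const { status, rating } = item.json;
  return status === 'approved' && Number(rating) <= RATING_THRESHOLD;
});

return flagged;
```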
by PDF Vector
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Transform Research Papers into a Searchable Knowledge Graph

This workflow automatically builds and maintains a comprehensive knowledge graph from academic papers, enabling researchers to discover connections between concepts, track research evolution, and perform semantic searches across their field of study. By combining PDF Vector's paper parsing capabilities with GPT-4's entity extraction and Neo4j's graph database, this template creates a powerful research discovery tool.

Target Audience & Problem Solved

This template is designed for:
- **Research institutions** building internal knowledge repositories
- **Academic departments** tracking research trends and collaborations
- **R&D teams** mapping technology landscapes
- **Libraries and archives** creating searchable research collections

It solves the problem of information silos in academic research by automatically extracting and connecting key concepts, methods, authors, and findings across thousands of papers.

Prerequisites

- n8n instance with the PDF Vector node installed
- OpenAI API key for GPT-4 access
- Neo4j database instance (local or cloud)
- Basic understanding of graph databases
- At least 100 API credits for PDF Vector (processes ~50 papers)

Step-by-Step Setup Instructions

1. Configure PDF Vector Credentials
   - Navigate to Credentials in n8n
   - Add new PDF Vector credentials with your API key
   - Test the connection to ensure it's working

2. Set Up Neo4j Database
   - Install Neo4j locally or create a cloud instance at Neo4j Aura
   - Note your connection URI, username, and password
   - Create database constraints for better performance:

```cypher
CREATE CONSTRAINT paper_id IF NOT EXISTS ON (p:Paper) ASSERT p.id IS UNIQUE;
CREATE CONSTRAINT author_name IF NOT EXISTS ON (a:Author) ASSERT a.name IS UNIQUE;
CREATE CONSTRAINT concept_name IF NOT EXISTS ON (c:Concept) ASSERT c.name IS UNIQUE;
```

3. Configure OpenAI Integration
   - Add OpenAI credentials in n8n
   - Ensure you have GPT-4 access (GPT-3.5 can be used with reduced accuracy)
   - Set appropriate rate limits to avoid API throttling

4. Import and Configure the Workflow
   - Import the template JSON into n8n
   - Update the search query in the "PDF Vector - Fetch Papers" node to your research domain
   - Adjust the schedule trigger frequency based on your needs
   - Configure the PostgreSQL connection for logging (optional)

5. Test with Sample Papers
   - Manually trigger the workflow
   - Monitor the execution for any errors
   - Check the Neo4j browser to verify nodes and relationships are created
   - Adjust entity extraction prompts if needed for your domain

Implementation Details

The workflow operates in several stages:
1. Paper Discovery: Uses PDF Vector's academic search to find relevant papers
2. Content Parsing: Leverages LLM-enhanced parsing for accurate text extraction
3. Entity Extraction: GPT-4 identifies concepts, methods, datasets, and relationships
4. Graph Construction: Creates nodes and relationships in Neo4j
5. Statistics Tracking: Logs processing metrics for monitoring

Customization Guide

Adjusting Entity Types: Edit the GPT-4 prompt in the "Extract Entities" node to include domain-specific entities:

```javascript
// Add custom entity types like:
// - Algorithms
// - Datasets
// - Institutions
// - Funding sources
```

Modifying Relationship Types: Extend the "Build Graph Structure" node to create custom relationships (see the sketch after this section):

```javascript
// Examples:
// COLLABORATES_WITH (between authors)
// EXTENDS (between papers)
// FUNDED_BY (paper to funding source)
```

Changing Search Scope:
- Modify the providers array to include/exclude databases
- Adjust the year range for historical or recent focus
- Add keyword filters for specific subfields

Scaling Considerations:
- For large-scale processing (>1000 papers/day), implement batching
- Use Redis for deduplication across runs
- Consider implementing incremental updates to avoid reprocessing

Knowledge Base Features:
- Automatic concept extraction with GPT-4
- Research timeline tracking
- Author collaboration networks
- Topic evolution visualization
- Semantic search interface via Neo4j

Components:
- Paper Ingestion: Continuous monitoring and parsing
- Entity Extraction: Identify key concepts, methods, datasets
- Relationship Mapping: Connect papers, authors, concepts
- Knowledge Graph: Store in graph database
- Search Interface: Query by concept, author, or topic
- Visualization: Interactive knowledge exploration
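As a rough sketch of what a "Build Graph Structure" extension could look like, the Code node below turns extracted entities into parameterized Cypher statements for a downstream Neo4j node. The input shape (paper, authors, concepts) is an assumption, not taken from the template:

```javascript
// Hypothetical Code node: turn one paper's extracted entities into
// parameterized Cypher statements. MERGE keeps the graph idempotent
// across repeated runs.
const { paper, authors = [], concepts = [] } = $json;

const statements = [];

statements.push({
  query: 'MERGE (p:Paper {id: $id}) SET p.title = $title',
  params: { id: paper.id, title: paper.title },
});

for (const name of authors) {
  statements.push({
    query:
      'MERGE (a:Author {name: $name}) ' +
      'WITH a MATCH (p:Paper {id: $id}) MERGE (a)-[:AUTHORED]->(p)',
    params: { name, id: paper.id },
  });
}

for (const concept of concepts) {
  statements.push({
    query:
      'MERGE (c:Concept {name: $name}) ' +
      'WITH c MATCH (p:Paper {id: $id}) MERGE (p)-[:DISCUSSES]->(c)',
    params: { name: concept, id: paper.id },
  });
}

return statements.map((s) => ({ json: s }));
```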
by Eugene
Find which AI search topics each domain owns with SE Ranking and GPT

Who is this for

- SEO teams wanting to understand topic-level AI search dominance across competitors
- Content strategists building editorial plans around AI visibility gaps
- Marketing managers benchmarking brand presence across AI search topics

What this workflow does

Pulls AI search prompts for your domain and up to 2 competitors, then uses GPT to cluster them into topics and reason about which domain owns each one — turning a flat list of prompts into a strategic competitive topic map.

What you'll get

- An AI search leaderboard with share of voice across ChatGPT, Perplexity, Gemini, AI Overviews, and AI Mode
- A topic-level competitive map showing which domain wins each topic area
- Prompt counts per domain per topic so you can see exactly where you're ahead or behind
- A one-line actionable insight per topic to guide your content strategy
- An overall winner and competitive summary saved to Google Sheets

How it works

1. Add your domain and 2 competitors in the form — pulls the AI search leaderboard across all 5 LLM engines
2. Fetches up to 10 prompts per domain (both brand and target) for you and each competitor
3. Filters competitor prompts to keep only SEO-relevant topics — removes noise like gaming or sports
4. Sends all prompts to GPT with instructions to cluster them into topics and identify which domain appears most per topic
5. GPT reasons about dominance per cluster and returns a structured competitive topic map (see the sketch after this section)
6. Saves the leaderboard and topic map to separate tabs in Google Sheets

Requirements

- SE Ranking community node installed
- SE Ranking API token (Get one here)
- OpenAI API key
- Google Sheets account (optional)

Setup

1. Install the SE Ranking community node
2. Add your SE Ranking API credentials
3. Add your OpenAI API credentials
4. Connect your Google Sheets account and set a spreadsheet URL in each export node
5. Activate the workflow — n8n generates a unique form URL you can share or embed
6. Open the form, fill in your domain and competitors, and the workflow runs automatically

Customization

- Change prompts_limit in the Configuration node to fetch more or fewer prompts per domain
- Change source in the Configuration node for a different regional database (us, uk, de, fr, es, etc.)
- Edit the system prompt in the GPT node to adjust how topics are clustered or how insights are written
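For orientation, the structured topic map returned by the GPT step might look roughly like the following shape; the exact keys and values are illustrative assumptions, since the template's system prompt defines the real schema:

```javascript
// Hypothetical shape of the GPT clustering output. Adjust the keys to
// whatever your system prompt actually asks the model to return.
const exampleTopicMap = {
  overall_winner: 'yourdomain.com',
  topics: [
    {
      topic: 'AI search analytics',
      winner: 'yourdomain.com',
      prompt_counts: { 'yourdomain.com': 7, 'rival-a.com': 3, 'rival-b.com': 1 },
      insight: 'You lead this topic; defend it with fresh comparison content.',
    },
  ],
};

console.log(JSON.stringify(exampleTopicMap, null, 2));
```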
by Florent
Automated n8n Workflows & Credentials Backup to Local/Server Disk & FTP

Complete backup solution that saves both workflows and credentials to local/server disk, with optional FTP upload for off-site redundancy.

What makes this workflow different:
- Backs up workflows AND credentials together
- Saves to local/server disk (not Git, GitHub, or any cloud services)
- Optional FTP upload for redundancy (disabled by default)
- Comprehensive error handling and email notifications
- **Timezone-aware** scheduling
- Ready to use with minimal configuration

How it works

Backup Process (Automated Daily at 4 AM):

1. Initialisation - Sets up timezone-aware timestamps and configurable backup paths for both local/server disk and FTP destinations
2. Folder Creation - Creates date-stamped backup directories (YYYY-MM-DD format) on local/server disk
3. Dual Backup Operations - Processes credentials and workflows in two separate branches:

   Credentials Branch:
   - Exports n8n credentials using the built-in CLI command with the backup flag
   - Lists exported credential files in the credentials folder
   - Reads each credential file from disk
   - Optional: Uploads to the FTP server (disabled by default)
   - Optional: Logs FTP upload results for credentials

   Workflows Branch:
   - Retrieves all workflows via the n8n API
   - Cleans workflow names for cross-platform compatibility
   - Converts workflows to formatted JSON files
   - Writes files to local/server disk
   - Optional: Uploads to the FTP server (disabled by default)
   - Optional: Logs FTP upload results for workflows

4. Data Aggregation - Combines all workflow data with binary attachments for comprehensive reporting
5. Results Merging - Consolidates credentials FTP logs, workflows FTP logs, and aggregated workflow data
6. Summary Generation - Creates detailed backup logs including:
   - Statistics (file counts, sizes, durations)
   - Success/failure tracking for local and FTP operations
   - Error tracking with detailed messages
   - Timezone-aware timestamps
7. Notifications - Sends comprehensive email reports with log files attached and saves execution logs to disk

How to use

Initial Setup:

1. Configure the Init Node - Open the "Init" node and customize these key parameters in the "Workflow Standard Configuration" section:

```javascript
// Admin email for notifications
const N8N_ADMIN_EMAIL = $env.N8N_ADMIN_EMAIL || 'youremail@world.com';

// Workflow name (auto-detected)
const WORKFLOW_NAME = $workflow.name;

// Projects root directory on your server
const N8N_PROJECTS_DIR = $env.N8N_PROJECTS_DIR || '/files/n8n-projects-data';
// projects-root-folder/
// └── Your-project-folder-name/
//     ├── logs/
//     ├── reports/
//     ├── ...
//     └── [other project files]

// Project folder name for this backup workflow
const PROJECT_FOLDER_NAME = "Workflow-backups";
```

Then customize these parameters in the "Workflow Custom Configuration" section:

```javascript
// Local backup folder (must exist on your server)
const BACKUP_FOLDER = $env.N8N_BACKUP_FOLDER || '/files/n8n-backups';

// FTP backup folder (root path on your FTP server)
const FTP_BACKUP_FOLDER = $env.N8N_FTP_BACKUP_FOLDER || '/n8n-backups';

// FTP server name for logging (display purposes only)
const FTPName = 'Synology NAS 2To';
```

These variables can also be set as environment variables in your n8n configuration.

2. Set Up Credentials:
   - Configure n8n API credentials for the "Fetch Workflows" node
   - Configure SMTP credentials for email notifications
   - Optional: Configure FTP credentials if you want to enable off-site backups

3. Configure the Backup Folder:
   - Ensure the backup folder path exists on your server
   - Verify proper write permissions for the n8n process
   - If running in Docker, ensure volume mapping is correctly configured

4. Customize Email Settings:
   - Update the "Send email" node with your recipient email address or your "N8N_ADMIN_EMAIL" environment value
   - Adjust the email subject and body text as needed

Enabling FTP Upload (Optional):

By default, FTP upload nodes are disabled for easier setup. To enable off-site FTP backups:

1. Simply activate these 4 nodes (no other changes needed):
   - "Upload Credentials To FTP"
   - "FTP Logger (credentials)"
   - "Upload Workflows To FTP"
   - "FTP Logger (workflows)"
2. Configure FTP credentials in the two upload nodes
3. The workflow will automatically handle FTP operations and include upload status in reports

Requirements

- n8n API credentials (for workflow fetching)
- SMTP server configuration (for email notifications)
- Adequate disk space for local backup storage
- Proper file system permissions for backup folder access
- Docker environment with volume mapping (if running n8n in Docker)
- Optional: FTP server access and credentials (for off-site backups)

Good to know

- **Security**: Credentials are exported using n8n's secure backup format - actual credential values are not exposed in plain text
- **Timezone Handling**: All timestamps respect configured timezone settings (defaults to Europe/Paris, configurable in the Init node)
- **File Naming**: Automatic sanitization ensures backup files work across different operating systems (removes forbidden characters, limits length to 180 characters); see the sketch after this section
- **FTP Upload**: Disabled by default for easier setup - simply activate 4 nodes to enable off-site backups without any code changes
- **Connection Resilience**: FTP operations include error handling for timeout and connection issues without failing the entire backup
- **Graceful Degradation**: If FTP nodes are disabled, the workflow completes successfully with local backups only and indicates FTP status in the logs
- **Error Handling**: Comprehensive error catching with detailed logging and email notifications
- **Dual Logging**: Creates both JSON logs (for programmatic parsing) and plain text logs (for human readability)
- **Storage**: Individual workflow JSON files allow for selective restore and easier version control integration
- **Scalability**: Handles any number of workflows efficiently with detailed progress tracking

This automated backup workflow saves your n8n data to both local disk and an FTP server. To restore your backups, use:
- "n8n Restore from Disk - Self-Hosted Solution" for local/server disk restores
- "n8n Restore from FTP - Remote Backup Solution" for FTP remote restores
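A sanitization step like the one described under File Naming might look roughly like this; the character set and the helper itself are assumptions, not the workflow's actual code:

```javascript
// Hypothetical filename sanitizer for workflow backup files.
// Strips characters that are forbidden on Windows/macOS/Linux and
// caps the length at 180 characters, as described above.
function sanitizeFileName(name) {
  return name
    .replace(/[<>:"/\\|?*\x00-\x1f]/g, '_') // forbidden/control chars
    .replace(/\s+/g, ' ')
    .trim()
    .slice(0, 180);
}

// Example: "My/Workflow: v2?" -> "My_Workflow_ v2_"
console.log(sanitizeFileName('My/Workflow: v2?'));
```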
by Fabrice
This n8n template shows you how to automate on-page SEO and GEO (Generative Engine Optimization) audits with a sovereign AI. By combining a web crawler and the IONOS AI Model Hub, any URL you submit is fully analyzed and a structured audit report is delivered to your inbox in minutes.

Use cases

- **SEO audits at scale:** Submit any URL through a simple form and receive a ready-to-share audit report — no manual checking of title tags, meta descriptions, or schema markup needed.
- **GEO readiness check:** Assess whether a page is likely to be cited by AI-powered search engines like Google AI Overviews, Bing Copilot, or Perplexity — covering E-E-A-T signals, answer-ready content, and structured data quality.
- **On-demand analysis:** Trigger an audit any time directly from the form.

How it works

1. A Form Trigger collects three inputs: the target URL, the preferred crawler type, and the recipient email address.
2. A Switch node routes the request to either a simple HTTP crawler (fast, suitable for static and server-rendered sites) or a headless browser crawler powered by Apify Playwright (for JavaScript-heavy SPAs built with React, Vue, or Angular).
3. The Extract SEO Data code node then processes the raw HTML: it preserves the full <head> section including JSON-LD structured data, extracts the heading hierarchy (H1–H6), captures image alt attributes, collects internal and external links with anchor text, and strips the body down to clean readable text — giving the AI a complete and structured picture of the page (a sketch of this kind of extraction follows the Requirements section below).
4. The SEO + GEO Audit node sends this data to the IONOS AI Model Hub (Mistral Nemo) with a detailed prompt that instructs the model to return a two-part Markdown report. Part 1 covers SEO signals: title tag, meta description, canonical URL, robots directives, heading structure, schema markup, Open Graph tags, hreflang, image alt texts, and link signals. Part 2 covers GEO signals: E-E-A-T attribution, factual clarity, answer-ready content, structured data quality for AI parsers, citation worthiness, brand entity consistency, and content freshness. Each finding is categorized as a Critical Issue, Quick Win, or Opportunity.
5. A Markdown node converts the report to HTML, and a Gmail node delivers it as a formatted email to the address entered in the form.

Good to know

- HTTP vs Headless: Use HTTP for standard WordPress, static HTML, or server-rendered sites — it is faster and free. Switch to Headless (Apify or another provider) only when the page relies on JavaScript to render its content, such as React or Vue applications.
- Model selection: The IONOS AI Model Hub offers several LLMs. Mistral Nemo delivers strong reasoning for structured SEO analysis. You can swap the model name directly in the audit node for any model available on the Hub.
- Data extraction: The workflow extracts headings, image alts, and links as structured fields before stripping tags — so the AI can audit alt text coverage and heading hierarchy accurately, not just guess from plain text.

How to set it up

1. Import the workflow JSON into your n8n instance.
2. Add your IONOS Cloud API token to the IONOS Cloud Chat Model node credentials.
3. Connect your Gmail OAuth2 credentials to the Gmail node.
4. For headless crawling: create a free account at console.apify.com, copy your API token, and paste it into the Headless Browser Crawl node.
5. Activate the workflow, open the form URL, submit a test URL, and verify the audit arrives in your inbox.
Requirements

- **n8n instance** — self-hosted or n8n Cloud
- **IONOS Cloud account** — to access the AI Model Hub (Mistral Nemo)
- **Gmail account (OAuth2)** — for delivering audit reports
- **Apify account (optional)** — only required for the Headless Browser crawler path
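To give a flavor of step 3 in "How it works" above, here is a minimal regex-based sketch of extracting the heading hierarchy and image alt attributes from raw HTML. The real Extract SEO Data node is more thorough, the input field name is assumed, and a DOM parser would be more robust:

```javascript
// Hypothetical extraction pass over raw HTML. A regex approach keeps the
// sketch dependency-free; a real implementation might use a DOM parser.
const html = $json.data ?? ''; // raw HTML from the crawler; field name assumed

// Heading hierarchy (H1-H6) in document order.
const headings = [...html.matchAll(/<h([1-6])[^>]*>([\s\S]*?)<\/h\1>/gi)]
  .map((m) => ({ level: Number(m[1]), text: m[2].replace(/<[^>]+>/g, '').trim() }));

// Image alt attributes, including empty ones (useful for coverage audits).
const imageAlts = [...html.matchAll(/<img[^>]*\balt=["']([^"']*)["'][^>]*>/gi)]
  .map((m) => m[1]);

return [{ json: { headings, imageAlts } }];
```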
by Zaid
How it works

1. Runs automatically on a weekly schedule (every Monday at 9am by default).
2. Reads all rows from a Google Sheets spreadsheet containing your data.
3. Aggregates the data and sends it to OpenAI to generate a concise summary report (see the sketch below).
4. Formats the report with a date-stamped subject line.
5. Emails the summary to your chosen recipient via Gmail.
6. Logs every sent report to a separate spreadsheet for tracking.

Turn your raw spreadsheet data into a professional weekly digest without lifting a finger.

Setup steps

**Estimated setup time:** 10 minutes

1. Adjust the Schedule Trigger for your preferred day and time.
2. Connect your Google Sheets account and set the data spreadsheet ID and sheet name.
3. Connect your OpenAI API credentials.
4. Set the recipient email address in the Send Report Email node.
5. Connect your Gmail account for sending.
6. Create a "Report Log" sheet with columns: date, subject, sent_at.
7. Activate the workflow.
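Step 3's aggregation could be approximated with a Code node like the one below, which flattens the sheet rows into a compact text block for the OpenAI prompt and builds the date-stamped subject; the column handling is an assumption, since it depends on your spreadsheet:

```javascript
// Hypothetical aggregation ahead of the OpenAI summary call.
// Flattens all sheet rows into one text block and stamps the subject.
const rows = $input.all().map((item) => item.json);

// Render each row as "col1: val1 | col2: val2 | ..." for the prompt.
const dataBlock = rows
  .map((row) =>
    Object.entries(row)
      .map(([key, value]) => `${key}: ${value}`)
      .join(' | ')
  )
  .join('\n');

const subject = `Weekly Report - ${new Date().toISOString().slice(0, 10)}`;

return [{ json: { subject, dataBlock, rowCount: rows.length } }];
```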
by noda
Overview

Auto-translate YouTube uploads to Japanese and post to Slack (DeepL + Slack)

Who’s it for

Marketing or community teams that follow English-speaking creators but share updates with Japanese audiences; language learners who want JP summaries of newly released videos; internal comms teams curating industry channels for a JP workspace.

What it does

This workflow detects new YouTube uploads, retrieves full metadata, translates the title and description into Japanese using DeepL, and posts a formatted message to a Slack channel. It also skips non-English titles to avoid unnecessary translation (a sketch of such a check follows below).

How it works

・RSS watches a channel for new items.
・The YouTube API fetches the full snippet (title/description).
・Text is combined into a single payload and sent to DeepL.
・The translated result + original metadata is merged and posted to Slack.

Requirements

・YouTube OAuth (for reliable snippet retrieval)
・DeepL API key (Free or Pro)
・Slack OAuth

How to set up

・Duplicate this template.
・Open the Config (Set) node and fill in YT_CHANNEL_ID, TARGET_LANG, SLACK_CHANNEL.
・Connect credentials for YouTube, DeepL, and Slack (don’t hardcode API keys in HTTP nodes).
・Click Execute workflow and verify one sample post.

How to customize

・Change TARGET_LANG to any language supported by DeepL.
・Add filters (exclude Shorts, skip videos under N characters).
・Switch to Slack Blocks for richer formatting or thread replies.
・Add a fallback translator or retry logic on HTTP errors.

Notes & limits

DeepL Free and Pro have different endpoints/quotas and monthly character limits. YouTube and Slack also enforce rate limits. Keep credentials in n8n’s credential store; do not commit keys into templates. Rotate keys if you accidentally exposed them.
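The non-English skip could be implemented with a lightweight heuristic like the following Code-node sketch; this is an assumption about the approach, since the template's own check isn't shown:

```javascript
// Hypothetical language gate: skip titles that are mostly non-Latin
// (e.g., already Japanese), so DeepL is only called for English text.
const title = $json.title ?? '';

const asciiLetters = (title.match(/[A-Za-z]/g) || []).length;
const ratio = title.length > 0 ? asciiLetters / title.length : 0;

// Treat titles with a majority of Latin letters as English; tune as needed.
const looksEnglish = ratio > 0.5;

return looksEnglish ? [{ json: $json }] : [];
```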