by Daniel
Secure your n8n automations with this comprehensive template. It automates periodic backups to Telegram for instant access and enables flexible restores from Google Drive links or direct file uploads, ensuring quick recovery without data loss.

## What This Template Does

This dual-branch workflow handles full n8n instance backups and restores seamlessly. The backup arm runs every 3 days, fetching all workflows via the n8n API, aggregating them into a JSON array, converting the array to a text file, and sending it to Telegram for offsite storage and sharing. The restore arm supports two entry points: manual execution to pull a backup from Google Drive, or form-based upload for local files. It then parses the JSON, cleans workflows for compatibility, and loops to create missing workflows or update existing ones by name, processing in batches to respect API limits.

- Scheduled backups with Telegram delivery for easy stakeholder access
- Dual restore paths: Drive download or direct file upload via form
- Intelligent create-or-update logic with data sanitization to avoid conflicts
- Looped processing with existence checks and error continuation

## Prerequisites

- n8n instance with API enabled (self-hosted or cloud)
- Telegram account for bot setup
- Google Drive account (optional, for Drive-based restores)

## Required Credentials

### n8n API Setup
1. In n8n, navigate to Settings → n8n API
2. Enable the API and generate a new key
3. Add to n8n as the "n8n API" credential type, pasting the key in the API Key field

### Telegram API Setup
1. Message @BotFather on Telegram to create a new bot and get your token
2. Find your chat ID by messaging @userinfobot
3. Add to n8n as the "Telegram API" credential type, entering the Bot Token

### Google Drive OAuth2 API Setup
1. In Google Cloud Console, go to APIs & Services → Credentials
2. Create an OAuth 2.0 Client ID for a Web application, and enable the Drive API
3. Add the redirect URI: [your-n8n-instance-url]/rest/oauth2-credential/callback
4. Add to n8n as the "Google Drive OAuth2 API" credential type and authorize
## Configuration Steps

1. Import the workflow JSON into your n8n instance
2. Assign the n8n API, Telegram API, and Google Drive credentials to their nodes
3. Update the Telegram chat ID in the "Send Backup to Telegram" node
4. Set the Google Drive file ID in the "Download Backup from Drive" node (taken from the file URL)
5. Activate the workflow and test the backup by executing the Schedule node manually
6. Test the restore: run the manual trigger for Drive, or use the form for upload

## Use Cases

- Dev teams backing up staging workflows to Telegram for rapid production restores during deployments
- Solo automators uploading local backups via form to sync across devices after n8n migrations
- Agencies sharing client workflow archives via Drive links for secure, collaborative restores
- Educational setups scheduling exports to Telegram for student template distribution and recovery

## Troubleshooting

- **Backup file empty:** Verify that n8n API permissions include read access to workflows
- **Restore parse errors:** Check JSON validity in the backup file; adjust the Code node property reference if needed
- **API rate limits hit:** Increase the Wait node duration or reduce the batch size in the Loop
- **Form upload fails:** Ensure the file is valid JSON text; test with a small backup first
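The restore arm's create-or-update decision can be sketched as a Code-node-style function. This is a minimal sketch under assumptions: workflows are matched by `name` as the description says, and the stripped fields (`id`, `createdAt`, `updatedAt`) stand in for whatever read-only properties the sanitization step actually removes.

```javascript
// Decide, for each workflow in a backup file, whether to create it
// or update an existing one, matching by workflow name. `existing`
// is the list of workflows already on the target instance.
function planRestore(backupWorkflows, existing) {
  const byName = new Map(existing.map((wf) => [wf.name, wf.id]));
  return backupWorkflows.map((wf) => {
    // Strip read-only fields that create/update calls typically reject.
    const { id, createdAt, updatedAt, ...clean } = wf;
    return byName.has(wf.name)
      ? { action: "update", id: byName.get(wf.name), workflow: clean }
      : { action: "create", workflow: clean };
  });
}
```

In the workflow itself this branching is done by the existence-check nodes inside the loop; the sketch just shows the decision rule in one place.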
by Ms. Phuong Nguyen (phuongntn)
An AI Recruiter that screens, scores, and ranks candidates in minutes, directly inside n8n.

## Overview

An AI-powered recruiter workflow that compares multiple candidate CVs against a single Job Description (JD). It analyzes text content, calculates fit scores, identifies strengths and weaknesses, and provides automated recommendations.

## How it works

- **Webhook Trigger** → Upload one Job Description (JD) and multiple CVs (PDF or text)
- **File Detector** → Auto-identifies JD vs CV
- **Extract & Merge** → Reads text and builds the candidate dataset
- **AI Recruiter Agent** → Compares JD & CVs → returns Fit Score, Strengths, Weaknesses, and Recommendation
- **Output Node** → Sends structured JSON or a summary table for HR dashboards or a Chat UI

Example: Upload JD.pdf plus 3 candidate CVs and get an instant JSON report with the top match and recommendations.

## Requirements

- OpenAI or compatible AI Agent connection (no hardcoded API keys)
- Input files in PDF or text format (English or Vietnamese supported)
- n8n Cloud or Self-Hosted v1.50+ with AI Agent nodes enabled
- "OpenAI API Key or n8n AI Agent credential required"

## Customizing this workflow

- Swap the AI model with Gemini, Claude, or another LLM
- Add a Google Sheets export node to save results
- Connect to SAP HR or internal employee APIs
- Adjust the scoring logic or include additional attributes (experience, skills, etc.)

## Author

https://www.linkedin.com/in/nguyen-phuong-17a71a147/
Empowering HR through intelligent, data-driven recruitment.
by Automate With Marc
Gemini 3 Image & PDF Extractor (Google Drive → Gemini 3 → Summary)

Automatically summarize newly uploaded images or PDF reports using Google Gemini 3, triggered directly from a Google Drive folder. Perfect for anyone who needs fast AI-powered analysis of financial reports, charts, screenshots, or scanned documents.

Watch the full step-by-step video tutorial: https://www.youtube.com/watch?v=UuWYT_uXiw0

## What this template does

This workflow watches a Google Drive folder for new files and automatically:

### Detects newly uploaded files
- Uses the Google Drive Trigger
- Watches a specific folder for fileCreated events
- Filters by MIME type: image/png, image/webp, application/pdf

### Downloads the file automatically
Depending on the file type:
- Images → download via HTTP Request → send to Gemini 3 Vision
- PDFs → download via HTTP Request → extract content → send to Gemini 3

### Analyzes content using Gemini 3
Two separate processing lanes:

**Image Lane**
- The image is sent to Gemini 3 (Vision / Image Analyze)
- Extracts textual and visual meaning from charts, diagrams, or screenshots
- Passes structured output to an AI Analyst Agent
- The agent summarizes and highlights the top 3 findings

**PDF Lane**
- The PDF is downloaded
- Text is extracted using Extract From File
- Processed using Gemini 3 via the OpenRouter Chat Model
- The AI Analyst Agent summarizes charts/tables and extracts insights

## Why this workflow is useful

Save hours of manually reading PDFs, charts, and screenshots, and convert dense financial or operational documents into digestible insights. Great for:
- Financial analysts
- Operations teams
- Market researchers
- Content & reporting teams
- Anyone receiving frequent reports via Drive

## Requirements

Before using this template, you will need:
- Google Drive OAuth credential (for the Drive trigger and file download)
- Gemini 3 / PaLM or OpenRouter API key
- (Optional) Update the folder ID to your own Google Drive target folder

⚠️ No credentials are included in this template. Add them manually after importing it.
## Node Overview

**Google Drive Trigger**
- Watches a specific Drive folder for newly added files
- Provides metadata like webContentLink and MIME type

**Filter by Type (IF Node)**
- Routes files to the Image lane or PDF lane
- png or webp → Image
- pdf → PDF

**Image Processing Lane**
- Download Image (HTTP Request)
- Analyze Image (Gemini Vision)
- Analyzer Agent: summarizes findings and highlights actionable insights (powered by OpenRouter Gemini 3)

**PDF Processing Lane**
- Download PDF (HTTP Request)
- Extract From File → PDF
- Analyzer Agent (PDF): summarizes extracted chart/report information and highlights key takeaways

## Setup Guide

1. Import the template into your n8n workspace
2. Open the Google Drive Trigger, select your Drive OAuth credential, and replace the folder ID with your target folder
3. Open the Gemini 3 / OpenRouter AI model nodes and add your API credentials
4. Test by uploading a PNG/WebP chart screenshot and a multi-page PDF report
5. Check the execution to view the summary outputs

## Customization Ideas

- Add email delivery (send the summary to yourself daily)
- Save summaries into Google Sheets, Notion, Slack channels, or n8n Data Tables
- Add a second agent to convert summaries into weekly reports, PowerPoint slides, or Slack-ready bullet points
- Add classification logic: revenue reports, marketing analytics, product dashboards, financial charts

## Troubleshooting

- **Trigger not firing?** Confirm your Drive OAuth credential has read access to the folder.
- **Gemini errors?** Ensure your model ID matches your API provider: models/gemini-3-pro-preview or google/gemini-3-pro-preview
- **PDF extraction empty?** Check whether the file contains selectable text or only images. (You can add OCR if needed.)
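The IF-node routing described above boils down to a MIME-type switch. A minimal sketch (the lane names are illustrative, not node names from the template):

```javascript
// Route a file to a processing lane based on its MIME type,
// mirroring the Filter by Type (IF) node described above.
function routeByMime(mimeType) {
  if (mimeType === "image/png" || mimeType === "image/webp") return "image";
  if (mimeType === "application/pdf") return "pdf";
  return "skip"; // anything else is ignored by the workflow
}
```

If you extend the template to more formats (e.g. JPEG), this is the one decision point to widen.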
by Chris Pryce
## Overview

This workflow streamlines the process of setting up a chat-bot using the Signal Messenger API.

## What this is for

Chat-bot applications have become very popular on WhatsApp and Telegram. However, security-conscious people may be hesitant to connect their AI agents to these applications. Compared to WhatsApp and Telegram, the Signal messaging app is more secure and end-to-end encrypted by default. Partly because of this, it is more difficult to create a chat-bot for Signal; it is still possible, however, if you host your own Signal API endpoint.

This workflow requires the installation of a community-node package, plus some additional setup for the locally hosted Signal API endpoint. As such, it will only work with self-hosted instances of n8n. You may use any AI model you wish for this chat-bot, and connect different tools and APIs depending on your use-case.

## How to set up

### Step 1: Set up the REST API

Before implementing this workflow, you must set up a local Signal client REST API. This can be done using a Docker container based on this project: bbernhard/signal-cli-rest-api.

```yaml
version: "3"
services:
  signal-cli-rest-api:
    image: bbernhard/signal-cli-rest-api:latest
    environment:
      - MODE=normal # supported modes: json-rpc, native, normal
      #- AUTO_RECEIVE_SCHEDULE=0 22 * * * # enable this parameter on demand (see description below)
    ports:
      - "8080:8080" # map docker port 8080 to host port 8080
    volumes:
      # map the "signal-cli-config" folder on the host system into the container.
      # it contains the password and cryptographic keys created when a new number is registered.
      - "./signal-cli-config:/home/.local/share/signal-cli"
```

After starting the Docker container, you will be able to interact with a local Signal client over a REST API at http://localhost:8080 (by default; this setting can be modified in the docker-compose file).

### Step 2: Install the Node Package

This workflow requires the community-node package developed by ZBlaZe: n8n-nodes-signal-cli-rest-api.
Navigate to ++your-n8n-server-address/settings/community-nodes++, click the 'Install' button, and paste in the community-node package name ++n8n-nodes-signal-cli-rest-api++ to install this community node.

### Step 3: Register and Verify the Account

This last step requires a phone number. You may use your own phone number, a pre-paid SIM card, or (if you are a US resident) a free Google Voice digital phone number. An n8n web form has been created in this workflow to make headless setup easier.

1. In the Form nodes, replace the URL with the address of your local Signal REST API endpoint.
2. Open the web form and enter the phone number you are using to register your bot's Signal account.
3. Signal needs to verify you are human before registering an account. Visit this page to complete the captcha challenge. Then right-click the 'Open Signal' button and copy the link address. Paste this into the second form field and hit 'Submit'.
4. At this point you should receive a verification token as an SMS message to the phone number you are using. Copy this and paste it into the second web form.

Your bot's Signal account should now be set up. To use this account in n8n, add the REST API address and account number (phone number) as a new n8n credential.

### Step 4: Optional

For extra security it is recommended to restrict who can communicate with this chat-bot. In the 'If' workflow node, enter your own Signal account phone number. You may also provide a UUID, an identifier unique to your mobile device. You can find this by sending a test message to your bot's Signal account and copying it from the workflow execution data.
by Avkash Kakdiya
## How it works

This workflow runs daily to collect the latest funding round data from Crunchbase. It retrieves up to 100 recent funding events, including company, investors, funding amount, and industry details. The data is cleaned and filtered to include only rounds announced in the last 30 days. Finally, the results are saved into both Google Sheets for reporting and Airtable for structured database management.

## Step-by-step

### Trigger & Data Fetching
- **Schedule Trigger node** → Runs the workflow once a day.
- **HTTP Request node** → Calls the Crunchbase API to fetch the latest 100 funding rounds with relevant details.

### Data Processing
- **Code node** → Parses the raw API response into clean fields such as company name, funding type, funding amount, investors, industry, and Crunchbase URL.
- **Filter node** → Keeps only funding rounds from the last 30 days to ensure the dataset remains fresh and relevant.

### Storage & Outputs
- **Google Sheets node** → Appends or updates the filtered funding records in a Google Sheet for easy sharing and reporting.
- **Airtable node** → Stores the same records in Airtable for more structured, database-style organization and management.

## Why use this?

- Automates daily collection of startup funding data from Crunchbase.
- Keeps only the most recent and relevant records for faster insights.
- Ensures data is consistently stored in both Google Sheets and Airtable.
- Supports reporting, collaboration, and database management in one flow.
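The Code-node parsing and the 30-day Filter step can be sketched together as one function. This is a sketch only: the raw field names (`company_name`, `investment_type`, etc.) are illustrative stand-ins, not Crunchbase's actual response schema.

```javascript
// Flatten raw funding-round records into clean fields, then keep
// only rounds announced within the last 30 days.
function cleanAndFilter(rawRounds, now = new Date()) {
  const cutoff = new Date(now.getTime() - 30 * 24 * 60 * 60 * 1000);
  return rawRounds
    .map((r) => ({
      company: r.company_name,        // illustrative field names;
      fundingType: r.investment_type, // adapt to the real API response
      amountUsd: r.money_raised_usd,
      announcedOn: r.announced_on,
    }))
    .filter((r) => new Date(r.announcedOn) >= cutoff);
}
```

Passing `now` as a parameter (defaulting to the current time) keeps the cutoff logic testable without depending on the wall clock.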
by Natnail Getachew
## How it works

1. A new Google Form response triggers the workflow
2. Checks whether the employee was already onboarded (prevents duplicates)
3. Adds the user to the department-specific Slack channel
4. If in the Software department, grants GitHub repo access
5. Invites the user to Jira and creates an onboarding task
6. Updates the Google Sheet status to "Completed"

## Set up steps

Estimated setup time: 10-15 minutes

1. Connect Google Sheets (2 min) - Update the sheet ID in the trigger and update nodes
2. Configure Slack (3 min) - Add channel IDs and the admin user ID to the Code node config
3. Set up Jira (3 min) - Add project keys and component IDs to the Code node config
4. Configure GitHub (2 min) - Add the org name and repo names to the Code node config

Detailed setup instructions are included in the sticky notes within the workflow.
by Philflow
This n8n template lets you run prompts against 350+ LLM models and see exactly what each request costs, with real-time pricing from OpenRouter.

Use cases are many: compare costs across different models, plan your AI budget, optimize prompts for cost efficiency, or track expenses for client billing!

## Good to know

- OpenRouter charges a platform fee on top of model costs. See OpenRouter Pricing for details.
- You need an OpenRouter account with API credits. Free signup is available, with some free models included.
- Pricing data is fetched live from OpenRouter's API, so costs are always up to date.

## How it works

1. All available models are fetched from OpenRouter's API when you start.
2. You select a model and enter your prompt via the form (or just use the chat).
3. The prompt is sent to OpenRouter and the response is captured.
4. Token usage (input/output) is extracted from the response using a LangChain Code node.
5. Real-time pricing for your selected model is fetched from OpenRouter.
6. The exact cost is calculated and displayed alongside your AI response.

## How to use

- **Chat interface:** Quick testing - just type a prompt and get the response with costs.
- **Form interface:** Select from all available models via the dropdown, enter your prompt, and get a detailed cost breakdown.
- Click "Show Details" on the result form to see the full breakdown (input tokens, output tokens, cost per type).

## Requirements

- OpenRouter account with API key (Get one here)

## Customising this workflow

- Add a database node to log all requests and costs over time
- Connect to Google Sheets for cost tracking and reporting
- Extend with LLM-as-Judge evaluation to also check response quality
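The cost calculation in the final step amounts to multiplying token counts by per-token prices. A minimal sketch, assuming pricing fields expressed as strings in USD per token (which is how OpenRouter's model list reports them, e.g. `pricing.prompt = "0.000002"`):

```javascript
// Compute request cost from token usage and per-token pricing.
// `usage` follows the chat-completion response's usage object;
// `pricing` follows the /models endpoint's pricing object.
function requestCost(usage, pricing) {
  const inputCost = usage.prompt_tokens * parseFloat(pricing.prompt);
  const outputCost = usage.completion_tokens * parseFloat(pricing.completion);
  return { inputCost, outputCost, totalCost: inputCost + outputCost };
}
```

Note this covers model cost only; OpenRouter's platform fee mentioned above is charged on top and is not included here.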
by Niels Berkhout
## How it works

This template uses a LinkedIn user profile in combination with your detailed Ideal Customer Profile (ICP) to create a score, including reasoning and outreach messages. It is manually triggered and uses a Google Sheet as the entry point; the sheet contains rows holding only LinkedIn profile URLs.

The SourceGeek node is then triggered and the complete profile info is retrieved. That info is sent to an AI Agent along with a long written description of the Ideal Customer Profile. The AI Agent processes all that data and returns three values:

1. An ICP rating (between 0 and 100)
2. The ICP reasoning: where does the score come from?
3. A 1st, 2nd, and 3rd outreach message which you can use later on

After that, the original Google Sheet row is updated with the data created by the AI Agent.

## How to use

1. Populate a Google Sheet with LinkedIn profile URLs of potential customers
2. Let SourceGeek fetch all their data from LinkedIn and enrich it with soft skills and much more
3. Describe your ICP in detail and let the AI Agent determine the score of the profile
4. Update the initial Google Sheet with the ICP score and the reasoning behind it

## Requirements

- A Google Sheet with LinkedIn profile URLs
- The verified SourceGeek node
by Harshal Patil
Automated error monitoring and reporting system using data tables

This template helps you monitor workflow failures by automatically logging every error to a data table, then sending periodic summaries via email, Slack, Microsoft Teams, or Discord, so you catch issues before they impact your operations.

## What This Workflow Does

The template uses two synchronized workflows to create a complete error monitoring system:

1. **Error Capture Workflow** - Uses n8n's native error handling to intercept every workflow failure, extract key details (workflow name, error message, timestamp, node information, execution ID), and store them in your data table or database
2. **Report Scheduler Workflow** - Runs on your configured schedule (daily, weekly, or custom) to query stored errors, aggregate insights, and send formatted summaries through your notification channel

## How to Use It

Capture errors from all workflows → store them in one centralized table → get daily/weekly summaries in Slack, email, or Teams.

## Key Features

- **Zero-touch error logging** - No modifications needed to existing workflows; errors are captured automatically
- **Flexible storage** - Configure any data table, PostgreSQL, MySQL, MongoDB, or cloud database as your error repository
- **Multiple notification channels** - Send reports via email, Slack, Microsoft Teams, Discord, or custom HTTP endpoints
- **Customizable schedules** - Daily, weekly, or custom-interval reporting to match your team's needs
- **Rich error context** - Every logged error includes the workflow name, error message, affected node, timestamp, and execution ID for quick troubleshooting
- **Historical database** - Build a searchable error archive for pattern analysis and long-term debugging

## Use Cases

- **Monitor production workflows** - DevOps and platform teams tracking system health across multiple automated processes
- **Debug ETL failures** - Data engineers identifying where pipelines break and why
- **Oversee complex automation** - Teams managing dozens of interconnected workflows without manual checks
- **Stay informed as a solo developer** - Get notified of issues without constantly logging into n8n

## Prerequisites

- n8n instance (self-hosted or n8n Cloud)
- Data storage (PostgreSQL, MySQL, MongoDB, n8n's built-in tables, or similar)
- Notification service configured (Gmail, Slack, Teams, Discord, or custom webhook)

## Configuration Steps

1. **Connect your data storage** - Point the error capture workflow to your chosen database or data table
2. **Enable error monitoring** - Activate the error handling trigger for the workflows you want to monitor
3. **Set the reporting schedule** - Choose daily, weekly, or custom intervals for your summary reports
4. **Configure notifications** - Add your Slack webhook, email address, Teams channel, or Discord endpoint
5. **Customize the report format** - Optionally adjust which error metrics and insights appear in summaries

## Customization Ideas

- Add error severity levels (critical, warning, info) to prioritize failures
- Set up real-time critical error alerts in addition to scheduled reports
- Create workflow-specific error thresholds and escalation rules
- Integrate with PagerDuty or Opsgenie for incident management
- Add visualizations or charts to your error summaries
- Implement automatic retry logic for specific error types

## Sample Error Summary Output

Your reports will include:
- Total errors in the reporting period
- Error count breakdown by workflow
- Most frequently occurring error types
- Error timeline and trends
- Direct links to failed executions for quick debugging

## Maintenance Tips

- Review error patterns monthly to identify workflows that need optimization
- Archive or delete old error logs periodically to keep your database performant
- Adjust the reporting frequency as your workflow volume grows
- Update notification recipients when team members join or leave
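The Report Scheduler's aggregation step can be sketched as a pure function over logged error rows. The field names follow the error details listed above (workflow name, error message); the helper itself is illustrative, not the template's actual code.

```javascript
// Aggregate logged errors into summary metrics: total count,
// per-workflow breakdown, and the most frequent error message.
function summarizeErrors(rows) {
  const byWorkflow = {};
  const byMessage = {};
  for (const { workflowName, errorMessage } of rows) {
    byWorkflow[workflowName] = (byWorkflow[workflowName] || 0) + 1;
    byMessage[errorMessage] = (byMessage[errorMessage] || 0) + 1;
  }
  // Pick the error message with the highest count (null if no rows).
  const topError = Object.entries(byMessage)
    .sort((a, b) => b[1] - a[1])[0]?.[0] ?? null;
  return { total: rows.length, byWorkflow, topError };
}
```

The resulting object maps directly onto the "Sample Error Summary Output" fields: total errors, breakdown by workflow, and most frequent error type.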
by BytezTech
Run automated daily standups using Slack, Notion, and Redis

## Overview

This workflow fully automates your team's daily standup process using Slack for communication, Notion for structured data storage, and Redis for real-time session management. It automatically sends standup questions to active team members, collects and stores their responses, manages conversation sessions, and generates structured summary reports for managers.

Morning and evening standups run on schedule without manual intervention. Redis ensures fast and reliable session tracking, prevents duplicate standups, and maintains conversation state. All responses are securely stored in Notion for long-term reporting and tracking. This workflow eliminates manual follow-ups, improves reporting consistency, and gives managers full visibility into team progress, blockers, and attendance.

## How it works

This workflow runs automatically based on configured schedules.

### Morning standup
1. Fetches active team members from the Notion database
2. Creates standup sessions in Redis
3. Sends standup questions to each team member via Slack direct message
4. Stores responses in Notion
5. Tracks session state using Redis

### Automated reports
The workflow automatically generates:
- A morning summary report showing attendance, responses, and blockers
- An evening summary report showing accomplishments, completion status, and help requests

Both reports are automatically sent to the Slack admin channel. Redis ensures session tracking and prevents duplicate standups.

## Setup steps

1. Import this workflow into n8n
2. Connect your Slack credentials
3. Connect your Notion credentials
4. Connect your Redis credentials
5. Configure your Notion database IDs
6. Configure your Slack admin channel ID
7. Activate the workflow

The workflow will run automatically based on the configured schedule.
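The duplicate-standup guard described above boils down to one session key per user per day. In the sketch below, Redis is stood in for by any key-value store with `get`/`set` (a `Map` works for testing); in the actual workflow a Redis node performs the equivalent lookup, and the key format shown is an assumption for illustration.

```javascript
// Guard against duplicate standups: one session key per user per day.
// `store` stands in for Redis (anything with get/set works here).
function startSession(store, userId, date) {
  const key = `standup:${userId}:${date}`; // e.g. standup:U123:2024-06-30
  if (store.get(key)) return { started: false, key }; // already ran today
  store.set(key, { state: "awaiting_answers", startedAt: Date.now() });
  return { started: true, key };
}
```

With Redis you would additionally set an expiry on the key (e.g. 24 hours) so old session keys clean themselves up.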
## Features

### Automated standup management
- Automatically sends standup questions
- Tracks team responses
- Stores responses securely in Notion
- Prevents duplicate standup sessions

### Automated reporting
- Attendance tracking
- Task completion tracking
- Blocker detection
- Missing response detection
- Automatic Slack summary reports

## Requirements

You need the following accounts:
- n8n
- Slack
- Notion
- Redis

## Benefits

- Fully automated standup system
- No manual follow-ups required
- Automatic attendance tracking
- Identifies blockers early
- Improves team visibility
- Saves management time

## Author

BytezTech Pvt Ltd
by deAPI Team
## Who is this for?

- E-commerce store owners using Shopify
- Product managers who need consistent product imagery
- Marketing teams looking to automate visual content creation
- Dropshipping businesses needing quick product photos

## What problem does this solve?

Creating professional product images is time-consuming and expensive. This workflow eliminates the need for manual photo editing by automatically generating styled hero images and transparent PNGs ready for your product catalog.

## What this workflow does

1. Triggers when a new product is created in Shopify
2. Extracts the product title, description, category, and tags
3. An AI Agent analyzes the product data and uses the deAPI Prompt Booster tool to create an optimized image generation prompt
4. Generates a professional product image using deAPI
5. Removes the background to create a transparent PNG version
6. Updates the Shopify product with both images (hero + transparent)

## Setup Requirements

- **n8n instance** (self-hosted or n8n Cloud)
- deAPI account for image generation and prompt boosting
- Shopify scopes: read_products, write_products, write_files
- Anthropic account for the AI Agent

## Installing the deAPI Node

- **n8n Cloud:** Go to Settings → Community Nodes and toggle the "Verified Community Nodes" option
- **Self-hosted:** Go to Settings → Community Nodes and install n8n-nodes-deapi

## Configuration

1. Add your deAPI credentials (API key + webhook secret)
2. Add your Shopify credentials (shop subdomain + app secret key + access token)
3. Add your Anthropic credentials (API key)
4. Ensure your n8n instance has an HTTPS webhook URL
5. Set up the webhook URL for the product creation event in Shopify

## How to customize this workflow

- **Change the AI model:** Swap Anthropic for any other LLM provider
- **Adjust the prompt:** Modify the AI Agent system message for different photography styles
- **Add more processing:** Insert upscaling (e.g. deAPI Upscale an Image) or additional image transformations
- **Different e-commerce platform:** Replace the Shopify nodes with WooCommerce, BigCommerce, etc.
- **Add human review:** Insert a Wait node + Slack notification before uploading to Shopify
by Rahul Joshi
## Description

This workflow reads your portfolio from Google Sheets, fetches market data, evaluates key risk factors such as sector concentration, volatility, and stock correlation, and generates an easy-to-understand risk summary using AI. When meaningful risk is detected, the workflow sends a structured alert to Slack and stores a weekly risk snapshot in Google Sheets for tracking trends over time. This automation is designed to help investors, analysts, and teams understand portfolio risk clearly without manual analysis or constant monitoring.

## What This Workflow Does

1. Runs automatically on a weekly schedule using a Schedule Trigger
2. Reads portfolio holdings from Google Sheets
3. Validates portfolio data to ensure required fields are present
4. Processes stocks in batches to avoid API rate limits
5. Fetches historical price data from Alpha Vantage
6. Normalizes market data into a consistent structure
7. Calculates portfolio-level risk metrics
8. Generates a single portfolio risk score and risk flag
9. Uses AI to explain detected risks in simple language
10. Sends a clear weekly risk alert to Slack
11. Stores a summarized weekly risk snapshot in Google Sheets
12. Handles invalid data safely to avoid noisy or misleading alerts

## Key Benefits

- Understand portfolio risk at a glance
- Detect sector overexposure and diversification issues early
- Receive only meaningful, high-signal alerts
- Simple AI explanations instead of raw numbers
- Automatic weekly risk history stored in Google Sheets
- No financial advice, only analytical insights
- Designed for reuse and easy customization

## Features

- Weekly automated portfolio risk analysis
- Google Sheets-based portfolio input
- Alpha Vantage market data integration
- Batch processing with rate-limit protection
- Sector, volatility, and correlation risk analysis
- Portfolio risk scoring system
- AI-generated risk explanations
- Slack alerts for detected risk
- Google Sheets storage for historical tracking

## Requirements

- Google Sheets credentials
- Alpha Vantage API key
- Slack credentials (OAuth or webhook)
- AI provider API key

## Target Audience

- Long-term and active investors
- Portfolio and risk analysts
- Finance and operations teams
- Fintech and investment platforms
- Automation engineers building financial workflows
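Two of the risk factors named above have compact textbook definitions: return volatility (standard deviation of period-over-period returns) and sector concentration (a Herfindahl-style sum of squared sector weights). The template does not specify its exact formulas, so the following Code-node-style sketch uses these standard stand-ins:

```javascript
// Volatility: standard deviation of period-over-period returns
// computed from a price series.
function volatility(prices) {
  const returns = prices.slice(1).map((p, i) => p / prices[i] - 1);
  const mean = returns.reduce((a, r) => a + r, 0) / returns.length;
  const variance =
    returns.reduce((a, r) => a + (r - mean) ** 2, 0) / returns.length;
  return Math.sqrt(variance);
}

// Sector concentration: Herfindahl index over sector weights.
// 1.0 = everything in one sector; lower = more diversified.
function sectorConcentration(holdings) {
  const total = holdings.reduce((a, h) => a + h.value, 0);
  const bySector = {};
  for (const h of holdings) {
    bySector[h.sector] = (bySector[h.sector] || 0) + h.value;
  }
  return Object.values(bySector).reduce((a, v) => a + (v / total) ** 2, 0);
}
```

A portfolio risk score could then combine these (and a correlation measure) into the single flagged value the workflow reports; how they are weighted is a design choice left to the template.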