by Paul
**Workflow Description and Setup Guide**

This workflow provides comprehensive AI-driven stock analysis, generating detailed deep reports by combining vector-based data retrieval with API integrations for precise financial analytics using Danelfin scoring and strategy.

**Overview**

This workflow automates stock analysis using:
- **AI-driven scoring and sector insights** (Danelfin configuration)
- **Vector-based data retrieval** (Supabase)
- **Deep analytical reports**

**Key Components and API Integrations**

1. **Danelfin AI Stock Analysis API** provides advanced stock scores, rankings, and sector insights. Endpoints:
   - `/ranking`: stock rankings and scores
   - `/sectors`: sector information
   - `/industries`: industry details
2. **Supabase Vector Store** provides vector embedding storage and quick data retrieval.

**Workflow Steps**

*Initialization*
- **Chat Trigger:** activates when a message requesting stock analysis is received.

*Stock Analysis and Deep Reporting*
- **Main AI Agent:** processes the query, retrieves relevant data from Danelfin, and generates deep analytical reports.
- **Supabase Vector Store:** facilitates efficient data retrieval using embeddings.

*Reporting*
- **Markdown Conversion:** transforms the analysis into readable HTML format.
- **Email Reporting:** sends detailed reports via Gmail.

**Setup Instructions**

*Prerequisites*
- Obtain API keys for Supabase and Danelfin.

*Configuration Steps*
1. Set API keys:
   - Supabase API credentials for vector storage.
   - Danelfin API credentials for stock analysis.
2. Gmail API credentials: configure the Gmail node for sending reports.

*Running the Workflow*
- Triggered automatically by chat messages requesting analysis.
- Reports are sent directly via email.

This setup ensures precise, AI-driven stock analysis delivered clearly through automated deep reporting.
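As an illustration of how the agent might call the `/ranking` endpoint, here is a minimal sketch of building the request URL. The base URL and parameter names are assumptions, not confirmed by this template; check the Danelfin API documentation and adjust to your account's actual endpoint and authentication header.

```javascript
// Hypothetical helper for the Danelfin /ranking call made by the Main AI Agent.
// Base URL and query parameter names are assumed for illustration only.
function buildRankingUrl(baseUrl, params) {
  const query = new URLSearchParams(params).toString();
  return `${baseUrl}/ranking?${query}`;
}

const url = buildRankingUrl('https://apirest.danelfin.com', {
  date: '2025-01-15', // trading day to rank (assumed parameter name)
  market: 'usa',      // market universe (assumed parameter name)
});
console.log(url);
```

In the workflow itself this request would be issued by an HTTP Request node with your Danelfin API key attached as a credential.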
by Giovanni Beggiato
**How it works:**
- Detects new unread Gmail messages
- Extracts the sender name for personalized replies
- Classifies the email into one of four categories
- Applies the correct Gmail label and either sends an auto-reply, creates a draft, or logs the contact in Google Sheets

**Setup steps:**
- Takes about 5–10 minutes to connect accounts and set labels
- Connect Gmail, OpenAI, and optional Google Sheets in n8n
- Add label IDs, Google Sheet ID, sheet name, and phone number in Set → Config
- Create four Gmail labels for the categories
- Full setup instructions are kept in the sticky notes inside the workflow
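The label-and-action step can be pictured as a small routing table. The category names, label IDs, and action mapping below are hypothetical placeholders; the real values live in the workflow's Set → Config node and its four Gmail labels.

```javascript
// Hypothetical routing for the four categories. Label IDs and category names
// are placeholders — substitute the values from your Set → Config node.
const CONFIG = {
  labels: { sales: 'Label_1', support: 'Label_2', spam: 'Label_3', other: 'Label_4' },
};

function routeEmail(category) {
  switch (category) {
    case 'sales':   return { label: CONFIG.labels.sales,   action: 'draft' };      // human reviews before sending
    case 'support': return { label: CONFIG.labels.support, action: 'auto-reply' }; // immediate acknowledgement
    case 'spam':    return { label: CONFIG.labels.spam,    action: 'none' };
    default:        return { label: CONFIG.labels.other,   action: 'log-to-sheet' };
  }
}
```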
by Firecrawl
**What this does**

Receives a URL via webhook, uses Firecrawl to scrape the page into clean markdown, and stores it as vector embeddings in Pinecone: a visual, self-hosted ingestion pipeline for RAG knowledge bases. Adding a new source is as simple as sending a URL. The second part of the workflow exposes a chat interface where an AI Agent queries the stored knowledge base to answer questions, with Cohere reranking for better retrieval quality.

**How it works**

*Part 1: Ingestion Pipeline*
1. **Webhook** receives a POST request with a `url` field
2. **Verify URL** validates and normalizes the domain, returning a 422 error if invalid
3. **Firecrawl /scrape** fetches the page and converts it to clean markdown
4. **Embeddings OpenAI** generates 1536-dimensional vector embeddings from the scraped content
5. **Default Data Loader** attaches the source URL as metadata
6. **Pinecone Vector Store** inserts the content and embeddings into the index
7. **Respond to Webhook** confirms how many items were added

*Part 2: RAG Chat Agent*
1. **Chat trigger** receives a user question
2. **AI Agent** (OpenRouter / Claude Sonnet) queries the Pinecone vector store
3. **Cohere Reranker** improves retrieval quality before the agent responds
4. The agent answers based solely on the ingested knowledge base

**Services used**
- Firecrawl
- Pinecone
- OpenAI Embeddings
- OpenRouter (Claude Sonnet)
- Cohere Reranker

**Webhook usage**

Send a POST request to the webhook URL:

```shell
curl -X POST https://your-n8n-instance/webhook/your-id \
  -H "Content-Type: application/json" \
  -d '{"url": "firecrawl.dev"}'
```

**Pinecone setup**

Your Pinecone index must be configured with 1536 dimensions to match the OpenAI `text-embedding-3-small` model output. See the sticky note inside the workflow for the exact index settings.

**Requirements**
- Firecrawl API key
- OpenAI API key (for embeddings)
- OpenRouter API key (for the chat agent)
- Cohere API key (for reranking)
- Pinecone account with a properly configured index
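The "Verify URL" step above can be sketched as a small normalization function: prepend a scheme to bare domains like `firecrawl.dev`, then reject anything that still fails to parse with a 422. The exact rules in the template may differ; this is only an illustration of the idea.

```javascript
// Minimal sketch of URL validation/normalization, assuming bare domains are
// allowed and upgraded to https. Not the template's exact implementation.
function verifyUrl(input) {
  const raw = (input || '').trim();
  const candidate = /^https?:\/\//i.test(raw) ? raw : `https://${raw}`;
  try {
    const url = new URL(candidate);
    if (!url.hostname.includes('.')) throw new Error('no TLD');
    return { ok: true, url: url.href };
  } catch {
    return { ok: false, status: 422, error: 'Invalid URL' };
  }
}
```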
by Oneclick AI Squad
This automated workflow monitors your website's keyword rankings daily and sends instant alerts to your team when significant ranking drops occur. It fetches current ranking positions, compares them with historical data, and triggers notifications through Slack and email when keywords drop beyond your defined threshold.

**Good to know**
- The workflow uses SERP API for accurate ranking data; API costs apply based on your usage volume
- Ranking checks are performed daily to avoid overwhelming search engines with requests
- The system tracks ranking changes over time and maintains historical data for trend analysis
- Slack integration requires workspace permissions and proper bot configuration
- False positives may occur due to personalized search results or data center variations

**How it works**
1. **Daily SEO Check Trigger** initiates the workflow on a scheduled basis
2. **Get Keywords Database** retrieves your keyword list and current ranking data
3. **Filter Active Keywords Only** processes only keywords marked as active for monitoring
4. **Fetch Google Rankings via SERP API** gets current ranking positions for each keyword
5. **Wait For Response** waits for the SERP API to return the current ranking positions
6. **Parse Rankings & Detect Changes** compares new rankings with historical data and identifies significant drops
7. **Filter Significant Ranking Drops** isolates keywords that dropped beyond your threshold (e.g., 5+ positions)
8. **Send Slack Ranking Alert** notifies your team channel about ranking drops
9. **Send Email Ranking Alert** sends detailed email reports to stakeholders
10. **Update Rankings in Google Sheet** saves new ranking data for historical tracking
11. **Generate SEO Monitoring Summary** creates a comprehensive report of all ranking changes

**How to use**
1. Import the workflow into n8n and configure your SERP API credentials
2. Set up your Google Sheet with the required keyword database structure
3. Configure the Slack webhook URL and email SMTP settings
4. Define your ranking drop threshold (recommended: 5+ position drops)
5. Test the workflow with a small keyword set before full deployment
6. Schedule the workflow to run daily during off-peak hours

**Requirements**
- **SERP API account** with sufficient credits for daily keyword checks
- **Google Sheets access** for keyword database and ranking storage
- **Slack workspace** with webhook permissions for team notifications
- **Email service** (SMTP or API) for stakeholder alerts
- **Keywords database** properly formatted in Google Sheets

**Database/Sheet Columns**

Required Google Sheet: "Keywords Database". Create a Google Sheet with the following columns:

| Column Name | Description | Example |
|-------------|-------------|---------|
| keyword | Target keyword to monitor | "best seo tools" |
| domain | Your website domain | "yourwebsite.com" |
| current_rank | Latest ranking position | 5 |
| previous_rank | Previous day's ranking | 3 |
| status | Monitoring status | "active" |
| target_url | Expected ranking URL | "/best-seo-tools-guide" |
| search_volume | Monthly search volume | 1200 |
| difficulty | Keyword difficulty score | 65 |
| date_added | When keyword was added | "2025-01-15" |
| last_checked | Last monitoring date | "2025-07-30" |
| drop_threshold | Custom drop alert threshold | 5 |
| category | Keyword grouping | "Product Pages" |

**Customising this workflow**
- **Modify ranking thresholds** in the "Filter Significant Ranking Drops" node to adjust sensitivity (e.g., 3+ positions vs 10+ positions)
- **Add competitor monitoring** by duplicating the SERP API node and tracking competitor rankings for the same keywords
- **Customize alert messages** in Slack and email nodes to include your brand voice and specific stakeholder information
- **Extend to multiple search engines** by adding Bing or Yahoo ranking checks alongside Google
- **Implement ranking improvement alerts** to celebrate when keywords move up significantly
- **Add mobile vs desktop tracking** by configuring separate SERP API calls for different device types
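The "Parse Rankings & Detect Changes" step above can be sketched as a simple comparison over rows shaped like the Google Sheet columns. This is a minimal illustration, assuming a higher rank number means a worse position and that `drop_threshold` falls back to a global default when empty.

```javascript
// Minimal sketch of drop detection over sheet-shaped rows. Field names match
// the "Keywords Database" columns; the logic itself is illustrative.
const DEFAULT_THRESHOLD = 5;

function detectDrops(rows) {
  return rows
    .map((row) => ({
      ...row,
      change: row.current_rank - row.previous_rank, // positive = dropped (rank number grew)
    }))
    .filter((row) => row.change >= (row.drop_threshold || DEFAULT_THRESHOLD));
}

const drops = detectDrops([
  { keyword: 'best seo tools', previous_rank: 3, current_rank: 9, drop_threshold: 5 },
  { keyword: 'seo audit', previous_rank: 4, current_rank: 6, drop_threshold: 5 },
]);
// Only "best seo tools" dropped by 5+ positions
```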
by Oneclick AI Squad
This automated n8n workflow monitors real-time cryptocurrency prices using the CoinGecko API and sends smart alerts when price conditions are met. It supports multi-coin tracking, dynamic conditions, and instant notifications via Email, Telegram, and Discord.

**Good to Know**
- Reads crypto watchlist data from Google Sheets.
- Monitors prices at defined intervals (24/7 monitoring).
- Handles upper and lower price limits with direction-based alerts (above, below, both).
- Implements cooldown logic to avoid duplicate alerts.
- Updates the last alert price and timestamp in Google Sheets.
- Supports multiple alert channels: Email, Telegram, Discord.
- Uses the CoinGecko API for price data (free tier supported).

**How It Works**
1. **24/7 Crypto Trigger** – runs every minute (or a custom interval) to check the latest prices.
2. **Read Crypto Watchlist** – fetches symbols and conditions from Google Sheets.
3. **Parse Crypto Data** – converts Google Sheet data into structured JSON.
4. **Fetch Live Crypto Price** – uses the CoinGecko API to get the latest market price for each coin.
5. **Smart Crypto Alert Logic** – compares the live price with upper/lower limits and evaluates conditions:
   - Above – trigger an alert if price > upper_limit.
   - Below – trigger an alert if price < lower_limit.
   - Both – trigger an alert if either condition is met.
   - Implements cooldown_minutes to prevent repeated alerts.
6. **Check Crypto Alert Conditions** – validates alerts before sending notifications.
7. **Send Crypto Email Alert** – sends an email alert if the condition is true.
8. **Send Telegram Crypto Alert** – sends a Telegram alert.
9. **Send Discord Crypto Alert** – sends a Discord alert.
10. **Update Crypto Alert History** – updates last_alert_price and last_alert_time in the Google Sheet.
11. **Crypto Alert Status Check** – ensures the alert process completed successfully.
12. **Success Notification** – sends a confirmation message on success.
13. **Error Notification** – sends an error alert if something fails.

**Google Sheet Columns (A–G)**

| Column | Description |
| ------ | ---------------------------------- |
| A | symbol (BTC, ETH, SOL, etc.) |
| B | upper_limit (e.g., 45000) |
| C | lower_limit (e.g., 40000) |
| D | direction (both / above / below) |
| E | cooldown_minutes (e.g., 10) |
| F | last_alert_price (auto-updated) |
| G | last_alert_time (auto-updated) |

**How to Use**
1. Import the workflow into n8n.
2. Configure Google Sheets credentials and link your watchlist sheet.
3. Add your CoinGecko API endpoint in the Fetch Price node (free tier).
4. Set up Email, Telegram, and Discord credentials for notifications.
5. Test with sample data, e.g., BTC, upper_limit=45000, lower_limit=40000, direction=both.
6. Execute the workflow and monitor alerts.

**Requirements**
- n8n environment with execution permissions.
- Google Sheets integration (with API credentials).
- CoinGecko API (free tier supported).
- Notification channels:
  - Email (SMTP settings in n8n).
  - Telegram Bot Token.
  - Discord Webhook URL.

**Customizing This Workflow**
- Add more coins in the Google Sheet.
- Modify alert conditions (e.g., percentage change, moving averages).
- Add SMS or WhatsApp notifications.
- Integrate with Slack or Microsoft Teams.
- Use AI-based price predictions for smarter alerts.
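The "Smart Crypto Alert Logic" step above can be sketched as a single predicate over a watchlist row (columns A–G). This is a minimal illustration assuming the row shape described in the table, not the template's exact Code-node implementation.

```javascript
// Minimal sketch of the alert decision, assuming sheet-shaped rows:
// { symbol, upper_limit, lower_limit, direction, cooldown_minutes, last_alert_time }.
function shouldAlert(row, livePrice, now = Date.now()) {
  // Respect the cooldown window since the last alert for this symbol
  if (row.last_alert_time) {
    const elapsedMin = (now - Date.parse(row.last_alert_time)) / 60000;
    if (elapsedMin < row.cooldown_minutes) return false;
  }
  const above = livePrice > row.upper_limit;
  const below = livePrice < row.lower_limit;
  if (row.direction === 'above') return above;
  if (row.direction === 'below') return below;
  return above || below; // direction === 'both'
}
```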
by ε―³η°γζ¦
This workflow automates the entire process of running a Print-on-Demand (POD) business by combining market trend analysis with autonomous AI design and quality control. It acts as a virtual product team that researches, designs, vets, and publishes new products to your store every week.

**Who is it for?**

This template is ideal for e-commerce entrepreneurs, content creators, and print-on-demand store owners who want to scale their merchandise inventory without spending hours on design and market research.

**What it does**
- **Market Research:** fetches real-time search data from Google Trends and customer preference data from Typeform.
- **AI Design:** uses OpenAI (GPT-4o) to brainstorm t-shirt concepts based on the gathered trends, then generates high-quality vector-style images using Replicate (Flux/Stable Diffusion).
- **Quality Control:** a "Vision AI" agent analyzes the generated image, rates it on a scale of 1-10, and filters out any design scoring below 7.
- **Dynamic Pricing & Publishing:** automatically calculates a premium price for higher-rated designs and publishes the product directly to your Printify store.
- **Logging:** saves the product details to Airtable for your records.

**How to set up**
1. **Configure Credentials:** open the "Workflow Configuration" node and replace the placeholder values with your API keys for OpenAI, Replicate, Printify, and Typeform.
2. **Set Printify Details:** in the "Workflow Configuration" node, add your Shop ID. In the "Publish to Printify" node, update the blueprint_id (the specific t-shirt model, e.g., Bella+Canvas 3001) and print_provider_id.
3. **Airtable Setup:** create a table with columns for Title, Description, Price, Quality Score, and Image URL, then map the IDs in the Airtable node.

**Requirements**
- **n8n:** Cloud or self-hosted instance.
- **API Keys:** OpenAI (with GPT-4o access), Replicate, Printify, Typeform, and Airtable.
- **Printify Account:** a connected store (e.g., Shopify, Etsy, or Pop-up).

**How to customize**
- **Prompt Engineering:** modify the "Chief Designer AI" system prompt to change the artistic style (e.g., from "vector" to "pixel art" or "vintage").
- **Pricing Logic:** adjust the JavaScript in the "Dynamic Pricing Calculator" to change your base margins or markup rules.
- **Schedule:** change the "Weekly Schedule Trigger" to run daily or monthly depending on your volume needs.
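To illustrate the kind of logic the "Dynamic Pricing Calculator" contains, here is a hypothetical sketch: designs that pass QC (score ≥ 7) get a premium added per point above the threshold. The base price and premium step below are placeholders, not the template's actual numbers.

```javascript
// Hypothetical pricing rule: base price plus a premium per quality point
// above the QC threshold of 7. Numbers are illustrative placeholders.
function calculatePrice(qualityScore, basePrice = 19.99) {
  const premium = Math.max(0, qualityScore - 7) * 2; // $2 per point above 7 (assumed)
  return Math.round((basePrice + premium) * 100) / 100;
}
```

Adjusting the markup rule here is exactly the kind of change the "Pricing Logic" customization point describes.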
by Rahul Joshi
**Description**

Turn raw marketing data into actionable insights with this n8n Source/UTM Attribution and Reporting workflow! It automatically aggregates lead submissions, calculates Cost Per Lead (CPL) per channel, and generates AI-powered weekly attribution reports, delivered straight to your inbox in a professional HTML format.

**What This Template Does**
- Runs hourly to process new lead submissions
- Aggregates leads by source (Instagram, LinkedIn, Google Ads, etc.)
- Calculates key metrics like Cost Per Lead (CPL)
- Uses AI to generate executive-ready HTML reports
- Highlights top-performing sources and growth opportunities
- Sends polished reports via Gmail automatically

**Prerequisites**
- Google Sheets with lead submission data
- Google Forms (or similar) as the data input source
- n8n instance (self-hosted or cloud)
- Azure OpenAI (GPT-4o-mini) API key for AI-powered reporting
- Gmail API credentials for automated report delivery

**Step-by-Step Setup**
1. Trigger the workflow hourly with the n8n Scheduler.
2. Fetch new lead submissions from Google Sheets.
3. Aggregate and group data by Source/UTM parameters.
4. Calculate CPL using spend + lead count per channel.
5. Standardize column names for consistent reporting.
6. Send raw + aggregated data to Azure OpenAI for report generation.
7. Format into a professional HTML report (with insights & recommendations).
8. Send the report via the Gmail node to stakeholders.

**Customization Ideas**
- Replace Gmail with Slack/Teams notifications for real-time sharing.
- Add visual charts (Google Data Studio / Looker) for more analytics.
- Use additional UTM fields (campaign, adgroup, creative) for deeper granularity.
- Extend reporting to include ROI and ROAS calculations.

**Key Benefits**
- Hands-free attribution tracking and analysis
- Accurate CPL metrics per channel
- AI-generated reports with actionable insights
- Saves time vs. manual data crunching
- Weekly reports ensure your marketing strategy stays optimized

**Perfect For**
- Marketing teams managing multi-channel campaigns
- Agencies providing client attribution reports
- Business owners optimizing ad spend efficiency
- Growth teams tracking lead quality by source
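The CPL step (spend ÷ lead count per channel) can be sketched as a small aggregation. The spend figures below are hypothetical inputs; in the workflow they would come from your Google Sheet.

```javascript
// Minimal sketch of CPL aggregation: count leads per source, then divide
// channel spend by lead count. Inputs are illustrative.
function costPerLead(leads, spendBySource) {
  const counts = {};
  for (const lead of leads) {
    counts[lead.source] = (counts[lead.source] || 0) + 1;
  }
  const report = {};
  for (const [source, spend] of Object.entries(spendBySource)) {
    const n = counts[source] || 0;
    report[source] = n === 0 ? null : Math.round((spend / n) * 100) / 100; // null = spend but no leads yet
  }
  return report;
}

const report = costPerLead(
  [{ source: 'instagram' }, { source: 'instagram' }, { source: 'linkedin' }],
  { instagram: 50, linkedin: 40, google_ads: 30 },
);
// instagram: 25, linkedin: 40, google_ads: null (no attributed leads yet)
```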
by Daniel
**AI Email Support Agent with RAG & Cohere Reranking**

Transform your inbox into an intelligent support system: automatically detect new emails, retrieve relevant knowledge from Pinecone, rerank with Cohere for precision, generate contextual replies using Gemini AI, and respond, all while maintaining conversation history.

**What It Does**

This workflow triggers on incoming Gmail messages, leverages a LangChain agent with PostgreSQL memory for context, queries a Pinecone vector store (RAG) enhanced by Cohere reranking and OpenAI embeddings, crafts personalized responses via Gemini 2.5, and auto-replies to keep support flowing.

**Key Features**
- **Gmail Integration** – real-time polling for new emails every minute
- **RAG with Pinecone** – retrieves the top 10 relevant docs from the "agency-info" index as an agent tool
- **Cohere Reranking** – boosts retrieval accuracy by reordering results semantically
- **Persistent Memory** – Postgres chat history keyed by email ID for ongoing threads
- **Gemini-Powered Agent** – handles queries with a custom system prompt for agency support
- **Seamless Auto-Reply** – sends formatted text responses directly in Gmail

**Perfect For**
- **Agencies:** automate client FAQs on services, pricing, and ownership
- **Support Teams:** scale responses without losing conversation context
- **Small Businesses:** handle inquiries 24/7 with AI-driven accuracy
- **Developers:** prototype RAG agents with vector stores and rerankers
- **Marketers:** personalize outreach replies based on a knowledge base
- **Consultants:** quick, informed answers from internal docs

**Technical Highlights**

Built on n8n's LangChain ecosystem, this workflow highlights:
- Trigger-to-response pipeline with polling and webhooks
- Hybrid retrieval: embeddings + vector search + semantic reranking
- Stateful agents with database-backed memory for multi-turn chats
- Multi-provider setup: OpenAI (embeddings), Cohere (rerank), Google (LLM)
- Scalable for production with configurable topK and session keys

**Setup Instructions**

*Prerequisites*
- n8n instance with LangChain nodes enabled
- Accounts for: Gmail (OAuth2), OpenAI (API key), Cohere (API key), Google Gemini (API key), Pinecone (API key and index), Postgres (database connection, e.g., Neon or Supabase)

*Required Credentials*
1. **Gmail OAuth2:** enable the Gmail API in Google Cloud Console, then create an OAuth2 credential in n8n with scopes https://www.googleapis.com/auth/gmail.readonly and https://www.googleapis.com/auth/gmail.send
2. **OpenAI API:** get an API key from platform.openai.com and add it as an OpenAI credential in n8n
3. **Cohere API:** sign up at cohere.com and copy the API key to an n8n Cohere credential
4. **Google Gemini API:** generate a key at https://aistudio.google.com/ and add it as a Google PaLM credential in n8n (compatible with Gemini)
5. **Pinecone API:** create an index "agency-info" with dimension 1024 and add the API key to an n8n Pinecone credential
6. **Postgres:** set up a database (e.g., Neon/Supabase) with a table for chat history and add the connection details (host, database, user, password) to an n8n Postgres credential

*Configuration Steps*
1. Import the workflow JSON into your n8n instance
2. Assign all required credentials to the respective nodes
3. Populate the Pinecone "agency-info" index with your knowledge base documents (use a separate upsert workflow or the Pinecone dashboard)
4. Customize the tableName in the Postgres Memory node if needed (default: "email_support_agent_")
5. Adjust the agent's system prompt or topK retrieval if required for your use case
6. Activate the workflow and test by sending a sample email to trigger it

**Troubleshooting**
- **No trigger firing:** verify Gmail scopes and the polling interval
- **Empty retrieval:** check Pinecone index population, dimensions (must be 1024), and document embeddings
- **Rerank errors:** ensure the Cohere API key is valid and has sufficient quota
- **Memory issues:** confirm the Postgres connection and that sessionKey uses the email ID

Perfect for deploying hands-off email automation. Import, connect credentials, and activate!
by Cheng Siong Chin
**How It Works**

This workflow automates multi-cloud billing analysis and FinOps reporting using a supervised multi-agent AI architecture. It targets cloud finance teams, FinOps practitioners, DevOps leads, and CTOs seeking continuous visibility into cloud spend, resource waste, and carbon impact.

A daily trigger fetches billing exports via HTTP and parses CSV data. A central Multi-Cloud Optimisation supervisor agent then coordinates four specialised sub-agents: a Resource Utilisation Analyser that identifies idle and over-provisioned assets, a Cost Optimisation Agent that surfaces savings opportunities across providers, a Carbon Footprint Analysis Agent that quantifies emissions per workload, and a FinOps Narrative Generator that produces human-readable financial commentary. Shared tools, including a Financial Calculator and an Advanced Analytics Code Tool, support cross-agent computation. Results are parsed through a Structured Output Parser and formatted into a final consolidated report for stakeholder distribution.

**Setup Steps**
1. Configure the HTTP GET node with your cloud provider.
2. Connect OpenAI credentials to all four sub-agent model nodes.
3. Link the Financial Calculator and Advanced Analytics Code Tool nodes.
4. Configure the Structured Output Parser schema to match your reporting fields.
5. Test end-to-end with a sample CSV billing export before activating the daily schedule.

**Prerequisites**
- Cloud provider billing export URLs (AWS, GCP, Azure)
- n8n instance (v1.0+)
- HTTP access to billing APIs
- Report destination (email, Slack, or storage) configured

**Use Cases**
- FinOps teams generating daily multi-cloud spend digests

**Customisation**
- Add a Slack or email node to distribute the final report automatically

**Benefits**
- Daily automation eliminates manual billing export and analysis effort
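The CSV-parsing step that feeds the sub-agents can be sketched as a per-service cost aggregation. The `service,cost` column layout below is a simplified assumption; real AWS/GCP/Azure billing exports carry many more fields.

```javascript
// Minimal sketch of billing CSV parsing, assuming a simplified export with
// "service" and "cost" columns. Real exports need more robust CSV handling.
function parseBillingCsv(csv) {
  const [header, ...lines] = csv.trim().split('\n');
  const cols = header.split(',');
  const serviceIdx = cols.indexOf('service');
  const costIdx = cols.indexOf('cost');
  const totals = {};
  for (const line of lines) {
    const fields = line.split(',');
    const service = fields[serviceIdx];
    totals[service] = (totals[service] || 0) + parseFloat(fields[costIdx]);
  }
  return totals;
}

const totals = parseBillingCsv('service,cost\nEC2,10.5\nS3,2.0\nEC2,4.5');
// { EC2: 15, S3: 2 }
```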
by Ishita Virmani
This n8n template keeps track of multiple school websites for admission updates and sends an email notification when admissions open.

**Good To Know**

This template uses only free-tier tools: Gemini for the LLM and standard email alerting.

**How it works**

For each school website provided:
1. Get and clean the page content through the HTTP Request node.
2. The Gemini model takes the HTML content and a defined prompt that instructs it to identify whether Pre-Nursery admissions for the year 2026-2027 have been announced yet.
3. If the LLM response confirms the announcement, trigger an email to the configured address.

**Features**
- Scheduled daily checks
- HTTP scraping
- Google Gemini text extraction for Pre-Nursery admission status
- Email alerts

**How to use**
1. Import the workflow.
2. Provide an existing or new Gemini API key within the "Are admissions open" node.
3. Set up SMTP account credentials within the "Send Email" node, along with the From-Email and To-Email.
4. Finally, update your list of schools and their admission URLs within the "Shortlisted Schools" node.

**Customizing the workflow**
- It can track school admissions for any class, including Pre-Nursery/Bal-Vatika/1st etc., by modifying the prompt.
- It can track any school that publishes details on its website that can be extracted via the HTTP Request node.
by Florent
**Restore workflows & credentials from FTP - Remote Backup Solution**

This n8n template provides a safe and intelligent restore solution for self-hosted n8n instances, allowing you to restore workflows and credentials from FTP remote backups. Perfect for disaster recovery or migrating between environments, this workflow automatically identifies your most recent FTP backup and provides a manual restore capability that intelligently excludes the current workflow to prevent conflicts. Works seamlessly with date-organized backup folders stored on any FTP/SFTP server.

**Good to know**
- This workflow uses n8n's native import commands (`n8n import:workflow` and `n8n import:credentials`)
- Works with date-formatted backup folders (YYYY-MM-DD) stored on FTP servers
- The restore process intelligently excludes the current workflow to prevent overwriting itself
- Requires FTP/SFTP server access and proper Docker volume configuration
- All downloaded files are temporarily stored server-side before import
- Compatible with backups created by n8n's export commands and uploaded to FTP
- Supports selective restoration: restore only credentials, only workflows, or both

**How it works**

*Restore Process (Manual)*
1. Manual trigger with configurable pinned data options (credentials: true/false, worflows: true/false)
2. The Init node sets up all necessary paths, timestamps, and configuration variables using your environment settings
3. The workflow connects to your FTP server and scans for available backup dates
4. Automatically identifies the most recent backup folder (latest YYYY-MM-DD date)
5. Creates temporary restore folders on your local server for downloaded files

If restoring credentials:
- Lists all credential files from the FTP backup folder
- Downloads credential files to a temporary local folder
- Writes files to disk using the "Read/Write Files from Disk" node
- Direct import using n8n's import command
- Credentials are imported with their encrypted format intact

If restoring workflows:
- Lists all workflow JSON files from the FTP backup folder
- Downloads workflow files to a temporary local folder
- Filters out the credentials subfolder to prevent importing it as a workflow
- Writes workflow files to disk
- Intelligently excludes the current restore workflow to prevent conflicts
- Imports all other workflows using n8n's import command

Optional email notifications provide detailed restore summaries with command outputs. Temporary files remain on the server for verification (manual cleanup recommended).

**How to use**

*Prerequisites*
- Existing n8n backups on an FTP server in a date-organized folder structure (format: /ftp-backup-folder/YYYY-MM-DD/)
- Workflow backups as JSON files in the date folder
- Credentials backups in the subfolder: /ftp-backup-folder/YYYY-MM-DD/n8n-credentials/
- FTP/SFTP access credentials configured in n8n
- For new environments: the N8N_ENCRYPTION_KEY from the source environment (see the dedicated section below)

*Initial Setup*
1. Configure your environment variables:
   - N8N_ADMIN_EMAIL: your email for notifications (optional)
   - FTP_BACKUP_FOLDER: FTP path where backups are stored (e.g., /n8n-backups)
   - N8N_PROJECTS_DIR: projects root directory (e.g., /files/n8n-projects-data)
   - GENERIC_TIMEZONE: your local timezone (e.g., Europe/Paris)
   - N8N_ENCRYPTION_KEY: required if restoring credentials to a new environment (see the dedicated section below)
2. Create your FTP credential in n8n: add a new FTP/SFTP credential, configure host, port, username, and password/key, then test the connection
3. Update the Init node:
   - (Optional) Configure your email here: `const N8N_ADMIN_EMAIL = $env.N8N_ADMIN_EMAIL || 'youremail@world.com';`
   - Set PROJECT_FOLDER_NAME to "Workflow-backups" (or your preferred name)
   - Set FTP_BACKUP_FOLDER to match your FTP backup path (default: /n8n-backups)
   - Set credentials to "n8n-credentials" (or your backup credentials folder name)
   - Set FTPName to a descriptive name for your FTP server (used in notifications)
4. Configure FTP credentials in nodes: update the FTP credential in the "List Credentials Folders" node and verify all FTP nodes use the same credential
5. Test the connection by executing the "List Credentials Folders" node
6. Optional: configure SMTP for email notifications: add an SMTP credential in n8n and activate the "SUCCESS email Credentials" and "SUCCESS email Workflows" nodes, or remove the email nodes if not needed

*Performing a Restore*
1. Open the workflow and locate the "Start Restore" manual trigger node
2. Edit the pinned data to choose what to restore:

```json
{
  "credentials": true,
  "worflows": true
}
```

   - credentials: true - restore credentials from FTP
   - worflows: true - restore workflows from FTP (note: the "worflows" typo is preserved because it is the actual key name in the workflow)
   - Set both to true to restore everything
3. Update the node's notes to reflect your choice (for documentation)
4. Click "Execute workflow" on the "Start Restore" node
5. The workflow will: connect to FTP and find the most recent backup, download the selected files to temporary local folders, import credentials and/or workflows, and send a success email with detailed operation logs
6. Check the console logs or email for the detailed restore summary

*Important Notes*
- The workflow automatically excludes itself during restore to prevent conflicts
- Credentials are restored with their encryption intact. If restoring to a new environment, you must configure the N8N_ENCRYPTION_KEY from the source environment (see the dedicated section below)
- Existing workflows/credentials with the same names will be overwritten
- Temporary folders are created with a date prefix (e.g., 2025-01-15-restore-credentials)
- Test in a non-production environment first if unsure

**Critical: N8N_ENCRYPTION_KEY Configuration**

Why this is critical: n8n generates an encryption key automatically on first launch and saves it in the ~/.n8n/config file. However, if this file is lost (for example, due to missing Docker volume persistence), n8n will generate a NEW key, making all previously encrypted credentials inaccessible.
When you need to configure N8N_ENCRYPTION_KEY:
- Restoring to a new n8n instance
- When your data directory is not persisted between container recreations
- Migrating from one server to another
- As a best practice to ensure key persistence across updates

How credentials encryption works:
- Credentials are encrypted with a specific key unique to each n8n instance
- This key is auto-generated on first launch and stored in /home/node/.n8n/config
- When you back up credentials, they remain encrypted but the key is NOT included
- If the key file is lost or a new key is generated, restored credentials cannot be decrypted
- Setting N8N_ENCRYPTION_KEY explicitly ensures the key remains consistent

Solution: retrieve and configure the encryption key

*Step 1: Get the key from your source environment*

Check if the key is defined in environment variables:

```shell
docker-compose exec n8n printenv N8N_ENCRYPTION_KEY
```

If this command returns nothing, the key is auto-generated and stored in n8n's data volume:

```shell
# Enter the container
docker-compose exec n8n sh
# Check the configuration file
cat /home/node/.n8n/config
# Exit the container
exit
```

*Step 2: Configure the key in your target environment*

Option A: using a .env file (recommended for security)

```shell
# Add to your .env file
N8N_ENCRYPTION_KEY=your_retrieved_key_here
```

Then reference it in docker-compose.yml:

```yaml
services:
  n8n:
    environment:
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
```

Option B: directly in docker-compose.yml (less secure)

```yaml
services:
  n8n:
    environment:
      - N8N_ENCRYPTION_KEY=your_retrieved_key_here
```

*Step 3: Restart n8n*

```shell
docker-compose restart n8n
```

*Step 4: Now restore your credentials*

Only after configuring the encryption key, run the restore workflow with credentials: true.
Best practice for future backups:
- Always save your N8N_ENCRYPTION_KEY in a secure location alongside your backups
- Consider storing it in a password manager or secure vault
- Document it in your disaster recovery procedures

**Requirements**

*FTP Server*
- FTP or SFTP server with existing n8n backups
- Read access to the backup folder structure
- Network connectivity from the n8n instance to the FTP server

*Existing Backups on FTP*
- Date-organized backup folders (YYYY-MM-DD format)
- Backup files created by n8n's export commands or a compatible format
- Credentials in the subfolder structure: YYYY-MM-DD/n8n-credentials/

*Environment*
- Self-hosted n8n instance (Docker recommended)
- Docker volumes mounted with write access to the project folder
- Access to n8n CLI commands (`n8n import:credentials` and `n8n import:workflow`)
- Proper file system permissions for temporary folder creation

*Credentials*
- FTP/SFTP credential configured in n8n
- Optional: SMTP credentials for email notifications

**Technical Notes**

*FTP Connection and Download Process*
- Uses n8n's built-in FTP node for all remote operations
- Supports both FTP and SFTP protocols
- Downloads files as binary data before writing to disk
- Temporary local storage is required for the import process

*Smart Workflow Exclusion*
- During workflow restore, the current workflow's name is cleaned and matched against backup files
- This prevents the restore workflow from overwriting itself
- The exclusion logic handles special characters and spaces in workflow names
- A bash command removes the current workflow from the temporary restore folder before import

*Credentials Subfolder Filtering*
- The "Filter out Credentials sub-folder" node checks for binary data presence
- Only items with binary data (actual files) proceed to the disk write
- Prevents the credentials subfolder from being imported as a workflow

*Timezone Handling*
- All timestamps use UTC for technical operations
- Display times use the local timezone for user-friendly readability
- FTP backup folder scanning works with the YYYY-MM-DD format regardless of timezone
## Security

- FTP connections should use SFTP or FTPS for encrypted transmission
- Credentials are imported in n8n's encrypted format (encryption preserved)
- Temporary files are stored in project-specific folders
- Consider access controls for who can trigger restore operations
- No sensitive credential data is logged in console output

## Troubleshooting

### Common Issues

- **FTP connection fails**: Verify that FTP credentials are correctly configured and the server is accessible
- **No backups found**: Ensure the `FTP_BACKUP_FOLDER` path is correct and contains date-formatted folders (`YYYY-MM-DD`)
- **Permission errors**: Ensure the Docker user has write access to `N8N_PROJECTS_DIR` for temporary folders
- **Path not found**: Verify that all volume mounts in `docker-compose.yml` match your project folder location
- **Import fails**: Check that backup files are in valid n8n export format
- **Download errors**: Verify that the FTP path structure matches the expected format (date folder / credentials subfolder / files)
- **Workflow conflicts**: The workflow automatically excludes itself, but ensure backup files are properly named
- **Credentials not restored**: Verify that the FTP backup contains an `n8n-credentials` subfolder with credential files
- **Credentials decrypt error**: Ensure `N8N_ENCRYPTION_KEY` matches the source environment

### Error Handling

- The "Find Last Backup" node has an error output configured to catch FTP listing issues
- The "Download Workflow Files" node continues on error to handle the presence of the credentials subfolder
- All critical nodes log detailed error information to the console
- Email notifications include stdout and stderr from the import commands

## Version Compatibility

- Tested with n8n version 1.113.3
- Compatible with Docker-based n8n installations
- Requires n8n CLI access (available in official Docker images)
- Works with any FTP/SFTP server (Synology NAS, dedicated FTP servers, cloud FTP services)

This workflow is designed for FTP/SFTP remote backup restoration. For local disk backups, see the companion workflow "n8n Restore from Disk".
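Several of the path-related issues above come down to the expected remote layout. As a hedged illustration (the function name and base path are placeholders), the remote path the workflow expects for credential files is composed like this:

```shell
# Illustrative: the expected remote layout is <base>/<YYYY-MM-DD>/n8n-credentials/
credentials_path() {
  printf '%s/%s/n8n-credentials\n' "$1" "$2"
}

credentials_path /backups/n8n 2024-03-12
```

If `credentials: true` restores nothing, compare the path your FTP server actually contains against this pattern.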
Works best with backups from: "Automated n8n Workflows & Credentials Backup to Local/Server Disk & FTP"
by WeblineIndia
# WooCommerce Product Reviews: Sentiment Analysis with Slack

## Summary

This workflow automatically fetches product reviews from your WooCommerce store, analyzes the sentiment of each review using AI, stores the results in Airtable, and sends a summary of positive, neutral, and negative reviews to Slack. It helps teams quickly understand overall customer feedback, track product sentiment, and stay updated without manually reading every review.

## Quick Start: Implementation Steps

1. Import the workflow JSON file into n8n.
2. Configure credentials:
   - WooCommerce HTTP Basic Auth (for fetching reviews)
   - OpenAI API (for sentiment analysis)
   - Airtable Personal Access Token (for storing reviews)
   - Slack API (for sending summary messages)
3. Adjust the Cron/Schedule Trigger node to your preferred interval (e.g., every 10 minutes).
4. Test the workflow with a few reviews to ensure the AI and Slack integrations work correctly.
5. Activate the workflow after confirming functionality.

## What It Does

This workflow automates sentiment analysis and team notification:

- **Schedule Trigger** – Runs the workflow automatically at defined intervals.
- **Set WooCommerce Domain** – Defines the WooCommerce store to fetch reviews from.
- **Fetch Reviews** – Retrieves all recent product reviews using WooCommerce API credentials.
- **Loop Over Items** – Processes reviews in smaller batches for efficiency.
- **Message a Model** – Sends each review to OpenAI to detect sentiment (positive, neutral, or negative) and generate a short summary.
- **Merge & Code Nodes** – Combine original review data with AI results and ensure proper data alignment.
- **If Node** – Checks sentiment for further processing.
- **Create a Record (Airtable)** – Stores each review and its sentiment in Airtable.
- **Code Summary Node** – Counts positive, neutral, and negative reviews to create a summary.
- **Send a Message (Slack)** – Posts the sentiment summary to the team's Slack channel for visibility.
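If you want to sanity-check the "Fetch Reviews" step outside n8n, the WooCommerce REST API exposes reviews at `/wp-json/wc/v3/products/reviews`. A sketch (the store domain, page size, and the `reviews_url` helper are placeholders; authenticate with your consumer key/secret, e.g. via `curl -u`):

```shell
# Illustrative: build the reviews endpoint URL for a given store and page size.
reviews_url() {
  printf '%s/wp-json/wc/v3/products/reviews?per_page=%d\n' "$1" "$2"
}

reviews_url https://example-store.com 20
# You could then fetch it manually with something like:
#   curl -s -u "ck_xxx:cs_xxx" "$(reviews_url https://example-store.com 20)"
```

A successful manual fetch confirms your store domain and Basic Auth credentials before you wire them into the workflow.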
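The tally produced by the "Code Summary Node" is simple to illustrate. This sketch is not the node's actual code (the real node runs JavaScript inside n8n); it just shows the counting idea, with one sentiment label per input line:

```shell
# Illustrative tally of sentiment labels, one per line on stdin.
summarize() {
  awk '{c[$1]++} END {printf "positive=%d neutral=%d negative=%d\n", c["positive"], c["neutral"], c["negative"]}'
}

printf 'positive\npositive\nnegative\nneutral\npositive\n' | summarize
```

The resulting counts are what the Slack step formats into the team-facing summary message.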
## Who's It For

This workflow is ideal for:

- E-commerce managers tracking WooCommerce product feedback
- Customer support teams monitoring review sentiment
- Marketing and product teams seeking automated insights
- Teams using Airtable and Slack for data tracking and collaboration

## Requirements

- An n8n instance (cloud or self-hosted)
- **WooCommerce store** with API access
- **OpenAI API key** for sentiment analysis
- **Airtable account** with a base/table configured
- **Slack workspace** with API access for messaging
- Basic familiarity with APIs and workflow automation

## How It Works

1. **Schedule Trigger** – Executes the workflow at the defined interval.
2. **Set WooCommerce Domain** – Configures which store to fetch reviews from.
3. **Fetch Reviews** – Retrieves all recent product reviews from WooCommerce.
4. **Loop Over Items** – Splits reviews into manageable batches.
5. **Message a Model** – Sends reviews to AI for sentiment analysis and a short summary.
6. **Merge & Code Nodes** – Combine AI results with original review data and prepare it for storage and summary.
7. **If Node** – Checks review sentiment for saving or further processing.
8. **Create a Record (Airtable)** – Saves each review with its sentiment and AI summary.
9. **Code Summary Node** – Counts the number of positive, neutral, and negative reviews.
10. **Send a Message (Slack)** – Sends a concise summary of review sentiment to the Slack channel.

## Setup Steps

1. Import the workflow JSON into n8n.
2. Add credentials:
   - WooCommerce HTTP Basic Auth
   - OpenAI API
   - Airtable Personal Access Token
   - Slack API
3. Configure the WooCommerce domain in the Set WooCommerce Domain node.
4. Test the workflow with sample reviews to ensure the AI outputs correctly.
5. Adjust the Schedule Trigger interval as needed.
6. Activate the workflow after confirming that data flows correctly from WooCommerce → AI → Airtable → Slack.

## How To Customize Nodes

- **Schedule Trigger** – Adjust the interval (minutes, hours, days) as needed.
- **Set WooCommerce Domain** – Replace with your store's domain URL.
- **Fetch Reviews** – Update the endpoint or filters if needed.
  Ensure WooCommerce credentials are correct.
- **Message a Model** – Change the AI model or prompts to adjust the sentiment analysis or summary style.
- **Create a Record (Airtable)** – Map additional fields if needed. Ensure the table has the necessary columns for sentiment, summary, rating, and product info.
- **Send a Message (Slack)** – Customize the Slack message format. Change the channel ID to send summaries to the right team.

## Optional Enhancements

- Include historical review trends.
- Automatically trigger notifications only for negative reviews.
- Send summaries to email or other messaging apps.
- Visualize sentiment trends in Airtable or external dashboards.

## Use Case Examples

- **Automated Sentiment Tracking** – Understand customer feedback without manual reading.
- **Team Alerts** – Quickly notify product and support teams about negative reviews.
- **Data Storage & Reporting** – Keep historical sentiment in Airtable for trend analysis.
- **Efficient Batch Processing** – Process large numbers of reviews without overloading the system.

## Troubleshooting Guide

| Issue | Possible Cause | Solution |
|--------------------------|--------------------------------------------------|--------------------------------------------------------------|
| Reviews not fetched | Wrong WooCommerce credentials or endpoint | Check WooCommerce HTTP Basic Auth and the store domain |
| AI analysis fails | OpenAI API key invalid or prompt error | Verify OpenAI credentials and prompt syntax |
| Slack message missing | Incorrect Slack channel or API token | Check Slack credentials and the channel ID |
| Airtable not storing reviews | Table or field mismatch | Verify the Airtable base, table, and column mapping |

## Need Help?

If you need assistance setting up the workflow, customizing the AI sentiment analysis, or integrating Slack summaries, feel free to contact our n8n development team at WeblineIndia. We provide workflow automation, AI integration, and reporting solutions for WooCommerce stores.