by usamaahmed
HR Resume Screening Workflow - Smart Hiring on Autopilot

Overview:
This workflow builds an AI-powered resume screening system inside n8n. It begins with Gmail and Form triggers that capture incoming resumes, then uploads each file to Google Drive for storage. The resume is downloaded and converted into plain text, where two branches run in parallel: one extracts structured contact details, and the other uses an AI agent to summarize education, job history, and skills while assigning a suitability score. A cleanup step normalizes the data before merging both outputs, and the final candidate record is saved into Google Sheets and Airtable, giving recruiters a centralized dashboard to identify top talent quickly and consistently.

Prerequisites:
To run this workflow successfully, you'll need:
- **Gmail OAuth** - to read incoming resumes.
- **Google Drive OAuth** - to upload and download resume files.
- **Google Sheets OAuth** - to save structured candidate records.
- **Airtable Personal Access Token** - for dashboards and record-keeping.
- **OpenAI / OpenRouter API Key** - to run the AI summarizer and evaluator.

Setup Instructions:
1. Import the Workflow - Clone or import the workflow into your n8n instance.
2. Add Credentials - Go to n8n > Credentials and connect Gmail, Google Drive, Google Sheets, Airtable, and OpenRouter/OpenAI.
3. Configure Key Nodes:
   - Gmail Trigger - Update filters.q with the job title you are hiring for (e.g., "Senior Software Engineer").
   - Google Drive Upload - Set the folderId where resumes will be stored.
   - Google Sheets Node - Link to your HR spreadsheet (e.g., "Candidates 2025").
   - Airtable Node - Select the correct base & table schema for candidate records.
4. Test the Workflow - Send a test resume (via email or form), then check Google Sheets & Airtable for structured candidate data.
5. Go Live - Enable the workflow. It will now run continuously and process new resumes as they arrive.
End-to-End Workflow Walkthrough:

Section 1 - Entry & Intake
Nodes:
- Gmail Trigger - Polls the inbox every minute, captures job application emails, and downloads resume attachments (CV0, CV1, ...).
- Form Trigger - Alternate entry for resumes submitted via a careers page or job portal.

Quick Understanding: Think of this section as the front desk of recruitment - resumes arrive either by email or online form, and the system immediately grabs them for processing.

Section 2 - File Management
Nodes:
- Upload File (Google Drive) - Saves the incoming resume into a structured Google Drive folder, naming it after the applicant.
- Download File (Google Drive) - Retrieves the stored resume file for further processing.
- Extract from File - Converts the resume (PDF/DOC) into plain text so the AI and extractors can work with it.

Quick Understanding: This is your digital filing room. Every resume is safely stored, then converted into a readable format for the hiring system.

Section 3 - AI Processing (Parallel Analysis)
Nodes:
- Information Extractor - Pulls structured contact information (candidate name, email, and phone number) using regex validation and schema rules.
- AI Agent (LangChain + OpenRouter) - Reads the full CV and outputs:
  - Educational Qualifications
  - Job History
  - Skills Set
  - Candidate Evaluation Score (1-10)
  - Justification for the score

Quick Understanding: Imagine having two assistants working in parallel: one quickly extracts basic contact info, while the other deeply reviews the CV and gives an evaluation.

Section 4 - Data Cleanup & Merging
Nodes:
- Edit Fields - Standardizes the AI Agent's output into a consistent field (output).
- Code (JS Parsing & Cleanup) - Converts the AI's free-text summary into clean JSON fields (education, jobHistory, skills, score, justification).
- Merge - Combines the structured contact info with the AI's evaluation into a single candidate record.
Quick Understanding: This is like the data cleaning and reporting team, making sure all details are neat, structured, and merged into one complete candidate profile.

Section 5 - Persistence & Dashboards
Nodes:
- Google Sheets (Append Row) - Saves candidate details into a Google Sheet for quick team access.
- Airtable (Create Record) - Stores the same structured data in Airtable, enabling dashboards, analytics, and ATS-like tracking.

Quick Understanding: Think of this as your HR dashboard and database. Every candidate record is logged in both Google Sheets and Airtable, ready for filtering, reporting, or further action.

Workflow Overview Table:

| Section | Key Roles / Nodes | Model / Service | Purpose | Benefit |
| --- | --- | --- | --- | --- |
| Entry & Intake | Gmail Trigger, Form Trigger | Gmail API / Webhook | Capture resumes from email or forms | Resumes collected instantly from multiple sources |
| File Management | Google Drive Upload, Google Drive Download, Extract from File | Google Drive + n8n Extract | Store resumes & convert to plain text | Centralized storage + text extraction for processing |
| AI Processing | Information Extractor, AI Agent (LangChain + OpenRouter) | Regex + OpenRouter AI (gpt-oss-20b, free) | Extract contact info + AI CV analysis | Candidate details + score + justification generated automatically |
| Data Cleanup & Merge | Edit Fields, Code (JS Parsing & Cleanup), Merge | n8n native + Regex Parsing | Standardize and merge outputs | Clean, structured JSON record with all candidate info |
| Persistence Layer | Google Sheets Append Row, Airtable Create Record | Google Sheets + Airtable APIs | Store structured candidate data | HR dashboards & ATS-ready records for easy review and analytics |
| Execution Flow | All connected | Gmail + Drive + Sheets + Airtable + AI | End-to-end automation | Automated resume -> structured record -> recruiter dashboards |

Workflow Output Overview:
Each candidate's data is standardized into the following fields: Candidate Name, Candidate Email, Contact Number, Educational Qualifications, Job History, Skills Set, AI Score (1-10), Justification.

Example (Google Sheet row):

Benefits of This Workflow at a Glance:
- **Lightning-Fast Screening** - Processes hundreds of resumes in minutes instead of hours.
- **AI-Powered Evaluation** - Automatically summarizes candidate education, work history, and skills, and gives a suitability score (1-10) with justification.
- **Centralized Storage** - Every resume is securely saved in Google Drive for easy access and record-keeping.
- **Data-Ready Outputs** - Structured candidate profiles go straight into Google Sheets and Airtable, ready for dashboards and analytics.
- **Consistency & Fairness** - Standardized AI scoring ensures every candidate is evaluated on the same criteria, reducing human bias.
- **Flexible Intake** - Works with both Gmail (email applications) and Form submissions (job portals or career pages).
- **Recruiter Productivity Boost** - Frees HR teams from manual extraction and data entry, allowing them to focus on interviewing and hiring the best talent.

Practical HR Use Case: "Screen resumes for a Senior Software Engineer role and shortlist top candidates."
1. Gmail Trigger - Captures incoming job applications with CVs attached.
2. Google Drive - Stores resumes for record-keeping.
3. Extract from File - Converts CVs into plain text.
4. Information Extractor - Pulls candidate name, email, and phone number.
5. AI Agent - Summarizes education, job history, and skills, and assigns a suitability score (1-10).
6. Code & Merge - Cleans and combines outputs into a structured candidate profile.
7. Google Sheets - Logs candidate data for quick HR review.
8. Airtable - Builds dashboards to filter and identify top-scoring candidates.

Result: HR instantly sees structured candidate records, filters by score, and focuses interviews on the best talent.
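As a sketch of what the "Code (JS Parsing & Cleanup)" step might do, assuming the AI agent returns labeled lines such as `Education: ...` and `Score: 8` (the labels and field names here are illustrative, not the template's actual node code):

```javascript
// Hypothetical parser for the AI agent's free-text summary.
// Assumes one labeled section per line; adjust labels to your prompt.
function parseEvaluation(text) {
  const grab = (label) => {
    const m = text.match(new RegExp(label + ':\\s*([^\\n]+)', 'i'));
    return m ? m[1].trim() : '';
  };
  const scoreMatch = text.match(/score:\s*(\d+)/i);
  return {
    education: grab('Education'),
    jobHistory: grab('Job History'),
    skills: grab('Skills'),
    score: scoreMatch ? Number(scoreMatch[1]) : null,
    justification: grab('Justification'),
  };
}

const sample = [
  'Education: BSc Computer Science',
  'Job History: 5 years at Acme Corp',
  'Skills: Node.js, SQL',
  'Score: 8',
  'Justification: Strong backend experience',
].join('\n');

console.log(parseEvaluation(sample));
```

In n8n, the same function would run inside the Code node, reading the AI Agent's `output` field from the incoming item and returning the parsed object for the Merge node.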
by Rajeet Nair
Overview

This workflow implements a self-healing Retrieval-Augmented Generation (RAG) maintenance system that automatically updates document embeddings, evaluates retrieval quality, detects embedding drift, and safely promotes or rolls back embedding updates.

Maintaining high-quality embeddings in production RAG systems is difficult. When source documents change or embedding models evolve, updates can accidentally degrade retrieval quality or introduce semantic drift. This workflow solves that problem by introducing an automated evaluation and rollback pipeline for embeddings.

It periodically checks for document changes, regenerates embeddings for updated content, evaluates the new embeddings against a set of predefined golden test questions, and compares the results with the currently active embeddings. Quality metrics such as Recall@K, keyword similarity, and answer variance are calculated, while embedding vectors are also analyzed for semantic drift using cosine distance. If the new embeddings outperform the current ones and remain within acceptable drift limits, they are automatically promoted to production. Otherwise, the system safely rolls back or flags the update for manual review. This creates a robust, production-safe RAG lifecycle automation system.

How It Works

1. Workflow Trigger
The workflow can start in two ways:
- **Scheduled trigger** running daily
- **Webhook trigger** when source documents change
Both paths lead to a centralized configuration node that defines parameters such as chunk size, thresholds, and notification settings.

2. Document Retrieval & Change Detection
Documents are fetched from the configured source (GitHub, Drive, Confluence, or other APIs). The workflow then:
- Splits documents into deterministic chunks
- Computes SHA-256 hashes for each chunk
- Compares them with previously stored hashes in Postgres
Only new or modified chunks proceed to embedding generation, which significantly reduces processing cost.
3. Embedding Generation
Changed chunks are processed through:
- Recursive text splitting
- Document loading
- OpenAI embedding generation
These embeddings are stored as a candidate vector store rather than immediately replacing the production embeddings. Metadata about the embedding version is stored in Postgres.

4. Golden Question Evaluation
A set of golden test questions stored in the database is used to evaluate retrieval quality. Two AI agents are used:
- One queries the candidate embeddings
- One queries the current production embeddings
Both generate answers using the retrieved context.

5. Quality Metrics Calculation
The workflow calculates several evaluation metrics:
- **Recall@K** to measure retrieval effectiveness
- **Keyword similarity** between generated answers and expected answers
- **Answer length variance** to detect inconsistencies
These are combined into a weighted quality score.

6. Embedding Drift Detection
The workflow compares embedding vectors between versions using cosine distance. This identifies semantic drift, which may occur due to:
- embedding model updates
- chunking changes
- document structure changes

7. Promotion or Rollback
The workflow checks two conditions:
- The quality score exceeds the configured threshold
- Embedding drift remains below the drift threshold
If both conditions pass, the candidate embeddings are promoted to active. If not, the system rolls back to the previous embeddings or flags the update for human review.

8. Notifications
A webhook notification is sent with:
- update status
- quality score
- drift score
- timestamp
This allows teams to monitor embedding health automatically.

Setup Instructions

Configure Document Source
Edit the Workflow Configuration node and set documentSourceUrl to the API endpoint or file source containing your documents.
Examples include:
- GitHub repository API
- Google Drive export API
- Confluence REST API

Configure Postgres Database
Create the following tables in your Postgres database: document_chunks, embeddings, embedding_versions, golden_questions. These tables store chunk hashes, embedding vectors, version metadata, and evaluation questions. Connect the Postgres nodes using your database credentials.

Add OpenAI Credentials
Configure credentials for:
- **OpenAI Embeddings**
- **OpenAI Chat Model**
These are used for generating embeddings and answering evaluation questions.

Populate Golden Questions
Insert evaluation questions into the golden_questions table. Each record should include:
- question_text
- expected passages
- expected answer keywords
These questions represent critical queries your RAG system must answer correctly.

Configure Notification Webhook
Add a Slack or Teams webhook URL in the configuration node. Notifications will be sent whenever:
- embeddings are promoted
- embeddings are rolled back
- manual review is required

Adjust Quality Thresholds
In the configuration node you can modify:
- qualityThreshold
- driftThreshold
- chunkSize
- chunkOverlap
These parameters control the sensitivity of the evaluation system.

Use Cases
- Production RAG Monitoring - Automatically evaluate and update embeddings in production knowledge systems without risking degraded results.
- Continuous Knowledge Base Updates - Keep embeddings synchronized with frequently changing documentation, repositories, or internal knowledge bases.
- Safe Embedding Model Upgrades - Test new embedding models against production data before promoting them.
- AI System Reliability - Detect retrieval regressions before they affect end users.
- Enterprise AI Governance - Provide automated evaluation and rollback capabilities for mission-critical RAG deployments.
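The metric weighting, drift check, and promotion decision described in steps 5-7 might look like this; the weights and default thresholds are illustrative assumptions, not the workflow's configured values:

```javascript
// Cosine distance between two embedding vectors of equal length.
function cosineDistance(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Weighted quality score from the three metrics (weights are assumed).
function qualityScore({ recallAtK, keywordSim, lengthVariance }) {
  return 0.5 * recallAtK + 0.3 * keywordSim + 0.2 * (1 - lengthVariance);
}

// Promotion gate: quality above threshold AND drift below threshold.
function shouldPromote(metrics, drift,
    { qualityThreshold = 0.7, driftThreshold = 0.15 } = {}) {
  return qualityScore(metrics) >= qualityThreshold && drift <= driftThreshold;
}
```

Failing either condition corresponds to the rollback / manual-review branch of the workflow.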
Requirements

This workflow requires the following services:
- **n8n**
- **Postgres Database**
- **OpenAI API**

Recommended integrations: Slack or Microsoft Teams (for notifications).

Required nodes include: Schedule Trigger, Webhook, HTTP Request, Postgres, Compare Datasets, Code nodes, OpenAI Embeddings, OpenAI Chat Model, Vector Store nodes, and AI Agent nodes.

Summary

This workflow provides a fully automated self-healing RAG infrastructure for maintaining embedding quality in production systems. By combining change detection, golden-question evaluation, embedding drift analysis, and automatic rollback, it ensures that retrieval performance improves safely over time. It is ideal for teams running production AI assistants, knowledge bases, or internal search systems that depend on high-quality vector embeddings.
by Oneclick AI Squad
This workflow transforms traditional REST APIs into structured, AI-accessible MCP (Model Context Protocol) tools. It provides a unified gateway that allows Claude AI to safely, granularly, and auditably interact with any business system - CRM, ERP, databases, SaaS - through a single MCP-compliant interface.

How it works
1. Receive MCP Tool Request - Webhook ingests the tool call from an AI agent or MCP client
2. Validate & Authenticate - Verifies the API key, checks the JWT token, validates the MCP schema
3. Tool Registry Lookup - Resolves the requested tool name to a backend API config and permission scope
4. Claude AI Intent Verification - Confirms tool call parameters are safe, well-formed, and within policy
5. Rate Limit & Quota Check - Enforces per-client tool call limits before execution
6. Execute Backend API Call - Routes to the correct business system API with mapped parameters
7. Normalize & Enrich Response - Standardizes the API response into the MCP tool result schema
8. Audit & Log - Writes an immutable access log for compliance and observability
9. Return MCP Tool Result - Delivers the structured response back to the AI agent

Setup Steps
1. Import the workflow into n8n
2. Configure credentials:
   - Anthropic API - Claude AI for intent verification and parameter validation
   - Google Sheets - Tool registry, rate limit tracking, and audit log
   - SMTP - Alert notifications for policy violations
3. Populate the Tool Registry sheet with your API endpoints
4. Set your MCP gateway API key in the validation node
5. Activate the workflow and point your MCP client to the webhook URL

Sample MCP Tool Call Payload

    {
      "mcpVersion": "1.0",
      "clientId": "agent-crm-001",
      "apiKey": "mcp-key-xxxx",
      "toolName": "crm.get_customer",
      "parameters": {
        "customerId": "CUST-10042",
        "fields": ["name", "email", "tier"]
      },
      "requestId": "req-abc-123",
      "callerContext": "User asked: show me customer details"
    }

Supported Tool Categories
- **CRM Tools** - get_customer, update_contact, list_deals
- **ERP Tools** - get_inventory, create_order, update_stock
- **Database Tools** - query_records, insert_record, update_record
- **Communication Tools** - send_email, post_slack, create_ticket
- **Analytics Tools** - run_report, fetch_metrics, export_data

Features
- **MCP-compliant schema** - works with any MCP-compatible AI agent
- **Granular permission scopes** - read/write/admin per tool per client
- **Claude AI intent guard** - blocks malformed or policy-violating calls
- **Rate limiting** - per-client quota enforcement
- **Full audit trail** - every tool call logged for SOC 2 / ISO 27001

Explore More Automation: Contact us to design AI-powered lead nurturing, content engagement, and multi-platform reply workflows tailored to your growth strategy.
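A minimal sketch of the registry-lookup and rate-limit steps; the registry shape, endpoints, and limit below are assumptions for illustration, not the workflow's actual Google Sheets schema:

```javascript
// Hypothetical in-memory stand-ins for the Tool Registry sheet and
// the rate-limit tracking sheet.
const toolRegistry = {
  'crm.get_customer': { endpoint: 'https://crm.example.com/customers', scope: 'read' },
  'crm.update_contact': { endpoint: 'https://crm.example.com/contacts', scope: 'write' },
};

const callCounts = new Map(); // clientId -> calls in the current window

// Resolve a requested tool name to its backend config, or reject.
function resolveTool(toolName) {
  const tool = toolRegistry[toolName];
  if (!tool) throw new Error(`Unknown tool: ${toolName}`);
  return tool;
}

// Count the call and report whether the client is still within quota.
function checkRateLimit(clientId, limit = 100) {
  const count = (callCounts.get(clientId) || 0) + 1;
  callCounts.set(clientId, count);
  return count <= limit;
}
```

In the workflow these lookups read from Google Sheets rows rather than in-memory objects, but the control flow (resolve, then gate on quota, then execute) is the same.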
by Lakindu Siriwardana
Automated Video Generator (n8n Workflow)

Features
- End-to-End Video Creation from a user idea or transcript
- AI-Powered Scriptwriting using LLMs (e.g., DeepSeek via OpenRouter)
- Voiceover Generation with customizable TTS voices
- Image Scene Generation using generative models such as together.ai
- Clip Creation & Concatenation into a full video
- Dynamic Caption Generation with styling options
- Google Drive & Sheets Integration for asset storage and progress tracking

How It Works
1. User Submits Form with: main topic or transcript, desired duration, TTS voice, visual style (e.g., Pixar, Lego, Cyberpunk), and image generation provider.
2. AI generates a script: a catchy title, description, hook, full script, and CTA using a language model.
3. Text-to-Speech (TTS): The script is turned into audio using the selected voice, with timestamped captions generated.
4. Scene Segmentation: The script is split into 5-6 second segments for visual storyboarding.
5. Image Prompt Creation: Each scene is converted into an image prompt in the selected style (e.g., "anime close-up of a racing car").
6. Image Generation: Prompts are sent to together.ai or fal.ai to generate scenes.
7. Clip Creation: Each image is turned into a short video clip (Ken Burns-style zoom) based on script timing.
8. Video Assembly: All clips are concatenated into a single video, and captions are overlaid using the earlier timestamps.
9. Final Output is uploaded to Google Drive and Telegram, and links are saved in Google Sheets.

Initial Setup

1. Set Up TTS Voice (Text-to-Speech)
Run your TTS server locally using Docker.

2. Set Up NCA-Toolkit
The nca-toolkit appears to be a custom video/image processing backend used via HTTP APIs:
- http://host.docker.internal:9090/v1/image/transform/video
- http://host.docker.internal:9090/v1/video/concatenate
- http://host.docker.internal:9090/v1/ffmpeg/compose

Steps:
- Clone or build the nca-toolkit container (if it's a private tool) and ensure it exposes port 9090.
- It should support endpoints for: image to video (zoom effect), video concatenation, audio + video merging, and caption overlay via FFmpeg.
- Run it locally with Docker: docker run -d -p 9090:80 your-nca-toolkit-image

3. Set Up together.ai (Image Generation) (Optional - you can use the ChatGPT API instead)
This handles image generation using models like FLUX.1-schnell.

Steps:
- Create an account at https://www.together.ai
- Generate your API key
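A sketch of how an n8n Code node might prepare the call to the concatenate endpoint listed above; the request body shape is an assumption based on typical FFmpeg wrappers, not the toolkit's documented schema:

```javascript
// Build a hypothetical request for the nca-toolkit concatenation
// endpoint. The `video_urls` payload shape is assumed.
function buildConcatenateRequest(clipUrls) {
  return {
    url: 'http://host.docker.internal:9090/v1/video/concatenate',
    method: 'POST',
    body: { video_urls: clipUrls.map((u) => ({ video_url: u })) },
  };
}

const req = buildConcatenateRequest(['clip-001.mp4', 'clip-002.mp4']);
console.log(req.body);
```

In the workflow this request would be issued by an HTTP Request node pointed at the toolkit container.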
by Abdulrahman Alhalabi
NGO TPM Request Management System

Benefits

For Beneficiaries:
- **24/7 Accessibility** - Submit requests anytime via the familiar Telegram interface
- **Language Flexibility** - Communicate in Arabic through text or voice messages
- **Instant Acknowledgment** - Receive immediate confirmation that requests are logged
- **No Technical Barriers** - Works on basic smartphones without special apps

For TPM Teams:
- **Centralized Tracking** - All requests automatically logged with timestamps and user details
- **Smart Prioritization** - AI categorizes issues by urgency and type for efficient response
- **Action Guidance** - Specific recommended actions generated for each request type
- **Performance Analytics** - Track response patterns and common issues over time

For NGO Operations:
- **Cost Reduction** - Automated intake reduces manual processing overhead
- **Data Quality** - Standardized categorization ensures consistent reporting
- **Audit Trail** - Complete record of all beneficiary interactions for compliance
- **Scalability** - Handle high volumes without proportional staff increases

How it Works
1. Multi-Input Reception - Accepts both text messages and voice recordings via Telegram
2. Voice Transcription - Uses OpenAI Whisper to convert Arabic voice messages to text
3. AI Categorization - GPT-4 analyzes requests and categorizes issues (aid distribution, logistics, etc.)
4. Action Planning - AI generates specific recommended actions for the TPM team in Arabic
5. Data Logging - Records all requests, categories, and actions in Google Sheets with user details
6. Confirmation Feedback - Sends an acknowledgment message back to users via Telegram

Set up Steps
Setup Time: ~20 minutes
1. Create Telegram Bot - Get a bot token from @BotFather and configure the webhook
2. Configure APIs - Set up OpenAI (transcription + chat) and Google Sheets credentials
3. Customize AI Prompts - Adjust system messages for your NGO's specific operations
4. Set Up Spreadsheet - Link Google Sheets for request tracking and reporting
5. Test Workflows - Verify both text and voice message processing paths

Detailed Arabic language configuration and TPM-specific categorization examples are included as sticky notes within the workflow.

What You'll Need:
- Telegram Bot Token (free from @BotFather)
- OpenAI API key (Whisper + GPT-4)
- Google Sheets API credentials
- Google Spreadsheet for logging requests
- Sample Arabic text/voice messages for testing

Key Features:
- Dual input support (text + voice messages)
- Arabic language processing and responses
- Structured data extraction (category + recommended action)
- Complete audit trail with user information
- Real-time confirmation messaging
- TPM team-specific workflow optimization
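Before logging to Google Sheets, the AI's categorization output could be validated along these lines; the category list and field names below are illustrative assumptions, not the workflow's actual schema:

```javascript
// Hypothetical allow-list of TPM issue categories.
const CATEGORIES = ['aid_distribution', 'logistics', 'registration', 'other'];

// Normalize the model's structured output: unknown categories fall
// back to 'other', and the recommended action is trimmed.
function normalizeRequest(aiOutput) {
  const category = CATEGORIES.includes(aiOutput.category)
    ? aiOutput.category
    : 'other';
  return {
    category,
    recommendedAction: (aiOutput.recommendedAction || '').trim(),
    receivedAt: new Date().toISOString(),
  };
}
```

Guarding the category field this way keeps the Google Sheets log consistent even when the model occasionally invents a label.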
by Ranjan Dailata
1. Who this is for
This workflow is designed for recruiters, HR analytics teams, and data-driven talent acquisition professionals seeking deeper insights from candidate resumes. It is valuable for HR tech developers, ATS/CRM engineers, and AI-driven recruitment platforms aiming to automate candidate research, and it helps organizations build predictive hiring models and gain actionable talent intelligence.

2. What problem this workflow solves
Recruiters often face information overload when analyzing candidate resumes: manually reviewing experience, skills, and cultural fit is slow and inconsistent. Traditional scraping tools extract raw data but fail to produce actionable intelligence such as career trajectory, skills alignment, and fit for a role.

This workflow solves that by:
- Automating candidate resume data extraction through Decodo
- Structuring it into the JSON Resume Schema
- Running deep AI-driven analytics using OpenAI GPT-4o-mini
- Delivering comprehensive candidate intelligence ready for ATS/CRM integration or HR dashboards

3. What this workflow does
This n8n workflow combines Decodo's web scraping with OpenAI GPT-4o-mini to produce advanced recruitment intelligence.

Flow Breakdown:
1. Manual Trigger - Start the workflow manually or schedule it in n8n.
2. Set Input Fields - Define the resume URL, location, and job description.
3. Decodo Node - Scrapes the candidate's profile (experience, skills, education, achievements, etc.).
4. Structured Data Extractor (GPT-4o-mini) - Converts the scraped data into a structured JSON Resume Schema.
5. Advanced Data Mining Engine (GPT-4o-mini) - Performs:
   - Skills Analysis (strengths, gaps, transferable skills)
   - Experience Intelligence (career trajectory, leadership, project complexity)
   - Cultural Fit Insights (work style, communication style, agility indicators)
   - Career Trajectory Forecasting (promotion trends, growth velocity)
   - Competitive Advantage Analysis (market positioning, salary expectations)
6. Summarizer Node - Produces an abstractive, comprehensive AI summary of the candidate profile.
7. Google Sheets Node - Saves the structured insights automatically into your recruitment intelligence sheet.
8. File Writer Node (Optional) - Writes the JSON report locally for offline storage or integration.

The result is a data-enriched candidate intelligence report far beyond what traditional resume parsing provides.

4. Setup

Prerequisites
- If you are new to Decodo, please sign up at visit.decodo.com
- n8n account with workflow editor access
- Decodo API credentials
- OpenAI API key
- Google Sheets account connected via OAuth2
- Make sure to install the Decodo Community node.

Setup Steps
1. Import the workflow JSON into your n8n workspace.
2. Set credentials for: Decodo account, OpenAI API (GPT-4o-mini), and Google Sheets OAuth2.
3. In the "Set the Input Fields" node, update:
   - url - resume link
   - geo - candidate region or country
   - jobDescription - target job description for matching
4. Ensure the Google Sheet ID and tab name are correct in the "Append or update row in sheet" node.
5. Click Execute Workflow to start.

5. How to customize this workflow
You can adapt this workflow for different recruitment or analytics scenarios:
- Add Sentiment Analysis - Add another LLM node to perform sentiment analysis on candidate recommendations or feedback notes.
- Enrich with Job Board Data - Use additional Decodo nodes or APIs (Indeed, Glassdoor, etc.) to compare candidate profiles to live job postings.
- Add Predictive Fit Scoring - Insert a Function node to compute a numerical "fit score" by comparing skill vectors and job requirements.
- Automate Candidate Reporting - Connect Gmail, Slack, or Notion nodes to automatically send summaries or reports to hiring managers.

6. Summary
The Advanced Resume Intelligence & Data Mining via Decodo + OpenAI GPT-4o-mini workflow transforms traditional candidate sourcing into AI-driven intelligence gathering. It integrates:
- **Decodo** - to perform web scraping of candidate data
- **GPT-4o-mini** - to interpret, analyze, and summarize with context
- **Google Sheets** - to store structured results for real-time analysis

With this system, recruiters and HR analysts can move from data collection to decision intelligence, unlocking faster and smarter talent insights.
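The suggested "fit score" Function node could be sketched as a Jaccard overlap between the candidate's skills and keywords from the job description; this is a simple stand-in for the skill-vector comparison, and the names are illustrative:

```javascript
// Jaccard similarity between candidate skills and job keywords,
// case-insensitive. Returns a value in [0, 1].
function fitScore(candidateSkills, jobKeywords) {
  const cand = new Set(candidateSkills.map((s) => s.toLowerCase()));
  const job = new Set(jobKeywords.map((s) => s.toLowerCase()));
  let overlap = 0;
  for (const k of job) if (cand.has(k)) overlap++;
  const union = new Set([...cand, ...job]).size;
  return union === 0 ? 0 : overlap / union;
}
```

A weighted variant (e.g. required skills counting more than nice-to-haves) would fit the same node with a small extension.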
by Cheng Siong Chin
How It Works
Scheduled runs collect data from oil markets, global shipping movements, news sources, and official reports. The system performs statistical checks to detect anomalies and volatility shifts. An AI-driven geopolitical model evaluates emerging risks and assigns a crisis score. Based on severity thresholds, results are routed to the appropriate alert channels for rapid response.

Setup Steps
1. Data Sources: Connect the oil price API, OPEC report feeds, shipping databases, and news sources.
2. AI Model: Configure the OpenRouter ChatGPT model for geopolitical and risk analysis.
3. Alerts: Define severity rules and route alerts to Email, Slack, or Dashboard APIs.
4. Storage: Configure a database for historical records, audit logging, and trend tracking.

Prerequisites
Oil market API credentials; news feed access; OPEC data source; OpenRouter API key; Slack/email/dashboard integrations

Use Cases
Supply chain risk monitoring; energy market crisis detection; geopolitical threat assessment; trader decision support; operational risk management

Customization
Adjust risk thresholds; add market data sources; modify alert routing rules

Benefits
Reduces crisis detection lag by 90%; consolidates fragmented data; enables proactive response
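The statistical check for anomalies and volatility shifts could be as simple as a z-score of the latest price against a rolling window; the threshold of 3 standard deviations is an assumed parameter, not the workflow's configured value:

```javascript
// Z-score of the latest observation against a rolling window.
function zScore(window, latest) {
  const mean = window.reduce((a, b) => a + b, 0) / window.length;
  const variance =
    window.reduce((a, b) => a + (b - mean) ** 2, 0) / window.length;
  const std = Math.sqrt(variance);
  return std === 0 ? 0 : (latest - mean) / std;
}

// Flag an anomaly when the latest value is far outside recent behavior.
function isAnomaly(window, latest, threshold = 3) {
  return Math.abs(zScore(window, latest)) > threshold;
}
```

Flagged observations would then be passed to the AI risk model for crisis scoring and routed by severity.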
by Deborah
Use n8n to bring data from any API to your AI. This workflow uses the Chat Trigger to provide the chat interface, and the Custom n8n Workflow Tool to call a second workflow that calls the API. The second workflow uses AI functionality to refine the API request based on the user's query. It then makes the API call and returns the response to the main workflow.

This workflow is used in the documentation under Advanced AI examples | Call an API to fetch data.

To use this workflow:
1. Load it into your n8n instance.
2. Add your credentials as prompted by the notes.

Requires n8n 1.28.0 or above.
by Joachim Hummel
This workflow automates the daily backup of all your n8n workflows to a designated folder in Nextcloud. It ensures that you always have the last 7 days of backups available while automatically deleting older ones to save space.

Features
- Scheduled Trigger: Runs automatically once per day (can be executed manually as well).
- Directory Management: Creates the /N8N-Backup directory in Nextcloud if it doesn't already exist.
- Backup Collection: Retrieves all workflows from the n8n instance.
- JSON Conversion: Converts each workflow into a JSON file.
- Upload to Nextcloud: Saves each backup file into the specified backup directory.
- Retention Control: Keeps only the latest 7 backups and deletes the rest from Nextcloud.

Notes
- Make sure to manually create the /N8N-Backup directory in your Nextcloud account before using this flow.
- Update the Backup Path node if you wish to change the upload directory.
- Ideal for teams using self-hosted n8n instances that require offsite backup via Nextcloud.

Requirements
- n8n instance with access to the Nextcloud node.
- Valid credentials for your Nextcloud account with API access.

Update 08/11/2025: "Backup Flows to Nextcloud" - import format fixed

Summary: The workflow now exports one clean JSON object per workflow (no arrays, no backup/meta fields), so files can be imported 1:1 via the n8n UI.

What changed:
- Switched from "Convert to File" to a Set node that builds the JSON in binary data.
- Enabled filters.include = "all" on Get many workflows to include nodes, connections, settings, pinData, and tags.
- Sanitized filenames and removed IDs/metadata that can break UI imports.
- Fixed the Nextcloud path and binary property mapping (data).

Verification:
- Generated multiple backups and imported each via the UI ("Import from file") without errors.
- Each file begins with { (a single object) and loads with the full workflow structure.

Notes:
- Keep "Binary Property" set to data in the Nextcloud node.
- Filenames are sanitized to avoid special-character issues.
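The filename sanitization mentioned in the update note might look like this; the exact replacement rules the workflow uses may differ:

```javascript
// Replace characters that are invalid in file paths, collapse
// whitespace to underscores, cap the length, and append .json.
// The specific rules here are an illustrative assumption.
function sanitizeFilename(name) {
  return (
    name
      .replace(/[\/\\:*?"<>|]/g, '-') // path-hostile characters
      .replace(/\s+/g, '_')           // spaces -> underscores
      .slice(0, 100) + '.json'
  );
}
```

This keeps workflow names like "Backup / Daily: v2" from producing broken Nextcloud paths.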
by Oneclick AI Squad
Acts as a virtual receptionist for the restaurant, handling incoming calls via VAPI without human intervention. It collects user details (name, booking time, number of people) for table bookings, checks availability in a PostgreSQL database using n8n, books the table if available, and sends a confirmation. It also provides restaurant details to callers, mimicking a human receptionist.

Key Insights
- VAPI must be configured to accurately capture user input for bookings and inquiries.
- The PostgreSQL database requires a table to manage restaurant bookings and availability.

Workflow Process
1. Initiate the workflow with a VAPI call to collect user details (name, time, number of people).
2. Use n8n to query the PostgreSQL database for table availability.
3. If a table is available, book it using n8n and update the PostgreSQL database.
4. Send a booking confirmation and restaurant service details back to VAPI via n8n.
5. Store and update restaurant table data in the PostgreSQL database using n8n.

Usage Guide
- Import the workflow into n8n and configure VAPI and PostgreSQL credentials.
- Test with a sample VAPI call to ensure proper data collection and booking confirmation.

Prerequisites
- VAPI API credentials for call handling
- PostgreSQL database with booking and availability tables

Customization Options
Modify the VAPI input fields to capture additional user details, or adjust the PostgreSQL query for specific availability criteria.
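The availability check that the PostgreSQL query implements can be sketched in an n8n Code node like this; the 90-minute slot length and the booking record shape are assumptions for illustration:

```javascript
// A table is available if no existing booking for it overlaps the
// requested slot. Slot length and field names are assumed.
function isAvailable(bookings, requestedStart, slotMinutes = 90) {
  const reqStart = new Date(requestedStart).getTime();
  const reqEnd = reqStart + slotMinutes * 60 * 1000;
  return bookings.every((b) => {
    const start = new Date(b.start).getTime();
    const end = start + slotMinutes * 60 * 1000;
    return reqEnd <= start || reqStart >= end; // no overlap
  });
}
```

In the workflow, the equivalent overlap condition would live in the WHERE clause of the PostgreSQL availability query rather than in JavaScript.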
by n8n Team
This workflow performs several data integration and synchronization tasks between Google Sheets and a MySQL database. Here is a step-by-step description of what this workflow does:

1. Manual Trigger: The workflow starts when the user clicks "Execute Workflow."
2. Schedule Trigger: This node schedules the workflow to run at specific intervals on weekdays (Monday to Friday) between 6 AM and 10 PM. It ensures regular data synchronization.
3. Google Sheet Data: This node connects to a specific Google Sheets document and retrieves data from the "Form Responses 1" sheet, filtering by the "DB Status" column.
4. SQL Get inquiries from Google: This node retrieves data from a MySQL database table named "ConcertInquiries" where the "source_name" is "GoogleForm."
5. Rename GSheet variables: This node renames the columns retrieved from Google Sheets and transforms the data into a format suitable for MySQL, assigning "GoogleForm" as the "source_name" value.
6. Compare Datasets: This node compares the data retrieved from Google Sheets and the MySQL database based on the timestamp and source_name fields. It identifies changes and updates.
7. No reply too long?: This node checks if there has been no reply within the last four hours, using the "timestamp" field from the Google Sheets data.
8. DB Status assigned?: This node checks if the "DB Status" field is not empty in the compared dataset.
9. Update GSheet status: If the conditions in the previous nodes are met, this node updates the "DB Status" field in Google Sheets with the corresponding value from the MySQL dataset.
10. DB Status in sync?: This node checks if the "source_name" field in Google Sheets is not empty.
11. Sync MySQL data: If the conditions in the previous nodes are met, this node updates the "source_name" field in the MySQL database to "GoogleFormSync."
12. Send Notifications: If the conditions in the "No reply too long?" node are met, this node sends notifications or performs other actions as needed.
Sticky Notes: These nodes provide additional information and documentation links for users.
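The "No reply too long?" condition can be sketched as a simple timestamp comparison; the field handling is illustrative of what the IF node evaluates:

```javascript
// True when the row's timestamp is more than maxHours old,
// i.e. the inquiry has gone without a reply for too long.
function noReplyTooLong(timestamp, now = Date.now(), maxHours = 4) {
  return now - new Date(timestamp).getTime() > maxHours * 3600 * 1000;
}
```

Rows for which this returns true are the ones routed to the Send Notifications step.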
by Jimleuk
This n8n template showcases the new HTTP tool released in version 1.47.0. Overall, the tool helps simplify AI Agent workflows where custom sub-workflows were performing the same simple HTTP requests.

Comparisons

1. AI agent that can scrape webpages
Remake of https://n8n.io/workflows/2006-ai-agent-that-can-scrape-webpages/
Changes:
- Replaces the Execute Workflow Tool and sub-workflow
- Replaces the response formatting

2. Allow your AI to call an API to fetch data
Remake of https://n8n.io/workflows/2094-allow-your-ai-to-call-an-api-to-fetch-data/
Changes:
- Replaces the Execute Workflow Tool and sub-workflow
- Replaces the manual query-parameter definitions
- Replaces the response formatting