by Dataki
This is the first version of a template for a RAG/GenAI app using WordPress content. As creating, sharing, and improving templates brings me joy 😄, feel free to reach out on LinkedIn if you have any ideas to enhance this template!

**How It Works**

This template includes three workflows:

- **Workflow 1**: Generate embeddings for your WordPress posts and pages, then store them in the Supabase vector store.
- **Workflow 2**: Handle upserts for WordPress content when edits are made.
- **Workflow 3**: Enable chat functionality by performing Retrieval-Augmented Generation (RAG) on the embedded documents.

**Why use this template?**

This template can be applied to various use cases:

- Build a GenAI application that requires embedded documents from your website's content.
- Embed or create a chatbot page on your website to enhance user experience as visitors search for information.
- Gain insight into the types of questions visitors are asking on your website.
- Simplify content management by asking the AI for related content ideas or checking whether similar content already exists. Useful for internal linking.

**Prerequisites**

- Access to Supabase for storing embeddings.
- Basic knowledge of Postgres and pgvector.
- A WordPress website with content to be embedded.
- An OpenAI API key.

Ensure that your n8n workflow, Supabase instance, and WordPress website are set to the same timezone (or use GMT) for consistency.

**Workflow 1: Initial Embedding**

This workflow retrieves your WordPress pages and posts, generates embeddings from the content, and stores them in Supabase using pgvector.

**Step 0: Create Supabase tables**

Nodes:

- Postgres - Create Documents Table: this table is structured to support OpenAI embedding models with 1536 dimensions.
- Postgres - Create Workflow Execution History Table

These two nodes create tables in Supabase (a sketch of both tables follows this list):

- The documents table, which stores embeddings of your website content.
- The n8n_website_embedding_histories table, which logs workflow executions for efficient management of upserts. This table tracks the workflow execution ID and execution timestamp.
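The exact SQL lives in the two Postgres nodes. As a minimal sketch of the expected shape, assuming OpenAI's 1536-dimension embedding models (the history table's column names are my assumption based on the description above; treat the template's nodes as the source of truth):

```sql
-- pgvector ships with Supabase; enable it once per database
create extension if not exists vector;

-- Stores one row per embedded chunk of your WordPress content
create table documents (
  id bigserial primary key,
  content text,              -- Markdown version of the post/page chunk
  metadata jsonb,            -- title, publication/modification dates, URL, WordPress ID
  embedding vector(1536)     -- matches OpenAI embedding dimensions
);

-- Logs each run of the embedding/upsert workflows (column names assumed)
create table n8n_website_embedding_histories (
  id bigserial primary key,
  workflow_execution_id text,
  execution_timestamp timestamptz default now()
);
```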
**Step 1: Retrieve and Merge WordPress Pages and Posts**

Nodes:

- WordPress - Get All Posts
- WordPress - Get All Pages
- Merge WordPress Posts and Pages

These three nodes retrieve all content and metadata from your posts and pages and merge them. **Important:** apply filters to avoid generating embeddings for all site content.

**Step 2: Set Fields, Apply Filter, and Transform HTML to Markdown**

Nodes:

- Set Fields
- Filter - Only Published & Unprotected Content
- HTML to Markdown

These three nodes prepare the content for embedding by:

- Setting up the necessary fields for content embeddings and document metadata.
- Filtering to include only published and unprotected content (protected=false), ensuring private or unpublished content is excluded from your GenAI application.
- Converting HTML to Markdown, which enhances performance and relevance in Retrieval-Augmented Generation (RAG) by optimizing document embeddings.

**Step 3: Generate Embeddings, Store Documents in Supabase, and Log Workflow Execution**

Nodes:

- Supabase Vector Store
  - Sub-nodes: Embeddings OpenAI, Default Data Loader, Token Splitter
- Aggregate
- Supabase - Store Workflow Execution

This step generates embeddings for the content and stores them in Supabase, then logs the workflow execution details:

- Generate Embeddings: the Embeddings OpenAI node generates vector embeddings for the content.
- Load Data: the Default Data Loader prepares the content for embedding storage. The metadata stored includes the content title, publication date, modification date, URL, and ID, which is essential for managing upserts. ⚠️ Important note: be cautious not to store any sensitive information in metadata fields, as this information will be accessible to the AI and may appear in user-facing answers.
- Token Management: the Token Splitter segments content into manageable sizes to comply with token limits.
- Aggregate: ensure this node runs only once (for a single item).
- Store Execution Details: the Supabase - Store Workflow Execution node saves the workflow execution ID and timestamp, enabling tracking of when each content update was processed.

This setup ensures that content embeddings are stored in Supabase for use in downstream applications, while workflow execution details are logged for consistency and version tracking. This workflow should be executed only once for the initial embedding. Workflow 2, described below, will handle all future upserts, ensuring that new or updated content is embedded as needed.

**Workflow 2: Handle document upserts**

Content on a website follows a lifecycle: it may be updated, new content might be added, or, at times, content may be deleted. In this first version of the template, the upsert workflow manages:

- **Newly added content**
- **Updated content**

**Step 1: Retrieve WordPress Content with Regular CRON**

Nodes:

- CRON - Every 30 Seconds
- Postgres - Get Last Workflow Execution
- WordPress - Get Posts Modified After Last Workflow Execution
- WordPress - Get Pages Modified After Last Workflow Execution
- Merge Retrieved WordPress Posts and Pages

A CRON job (set to run every 30 seconds in this template, but adjustable as needed) initiates the workflow. A Postgres SQL query on the n8n_website_embedding_histories table retrieves the timestamp of the latest workflow execution. Next, the HTTP nodes use the WordPress API (update the example URL in the template with your own website's URL and add your WordPress credentials) to request all posts and pages modified after the last workflow execution date, as sketched below. This process captures both newly added and recently updated content. The retrieved content is then merged for further processing.
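As a rough illustration of what those HTTP nodes request: the modified_after parameter is part of the WordPress REST API (WordPress 5.7+); the URL and variable names here are placeholders.

```javascript
// Illustrative only: the template's HTTP Request nodes do the equivalent.
const lastRun = '2024-01-01T00:00:00'; // execution_timestamp from the history table

const url = 'https://example.com/wp-json/wp/v2/posts'
  + `?modified_after=${encodeURIComponent(lastRun)}`
  + '&status=publish&per_page=100';

const posts = await (await fetch(url)).json(); // new AND recently updated posts
// Pages are fetched the same way from /wp-json/wp/v2/pages.
```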
**Step 2: Set fields, use filter**

Nodes:

- Set Fields2
- Filter - Only Published and Unprotected Content

Same as Step 2 in Workflow 1, except that the HTML to Markdown conversion happens in a later step.

**Step 3: Loop Over Items to Identify and Route Updated vs. Newly Added Content**

Here, I initially aimed to use 'update documents' instead of the delete + insert approach, but encountered challenges, especially with updating both content and metadata columns together. Any help or suggestions are welcome! :)

Nodes:

- Loop Over Items
- Postgres - Filter on Existing Documents
- Switch
  - Route existing_documents (if documents with matching IDs are found in metadata):
    - Supabase - Delete Row if Document Exists: removes any existing entry for the document, preparing for an update.
    - Aggregate2: aggregates documents on Supabase by ID to ensure that Set Fields3 is executed only once for each piece of WordPress content, avoiding duplicate execution.
    - Set Fields3: sets fields required for embedding updates.
  - Route new_documents (if no matching documents are found with IDs in metadata):
    - Set Fields4: configures fields for embedding newly added content.

In this step, a loop processes each item, directing it based on whether the document already exists. The Aggregate2 node acts as a control to ensure Set Fields3 runs only once per piece of WordPress content, effectively avoiding duplicate execution and optimizing the update process.

**Step 4: HTML to Markdown, Supabase Vector Store, Update Workflow Execution Table**

The HTML to Markdown node mirrors Workflow 1 - Step 2. Refer to that section for a detailed explanation of how HTML content is converted to Markdown for improved embedding performance and relevance. Following this, the content is stored in the Supabase vector store to manage embeddings efficiently. Lastly, the workflow execution table is updated. These nodes mirror the Workflow 1 - Step 3 nodes.

**Workflow 3: An Example GenAI App with WordPress Content: a Chatbot to Embed on Your Website**

**Step 1: Retrieve Supabase Documents, Aggregate, and Set Fields After a Chat Input**

Nodes:

- When Chat Message Received
- Supabase - Retrieve Documents from Chat Input
- Embeddings OpenAI1
- Aggregate Documents
- Set Fields

When a user sends a message to the chat, the prompt (user question) is sent to the Supabase vector store retriever. The RPC function match_documents (created in Workflow 1 - Step 0; a reference sketch appears at the end of this section) retrieves documents relevant to the user's question, enabling a more accurate and relevant response. In this step:

- The Supabase vector store retriever fetches documents that match the user's question, including metadata.
- The Aggregate Documents node consolidates the retrieved data.
- Finally, Set Fields organizes the data to create a more readable input for the AI agent.

Directly using the AI agent without these nodes would prevent metadata from being sent to the language model (LLM), but metadata is essential for enhancing the context and accuracy of the AI's response. By including metadata, the AI's answers can reference relevant document details, making the interaction more informative.

**Step 2: Call AI Agent, Respond to User, and Store Chat Conversation History**

Nodes:

- **AI Agent**
  - Sub-nodes: OpenAI Chat Model, Postgres Chat Memories
- **Respond to Webhook**

This step involves calling the AI agent to generate an answer, responding to the user, and storing the conversation history. The model used is gpt-4o-mini, chosen for its cost-efficiency.
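For reference, the match_documents function used by the Supabase retriever typically follows the canonical Supabase/LangChain pattern below. This is a sketch assuming 1536-dimension embeddings; the SQL created by the template's Step 0 node is authoritative.

```sql
create or replace function match_documents (
  query_embedding vector(1536),
  match_count int default null,
  filter jsonb default '{}'
) returns table (id bigint, content text, metadata jsonb, similarity float)
language plpgsql
as $$
begin
  return query
  select
    documents.id,
    documents.content,
    documents.metadata,
    1 - (documents.embedding <=> query_embedding) as similarity
  from documents
  where documents.metadata @> filter                    -- optional metadata filter
  order by documents.embedding <=> query_embedding      -- cosine distance
  limit match_count;
end;
$$;
```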
by Sk developer
🚀 LinkedIn Video to MP4 Automation with Google Drive & Sheets | RapidAPI Integration

This n8n workflow automatically converts LinkedIn video URLs into downloadable MP4 files using the LinkedIn Video Downloader API, uploads them to Google Drive with public access, and logs both the original URL and the Google Drive link into Google Sheets. It leverages the LinkedIn Video Downloader service for fast and secure video extraction.

📝 Node Explanations (Single-Line)

1️⃣ On form submission → Captures the LinkedIn video URL from the user via a web form.
2️⃣ HTTP Request → Calls LinkedIn Video Downloader to fetch downloadable MP4 links (a request sketch appears at the end of this description).
3️⃣ If → Checks for API errors and routes the workflow accordingly.
4️⃣ Download mp4 → Downloads the MP4 video file from the API response URL.
5️⃣ Upload To Google Drive → Uploads the downloaded MP4 file to Google Drive.
6️⃣ Google Drive Set Permission → Makes the uploaded file publicly accessible.
7️⃣ Google Sheets → Logs successful conversions with the LinkedIn URL and sharable Drive link.
8️⃣ Wait → Delays execution before logging failed attempts.
9️⃣ Google Sheets Append Row → Logs failed video downloads with an N/A Drive link.

📄 Google Sheets Columns

- **URL** → Original LinkedIn video URL entered in the form.
- **Drive_URL** → Publicly sharable Google Drive link to the converted MP4 file. For failed downloads, Drive_URL will display N/A.

💡 Use Case

Automate LinkedIn video downloading and sharing using LinkedIn Video Downloader for social media managers, marketers, and content creators without manual file handling.

✅ Benefits

**Time-saving** (auto-download & upload), **centralized tracking** in Sheets, **easy sharing** via Drive links, and **error logging** for failed downloads, all powered by the **RapidAPI LinkedIn Video Downloader**.
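The HTTP Request node follows the standard RapidAPI calling convention. In the sketch below, the host, path, and request body are placeholders (copy the real values from the API's RapidAPI page); only the two x-rapidapi-* headers are the standard RapidAPI pattern.

```javascript
// Placeholder sketch, not the actual endpoint of the LinkedIn Video Downloader API.
const linkedinVideoUrl = 'https://www.linkedin.com/posts/example'; // from the form node

const res = await fetch('https://<api-host>.p.rapidapi.com/<download-path>', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'x-rapidapi-key': '<YOUR_RAPIDAPI_KEY>',        // standard RapidAPI auth header
    'x-rapidapi-host': '<api-host>.p.rapidapi.com', // standard RapidAPI host header
  },
  body: JSON.stringify({ url: linkedinVideoUrl }),
});
const data = await res.json(); // expected to contain the downloadable MP4 link
```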
by Oneclick AI Squad
This comprehensive n8n workflow automates the entire travel business call management process, from initial customer inquiries to trip bookings and marketing outreach. The system handles incoming calls, validates trip details, processes bookings, captures leads, and manages outbound marketing campaigns to promote trip organizer services. It streamlines the complete sales cycle while maintaining organized data records for business intelligence.

**Essential Information**

- The system operates across four distinct workflows to handle different aspects of travel call management.
- All call data is automatically captured and stored in organized spreadsheets for analysis and follow-up.
- The workflow validates trip details before processing to ensure data accuracy and prevent booking errors.
- Outbound marketing campaigns are automatically triggered based on lead detection and formatting.

**System Architecture**

- **Call Handling Pipeline**: The Detect Incoming Call node captures all incoming customer calls, followed by the Validate Trip Details node, which verifies and processes trip information, and the Deliver Organizer Info node, which provides relevant trip organizer details to callers.
- **Booking Management Flow**: The Capture Voice Input node records customer booking requests, the Update Booking Record node processes and stores booking information, and the Send Booking Confirmation node delivers confirmation details to customers.
- **Lead Generation Process**: The Detect New Lead node identifies potential customers from call data, the Format Lead Information node structures the lead data for marketing use, and the Initiate Marketing Outreach node launches targeted marketing campaigns.
- **Data Management System**: The Receive Call Response node collects call interaction data, the Log User Input node records customer information in spreadsheets, and the Relay Response to System node ensures data synchronization across all components.

**Implementation Guide**

1. Import the workflow into n8n and configure phone system integration for call detection and voice capture.
2. Set up spreadsheet connections for booking records, lead management, and call logging.
3. Configure marketing automation tools for outbound campaign management.
4. Test each workflow section independently before enabling the complete system.
5. Monitor call handling accuracy and adjust validation rules as needed.
**Technical Dependencies**

- Phone system API or telephony service for call detection and voice processing
- Spreadsheet service (Google Sheets, Excel Online) for data storage and management
- Marketing automation platform for outbound campaign execution
- Voice recognition service for capturing and processing customer input
- CRM integration for lead management and customer tracking

**Database & Sheet Structure**

- **Call Tracking Sheet**: Columns should include Call_ID, Customer_Phone, Call_Time, Call_Duration, Call_Status, Trip_Interest, Organizer_Assigned
- **Booking Records Sheet**: Required columns are Booking_ID, Customer_Name, Customer_Phone, Destination, Travel_Dates, Group_Size, Booking_Status, Confirmation_Sent
- **Lead Management Sheet**: Essential columns include Lead_ID, Customer_Name, Phone_Number, Email, Trip_Preference, Lead_Source, Lead_Status, Marketing_Campaign_Sent
- **Trip Organizer Database**: Contains Organizer_ID, Organizer_Name, Specialization, Contact_Info, Availability_Status, Performance_Rating
- **Marketing Outreach Log**: Tracks Campaign_ID, Lead_ID, Campaign_Type, Send_Date, Response_Status, Follow_up_Required

**Customization Possibilities**

- Adjust the Validate Trip Details node to include specific travel validation rules or partner requirements.
- Modify the Format Lead Information node to match your CRM system's data structure and marketing campaign formats.
- Configure the Initiate Marketing Outreach node to integrate with your preferred marketing platforms and campaign templates.
- Customize the data logging structure in the Log User Input node to capture additional customer information or booking details.
- Add additional validation steps or approval workflows between booking capture and confirmation sending.
by Nabin Bhandari
This template uses VAPI and Cal.com to book appointments through a voice conversation. It detects whether the user wants to check availability or book an appointment, then responds naturally with real-time scheduling options.

**Who is this for?**

This workflow is perfect for:

- Voice assistant developers
- AI receptionists and smart concierge tools
- Service providers (salons, clinics, coaches) needing hands-free scheduling
- Anyone building voice-based customer experiences

**What does it do?**

This workflow turns a natural voice conversation into a working appointment system:

- It starts with a Webhook connected to your VAPI voice agent.
- The Set node extracts user intent (like "check availability" or "book now").
- A Switch node branches logic based on the intent.
- If the user wants to check availability, the workflow fetches available times from Cal.com.
- If the user wants to book, it creates a new event using Cal.com's API (a rough sketch of both calls appears at the end of this description).
- The final result is sent back to VAPI as a conversational voice response.

**How to use it**

1. Import this workflow into your n8n instance.
2. Set up a Webhook node and connect it to your VAPI voice agent.
3. Add your Cal.com API token as a credential (use HTTP Header Auth).
4. Deploy and test using VAPI's simulator or real phone input.
5. (Optional) Customize the OpenAI prompt if you're using it to process or moderate inputs.

**Requirements**

- A working VAPI agent
- A Cal.com account with API access
- n8n (cloud or self-hosted)
- An understanding of how to configure webhook and API credentials in n8n

**Customization Ideas**

- Swap out Cal.com with another booking API (like Calendly)
- Add a Google Sheets or Supabase node to log appointments
- Use OpenAI to summarize or sanitize voice inputs before proceeding
- Build multi-turn conversations in VAPI for more complex bookings
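For orientation, the two Cal.com calls might look roughly like the sketch below. Endpoint paths, auth style, and required fields vary by Cal.com API version, so treat every name here as an assumption and check Cal.com's API docs before wiring the HTTP Request nodes.

```javascript
// Hypothetical sketch: verify endpoints and fields against Cal.com's API docs.
const API_KEY = '<CAL_COM_API_KEY>';
const eventTypeId = 123; // your Cal.com event type (assumed)

// "Check availability" branch (v1-style, key passed as a query parameter)
const slots = await (await fetch(
  `https://api.cal.com/v1/availability?apiKey=${API_KEY}` +
  `&eventTypeId=${eventTypeId}&dateFrom=2024-06-01&dateTo=2024-06-07`
)).json();

// "Book now" branch
const booking = await (await fetch(
  `https://api.cal.com/v1/bookings?apiKey=${API_KEY}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      eventTypeId,
      start: '2024-06-03T10:00:00Z',
      responses: { name: 'Jane Doe', email: 'jane@example.com' }, // assumed fields
      timeZone: 'Europe/Berlin',
    }),
  }
)).json();
```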
by n8n Team
**Who this template is for**

This template is for developers or teams who need to convert CSV data into JSON format through an API endpoint, with support for both file uploads and raw CSV text input.

**Use case**

Converting CSV files or raw CSV text data into JSON format via a webhook endpoint, with error handling and notifications. This is particularly useful when you need to transform CSV data into JSON as part of a larger automation or integration process.

**How this workflow works**

1. Receives POST requests through a webhook endpoint at /tool/csv-to-json
2. Uses a Switch node to handle different input types:
   - File uploads (binary data)
   - Plain text CSV data
   - JSON format data
3. Processes the CSV data:
   - For files: uses the Extract From File node
   - For raw text: converts the text to CSV using a custom Code node that handles both comma and semicolon delimiters (a sketch of this node appears at the end of this description)
4. Aggregates the processed data and returns:
   - Success response (200): converted JSON data
   - Error response (500): error message with details
5. In case of errors, sends notifications to a Slack error channel with execution details and a link to debug

**Set up steps**

1. Configure the webhook endpoint by deploying the workflow
2. Set up Slack integration for error notifications:
   - Update the Slack channel ID (currently set to "C0832GBAEN4")
   - Configure OAuth2 authentication for Slack
3. Test the endpoint using CURL for file uploads:

```bash
curl -X POST "https://yoururl.com/webhook-test/tool/csv-to-json" \
  -H "Content-Type: text/csv" \
  --data-binary @path/to/your/file.csv
```

Or send raw CSV data as a text/plain content type.
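The delimiter-handling Code node might look like the minimal sketch below, assuming the raw CSV arrives in the webhook item's body field and contains no quoted fields (the template's own node is authoritative).

```javascript
// Minimal CSV → JSON sketch with comma/semicolon auto-detection.
const text = String($input.first().json.body ?? '').trim(); // assumed input location

// Pick whichever delimiter occurs more often in the payload.
const delimiter = text.split(';').length > text.split(',').length ? ';' : ',';

const [headerLine, ...rows] = text.split(/\r?\n/);
const headers = headerLine.split(delimiter).map(h => h.trim());

// Return one n8n item per CSV row, keyed by the header names.
return rows.filter(Boolean).map(row => {
  const values = row.split(delimiter);
  return {
    json: Object.fromEntries(headers.map((h, i) => [h, (values[i] ?? '').trim()])),
  };
});
```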
by Yaron Been
This workflow automatically scrapes customer reviews from Trustpilot and performs sentiment analysis to extract valuable customer insights. It saves you time by eliminating the need to manually read through reviews and provides structured data on customer feedback, sentiment, and pain points.

**Overview**

This workflow automatically scrapes the latest customer reviews from any Trustpilot company page and uses AI to analyze each review for sentiment, extract key complaints or praise, and identify recurring customer pain points. It stores all structured review data in Google Sheets for easy analysis and reporting.

**Tools Used**

- **n8n**: The automation platform that orchestrates the workflow
- **Bright Data**: For scraping Trustpilot review pages without being blocked
- **OpenAI**: AI agent for intelligent review analysis and sentiment extraction
- **Google Sheets**: For storing structured review data and analysis results

**How to Install**

1. Import the Workflow: Download the .json file and import it into your n8n instance
2. Configure Bright Data: Add your Bright Data credentials to the MCP Client node
3. Set Up OpenAI: Configure your OpenAI API credentials
4. Configure Google Sheets: Connect your Google Sheets account and set up your review tracking spreadsheet
5. Customize: Enter target Trustpilot company URLs and adjust review analysis parameters

**Use Cases**

- **Product Teams**: Identify customer pain points and feature requests from reviews
- **Customer Support**: Monitor customer satisfaction and recurring issues
- **Marketing Teams**: Extract positive testimonials and understand customer sentiment
- **Business Intelligence**: Track brand reputation and customer feedback trends

**Connect with Me**

- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data**: https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)

#n8n #automation #trustpilot #reviewscraping #sentimentanalysis #brightdata #webscraping #customerreviews #n8nworkflow #workflow #nocode #reviewautomation #customerinsights #brandmonitoring #reviewanalysis #customerfeedback #reputationmanagement #reviewmonitoring #customersentiment #productfeedback #trustpilotscraping #reviewdata #customerexperience #businessintelligence #feedbackanalysis #reviewtracking #customervoice #aianalysis #reviewmining #customerinsights
by Yaron Been
**Description**

This workflow automatically finds trending headlines and content from various sources and posts them to your social media accounts. It helps maintain an active social media presence without the daily manual effort of content curation.

**Overview**

This workflow automatically scrapes trending headlines and content from various sources and posts them to your social media accounts. It uses Bright Data to access content and n8n to schedule and post to platforms like Twitter, LinkedIn, or Facebook.

**Tools Used**

- **n8n**: The automation platform that orchestrates the workflow.
- **Bright Data**: For scraping trending content from news sites, blogs, or other sources without getting blocked.
- **Social Media APIs**: To post content to your accounts.

**How to Install**

1. Import the Workflow: Download the .json file and import it into your n8n instance.
2. Configure Bright Data: Add your Bright Data credentials to the Bright Data node.
3. Connect Social Media: Authenticate your social media accounts.
4. Customize: Set your content preferences, posting schedule, and hashtag strategy.

**Use Cases**

- **Social Media Managers**: Automate content curation and posting.
- **Content Creators**: Share trending topics in your niche.
- **Businesses**: Maintain an active social media presence with minimal effort.

**Connect with Me**

- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data**: https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)

#n8n #automation #socialmedia #brightdata #contentcuration #scheduling #socialmediaautomation #contentmarketing #socialmediamanagement #autoposting #trendingcontent #n8nworkflow #workflow #nocode #socialmediatools #digitalmarketing #contentcalendar #socialmediapresence #headlinecuration #trendalerts #socialmediaschedule #contentautomation #socialmediamarketing #contentdistribution #automatedposting #socialmediastrategy
by Askan
**What problem does this solve?**

It fetches LinkedIn profiles for a multitude of purposes based on a keyword and location via Google search and stores them in an Excel file for download and in a NocoDB database. It tries to avoid costly services and should be beginner-friendly for n8n users. It uses SerpApi (serpapi.com) to avoid being blocked by Google Search and to make the data easier to process.

**What does it do?**

- Based on criteria input, it searches LinkedIn profiles
- It discards unnecessary data and turns the follower count into a real number
- The output is provided as an Excel table for download and in a NocoDB database

**How does it do it?**

- Based on criteria input, it uses serpapi.com to run a Google search for the respective LinkedIn profiles (a query sketch appears at the end of this description)
- With openai.com, the name of the respective company is added
- With openai.com, a follower figure such as "300+" is turned into a real number: 300
- All unnecessary metadata is discarded
- An Excel file is created as output
- The output is stored in a nocodb.com table

**Step-by-step instructions**

1. Import the workflow: copy the workflow JSON from the "Template Code" section below and import it into n8n via "Import from File" or "Import from URL".
2. Set up a free account at serpapi.com and get API credentials to enable good Google search results.
3. Set up an API account at openai.com and get an API key.
4. Set up a nocodb.com account (or self-host) and get the API credentials.
5. Create the credentials for serpapi.com, openai.com, and nocodb.com in n8n.
6. Set up a table in NocoDB with the fields indicated in the note above the NocoDB node.
7. Follow the instructions as detailed in the notes above individual nodes.
8. When the workflow is finished, open the Excel node and click download if you need the Excel file.
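To illustrate the search step: a Google query for LinkedIn profiles typically combines the site: operator with the keyword and location, and SerpApi exposes this through its search endpoint (q and api_key are real SerpApi parameters; the query shape itself is an illustrative assumption, and the template's SerpApi node is authoritative).

```javascript
// Sketch of the SerpApi request behind the profile search.
const keyword = 'marketing manager';
const location = 'Berlin';

// Restrict Google results to LinkedIn profile pages.
const query = `site:linkedin.com/in "${keyword}" "${location}"`;

const url = 'https://serpapi.com/search.json'
  + `?q=${encodeURIComponent(query)}&api_key=<YOUR_SERPAPI_KEY>`;

const results = await (await fetch(url)).json(); // organic_results holds the hits
```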
by Yaron Been
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow automatically monitors customer churn indicators and early warning signals to help reduce customer attrition and improve retention rates. It saves you time by eliminating the need to manually track customer behavior and provides proactive insights for preventing customer churn.

**Overview**

This workflow automatically scrapes customer data sources, support tickets, usage analytics, and engagement metrics to identify patterns that indicate potential customer churn. It uses Bright Data to access customer data and AI to intelligently analyze behavior patterns and predict churn risk.

**Tools Used**

- **n8n**: The automation platform that orchestrates the workflow
- **Bright Data**: For scraping customer data and analytics platforms without being blocked
- **OpenAI**: AI agent for intelligent churn prediction and pattern analysis
- **Google Sheets**: For storing churn indicators and customer retention data

**How to Install**

1. Import the Workflow: Download the .json file and import it into your n8n instance
2. Configure Bright Data: Add your Bright Data credentials to the MCP Client node
3. Set Up OpenAI: Configure your OpenAI API credentials
4. Configure Google Sheets: Connect your Google Sheets account and set up your churn monitoring spreadsheet
5. Customize: Define customer data sources and churn indicator parameters

**Use Cases**

- **Customer Success**: Proactively identify at-risk customers for retention efforts
- **Account Management**: Prioritize customer outreach based on churn probability
- **Product Teams**: Identify product issues that contribute to customer churn
- **Revenue Operations**: Reduce churn rates and improve customer lifetime value

**Connect with Me**

- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data**: https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)

#n8n #automation #churnprediction #customerretention #brightdata #webscraping #customeranalytics #n8nworkflow #workflow #nocode #churnindicators #customersuccess #retentionanalysis #customerchurn #customerinsights #churnprevention #retentionmarketing #customerdata #churnmonitoring #customerlifecycle #retentionmetrics #churnanalysis #customerbehavior #retentionoptimization #churnreduction #customerengagement #retentionstrategy #churnmanagement #customerhealth #retentiontracking
by Daniel Ng
**Auto Backup n8n Workflows to Google Drive**

Imagine the sinking feeling: hours, weeks, or even months of meticulous work building your n8n workflows, suddenly gone. A server crash, an accidental deletion, data corruption, or an unexpected platform issue, and all your automated processes vanish. Without a reliable backup system, you're facing a complete rebuild from scratch, a scenario that's not just frustrating but can be catastrophic for business operations.

Furthermore, consider the daunting task of migrating your n8n instance to a new host or server. Manually exporting each workflow, one by one, then painstakingly importing them into the new environment is not only incredibly time-consuming, especially if you have tens or hundreds of workflows, but also highly prone to errors and omissions. You need a systematic, automated solution.

This workflow provides a robust solution for automatically backing up all your n8n workflows to Google Drive on a schedule (default: every hour). It creates a uniquely named folder for each backup instance, incorporating the date and hour, and then systematically uploads each workflow as an individual JSON file. To manage storage space, the workflow also includes a cleanup mechanism that deletes backup folders older than a user-defined retention period (defaulting to 7 days).

Ideally, this backup workflow should be used in conjunction with a restore solution like our "Restore Workflows from Google Drive Backups" template. For more powerful n8n templates, visit our website or contact us at AI Automation Pro. We help your business build custom AI workflow automation and apps.

**Feature highlights**

- Triggers on a schedule (default: hourly).
- Creates an n8n_backup_YYYY-MM-DD_HH folder in Google Drive.
- Fetches all n8n workflows.
- Saves each workflow as a JSON file to the new folder.
- Deletes backup folders older than the "Coverage Period" (default: 7 days).

**Who is this for?**

This template is designed for:

- **n8n Administrators and Developers**: Who need a reliable, automated system to safeguard their workflows against accidental loss, corruption, or system issues.
- **Proactive n8n Users**: Who want to maintain a version history of their workflows, enabling easy rollback to previous configurations if necessary.
- **Organizations**: Seeking to implement disaster recovery and data integrity practices for their n8n automation infrastructure.

**What problem is this workflow solving? / use case**

This workflow directly addresses these critical risks and challenges by:

- **Automating Backups**: Eliminates the manual effort and inconsistency of ad-hoc backups, ensuring your workflows are regularly and reliably saved.
- **Preventing Data Loss**: Safeguards your valuable automation assets against unforeseen disasters by creating secure, versioned copies in Google Drive.
- **Facilitating Migration & Recovery**: Provides the foundational backups needed for a smoother, more systematic migration or a full disaster recovery, allowing you to restore your operations efficiently.
- **Version Control**: By storing scheduled backups (defaulting to hourly), it allows you to access and restore previous versions of your workflows, offering an undo capability for significant changes or corruptions.
- **Storage Management**: Automatically removes old backups based on a configurable retention period, preventing excessive use of Google Drive storage while keeping a relevant history.

**What this workflow does**

1. Scheduled Trigger: Runs automatically every hour.
2. Timestamping: Fetches the current date and hour to create a unique name for the backup folder.
3. Folder Creation: Creates a new folder in a specified Google Drive location, named in the format n8n_backup_YYYY-MM-DD_HH (a sketch of the date logic appears after the customization section below).
4. Workflow Retrieval: Connects to your n8n instance via its API and fetches a list of all existing workflows.
5. Individual Backup: Processes each workflow one by one:
   - Converts the workflow data to a binary JSON file.
   - Uploads the JSON file (named after the workflow) to the hourly backup folder in Google Drive.
   - Includes a short wait step between uploads to respect potential API rate limits.
6. Old Backup Deletion:
   - Calculates a cut-off date based on the "Coverage Period" set in the "Settings" node (e.g., 7 days prior to the current date).
   - Searches Google Drive for backup folders (matching the naming convention) that are older than this cut-off date.
   - Deletes these identified old backup folders to free up storage space.

**Step-by-step setup**

1. Import Template: Upload the provided JSON file into your n8n instance.
2. Configure Credentials:
   - Google Drive nodes: create or select existing Google Drive OAuth2 API credentials.
   - n8n node (the node that fetches workflows): configure n8n API credentials to allow the workflow to access your instance's workflow data.
3. Specify Google Drive Backup Location:
   - Open the "Google Drive Backup Folder Every Hour" node.
   - Under the "Drive ID" parameter, select the drive from the list or provide its ID.
   - Under the "Folder ID" parameter, select or input the ID of the parent folder in Google Drive where you want the n8n_backup_YYYY-MM-DD_HH folders to be created (e.g., a general "n8n_Backups" folder).
4. Set Backup Retention Period:
   - Open the "Settings" node.
   - Modify the value for "Coverage Period" (default is 7). This number represents the number of days backups should be kept before being deleted.
5. Activate Workflow: Toggle the "Active" switch for the workflow in your n8n dashboard.

**How to customize this workflow to your needs**

- **Backup Frequency**: Adjust the "Rule" in the **Schedule Trigger** node to change the backup interval (e.g., daily, specific times).
- **Folder/File Naming**: Modify the expressions in the "Parameters" tab of the **Google Drive Backup Folder Every Hour** node (for the folder name) or the **Google Drive Upload Workflows** node (for the file name) if you require a different naming convention.
- **Targeted Backups**: To back up only specific workflows, insert a "Filter" node after the **n8n** node to filter workflows based on criteria like name, tags, or ID before they reach the "Move Binary Data" node.
- **Wait Time**: The **Wait** node is set to 3 seconds between uploads. If you have a very large number of workflows or encounter rate limiting, you might adjust this duration.
- **Error Workflow**: The workflow is pre-configured with an "Error Workflow" setting. Ensure this error workflow exists in your n8n instance, or update the setting to point to your preferred error handling workflow. This can be used to send notifications on failure.
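For reference, the folder naming and retention cut-off described above boil down to date arithmetic like the following. This is a minimal sketch of the logic (e.g., as a Code node or expressions), not the template's exact implementation.

```javascript
// Sketch of the naming and retention date logic (illustrative).
const now = new Date();
const pad = (n) => String(n).padStart(2, '0');

// Folder name in the format n8n_backup_YYYY-MM-DD_HH
const folderName = `n8n_backup_${now.getFullYear()}-${pad(now.getMonth() + 1)}`
  + `-${pad(now.getDate())}_${pad(now.getHours())}`;

// Cut-off for deletion: "Coverage Period" days before now (default 7).
const coveragePeriodDays = 7;
const cutOff = new Date(now.getTime() - coveragePeriodDays * 24 * 60 * 60 * 1000);
// Backup folders whose parsed date is older than cutOff get deleted.
```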
**Important Considerations**

- **Resource Usage**: While the workflow includes a wait step between individual workflow uploads to minimize load, backing up an extremely large number of workflows could still consume resources on your n8n instance and make many API calls to Google Drive. Monitor performance if you have thousands of workflows.
- **Testing Restore Process**: Regularly test restoring a few workflows from your Google Drive backups using the companion "Restore All n8n Workflows from Google Drive" template or a manual import. This verifies the integrity of your backups and ensures you can recover when needed.
- **Workflow Modifications**: If you modify this backup workflow (e.g., change the folder naming convention), ensure your restore process or workflow is also updated to match these changes.
by Shrey
This workflow can be used to save all of your workflows in:

- a raw state (as a JSON file in Dropbox)
- an Airtable base, in a pre-designed format

It runs periodically (currently, every 30 minutes) and either updates (if already existing in Airtable) or creates a new record in Airtable for each workflow. Here's the Airtable base to give you an idea: View Airtable base

Note: This workflow uses the "http://localhost:5678/rest" API, which the UI editor uses but which is still not officially supported. It may therefore suffer breaking changes at some point in the future, and the workflow might then become dysfunctional.
by Yaron Been
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow automatically gathers and analyzes feature requests from multiple sources, including support tickets, user forums, and feedback platforms, to help prioritize product development. It saves you time by eliminating the need to manually monitor various channels and provides intelligent feature request analysis.

**Overview**

This workflow automatically scrapes support systems, user forums, social media, and feedback platforms to collect feature requests from customers. It uses Bright Data to access various platforms without being blocked and AI to intelligently categorize, prioritize, and analyze feature requests based on frequency and user impact.

**Tools Used**

- **n8n**: The automation platform that orchestrates the workflow
- **Bright Data**: For scraping support platforms and user forums without being blocked
- **OpenAI**: AI agent for intelligent feature request categorization and analysis
- **Google Sheets**: For storing feature requests and generating prioritization reports

**How to Install**

1. Import the Workflow: Download the .json file and import it into your n8n instance
2. Configure Bright Data: Add your Bright Data credentials to the MCP Client node
3. Set Up OpenAI: Configure your OpenAI API credentials
4. Configure Google Sheets: Connect your Google Sheets account and set up your feature request tracking spreadsheet
5. Customize: Define feedback sources and feature request identification parameters

**Use Cases**

- **Product Management**: Prioritize roadmap items based on customer demand
- **Development Teams**: Understand which features users need most
- **Customer Success**: Track and respond to feature requests proactively
- **Strategy Teams**: Make data-driven decisions about product direction

**Connect with Me**

- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data**: https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)

#n8n #automation #featurerequests #productmanagement #brightdata #webscraping #productdevelopment #n8nworkflow #workflow #nocode #roadmapping #customervoice #productinsights #featureanalysis #productfeedback #userresearch #productdata #featuretracking #productplanning #customerneeds #featurediscovery #productprioritization #featurebacklog #uservoice #productintelligence #developmentplanning #featuremonitoring #productdecisions #feedbackgathering #productautomation