by phil
This workflow automates the backup of your n8n workflow data to Google Drive every day. It ensures that important configurations and execution logs are securely stored, reducing the risk of data loss and improving workflow resilience.

🔹 Why Use This?
✅ Automates routine backups effortlessly.
✅ Reduces manual intervention and potential data loss.
✅ Securely stores critical workflow configurations in Google Drive.

With this workflow, you can focus on innovation while n8n takes care of your backups. 🔐✨

🚀 How It Works
This workflow combines scheduled triggers, JSON data transformation, and secure cloud storage.

🛠 Setup Steps
1. Trigger the backup – Choose between manual execution or automated scheduling at 1:30 AM daily.
2. Data preparation – Your workflow parameters define the backup location and organize files effectively.
3. Transformation & encoding – The data is processed and converted into a base64-encoded JSON file.
4. Cloud storage – The backup is securely uploaded to your designated Google Drive folder.

🔧 Customization Options
You can modify various aspects of the backup workflow to better suit your needs:

1️⃣ Adjusting Backup Frequency
By default, the workflow runs daily at 1:30 AM. To change this:
- Open the Trigger node in n8n.
- Modify the cron expression or select a different frequency (e.g., hourly, weekly, or custom intervals).

2️⃣ Selecting Specific Workflows to Back Up
Instead of backing up all workflows, you can filter which ones to include:
- Add a Filter node before exporting data.
- Define specific workflow IDs or names to include in the backup.

3️⃣ Changing the Backup Destination
The default destination is Google Drive, but you can change this:
- Replace the Google Drive node with a different storage provider (e.g., Dropbox, AWS S3, or local storage via FTP/SFTP).
- Configure authentication for the new destination.

4️⃣ Modifying the Data Format
By default, the workflow stores data in JSON format. If you need a different format:
- Convert JSON to CSV using the Spreadsheet File node.
- Store backups in a compressed format (ZIP) by adding a Compression node.

5️⃣ Encrypting the Backup for Extra Security
For added protection:
- Use the Crypto node to encrypt the JSON file before uploading.
- Set up an access-controlled folder in Google Drive with limited permissions.

✅ Verify That Your Backup Works
Before relying on this workflow for your automated backups, make sure it works correctly by performing a quick test:
- Manually trigger the workflow in n8n and check that the backup file appears in your Google Drive.
- Open Google Drive, navigate to the backup folder, and download the JSON file.
- Verify its content by checking that the data matches your workflow's execution logs.
- Try importing the JSON file back into n8n using the "Import File" function to ensure the workflow structure is intact.
- Alternatively, copy and paste a test file into Google Drive and confirm that it appears correctly in your workflow logs.

This quick test will confirm that your backup is running smoothly and that your data is retrievable whenever needed.

📁 How to Find Your Google Drive Directory ID
To ensure that the backup is uploaded to the correct folder, you need to retrieve your Google Drive Directory ID. Follow these simple steps:
1. Open Google Drive.
2. Navigate to the folder where you want to store your backups.
3. Click on the folder and check the URL in your browser.
4. The Directory ID is the long string of characters at the end of the URL, after /folders/.
Example:
📌 If your folder URL is:
https://drive.google.com/drive/folders/14oUlH_LW_NT0Xb2woZWvuzRncV-bhla
Then your Directory ID is:
14oUlH_LW_NT0Xb2woZWvuzRncV-bhla

Copy this Directory ID and use it in the workflow's parameters to ensure the backup is saved in the correct location.

Phil | Inforeole
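For reference, the transformation and encoding step can be sketched as an n8n Code node like the one below. This is a minimal sketch, not the template's exact implementation: the output field names (fileName, the binary property data) are assumptions and should be matched to your Google Drive node's settings.

```javascript
// Minimal sketch of the "Transformation & encoding" step as an n8n Code node.
// Assumes the previous node outputs one item per workflow under item.json.
const results = [];

for (const item of $input.all()) {
  const workflow = item.json;
  const fileName = `${(workflow.name || 'workflow').replace(/[^a-zA-Z0-9-_]/g, '_')}.json`;

  // Serialize the workflow and encode it as base64 so the Google Drive node
  // can upload it as binary file content.
  const jsonString = JSON.stringify(workflow, null, 2);
  const base64Content = Buffer.from(jsonString, 'utf-8').toString('base64');

  results.push({
    json: { fileName },
    binary: {
      data: {
        data: base64Content,
        mimeType: 'application/json',
        fileName,
      },
    },
  });
}

return results;
```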
by JPres
A Discord bot that responds to mentions by sending messages to n8n workflows and returning the responses. Connects Discord conversations with custom automations, APIs, and AI services through n8n.

Full guide on: https://github.com/JimPresting/AI-Discord-Bot/blob/main/README.md

Discord Bot Summary

Overview
The Discord bot listens for mentions, forwards questions to an n8n workflow, processes responses, and replies in Discord. This workflow is intended for all Discord users who want to offer AI interactions in their channels.

What do you need?
You need a Discord account as well as a Google Cloud Project.

Key Features

1. Listens for Mentions
The bot monitors Discord channels for messages that mention it.
**Optional configuration**: it can be set to respond only in a specific channel.

2. Forwards Questions to n8n
When a user mentions the bot and asks a question:
- The bot extracts the question.
- It sends the question, along with channel and user information, to an n8n webhook URL.

3. Processes Data in n8n
The n8n workflow receives the question and can:
- Interact with AI services (e.g., generating responses).
- Access databases or external APIs.
- Perform custom logic.
n8n formats the response and sends it back to the bot.

4. Replies to Discord with n8n's Response
The bot receives the response from n8n and replies to the user's message in the Discord channel with the answer.
**Long responses**: responses exceeding Discord's 2000-character limit are chunked into multiple messages.

5. Error Handling
Includes error handling for:
- Issues with n8n communication.
- Response formatting problems.
It also manages cases where no question is asked or an invalid response is received from n8n.

6. Typing Indicator
While waiting for n8n's response, the bot sends a "typing..." indicator to the Discord channel.

7. Status Update
For lengthy n8n processes, the bot sends a message to the Discord channel to inform the user that it is still processing their request.

Step-by-Step Setup Guide (as per the GitHub instructions)

Key Takeaways
- You'll configure an n8n webhook to receive Discord messages, process them with your workflow, and respond.
- You'll set up a Discord application and bot, grant the right permissions/intents, and invite it to your server.
- You'll prepare your server environment (Node.js), scaffold the project, and wire up environment variables.
- You'll implement message chunking, "typing..." indicators, and robust error handling in your bot code.
- You'll deploy with PM2 for persistence and know how to test and troubleshoot common issues.

1. n8n: Create & Expose Your Webhook

New Workflow
- Log into your n8n instance.
- Click Create Workflow (➕) and name it, e.g., Discord Bot Handler.

Webhook Trigger
- Add a node (➕) → search Webhook.
- Set:
  - Authentication: None (or your choice)
  - HTTP Method: POST
  - Path: e.g., /discord-bot
- Click Execute Node to activate.

Copy Webhook URL
- After execution, copy the Production Webhook URL. You'll paste this into your bot's .env.

Build Your Logic
- Chain additional nodes (AI, database lookups, etc.) as required.

Format the JSON Response
- Insert a Function node before the end:
  return { json: { answer: "Your processed reply" } };

Respond to Webhook
- Add Respond to Webhook as the final node.
- Point it at your Function node's output (with the answer field).

Activate
- Toggle Active in the top right and Save.

2. Discord Developer Portal: App & Bot

New Application
- Visit the Discord Developer Portal.
- Click New Application and name it.
- Go to Bot → Add Bot.
Enable Intents & Permissions
- Under Privileged Gateway Intents, toggle Message Content Intent.
- Under Bot Permissions, check:
  - Read Messages/View Channels
  - Send Messages
  - Read Message History

Grab Your Token
- In Bot → click Copy (or Reset Token).
- Store it securely.

Invite Link (OAuth2 URL)
- Go to OAuth2 → URL Generator.
- Select scopes: bot, applications.commands.
- Under Bot Permissions, select the same permissions as above.
- Copy the generated URL, open it in your browser, and invite your bot.

3. Server Prep: Node.js & Project Setup

Install Node.js v20.x
  sudo apt purge nodejs npm
  sudo apt autoremove
  curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
  sudo apt install -y nodejs
  node -v  # Expect v20.x.x
  npm -v   # Expect 10.x.x

Project Folder
  mkdir discord-bot
  cd discord-bot

Initialize & Dependencies
  npm init -y
  npm install discord.js axios dotenv

4. Bot Code & Configuration

Environment Variables
Create .env:
  nano .env
Populate:
  DISCORD_BOT_TOKEN=your_bot_token
  N8N_WEBHOOK_URL=https://your-n8n-instance.com/webhook/discord-bot
  # Optional: restrict to one channel
  TARGET_CHANNEL_ID=123456789012345678

Bot Script
Create index.js:
  nano index.js
Implement:
- Import dotenv, discord.js, axios.
- Set up the client with the MessageContent intent.
- On messageCreate:
  - Ignore bots or non-mentions.
  - (Optional) Filter by channel ID.
  - Extract and validate the user's question.
  - Send "typing..." every 5 s; after 20 s, send a status update if still processing.
  - POST to your n8n webhook with question, channelId, userId, userName.
  - Parse various response shapes to find answer.
  - If answer.length ≤ 2000, message.reply(answer). Otherwise, split into ~1900-character chunks at sentence/paragraph breaks and send them sequentially (a chunking sketch follows at the end of this guide).
  - On errors, clear intervals, log details, and reply with an error message.
- Login:
  client.login(process.env.DISCORD_BOT_TOKEN);

5. Deployment: Keep It Alive with PM2

Install PM2
  npm install -g pm2

Start & Monitor
  pm2 start index.js --name discord-bot
  pm2 status
  pm2 logs discord-bot

Auto-Start on Boot
  pm2 startup
Follow the printed command (e.g., sudo env PATH=$PATH:/usr/bin pm2 startup systemd -u your_user --hp /home/your_user), then:
  pm2 save

6. Test & Troubleshoot

Functional Test
In your Discord server:
  @YourBot What's the weather like?
Expect a reply from your n8n workflow.

Common Pitfalls
- No reply → check pm2 logs discord-bot.
- Intent errors → verify Message Content Intent in the Portal.
- Webhook failures → ensure the workflow is active and the URL is correct.
- Formatting issues → confirm your Function node returns json.answer.

Inspect Raw Data
Search your logs for "Complete response from n8n:" to debug payload shapes.
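The chunking step mentioned above can be sketched as follows. This is an illustrative fragment, not the repository's exact code; the splitIntoChunks helper name and the boundary heuristics are assumptions.

```javascript
// Sketch: split a long answer into Discord-sized chunks (< 2000 chars).
function splitIntoChunks(text, maxLength = 1900) {
  const chunks = [];
  let remaining = text;

  while (remaining.length > maxLength) {
    // Prefer a paragraph break, then a sentence end, then a hard cut.
    let cut = remaining.lastIndexOf('\n\n', maxLength);
    if (cut === -1) cut = remaining.lastIndexOf('. ', maxLength) + 1;
    if (cut <= 0) cut = maxLength;

    chunks.push(remaining.slice(0, cut).trim());
    remaining = remaining.slice(cut).trim();
  }
  if (remaining.length > 0) chunks.push(remaining);
  return chunks;
}

// Usage inside the messageCreate handler (discord.js style):
// const chunks = splitIntoChunks(answer);
// for (const chunk of chunks) {
//   await message.reply(chunk); // or message.channel.send(chunk) for follow-ups
// }
```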
by Agent Studio
Automatically store Retell transcripts in Google Sheets/Airtable/Notion from a webhook

Overview
This workflow stores the results of a Retell voice call (transcript, analysis, etc.) once it has ended and been analyzed. It listens for call_analyzed webhook events from Retell and stores the data in Airtable, Google Sheets, and Notion (choose based on your stack). Useful for anyone building Retell agents who wants to keep a detailed history of analyzed calls in structured tools.

Who is it for
Builders of Retell's Voice Agents who want to store call history and essential analytic data.

Prerequisites
- Have a Retell AI account.
- Create a Retell agent.
- Associate a phone number with your Retell agent.
- Set up one of the following:
  - An Airtable base and table (example: "Transcripts")
  - A Google Sheet with a "Transcripts" tab
  - A Notion database with columns to match the transcript fields
- Templates: Airtable, Google Sheets, Notion

How it works
1. Receives a webhook POST request from Retell when a call has been analyzed.
2. Filters out any event that is not call_analyzed (Retell sends webhooks for call_started, call_ended, and call_analyzed).
3. Extracts useful fields such as:
   - Call ID, start/end time, duration, total cost
   - Transcript, summary, sentiment
4. Stores this data in your preferred tool: Airtable, Google Sheets, or Notion.

How to use it
1. Copy the webhook URL (e.g., https://your-instance.app.n8n.cloud/webhook/poc-retell-analysis) and paste it into your Retell agent under "Webhook settings" → "Agent Level Webhook URL".
2. Make sure your Airtable, Google Sheets, or Notion databases are correctly configured to receive the fields.
3. After each call, once Retell finishes the analysis, this workflow will automatically log the results.

Extension
If you use any "Post-Call Analysis" fields, you can add columns to your Airtable, Google Sheets, or Notion database, then fetch the data from the call.call_analysis.custom_analysis_data object.

Additional Notes
- Phone numbers are extracted depending on the call direction (from_number or to_number).
- Cost is converted from cents to dollars before saving.
- Dates are converted from timestamps to local ISO strings.
- You can remove any of the outputs (Airtable, Google Sheets, Notion) if you're only using one.

👉 Reach out to us if you're interested in analysing your Retell Agent conversations.
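A flattening step like the one described above could be sketched as an n8n Code node placed before the storage nodes. The field paths other than call.call_analysis.custom_analysis_data are assumptions about Retell's payload shape; check them against a real webhook execution before relying on them.

```javascript
// Sketch: flatten a call_analyzed payload before Airtable/Sheets/Notion.
// Paths marked below are assumptions – verify against your own webhook data.
const body = $json.body ?? $json;
const call = body.call ?? {};

const direction = call.direction ?? 'inbound';          // assumed field
const phoneNumber = direction === 'inbound' ? call.from_number : call.to_number;

return [{
  json: {
    callId: call.call_id,
    phoneNumber,
    startedAt: call.start_timestamp ? new Date(call.start_timestamp).toISOString() : null,
    endedAt: call.end_timestamp ? new Date(call.end_timestamp).toISOString() : null,
    durationSeconds: call.start_timestamp && call.end_timestamp
      ? Math.round((call.end_timestamp - call.start_timestamp) / 1000)
      : null,
    // Cost arrives in cents; store dollars (cost field path is assumed).
    totalCost: call.call_cost?.combined_cost != null ? call.call_cost.combined_cost / 100 : null,
    transcript: call.transcript,
    summary: call.call_analysis?.call_summary,          // assumed field
    sentiment: call.call_analysis?.user_sentiment,      // assumed field
    // Any "Post-Call Analysis" fields you defined in Retell:
    ...call.call_analysis?.custom_analysis_data,
  },
}];
```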
by Batu Öztürk
🚀 Transform LinkedIn Post Reactions into Content Ideas with Airtable

📝 Description
This workflow helps you turn your LinkedIn activity into a powerful content ideation engine. It captures your most recent post reactions on LinkedIn automatically, filters them based on recency, and structures the content into Airtable—ready for brainstorming, inspiration, or publication planning.

⚙️ What It Does
- **Fetches** the latest liked posts from LinkedIn via a public API (rapidapi.com / Real-Time LinkedIn Scraper).
- **Filters** posts to include only those marked with your chosen reaction and posted in the last 7 days.
- **Extracts** the post text, author, links, and more.
- **Formats** the data into a database-friendly structure.
- **Saves** the output in Airtable for easy tracking, tagging, or team collaboration.

💡 Use Cases
- Build a content idea vault from posts you admire.
- Capture inspiration from thought leaders.
- Identify trends based on what you find insightful.
- Supercharge your personal brand or newsletter by turning likes into learning.

🛠 Prerequisites
Before using this template, make sure you have:
- ✅ A RapidAPI account and access to the linkedin-api8 endpoint.
- ✅ Your RapidAPI key and the target LinkedIn username.
- ✅ An Airtable account with a base/table set up.

🧰 Setup Instructions
1. Clone this template into your n8n instance.
2. Open the Fetch LinkedIn Likes node and enter:
   - Your LinkedIn username.
   - Your RapidAPI key in the headers.
3. Open the Save to Airtable node and:
   - Connect your Airtable account.
   - Link the correct base (Content Hub) and table (Ideas).
4. Set your desired schedule in the Trigger node.
5. Activate the workflow and you're done!

📋 Airtable Setup
Create a base called Content Hub and a table named Ideas with the following columns:

| Column Name | Type | Required | Notes |
|-------------|------------------|----------|----------------------------|
| Title | Single line text | ✅ | Generated from author info |
| Description | Long text | ✅ | Contains post content |
| Source | URL | ✅ | Link to the original post |
| Type | Single select | ✅ | Value: Linkedin |
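The 7-day filter and Airtable mapping could be sketched as an n8n Code node like the one below. The reaction item fields (postedDate, text, author, postUrl) are assumptions about the RapidAPI response shape; inspect a real execution and adjust the paths.

```javascript
// Sketch: keep only reactions from the last 7 days and map them to the Ideas table.
const sevenDaysAgo = Date.now() - 7 * 24 * 60 * 60 * 1000;

const recent = $input.all().filter((item) => {
  const postedAt = new Date(item.json.postedDate).getTime(); // assumed field
  return !Number.isNaN(postedAt) && postedAt >= sevenDaysAgo;
});

return recent.map((item) => ({
  json: {
    Title: `Idea from ${item.json.author?.fullName ?? 'unknown author'}`, // assumed field
    Description: item.json.text,                                          // assumed field
    Source: item.json.postUrl,                                            // assumed field
    Type: 'Linkedin',
  },
}));
```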
by Pavel Duchovny
Building agentic AI workflows often requires multiple moving parts: memory management, document retrieval, vector similarity, and orchestration. Until now, these pieces had to be custom-wired. But with the new native n8n nodes for MongoDB Atlas, we reduce that overhead dramatically.

With just a few clicks you can:
- Store and recall long-term memory from MongoDB
- Query vector embeddings stored in Atlas Vector Search
- Use these results in your LLM chains and automation logic

In this example we present an ingestion flow and an AI Agent flow built around travel planning. The points of interest that we want the agent to know about are ingested into the vector store. The AI Agent uses the vector store tool to get relevant context about those points of interest when it needs to.

Prerequisites
- MongoDB Atlas project and cluster
- A valid OpenAI API key for embeddings (another provider can be used)
- A Gemini API key for the LLM (another provider can be used)

How it works
There are two main flows.

The first is the ingestion flow:
- Gets a document from a webhook and uses the MongoDB Atlas Vector Store node to embed the document title and description into the points_of_interest collection.
- Embeddings are stored in a field named embedding.
- The embeddings used are OpenAI's, but any supported embedder can be used.

The second flow is an AI Agent node with chat memory stored in MongoDB Atlas and a Vector Search node as a tool:
- **Chat Message Trigger**: chatting with the AI Agent triggers the conversation store in the MongoDB Chat Memory node. When data is needed, such as a location search or details, the agent calls the "Vector Search" tool.
- **Vector Search tool**: uses an Atlas Vector Search index created on the points_of_interest collection:

  // index name: "vector_index"
  // If you change the embedding provider, make sure numDimensions matches the model.
  {
    "fields": [
      {
        "type": "vector",
        "path": "embedding",
        "numDimensions": 1536,
        "similarity": "cosine"
      }
    ]
  }

Additional Resources
- MongoDB Atlas Vector Search
- n8n Atlas Vector Search docs
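To test the ingestion flow, you could post a point of interest to the webhook as shown below. This is only a sketch: the webhook path /ingest-poi is a placeholder, and the title/description body fields assume the ingestion flow maps those two fields into the embedded document.

```javascript
// Example call to the ingestion webhook (path and host are placeholders).
const response = await fetch('https://your-n8n-instance.com/webhook/ingest-poi', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    title: 'Louvre Museum',
    description: 'World-famous art museum in Paris, home of the Mona Lisa. Closed on Tuesdays.',
  }),
});

console.log(response.status); // expect 200 once the ingestion flow accepts the document
```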
by Lucas Correia
What Does This Flow Do?
This workflow demonstrates how to dynamically generate a line chart using the QuickChart node based on data provided in a JSON object, and then upload the resulting chart image to Google Drive.

Use Cases
- Use charts in presentations, or request chart generation from other software via HTTP requests.
- Automated report generation (e.g., daily sales charts).
- Visualizing data fetched from APIs or databases.
- Simple monitoring dashboards.
- Adding charts to internal tools or notifications.

How it Works
1. Trigger: The workflow starts manually when you click 'Test workflow'.
2. Set Sample Data: A Set node (Edit Fields: Set JSON data to test) defines a sample JSON object named jsonData. This object contains:
   - reportTitle: A title (not used in the chart generation in this example, but useful for context).
   - labels: An array of strings representing the labels for the chart's X-axis (e.g., ["Q1", "Q2", "Q3", "Q4"]).
   - salesData: An array of numbers representing the data points for the chart's Y-axis (e.g., [1250, 1800, 1550, 2100]).
3. Generate Chart: The QuickChart node is configured to:
   - Create a line chart.
   - Dynamically read labels from the jsonData.labels array (Labels Mode: From Array).
   - Use the jsonData.salesData array as the input data. (Note: this configuration places data in the top-level 'Data' field. For more complex charts with multiple datasets or specific dataset options, configure datasets under 'Dataset Options' instead.)
   The node outputs the generated chart image as binary data in a field named data.
4. Upload to Google Drive: The Google Drive node (Google Drive: Upload File):
   - Takes the binary data (data) from the QuickChart node.
   - Uploads the image to your specified Google Drive folder.
   - Dynamically names the file based on its extension (e.g., chart.png).

Setup Steps
1. Import: Import this template into your n8n instance.
2. Configure Google Drive credentials: Select the Google Drive: Upload File node. You MUST configure your own Google Drive credentials. Click on the 'Credentials' dropdown and either select existing credentials or create new ones by following the authentication prompts.
3. (Optional) Customize the Google Drive folder: In the Google Drive: Upload File node, you can change the Drive ID and Folder ID to specify exactly where the chart should be uploaded.
4. Activate: Activate the workflow if you want it to run automatically based on a different trigger.

How to Use & Customize
- **Change input data:** Modify the labels and salesData arrays within the Edit Fields: Set JSON data to test node to use your own data. Ensure the number of labels matches the number of data points.
- **Use real data sources:** Replace the Edit Fields: Set JSON data to test node with nodes that fetch data from real sources such as:
  - HTTP Request (APIs)
  - Postgres / MongoDB nodes (databases)
  - Google Sheets node
  Ensure the output data from your source node is formatted similarly (providing labels and salesData arrays). You might need another Set node to structure the data correctly before the QuickChart node (a sketch follows at the end of this description).
- **Change chart type:** In the QuickChart node, modify the Chart Type parameter (e.g., change from line to bar, pie, doughnut, etc.).
- **Customize chart appearance:** Explore the Chart Options parameter within the QuickChart node to add titles, change colors, modify axes, etc., using QuickChart's standard JSON configuration options.
- **Use datasets (recommended for complex charts):** For multiple lines/bars or more control, configure datasets explicitly in the QuickChart node:
  - Remove the expression from the top-level Data field.
  - Go to Dataset Options -> Add option -> Add dataset.
  - Set the Data field within the dataset using an expression like {{ $json.jsonData.salesData }}. You can add multiple datasets this way.
- **Change output destination:** Replace the Google Drive: Upload File node with other nodes to handle the chart image differently:
  - Write Binary File: Save the chart to the local filesystem where n8n is running.
  - Slack / Discord / Telegram: Send the chart to messaging platforms.
  - Move Binary Data: Convert the image to Base64 to embed in HTML or return via webhook response.

Nodes Used
- Manual Trigger
- Set
- QuickChart
- Google Drive

Tags: QuickChart, Chart, Visualization, Line Chart, Google Drive, Reporting, Automation
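If you replace the sample data with a real source, a Code node like the one below can reshape incoming rows into the labels/salesData structure the QuickChart node expects. The input column names (quarter, sales) are assumptions; rename them to match your source node's output.

```javascript
// Sketch: reshape rows from a real data source into the jsonData structure
// used by the QuickChart node in this template.
const items = $input.all();

return [{
  json: {
    jsonData: {
      reportTitle: 'Quarterly Sales',
      labels: items.map((item) => item.json.quarter),            // assumed column
      salesData: items.map((item) => Number(item.json.sales) || 0), // assumed column
    },
  },
}];
```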
by n8n Team
This workflow creates an Asana task when a new ticket is created in Zendesk. Subsequent comments on the ticket in Zendesk are added as comments to the task in Asana.

Prerequisites
- Zendesk account and Zendesk credentials.
- Asana account and Asana credentials.
- Asana workspace to create tasks in.

How it works
1. The workflow listens for new tickets in Zendesk.
2. When a new ticket is created, the workflow creates a new task in Asana.
3. The Asana GID is then saved in one of the ticket's fields (in setup we call this "Asana GID").
4. The next time a comment is added to the ticket, the workflow retrieves the Asana GID from the ticket's field and adds the comment to the task in Asana.

Setup
This workflow requires that you set up a webhook in Zendesk. To do so, follow the steps below:
1. In the workflow, open the On new Zendesk ticket node and copy the webhook URL.
2. In Zendesk, navigate to Admin Center > Apps and integrations > Webhooks > Actions > Create Webhook.
3. Add all the required details, which can be retrieved from the On new Zendesk ticket node. The webhook URL gets added to the "Endpoint URL" field, and the "Request method" should match what is shown in n8n.
4. Save the webhook.
5. In Zendesk, navigate to Admin Center > Objects and rules > Business rules > Triggers > Add trigger.
6. Give the trigger a name such as "New tickets".
7. Under "Conditions" in "Meet ALL of the following conditions", add "Status is New".
8. Under "Actions", select "Notify active webhook" and select the webhook you created previously.
9. In the JSON body, add the following:
   {
     "id": "{{ticket.id}}",
     "comment": "{{ticket.latest_comment_html}}"
   }
10. Save the Zendesk trigger.

You will also need to set up a field in Zendesk to store the Asana GID. To do so, follow the steps below:
1. In Zendesk, navigate to Admin Center > Objects and rules > Tickets > Fields > Add field.
2. Use the number field option and give the field a name such as "Asana GID".
3. Save the field.
4. In n8n, open the Update ticket node and select the field you created in Zendesk.
by Paulo Ramirez
Receive realtime call-event data from telli

Purpose and Problem Solved
This template automates the process of receiving and acting upon real-time call event data from telli, an AI-powered voice agent platform. It solves the challenge of manually updating CRM records and initiating follow-up actions based on call outcomes. By leveraging webhooks and n8n's powerful workflow capabilities, this template enables businesses to instantly update their Airtable CRM and trigger appropriate follow-up actions, enhancing efficiency and responsiveness in customer interactions.

Prerequisites
- An active telli account with API access and webhook capabilities
- An Airtable base set up as your CRM
- n8n instance (cloud or self-hosted)

Airtable Specifications
Create an Airtable base with the following table and fields:

Table: Contacts
Fields:
- Name (Single line text)
- Phone (Phone number)
- Email (Email)
- Appointment_Booked (Checkbox)
- Interest (Single select: High, Medium, Low)
- Last_Call_Date (Date)
- Notes (Long text)

Step-by-Step Setup Instructions
1. Webhook Configuration in telli:
   - Log into your telli dashboard.
   - Navigate to the webhook settings.
   - Set the endpoint URL to your n8n Webhook node URL.
   - Select the "call_ended" event to trigger the webhook.
2. n8n Workflow Setup:
   - Create a new workflow in n8n.
   - Add a Webhook node as the trigger.
   - Configure the Webhook node to receive POST requests.
3. Parse Webhook Data:
   - Add a Set node to extract relevant information from the webhook payload.
   - Map fields such as call_outcome, appointment_booked, and interest.
4. Decision Logic:
   - Add a Switch node to create different paths based on the call outcome.
   - Create branches for scenarios like "Appointment Booked", "Interested", and "Not Interested".
5. Airtable Integration:
   - Add Airtable nodes for each outcome to update the Contacts table.
   - Configure the nodes to update fields like Appointment_Booked, Interest, and Last_Call_Date.
6. Follow-up Actions:
   - For "Interested" but not booked outcomes, add an Email node to trigger a follow-up email campaign.
   - For "Appointment Booked", add a node to create a calendar event or task.
7. Testing and Activation:
   - Use the n8n testing feature to simulate webhook calls and verify each path.
   - Once satisfied, activate the workflow.

Example Workflow
1. Webhook receives a "call_ended" event from telli.
2. Set node extracts call_outcome: appointment_booked = true, interest = true.
3. Switch node directs to the "Appointment Booked" path.
4. Airtable node updates the contact record:
   - Set Appointment_Booked to true
   - Set Interest to "High"
   - Update Last_Call_Date
5. Calendar node creates an appointment for the booked slot.

Example Payload
Below is an example of the payload you might receive from telli when a call ends:

  {
    "event": "call_ended",
    "call": {
      "call_id": "b4a05730-2abc-4eb0-8066-2e4d23b53ba9",
      "attempt": 1,
      "from_number": "+17755719467",
      "to_number": "+16506794960",
      "external_contact_id": "external-123",
      "contact_id": "6bd1e7e0-6d00-4c0b-ad5b-daa72457a27d",
      "agent_id": "d8931604-92ad-45cf-9071-d9cd2afbad0c",
      "triggered_at": 1731956924302,
      "started_at": 1731956932264,
      "booked_slot_for": "2025-02-24T15:30:00Z",
      "ended_at": 1731957002078,
      "call_length_min": 2,
      "call_status": "COMPLETED",
      "transcript": "Agent: Hello...",
      "transcriptObject": [
        {
          "role": "agent",
          "content": "Hello..."
        }
      ],
      "call_analysis": {
        "summary": {
          "value": true,
          "details": "A call between an agent and a customer talking about buying an ice cream machine"
        },
        "appointment": {
          "value": true,
          "details": "2025-02-18T15:30:00Z"
        },
        "interest": {
          "value": true,
          "details": "The customer is interested in buying an ice cream machine"
        }
      }
    }
  }

In this example, you can see that the call resulted in a booked appointment and showed customer interest. Your n8n workflow would process this data, updating the Airtable CRM and triggering any necessary follow-up actions.

By implementing this template, businesses can automate their post-call processes, ensuring timely follow-ups and accurate CRM updates. This real-time integration between telli's AI voice agents and your Airtable CRM streamlines operations, improves customer engagement, and increases the efficiency of your sales and support teams.
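The "Parse Webhook Data" step could also be written as a Code node, based on the example payload above. The derived values mirror the Switch branches described in the setup; the exact output field names are illustrative choices, not a requirement of the template.

```javascript
// Sketch: derive outcome flags and Airtable fields from the telli payload above.
const call = $json.body?.call ?? $json.call ?? {};
const analysis = call.call_analysis ?? {};

const appointmentBooked = analysis.appointment?.value === true;
const interested = analysis.interest?.value === true;

return [{
  json: {
    contactId: call.external_contact_id,
    phone: call.to_number,
    lastCallDate: call.ended_at ? new Date(call.ended_at).toISOString().slice(0, 10) : null,
    bookedSlotFor: call.booked_slot_for ?? null,
    appointment_booked: appointmentBooked,
    interest: interested ? 'High' : 'Low',
    call_outcome: appointmentBooked
      ? 'Appointment Booked'
      : interested
        ? 'Interested'
        : 'Not Interested',
    notes: analysis.summary?.details ?? '',
  },
}];
```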
by TechDennis
Edit an existing image with OpenAI ImageGen1 via API Request

Transform your creative pipeline by letting n8n call OpenAI ImageGen1's edit-image endpoint, automatically replacing or augmenting parts of any image you supply and returning a brand-new version in seconds. Designers, marketers, and product teams can eliminate repetitive manual edits and test more variations, faster.

Who is this for?
- Content creators who need quick, on-brand image tweaks
- Marketers running A/B visual tests at scale
- Developers exploring the new ImageGen1 API inside low-code automations

Use case / problem solved
Opening design software to mask, fill, or swap objects is slow and error-prone. This workflow feeds an input image plus a prompt to OpenAI ImageGen1, receives the edited output, and passes it on to any service you like—perfect for bulk-editing product shots, social visuals, or UI mocks.

What this workflow does
1. Read or receive the source image (Webhook → Binary Data).
2. Call OpenAI ImageGen1 with an HTTP Request node, sending the image and edit prompt.
3. Parse the JSON response to capture the returned image URL.
4. Download & hand off the edited file (e.g., upload to S3, post to Slack, or store in Drive).

Setup
1. Add your OpenAI API key in the API KEY node.
2. Follow the notes on the workflow for more information.
3. (Optional) Point the final node to your preferred storage or chat tool.

> 📝 A sticky note in the workflow summarizes these steps and links to the OpenAI documentation.

How to customize this workflow
- **Trigger alternatives**: Replace the Chat trigger with Google Drive, Airtable, etc.
- **Chained edits**: Loop the output back for successive prompts.
- **Conditional flows**: Add an If node to branch actions by image size or category.

With renamed nodes, color-coded sticky notes, and a concise setup guide, you'll be editing images via OpenAI ImageGen1 in under five minutes—no code, maximum creativity.
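For reference when configuring the HTTP Request node, the edit call can be sketched in plain Node.js as below. This is a sketch, not the workflow's exact request: the model name "gpt-image-1" and the b64_json response field are assumptions, so confirm both against the OpenAI image-edit documentation linked in the workflow's sticky note.

```javascript
// Sketch: call the OpenAI image edits endpoint with multipart form data (Node 18+).
import { readFile, writeFile } from 'node:fs/promises';

const imageBuffer = await readFile('product-shot.png');

const form = new FormData();
form.append('model', 'gpt-image-1'); // assumed model name
form.append('prompt', 'Replace the background with a plain white studio backdrop');
form.append('image', new Blob([imageBuffer], { type: 'image/png' }), 'product-shot.png');

const response = await fetch('https://api.openai.com/v1/images/edits', {
  method: 'POST',
  headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
  body: form,
});

const result = await response.json();
// Assumes base64 image data in the response; decode and save it locally.
await writeFile('edited.png', Buffer.from(result.data[0].b64_json, 'base64'));
```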
by phil
This workflow automates web scraping of Amazon search result pages by retrieving raw HTML, cleaning it to retain only the relevant product elements, and then using an LLM to extract structured product data (name, description, rating, reviews, and price), before saving the results back to Google Sheets. It integrates Google Sheets to supply and collect URLs, BrightData to fetch page HTML, a custom n8n Function node to sanitize the HTML, LangChain (OpenRouter GPT-4) to parse product details, and Google Sheets again to store the output.

URL to scrape → Result

Who Needs Amazon Search Result Scraping?
This scraping workflow is ideal for teams and businesses that need to monitor Amazon product listings at scale:
- **E-commerce Analysts** – Track competitor pricing, ratings, and inventory trends.
- **Market Researchers** – Collect data on product popularity and reviews for market analysis.
- **Data Teams** – Automate ingestion of product metadata into BI pipelines or data lakes.
- **Affiliate Marketers** – Keep affiliate catalogs up to date with the latest product details and prices.

If you need reliable, structured data from Amazon search results delivered directly into your spreadsheets, this workflow saves you hours of manual copy-and-paste.

Why Use This Workflow?
- **End-to-End Automation** – From URL list to clean JSON output in Sheets.
- **Robust HTML Cleaning** – Strips scripts, styles, unwanted tags, and noise.
- **Accurate Structured Parsing** – Leverages GPT-4 via LangChain for reliable extraction.
- **Scalable & Repeatable** – Processes thousands of URLs in batches.

Step-by-Step: How This Workflow Scrapes Amazon
1. Get URLs from Google Sheets – Reads a list of search result URLs.
2. Loop Over Items – Iterates through each URL in controlled batches.
3. Fetch Raw HTML – Uses BrightData's Web Unlocker proxy to retrieve the page.
4. Clean HTML – A Function node removes the doctype, scripts, styles, head, comments, classes, and non-whitelisted tags, collapsing extra whitespace (a sketch follows at the end of this description).
5. Extract with LLM – Passes the cleaned HTML into LangChain → GPT-4 to output JSON for each product: name, description, rating, reviews, price.
6. Save Results – Appends the JSON fields as columns back into a "results" sheet in Google Sheets.

Customization: Tailor to Your Needs
- **Adaptable sites** – This workflow can be adapted to any e-commerce or other website, for example Walmart or eBay.
- **Whitelist tags** – Modify the allowedTags array in the Code node to keep additional HTML elements.
- **Schema changes** – Update the Structured Output Parser schema to include more fields (e.g., availability, SKU).
- **Alternate data sink** – Instead of Sheets, route output to a database, CSV file, or webhook.

🔑 Prerequisites
- **Google Sheets credentials** – OAuth credentials configured in n8n.
- **BrightData API token** – Stored in n8n credentials as BRIGHTDATA_TOKEN.
- **OpenRouter API key** – Configured for the LangChain node to call GPT-4.
- **n8n instance** – Self-hosted or cloud, with sufficient quota for HTTP requests and LLM calls.

🚀 Installation & Setup
1. Configure credentials
   - In n8n, set up Google Sheets OAuth under "Credentials."
   - Add the BrightData token as a new HTTP Request credential.
   - Create an OpenRouter API key credential for the LangChain node.
2. Import the workflow
   - Copy the JSON workflow into n8n's "Import" dialog.
   - Map your Google Sheet IDs and GIDs to the {{WEB_SHEET_ID}}, {{TRACK_SHEET_GID}}, and {{RESULTS_SHEET_GID}} placeholders.
   - Ensure the BRIGHTDATA_TOKEN credential is selected on the HTTP Request node.
3. Test & run
   - Add a few Amazon search URLs to your "track" sheet.
   - Execute the workflow and verify product data appears in your "results" sheet.
   - Tweak the batch size or parser schema as needed.

⚠ Important
- **API rate limits** – Monitor your BrightData and OpenRouter usage to avoid throttling.
- **Amazon's terms** – Ensure your scraping complies with Amazon's policies and legal requirements.

Summary
This workflow delivers a fully automated, scalable solution to extract structured product data from Amazon search pages directly into Google Sheets—streamlining your competitive analysis and data collection. 🚀

Phil | Inforeole
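For reference, the HTML-cleaning step (step 4 above) can be sketched as an n8n Code node like the one below. The allowedTags list and the input field name (data) are illustrative assumptions; the template's actual whitelist and field mapping may differ.

```javascript
// Sketch: strip an Amazon results page down to whitelisted tags before the LLM step.
const allowedTags = ['div', 'span', 'a', 'p', 'ul', 'ol', 'li', 'h1', 'h2', 'h3', 'img'];

let html = $json.data ?? ''; // raw HTML returned by the BrightData request (field name assumed)

html = html
  .replace(/<!DOCTYPE[^>]*>/gi, '')
  .replace(/<head[\s\S]*?<\/head>/gi, '')
  .replace(/<script[\s\S]*?<\/script>/gi, '')
  .replace(/<style[\s\S]*?<\/style>/gi, '')
  .replace(/<!--[\s\S]*?-->/g, '')
  // Drop class attributes to cut token count for the LLM.
  .replace(/\sclass="[^"]*"/gi, '')
  // Remove any tag not in the whitelist, keeping its inner text.
  .replace(/<\/?([a-z][a-z0-9]*)\b[^>]*>/gi, (match, tag) =>
    allowedTags.includes(tag.toLowerCase()) ? match : ''
  )
  // Collapse runs of whitespace.
  .replace(/\s+/g, ' ')
  .trim();

return [{ json: { cleanedHtml: html } }];
```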
by Yang
Who is this for?
This workflow is perfect for operations teams, accountants, e-commerce businesses, or finance managers who regularly process digital invoices and need to automate data extraction and record-keeping.

What problem is this workflow solving?
Manually reading invoice PDFs, extracting relevant data, and entering it into spreadsheets is time-consuming and error-prone. This workflow automates that process—watching a Google Drive folder, extracting structured invoice data using Dumpling AI, and saving the results into Google Sheets.

What this workflow does
1. Watches a specific Google Drive folder for new invoices.
2. Downloads the uploaded invoice file.
3. Converts the file into Base64 format.
4. Sends the file to Dumpling AI's extract-document endpoint with a detailed parsing prompt.
5. Parses Dumpling AI's JSON response using a Code node.
6. Splits the items array into individual rows using the Split Out node.
7. Appends each invoice item to a preformatted Google Sheet along with the full header metadata (order number, PO, addresses, etc.).

Setup
Google Drive
- Create or select a folder in Google Drive and place the folder ID in the trigger node.
- Make sure your n8n Google Drive credentials are authorized for access.

Google Sheets
- Create a Google Sheet with the following headers: Order number, Document Date, Po_number, Sold to name, Sold to address, Ship to name, Ship to address, Model, Description, Quantity, Unity price, Total price.
- Paste the Sheet ID and sheet name (Sheet1) into the Google Sheets node.

Dumpling AI
- Sign up at Dumpling AI.
- Go to your account settings and generate your API key.
- Paste this key into the HTTP header of the Dumpling AI request node.
- The endpoint used is: https://app.dumplingai.com/api/v1/extract-document

Prompt (already included)
This prompt extracts: order number, document date, PO number, shipping/billing details, and detailed line items (model, quantity, unit price, total).

How to customize this workflow to your needs
- Adjust the Google Sheet fields to fit your invoice structure.
- Modify the Dumpling AI prompt if your invoices have additional or different data points.
- Add filtering logic if you want to handle different invoice types differently.
- Replace Google Sheets with Airtable or a database if preferred.
- Use a different trigger, such as an email attachment, if invoices come via email.
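The parsing step could be sketched as a Code node like the one below. The response field names (results, items, orderNumber, and so on) are assumptions about Dumpling AI's output for the included prompt; inspect a real execution and adjust the paths before relying on them.

```javascript
// Sketch: parse the Dumpling AI response and emit one row per invoice line item.
const raw = $json.results ?? $json;                        // response field name assumed
const parsed = typeof raw === 'string' ? JSON.parse(raw) : raw;

const header = {
  'Order number': parsed.orderNumber,
  'Document Date': parsed.documentDate,
  'Po_number': parsed.poNumber,
  'Sold to name': parsed.soldToName,
  'Sold to address': parsed.soldToAddress,
  'Ship to name': parsed.shipToName,
  'Ship to address': parsed.shipToAddress,
};

// Emitting one item per line here makes the subsequent Split Out step optional.
return (parsed.items ?? []).map((line) => ({
  json: {
    ...header,
    Model: line.model,
    Description: line.description,
    Quantity: line.quantity,
    'Unity price': line.unitPrice,
    'Total price': line.totalPrice,
  },
}));
```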
by Hugo
This workflow provides a robust solution for automatically backing up all your n8n workflows to a designated GitHub repository on a daily basis. By leveraging the n8n API and GitHub API, it ensures your workflows are version-controlled and securely stored, safeguarding against data loss and facilitating disaster recovery.

How it works
The automation follows these key steps:
1. Scheduled trigger: The workflow is initiated automatically every day at a pre-configured time.
2. List existing backups: It first connects to your GitHub repository to retrieve a list of already backed-up workflow files. This helps determine whether a workflow's backup file needs to be created or updated.
3. Retrieve n8n workflows: The workflow then fetches all current workflows directly from your n8n instance using the n8n REST API.
4. Process and prepare: Each retrieved workflow is individually processed. Its data is converted into JSON format, and the JSON content is then encoded to base64, a format suitable for GitHub API file operations.
5. Commit to GitHub: For each n8n workflow:
   - A standardized filename is generated (e.g., workflow-name-tag.json).
   - The workflow checks if a file with this name already exists in the GitHub repository (based on the list fetched in step 2).
   - If the file exists: it updates the existing file with the latest version of the workflow.
   - If it's a new workflow (the file doesn't exist): a new file is created in the repository.
   - Each commit is timestamped for clarity.

This process ensures that you always have an up-to-date version of all your n8n workflows stored securely in your GitHub version control system, providing peace of mind and a reliable backup history.

Pre-requisites
Before you can use this template, please ensure you have the following:
- An active n8n instance (self-hosted or cloud).
- A GitHub account.
- A GitHub repository created where you want to store the workflow backups.
- A GitHub Personal Access Token with repo scope (or a fine-grained token with read/write access to the specific backup repository). This token will be used for GitHub API authentication.
- n8n API credentials (API key) for your n8n instance.

Set up steps
Setting up this workflow should take approximately 10-15 minutes if you have your credentials ready.
1. Import the template: Import this workflow into your n8n instance.
2. Configure n8n API credentials:
   - Locate the "Retrieve workflows" node.
   - In the "Credentials" section for "n8n API", create new credentials (or select existing ones).
   - Enter your n8n instance URL and your n8n API key (you can create your n8n API key in the settings of your n8n instance).
3. Configure GitHub credentials:
   - Locate the "List files from repo" node (and subsequently the "Update file" / "Upload file" nodes, which will use the same credential).
   - In the "Credentials" section for "GitHub API", create new credentials.
   - Select the OAuth2 or Personal Access Token authentication method.
   - Enter the GitHub Personal Access Token you generated as per the pre-requisites.
4. Specify repository details. In the "List files from repo", "Update file", and "Upload file" GitHub nodes:
   - Set the Owner: your GitHub username or organization name.
   - Set the Repository: the name of your GitHub repository dedicated to backups.
   - Set the Branch (e.g., main or master) where backups should be stored.
   - (Optional) Specify a Path within the repository if you want backups in a specific folder (e.g., n8n_backups/). Leave blank to store in the root.
5. Adjust schedule (optional):
   - Select the "Schedule Trigger" node.
   - Modify the trigger interval (e.g., change the time of day or frequency) as needed. By default, it's set for a daily run.
6. Activate the workflow: Save and activate the workflow.

Explanation of nodes
Here's a detailed breakdown of each node used in this workflow:
- **Schedule trigger**
  - Type: n8n-nodes-base.scheduleTrigger
  - Purpose: Automatically starts the workflow based on a defined schedule (e.g., daily at midnight).
- **List files from repo**
  - Type: n8n-nodes-base.github
  - Purpose: Connects to your specified GitHub repository and lists all files, primarily to check for existing workflow backups.
- **Aggregate**
  - Type: n8n-nodes-base.aggregate
  - Purpose: Consolidates the list of file names obtained from the "List files from repo" node into a single item for easier lookup later in the "Check if file exists" node.
- **Retrieve workflows**
  - Type: n8n-nodes-base.n8n
  - Purpose: Uses the n8n API to fetch a list of all workflows currently present in your n8n instance.
- **Json file**
  - Type: n8n-nodes-base.convertToFile
  - Purpose: Takes the data of each workflow (retrieved by the "Retrieve workflows" node) and converts it into a structured JSON file format.
- **To base64**
  - Type: n8n-nodes-base.extractFromFile
  - Purpose: Converts the binary content of the JSON file (from the "Json file" node) into a base64-encoded string. This encoding is required by the GitHub API for file content.
- **Commit date & file name**
  - Type: n8n-nodes-base.set
  - Purpose: Prepares metadata for the GitHub commit. It generates:
    - commitDate: the current date and time for the commit message.
    - fileName: a standardized file name for the workflow backup (e.g., my-workflow-vps-backups.json), typically using the workflow's name and its first tag.
- **Check if file exists**
  - Type: n8n-nodes-base.if
  - Purpose: A conditional node. It checks whether the fileName (generated by "Commit date & file name") is present in the list of files aggregated by the "Aggregate" node. This determines if the workflow backup already exists in GitHub.
- **Update file**
  - Type: n8n-nodes-base.github
  - Purpose: If the "Check if file exists" node determines the file does exist, this node updates that existing file in your GitHub repository with the latest workflow content (base64 encoded) and a commit message.
- **Upload file**
  - Type: n8n-nodes-base.github
  - Purpose: If the "Check if file exists" node determines the file does not exist, this node creates and uploads a new file to your GitHub repository with the workflow content and a commit message.

Customization
Here are a few ways you can customize this template to better fit your needs:
- **Backup path**: In the GitHub nodes ("List files from repo", "Update file", "Upload file"), you can specify a Path parameter to store backups in a specific folder within your repository (e.g., workflows/ or daily_backups/).
- **Filename convention**: Modify the "Commit date & file name" node (specifically the expression for fileName) to change how backup files are named. For example, you might want to include the workflow ID or a different date format (a sketch follows at the end of this description).
- **Commit messages**: Customize the commit messages in the "Update file" and "Upload file" GitHub nodes to include more specific information if needed.
- **Error handling**: Consider adding error handling branches (e.g., using the "Error Trigger" node or checking for node execution failures) to notify you if a backup fails for any reason.
- **Filtering workflows**: If you only want to back up specific workflows (e.g., those with a particular tag or name pattern), you can add a "Filter" node after "Retrieve workflows" to include only the desired workflows in the backup process.
- **Backup frequency**: Adjust the "Schedule Trigger" node to change how often the backup runs (e.g., hourly, weekly, or on specific days).

Template was created in n8n v1.92.2
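The filename and commit-date generation can be sketched as a Code node alternative to the Set node's expressions. This is illustrative only: the slugify rule and the tag handling below are assumptions about how the template composes fileName, so adapt them to your own naming convention.

```javascript
// Sketch: build commitDate and fileName for each workflow item (per-item Code node logic).
const workflow = $json;
const firstTag = workflow.tags?.[0]?.name ?? 'untagged';

// Turn arbitrary names into a safe, kebab-case file name segment.
const slug = (text) =>
  String(text)
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-+|-+$/g, '');

return [{
  json: {
    ...workflow,
    commitDate: new Date().toISOString(),
    fileName: `${slug(workflow.name)}-${slug(firstTag)}.json`,
  },
}];
```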