by Yasser Sami
## Olostep Amazon Products Scraper

This n8n template automates Amazon product scraping using the Olostep API. Simply enter a search query, and the workflow scrapes multiple Amazon search pages to extract product titles and URLs. Results are cleaned, normalized, and saved into a Google Sheet or Data Table.

## Who's it for

- E-commerce analysts researching competitors and pricing
- Product sourcing teams
- Dropshippers and Amazon sellers
- Automation builders who want quick product lists without manual scraping
- Growth hackers collecting product data at scale

## How it works / What it does

1. **Form Trigger** – The user enters a search query (e.g., "wireless bluetooth headphones"), which is used to build the Amazon search URL.
2. **Pagination Setup** – A list of page numbers (1–10) is generated automatically; each number loads the corresponding Amazon search results page.
3. **Scrape Amazon with Olostep** – For each page, Olostep scrapes the Amazon search results. Olostep's LLM extraction returns `title` (product title) and `url` (product link).
4. **Parse & Split Results** – The JSON output is decoded and turned into individual product items.
5. **URL Normalization** – If a product URL is relative, it is automatically converted into a full Amazon URL (see the sketch at the end of this template).
6. **Conditional Check (IF node)** – Ensures only valid product URLs are stored, avoiding Amazon navigation links and invalid items.
7. **Insert into Sheet / Data Table** – Each valid product is saved with its `title` and `url`.
8. **Automatic Looping & Rate Management** – A wait step ensures API rate limits are respected while scraping multiple pages.

This workflow gives you a complete, reliable Amazon scraper with no browser automation and no manual copy/paste: everything runs through the Olostep API and n8n.

## How to set up

1. Import this template into your n8n account.
2. Add your Olostep API key.
3. Connect your Google Sheets or Data Table.
4. Deploy the form and start scraping with any Amazon search phrase.

## Requirements

- Olostep API key
- Google Sheets or Data Table
- n8n cloud or self-hosted instance

## How to customize the workflow

- Add more product fields (price, rating, number of reviews, seller name, etc.).
- Extend the pagination range (1–20 or more pages).
- Add filtering logic (e.g., ignore sponsored results).
- Send scraped results to Notion, Airtable, or a CRM.
- Trigger via a Telegram bot instead of a form.

👉 This workflow is perfect for e-commerce research, competitive analysis, or building Amazon product datasets with minimal effort.
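For reference, here is a minimal sketch of what the URL Normalization step could look like as an n8n Code node. The `url` and `title` field names and the `https://www.amazon.com` base are assumptions, not values taken from the template itself.

```js
// Hypothetical Code-node version of the URL Normalization step.
// Assumes Olostep's extraction put `title` and `url` on each item.
const BASE = 'https://www.amazon.com'; // assumed base URL

return $input.all().map((item) => {
  let url = item.json.url || '';
  // Relative links like "/dp/B0ABC12345" become absolute Amazon URLs.
  if (url.startsWith('/')) {
    url = BASE + url;
  }
  return { json: { title: item.json.title, url } };
});
```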
by WeblineIndia
This workflow sends a "Join in 10" Slack ping to each interviewer shortly before their interview starts. It checks the Interviews Google Calendar every minute, finds interviews starting in the next LEAD_MINUTES (default 10), and sends a Slack DM with the candidate name, role, local start time, meeting link, and any `CV:` / `Notes:` links present in the event description. If the Slack user can't be found by email, it posts to a fallback channel (default #recruiting-alerts) with an @-style email mention. A Data Store prevents duplicate pings for the same event + attendee.

## Who's It For

- Interviewers who prefer a timely Slack nudge instead of calendar alerts.
- Recruiting coordinators who want consistent reminders without manual follow-ups.
- Teams that include links directly in the calendar event description.

## How It Works

1. **Cron (every minute)** polls near-term events so pings arrive about 10 minutes before start.
2. **Google Calendar (Interviews)** fetches upcoming events.
3. **Prepare pings** filters interviews starting in ≤ LEAD_MINUTES, creates one item per internal attendee (company domain), and extracts meeting/CV/Notes links (see the sketch at the end of this template).
4. **Data Store** checks a ledger to avoid re-notifying the same event + attendee.
5. **Slack** looks up the user by email and sends a DM with Block Kit buttons; otherwise it posts to the fallback channel.
6. **Data Store** records that the ping was sent.

Attendees marked declined are skipped; accepted, tentative, and needsAction are included.

## How To Set Up

1. Ensure interviews are on the Interviews Google Calendar and that interviewers are added as attendees.
2. In each event's description, optionally add lines like `CV: https://...` and `Notes: https://...`.
3. Import the workflow and add credentials:
   - Google Calendar (OAuth)
   - Slack OAuth2 with `users:read.email`, `chat:write`, and `im:write`
4. Open **Set: Config** and confirm:
   - CALENDAR_NAME = Interviews
   - COMPANY_DOMAIN = weblineindia.com
   - TIMEZONE = Asia/Kolkata
   - LEAD_MINUTES = 10
   - FALLBACK_CHANNEL = #recruiting-alerts
5. Activate the workflow. It will begin checking every minute.

## Requirements

- Google Workspace calendar access for Interviews.
- Slack workspace plus an app with the scopes `users:read.email`, `chat:write`, and `im:write`.
- n8n (cloud or self-hosted) with access to both services.

## How to Customize the Workflow

- **Lead time:** Change LEAD_MINUTES in **Set: Config** (e.g., 5, 15).
- **Audience:** Modify attendee filters to include/exclude tentative or needsAction.
- **Message format:** Tweak the Block Kit text/buttons (e.g., hide CV/Notes buttons).
- **Fallback policy:** Switch the fallback from a channel post to "skip and log" if needed.
- **Time windows:** Add logic to silence pings at night or only during business hours.
- **Calendar name:** Update CALENDAR_NAME in **Set: Config** if you use a different calendar.

## Add-Ons to Level Up the Workflow

- **Conflict detector:** Warn if an interviewer is double-booked in the next hour.
- **Escalation:** If no DM can be sent (no Slack user), also notify a coordinator channel.
- **Logging:** Append each ping to Google Sheets/Airtable for audit.
- **Weekday rules:** Auto-mute on specific days or holidays via a calendar/lookup table.
- **Follow-up:** Send a post-interview Slack message with the feedback form link.

## Common Troubleshooting

- **No pings:** Ensure events actually start within the next LEAD_MINUTES and that attendees include internal emails (@weblineindia.com).
- **Wrong recipients:** Verify interviewer emails on the event match Slack emails; otherwise the ping goes to the fallback channel.
- **Duplicate pings:** Confirm the **Data Store** is configured and the workflow isn't duplicated.
- **Missing meeting link:** Add a proper meeting URL to the event description, or rely on Google Meet/Zoom links detected in the event.
- **Time mismatch:** Make sure TIMEZONE is Asia/Kolkata (or your local TZ) and calendar times are correct.

## Need Help?

If you'd like a hand adjusting filters, message formatting, or permissions, feel free to reach out; we'll be happy to help you get this running smoothly.
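As a reference for the "Prepare pings" step, here is a minimal sketch of how the CV:/Notes: link extraction might look in a Code node. The `description` field name and the line format are assumptions based on the event description convention above; the template's actual implementation may differ.

```js
// Hypothetical extraction of "CV: <url>" and "Notes: <url>" lines from the
// calendar event description.
const description = $input.first().json.description || '';

function extractLink(label) {
  // Matches e.g. "CV: https://example.com/cv.pdf" at the start of a line.
  const match = description.match(new RegExp(`^${label}:\\s*(https?://\\S+)`, 'im'));
  return match ? match[1] : null;
}

return [{
  json: {
    cvLink: extractLink('CV'),
    notesLink: extractLink('Notes'),
  },
}];
```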
by Soumya Sahu
This workflow automatically syncs engagement metrics (Likes, Reposts, Replies) from BlueSky back to your content calendar in Google Sheets. It ensures your reporting is always up to date without manual data entry.

## Who is this for

Great for social media managers, agencies, and creators who need to report on content performance or analyze which post topics are driving the most engagement.

## What it does

The workflow uses a smart "Active Window" strategy to save API calls and ensure stability:

- **Fetch:** Pulls rows from your Google Sheet that are marked as "Posted."
- **Smart Filter:** Only processes posts published in the last 14 days, catching viral spikes while ignoring old archived content (see the sketch at the end of this template).
- **Safe Updates:** Queries the BlueSky API for the latest stats. If a post has been deleted, it handles the error gracefully instead of breaking the workflow.
- **Sync:** Updates the Like Count, Repost Count, and Reply Count columns in your sheet.

## How to set up

1. **Google Sheet:** Ensure your sheet has these columns: Post Link, Posted At, Like Count, Repost Count, Reply Count. (A sample Google Sheet link is provided inside the workflow notes.)
2. **Credentials:** Enter your BlueSky Handle and App Password in the "Configuration" node.
3. **Select Sheet:** In both the "Get row(s)" and "Update row" nodes, select your specific Google Sheet.
4. **Schedule:** Set the trigger to run once daily (e.g., at 9 AM).

## 🚀 The BlueSky Growth Suite

This workflow is part of a 3-part automation suite designed to help you grow on BlueSky:

- **Part 1: Post Scheduler** (manage content from Google Sheets)
- **Part 2: Analytics Tracker** (this template)
- **Part 3: Lead Magnet Bot** (auto-DM users who reply to your posts)
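For illustration, the "Smart Filter" could be implemented as a Code node like the sketch below. The `Posted At` column name matches the sheet layout described above, and the 14-day window is the template's stated default; treat the ISO-8601 timestamp format as an assumption.

```js
// Keep only rows posted within the last 14 days ("Active Window" strategy).
// Assumes "Posted At" holds an ISO-8601 timestamp, per the sheet setup above.
const WINDOW_DAYS = 14;
const cutoff = Date.now() - WINDOW_DAYS * 24 * 60 * 60 * 1000;

return $input.all().filter((item) => {
  const postedAt = new Date(item.json['Posted At']).getTime();
  return !Number.isNaN(postedAt) && postedAt >= cutoff;
});
```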
by Barbora Svobodova
## Sora 2 Video Generation: Prompt-to-Video Automation with the OpenAI API

## Who's it for

This template is ideal for content creators, marketers, developers, or anyone needing automated AI video creation from text prompts. Perfect for bulk generation, marketing assets, or rapid prototyping using OpenAI's Sora 2 API.

Example use cases:

- E-commerce sellers creating product showcase videos for multiple items without hiring videographers or renting studios
- Social media managers generating daily content like travel vlogs, lifestyle videos, or brand stories from simple text descriptions
- Marketing teams producing promotional videos for campaigns, events, or product launches in minutes instead of days

## How it works / What it does

1. Submit a text prompt using a form or input node.
2. The workflow sends your prompt to the Sora 2 API endpoint to start video generation.
3. It polls the API to check whether the video is still processing or completed (see the sketch at the end of this template).
4. When ready, it retrieves the finished video's download link and automatically saves the file.

All actions (prompt submission, status checks, and video retrieval) run without manual oversight.

## How to set up

1. Use your existing OpenAI API key or create a new one at https://platform.openai.com/api-keys
2. Replace Your_API_Key with your OpenAI API key in the following nodes: Sora 2 Video, Get Video, Download Video
3. Adjust the intervals in the Wait for Video node if needed; video generation typically takes several minutes
4. Enter your video prompt into the Text Prompt trigger form to start the workflow

## Requirements

- OpenAI account & OpenAI API key
- n8n instance (cloud or self-hosted)
- A form, webhook, or manual trigger for prompt submission

## How to customize the workflow

- Connect the prompt input to external forms, bots, or databases.
- Add post-processing steps like uploading videos to cloud storage or social platforms.
- Adjust polling intervals for efficient status checking.

## Limitations and Usage Tips

- **Prompt clarity:** For optimal results, keep prompts clear, concise, and well-structured. Avoid ambiguity and overly complex language to improve AI interpretation.
- **Processing duration:** Video creation may take several minutes depending on prompt complexity and system load, so design your workflow to tolerate this delay.
- **Polling interval configuration:** Adjust polling intervals thoughtfully to balance responsiveness against API rate limits.
- **API dependency:** This workflow relies on the availability and quota limits of OpenAI's Sora 2 API. Monitor your API usage to avoid interruptions.
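To make the polling loop concrete, here is a sketch of the decision the workflow makes after each "Get Video" call. The `status` values below are assumptions about the Sora 2 API response and should be verified against the actual payload.

```js
// Hypothetical post-poll check: loop back through the Wait node until done.
const video = $input.first().json;

if (video.status === 'completed') {
  return [{ json: { done: true, videoId: video.id } }];
}
if (video.status === 'failed') {
  throw new Error(`Video generation failed: ${JSON.stringify(video.error || {})}`);
}
// Still queued or in progress: signal the IF node to wait and poll again.
return [{ json: { done: false, videoId: video.id } }];
```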
by Edoardo Guzzi
## Auto-update n8n instance with Coolify

## Who's it for

This workflow is designed for self-hosted n8n administrators who want to keep their instance automatically updated to the latest stable release. It removes the need for manual version checks and ensures deployments are always up to date.

## What it does

The workflow checks your current n8n version against the latest GitHub release. If a mismatch is detected, it triggers a Coolify deployment to update your instance. If both versions match, the workflow ends safely without action.

## How it works

1. **Trigger:** Start manually or on a schedule.
2. **HTTP Request (n8n settings):** Fetches your current version (versionCli).
3. **HTTP Request (GitHub):** Fetches the latest n8n release (name).
4. **Merge (SQL):** Keeps only the two fields needed.
5. **Set (Normalize):** Converts values into comparable variables.
6. **IF Check:** Compares current vs. latest version; if different, deploy the update, and if the same, stop with no operation (see the sketch at the end of this template).
7. **HTTP Request (Coolify):** Triggers a forced redeploy via the API.

## How to set up

1. Replace https://yourn8ndomain/rest/settings with your own n8n domain.
2. Replace the Coolify API URL with your Coolify domain + app UUID.
3. Add an HTTP Bearer credential containing your Coolify API token.
4. Adjust the schedule interval (e.g., every 6 hours).

## Requirements

- Self-hosted n8n instance with the /rest/settings endpoint accessible.
- Coolify (or a similar service) managing your n8n deployment.
- Valid API token configured as a Bearer credential in n8n.

## How to customize

- Change the schedule frequency depending on how often you want checks.
- Modify the IF condition if you want stricter or looser version matching (e.g., ignore patch versions).
- Replace the Coolify API call with another service (like Docker, Portainer, or Kubernetes) if you use a different deployment method.
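As a sketch of the normalize-and-compare logic, assume `versionCli` looks like `1.63.2` and the GitHub release `name` looks like `n8n@1.64.0`; both formats are assumptions worth checking against your own responses.

```js
// Hypothetical Code-node version of the Normalize + IF steps.
const data = $input.first().json;
const current = data.versionCli;                        // e.g. "1.63.2"
const latest = (data.name || '').replace(/^n8n@/, '');  // e.g. "1.64.0"

// Strict comparison, as the IF node performs it:
const needsUpdate = current !== latest;

// Looser variant that ignores patch versions (see the customization note):
const minor = (v) => v.split('.').slice(0, 2).join('.');
const needsUpdateIgnoringPatch = minor(current) !== minor(latest);

return [{ json: { current, latest, needsUpdate, needsUpdateIgnoringPatch } }];
```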
by Grace Gbadamosi
## How it works

This workflow creates a complete MCP server that provides comprehensive API integration monitoring and testing capabilities. The server exposes five specialized tools through a single MCP endpoint: API health analysis, webhook reliability testing, rate limit monitoring, authentication verification, and client report generation. External applications can connect to this MCP server to access all monitoring tools.

## Who is this for

This template is designed for DevOps engineers, API developers, integration specialists, and technical teams responsible for maintaining API reliability and performance. It's particularly valuable for organizations managing multiple API integrations, SaaS providers monitoring client integrations, and development teams implementing API monitoring strategies.

## Requirements

- **MCP Client:** Any MCP-compatible application (Claude Desktop, a custom MCP client, or other AI tools)
- **Network Access:** Outbound HTTP/HTTPS access to test API endpoints and webhooks
- **Authentication:** Bearer token authentication for securing the MCP server endpoint
- **Target APIs:** The APIs and webhooks you want to monitor (no special configuration required on target systems)

## How to set up

1. **Configure MCP Server Authentication** – Update the "MCP Server - API Monitor Entry" node with your desired authentication method and generate a secure bearer token for accessing your MCP server.
2. **Deploy the Workflow** – Save and activate the workflow in your n8n instance, noting the MCP server endpoint URL generated for external client connections.
3. **Connect MCP Client** – Configure your MCP client (such as Claude Desktop) to connect to the MCP server endpoint using the authentication token you configured (a raw-request sketch appears at the end of this template).
4. **Test Monitoring Tools** – Use your MCP client to call the available tools: Analyze Api Health, Validate Webhook Reliability, Monitor API Limits, Verify Authentication, and Generate Client Report, passing your API endpoints and credentials.
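For a quick smoke test without a full MCP client, a raw JSON-RPC `tools/list` request can be sent to the endpoint. The sketch below assumes the endpoint accepts streamable-HTTP POSTs and skips the `initialize` handshake that a real client performs first, so treat it as illustrative only; the URL and token are placeholders.

```js
// Hypothetical raw call to the MCP server endpoint (placeholders throughout).
const res = await fetch('https://your-n8n.example.com/mcp/api-monitor', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Accept': 'application/json, text/event-stream',
    'Authorization': 'Bearer YOUR_MCP_TOKEN',
  },
  body: JSON.stringify({ jsonrpc: '2.0', id: 1, method: 'tools/list' }),
});

console.log(await res.text()); // should enumerate the five monitoring tools
```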
by Rahul Joshi
## Description

Streamline your cloud storage with this powerful Google Drive File Renamer automation built with n8n. The workflow watches a specific Google Drive folder and automatically renames new files using a standardized format based on their creation date and time, which is ideal for organizing images, backups, and uploads with consistent timestamp-based names.

Whether you're managing daily uploads, sorting Instagram-ready content, or organizing client submissions, this timestamp-based file naming system ensures consistent, searchable file structures without manual intervention.

## What This Template Does (Step-by-Step)

🔔 **Google Drive Trigger – "Watch Folder" Setup**
- Monitors a specific folder (e.g., "Instagram")
- Detects new file creations every minute
- Captures file metadata like ID, createdTime, and extension

🧠 **Set Formatted Name**
- Extracts the file creation time (e.g., 2025-07-22T14:45:10Z)
- Converts it into a structured name like IMG_20250722_1445.jpg (see the sketch at the end of this template)
- Keeps the original file extension (JPG, PNG, PDF, etc.)

✏️ **Rename File (Google Drive)**
- Renames the original file using the Google Drive API
- Applies the new timestamped name
- Keeps file content, permissions, and location unchanged

## Required Integrations

- Google Drive API (OAuth2 credentials)

## Best For

- 📸 Content creators organizing uploads from mobile
- 🏷️ Branding teams enforcing uniform naming
- 🗄️ Admins managing scanned documents or backups
- 📂 Automated archives for media, reports, or daily snapshots

## Key Benefits

- ✅ Timestamped naming ensures chronological file tracking
- ✅ Reduces human error and messy file names
- ✅ Works in near real-time (polls every minute)
- ✅ No-code: deploy with drag-and-drop setup in n8n
- ✅ Fully customizable name patterns (e.g., change the IMG_ prefix)
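The "Set Formatted Name" step can be reproduced in a Code node roughly as follows; the `createdTime` and `name` field names are assumptions based on the Drive API metadata mentioned above.

```js
// Build IMG_YYYYMMDD_HHMM.<ext> from the file's creation time.
const file = $input.first().json;
const created = new Date(file.createdTime); // e.g. "2025-07-22T14:45:10Z"
const pad = (n) => String(n).padStart(2, '0');

const stamp =
  `${created.getUTCFullYear()}${pad(created.getUTCMonth() + 1)}${pad(created.getUTCDate())}` +
  `_${pad(created.getUTCHours())}${pad(created.getUTCMinutes())}`;

// Preserve the original extension, e.g. ".jpg".
const ext = (file.name.match(/\.[^.]+$/) || [''])[0];

return [{ json: { newName: `IMG_${stamp}${ext}` } }]; // -> IMG_20250722_1445.jpg
```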
by Joseph
This n8n workflow converts various file formats (.pdf, .doc, .png, .jpg, .webp) to clean markdown text using the datalab.to API. Perfect for AI agents, LLM processing, and RAG (Retrieval-Augmented Generation) data preparation for vector databases.

## Workflow Description

### Input

- **Trigger Node:** Form trigger or webhook to accept file uploads
- **Supported Formats:** PDF documents, Word documents (.doc/.docx), and images (PNG, JPG, WEBP)

### Processing Steps

1. **File Validation:** Check file type and size constraints
2. **HTTP Request Node** (see the standalone sketch at the end of this template):
   - Method: POST to https://api.datalab.to/v1/marker
   - Headers: X-API-Key with your datalab.to API key
   - Body: Multipart form data with the file
3. **Response Processing:** Extract the converted markdown text
4. **Output Formatting:** Clean and structure the markdown for downstream use

### Output

Clean, structured markdown text ready for:

- LLM prompt injection
- Vector database ingestion
- AI agent knowledge base processing
- Document analysis workflows

## Setup Instructions

1. **Get API Access:** Sign up at datalab.to to obtain your API key
2. **Configure Credentials:** Create a new credential in n8n and add a generic header X-API-Key with your API key as the value
3. **Import Workflow:** Ready to process files immediately

## Use Cases

- **AI Workflows:** Convert documents for LLM analysis and processing
- **RAG Systems:** Prepare clean text for vector database ingestion
- **Content Management:** Batch-convert files to searchable markdown format
- **Document Processing:** Extract text from mixed file types in automated pipelines

The workflow handles the complexity of different file formats while delivering consistent, AI-ready markdown output for your automation needs.
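Outside of n8n, the same call looks roughly like the Node.js sketch below. The multipart field name `file` and the response shape are assumptions; check the datalab.to docs for the exact parameter names.

```js
// Hypothetical standalone version of the marker API call (Node 18+).
import { readFile } from 'node:fs/promises';

const bytes = await readFile('report.pdf');
const form = new FormData();
form.append('file', new Blob([bytes], { type: 'application/pdf' }), 'report.pdf');

const res = await fetch('https://api.datalab.to/v1/marker', {
  method: 'POST',
  headers: { 'X-API-Key': process.env.DATALAB_API_KEY },
  body: form,
});

const result = await res.json();
// Inspect `result` to find the markdown field; the exact key is an assumption.
console.log(result.markdown ?? result);
```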
by Sarfaraz Muhammad Sajib
## Overview

This n8n workflow automates the generation of short news videos using the HeyGen video API and RSS feeds from a Bangla news source, Prothom Alo. It is ideal for content creators, media publishers, or developers who want to create daily video summaries from text-based news feeds using AI avatars.

The workflow reads the latest news summaries from an RSS feed and sends each item to the HeyGen API to create a video with a realistic avatar and voice narration. The resulting video is suitable for publishing on platforms like YouTube, Instagram, or TikTok.

## Requirements

- A HeyGen account with access to the API.
- HeyGen API key (kept securely in your environment).
- n8n (self-hosted or cloud instance).
- Basic understanding of using HTTP Request nodes in n8n.

## Setup Instructions

1. Clone this workflow into your n8n instance.
2. Replace the placeholder value in the X-Api-Key header with your HeyGen API key.
3. Confirm the RSS feed URL is correct and live: https://prod-qt-images.s3.amazonaws.com/production/prothomalo-bangla/feed.xml
4. The HTTP Request body references {{$json.summary}} from each RSS item; make sure this field exists (see the example at the end of this template).
5. Run the workflow manually, or configure a Cron trigger if you want to automate it.

## Customization

- **Avatar ID** and **Voice ID** can be changed in the HTTP Request body. Use your HeyGen dashboard to find available IDs.
- Change the video dimensions (1280x720) to suit your platform's requirements.
- You can replace the RSS feed with any other news source that supports XML format.
- Add nodes to upload the video to YouTube, Dropbox, etc., after generation.

## What It Does

1. Triggers manually.
2. Reads an RSS feed.
3. Extracts the summary from each news item.
4. Sends a request to HeyGen to generate a video.
5. Returns the video generation response.
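For orientation, a request body along these lines is what the HTTP Request node typically sends to HeyGen's v2 generate endpoint. The field names follow HeyGen's public API shape but should be verified against your dashboard and docs; YOUR_AVATAR_ID and YOUR_VOICE_ID are placeholders.

```json
{
  "video_inputs": [
    {
      "character": { "type": "avatar", "avatar_id": "YOUR_AVATAR_ID" },
      "voice": {
        "type": "text",
        "input_text": "{{ $json.summary }}",
        "voice_id": "YOUR_VOICE_ID"
      }
    }
  ],
  "dimension": { "width": 1280, "height": 720 }
}
```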
by Avkash Kakdiya
## How it works

This workflow synchronizes support tickets in Freshdesk with issues in Linear, enabling smooth collaboration between support and development teams. It triggers on new or updated Freshdesk tickets, maps fields to Linear's format, and creates linked issues through Linear's API. Reverse synchronization is also supported, so changes in Linear update the corresponding Freshdesk tickets. Comprehensive logging ensures success and error events are always tracked.

## Step-by-step

### 1. Trigger the workflow

- **New Ticket Webhook** – Captures new Freshdesk tickets for issue creation.
- **Update Ticket Webhook** – Detects changes in existing tickets.
- **Linear Issue Updated Webhook** – Listens for updates from Linear.

### 2. Transform and map data

- **Map Freshdesk Fields to Linear** – Converts priority, status, title, and description for Linear (see the sketch at the end of this template).
- **Map Linear to Freshdesk Fields** – Converts Linear state and priority, and extracts the ticket ID for Freshdesk updates.

### 3. Perform API operations

- **Create Linear Issue** – Sends a GraphQL mutation to the Linear API.
- **Check Linear Creation Success** – Validates issue creation before linking.
- **Link Freshdesk with Linear ID** – Updates Freshdesk with the Linear reference.
- **Update Freshdesk Ticket** – Pushes Linear updates back to Freshdesk.

### 4. Manage logging and errors

- **Log Linear Creation Success** – Records successful ticket-to-issue sync.
- **Log Linear Creation Error** – Captures and logs issue creation failures.
- **Log Freshdesk Update Success** – Confirms successful reverse sync.
- **Log Missing Ticket ID Error** – Handles missing ticket reference errors.

## Why use this?

- Keep support and development teams aligned with real-time updates.
- Eliminate manual ticket-to-issue handoffs, saving time and reducing errors.
- Maintain full visibility with detailed success and error logs.
- Enable bidirectional sync between Freshdesk and Linear for true collaboration.
- Improve response times by ensuring both teams always work on the latest data.
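A sketch of the Freshdesk-to-Linear field mapping is shown below. The numeric scales are assumptions based on the two public APIs (Freshdesk: 1=Low to 4=Urgent; Linear: 1=Urgent to 4=Low, 0=None) and should be verified against your instances, as should the field names.

```js
// Hypothetical "Map Freshdesk Fields to Linear" Code node.
const ticket = $input.first().json;

const priorityMap = { 4: 1, 3: 2, 2: 3, 1: 4 }; // Freshdesk -> Linear (assumed)

return [{
  json: {
    title: `[FD-${ticket.id}] ${ticket.subject}`,
    description: ticket.description_text || '',
    priority: priorityMap[ticket.priority] ?? 0, // 0 = no priority in Linear
    freshdeskTicketId: ticket.id, // carried along for the linking step
  },
}];
```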
by Tony Ciencia
## Overview

This template provides an automatic backup solution for all your n8n workflows, saving them directly to Google Drive. It's designed for freelancers, agencies, and businesses that want to keep their automations safe, versioned, and always recoverable.

## Why Backups Matter

- **Disaster recovery** – Restore workflows quickly if your instance fails.
- **Version control** – Track workflow changes over time.
- **Collaboration** – Share workflow JSON files easily with teammates.

## How it Works

1. Fetches the complete list of workflows from your n8n instance via the API.
2. Downloads each workflow in JSON format.
3. Converts the data into a file with a unique name (workflow name + ID); see the sketch at the end of this template.
4. Uploads all files to a chosen Google Drive folder.
5. Runs manually or on an automatic schedule (daily, weekly, etc.).

## Requirements

- An active n8n instance with API access enabled
- API credentials for n8n (API key or basic auth)
- A Google account with access to Google Drive
- Google Drive credentials connected in n8n

## Setup Instructions

1. Connect your n8n API (authenticate your instance).
2. Connect your Google Drive account.
3. Select or create the Drive folder where backups will be stored.
4. Customize the Schedule Trigger to define backup frequency.
5. Run once to confirm files are stored correctly.

## Customization Options

- **Frequency** → Set daily, weekly, or monthly backups.
- **File Naming** → Adjust the filename expression (e.g., {{workflowName}}-{{workflowId}}-{{date}}.json).
- **Folder Location** → Store backups in separate Google Drive folders per project or client.

## Target Audience

This template is ideal for:

- Freelancers managing multiple client automations.
- Agencies delivering automation services.
- Teams that rely on n8n for mission-critical workflows.

It reduces risk, saves time, and ensures you never lose your work.

⏱ Estimated setup time: 5–10 minutes.
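Step 3 (converting each workflow into a named file) might look like this as a Code node. The `name` and `id` fields match what n8n's workflow API returns, while the sanitizing rule is an assumption added for safe filenames.

```js
// Turn each workflow JSON into a downloadable file named "<name>-<id>.json".
const out = [];

for (const item of $input.all()) {
  const wf = item.json;
  const safeName = String(wf.name).replace(/[^\w.-]+/g, '_'); // assumed sanitizer
  const fileName = `${safeName}-${wf.id}.json`;

  out.push({
    json: { fileName },
    binary: {
      data: {
        data: Buffer.from(JSON.stringify(wf, null, 2)).toString('base64'),
        mimeType: 'application/json',
        fileName,
      },
    },
  });
}

return out;
```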
by Gegenfeld
## AI Image Generator Workflow

This workflow lets you automatically generate AI images with the APImage API 🡥, download the generated image, and upload it to any service you want (e.g., Google Drive, Notion, social media, etc.).

## 🧩 Nodes Overview

### 1. Generate Image (Trigger)

This node contains the following fields:

- **Image Prompt** (text input)
- **Dimensions:** Square, Landscape, Portrait
- **AI Model:** Basic, Premium

It acts as the entry point to your workflow: it collects input and sends it to the APImage API node.

Note: You can swap this node with any other node that lets you define the parameters shown above.

### 2. APImage API (HTTP Request)

This node sends a POST request to: https://apimage.org/api/ai-image-generate

The request body is dynamically filled with values from the first node:

```json
{
  "prompt": "{{ $json['Describe the image you want'] }}",
  "dimensions": "{{ $json['Dimensions'] }}",
  "model": "{{ $json['AI Model'] }}"
}
```

✅ Make sure to set your API key in the Authorization header like this: Bearer YOUR_API_KEY

🔐 You can find your API key in your APImage Dashboard 🡥

### 3. Download Image (HTTP Request)

Once the image is generated, this node downloads the image file using the URL returned by the API: {{ $json.images[0] }}

The image is stored in the output field: generated_image

### 4. Upload to Google Drive

This node takes the image from the generated_image field and uploads it to your connected Google Drive.

📁 You can configure a different target folder or replace this node with:

- Dropbox
- WordPress
- Notion
- Shopify
- Any other destination

Make sure to pass the correct filename and file field, as defined in the "Download Image" node. Set up Google Drive credentials 🡥

## ✨ How To Get Started

1. Double-click the APImage API node.
2. Replace YOUR_API_KEY with your actual key (keep the Bearer prefix).
3. Open the Generate Image node and test the form.

🔗 Open the Dashboard 🡥

## 🔧 How to Customize

- Replace the Form Trigger with another node if you're collecting data elsewhere (e.g., via Airtable, Notion, Webhook, Database, etc.).
- Modify the Upload node if you'd like to send the image to other tools like Slack, Notion, Email, or an S3 bucket.

## 📚 API Docs & Resources

- APImage API Docs 🡥
- n8n Documentation 🡥

## 🖇️ Node Connections

Generate Image → APImage API → Download Image → Upload to Google Drive

✅ This template is ideal for:

- Content creators automating media generation
- SaaS integrations for AI tools
- Text-to-image pipelines