by Yashraj Singh Sisodiya
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

ATS Resume Maker Workflow Explanation

Aim
The ATS Resume Maker according to JD workflow automates the creation of an ATS-friendly resume by tailoring a candidate's resume to a specific job description (JD). It streamlines the process of aligning resume content with JD requirements, producing a professional, scannable PDF resume stored in Google Drive.

Goal
- Allow users to input their resume (text or PDF) and a JD (PDF) via a web form.
- Extract and merge the text from both inputs.
- Use AI to customize the resume, prioritizing JD keywords while preserving the candidate's truthful information.
- Generate a clean, ATS-optimized HTML resume and convert it to a downloadable PDF.
- Upload the final PDF to Google Drive for easy access.

This ensures the resume is optimized for Applicant Tracking Systems (ATS), which employers use to screen resumes, by incorporating relevant keywords and maintaining a simple, scannable format.

Requirements
- **n8n Platform**: The automation tool hosting the workflow.
- **Nodes**:
  - Form Trigger: A web form to collect user inputs (resume text/PDF, JD PDF).
  - Process one binary file1: JavaScript to rename and organize PDF inputs.
  - Extracting resume1: Extracts text from PDF files.
  - Merge Resume + JD1: Combines resume and JD text into a single string.
  - Customize resume1: Uses Perplexity AI to generate an ATS-friendly HTML resume.
  - HTML format1: Cleans the HTML output by removing newlines (a minimal sketch of this cleanup appears at the end of this section).
  - HTML3: Processes HTML for display or validation.
  - HTML to PDF: Converts the HTML resume to a PDF file.
  - Upload file: Saves the PDF to a specified Google Drive folder.
- **Credentials**:
  - CustomJS account for the HTML-to-PDF conversion API.
  - Google Drive account for file uploads.
  - Perplexity account for AI-driven resume customization.
- **Inputs**: Resume (plain text or PDF) and job description (PDF).
- **Output**: A tailored, ATS-friendly resume in PDF format, uploaded to Google Drive.

API Usage
- **Perplexity API**: Used in the Customize resume1 node to run the sonar-reasoning model, which generates an ATS-optimized HTML resume. The API processes the merged resume and JD text, aligning content with JD keywords while adhering to strict HTML and CSS guidelines (e.g., Arial font, no colors, single-column layout). [Ref: Workflow JSON]
- **CustomJS API**: Used in the HTML to PDF node to convert the cleaned HTML resume into a PDF file suitable for ATS systems. [Ref: Workflow JSON]
- **Google Drive API**: Used in the Upload file node to store the final PDF in a designated Google Drive folder (Resume folder in My Drive) using OAuth2 authentication. [Ref: Workflow JSON]

These APIs handle AI-driven customization, PDF generation, and cloud storage, ensuring a seamless end-to-end process.

HTML to PDF Conversion
- **Process**: The HTML to PDF node takes the cleaned HTML resume ($json.cleanedResponse) from the HTML format1 node and uses the @custom-js/n8n-nodes-pdf-toolkit.html2Pdf node to convert it into a PDF.
- **API**: Relies on the CustomJS API for high-fidelity conversion, ensuring the PDF retains the ATS-friendly structure (e.g., no graphics, clear text hierarchy).
- **Output**: A binary PDF file passed to the Upload file node.
- **Relevance**: This step delivers the resume in a widely accessible format, suitable for downloading or sharing with employers. Using a dedicated API aligns with industry practice for HTML-to-PDF conversion; services such as PDFmyURL and PDFCrowd offer similar REST capabilities (Ref: https://pdfmyurl.com/).

Download from Community Link
The workflow does not include a direct community download link, but the Upload file node stores the PDF in Google Drive, making it accessible via a shared folder or link. To enable direct downloads, share the Drive folder or generate a shareable link for the uploaded file.

Workflow Summary
The ATS Resume Maker according to JD workflow automates the creation of a tailored, ATS-friendly resume by:
1. Collecting user inputs via a web form (Form Trigger).
2. Processing and extracting text from PDFs (Process one binary file1, Extracting resume1).
3. Merging and customizing the content using Perplexity AI (Merge Resume + JD1, Customize resume1).
4. Formatting and converting the resume to PDF (HTML format1, HTML3, HTML to PDF).
5. Uploading the PDF to Google Drive (Upload file).

The workflow leverages APIs for AI processing, PDF conversion, and cloud storage, producing a professional output optimized for ATS systems. Community sharing can be enabled via Google Drive links or external platforms (Ref: https://pdfmyurl.com/).
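As a minimal sketch of the HTML format1 cleanup step, the Code node below strips newlines and stray markdown fences from the AI response so the PDF converter receives one clean HTML string. The input field name `response` is an assumption; `cleanedResponse` matches the field the HTML to PDF node reads.

```javascript
// n8n Code node sketch ("Run Once for Each Item") for the HTML format1 step.
const raw = $json.response || '';

const cleanedResponse = raw
  .replace(/```html|```/g, '')  // drop markdown code fences the model may emit
  .replace(/\r?\n+/g, ' ')      // remove newlines so the HTML is a single line
  .replace(/\s{2,}/g, ' ')      // collapse repeated whitespace
  .trim();

return { json: { cleanedResponse } };
```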
by Avkash Kakdiya
How it works
This workflow listens for new products in Shopify and transforms the product data into polished social media content. It generates captions and hashtags using an AI model, then posts the product to Instagram and Facebook using the Facebook Graph API. It logs every post to Google Sheets and sends a confirmation message to Discord. The flow ensures consistent publishing across all platforms with automated formatting and tracking.

Step-by-step
1. **Trigger on Shopify product creation** – Shopify Trigger: activates when a new product is added to the store.
2. **Prepare product data** – parse product data: extracts product name, price, description, URL, image, and timestamp (a minimal sketch appears at the end of this section).
3. **Generate caption and hashtags** – uses an AI model to craft a caption and produce 10 relevant hashtags.
4. **Configure posting parameters** – Set Configuration: stores access tokens, platform IDs, caption text, hashtags, and image URL.
5. **Publish to Instagram** – Create Instagram Media Container sends the image and caption to create a media container; Wait for Processing waits for the container to finish processing; Publish Instagram Media publishes the processed container to the Instagram feed.
6. **Publish to Facebook** – Download Image for Facebook downloads the product image from Shopify; Post to Facebook Page uploads the image with the caption and hashtags to the Facebook Page.
7. **Merge publishing results** – Merge: combines responses from Instagram and Facebook for unified logging.
8. **Log post to Google Sheets** – Log Product Post Data: appends product info, caption, and hashtags to a spreadsheet.
9. **Notify via Discord** – Notify Discord About Post: sends a message summarizing the published product.

Why use this?
- Ensures every new Shopify product is promoted instantly across major social platforms.
- Eliminates manual posting and caption creation with reliable automation.
- Maintains centralized logging for auditing, tracking, or analytics.
- Provides real-time team notifications to confirm successful posts.
- Reduces errors and keeps brand messaging consistent across channels.
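As a sketch of the "parse product data" step, the Code node below maps fields from Shopify's product webhook payload into a flat record for downstream nodes. The field names follow Shopify's documented payload; the store domain and output shape are illustrative assumptions.

```javascript
// n8n Code node sketch ("Run Once for Each Item") of "parse product data".
const p = $json;

return {
  json: {
    name: p.title,
    price: p.variants?.[0]?.price ?? '',
    description: (p.body_html || '').replace(/<[^>]+>/g, '').trim(), // strip HTML tags
    url: `https://your-store.myshopify.com/products/${p.handle}`,    // placeholder domain
    image: p.image?.src ?? p.images?.[0]?.src ?? '',
    timestamp: new Date().toISOString(),
  },
};
```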
by n8n Automation Expert | Template Creator | 2+ Years Experience
🎯 What This Workflow Does
Transform your digital payment business with a fully-featured Telegram bot that handles everything from product listings to transaction processing. Perfect for entrepreneurs looking to automate their PPOB (mobile credit, data packages, bill payments) business operations without coding expertise.

✨ Key Features

📱 Complete Transaction Management
- **Prepaid Services**: Mobile credit, data packages, PLN tokens
- **Gaming**: Game vouchers for popular platforms
- **E-Wallet**: OVO, DANA, GoPay, ShopeePay top-ups
- **Bill Payments**: PLN postpaid, Telkom, cable TV, internet, credit cards

💰 Smart Business Operations
- Real-time balance checking with low-balance alerts
- Automated transaction processing with MD5 security
- Interactive product catalog with categorized browsing
- Transaction history and status tracking
- Deposit request management

🤖 User-Friendly Interface
- Intuitive inline keyboard navigation
- Multi-step transaction flows with validation
- Comprehensive error handling and user feedback
- Professional messaging with emojis and formatting

🛠️ Technical Highlights

Robust Architecture
- **Switch-based routing** for efficient command handling
- **MD5 signature authentication** for secure API communications (see the sketch at the end of this section)
- **Session management** for multi-step user interactions
- **Comprehensive error handling** with user-friendly messages

API Integrations
- **Digiflazz API**: Balance checking, product listings, transactions, bill inquiries
- **Telegram Bot API**: Message handling, inline keyboards, callback queries
- **Secure credential management** with environment variables

📋 Setup Requirements

Prerequisites
- Active Digiflazz account with API credentials
- Telegram Bot Token from @BotFather
- n8n instance (cloud or self-hosted)

Environment Variables
DIGIFLAZZ_USERNAME=your_digiflazz_username
DIGIFLAZZ_API_KEY=your_digiflazz_api_key

🎮 How to Use

Customer Commands
- /start – Welcome message and main menu
- /menu – Access main navigation
- /balance – Check account balance
- /products – Browse product catalog
- /topup – Process prepaid transactions
- /checkbill – Inquire about postpaid bills
- /paybill – Pay postpaid services
- /deposit – Request balance deposit
- /history – View transaction history

Business Features
- **Automated balance monitoring** with threshold alerts
- **Product categorization** for easy browsing
- **Transaction confirmation** with detailed receipts
- **Multi-payment type support** across various service providers

🔒 Security & Compliance
- **MD5 signature verification** for all API calls
- **Input validation** and sanitization
- **Session timeout management**
- **Error logging** and monitoring
- **HTTPS-only communications**

💡 Business Benefits

For PPOB Entrepreneurs
- **Reduce manual work** by 90% through automation
- **24/7 customer service** without human intervention
- **Professional presentation** builds customer trust
- **Scalable operations** handle unlimited transactions

For Customers
- **Instant transactions** with real-time confirmations
- **Easy navigation** through intuitive menus
- **Multiple service options** in one convenient bot
- **Reliable service** with comprehensive error handling

📊 Performance Features
- **Sub-second response times** for balance checks
- **Concurrent transaction processing**
- **Automatic retry logic** for failed operations
- **Detailed logging** for business analytics

🎯 Perfect For
- **Digital payment entrepreneurs** starting PPOB businesses
- **Existing businesses** looking to automate customer service
- **Resellers** wanting professional transaction interfaces
- **Developers** seeking proven automation templates

📱 Supported Services

Prepaid Products
- Mobile credit (all Indonesian operators)
- Data packages and internet vouchers
- PLN electricity tokens
- Game vouchers (Mobile Legends, Free Fire, PUBG, etc.)

Postpaid Services
- PLN electricity bills
- Telkom phone bills
- Cable TV subscriptions (First Media, MNC, etc.)
- Internet service providers
- Credit card payments
- Multifinance installments

🚀 Getting Started
1. Import the workflow JSON into your n8n instance
2. Configure Telegram and Digiflazz credentials
3. Set up environment variables
4. Activate the workflow
5. Test with your Telegram bot
6. Start serving customers immediately!

💎 Premium Features
- **Comprehensive documentation** with setup guides
- **Error handling** for all edge cases
- **Professional UI/UX** design
- **Scalable architecture** for business growth
- **Community support** and updates

Transform your digital payment business today with this production-ready Telegram bot automation. No coding required – just configure and launch! Perfect for the Indonesian PPOB market with full Digiflazz integration and professional customer experience.
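A minimal sketch of the MD5 signature mentioned above: per the Digiflazz docs the sign is md5(username + apiKey + refId), with fixed strings such as "depo" used for balance checks; verify against the current API reference. The SKU and customer number below are placeholders, and this assumes the built-in crypto module is allowed in your Code node (NODE_FUNCTION_ALLOW_BUILTIN).

```javascript
// n8n Code node sketch: build a signed Digiflazz prepaid top-up request body.
const crypto = require('crypto');

function digiflazzSign(username, apiKey, refId) {
  return crypto.createHash('md5').update(username + apiKey + refId).digest('hex');
}

const username = $env.DIGIFLAZZ_USERNAME;
const apiKey = $env.DIGIFLAZZ_API_KEY;
const refId = 'TRX-001'; // your own unique transaction reference

return {
  json: {
    username,
    buyer_sku_code: 'xld10',       // illustrative SKU
    customer_no: '08123456789',    // illustrative customer number
    ref_id: refId,
    sign: digiflazzSign(username, apiKey, refId),
  },
};
```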
by Easy8.ai
Auto-Routing Nicereply Feedback to Microsoft Teams by Team and Sentiment

Automatically collect client feedback from Nicereply, analyze sentiment, and send it to the right Microsoft Teams channels — smartly split by team, tone, and comment presence.

About this Workflow
This workflow pulls customer satisfaction feedback from Nicereply, filters out irrelevant or test entries, and evaluates each item based on the team it belongs to and the sentiment of the response (Great, OK, Bad). It automatically routes the feedback to Microsoft Teams — either as a summary in a channel or a direct message — depending on the team's role and whether a comment is included.

Perfect for support, delivery, consulting, and documentation teams that want to stay in the loop with customer sentiment. It ensures that positive feedback reaches the teams who earned it, and that negative feedback is escalated quickly to leads or management.

Use Cases
- Send daily customer feedback directly to the responsible teams in MS Teams
- Automatically escalate negative responses to leads or managers
- Avoid clutter by filtering out unimportant or test entries
- Keep internal teams motivated by sharing only the most relevant praise

How it works
1. Schedule Trigger – starts the workflow on a set schedule (e.g., daily at 7:00 AM)
2. Get Feedback – pulls customer feedback from Nicereply using the survey ID
3. Split Out – processes each feedback entry separately
4. Edit Feedbacks – renames or adjusts fields for easier filtering and readability
5. Change Survey ID – maps internal survey identifiers for accurate team routing (the Survey ID can be found in Nicereply: Settings > Surveys > [Survey] > ID)
6. Filter – excludes old responses
7. Code Node – tags unknown clients
8. Change Happiness Value – converts the score into "Great", "OK", or "Bad" for routing logic (a minimal sketch follows this section)
9. Without Comment – checks whether the feedback includes a text comment
10. Send Feedback Without Comment – routes simple feedback (no comment) to MS Teams based on team + score
11. Send Feedback With Comment – routes full feedback with a comment to MS Teams for closer review

Feedback Routing Logic
Each team receives only what's most relevant:
- **Support, Docs, Consulting** get only **Great** feedback to boost morale
- **Team Leads** receive **OK and Bad** feedback so they can follow up
- **Management** is only alerted to **Bad** feedback for critical response

These rules can be freely customized. For example, you may want Support to receive all responses, or Management to be alerted only when multiple Bad entries are received. The structure is modular and easily adjustable.

How to Use
1. Import the workflow – load the .json file into your Easy Redmine automation workspace
2. Set up connections – Nicereply API key or integration setup; Microsoft Teams integration (chat and/or channel posting)
3. Insert your Survey ID(s) – you'll find these in the Nicereply admin panel under Survey settings
4. Customize team logic – adjust survey-to-team mappings and message routing as needed
5. Edit Teams message templates – modify message text or formatting based on internal tone or content policies
6. Test with real data – run manually and verify correct delivery to MS Teams
7. Deploy and schedule – let it run on its own to automate the feedback cycle

Requirements
- Nicereply account with active surveys
- Microsoft Teams account with permissions to post to channels or send chats

Optional Enhancements
- Add AI to summarize long comments
- Store feedback history in an external DB
- Trigger follow-up tasks or alerts for repeated Bad scores
- Localize messages for multilingual feedback systems
- Integrate additional tools like Slack, Easy Redmine, etc.

Tips for a Clean Setup
- Keep team routing logic in one place for easy updates
- Rename all nodes clearly to reflect their function (e.g., Change Happiness Value)
- Add logging or alerting in case of failed delivery or an empty feedback pull
- Use environment variables for tokens and survey IDs where possible
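A minimal sketch of the "Change Happiness Value" idea: map a numeric Nicereply score to the three routing labels. The thresholds are illustrative assumptions; adjust them to your survey's scale.

```javascript
// n8n Code node sketch ("Run Once for Each Item"): score -> Great/OK/Bad.
const score = Number($json.score); // assumed field name for the survey score

let happiness;
if (score >= 9) happiness = 'Great';     // e.g., 9-10 on a 10-point CSAT scale
else if (score >= 6) happiness = 'OK';   // e.g., 6-8
else happiness = 'Bad';                  // e.g., 1-5

return { json: { ...$json, happiness } };
```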
by Jonathan Reeve
Who's it for
Content creators, social media managers, and marketing teams who want to automate image editing and Instagram posting workflows using AI-powered image analysis and generation.

What it does
This workflow automatically processes images stored in Airtable, analyzes them using AI vision models, generates optimized editing prompts, creates new variations using Google's Gemini AI, and posts the results directly to Instagram. The entire process is triggered via webhook and includes comprehensive error handling and status tracking.

How it works
The workflow begins when triggered via webhook with an Airtable record ID. It fetches the original image, analyzes its visual elements using GPT-4 Vision, then uses that analysis along with user-specified editing parameters (composition, lighting, style, atmosphere, color palette, text overlay) to generate an optimized prompt. Google Gemini AI then creates a new image based on these specifications, which is uploaded back to Airtable and posted to Instagram via the Graph API (a sketch of the two-step publish appears at the end of this section).

Requirements
- Airtable account with configured base and tables
- OpenAI API key for image analysis
- Google Gemini API key for image generation
- Meta Developer account with Instagram Graph API access
- Instagram Business account connected to a Facebook Page

How to set up
1. Configure your Airtable base with the required fields: Status, Picture, Core Subject, Setting, Composition, Lighting, Style, Atmosphere, Color Palette, Text Overlay, Post Description
2. Set up OpenAI credentials in n8n for the image analysis node
3. Configure Google Gemini API credentials for image generation
4. Set up Meta Graph API credentials for Instagram posting
5. Update the Airtable base IDs and table IDs in all Airtable nodes
6. Configure your Instagram Business Account ID in the Instagram posting nodes
7. Test the webhook URL and ensure proper connectivity

How to customize
- Modify the image analysis prompt in the "Analyze image" node to focus on different visual elements
- Adjust the Gemini generation parameters (temperature, max tokens) for different creative outputs
- Add additional social media platforms by duplicating the Instagram posting logic
- Customize error handling and status updates based on your workflow needs
- Add image format conversion or resizing nodes if needed for Instagram requirements
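The sketch below illustrates the two-step Instagram Graph API publish the workflow performs: create a media container, then publish it. The user ID, token, and input field names ($json.image, $json.caption) are placeholders and assumptions.

```javascript
// n8n Code node sketch of the Instagram Graph API container-then-publish flow.
const IG_USER_ID = '17841400000000000'; // your Instagram Business Account ID (placeholder)
const ACCESS_TOKEN = 'EAAB...';         // your Graph API access token (placeholder)
const base = 'https://graph.facebook.com/v19.0';

// Step 1: create the media container from a public image URL plus caption.
const createParams = new URLSearchParams({
  image_url: $json.image,
  caption: $json.caption,
  access_token: ACCESS_TOKEN,
});
const container = await fetch(`${base}/${IG_USER_ID}/media?${createParams}`, {
  method: 'POST',
}).then((r) => r.json());

// Step 2: publish the container once Instagram has processed it.
const publishParams = new URLSearchParams({
  creation_id: container.id,
  access_token: ACCESS_TOKEN,
});
const published = await fetch(`${base}/${IG_USER_ID}/media_publish?${publishParams}`, {
  method: 'POST',
}).then((r) => r.json());

return { json: published };
```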
by vinci-king-01
Enterprise Knowledge Search with GPT-4 Turbo, Google Drive & Academic APIs

How it works
This workflow provides an enterprise-grade RAG (Retrieval-Augmented Generation) system that intelligently searches multiple sources and generates AI-powered responses using GPT-4 Turbo.

Key Steps
1. Form Input – collects user queries with customizable search scope, response style, and language preferences
2. Intelligent Search – routes queries to appropriate sources (web, academic papers, news, internal documents)
3. Data Aggregation – unifies and processes information from multiple sources with quality scoring (a minimal sketch follows this section)
4. AI Processing – uses GPT-4 Turbo to generate context-aware, source-grounded responses
5. Response Enhancement – formats outputs in various styles (comprehensive, concise, technical, etc.)
6. Multi-Channel Delivery – delivers results via webhook, email, Slack, and optional PDF generation

Data Sources & AI Models

Search Sources
- **Web Search**: Google, Bing, DuckDuckGo integration
- **Academic Papers**: arXiv, PubMed, Google Scholar via Crossref API
- **News Articles**: News API, RSS feeds, real-time news
- **Technical Documentation**: GitHub, Stack Overflow, documentation sites
- **Internal Knowledge**: Google Drive, Confluence, Notion integration

AI Models
- **GPT-4 Turbo**: Primary language model for response generation
- **Embedding Models**: For semantic search and similarity matching
- **Custom Prompts**: Specialized prompts for different response styles

Set up steps
Setup time: 15-20 minutes
1. Configure API credentials – set up OpenAI API, News API, Google Drive, and other service credentials
2. Set up search sources – configure academic databases, news APIs, and internal knowledge sources
3. Connect analytics – link Google Sheets for usage tracking and performance monitoring
4. Configure notifications – set up Slack channels and email templates for automated alerts
5. Test the workflow – run sample queries to verify all components are working correctly

Keep detailed configuration notes in sticky notes inside your workflow.
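As a rough sketch of the data-aggregation idea, the Code node below unifies hits from several sources and attaches a naive quality score. The weights and field names are illustrative assumptions, not values from the workflow.

```javascript
// n8n Code node sketch ("Run Once for All Items"): score and rank search hits.
const items = $input.all().map((i) => i.json);

const scored = items.map((hit) => {
  let score = 0;
  if (hit.source === 'academic') score += 3;        // peer-reviewed sources rank higher
  if (hit.source === 'internal') score += 2;        // internal docs are trusted
  if ((hit.snippet || '').length > 200) score += 1; // richer snippets score better
  return { ...hit, qualityScore: score };
});

scored.sort((a, b) => b.qualityScore - a.qualityScore);
return scored.map((json) => ({ json }));
```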
by Axiomlab.dev
HubSpot Lead Refinement

🚀 How it works
1. Triggers:
   - HubSpot Trigger – fires when contacts are created/updated.
   - Manual Trigger – run on demand for testing or batch checks.
2. Get Recently Created/Updated Contacts – pulls fresh contacts from HubSpot.
3. Edit Fields (Set) – maps key fields (First Name, Last Name, Email) for the Agent.
4. AI Agent – first reads your Google Doc (via the Google Docs tool) to learn the research steps and output format, then uses SerpAPI (Google engine) to locate the contact's likely LinkedIn profile and produce a concise result.
5. Code – Remove Think Part – cleans the model output (removes hidden "think" blocks and formatting) so only the final answer remains (a minimal sketch follows this section).
6. HubSpot Update – writes the cleaned LinkedIn URL to the contact (via email match).

🔑 Required Credentials
- HubSpot App Token (Private App) – for the Get/Update contact nodes.
- HubSpot Developer OAuth (optional) – if you use the HubSpot Trigger node for event-based runs.
- Google Service Account – for the Google Docs tool (share your playbook doc with this service account).
- OpenRouter – for the OpenRouter Chat Model used by the AI Agent.
- SerpAPI – for targeted Google searches from within the Agent.

🛠️ Setup Instructions

HubSpot
1. Create a Private App and copy the Access Token.
2. Add or confirm the contact property linkedinUrl (Text).
3. Plug the token into the HubSpot nodes.
4. If using the HubSpot Trigger, connect your Developer OAuth app and subscribe to contact create/update events.

Google Docs (Living Instructions)
➡️ Sample configuration doc file
1. Copy the sample doc file and modify it to your needs.
2. Share the doc with your Google Service Account (Viewer is fine).
3. In the Read Google Docs node, paste the Document URL.

OpenRouter & SerpAPI
1. Add your OpenRouter key to the OpenRouter Chat Model credential.
2. Add your SerpAPI key to the SerpAPI tool node.
3. (Optional) In your Google Doc or Agent prompt, set sensible defaults for SerpAPI (engine=google, hl=en, gl=us, num=5, max 1–2 searches).

✨ What you get
- Auto-enriched contacts with a LinkedIn URL and profile insights (clean, validated output).
- A research process you can change anytime by editing the Google Doc – no workflow changes needed.
- Tight, low-noise searches via SerpAPI to keep costs down.

And that's it – publish and let the Agent enrich new leads automatically while you refine the rules in your doc. This allows handing off to a team who wouldn't necessarily tweak the automation nodes.
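A minimal sketch of the "Code – Remove Think Part" step: strip `<think>…</think>` reasoning blocks and stray markdown fences so only the final answer remains. The field names `output` and `linkedinUrl` are assumptions about the Agent's output shape and the downstream mapping.

```javascript
// n8n Code node sketch ("Run Once for Each Item"): clean the Agent output.
const raw = $json.output || '';

const cleaned = raw
  .replace(/<think>[\s\S]*?<\/think>/gi, '') // drop hidden reasoning blocks
  .replace(/```[a-z]*\n?|```/g, '')          // drop stray code fences
  .trim();

return { json: { linkedinUrl: cleaned } };
```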
by vinci-king-01
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

How it works
This workflow automatically monitors trending topics across multiple platforms and generates content strategy insights for marketing teams.

Key Steps
1. Daily Trigger – runs automatically every 24 hours to capture fresh trends and viral content.
2. Multi-Platform Scraping – uses AI-powered scrapers to analyze trends from LinkedIn, Twitter, Instagram, Google Trends, BuzzSumo, and Reddit.
3. Trend Analysis – processes collected data to identify viral patterns, engagement metrics, and content opportunities.
4. Content Strategy Generation – creates actionable insights for content planning and social media strategy.
5. Team Notifications – sends comprehensive reports to Slack and updates content calendars in Google Sheets.

Set up steps
Setup time: 10-15 minutes
1. Configure ScrapeGraphAI credentials – add your ScrapeGraphAI API key for AI-powered trend scraping.
2. Set up Slack connection – connect your Slack workspace for team notifications.
3. Configure Google Sheets – set up a Google Sheets connection for content calendar updates.
4. Customize target industries – modify the configuration to focus on your specific industry verticals (AI, marketing, tech, etc.).
5. Adjust monitoring frequency – change the trigger timing based on your content planning needs.

What you get
- **Daily trend reports** with viral content analysis and engagement metrics
- **Content opportunity scores** for different platforms and topics
- **Automated content calendar updates** with trending topics and suggested content
- **Team notifications** with key insights and actionable recommendations
- **Competitive analysis** of viral content patterns and successful strategies
by Madame AI
Scrape Detailed GitHub Profiles to Google Sheets Using BrowserAct

This template is a sophisticated data enrichment and reporting tool that scrapes detailed GitHub user profiles and organizes the information into dedicated, structured reports within a Google Sheet. It is essential for technical recruiters, talent acquisition teams, and business intelligence analysts who need to dive deep into a pre-qualified list of developers to understand their recent activity, repositories, and technical footprint.

Self-Hosted Only
This workflow uses a community contribution and is designed and tested for self-hosted n8n instances only.

How it works
1. The workflow is triggered manually but can be started by a Schedule Trigger or by integrating directly with a candidate sourcing workflow (like the "Source Top GitHub Contributors" template).
2. A Google Sheets node reads a list of target GitHub user profile URLs from a master candidate sheet.
3. The Loop Over Items node processes each user one by one.
4. A Slack notification is sent at the beginning of the loop to announce that the scraping process has started for the user.
5. A BrowserAct node visits the user's GitHub profile URL and scrapes all available data, including profile info, repositories, and social links.
6. A custom Code node (labeled "Code in JavaScript") performs a critical task: it cleans, fixes, and consolidates the complex, raw scraped data into a single, clean JSON object (a rough sketch follows this section).
7. The workflow then dynamically manages your output: it creates a new sheet dedicated to the user (named after them) and clears it to ensure a fresh report every time.
8. The consolidated data is separated into three paths: main profile data, links, and repositories.
9. Three final Google Sheets nodes then append the structured data to the user's dedicated sheet, creating a clear, multi-section report (User Data, User Links, User Repositories).

Requirements
- **BrowserAct** API account for web scraping
- **BrowserAct** "Scraping GitHub Users Activity & Data" template
- **BrowserAct** "Source Top GitHub Contributors by Language & Location" template output
- **BrowserAct** n8n Community Node (n8n Nodes BrowserAct)
- **Google Sheets** credentials for input (candidate list) and structured output (individual user sheets)
- **Slack** credentials for sending notifications

Need Help?
- How to Find Your BrowserAct API Key & Workflow ID
- How to Connect n8n to BrowserAct
- How to Use & Customize BrowserAct Templates
- How to Use the BrowserAct n8n Community Node
- Workflow Guidance and Showcase: GitHub Data Mining – Extracting User Profiles & Repositories with n8n
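A rough sketch of what the "Code in JavaScript" consolidation node does: merge the raw BrowserAct output into one clean JSON object per user. The input field names here are assumptions about the scraper's payload, not the actual template schema.

```javascript
// n8n Code node sketch ("Run Once for Each Item"): consolidate scraped data.
const raw = $json;

const profile = {
  username: raw.username ?? '',
  bio: (raw.bio ?? '').trim(),
  followers: Number(raw.followers) || 0,
  links: (raw.links ?? []).filter(Boolean),       // drop empty link entries
  repositories: (raw.repositories ?? []).map((r) => ({
    name: r.name,
    stars: Number(r.stars) || 0,
    language: r.language ?? 'Unknown',
  })),
};

return { json: profile };
```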
by Ranjan Dailata
This workflow automatically scrapes Amazon price-drop data via Decodo, extracts structured product details with OpenAI, generates summaries and sentiment insights for each item, and saves everything to Google Sheets — creating a fully automated price-intelligence pipeline.

Disclaimer
Please note: this workflow is only available on n8n self-hosted, as it makes use of the community node for Decodo web scraping.

Who this is for
This workflow is designed for e-commerce analysts, product researchers, price-tracking teams, and affiliate marketers who want to:
- Monitor daily Amazon product price drops automatically.
- Extract key information such as product name, price, discount, and links.
- Generate AI-driven summaries and sentiment insights on the latest deals.
- Store all structured data directly in Google Sheets for trend analysis and reporting.

What problem this workflow solves
- Eliminates the need for manual data scraping or tracking.
- Turns unstructured web data into structured datasets.
- Adds AI-generated summaries and sentiment analysis for smarter decision-making.
- Enables automated, daily price intelligence tracking across multiple product categories.

What this workflow does
This automation combines Decodo's web scraping, OpenAI GPT-4.1-mini, and Google Sheets to deliver an end-to-end price intelligence system.
1. Trigger & Setup – manually start the workflow and input your price-drop URL (default: CamelCamelCamel Daily Drops).
2. Web Scraping via Decodo – Decodo scrapes the Amazon price-drop listings and extracts product details (title, price, savings, product link).
3. LLM-Powered Data Structuring – the extracted content is sent to OpenAI GPT-4.1-mini to format and clean the output into structured JSON fields.
4. Loop & Deep Analysis – each product URL is revisited by Decodo for content enrichment, and the AI performs two analyses per product: Summarization (a comprehensive summary of the product) and Sentiment Analysis (tone – positive/neutral/negative – plus a sentiment score and key topics).
5. Aggregation & Storage – all enriched results are merged and aggregated, and the structured data is automatically appended to a connected Google Sheet.

End Result: a ready-to-use dataset showing each price-dropped product, its summary, sentiment polarity, and key highlights, updated with every run.

Setup
Pre-requisite: please make sure to install the n8n custom node for Decodo.
1. Import and Connect Credentials – import the workflow into your n8n self-hosted instance and connect:
   - **OpenAI API (GPT-4.1-mini)** – for summarization and sentiment analysis
   - **Decodo API** – for real-time price-drop scraping
   - **Google Sheets OAuth2** – to save structured results
2. Configure Input Fields – in the "Set input fields" node, update the price_drop_url to your target URL (e.g., https://camelcamelcamel.com/top_drops?t=weekly).
3. Run the Workflow – click "Execute Workflow" or schedule it to run daily to automatically fetch and analyze new price-drop listings.
4. Check Output – the aggregated data is saved to a Google Sheet (Pricedrop Info). Each record contains:
   - Product name
   - Current price and savings
   - Product link
   - AI-generated summary
   - Sentiment classification and score

How to customize this workflow
- Change Source – replace the price_drop_url with another CamelCamelCamel or Amazon Deals URL, or add multiple URLs and loop through them for category-based price tracking.
- Modify Extraction Schema – in the Structured Output Parser, modify the JSON schema to include fields like category, brand, rating, or availability (an example schema follows this section).
- Tune AI Prompts – edit the Summarize Content and Sentiment Analysis nodes to add tone analysis (e.g., promotional vs. factual) or include competitive product comparison.
- Integrate More Destinations – replace Google Sheets with Airtable (no-code dashboards), PostgreSQL/MySQL (large-scale storage), or Notion or Slack (instant price-drop alerts).
- Automate Scheduling – add a Cron Trigger node to run this workflow daily or hourly.

Summary
This workflow creates a fully automated price intelligence system that:
- Scrapes Amazon product price drops via Decodo.
- Extracts structured data with OpenAI GPT-4.1-mini.
- Generates AI-powered summaries and sentiment insights.
- Updates a connected Google Sheet with each run.
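As an example of the schema customization mentioned above, the object below extends a Structured Output Parser schema with the optional fields. The base field names are assumptions about the parser's existing schema; adapt them to the workflow's actual output.

```javascript
// Illustrative JSON schema for the Structured Output Parser.
const productSchema = {
  type: 'object',
  properties: {
    product_name: { type: 'string' },
    current_price: { type: 'string' },
    savings: { type: 'string' },
    product_link: { type: 'string' },
    // New optional fields:
    category: { type: 'string' },
    brand: { type: 'string' },
    rating: { type: 'number' },
    availability: { type: 'string' },
  },
  required: ['product_name', 'current_price', 'product_link'],
};
```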
by Jose Castillo
This workflow scrapes Google Maps business listings (e.g., carpenters in Tarragona) to extract websites and email addresses — perfect for lead generation, local business prospecting, or agency outreach.

🔧 How it works
1. Manual Trigger – start manually using the "Test Workflow" button.
2. Scrape Google Maps – fetches the HTML from a Google Maps search URL.
3. Extract URLs – parses all business links from the page.
4. Filter Google URLs – removes unwanted Google/tracking links.
5. Remove Duplicates + Limit – keeps unique websites (default: 100).
6. Scrape Site – fetches each website's HTML.
7. Extract Emails – detects valid email addresses (a minimal sketch follows this section).
8. Filter Out Empties & Split Out – isolates each valid email per site.
9. (Optional) Add to Google Sheet – appends results to your Sheet.

💼 Use cases
- Local business leads: find emails of carpenters, dentists, gyms, etc., in your city.
- Agency outreach: collect websites and contact emails to pitch marketing services.
- B2B prospecting: identify businesses by niche and region for targeted campaigns.

🧩 Requirements
- n8n instance with HTTP Request and Code nodes enabled.
- (Optional) Google Sheets OAuth2 credentials. Tip: add a "Google Sheets → Append Row" node and connect it to your account.

🔒 Security
No personal or sensitive data is included in the template — only credential references. If sharing this workflow, anonymize the "credentials" field before publishing.
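A minimal sketch of the "Extract Emails" step: pull email addresses out of each scraped page's HTML with a simple regex, then de-duplicate. The input field names `data` and `website` are assumptions about the preceding HTTP Request node's output.

```javascript
// n8n Code node sketch ("Run Once for Each Item"): regex-based email extraction.
const html = $json.data || '';

const matches = html.match(/[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/g) || [];

// Drop obvious false positives (image filenames etc.) and duplicates.
const emails = [...new Set(matches)].filter((e) => !/\.(png|jpg|gif|svg)$/i.test(e));

return emails.map((email) => ({ json: { email, website: $json.website } }));
```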
by Davide
This workflow automates the process of creating short videos from multiple image references (up to 7 images) and uploading them to TikTok and YouTube. It uses the "Vidu Reference to Video" model, a video-generation API that transforms a user-provided prompt and image set into a consistent, AI-generated video. The process is initiated via a user-friendly web form.

Advantages
- ✅ Consistent Video Creation: uses multiple reference images to maintain subject consistency across frames.
- ✅ Easy Input: just a simple form with a prompt + image URLs.
- ✅ Automation: no manual waiting; the workflow checks status until the video is ready.
- ✅ SEO Optimization: automatically generates a catchy, optimized YouTube title using AI.
- ✅ Multi-Platform Publishing: uploads directly to Google Drive, YouTube, and TikTok in one flow.
- ✅ Time Saving: removes the repetitive tasks of video generation, download, and manual uploading.
- ✅ Scalable: can run periodically or on demand, perfect for content creators and marketing teams.
- ✅ UGC & Social Media Ready: designed for creating viral short videos optimized for platforms like TikTok and YouTube Shorts.

How It Works
1. Form Trigger: a user submits a web form with two key pieces of information: a text Prompt describing the desired video and a list of Reference images (URLs separated by commas or new lines).
2. Data Processing: the workflow processes the submitted image URLs, converting them from a text string into a proper array format for the AI API.
3. AI Video Generation: the processed data (prompt and image array) is sent to the Fal.ai VIDU API endpoint (reference-to-video) to start the video generation job. This node returns a request_id.
4. Status Polling: the workflow enters a loop where it periodically checks the status of the generation job using the request_id. It waits 60 seconds, checks whether the status is "COMPLETED", and repeats until it is (a minimal sketch appears at the end of this section).
5. Result Retrieval: once the video is ready, the workflow fetches the URL of the generated video file.
6. Title Generation: simultaneously, the original user prompt is sent to an AI model (GPT-4o-mini via OpenRouter) to generate an optimized, engaging title for the social media post.
7. Upload & Distribution: the video file is downloaded from the generated URL. A copy is saved to a specified Google Drive folder for storage, and the video, along with the AI-generated title, is automatically uploaded to YouTube and TikTok via the Upload-Post.com API service.

Set Up Steps
This workflow requires configuration and API keys from three external services to function correctly.

Step 1: Configure Fal.ai for Video Generation
1. Create an account and obtain your API key.
2. In the "Create Video" HTTP node, edit the "Header Auth" credentials and set:
   - Name: Authorization
   - Value: Key YOUR_FAL_API_KEY (replace YOUR_FAL_API_KEY with your actual key)

Step 2: Configure Upload-Post.com for Social Media Uploads
1. Get an API key from your Upload-Post Manage Api Keys dashboard (10 free uploads per month).
2. In both the "HTTP Request" (YouTube) and "Upload on TikTok" nodes, edit their "Header Auth" credentials and set:
   - Name: Authorization
   - Value: Apikey YOUR_UPLOAD_POST_API_KEY (replace YOUR_UPLOAD_POST_API_KEY with your actual key)
3. Crucial: in the body parameters of both upload nodes, find the user field and replace YOUR_USERNAME with the exact name of the social media profile you configured on Upload-Post.com (e.g., my_youtube_channel).

Step 3: Configure Google Drive (Optional Storage)
The "Upload Video" node is pre-configured to save the video to a Google Drive folder named "Fal.run". Ensure your Google Drive credentials in n8n are valid and that you have access to this folder, or change the folderId parameter to your desired destination.

Step 4: Configure AI for Title Generation
The "Generate title" node uses OpenAI to access the gpt-5-mini model.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
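An illustrative sketch of the status-polling step against the Fal.ai queue API follows. The exact endpoint path is an assumption; prefer the status_url returned by the "Create Video" call, and note that YOUR_FAL_API_KEY and the input field name `request_id` are placeholders.

```javascript
// n8n Code node sketch: check whether the Fal.ai video job is done.
const YOUR_FAL_API_KEY = 'fal-...'; // placeholder key
const requestId = $json.request_id;
const statusUrl = `https://queue.fal.run/fal-ai/vidu/requests/${requestId}/status`; // assumed path

const res = await fetch(statusUrl, {
  headers: { Authorization: `Key ${YOUR_FAL_API_KEY}` },
});
const { status } = await res.json();

// The workflow loops back through a 60-second Wait node until this is COMPLETED.
return { json: { requestId, completed: status === 'COMPLETED' } };
```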