by Yang
👥 Who is this for?
This workflow is ideal for virtual assistants, researchers, developers, automation specialists, and data analysts who need to regularly extract and organize structured product information (like books) from a website. It's especially useful for those working with catalog-based websites who want to automate the extraction and delivery of clean, sorted data.

🧩 What problem is this solving?
Manually copying product listings like book titles and prices from a website into a spreadsheet is slow and repetitive. This automation solves that problem by scraping content with Dumpling AI, extracting the right data using CSS selectors, and formatting it into a clean CSV file that is sent to your email—all triggered automatically when a new URL is added to Google Sheets.

⚙️ What this workflow does
This template automates an entire content scraping and delivery process:
- Watches a Google Sheet for new URLs
- Scrapes the HTML content of the given webpage using Dumpling AI
- Uses CSS selectors in the HTML node to extract each book from the page
- Splits the HTML array into individual items
- Extracts the book title and price from each HTML block
- Sorts the books in descending order based on price
- Converts the sorted data to a CSV file
- Sends the CSV via email using Gmail

🛠️ Setup
Google Sheets
- Create a sheet titled something like URLs
- Add your product listing URLs (e.g., http://books.toscrape.com)
- Connect the Google Sheets trigger node to your sheet
- Ensure you have proper credentials connected
Dumpling AI
- Create an account at Dumpling AI and generate your API key
- Set the HTTP Method to POST and pass the URL dynamically from the Google Sheet
- Use Header Auth to include your API key in the request header
- Make sure "cleaned": "True" is included in the body for optimized HTML output (see the request sketch at the end of this section)
HTML Node
- The first HTML node extracts the main book container blocks using: .row > li
- The second HTML node parses out the individual fields: title: h3 > a (via the title attribute), price: .price_color
Sort Node
- Sorts books by price in descending order
- Note: the price is extracted as a string; make sure it is parsable if you plan to use numeric filtering later
Convert to CSV
- The JSON data is passed into a Convert node and transformed into a CSV file
Gmail
- Sends the CSV as an attachment to a designated email address

🔄 How to customize this workflow
**Extract more data**: Add more CSS selectors in the second HTML node to pull fields like author, availability, or product links
**Switch destinations**: Replace Gmail with Slack, Google Drive, Dropbox, or another platform
**Adjust sorting**: Sort alphabetically or based on another extracted value
**Use a different source**: As long as the site structure is consistent, this can scrape any listing-like page
**Trigger differently**: Use a webhook, form submission, or schedule trigger instead of Google Sheets

⚠️ Dependencies and Notes
- This workflow uses Dumpling AI to perform the web scraping. This requires an API key and uses credits per request.
- The HTML node depends on valid CSS selectors. If the site layout changes, the selectors may need to be updated.
- Ensure you're not scraping content from websites that prohibit automated scraping.
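For reference, here is a minimal sketch of the scrape request the HTTP Request node sends, written as plain JavaScript. The endpoint path and response shape are assumptions — only the Header Auth pattern and the "cleaned": "True" body field come from the setup above, so verify the rest against Dumpling AI's documentation.

```javascript
// Hypothetical endpoint and response shape — check Dumpling AI's docs for the exact contract.
const response = await fetch("https://app.dumplingai.com/api/v1/scrape", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.DUMPLING_API_KEY}`, // Header Auth credential in n8n
    "Content-Type": "application/json",
  },
  // "url" comes from the new Google Sheets row; "cleaned" asks for optimized HTML
  body: JSON.stringify({ url: "http://books.toscrape.com", cleaned: "True" }),
});
const { content } = await response.json(); // assumed field name for the returned HTML
```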
by GYANENDRA DWIVEDI
🚀 WhatsApp Automation Template
Designed & Developed by Infridet Solutions Private Limited

🔧 Objective:
Automate your lead nurturing and sales process from YouTube/Instagram → Landing Page → CRM → Email → WhatsApp → Sales → Deal Closure using tools like:
🌐 WordPress (Landing Page + Fluent Forms)
🧾 Google Sheets (Backup Log)
📩 FluentCRM (Lead Tagging + Email Sequences)
💬 Whinta.com (WhatsApp Messaging API)
⚙️ N8N (Workflow Automation Engine)

🧩 System Flow Overview:
- Lead Source: YouTube or Instagram CTA
- Landing Page: Built on WordPress with a story-driven design
- Form Capture: Fluent Forms with dynamic input fields
- Data Sync: Backup to Google Sheets; push lead to FluentCRM and tag as New Lead
- Email Sequence: Warm-up emails (1 to 5) that introduce your offer or service
- WhatsApp Outreach: Send a personalized message via Whinta, triggered 1 hour after form fill or the last email
- Sales Follow-Up: Sales team handles replies manually; CRM tag updated to Customer upon closing

📁 Folder Structure (Optional Git/Zip File):
```
📦 WhatsApp-Automation-Infridet/
│
├── whatsapp-automation-n8n.json   # N8N Flowchart Import File
├── email-templates.docx           # Warm-up Email Scripts
├── whinta-api-integration.pdf     # API Documentation
├── crm-tagging-notes.txt          # CRM Tag Setup Details
└── readme.md                      # This Instruction File
```

🛠️ Required Integrations & Setup
✅ Fluent Forms (WordPress)
- Embed a form with Name, Email, Phone
- Enable a webhook to N8N: /lead-capture
✅ Google Sheets
- Use the n8n-nodes-base.googleSheets node
- Capture name, email, phone, source, timestamp
✅ FluentCRM
- REST API enabled
- Push the contact and assign the tag New Lead
- Set up email automation via a tag trigger
✅ SMTP Email (Optional)
- Use Gmail SMTP or Brevo
- Trigger an email on form submission
✅ Whinta.com (WhatsApp API)
- Send a POST request
- Payload includes phone, message, sender_id
- Customize the message with personalization (see the request sketch at the end of this section)

💬 Sample WhatsApp Message:
Hey {{name}}, Gyan here from Account Craft 👋
I saw your form submission – would you like help in starting your YouTube journey this week?
Let me know. I'm just one text away. ✅

📧 Sample Email (Warmup Day 1):
> Subject: Welcome to Account Craft 🚀
> Body:
> Hi {{name}},
>
> I’m Gyan from Account Craft. Thanks for joining us!
> Here’s what’s coming next: exclusive videos, personalized tips, and real support to get your YouTube channel earning.
>
> Let’s go!
> – Gyan

🔁 CRM Tag Updates:
| Action            | Tag Assigned |
|-------------------|--------------|
| On form fill      | New Lead     |
| After WhatsApp    | Engaged      |
| After sale closed | Customer     |

📌 Final Output:
Once completed, the system will:
- Log all leads into a database
- Automatically send emails and WhatsApp messages
- Notify your sales team
- Update lead status without manual entry

> Automation Template Designed & Deployed by
> Infridet Solutions Private Limited
> Smart Integrations. Seamless Business.
> 🌐 www.infridetsolutions.com | 📞 +91-8853354829
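The Whinta POST request could look roughly like the sketch below. The endpoint URL is hypothetical and the field names (phone, message, sender_id) come from the payload list above — confirm both against Whinta's API documentation before wiring up the HTTP Request node.

```javascript
// Sample lead as it might arrive from the Fluent Forms webhook (illustrative data).
const lead = { name: "Asha", phone: "+919876543210" };

const payload = {
  phone: lead.phone,               // lead's number captured by the form
  sender_id: "YOUR_SENDER_ID",     // your registered WhatsApp sender
  message: `Hey ${lead.name}, Gyan here from Account Craft 👋 ...`, // personalized text
};

// Endpoint path is an assumption — replace with the URL from Whinta's docs.
await fetch("https://api.whinta.com/v1/send", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.WHINTA_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify(payload),
});
```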
by Corentin Ribeyre
This template can be used to search for an email address with Icypeas. Be sure to have an active account to use this template.

How it works
This workflow can be divided into three steps:
1. The workflow initiates with a manual trigger (On clicking 'execute').
2. It connects to your Icypeas account.
3. It performs an HTTP request to search for an email address.

Set up steps
- You will need a working Icypeas account to run the workflow and to get your API Key, API Secret and User ID.
- You will need a person's first name, last name and domain/company name to perform the search.
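A rough sketch of the search request follows. Icypeas authenticates with a signed request built from your API key and secret; the exact endpoint, signing scheme, and field names shown here are assumptions, so verify every detail against Icypeas's API documentation before relying on it.

```javascript
import { createHmac } from "node:crypto";

// All endpoint paths, header names, and body fields below are assumptions.
const API_KEY = "YOUR_API_KEY";
const API_SECRET = "YOUR_API_SECRET";
const url = "https://app.icypeas.com/api/email-search";

// Assumed HMAC signing scheme: hash of method + path + timestamp with the API secret.
const timestamp = new Date().toISOString();
const signature = createHmac("sha1", API_SECRET)
  .update(`post${new URL(url).pathname}${timestamp}`.toLowerCase())
  .digest("hex");

const res = await fetch(url, {
  method: "POST",
  headers: {
    Authorization: `${API_KEY}:${signature}`,
    "X-ROCK-TIMESTAMP": timestamp,
    "Content-Type": "application/json",
  },
  // First name, last name and domain/company, as listed in the setup steps.
  body: JSON.stringify({ firstname: "Jane", lastname: "Doe", domainOrCompany: "example.com" }),
});
console.log(await res.json());
```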
by Paulo Ramirez
Upload your CRM contacts to telli and schedule AI voice-agent calls

Introduction to telli and AI Voice-Agent Calls
telli is an innovative platform that provides AI-powered voice agents capable of making calls and performing tasks tailored to specific customer use cases. These AI voice-agents can handle a wide range of communication tasks, from appointment scheduling to customer support, with remarkable efficiency and natural conversation flow.
This template is designed for businesses and organizations looking to automate their outbound calling processes using telli's AI voice-agents in conjunction with Airtable as their CRM. It solves the problem of manual call scheduling and data transfer between your CRM and calling system, saving time and reducing human error.

Prerequisites
- telli account
- Airtable base with contact information
- n8n instance

Step-by-Step Setup Guide
1. n8n Setup: Create a new workflow in n8n. Add the Airtable node to connect to your CRM table.
2. telli API Configuration: Log in to your telli dashboard. Locate and copy your API key under telli - Settings - API/Webhooks.
3. Workflow Configuration: Add two HTTP Request nodes to your n8n workflow. Set the "Authorization" header in both POST requests, replacing the value with your telli API key. Configure the first request to use the /add-contact endpoint. Set up the second request to use the /schedule-call endpoint.
4. Data Mapping: Map the relevant fields from your Airtable node to the telli API requests.
5. Testing and Activation: Run a test execution of your workflow. Once satisfied with the results, activate the workflow.

API Endpoint Details
Add Contact Endpoint
**URL**: https://api.telli.com/v1/add-contact
**Method**: POST
**Headers**: Authorization: YOUR-API-KEY, Content-Type: application/json
**Payload**:
```json
{
  "external_contact_id": "string",
  "salutation": "string",
  "first_name": "string",
  "last_name": "string",
  "phone_number": "string",
  "email": "jsmith@example.com",
  "contact_details": {},
  "timezone": "string"
}
```

Schedule Call Endpoint
**URL**: https://api.telli.com/v1/schedule-call
**Method**: POST
**Headers**: Authorization: YOUR-API-KEY, Content-Type: application/json
**Payload**:
```json
{
  "contact_id": TELLI-CONTACT-ID,
  "agent_id": "string",
  "max_retry_days": 123,
  "call_details": {
    "message": "Hello, this is your friendly reminder!",
    "questions": [
      {
        "fieldName": "email",
        "neededInformation": "email of the customer",
        "exampleQuestion": "What is your email address?",
        "responseFormat": "email string"
      }
    ]
  },
  "override_from_number": "string"
}
```

Use Cases
This template is versatile and can be applied to various scenarios, including:
**Lead Qualification**: Automatically schedule calls to new leads entered in your CRM.
**Appointment Reminders**: Set up calls to remind clients of upcoming appointments.
**Customer Feedback**: Schedule follow-up calls after product deliveries or service completions.

Uploading Multiple Contacts
For bulk operations, you have two options:
1. Loop Node: Include a Loop node in your n8n workflow to process multiple contacts sequentially.
2. Batch Endpoints: Instead of /add-contact and /schedule-call, use telli's batch endpoints:
   - /add-contacts-batch: Add multiple contacts within an array.
   - /schedule-calls-batch: Schedule multiple calls at once.

Example of batch endpoint usage:
```json
{
  "contacts": [
    {"name": "John Doe", "phone": "+1234567890"},
    {"name": "Jane Smith", "phone": "+1987654321"}
  ]
}
```

By leveraging this template, you can seamlessly integrate your Airtable CRM with telli's powerful AI voice-agents, automating your outbound calling process and enhancing your customer communication strategy. A mapping sketch for batch uploads follows below.
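If you go the batch route, an n8n Code node can reshape the incoming Airtable items into the contacts array before the HTTP Request node. A minimal sketch, assuming your Airtable fields are named Name and Phone (adjust to your base):

```javascript
// n8n Code node: collect all incoming Airtable rows into one batch payload.
// Field names "Name" and "Phone" are assumptions — match them to your base.
const contacts = $input.all().map((item) => ({
  name: item.json.Name,
  phone: item.json.Phone,
}));

// Return a single item whose JSON becomes the body for /add-contacts-batch.
return [{ json: { contacts } }];
```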
by Mario
Dynamically switch between LLMs for AI Agents using LangChain Code

Purpose
This example workflow demonstrates a way to connect multiple LLMs to a single AI Agent/LangChain Node and programmatically use one – or, in this case, loop through them.

What it does
This AI workflow takes in customer complaints and generates a response that is validated before being returned. If the answer was not satisfactory, the response is generated again with a more capable model.

How it works
- A LangChain Code Node allows multiple LLMs to be connected to a single Basic LLM Chain.
- On every call only one LLM is actually connected to the Basic LLM Chain, determined by the index defined in a previous Node (see the sketch at the end of this section).
- The AI output is later validated by a Sentiment Analysis Node.
- If the result was not satisfactory, the workflow loops back to the beginning and executes the same query with the next available LLM.
- The loop ends either when the result passes the requirements or when all LLMs have been tried.

Setup
Clone the workflow and select the belonging credentials. You'll need an OpenAI account; alternatively, you can swap the LLM nodes with ones from a different provider like Anthropic after the import.

How to use
Beware that the order of the used LLMs is determined by the order in which they were added to the workflow, not by their position on the canvas. After cloning this workflow into your environment, open the chat and send this example message:
> I really love waiting two weeks just to get a keyboard that doesn’t even work. Great job. Any chance I could actually use the thing I paid for sometime this month?
Most likely you will see that the first validation fails, causing it to loop back to the generation node and try again with the next available LLM. Since AI responses are unpredictable, the results and number of tries will differ for each run.

Disclaimer
Please note that this workflow can only run on self-hosted n8n instances, since it requires the LangChain Code Node.
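For orientation, the selection logic inside the LangChain Code Node looks roughly like the sketch below. It assumes the node exposes the connected language models through this.getInputConnectionData (returning an array when several are connected) and that the index is set by a node named "Set Index" — adapt both to your cloned workflow.

```javascript
// LangChain Code Node (supply data): pick exactly one of the connected LLMs.
// Assumption: multiple models wired to the ai_languageModel input come back
// as an array, in the order the nodes were added to the workflow.
const llms = await this.getInputConnectionData('ai_languageModel', 0);
const index = $('Set Index').first().json.index; // assumed node and field name

// Clamp to the number of connected models so the last LLM is reused
// instead of throwing once every model has been tried.
return llms[Math.min(index, llms.length - 1)];
```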
by Sean Lon
Target Audience
You will find this workflow or template perfect if you are part of an internal talent acquisition team or recruitment agency, or are an HR professional or hiring manager seeking to bulk-automate the initial screening of CVs and resumes — e.g., automatically getting each candidate's shortlisted/rejected result together with its rationale and score. By eliminating manual evaluation and screening, you get a smart AI agent that provides a standardized, efficient, and scalable solution for handling large volumes of applications. With bulk automation, you can focus on strategic decision-making rather than tedious screening tasks, ensuring a faster, more accurate, and fairer hiring process.

Key focus
This workflow focuses on organized file-folder management, trackable candidate CVs, a maintainable job description, and an autonomous AI agent.
- **Organized Folder-File Structure** – CVs are automatically categorized based on their status, ensuring a structured workflow and easy retrieval.
- **Candidate Tracker** – A real-time tracking system records the state of each CV, allowing recruiters to monitor shortlisted, rejected, or KIV (Keep in View) candidates.
- **AI Agent for Decision Automation** – The AI autonomously orchestrates screening decisions, replacing manual LLM configurations with dynamic AI-driven evaluations for scalability and accuracy.
- **Maintainable Job Description Management** – A structured job description file ensures continuous updates, keeping hiring criteria flexible and aligned with recruitment needs.
- **Email Notifications** – The system automatically sends receipt confirmations upon processing completion, providing timely updates to recruiters.

Features - Workflow
Automated Resume Screening Workflow
This workflow leverages Groq Llama 4 for intelligent resume analysis, speeding up the screening process by generating a matching score, a result (shortlisted/rejected/KIV), and key insights/rationale on suitability for the provided job description.

Step-by-Step Process:
1. **Monitors Google Drive**: Listens for new CV resumes in Google Drive.
2. **Retrieve Resume**: Downloads the CV resumes from Google Drive.
3. **Extract Resume Data**: Extracts text content from the CV resume PDF files.
4. **Extract Job Description Data**: Extracts text content from the job description.
5. **Analyze with Groq**: Generates a matching score based on the job requirements [SCORE: 1-10], provides a decision on job suitability [SHORTLISTED/REJECTED/KIV], and provides actionable insights into job suitability [REASON]. (See the parsing sketch at the end of this section.)
This ensures a fast, efficient, and accurate screening process, eliminating manual evaluation.

Setup Guide
Step-by-Step Instructions
Ensure all credentials are ready and set up (Groq, Google Drive, Gmail, Google Sheets, Google Docs). View the official n8n documentation on node setup accordingly. See also the setup notes.

Folder & File Setup
1. Create a Google Drive folder like this (view directory example).
2. Create a job description like this (view file example).
3. Configure a tracker like this (Candidate Name, AI Score, AI Verdict, AI Reason) (view file example).
4. Customize the email report as you like.
You are ready to go!
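Since the Groq reply embeds the score, verdict, and rationale in the tagged format described above, a small Code node can pull the three fields into columns for the tracker sheet. A minimal sketch, assuming the model was prompted to answer with SCORE:, a bare SHORTLISTED/REJECTED/KIV verdict, and REASON: markers — adjust the regexes to your actual prompt:

```javascript
// n8n Code node: parse the LLM reply into tracker columns.
// Assumed reply shape: "SCORE: 8 ... SHORTLISTED ... REASON: Strong match on ..."
const reply = $input.first().json.text ?? "";

const score = Number((reply.match(/SCORE:\s*(\d+)/i) ?? [])[1] ?? 0);
const verdict = (reply.match(/\b(SHORTLISTED|REJECTED|KIV)\b/i) ?? [])[1] ?? "KIV";
const reason = (reply.match(/REASON:\s*([\s\S]+)/i) ?? [])[1]?.trim() ?? reply;

// One row per candidate: AI Score, AI Verdict, AI Reason.
return [{ json: { score, verdict: verdict.toUpperCase(), reason } }];
```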
by Mark de Jonge
About the workflow
The workflow reads every reply received from a cold email campaign and qualifies whether the lead is interested in a meeting. If the lead is interested, a deal is created in Pipedrive. You can add as many email inboxes as you need!

Setup:
1. Add credentials to the Gmail, OpenAI and Pipedrive nodes.
2. Add an in_campaign field in Pipedrive for persons. In Pipedrive, click on your credentials at the top right, go to Company settings > Data fields > Person and click Add custom field. Single option [TRUE/FALSE].
3. If you have only one email inbox, you can delete one of the Gmail nodes. If you have more than two email inboxes, you can duplicate a Gmail node as many times as you like. Just connect it to the Get email node, and you are good to go!
4. In the Gmail inbox nodes, select Inbox under Label names and uncheck Simplify.
by Ranjan Dailata
Who this is for?
The Automate Etsy Data Mining with Bright Data Scrape & Google Gemini workflow is designed for eCommerce analysts, product researchers, and AI developers seeking to extract actionable insights from Etsy listings at scale. It is ideal for:
- **eCommerce Entrepreneurs** - Researching product demand and competition.
- **Market Analysts** - Tracking pricing, reviews, and trends across Etsy categories.
- **Product Managers** - Identifying niche opportunities and design inspirations.
- **Data Scientists & AI Engineers** - Automating product intelligence pipelines.
- **Growth Hackers** - Leveraging Etsy insights to refine product-market fit.

What problem is this workflow solving?
Manually browsing Etsy to analyze product listings, pricing, reviews, and seller activity is slow, inconsistent, and unscalable. Scraping Etsy requires unlocking JavaScript-heavy content and structuring noisy data for analysis. This workflow solves this with:
- Automated and scalable scraping of Etsy product listings using Bright Data's infrastructure.
- Fully paginated, structured Etsy product data extraction via the Google Gemini LLM.
- Faster decision-making for product research and competitive analysis through fully automated paginated data extraction.

What this workflow does
- Receives input: sets the Etsy URL for the data extraction and analysis.
- Uses Bright Data's Web Unlocker to extract content from the relevant pages.
- Cleans and preprocesses the scraped content for readability.
- Sends the content to Google Gemini for structured, enriched extraction.
- Persists the enriched results to disk.
- Sends the response to a target system via a Webhook notification.

Setup
- Sign up at Bright Data.
- Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
- In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token. (See the request sketch at the end of this section.)
- A Google Gemini API key (or access through Vertex AI or a proxy).
- Update the Set Etsy Search Query node to set the brand content URL and the Bright Data zone name.
- Update the Webhook HTTP Request node with the Webhook endpoint of your choice.

How to customize this workflow to your needs
- **Input Sources**: Replace the static URL with dynamic input from Google Sheets, a Webhook, or Airtable to research multiple niches.
- **Prompt Customization**: Adjust the Gemini prompts to extract specific insights, for example: list key features of the product, or summarize the review themes.
- **Data Output Options**: Update the Webhook notification to save data to Google Sheets, Notion or Airtable, SQL/NoSQL, or Slack/Email.
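For reference, the Web Unlocker call behind the HTTP Request node looks roughly like this sketch. The zone name and search URL are placeholders, and you should confirm the endpoint and body fields against Bright Data's current documentation:

```javascript
// Web Unlocker request, as the HTTP Request node would send it.
const res = await fetch("https://api.brightdata.com/request", {
  method: "POST",
  headers: {
    Authorization: "Bearer XXXXXXXXXXXXXX", // Web Unlocker token from the Header Auth credential
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    zone: "your_web_unlocker_zone",                      // zone created in the setup step
    url: "https://www.etsy.com/search?q=ceramic+mugs",   // placeholder Etsy URL
    format: "raw",                                        // return the page content as-is
  }),
});
const html = await res.text(); // raw page content, passed on to cleanup + Gemini
```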
by Milorad Filipovic
How it works?
This workflow sends you a monthly list of live music events for a specific location. Events are fetched from songkick.com and delivered by email to the provided email address(es). Each event in the list has a link to a full SongKick page where you can see more details about the event and buy tickets.
Example email:

How to set it up?
The first thing this workflow needs is a location link for your desired city from the SongKick website. You can get it by following these steps:
1. Visit songkick.com and enter the city name in the search box.
2. From the results page, click the result that contains the location info. These will have the Location tag on top of the location name.
3. Once on the location page, copy the URL.
4. Back on the n8n workflow page, paste the URL in the location parameter of the node called Setup location and email.
The second thing it needs is the email address which will receive the monthly list. For this, simply enter it in the email field of the Setup location and email node. If you want to set multiple receivers, simply separate the email addresses with commas.

Required accounts
This workflow requires a properly set up Gmail account that will be used by the Gmail node to send emails. You can read more about how to create credentials for a Gmail node in the n8n documentation here.
by mariskarthick
QuantumDefender AI is a next-generation intelligent cybersecurity assistant designed to harness the symbolic strength of quantum computing's promise alongside cutting-edge AI capabilities. This sophisticated agent empowers SOC analysts, red teamers, and security researchers with rapid threat investigation, operational automation, and intelligent command execution — all driven by GPT-4 and integrated tools, accessible through Telegram or any other medium.

🔑 Key Features:
- Expert-Level Cybersecurity Research & Analysis: Leverages powerful AI models to deliver clean, detailed, domain-specific insights across detection, remediation, and offensive security.
- Command & Control: Executes Linux shell commands, autonomous scripts, and system operations securely in isolated environments.
- Real-Time Web Intelligence: Uses the integrated Langsearch API to provide timely internet research with contextual relevance.
- Calendar & Scheduling Automation: Manages Google Calendar events, or any similar application, dynamically from chat (create, update, delete, retrieve).
- Multi-Tool Orchestration: Combines calculator functions, internet searches, command execution, and messaging for comprehensive operational support.
- Telegram-native Chatbot: Delivers an adaptive, memory-informed, and interactive conversational experience with immediate typing indicators and high responsiveness.
- Conversation & Session Management: Maintains context-aware, session-based memory to enable smooth, multi-turn dialogues with individual users. Sends "typing…" indicators during processing to ensure an interactive, user-friendly chat experience. Operates exclusively within Telegram, delivering rich, timely responses and leveraging all Telegram bot capabilities.
- Execution Intelligence & Safety: Fully autonomous in deciding which tools to invoke, how frequently, and in what sequence to fulfill user requests comprehensively and responsibly. Operates within a secure temporary folder environment to contain all command executions safely and avoid persistent or harmful side effects. Enforces strict safety protocols to avoid running malicious or destructive commands, maintaining ethical standards and compliance.

Use Cases:
- Cybersecurity researchers and operators seeking an intelligent assistant to accelerate investigations and automate routine tasks.
- Red team professionals requiring on-the-fly command execution and information gathering integrated with tactical chat interactions.
- SOC teams aiming to augment their alert triage and incident handling workflows with AI-powered analysis and action.
- Anyone looking for a robust multi-tool AI chatbot integrated with real-world operational capabilities.

Setup Requirements:
- OpenAI API key for GPT-4.1-nano language processing.
- Telegram Bot API credentials with proper webhook setup to receive and respond to messages.
- Google OAuth credentials for Calendar integration, if calendar features are used.
- SSH access credentials for executing commands on remote hosts, if remote execution is enabled.
- Internet connectivity for the Langsearch web search API.

Customization & Extensibility:
The workflow is built modularly with n8n's flexible node system. Users can extend it by adding more tools, integrating other services (ticketing, threat intel, scanning tools), or modifying interaction logic to suit specialized operational needs and environments.

Created by Mariskarthick M
Senior Security Analyst | Detection Engineer | Threat Hunter | Open-Source Enthusiast
by Davi Saranszky Mesquita
Log errors and avoid sending too many emails

Use case
Most of the time, it's necessary to log all errors that occur. However, in some cases, a scheduled task or a service consuming excessive resources might trigger a surge of errors. To address this, we can log all errors but limit alerts to a maximum of one notification every 5 minutes (a sketch of this throttling check appears at the end of this section).

What this workflow does
This workflow can be configured to receive error events, or you can integrate it before your own error-handling logic. If used as the primary error handler, note that this flow will only add a database log entry and take no further action. You'll need to add your own alerts (e.g., email or push notifications). Below is an example of a notification setup I prefer to use. At the end, there's an error cleanup option. This feature is particularly useful in development environments.
If you already have an error-handling workflow, you can call this one as a sub-workflow. Its final steps include cleanup logic to reset the execution state and terminate the workflow.

Setup
Verify all Postgres nodes and credentials when using the 'Error Handling Sample'.

How to adjust it to your needs
1) You can set this workflow as a sub-workflow within your existing error-handling setup.
2) Alternatively, you can add the "Error Handling Sample" at the end of this workflow, which sends email and push notifications.

Configuration Requirements:
⚠️ You must create a database table for this to work! DDL of this sample:

```sql
create table p1gq6ljdsam3x1m."N8Err"
(
    id         serial primary key,
    created_at timestamp,
    updated_at timestamp,
    created_by varchar,
    updated_by varchar,
    nc_order   numeric,
    title      text,
    "URL"      text,
    "Stack"    text,
    json       json,
    "Message"  text,
    "LastNode" text
);

alter table p1gq6ljdsam3x1m."N8Err" owner to postgres;

create index "N8Err_order_idx" on p1gq6ljdsam3x1m."N8Err" (nc_order);
```

by Davi Saranszky Mesquita
https://www.linkedin.com/in/mesquitadavi/
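One way to implement the 5-minute throttle is a Postgres node that selects the newest row from N8Err, followed by a Code node like the sketch below whose output drives an IF node. This is a sketch under assumptions: the upstream node name and the idea of reusing created_at as the last-alert marker are illustrative, not part of the template itself.

```javascript
// n8n Code node: decide whether enough time has passed to send another alert.
// Assumes the previous Postgres node returned the newest N8Err row and that
// its created_at column approximates the last alert time. If every error is
// logged (not just alerted ones), track the last alert timestamp separately.
const FIVE_MINUTES_MS = 5 * 60 * 1000;

const lastRow = $input.first()?.json ?? {};
const lastAlertAt = lastRow.created_at ? new Date(lastRow.created_at).getTime() : 0;

const shouldAlert = Date.now() - lastAlertAt >= FIVE_MINUTES_MS;

// An IF node downstream routes to the email/push branch only when true.
return [{ json: { shouldAlert } }];
```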
by Krishna Kumar Eswaran
🧠 Problem This Solves:
For developers and creators, consistently posting quality content on LinkedIn can be time-consuming. This workflow automates the process by:
- Fetching the latest Dev.to articles
- Posting them to LinkedIn twice daily
- Preventing duplicates using Airtable
- Sending success alerts to Telegram
This ensures you're always active on LinkedIn, with zero manual effort.

👥 Who This Template Is For
- Developers who want to build their presence on LinkedIn
- Tech creators or solo founders looking to grow an audience
- Community/page managers who want regular, curated content
- Busy professionals aiming for consistent LinkedIn engagement without doing it manually

⚙️ Workflow Breakdown
This automation runs twice a day (9:00 AM and 7:00 PM) and performs the following steps:
1. Fetches Dev.to articles based on a tag
2. Checks Airtable to avoid reposting the same article
3. Posts to LinkedIn if it's new
4. Sends a Telegram message after posting successfully

🧩 Step-by-Step Setup Instructions
✅ 1. Airtable Configuration
Create a new base in Airtable with just one table and one column:
Table Name: PostedArticles
Column: ArticleID (Single line text – stores the unique ID of each Dev.to article posted)
This column is used to track posted articles and prevent duplicates.

🔗 2. Dev.to API Setup
Use the following endpoint in the HTTP Request node:
https://dev.to/api/articles?tag=YOUR_TAG_HERE&per_page=10
Replace YOUR_TAG_HERE with a tag like android, webdev, ai, etc.

💬 3. Telegram Bot Setup
Open @BotFather in Telegram and create a new bot
Save the bot token
Get your chat ID using @userinfobot or via the Telegram API
Add a Telegram node in n8n using this token and chat ID
This will notify you when a post is successfully published.

🧾 4. LinkedIn Setup
Create a LinkedIn Developer App
Use OAuth2 to connect it in n8n
Choose to post on either a user profile or a company page

🧱 5. n8n Workflow Structure
Here's the basic structure of the workflow:
1. Cron Node – Triggers at 9:00 AM and 7:00 PM daily
2. HTTP Request – Fetches latest articles from Dev.to
3. Airtable Search – Checks if ArticleID already exists
4. IF Node – Filters new vs. already-posted articles (see the sketch at the end of this section)
5. LinkedIn Post – Publishes new article
6. Airtable Create – Saves the new ArticleID
7. Telegram Message – Sends success confirmation

🛠️ Customization Tips
- Change the Dev.to tag in the API URL
- Modify the LinkedIn post format (add hashtags, emojis, personal notes)
- Adjust posting times in the Cron node
- Use additional filters (e.g., only post articles with a cover image or a certain word count)
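As an alternative to the Airtable Search + IF pair, a single Code node can do the duplicate check in one pass. A minimal sketch, assuming the search node is named "Airtable Search" (rename to match your workflow); the id field on incoming items is the article ID that Dev.to's API returns:

```javascript
// n8n Code node: keep only articles whose ID is not yet stored in Airtable.
const posted = new Set(
  $('Airtable Search').all().map((item) => String(item.json.ArticleID)),
);

const fresh = $input.all().filter((item) => !posted.has(String(item.json.id)));

// Downstream nodes (LinkedIn Post, Airtable Create, Telegram) receive only
// the unposted articles.
return fresh;
```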