by Lorena
This workflow collects tweets, stores them in MongoDB, analyses their sentiment, inserts them into a Postgres database, and posts positive tweets to a Slack channel.

- Cron node: schedules the workflow to run every day
- Twitter node: collects tweets
- MongoDB node: inserts the collected tweets into MongoDB
- Google Cloud Natural Language node: analyses the sentiment of the collected tweets
- Set node: extracts the sentiment score and magnitude (sketched below)
- Postgres node: inserts the tweets and their sentiment score and magnitude into a Postgres database
- IF node: filters tweets by positive and negative sentiment scores
- Slack node: posts tweets with a positive sentiment score to a Slack channel
- NoOp node: ignores tweets with a negative sentiment score
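A minimal sketch of the Set + IF logic, assuming the sentiment fields live under `documentSentiment.score` and `documentSentiment.magnitude` as in the public Google Cloud Natural Language API; adjust the paths if your node output differs.

```typescript
// Pull score/magnitude out of a Natural Language response and route by sign,
// mirroring the Set node (extraction) and the IF node (positive vs negative).

interface NLResponse {
  documentSentiment: { score: number; magnitude: number };
}

interface Tweet {
  id: string;
  text: string;
}

function toRow(tweet: Tweet, nl: NLResponse) {
  // Values the Set node prepares for the Postgres insert.
  return {
    tweet_id: tweet.id,
    text: tweet.text,
    score: nl.documentSentiment.score,         // -1.0 (negative) .. 1.0 (positive)
    magnitude: nl.documentSentiment.magnitude, // overall emotional strength
  };
}

function isPositive(row: { score: number }): boolean {
  // Same check the IF node performs before the Slack / NoOp branch.
  return row.score > 0;
}

const row = toRow(
  { id: "1", text: "Loving the new release!" },
  { documentSentiment: { score: 0.8, magnitude: 0.9 } },
);
console.log(isPositive(row) ? "post to Slack" : "ignore (NoOp)");
```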
by scrapeless official
> ⚠️ Disclaimer: This workflow uses Scrapeless and Claude AI via community nodes, which require a self-hosted n8n instance to work properly.

🔁 How It Works

This intelligent B2B lead generation workflow combines search automation, website crawling, AI analysis, and multi-channel output:

- It starts by using Scrapeless's Deep SERP API to find company websites from targeted Google Search queries.
- Each result is then individually crawled using Scrapeless's Crawler module, retrieving key business information from pages like /about, /contact, /services.
- The raw web content is processed via a Code node to clean, extract, and prepare structured data.
- The cleaned data is passed to Claude Sonnet (Anthropic), which analyzes and qualifies the lead based on content richness, contact data, and relevance.
- A filter step ensures only high-quality leads (e.g. lead score ≥ 6) are kept.
- Qualified leads are sent via a Discord webhook for real-time notification (this can be replaced with Slack, email, or CRM tools).

> 📌 The result is a fully automated system that finds, qualifies, and organizes B2B leads with high efficiency and minimal manual input.

✅ Pre-Conditions

Before using this workflow, make sure you have:

- An n8n self-hosted instance
- A Scrapeless account and API key (get it here)
- An Anthropic Claude API key
- A configured Discord webhook URL (or alternative notification service)

⚙️ Workflow Overview

Manual Trigger → Scrapeless Google Search → Item Lists → Scrapeless Crawler → Code (Data Cleaning) → Claude Sonnet → Code (Response Parser) → Filter → Discord Notification

🔨 Step-by-Step Breakdown

1. Manual Trigger – for testing purposes (can be replaced with Cron or Webhook)
2. Scrapeless Google Search – queries target B2B topics via Scrapeless's Deep SERP API
3. Item Lists – splits search results into individual items
4. Scrapeless Crawler – visits each company domain and scrapes structured content
5. Code Node (Data Cleaner) – extracts and formats content for LLM input
6. Claude Sonnet (via HTTP Request) – evaluates lead quality, relevance, and contact info
7. Code Node (Parser) – parses Claude's JSON response (see the sketch after this section)
8. IF Filter – filters leads based on score threshold
9. Discord Webhook – sends a formatted message with company info

🧩 Customization Guidance

You can easily adjust the workflow to match your needs:

- **Lead Criteria**: Modify the Claude prompt and scoring logic in the Code node
- **Output Channels**: Replace the Discord webhook with Slack, Email, Airtable, or any CRM node
- **Search Topics**: Change your query in the Scrapeless SERP node to find leads in different niches or countries
- **Scoring Threshold**: Adjust the filter logic (lead_score >= 6) to match your quality tolerance

🧪 How to Use

1. Insert your Scrapeless and Claude API credentials in the designated nodes
2. Replace or configure the Discord webhook (or alternative outputs)
3. Run the workflow manually (or schedule it)
4. View qualified leads directly in your chosen notification channel

📦 Output Example

Each qualified lead includes:

- 🏢 Company Name
- 🌐 Website
- ✉️ Email(s)
- 📞 Phone(s)
- 📍 Location
- 📈 Lead Score
- 📝 Summary of relevant content

👥 Ideal Users

This workflow is perfect for:

- **AI SaaS companies** targeting mid-market & enterprise leads
- **Marketing agencies** looking for B2B-qualified leads
- **Automation consultants** building scraping solutions
- **No-code developers** working with n8n, Make, Pipedream
- **Sales teams** needing enriched prospecting data
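A hedged sketch of steps 7–8 (the Code "Parser" plus the IF filter): parse Claude's JSON answer and keep only leads scoring 6 or higher. Field names such as `lead_score`, `company`, and `summary` are illustrative assumptions; align them with whatever schema your Claude prompt actually requests.

```typescript
interface Lead {
  company: string;
  website: string;
  email?: string;
  lead_score: number;
  summary: string;
}

function parseClaudeResponse(raw: string): Lead | null {
  // Claude sometimes wraps the JSON in extra prose or a code fence;
  // take the substring between the first "{" and the last "}".
  const start = raw.indexOf("{");
  const end = raw.lastIndexOf("}");
  if (start === -1 || end === -1) return null;
  try {
    return JSON.parse(raw.slice(start, end + 1)) as Lead;
  } catch {
    return null; // malformed output: drop the item instead of failing the run
  }
}

const MIN_SCORE = 6; // the workflow's "lead_score >= 6" threshold

const sample = '{"company":"Acme","website":"https://acme.io","lead_score":7,"summary":"B2B SaaS"}';
const lead = parseClaudeResponse(sample);
if (lead && lead.lead_score >= MIN_SCORE) {
  console.log("Qualified lead, send to Discord webhook:", lead.company);
}
```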
by Yaron Been
Prunaai Flux Schnell Image Generator

Description

This is a 3x faster FLUX.1 [schnell] model from Black Forest Labs, optimised with Pruna with minimal quality loss. Contact us for more at pruna.ai.

Overview

This n8n workflow integrates with the Replicate API to use the prunaai/flux-schnell model. This powerful AI model can generate high-quality image content based on your inputs.

Features

- Easy integration with the Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

Parameters

Required Parameters
- **prompt** (string): Prompt for the generated image

Optional Parameters
- **seed** (integer, default: None): Random seed. Set for reproducible generation
- **megapixels** (string, default: 1): Approximate number of megapixels for the generated image
- **speed_mode** (string, default: Juiced 🔥 (default)): Run faster predictions with a model optimized for speed
- **num_outputs** (integer, default: 1): Number of outputs to generate
- **aspect_ratio** (string, default: 1:1): Aspect ratio of the output image
- **output_format** (string, default: jpg): Format of the output images
- **output_quality** (integer, default: 80): Quality when saving the output images, from 0 to 100. 100 is best quality, 0 is lowest quality. Not relevant for .png outputs
- **num_inference_steps** (integer, default: 4): Number of denoising steps. 4 is recommended; a lower number of steps produces lower-quality outputs, faster

How to Use

1. Set up your Replicate API key in the workflow
2. Configure the required parameters for your use case
3. Run the workflow to generate image content
4. Access the generated output from the final node

API Reference

- Model: prunaai/flux-schnell
- API Endpoint: https://api.replicate.com/v1/predictions

Requirements

- Replicate API key
- n8n instance
- Basic understanding of image generation parameters
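A minimal sketch, assuming the general Replicate predictions endpoint listed above, of the request that starts a flux-schnell generation. The model version hash is a placeholder (copy the real one from the model's Replicate page), and the auth header style (Token vs Bearer) may differ for your account.

```typescript
const REPLICATE_API_KEY = process.env.REPLICATE_API_KEY ?? "";

async function createPrediction(prompt: string) {
  const res = await fetch("https://api.replicate.com/v1/predictions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${REPLICATE_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      version: "<flux-schnell-version-id>", // placeholder: take this from the model page
      input: {
        prompt,                 // required parameter
        num_outputs: 1,         // optional parameters mirror the list above
        aspect_ratio: "1:1",
        output_format: "jpg",
        output_quality: 80,
        num_inference_steps: 4,
      },
    }),
  });
  if (!res.ok) throw new Error(`Replicate error: ${res.status}`);
  return res.json(); // contains an id plus a status field you can poll
}

createPrediction("a lighthouse at dawn, photorealistic").then((p) => console.log(p.id ?? p));
```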
by Yaron Been
Prunaai Flux.1 Dev Image Generator

Description

This is the fastest Flux Dev endpoint in the world. Contact us for more at pruna.ai.

Overview

This n8n workflow integrates with the Replicate API to use the prunaai/flux.1-dev model. This powerful AI model can generate high-quality image content based on your inputs.

Features

- Easy integration with the Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

Parameters

Required Parameters
- **prompt** (string): Prompt

Optional Parameters
- **seed** (integer, default: -1): Seed
- **guidance** (number, default: 3.5): Guidance scale
- **image_size** (integer, default: 1024): Base image size (longest side)
- **speed_mode** (string, default: Juiced 🔥 (default)): Speed optimization level
- **aspect_ratio** (string, default: 1:1): Aspect ratio of the output image
- **output_format** (string, default: jpg): Output format
- **output_quality** (integer, default: 80): Output quality (for jpg and webp)
- **num_inference_steps** (integer, default: 28): Number of inference steps

How to Use

1. Set up your Replicate API key in the workflow
2. Configure the required parameters for your use case
3. Run the workflow to generate image content
4. Access the generated output from the final node

API Reference

- Model: prunaai/flux.1-dev
- API Endpoint: https://api.replicate.com/v1/predictions

Requirements

- Replicate API key
- n8n instance
- Basic understanding of image generation parameters
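Complementing the creation call in the previous template, here is a hedged sketch of the "automated status checking and result retrieval" step: poll the prediction by id until it finishes. Endpoint and status values follow Replicate's public API as I understand it; treat the exact field names as assumptions to verify.

```typescript
const API_KEY = process.env.REPLICATE_API_KEY ?? "";

async function waitForPrediction(id: string, intervalMs = 2000): Promise<string[]> {
  for (;;) {
    const res = await fetch(`https://api.replicate.com/v1/predictions/${id}`, {
      headers: { Authorization: `Bearer ${API_KEY}` },
    });
    const prediction = await res.json();

    if (prediction.status === "succeeded") {
      return prediction.output as string[]; // URLs of the generated image(s)
    }
    if (prediction.status === "failed" || prediction.status === "canceled") {
      throw new Error(`Prediction ${id} ended with status ${prediction.status}`);
    }
    // Still "starting" or "processing": wait and check again.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}

// waitForPrediction("<prediction-id>").then((urls) => console.log(urls));
```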
by Nícolas Pastorello
What is this?

This is an n8n workflow designed to supercharge your Sonarr setup. Instead of just waiting for releases to appear in your RSS feed, this workflow proactively runs on a schedule, finds what's missing, actively searches for it, and grabs the best result based on your specific criteria. It's a "set it and forget it" solution to ensure your library is always complete.

Key Features

- 🚀 Proactive Searching: Doesn't wait for content to come to you. It actively triggers a search for missing episodes.
- 🗓️ Fully Automated & Scheduled: Runs every 12 hours by default to check for anything new that's missing.
- 🧠 Smart & Efficient: Searches only once per season, even if multiple episodes from that season are missing, preventing unnecessary API calls.
- 🎯 Precise Release Filtering: Validates search results against the exact quality name and language you define before telling Sonarr to grab them. This gives you more control than standard quality profiles.
- ✅ Automatic Download: Once a valid release is found, it's automatically pushed to your download client via Sonarr.

How It Works

1. Trigger: The workflow starts automatically on a schedule.
2. Fetch Missing: It connects to your Sonarr instance and gets a list of all monitored, "wanted" episodes.
3. Filter & Group: It intelligently creates a unique list of seasons that need searching.
4. Search: It loops through each unique season and tells Sonarr to perform an interactive search.
5. Validate: It inspects the search results and only allows releases that match both the pre-defined quality AND language (sketched below).
6. Grab: If a perfect match is found, it sends a final command to Sonarr to grab that specific release and begin the download.

How to Use This Template

1. Import the JSON file into your n8n instance.
2. Find the node named "info" (it's a "Set" node near the beginning). This is your main configuration area.
3. Update the following values in the "info" node:
   - urlSonar: Change http://192.168.31.204:8989 to your Sonarr's URL.
   - apikey: Paste your Sonarr API key here.
   - quality: Set the exact quality name you want to match (e.g., WEBDL-1080p).
   - languages: Set the exact language name you want to match (e.g., English, Spanish).
4. Activate the workflow.

That's it! You can also change the schedule by editing the "Schedule Trigger" node.
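A hedged sketch of the core logic outside n8n: list wanted episodes, keep one search per season, then validate releases against an exact quality and language before grabbing. Endpoints and fields follow Sonarr's v3 API as I understand it (X-Api-Key header, /wanted/missing, /release); double-check them against your own Sonarr instance.

```typescript
const SONARR_URL = "http://192.168.31.204:8989"; // same value as the "info" node
const API_KEY = "<your-sonarr-api-key>";
const QUALITY = "WEBDL-1080p";
const LANGUAGE = "English";

const headers = { "X-Api-Key": API_KEY, "Content-Type": "application/json" };

async function run() {
  // 1. Fetch monitored, missing ("wanted") episodes.
  const missing = await fetch(`${SONARR_URL}/api/v3/wanted/missing?pageSize=200`, { headers })
    .then((r) => r.json());

  // 2. Deduplicate to one (seriesId, seasonNumber) pair per season.
  const seasons = new Map<string, { seriesId: number; seasonNumber: number }>();
  for (const ep of missing.records ?? []) {
    seasons.set(`${ep.seriesId}-${ep.seasonNumber}`, {
      seriesId: ep.seriesId,
      seasonNumber: ep.seasonNumber,
    });
  }

  // 3. Search each season and grab the first release matching quality + language.
  for (const { seriesId, seasonNumber } of seasons.values()) {
    const releases = await fetch(
      `${SONARR_URL}/api/v3/release?seriesId=${seriesId}&seasonNumber=${seasonNumber}`,
      { headers },
    ).then((r) => r.json());

    const match = releases.find(
      (rel: any) =>
        rel.quality?.quality?.name === QUALITY &&
        rel.languages?.some((lang: any) => lang.name === LANGUAGE),
    );

    if (match) {
      // Telling Sonarr to grab this specific release.
      await fetch(`${SONARR_URL}/api/v3/release`, {
        method: "POST",
        headers,
        body: JSON.stringify({ guid: match.guid, indexerId: match.indexerId }),
      });
      console.log(`Grabbed ${match.title}`);
    }
  }
}

run().catch(console.error);
```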
by Harshil Agrawal
This workflow automatically monitors the functionality of a factory. The workflow logs machine data coming from factory sensors in a CrateDB database, generates an incident report in PagerDuty, and notifies the responsible staff members when the temperature of a machine crosses the threshold value. This workflow builds on a workflow that generates factory data. Read more about this use case and how to build both workflows with step-by-step instructions in the blog post How to automate your factory's incident reporting.

Prerequisites

- A PagerDuty account and credentials
- AMQP, an ActiveMQ connection, and credentials
- A CrateDB instance running locally or on a server, and credentials

Nodes

- AMQP Trigger node starts the workflow.
- IF node filters sensor values higher than 50°C.
- PagerDuty node creates an incident in the account.
- Set nodes set the required incident information and sensor data, respectively.
- CrateDB nodes ingest the incident information and machine sensor data, respectively.
- Function node converts degrees from Celsius to Fahrenheit (sketched below).
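A small sketch of the two pieces of per-item logic described above: the IF node's 50°C threshold and the Function node's Celsius-to-Fahrenheit conversion. The field names (machine_id, temperature) are illustrative assumptions, not the exact payload of the factory-data workflow.

```typescript
interface SensorReading {
  machine_id: string;
  temperature: number; // °C, as delivered by the AMQP trigger
}

const THRESHOLD_C = 50;

// The conversion performed by the Function node.
function celsiusToFahrenheit(celsius: number): number {
  return celsius * 9 / 5 + 32;
}

// The condition checked by the IF node before creating a PagerDuty incident.
function shouldRaiseIncident(reading: SensorReading): boolean {
  return reading.temperature > THRESHOLD_C;
}

const reading: SensorReading = { machine_id: "press-7", temperature: 63.2 };
if (shouldRaiseIncident(reading)) {
  console.log(
    `Incident: ${reading.machine_id} at ${celsiusToFahrenheit(reading.temperature).toFixed(1)} °F`,
  );
}
```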
by TheUnknownEntity
I'm currently trialing a 4-day work week for all staff at my company, and one of the major impacts on productivity is interruptions. As such, I opted to use n8n to create a workflow that monitors my Google Calendar and, when an event starts, updates my Slack status with an emote and the title of the calendar task. Additionally, I opted to change the colour of a Philips Hue lamp located in my living room, where my wife is currently working, so she knows whether she can interrupt me or not.

My calendar is built on the theory behind the Diary Detox system, and as such the Slack status reflects the colours involved. This was achieved using the emote aliases for the relevant colour circles. The Philips Hue lamp colour is changed via the local API with Home Assistant. This is a very similar process to controlling it with something like the Streamdeck, but the workflow calls the webhook instead of the Streamdeck. This process can be found in lots of YouTube videos such as this. It gives my wife a very quick and easy way to know if she can interrupt me in my office (when the lights are Green or Blue) or when I'm busy (Red).

Please Note: The above images are not intended to be an incentive to create your own Squid Games.

Additionally, when integrating Slack with n8n, there are two APIs which can be used. Typically the Bot User OAuth Token is used; however, in order for your status to be updated, the User OAuth Token must be used with the users.profile:read and users.profile:write permissions enabled (both outbound calls are sketched below).

For clarity, I have removed the webhooks from the workflow, as they would allow any person to control my lights. They can be inserted in the HTTP Request nodes. Each node responds to a different automation within the Home Assistant infrastructure.

Acknowledgement: I would also like to credit Jon (Discord) aka 8668 (Workflows) for writing the Function node which turns the ColorID into a named variable.
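A hedged sketch of the two outbound calls: setting the Slack status (which needs the User OAuth Token with users.profile:write) and pinging a Home Assistant webhook that switches the Hue lamp. The webhook id, the Home Assistant URL, and the colour mapping are placeholders standing in for the values removed from the shared workflow.

```typescript
const SLACK_USER_TOKEN = process.env.SLACK_USER_TOKEN ?? "";
const HA_WEBHOOK_URL = "http://homeassistant.local:8123/api/webhook/<your-webhook-id>";

// Update the Slack status via the users.profile.set method.
async function setSlackStatus(text: string, emoji: string) {
  await fetch("https://slack.com/api/users.profile.set", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${SLACK_USER_TOKEN}`,
      "Content-Type": "application/json; charset=utf-8",
    },
    body: JSON.stringify({
      profile: { status_text: text, status_emoji: emoji },
    }),
  });
}

// Fire the Home Assistant webhook; the matching automation changes the lamp colour.
async function setLampColour() {
  await fetch(HA_WEBHOOK_URL, { method: "POST" });
}

// Example: a "deep work" calendar block maps to a red circle status and a red lamp.
setSlackStatus("Focus time", ":red_circle:")
  .then(setLampColour)
  .catch(console.error);
```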
by n8n Team
This is an example workflow that imports an XML file into an SQL database.

The ReadBinaryFiles node loads the XML file from the server. Then the Code node extracts the file content from the binary buffer (sketched below). Afterwards, an XML node converts the XML string into a JSON structure. Finally, the MySQL node inserts the data records into the SQL table.

In the upper part of the workflow there is another MySQL node that is disabled. This node creates a new table with all the required variables, based on the sample SQL database: https://www.mysqltutorial.org/mysql-sample-database.aspx
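A minimal sketch of what the Code node does between ReadBinaryFiles and the XML node: decode the base64 binary buffer into a plain XML string so the XML node can convert it to JSON. The binary property name ("data") is the usual n8n default but is an assumption here and may differ in your setup.

```typescript
interface BinaryItem {
  binary: { data: { data: string; mimeType: string } }; // base64-encoded payload
}

function binaryToXmlString(item: BinaryItem): string {
  return Buffer.from(item.binary.data.data, "base64").toString("utf-8");
}

// Example with a tiny inline document standing in for the loaded file.
const fake: BinaryItem = {
  binary: {
    data: {
      data: Buffer.from('<customers><customer id="1"/></customers>').toString("base64"),
      mimeType: "application/xml",
    },
  },
};
console.log(binaryToXmlString(fake)); // "<customers>...</customers>"
```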
by Dataki
This workflow enriches new Pipedrive organizations' data by adding a note to the organization object in Pipedrive. It assumes there is a custom "website" field in your Pipedrive setup, as data will be scraped from this website to generate a note using OpenAI. Then, a notification is sent in Slack.

⚠️ Disclaimer

This workflow uses a scraping API. Before using it, ensure you comply with the regulations regarding web scraping in your country or state.

Important Notes

- The OpenAI model used is GPT-4o, chosen for its large input token capacity. However, it is not the cheapest model if cost is very important to you.
- The system prompt in the OpenAI node generates output with relevant information, but feel free to improve or modify it according to your needs.

How It Works

Node 1: Pipedrive Trigger - An Organization is Created
This is the trigger of the workflow. When an organization object is created in Pipedrive, this node is triggered and retrieves the data. Make sure you have a "website" custom field in Pipedrive (the name of the field in the n8n node will appear as a random ID and not with the Pipedrive custom field name).

Node 2: ScrapingBee - Get Organization's Website's Homepage Content
This node scrapes the content from the URL of the website associated with the Pipedrive organization created in Node 1. The workflow uses the ScrapingBee API, but you can use any preferred API or simply the HTTP Request node in n8n.

Node 3: OpenAI - Message GPT-4o with Scraped Data
This node sends the HTML-scraped data from the previous node to the OpenAI GPT-4o model. The system prompt instructs the model to extract company data, such as products or services offered and competitors (if known by the model), and format it as HTML for optimal use in a Pipedrive note.

Node 4: Pipedrive - Create a Note with OpenAI Output
This node adds a note to the organization created in Pipedrive using the OpenAI node output. The note will include the company description, target market, selling products, and competitors (if GPT-4o was able to determine them).

Nodes 5 & 6: HTML to Markdown & Code - Markdown to Slack Markdown
These two nodes format the HTML output to Slack Markdown (a conversion sketch follows this section). The note created in Pipedrive is in HTML format, as specified by the system prompt of the OpenAI node. To send it to Slack, it needs to be converted to Markdown and then to Slack Markdown.

Node 7: Slack - Notify
This node sends a message in Slack containing the Pipedrive organization note created with this workflow.
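A hedged sketch of the Node 6 Code step: convert standard Markdown (produced by the HTML-to-Markdown node) into Slack's mrkdwn dialect before posting. It only covers the constructs the note is likely to contain (bold, links, bullets), not full Markdown, and the sample note is invented.

```typescript
function markdownToSlack(md: string): string {
  return md
    // **bold** becomes *bold*
    .replace(/\*\*(.+?)\*\*/g, "*$1*")
    // [text](url) becomes <url|text>
    .replace(/\[(.+?)\]\((.+?)\)/g, "<$2|$1>")
    // "- item" bullets become "• item"
    .replace(/^\s*-\s+/gm, "• ");
}

const note = "**Acme Corp** sells [widgets](https://acme.example)\n- Target market: SMBs";
console.log(markdownToSlack(note));
// *Acme Corp* sells <https://acme.example|widgets>
// • Target market: SMBs
```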
by Omer Fayyaz
An intelligent AI-powered agent that automatically browses publication websites, analyzes page content with natural language understanding, and identifies the latest downloadable reports, research papers, and data files across multiple sources using advanced structured output parsing.

What Makes This Different

- **AI-Powered Content Analysis** - Uses advanced language models (GPT-4/GPT-5.1) to understand page context and identify downloadable reports, even when links aren't explicitly labeled, handling complex page layouts and dynamic content
- **Structured Output Parsing** - Enforces JSON schema validation ensuring consistent data extraction with required fields (title, link, file_type, description), eliminating parsing errors and data inconsistencies
- **HTML to Markdown Conversion** - Converts raw HTML to clean Markdown before AI processing, removing noise and improving AI comprehension of page structure and content hierarchy
- **Intelligent Link Detection** - The AI agent identifies direct download URLs, converts relative links to absolute URLs, and prioritizes the most recent reports based on publication dates and page positioning
- **Comprehensive Validation** - Multi-layer validation checks link format, file type detection, and report relevance before saving, ensuring only valid, downloadable reports enter your library
- **Flexible Source Management** - Reads publication sources from Google Sheets, enabling easy addition/removal of sources without workflow modification, with support for categories and custom metadata

Key Benefits of AI-Powered Report Discovery

- **Automated Discovery** - Eliminates manual browsing and searching across multiple publication sites, saving hours of research time while ensuring you never miss new reports
- **Context-Aware Extraction** - AI understands page context, distinguishing between actual reports and navigation links, category pages, or promotional content
- **Prioritized Results** - Automatically selects the most recent and relevant report from each source, focusing on quality over quantity
- **Structured Data Output** - All discovered reports are saved with consistent metadata (title, link, file type, description, source), making them easy to search, filter, and integrate with other systems
- **Error Resilience** - Handles missing reports gracefully, logging when no reports are found without failing the entire workflow, ensuring continuous operation
- **Integration Ready** - Can be called by other workflows (e.g., a PDF downloader), enabling end-to-end automation from discovery to storage

Who's it for

This template is designed for researchers, market analysts, competitive intelligence teams, academic institutions, industry monitoring services, and anyone who needs to systematically discover and track downloadable reports from multiple publication sources. It's perfect for organizations that need to monitor industry publications, track competitor research, discover new market reports, build research libraries, or stay updated on the latest publications without manually visiting dozens of websites daily.

How it works / What it does

This workflow creates an AI-powered report discovery system that reads publication source URLs from Google Sheets, fetches their pages, uses AI to analyze content, and extracts information about downloadable reports.
The system:

1. Reads Active Sources - Fetches publication URLs and metadata from the Google Sheets "Report Sources" sheet, processing each source in sequence
2. Loops Through Sources - Processes sources one at a time using Split in Batches, ensuring proper error isolation and preventing batch failures
3. Fetches Publication Pages - Downloads HTML content from each source URL with proper browser headers (User-Agent, Accept, Accept-Language) to avoid blocking
4. Converts HTML to Markdown - Transforms raw HTML into clean Markdown format, removing styling, scripts, and navigation elements to improve AI comprehension
5. AI Analysis - A LangChain agent analyzes the Markdown content using GPT-4/GPT-5.1, identifying downloadable reports based on context, link patterns, and content structure
6. Structured Output Parsing - Enforces JSON schema validation, ensuring the AI returns data in the exact format: source, title, link, file_type, description
7. Validates & Normalizes Output - Validates that extracted links are absolute URLs, checks file type indicators, determines report validity, and normalizes all fields (sketched after this section)
8. Routes by Validity - An IF node routes valid reports to the save operation and invalid/missing reports to logging
9. Saves Discovered Reports - Appends valid reports to the Google Sheets "Discovered Reports" sheet with metadata, source URL, category, and discovery timestamp
10. Logs No Report Found - Records sources where no valid reports were found in the "Discovery Log" sheet for monitoring and troubleshooting
11. Tracks Completion - Generates a completion summary with the number of sources checked and the processing timestamp

Key Innovation: AI-Powered Context Understanding - Unlike traditional web scrapers that rely on fixed CSS selectors or regex patterns, this workflow uses AI to understand page context and semantics. The AI can identify reports even when they're embedded in complex layouts, use non-standard naming, or require understanding of surrounding text to determine relevance. This makes it adaptable to any website structure without manual configuration.

How to set up

1. Prepare Google Sheets
- Create a Google Sheet with three tabs: "Report Sources", "Discovered Reports", and "Discovery Log"
- In the "Report Sources" sheet, create columns: Source_Name, Source_URL, Category (optional)
- Add publication URLs in the Source_URL column (e.g., "https://example.com/research" or "https://publisher.com/reports")
- Add descriptive names in the Source_Name column for easy identification
- Optionally add Category values (e.g., "Market Research", "Industry Reports", "Academic Papers")
- The "Discovered Reports" sheet will be automatically populated with columns: source, title, link, fileType, description, sourceUrl, category, discoveredAt, status, isValid
- The "Discovery Log" sheet will record sources where no reports were found
- Verify your Google Sheets credentials are set up in n8n (OAuth2 recommended)

2. Configure Google Sheets Nodes
- Open the "Read Active Sources" node and select your spreadsheet from the document dropdown
- Set the sheet name to "Report Sources"
- Configure the "Save Discovered Report" node: select the same spreadsheet, set the sheet name to "Discovered Reports", and set the operation to "Append or Update"
- Configure the "Log No Report Found" node: same spreadsheet, "Discovery Log" sheet, operation "Append or Update"
- Test the connection by running the "Read Active Sources" node manually to verify it can access your sheet

3. Set Up OpenAI Credentials
- Open the "OpenAI GPT-5.1" node (or configure the model you want to use)
- Connect your OpenAI API credentials (API key required)
- The workflow uses GPT-5.1 by default, but you can change to GPT-4, GPT-4 Turbo, or other models
- Temperature is set to 0.1 for consistent, deterministic output
- Verify the API key has sufficient credits and access to the selected model
- For cost optimization, GPT-4 Turbo is recommended for similar results at lower cost

4. Configure AI Agent & Output Parser
- The "AI Report Discovery Agent" node contains a detailed system prompt that instructs the AI on what to look for
- The prompt is pre-configured but can be customized for your specific needs (e.g., prioritize certain file types, look for specific keywords)
- The "Structured Output Parser" enforces the JSON schema - verify that the schema matches your needs:
  { "source": "Publisher Name", "title": "Report Title", "link": "https://example.com/report.pdf", "file_type": "pdf", "description": "Brief description" }
- The parser ensures the AI always returns valid JSON with all required fields
- Test the AI agent by manually running it with a sample source URL to verify it correctly identifies reports

5. Customize Discovery Rules (Optional)
- The AI agent's system prompt can be modified in the "AI Report Discovery Agent" node
- Current rules prioritize: downloadable files (PDF, Excel, Word, PowerPoint), most recent publications, direct download URLs
- To customize: edit the system message to add specific keywords, file types, or discovery patterns
- Example customization: add industry-specific terms or prioritize reports with certain keywords in titles
- The validation code in "Validate & Normalize Output" can be adjusted to change what's considered "valid"
- Test with your specific sources to ensure the discovery rules work as expected

6. Set Up Scheduling & Test
- The workflow includes a Manual Trigger (for testing), a Schedule Trigger (runs daily), and an Execute Workflow Trigger (for calling from other workflows)
- To customize the schedule: open the "Schedule (Daily)" node and adjust the interval (e.g., twice daily, weekly)
- For initial testing: use the Manual Trigger and add 2-3 test publication URLs to your "Report Sources" sheet
- Verify execution: check that pages are fetched, AI analysis completes, and reports are saved to "Discovered Reports"
- Monitor execution logs: check for API errors, timeout issues, or parsing failures
- Review the Discovery Log: verify that sources with no reports are properly logged
- Common issues: OpenAI API rate limits (add delays if processing many sources), invalid URLs (check source URLs), timeout errors (increase the timeout for slow-loading pages), AI not finding reports (you may need to adjust the system prompt for specific site structures)

Requirements

- **OpenAI API Key** - Active OpenAI account with API access and sufficient credits for GPT-4/GPT-5.1 model usage (API key configured in n8n credentials)
- **Google Sheets Account** - Active Google account with OAuth2 credentials configured in n8n for reading and writing spreadsheet data
- **Source Spreadsheet** - Google Sheet with "Report Sources", "Discovered Reports", and "Discovery Log" tabs, properly formatted with the required columns
- **Valid Publication URLs** - Direct links to publication pages that contain downloadable reports (not direct PDF links - the workflow discovers those)
- **n8n Instance** - Self-hosted or cloud n8n instance with access to external websites (the HTTP Request node needs internet connectivity) and LangChain nodes enabled
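A hedged sketch of the "Validate & Normalize Output" step (item 7 above): make relative links absolute, infer a file type, and decide whether the AI's answer counts as a valid report. The field names match the structured-output schema shown above; the validity rules themselves are illustrative and can be tightened.

```typescript
interface DiscoveredReport {
  source: string;
  title: string;
  link: string;
  file_type: string;
  description: string;
}

const FILE_TYPES = ["pdf", "xlsx", "xls", "docx", "pptx", "csv"];

function normalizeReport(raw: DiscoveredReport, sourceUrl: string) {
  // Resolve relative links against the page that was crawled.
  const absoluteLink = new URL(raw.link, sourceUrl).toString();

  // Prefer the extension in the URL; fall back to what the model reported.
  const extMatch = absoluteLink.toLowerCase().match(/\.([a-z0-9]+)(?:\?|$)/);
  const fileType = extMatch?.[1] ?? raw.file_type.toLowerCase();

  const isValid =
    absoluteLink.startsWith("http") &&
    raw.title.trim().length > 0 &&
    FILE_TYPES.includes(fileType);

  return {
    ...raw,
    link: absoluteLink,
    fileType,
    sourceUrl,
    discoveredAt: new Date().toISOString(),
    isValid, // the IF node routes on this flag
  };
}

console.log(
  normalizeReport(
    {
      source: "Example Publisher",
      title: "Q3 Market Outlook",
      link: "/downloads/q3-outlook.pdf",
      file_type: "pdf",
      description: "Quarterly market report",
    },
    "https://example.com/research",
  ),
);
```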
by Fernanda Silva
Workflow Description

Your workflow is an intelligent chatbot, using an OpenAI Assistant, integrated with a backend that supports WhatsApp Business, designed to handle various use cases such as sales and customer support. Below is a breakdown of its functionality and key components.

Workflow Structure and Functionality

Chat Input (Chat Trigger)
- The flow starts by receiving messages from customers via WhatsApp Business.
- Collects basic information, such as session_id, to organize interactions.

Condition Check (If Node)
- Checks if additional customer data (e.g., name, age, dependents) is sent along with the message.
- If additional data is present, a customized prompt is generated which includes this information (sketched at the end of this section). The prompt specifies that this data is for the assistant's awareness and doesn't require a response.

Data Preparation (Edit Fields Nodes)
- Formats customer data and the interaction details to be processed by the AI assistant.
- Compiles the customer data and their query into a single text block.

AI Responses (OpenAI Nodes)
- The assistant's prompt is carefully designed to guide the AI in providing accurate and relevant responses based on the customer's query and the data provided.
- Prompts describe the available functionalities, including which APIs to call and their specific purposes, helping to prevent "hallucinated" or irrelevant responses.

Memory and Context (Postgres Chat Memory)
- Stores context and messages in continuous sessions using a database, ensuring the chatbot maintains conversation history.

API Calls
- The workflow allows the use of APIs with any endpoints you choose, depending on your specific use case. This flexibility enables integration with various services tailored to your needs.
- The OpenAI assistant understands JSON structures, and you can define in the prompt how the responses should be formatted. This allows you to structure responses neatly for the client, ensuring clarity and professionalism.
- Make sure to describe the purpose of each endpoint in the assistant's prompt to help guide the AI and prevent misinterpretation.

Customer Response Delivery
- After processing and querying APIs, the generated response is sent to the backend and ultimately delivered to the customer through WhatsApp Business.

Best Practices Implemented

- **Preventing Hallucinations**: Every API has a clear description in its prompt, ensuring the AI understands its intended use case.
- **Versatile Functionality**: The chatbot is modular and flexible, capable of handling both sales and general customer inquiries.
- **Context Persistence**: By utilizing persistent memory, the flow maintains continuous interaction context, which is crucial for longer conversations or follow-up queries.

Additional Recommendations

- Include practical examples in the assistant's prompt, such as frequently asked questions or decision-making flows based on API calls.
- Ensure all responses align with the customer's objectives (e.g., making a purchase or resolving technical queries).
- Log interactions in detail for future analysis and workflow optimization.

This workflow provides a solid foundation for a robust and multifunctional virtual assistant 🚀
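A hedged sketch of the If + Edit Fields step: when extra customer data arrives with the WhatsApp message, fold it into the prompt as background the assistant should be aware of but not answer directly. The field names (name, age, dependents) mirror the examples mentioned above; the message shape is an assumption about your backend.

```typescript
interface IncomingMessage {
  session_id: string;
  text: string;
  customer?: { name: string; age?: number; dependents?: number };
}

function buildPrompt(msg: IncomingMessage): string {
  if (!msg.customer) {
    return msg.text; // no extra data: pass the question through unchanged
  }
  const context = JSON.stringify(msg.customer);
  return [
    `Customer context (for your awareness only, do not reply to it): ${context}`,
    `Customer message: ${msg.text}`,
  ].join("\n");
}

console.log(
  buildPrompt({
    session_id: "wa-5511999999999",
    text: "Which plan covers my family?",
    customer: { name: "Ana", age: 34, dependents: 2 },
  }),
);
```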
by Ron
This flow monitors a file for changes to its content. If it changes, an alert is sent out and you receive it as a push notification, SMS, or voice call on SIGNL4.

Use cases:
- Log-file monitoring
- Monitoring of production data
- Integration with third-party systems via a file interface
- Etc.

Sample file "alert-data.json":

{ "Body": "Alert in building A2.", "Done": false, "eventId": "2518088743722201372_4ee5617b-2731-4d38-8e16-e4148b8fb8a0" }

- Body: The alert text to be sent.
- Done: If false, this is a new alert. If true, this indicates the alert has been closed.
- eventId: The last SIGNL4 event ID, written by SIGNL4.

This flow can be easily adapted for database monitoring as well. A sketch of the core decision logic follows.
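A hedged sketch of the flow's decision logic outside n8n: re-read alert-data.json, and when the content has changed and Done is false, push the alert to a SIGNL4 webhook. The webhook URL format and the payload fields sent to SIGNL4 follow my understanding of the public SIGNL4 webhook; verify them for your team before relying on this.

```typescript
import { readFileSync } from "node:fs";

const SIGNL4_WEBHOOK = "https://connect.signl4.com/webhook/<your-team-secret>";

interface AlertFile {
  Body: string;
  Done: boolean;
  eventId?: string;
}

let lastContent = ""; // previous file content kept between runs

async function checkFile(path: string) {
  const content = readFileSync(path, "utf-8");
  if (content === lastContent) return; // nothing changed: no alert
  lastContent = content;

  const alert: AlertFile = JSON.parse(content);
  if (alert.Done) return; // true means the alert was already closed

  await fetch(SIGNL4_WEBHOOK, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      Title: "File monitor alert",
      Body: alert.Body, // e.g. "Alert in building A2."
    }),
  });
}

checkFile("alert-data.json").catch(console.error);
```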