by JHH
LLM/RAG Kaggle Development Assistant

An on-premises, domain-specific AI assistant for Kaggle (tested on binary disaster-tweet classification), combining an LLM, an n8n workflow engine, and Qdrant-backed Retrieval-Augmented Generation (RAG). Deploy via the containerized starter kit. Requires a high-end GPU, or patience. The initial chat should contain guidelines on what to produce, along with the challenge guidelines.

Features

**Coding Assistance**
• "Real"-time Python code recommendations, debugging help, and data-science best practices
• Multi-turn conversational context

**Workflow Automation**
• n8n orchestration for LLM calls, document ingestion, and external API integrations

**Retrieval-Augmented Generation (RAG)**
• Qdrant vector database for competition-specific document lookup
• On-demand retrieval of Kaggle competition guidelines, tutorials, and notebooks, after conversion to HTML and ingestion into the RAG store

**On-Premises for Privacy**
• Locally hosted LLMs (via Ollama) – no external code or data transfer
  • ALIENTELLIGENCE/contentsummarizer:latest for summarizing
  • qwen3:8b for chat and coding
  • mxbai-embed-large:latest for embedding
• GPU acceleration required

Based on: https://n8n.io/workflows/2339 (breakdown documents into study notes using templating, MistralAI, and Qdrant)
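As an illustration of the retrieval leg, the RAG lookup can be sketched in plain Python against the same locally hosted services. The Ollama and Qdrant endpoints below are their standard local defaults; the collection name `kaggle_docs` is a hypothetical placeholder:

```python
import json
import urllib.request

# Assumed defaults: Ollama and Qdrant on their standard local ports; the
# collection name "kaggle_docs" is a hypothetical placeholder.
OLLAMA_URL = "http://localhost:11434"
QDRANT_URL = "http://localhost:6333"
COLLECTION = "kaggle_docs"

def post_json(url, body):
    """POST a JSON body and decode the JSON response."""
    req = urllib.request.Request(url, data=json.dumps(body).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def embed(text):
    """Embed a query with the locally hosted mxbai-embed-large model."""
    out = post_json(f"{OLLAMA_URL}/api/embeddings",
                    {"model": "mxbai-embed-large:latest", "prompt": text})
    return out["embedding"]

def build_search_body(vector, top_k=5):
    """Body for Qdrant's REST similarity search (POST /collections/<c>/points/search)."""
    return {"vector": vector, "limit": top_k, "with_payload": True}

def retrieve(query, top_k=5):
    """Return the top_k ingested documents most similar to a chat query."""
    body = build_search_body(embed(query), top_k)
    out = post_json(f"{QDRANT_URL}/collections/{COLLECTION}/points/search", body)
    return out["result"]
```

In the workflow itself, n8n's Qdrant vector-store node handles this; the sketch only shows the shape of the underlying calls.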
by Recrutei Automações
What This Workflow Does

This workflow automates the candidate nurturing process, solving the common problem of candidates losing interest or "ghosting" after an application. It keeps them engaged and informed by sending a personalized, multi-channel (WhatsApp & Gmail) sequence of follow-up messages over their first week.

The automation triggers when a new candidate is added to your ATS (e.g., via a Recrutei webhook). It then uses AI to generate a custom 3-part message (for Day 1, Day 3, and Day 7) tailored to the candidate's age and the specific job they applied for, ensuring a professional and empathetic experience that strengthens your employer brand.

How it Works

1. Trigger: A Webhook node captures the new candidate data from your Applicant Tracking System (ATS) or form.
2. Data Preparation: Two Code nodes clean the incoming data. The first (Separating information) extracts key fields and formats the phone number. The second (Extract age) calculates the candidate's age from their birthday to be used by the AI.
3. AI Content Generation: The workflow sends the candidate's details (name, age, job title) to an AI model (AI Recruitment Assistant). The AI has a detailed system prompt to generate three distinct messages for Day 1 (Thank You), Day 3 (Friendly Reminder), and Day 7 (Final Reinforcement), adapting its tone based on the candidate's age.
4. Split Messages: A Code node (Separating messages per days) receives the single text block from the AI and splits it into three separate variables (day1, day3, day7).
5. Day 1 Send: The workflow immediately sends the day1 message via both Gmail and WhatsApp (configured for Evolution API).
6. Day 3 Send: A Wait node pauses the workflow for 2 days, after which it sends the day3 message.
7. Day 7 Send: Another Wait node pauses for 4 more days, then sends the final day7 message, completing the 7-day nurturing sequence.
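The age calculation and message splitting translate naturally into a short sketch. Note that n8n Code nodes run JavaScript; this Python port is purely illustrative, and the `---` delimiter between the three AI messages is an assumption about the system prompt's output format:

```python
from datetime import date

def extract_age(birthday_iso, today=None):
    """Illustrative port of the 'Extract age' Code node: whole years
    between an ISO-format birthday and today."""
    today = today or date.today()
    b = date.fromisoformat(birthday_iso)
    # subtract one if this year's birthday hasn't happened yet
    return today.year - b.year - ((today.month, today.day) < (b.month, b.day))

def split_messages(ai_text):
    """Illustrative port of 'Separating messages per days'. Assumes the AI
    separates the three messages with '---' (an assumption, not the
    template's actual delimiter)."""
    parts = [p.strip() for p in ai_text.split("---") if p.strip()]
    return {"day1": parts[0], "day3": parts[1], "day7": parts[2]}
```

The day1/day3/day7 keys then feed the immediate send and the two Wait-delayed sends.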
Setup Instructions

This workflow is plug-and-play once you configure the following 5 steps:

1. Webhook Node: Copy the Test URL from the Webhook node and configure it in your ATS (e.g., Recrutei) or form builder to trigger whenever a new candidate is added. Run one test submission to make the data structure visible to n8n.
2. AI Credentials: In the AI Recruitment Assistant node, select or create your OpenAI API credential.
3. MCP Credential (Optional): If you use a Recrutei MCP, paste your endpoint URL into the MCP Recrutei node.
4. Gmail Credentials: In all three Message Gmail nodes (Day 1, 3, 7), select or create your Gmail (OAuth2) credential. Optional: in the same nodes, go to Options and change the Sender Name from your_company to your actual company name.
5. WhatsApp (Evolution API): This template is pre-configured for the Evolution API. In all three Message WhatsApp nodes (Day 1, 3, 7), you must:
   - URL: Replace {server-url} and {instance} with your Evolution API details.
   - Headers: In the "Header Parameters" section, replace your_api_key with your actual Evolution API key.
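For reference, the request the WhatsApp nodes assemble can be sketched as below. The sendText path and apikey header follow common Evolution API usage, but field names vary between Evolution API versions, so treat the body shape as an assumption and verify against your instance's docs:

```python
def build_whatsapp_request(server_url, instance, api_key, number, text):
    """Sketch of the Evolution API text-message call the WhatsApp nodes make.
    The body shape below is an assumption (versions differ); the URL mirrors
    the template's {server-url} and {instance} placeholders."""
    return {
        "url": f"{server_url}/message/sendText/{instance}",
        "headers": {"apikey": api_key, "Content-Type": "application/json"},
        "body": {"number": number, "text": text},
    }
```

In the template you edit these values directly in the HTTP request fields rather than in code.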
by Simon
This n8n workflow simplifies the process of removing backgrounds from images stored in Google Drive. By leveraging the PhotoRoom API, this template enables automatic background removal, padding adjustments, and output formatting, all while storing the updated images back in a designated Google Drive folder. This workflow is very useful for companies or individuals that spend a lot of time removing the background from product images.

How it Works

1. The workflow begins with a Google Drive Trigger node that monitors a specific folder for new image uploads.
2. Upon detecting a new image, the workflow downloads the file and extracts essential metadata, such as the file size.
3. Configurations are set for background color, padding, output size, and more, all customizable to match specific requirements.
4. The PhotoRoom API is called to process the image by removing its background and adding padding based on the settings.
5. The processed image is saved back to Google Drive in the specified output folder with an updated name indicating the background has been removed.

Requirements

- PhotoRoom API Key
- Google Drive API Access

Customizing the Workflow

- Easily adjust the background color, padding, and output size using the configuration node.
- Modify the output folder path in Google Drive, or replace Google Drive with another storage service if needed.
- For advanced use cases, integrate further image processing steps, such as adding captions or analyzing content using AI.
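A minimal sketch of the PhotoRoom call, assuming the v1 segmentation endpoint with its x-api-key header; the form-field names for background color, padding, and format are assumptions to verify against the current API reference:

```python
def build_photoroom_request(api_key, bg_color="#FFFFFF", padding="0.1", fmt="png"):
    """Sketch of the PhotoRoom segmentation request the workflow configures.
    The field names (bg_color, padding, format) are assumptions -- check the
    current PhotoRoom API reference before relying on them."""
    return {
        "url": "https://sdk.photoroom.com/v1/segment",
        "headers": {"x-api-key": api_key},
        # sent as multipart form fields alongside the image_file upload
        "fields": {"bg_color": bg_color, "padding": padding, "format": fmt},
    }
```

In the template these values live in the configuration node, so they can be changed without touching the HTTP Request node.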
by Yaron Been
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow automatically tracks customer satisfaction scores across multiple platforms and surveys to help improve customer experience and identify areas for enhancement. It saves you time by eliminating the need to manually check different feedback sources and provides comprehensive satisfaction analytics.

Overview

This workflow automatically scrapes customer satisfaction surveys, review platforms, and feedback forms to extract satisfaction scores and sentiment data. It uses Bright Data to access various feedback platforms without being blocked and AI to intelligently analyze satisfaction trends and identify improvement opportunities.

Tools Used

- **n8n**: The automation platform that orchestrates the workflow
- **Bright Data**: For scraping satisfaction surveys and review platforms without being blocked
- **OpenAI**: AI agent for intelligent satisfaction analysis and trend identification
- **Google Sheets**: For storing satisfaction scores and generating analytics reports

How to Install

1. Import the Workflow: Download the .json file and import it into your n8n instance
2. Configure Bright Data: Add your Bright Data credentials to the MCP Client node
3. Set Up OpenAI: Configure your OpenAI API credentials
4. Configure Google Sheets: Connect your Google Sheets account and set up your satisfaction tracking spreadsheet
5. Customize: Define the feedback sources and satisfaction metrics you want to monitor

Use Cases

- **Customer Experience**: Monitor satisfaction trends across all customer touchpoints
- **Product Teams**: Identify product features that impact customer satisfaction
- **Support Teams**: Track satisfaction scores for support interactions
- **Management**: Get comprehensive satisfaction reporting for strategic decisions

Connect with Me

- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data**:
https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission) #n8n #automation #customersatisfaction #satisfactionscores #brightdata #webscraping #customerexperience #n8nworkflow #workflow #nocode #satisfactiontracking #csat #nps #customeranalytics #feedbackanalysis #customerinsights #satisfactionmonitoring #experiencemanagement #customermetrics #satisfactionsurveys #feedbackautomation #customerfeedback #satisfactiondata #customerjourney #experienceanalytics #satisfactionreporting #customersentiment #experienceoptimization #satisfactiontrends #customervoice
by Lukas Kunhardt
Intelligently Segment PDFs by Table of Contents

This workflow empowers you to automatically process PDF documents, intelligently identify or generate a hierarchical Table of Contents (ToC), and then segment the entire document's content based on these ToC headings. It effectively breaks down a large PDF into its constituent sections, each paired with its corresponding heading and hierarchical level.

Why It's Useful

Unlock the true structure of your PDFs for granular access and advanced processing:

- **AI Agent Tool:** A key use case is to provide this workflow as a tool to an AI agent. The agent can then use the segmented output to "read" and navigate to specific sections of a document to answer questions, extract information, or perform tasks with much greater accuracy and efficiency.
- **Targeted Content Extraction:** Programmatically pull out specific chapters or subsections for focused analysis, summarization, reporting, or repurposing content.
- **Enhanced RAG Systems:** Improve your Retrieval Augmented Generation (RAG) pipelines by feeding them well-defined, contextually relevant document sections instead of entire, monolithic PDFs. This leads to more precise AI-generated responses.
- **Modular Document Processing:** Process different parts of a document using distinct logic in subsequent n8n workflows by acting on individual sections.
- **Data Preparation:** Seamlessly convert lengthy PDFs into a structured format where each section (including its heading, level, and content in multiple formats) becomes a distinct, manageable item.

How It Works

1. Ingestion & Advanced Parsing: The workflow ingests a PDF (via a provided URL or a pre-set one for manual runs). It then utilizes Chunkr.ai to perform Optical Character Recognition (OCR) and parse the document into detailed structural elements, extracting text, HTML, and Markdown for each segment.
2. AI-Powered Table of Contents Generation: A Google Gemini AI model analyzes the initial pages of the document (where a ToC often resides) along with section headers extracted by Chunkr as a fallback. This allows it to construct an accurate, hierarchical Table of Contents in a structured JSON format, even if the PDF lacks an explicit ToC or the ToC is poorly formatted.
3. Precise Content Segmentation: Sophisticated custom code then maps the AI-generated ToC headings to their corresponding content within the parsed document from Chunkr, intelligently determining the precise start and end of each section.
4. Structured & Flexible Output: The primary output provides each identified section as an individual n8n item. Each item includes the heading text, its hierarchical level (e.g., 1, 1.1, 2), and the full content of that section in Text, HTML, and Markdown formats. Optionally, the workflow can also reconstruct the entire document into a single, navigable HTML file or a clean Markdown file.

What You Need

To run this workflow, you'll need:

- **Input PDF:** When triggered by another workflow, a URL pointing to the PDF document. When triggered manually, the workflow uses a pre-configured sample PDF from Google Drive for demonstration (this can be customized).
- **Chunkr.ai API Key:** Required for the initial parsing and OCR of the PDF document. You'll need to insert this into the relevant HTTP Request nodes.
- **Google Gemini API Credentials:** Necessary for the AI model to intelligently generate the Table of Contents. This should be configured in the Google Gemini Chat Model nodes.

Outputs

The workflow primarily generates:

- **Individual Document Sections:** A series of n8n items. Each item represents a distinct section of the PDF and contains:
  - heading: The text of the section heading.
  - headingLevel: The hierarchical level of the heading (e.g., 1 for H1, 2 for H2).
  - sectionText: The plain text content of the section.
  - sectionHTML: The HTML content of the section.
  - sectionMarkdown: The Markdown content of the section.

Alternatively, you can configure the workflow to output:

- **Full Reconstructed Document:** A single HTML file representing the entire processed document, or a single Markdown file representing the entire processed document.

This workflow is ideal for anyone looking to deconstruct PDFs into meaningful, manageable parts for advanced automation, AI integration, or detailed content analysis.
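The Precise Content Segmentation step can be sketched as follows. This is a simplified stand-in for the workflow's custom code: it matches each ToC heading to the first parsed block containing it, then slices everything up to the next matched heading into that section:

```python
def segment_by_toc(toc, blocks):
    """Simplified sketch of the ToC-to-content mapping.
    toc:    list of {'heading': str, 'level': int} from the AI-generated ToC
    blocks: ordered list of {'text': str} segments from the Chunkr parse
    Returns one item per matched heading, shaped like the workflow's output."""
    starts = []
    for entry in toc:
        for i, block in enumerate(blocks):
            if entry["heading"].lower() in block["text"].lower():
                starts.append((i, entry))
                break  # first matching block marks the section start
    sections = []
    for n, (i, entry) in enumerate(starts):
        end = starts[n + 1][0] if n + 1 < len(starts) else len(blocks)
        body = " ".join(b["text"] for b in blocks[i + 1:end])
        sections.append({"heading": entry["heading"],
                         "headingLevel": entry["level"],
                         "sectionText": body})
    return sections
```

The real code also carries the HTML and Markdown variants of each block and handles headings that OCR mangles, which this sketch omits.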
by Nskha
Overview

The [n8n] YouTube Channel Advanced RSS Feeds Generator workflow facilitates the generation of various RSS feed formats for YouTube channels without requiring API access or administrative permissions. It utilizes third-party services to extract data, making it extremely user-friendly and accessible.

Key Use Cases and Benefits

- **Content Aggregation**: Easily gather and syndicate content from any public YouTube channel.
- **No API Key Required**: Avoid the complexities and limitations of Google's API.
- **Multiple Formats**: Supports ATOM, JSON, MRSS, Plaintext, Sfeed, and direct YouTube XML feeds.
- **Flexibility**: Input can be a YouTube channel or video URL, ID, or username.

Services/APIs Utilized

This workflow integrates with:

- **commentpicker.com**: For retrieving YouTube channel IDs.
- **rss-bridge.org**: To generate various RSS formats.

Configuration Instructions

1. Start the Workflow: Activate the workflow in your n8n instance.
2. Input Details: Enter the YouTube channel or video URL, ID, or username via the provided form trigger.
3. Run the Workflow: Execute the workflow to receive links to 13 different RSS feeds, including community and video content feeds.

Additional Notes

- **Customization**: You can modify the RSS feed formats or integrate additional services as needed.

Support and Contributions

For support, questions, or contributions, please visit the n8n community forum or the GitHub repository. We welcome contributions from the community!
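For reference, the two feed styles can be constructed like this. The native XML feed URL is YouTube's long-standing no-key format; the rss-bridge query parameters are an assumption based on that bridge's typical URL shape, so verify against the feed links the workflow actually emits:

```python
def native_feed_url(channel_id):
    """YouTube's built-in XML feed for a channel -- no API key needed."""
    return f"https://www.youtube.com/feeds/videos.xml?channel_id={channel_id}"

def rss_bridge_url(channel_id, fmt="Atom"):
    """An rss-bridge.org feed in a chosen format (Atom, Json, Mrss,
    Plaintext, Sfeed). The query-parameter names here are assumptions."""
    return ("https://rss-bridge.org/bridge01/?action=display&bridge=Youtube"
            f"&context=By+channel+id&c={channel_id}&format={fmt}")
```

The workflow generates one link per format, which is how it reaches 13 distinct feeds from a single channel ID.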
by Harshil Agrawal
Based on your use case, you might want to trigger a workflow if new data gets added to your database. This workflow allows you to send a message to Mattermost when new data gets added in Google Sheets. The Interval node triggers the workflow every 45 minutes. You can modify the timing based on your use case. You can even use the Cron node to trigger the workflow. If you wish to fetch new Tweets from Twitter, replace the Google Sheet node with the respective node. Update the Function node accordingly.
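One way the Function node could decide which rows are new is to persist a row-count cursor between runs (in n8n, workflow static data). A sketch of that idea, not the template's actual code:

```python
def find_new_rows(rows, state):
    """Return only rows appended since the last poll. `state` stands in for
    n8n workflow static data, where a cursor can survive between the
    45-minute Interval runs."""
    last = state.get("lastRowCount", 0)
    new_rows = rows[last:]
    state["lastRowCount"] = len(rows)
    return new_rows
```

Each item returned would then become one Mattermost message; if nothing is new, the run sends nothing.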
by Shashikanth
Source code: I maintain this workflow here.

Usage Guide

This workflow backs up all workflows as JSON files named in the [workflow_name].json format.

Steps

1. Create GitHub Repository: Skip this step if using an existing repository.
2. Add GitHub Credentials: In Credentials, add the GitHub credential for the repository owner.
3. Download and Import Workflow: Import this workflow into n8n.
4. Set Global Values: In the Globals node, set the following:
   - repo.owner: GitHub username of the repository owner.
   - repo.name: Name of the repository for backups.
   - repo.path: Path to the folder within the repository where workflows will be saved.
5. Configure GitHub Nodes: Edit each GitHub node in the workflow to use the added credentials.

Workflow Logic

Each workflow run handles files based on their status:

- New Workflow: If a workflow is new, create a new file in the repository.
- Unchanged Workflow: If the workflow is unchanged, skip to the next item.
- Changed Workflow: If a workflow has changes, update the corresponding file in the repository.

Current Limitations / Needs Work

- Renamed Workflows: If a workflow is renamed in n8n, the old file remains in the repository.
- Deleted Workflows: Workflows deleted in n8n are not removed from the repository.
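The new / unchanged / changed branching can be sketched as a pure function. The base64 content mirrors what the GitHub contents API returns for an existing file; the exact JSON serialization the template compares is an assumption:

```python
import base64
import json

def classify_workflow(workflow_json, repo_file_b64):
    """Return 'create', 'skip', or 'update', mirroring the three branches.
    repo_file_b64: base64 file content from the GitHub contents API,
    or None when the file doesn't exist yet. The canonical serialization
    below is an assumption about how the template stores files."""
    if repo_file_b64 is None:
        return "create"
    current = json.dumps(workflow_json, indent=2, sort_keys=True)
    existing = base64.b64decode(repo_file_b64).decode("utf-8")
    return "skip" if current == existing else "update"
```

A stable serialization matters here: if key order or indentation drifts between runs, every workflow looks "changed" and the repository churns on every backup.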
by Danielle Gomes
This n8n workflow collects and summarizes news from multiple RSS feeds, using OpenAI to generate a concise summary that can be sent to WhatsApp or other destinations. Perfect for automating your daily news digest.

🔁 Workflow Breakdown:

1. Schedule Trigger: Starts the workflow on your desired schedule (daily, hourly, etc.). 🟨 Note: Set the trigger however you wish.
2. RSS Feeds (My RSS 01–04): Fetches articles from four different RSS sources. 🟨 Note: You can add as many RSS feeds as you want.
3. Edit Fields (Edit Fields1–3): Normalizes RSS fields (title, link, etc.) to ensure consistency across different sources.
4. Merge (append mode): Combines the RSS items into a single unified list.
5. Filter: Optionally filters articles by keywords, date, or categories.
6. Limit: Limits the analysis to the 10 most recent articles. 🟨 Note: This keeps the result concise and avoids overloading the summary.
7. Aggregate: Prepares the selected news for summarization by combining them into a single content block.
8. OpenAI (Message Assistant): Summarizes the aggregated news items in a clean and readable format using AI.
9. Send Summary to WhatsApp: Sends the AI-generated summary to a WhatsApp endpoint via webhook (yoururlapi.com). 🟨 Note: You can replace this with your WhatsApp API, an email service, Google Drive, or any other destination.
10. No Operation (End): Final placeholder to safely close the workflow. You may expand from here if needed.
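The normalize, merge, and limit stages can be sketched in a few lines. The fallback field names (url, isoDate, pubDate) are assumptions about what the individual feeds expose:

```python
def normalize(item, source):
    """Illustrative port of the Edit Fields nodes: map differing RSS
    schemas onto one shape (field fallbacks are assumptions)."""
    return {
        "title": item.get("title", ""),
        "link": item.get("link") or item.get("url", ""),
        "date": item.get("isoDate") or item.get("pubDate", ""),
        "source": source,
    }

def merge_and_limit(feeds, limit=10):
    """Append all feeds into one list, newest first, capped like the
    Limit node. `feeds` is a list of (source_name, items) pairs."""
    merged = [normalize(i, name) for name, items in feeds for i in items]
    merged.sort(key=lambda x: x["date"], reverse=True)  # ISO dates sort lexically
    return merged[:limit]
```

The capped list is then aggregated into one block of text and handed to the OpenAI node for summarization.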
by Ludwig
How It Works:

• Scrapes company review data from Glassdoor using ScrapingBee.
• Extracts demographic-based ratings using AI-powered text analysis.
• Calculates workplace disparities with statistical measures like z-scores, effect sizes, and p-values.
• Generates visualizations (scatter plots, bar charts) to highlight patterns of discrimination or bias.

Set Up Steps (estimated time: ~20 minutes):

• Replace ScrapingBee and OpenAI credentials with your own.
• Input the company name you want to analyze (best results with large U.S.-based organizations).
• Run the workflow and review the AI-generated insights and visual reports.

This workflow empowers users to identify potential workplace discrimination trends, helping advocate for greater equity and accountability.

Additional credit: Wes Medford, for algorithms and inspiration.
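The disparity statistics mentioned above can be sketched with the standard library: a two-sample z-test for the gap between two demographic groups' mean ratings, plus Cohen's d as the effect size. This is a sketch of the idea, not the workflow's exact code:

```python
from math import erfc, sqrt
from statistics import mean, stdev

def rating_disparity(group_a, group_b):
    """Two-sample z-test and Cohen's d for the gap between two groups'
    mean ratings (illustrative; the workflow's own statistics may differ)."""
    m1, m2 = mean(group_a), mean(group_b)
    s1, s2 = stdev(group_a), stdev(group_b)
    n1, n2 = len(group_a), len(group_b)
    z = (m1 - m2) / sqrt(s1**2 / n1 + s2**2 / n2)
    p = erfc(abs(z) / sqrt(2))  # two-tailed p-value under the normal approximation
    pooled = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled      # Cohen's d effect size
    return {"z": z, "p": p, "effect_size": d}
```

The z-test assumes reasonably large samples per group; for the small per-demographic counts a single Glassdoor page yields, a t-test would be the more defensible choice.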
by Felix
How It Works

This workflow automatically detects duplicate invoices from Gmail. Incoming PDF attachments are scanned by the easybits AI Extractor, then checked against the Master Finance File in Google Sheets. Duplicates trigger a Slack alert – new invoices get added to the sheet.

Flow overview:

1. Gmail picks up new emails labeled as invoices (polls every minute)
2. The PDF attachment is extracted and converted to base64
3. The easybits Extractor reads the document and returns structured data
4. The invoice number is compared against all existing entries in Google Sheets
5. If duplicate → Slack DM alert to felix.sattler
6. If new → Invoice is appended to the Master Finance File

Step-by-Step Setup Guide

1. Set Up Your easybits Extractor Pipeline

Before connecting this workflow, you need a configured extraction pipeline on easybits. Go to extractor.easybits.tech and click "Create a Pipeline". Fill in the Pipeline Name and Description – describe the type of document you're processing (e.g. "Invoice / Receipt"). Upload a sample receipt or invoice as your reference document. Click "Map Fields" and define the following fields to extract:

- invoice_number (String) – Unique identifier of the invoice, e.g. IN-2026-0022514
- total_amount (Number) – Total amount due on the invoice, e.g. 149.99

Click "Save & Test Pipeline" in the Test tab to verify the extraction works correctly. Go to Pipeline Details → View Pipeline and copy your Pipeline ID and API Key.

2. Connect the easybits Node in n8n

Open the HTTP Request node in the workflow. Replace the Pipeline ID in the URL with your own. Set up a Bearer Auth credential with your API Key.

> The node sends the PDF to your pipeline and receives the extracted fields back under json.data.

3. Connect Gmail

Open the Gmail Trigger node. Connect your Gmail account via OAuth2. Create a label called invoice in Gmail (or use your preferred label). Update the label filter in the node to match your label. Make sure Download Attachments is enabled under Options.
> The trigger polls every minute for new emails matching the label.

4. Connect Google Sheets

Open the Check Google Sheets and Add to Master List nodes. Connect your Google Sheets account via OAuth2. Select your target spreadsheet (Master Finance File) and sheet. Make sure your sheet has at least these columns: Invoice Number and Final Amount (EUR).

5. Connect Slack

Go to api.slack.com/apps and create a new Slack App. Under OAuth & Permissions, add these Bot Token Scopes: chat:write, chat:write.public, channels:read, groups:read, users:read, users.profile:read. Install the app to your workspace via Settings → Install App. Copy the Bot User OAuth Token and add it as a Slack API credential in n8n. Open the Slack: Alert Finance node and select the user or channel to receive duplicate alerts.

6. Activate the Workflow

Click the "Active" toggle in the top-right corner of n8n to enable the workflow. Label an email with an invoice attachment as invoice in Gmail to test it end to end. Check your Google Sheet – a new row with the invoice number and amount should appear. Send the same invoice again – you should receive a Slack DM alerting you to the duplicate.
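The duplicate check, comparing the extracted invoice number against the sheet's existing entries, boils down to a normalized string comparison. A minimal sketch (the column name matches the sheet described above; the normalization details are assumptions):

```python
def is_duplicate(invoice_number, sheet_rows):
    """Check an extracted invoice_number against the 'Invoice Number'
    column, ignoring case and stray whitespace (normalization rules
    are an assumption, not the node's exact logic)."""
    needle = invoice_number.strip().lower()
    return any(str(row.get("Invoice Number", "")).strip().lower() == needle
               for row in sheet_rows)
```

A True result routes to the Slack alert branch; False routes to the append-to-sheet branch.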
by Open Paws
This general-purpose sub-agent combines multiple research and automation tools to support high-impact decision-making for animal advocacy workflows. It's designed to act as a reusable, modular unit within larger multi-agent systems – handling search, scraping, scoring, and domain-specific semantic lookup. It powers many of the advanced workflows released by Open Paws and serves as a versatile backend utility agent.

🛠️ What It Does

- Performs real-time Google Search using Serper
- Scrapes and extracts page content using Jina AI and Scraping Dog
- Conducts semantic search over the Open Paws knowledge base
- Generates OpenAI embeddings for similarity search and analysis
- Routes search and content analysis through OpenRouter LLMs
- Connects with downstream tools like the Text Scoring Sub-Workflow to evaluate message performance

> 🧩 This agent is typically used as a sub-workflow in larger automations where agents need access to external tools or advocacy-specific knowledge.

🧠 Domain Focus: Animal Advocacy

The agent is pre-configured to interface with the Open Paws database – an open-source, animal advocacy-specific knowledge graph – and is optimized for content and research tasks relevant to farmed animal issues, corporate campaigns, and activist communication.

🔗 Integrated Tools and APIs

| Tool | Purpose |
|--------------|---------------------------------------------|
| Serper API | Real-time Google Search queries |
| Jina AI | Web scraping and content extraction |
| Scraping Dog | Social media scraping where Jina is blocked |
| OpenAI API | Embedding generation for semantic search |
| OpenRouter | Proxy to multiple LLMs (e.g., GPT-4, Claude) |
| Open Paws DB | Advocacy-specific semantic knowledge base |

📦 Use Cases

- Create and evaluate online content (e.g. social media, emails, petitions) for predicted performance and advocacy alignment
- Act as a research and reasoning agent within multi-agent workflows
- Automate web and social media research for real-time campaign support
- Surface relevant facts or arguments from an advocacy-specific knowledge base
- Assist communications teams with message testing and content ideation
- Monitor search results and scrape pages to inform rapid response messaging
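The embedding-based semantic search reduces to cosine-similarity ranking over stored vectors. The agent's actual retrieval goes through the Open Paws database; this sketch only illustrates the underlying operation:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def top_k(query_vec, docs, k=3):
    """Rank (doc_id, vector) pairs by similarity to the query embedding
    and return the k best ids -- the core of any semantic lookup."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]
```

In the live agent, query vectors come from the OpenAI embedding step and the stored vectors from the Open Paws knowledge base.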