by Holger
**How it works:** This RSS reader retrieves feed links from a Google Sheets file and goes through each link to fetch the messages that are younger than 3 days. It saves them to a second Google Sheets file and then deletes all older entries in that second file. Depending on the number of news feeds retrieved, a run can take a while because of the delays added to avoid Google API rate-limit blocks. *A detailed description is in the workflow's sticky notes.*
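The 3-day age filter described above might look like this inside an n8n Code node. This is a minimal sketch, not the author's actual implementation; the `pubDate` field name is an assumption based on common RSS feed output.

```javascript
// Hypothetical sketch: keep only feed items younger than 3 days.
// `pubDate` is an assumed field name from the RSS Read node's output.
const MAX_AGE_MS = 3 * 24 * 60 * 60 * 1000;
const cutoff = Date.now() - MAX_AGE_MS;

function filterRecent(items) {
  // Keep entries whose publication date falls within the last 3 days.
  return items.filter(item => new Date(item.pubDate).getTime() >= cutoff);
}

const sample = [
  { title: 'fresh', pubDate: new Date().toISOString() },
  { title: 'stale', pubDate: new Date(Date.now() - 5 * 24 * 60 * 60 * 1000).toISOString() },
];
console.log(filterRecent(sample).map(i => i.title)); // [ 'fresh' ]
```

The same cutoff can be reused for the cleanup step that deletes older rows from the second sheet.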
by Romain
**Automated Invoice Processing - n8n Workflow**

**Workflow Description**
This n8n workflow automates the complete processing of PDF invoices with AI-powered data extraction. It monitors a Google Drive folder, extracts key invoice data, and automatically organizes files in a structured filing system.

**Features**
- **Automatic monitoring** of a Google Drive folder for new PDF files
- **AI-powered data extraction** from invoices (customer, amount, date, etc.)
- **Intelligent file sorting** by year and month
- **Automatic renaming** following a consistent schema
- **Central documentation** in Google Sheets

**Required Integrations**
Required accounts:
- Google Drive (with folder permissions)
- Google Sheets (with write permissions)
- Google Gemini API (for AI data extraction)

n8n nodes used: Google Drive Trigger, Google Drive (file operations), Google Sheets, Extract from File (PDF), Information Extractor (LangChain), Google Gemini Chat Model, Split in Batches.

**Workflow Steps in Detail**
1. **Monitoring & triggering** - the **Google Drive Trigger** watches a defined input folder and starts automatically when new PDF files are detected; **Split in Batches** enables batch processing of multiple files.
2. **File processing** - **GetFile** downloads PDF files from Google Drive; **ExtractFromPDF** converts PDF content to text and also supports scanned documents.
3. **AI data extraction** - the Information Extractor node with Google Gemini extracts: company name/sender, customer name and customer number, invoice number and date, net and gross amount, value-added tax, article description, and month and year (for sorting).
4. **Automatic filing** - **GetYearFolder** finds or creates year folders; **GetMonthFolder** finds or creates month folders; **MoveFile** moves the invoice to the correct folder; **UpdateFileName** renames the file (schema: "Customer Month Year").
5. **Documentation** - **AddToOverview** writes all data to a Google Sheets table, enabling a central overview and analysis.

**Setup Instructions**

Step 1: Prepare Google Drive. Create the following folder structure:
- [Input Folder] (e.g., "Invoices-Inbox")
- [Main Folder] (e.g., "Accounting")
  - 2024: January, February, March, ... (all months)
  - 2025: ... (all months)

Step 2: Create a Google Sheets table with the following columns: Customer Number, Customer [Company Name], Location, Invoice Date, Invoice Number, VAT, Net Amount, Month, Year.

Step 3: Configure the workflow:
- **Google Drive Trigger:** select your input folder as "Folder to Watch", set "Event" to "fileCreated", and activate the trigger.
- **Search files and folders:** select the same input folder as the filter.
- **Information Extractor:** adapt the attribute names to your needs, change the company name in the description, and adjust the system prompt if needed.
- **GetYearFolder & GetMonthFolder:** set the correct folder ID for your main folder and check the query string used for the year/month search.
- **AddToOverview:** select your Google Sheets table and map the columns to match it.

**Customization Options**

Extend the data extraction by adding more attributes in the Information Extractor node, e.g. `{ "name": "Payment Terms", "description": "Days until payment due", "required": false }`.

Customize the file naming by changing the schema in the UpdateFileName node: `{{ $('Information Extractor').item.json.output.InvoiceNumber }} - {{ $('Information Extractor').item.json.output.Customer }}`

Change the monitoring interval under "Poll Times" in the Google Drive Trigger.

**Important Notes**

Permissions:
- Google Drive: full access to the configured folders
- Google Sheets: write permission for the target table
- Google Gemini: valid API key required

Data format:
- Works with German number formats (comma as decimal separator)
- Date format: YYYY-MM-DD
- Supports various PDF formats

Error handling:
- The workflow fails if folders are missing
- Incomplete PDFs may lead to incomplete extractions
- Check the logs for troubleshooting

**Performance & Limitations**
- **Processing time:** 30-60 seconds per invoice
- **Supported formats:** PDF (text and OCR)
- **Batch processing:** yes, multiple files simultaneously
- **AI accuracy:** ~95% for standardized invoices

**Testing**

Run a test:
1. Upload a test PDF to the input folder
2. Monitor the workflow execution in n8n
3. Check the results in Google Sheets
4. Verify that the file was moved and renamed correctly

Error diagnosis: on errors, check the n8n logs, consider Google API quotas, and validate the folder permissions.

**License & Support**
This workflow can be used and customized freely. For configuration questions or issues, check the n8n community or the Google API documentation.

Tip: Start with a few test invoices before using the workflow in production!
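Since the workflow handles German number formats (comma as decimal separator, period as thousands separator), a conversion step is typically needed before amounts can be used numerically. A minimal illustrative helper, not taken from the workflow itself:

```javascript
// Hypothetical helper illustrating the German number format mentioned above:
// "1.234,56" uses "." as thousands separator and "," as decimal separator.
function parseGermanAmount(text) {
  // Drop thousands separators, then swap the decimal comma for a point.
  return parseFloat(text.replace(/\./g, '').replace(',', '.'));
}

console.log(parseGermanAmount('1.234,56')); // 1234.56
```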
by Dick
Send a simple JSON array via HTTP POST and get an Excel file back. The default filename is Export.xlsx; by adding the optional query parameter ?filename=xyz to the request, you can specify a different filename. NOTE: do not forget to change the webhook path!
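A client call might be constructed like this. This is a hedged sketch: the webhook URL is a placeholder (the description itself reminds you to change the webhook path), and the row fields are invented sample data.

```javascript
// Hypothetical client sketch: POST a JSON array to the webhook and request a
// custom filename via the optional ?filename= query parameter.
const baseUrl = 'https://your-n8n-instance/webhook/json-to-excel'; // placeholder path
const rows = [
  { name: 'Alice', score: 42 },
  { name: 'Bob', score: 37 },
];

function buildRequest(base, filename) {
  // Without ?filename= the workflow falls back to Export.xlsx.
  const url = filename ? `${base}?filename=${encodeURIComponent(filename)}` : base;
  const options = {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(rows),
  };
  return { url, options };
}

const { url } = buildRequest(baseUrl, 'report');
console.log(url); // ends with "?filename=report"
// fetch(url, options) would then return the generated Excel file.
```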
by Jenny
Vector Database as a Big Data Analysis Tool for AI Agents

Workflows from the webinar "Build production-ready AI Agents with Qdrant and n8n". This series of workflows shows how to build big data analysis tools for production-ready AI agents with the help of vector databases. The pipelines are adaptable to any dataset of images, and hence to many production use cases:

1. Uploading (image) datasets to Qdrant
2. Setting up meta-variables for anomaly detection in Qdrant
3. Anomaly detection tool
4. KNN classifier tool

For anomaly detection:
1. The first pipeline uploads an image dataset to Qdrant.
2. The second pipeline sets up the cluster (class) centres and cluster (class) threshold scores needed for anomaly detection.
3. The third is the anomaly detection tool, which takes any image as input and uses all the preparatory work done with Qdrant to detect whether the image is an anomaly relative to the uploaded dataset.

For KNN (k nearest neighbours) classification:
1. The first pipeline uploads an image dataset to Qdrant.
2. The second is the KNN classifier tool, which takes any image as input and classifies it against the dataset uploaded to Qdrant.

To recreate both, you'll have to upload the crops and lands datasets from Kaggle to your own Google Storage bucket, and re-create the APIs/connections to Qdrant Cloud (a Free Tier cluster is sufficient), the Voyage AI API, and Google Cloud Storage.

[This workflow] Setting Up Cluster (Class) Centres & Cluster (Class) Threshold Scores for Anomaly Detection: a preparatory workflow that sets cluster centres and cluster threshold scores so anomalies can be detected against these thresholds. Two approaches are used to set up the centres: the "distance matrix approach" and the "multimodal embedding model approach".
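The threshold-based anomaly check that the preparatory work enables can be sketched as follows. This is an illustrative sketch under assumptions, not the workflow's actual Qdrant queries: cluster centres and thresholds are toy 2-d values, and cosine similarity is assumed as the metric.

```javascript
// Hedged sketch: an embedding is flagged as an anomaly when its similarity to
// the nearest cluster centre falls below that cluster's threshold score.
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] ** 2;
    nb += b[i] ** 2;
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function isAnomaly(embedding, clusters) {
  // clusters: [{ centre: number[], threshold: number }]
  const best = clusters.reduce((acc, c) => {
    const sim = cosineSimilarity(embedding, c.centre);
    return sim > acc.sim ? { sim, threshold: c.threshold } : acc;
  }, { sim: -Infinity, threshold: 0 });
  return best.sim < best.threshold;
}

const clusters = [
  { centre: [1, 0], threshold: 0.9 },
  { centre: [0, 1], threshold: 0.9 },
];
console.log(isAnomaly([0.95, 0.05], clusters)); // false — close to the first centre
console.log(isAnomaly([0.7, 0.7], clusters));   // true — between clusters, below both thresholds
```

In the real pipelines, the centres and thresholds are precomputed in Qdrant by the preparatory workflow rather than hard-coded.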
by Lorena
This workflow allows you to filter positive and negative feedback received from a Typeform and insert the data into Google Sheets.
- Typeform Trigger node: starts the workflow when a new form is submitted via Typeform.
- Set node: extracts the information submitted in Typeform.
- IF node: filters positive and negative reviews (i.e., ratings above or below 3 out of 5).
- Google Sheets node: stores the positive and negative reviews and ratings in two different sheets, one for each case.
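The IF node's rating split can be sketched like this. One assumption is labeled explicitly: the description says "above or below 3 out of 5" without stating where a rating of exactly 3 goes, so this sketch treats 3 as negative.

```javascript
// Minimal sketch of the IF-node split on a 3-out-of-5 threshold.
// Assumption: a rating of exactly 3 is counted as negative.
function classifyFeedback(entries) {
  const positive = entries.filter(e => e.rating > 3);
  const negative = entries.filter(e => e.rating <= 3);
  return { positive, negative };
}

const sample = [{ rating: 5 }, { rating: 3 }, { rating: 1 }];
const { positive, negative } = classifyFeedback(sample);
console.log(positive.length, negative.length); // 1 2
```

Each branch would then feed its own Google Sheets node, one per sheet.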
by Jonathan
This workflow will back up your workflows to GitHub. It uses the public API to export all of the workflow data using the n8n node. It then loops over the data and checks GitHub to see whether a file already exists with the workflow's name. It then updates the file on GitHub if it exists, creates a new file if it doesn't, and ignores the file if the content is unchanged.

Config options:
- repo_owner - GitHub owner
- repo_name - GitHub repository name
- repo_path - path within the GitHub repository

> This workflow has been updated to use the n8n node and the Code node, so it requires at least version 0.198.0 of n8n.
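The per-workflow decision described above (update / create / ignore) reduces to a small three-way branch. A sketch of that logic, with assumed names, not the workflow's actual Code node:

```javascript
// Hedged sketch of the backup decision per workflow file.
// `existingContent` is null when no file with the workflow's name exists yet.
function decideAction(existingContent, newContent) {
  if (existingContent === null) return 'create';
  if (existingContent === newContent) return 'ignore';
  return 'update';
}

console.log(decideAction(null, '{"name":"wf"}'));            // "create"
console.log(decideAction('{"name":"wf"}', '{"name":"wf"}')); // "ignore"
console.log(decideAction('{"old":1}', '{"new":2}'));         // "update"
```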
by Abdullah Alshiekh
What Problem Does It Solve?
This workflow automates the process of finding and collecting job postings from LinkedIn, eliminating the need for manual job searching. It's designed to save time and ensure you don't miss out on new opportunities by automatically populating a spreadsheet with key job details.

Key Features
- **Automated data collection:** the workflow pulls job posts from a LinkedIn search via an RSS feed.
- **Intelligent data extraction:** it scrapes the full job description and uses AI to summarize the key benefits and job responsibilities into a concise format.
- **Centralized database:** all collected and processed information is automatically saved to a Google Sheet, providing a single source of truth for your job search.

How It Works
The workflow starts when manually triggered. It reads the job posts from a given RSS feed, processing each one individually. For each job, it fetches the full webpage content to extract structured data. This data is then cleaned and passed to an AI model, which generates a brief summary of the job and its benefits. Finally, a new row is added or updated in a Google Sheet with all the collected details, including the job title, company name, and AI-generated summary.

Configuration & Customization
This workflow is highly customizable to fit your specific needs.
- **RSS feed:** to get started, you'll need to provide the RSS feed URL for your desired LinkedIn job search. We can help you set this up.
- **AI model:** the workflow uses Google Gemini by default, but it can be adjusted to work with other AI platforms.
- **Data destination:** the output is configured for a Google Sheet, but it can easily be changed to a different platform like Notion or a CRM.
- **AI prompting:** the AI's instructions are customizable, so you can tailor the output to extract different information or match a specific tone.

If you need any help, get in touch.
by Eduard
UPDATE (May 2025): added a section listing all n8n instance webhooks.

Using n8n a lot? Soar above the limitations of the default n8n dashboard! This template gives you an overview of your workflows, nodes, and tags, all in one place.

Built using XML stylesheets and the Bootstrap 5 library, this workflow is self-contained and does not depend on any third-party software. It generates a comprehensive overview JSON that can be easily integrated with other BI tools for further analysis and visualization. Reach out to Eduard if you need help adapting this workflow to your specific use case!

Benefits:
- **Workflow summary:** instant overview of your workflows, active counts, and triggers.
- **Left-side panel:** quick access to all your workflows, nodes, and tags for seamless navigation.
- **Workflow details:** deep dive into each workflow's nodes, timestamps, and tags.
- **Node analysis:** identify the most frequently used nodes across your workflows.
- **Tag organization:** workflows are grouped according to their tags.
- **Webhooks:** list of all webhook endpoints, with links to the workflows.
- **Visually stunning:** clean, intuitive, and easy-to-navigate dashboard design.
- **XML & Bootstrap 5:** built using XML stylesheets and Bootstrap 5, ensuring a self-contained and responsive dashboard.
- **No dependencies:** the workflow does not rely on any third-party software. Bootstrap 5 files are loaded via CDN but can be delivered directly from your server.

Important note for cloud users: since the cloud version doesn't support environment variables, please make the following changes:
- In the get-nodes-via-jmespath node, update the instance_url variable: enter your n8n URL instead of {{$env["N8N_PROTOCOL"]}}://{{$env["N8N_HOST"]}}.
- In the Create HTML node, provide the n8n instance URL instead of {{ $env.WEBHOOK_URL }}.

Follow me on LinkedIn for more tips on AI automation and n8n workflows!
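The cloud-user change above amounts to replacing an environment-variable lookup with a hard-coded URL. A sketch of that fallback logic (the fallback string is a placeholder, not a real instance):

```javascript
// Hedged sketch of the instance_url construction. Self-hosted n8n can build
// the URL from environment variables; on n8n cloud you hard-code it instead.
function resolveInstanceUrl(env) {
  if (env.N8N_PROTOCOL && env.N8N_HOST) {
    return `${env.N8N_PROTOCOL}://${env.N8N_HOST}`;
  }
  // Placeholder: replace with your actual cloud instance URL.
  return 'https://your-instance.app.n8n.cloud';
}

console.log(resolveInstanceUrl({ N8N_PROTOCOL: 'https', N8N_HOST: 'n8n.example.com' }));
// "https://n8n.example.com"
```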
by Aaron Smusz
This workflow automatically manages Acrobat Sign signatures, responding with "intent" to Acrobat Sign webhooks.

Prerequisites
- Adobe Acrobat Sign and a Sign webhook
- Basic knowledge of JavaScript

Nodes
- Webhook node: triggers the workflow on new sign intents on a document.
- Respond to Webhook node: sets the response headers.
- Function node: processes the data returned by the previous node.
- Set node: sets the required values.
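The response-header step can be sketched as follows. This is a hedged illustration, not the workflow's actual nodes: it assumes Adobe's documented convention that a webhook verification response echoes the client id sent in the X-AdobeSign-ClientId request header; confirm the exact names against Adobe's webhook documentation.

```javascript
// Hedged sketch: echo the Acrobat Sign client id back in the response, as the
// Respond to Webhook node is described as doing. Header/body names are
// assumptions based on Adobe's X-AdobeSign-ClientId convention.
function buildVerificationResponse(requestHeaders) {
  const clientId = requestHeaders['x-adobesign-clientid'];
  return {
    statusCode: 200,
    headers: { 'X-AdobeSign-ClientId': clientId },
    body: { xAdobeSignClientId: clientId },
  };
}

const resp = buildVerificationResponse({ 'x-adobesign-clientid': 'abc123' });
console.log(resp.headers['X-AdobeSign-ClientId']); // "abc123"
```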
by Surya Vardhan Yalavarthi
Deploy a personal AI assistant that answers recruiter questions about your skills and projects, then automatically emails your CV as a PDF attachment when requested. Upload your portfolio documents (resume, project write-ups, case studies) to a Google Drive folder: the workflow chunks them into 600-character segments, embeds them with OpenAI, and stores them in Pinecone. A webhook-powered AI Agent (Claude Sonnet 4.5) retrieves the most relevant evidence using Pinecone + Cohere reranking, detects CV requests via structured output parsing, and sends your resume file via Gmail, all without any manual intervention.

How it works

Ingestion pipeline:
- Two Google Drive poll triggers fire every minute, detecting newly created or updated files in your monitored portfolio folder.
- Files are downloaded and enriched with metadata (source filename and upload timestamp).
- The Default Data Loader extracts text from the binary file, the Recursive Character Text Splitter chunks it at 600 characters with 100-character overlap, and OpenAI text-embedding-3-small produces 1536-dimension vectors.
- Vectors are upserted into the portfolio-docs Pinecone index.

Chat agent pipeline:
- A webhook at POST /webhook/portfolio-query receives { "chatInput": "...", "sessionId": "...", "email": "..." }.
- Claude Sonnet 4.5 is instructed to call the portfolio_knowledge tool (a Vector Store Tool backed by Pinecone) before answering, so every response is grounded in retrieved evidence.
- Cohere rerank-v3.5 reranks the top-5 Pinecone results to top-3 before they reach the LLM.
- A Structured Output Parser enforces { "answer": "...", "cvRequested": false }; the cvRequested boolean is set by the LLM when it detects recruiter intent.
- An IF node branches on cvRequested: if true, the CV PDF is downloaded from Drive and sent as a Gmail attachment, and the response is { answer, cvSent: true }; if false, the response { answer, cvSent: false } is returned immediately.
- Buffer Window Memory retains the last 10 messages per sessionId for multi-turn conversations.

Error handling:
An Error Trigger catches any node failure and extracts error_message, failed_node, workflow_name, and execution_url into a clean object, ready to forward to Slack, email, or any alerting webhook.

Use cases
- **Job seekers & freelancers:** a 24/7 recruiter-ready assistant that answers questions about your experience and sends your CV on request, even while you sleep.
- **Portfolio websites:** a backend API endpoint that powers intelligent Q&A on your personal site without building custom infrastructure.
- **Consultants & agencies:** adapt the ingestion pipeline for a client-facing knowledge base; swap Gmail for any email or messaging node.

Setup

Prerequisites:
- A Pinecone account with an index named portfolio-docs (dimension: 1536, metric: cosine, pod or serverless)
- A Google Drive folder containing your portfolio documents (PDF, DOCX, or plain text)
- Your CV stored as a PDF in Google Drive (note its file ID)
- An n8n instance with the six credentials below configured

Step 1: Configure credentials. In n8n, under Settings > Credentials, create one credential for each service:

| Credential name | Service |
|---|---|
| Google Drive OAuth2 | Google Drive OAuth2 |
| OpenAI API | OpenAI |
| Pinecone API | Pinecone |
| Anthropic API | Anthropic |
| Cohere API | Cohere |
| Gmail OAuth2 | Gmail OAuth2 |

Step 2: Set your Google Drive folder ID. Open File Created Trigger and File Updated Trigger (do both). In the Folder to Watch field, switch to ID mode and paste your folder ID.
> Find your folder ID in the Drive URL: https://drive.google.com/drive/folders/YOUR_FOLDER_ID

Step 3: Set your CV file ID. Open Download CV PDF. In the File field, switch to ID mode and paste your CV file ID.
> Find the file ID in: https://drive.google.com/file/d/YOUR_FILE_ID/view

Step 4: Personalize the system prompt. Open Portfolio AI Agent and edit the system message:
- Replace the generic role description with your name and specialization
- Adjust the tone (formal/casual) to match how you want to present yourself
- Update the call-to-action line to reference your actual contact details

Step 5: Ingest your documents. Move or upload your portfolio files into the monitored Drive folder. The ingestion pipeline will trigger automatically within one minute and populate Pinecone. To verify ingestion, check your Pinecone index vector count; it should increase after each file is processed.

Step 6: Test the chat endpoint. Send a POST request to your webhook URL:

curl -X POST https://your-n8n-instance/webhook/portfolio-query \
  -H "Content-Type: application/json" \
  -d '{"chatInput": "What are your main technical skills?", "sessionId": "test-1"}'

Expected response: { "answer": "...", "cvSent": false }

To test CV delivery, include "email": "your@email.com" and ask: "Can you send me the CV?"

Workflow details
- **Nodes:** 25 functional nodes + 4 documentation sticky notes
- **Triggers:** 4 (2 Google Drive poll triggers, 1 Webhook, 1 Error Trigger)
- **AI components:** Claude Sonnet 4.5 (x2: agent + retrieval), OpenAI text-embedding-3-small (x2: ingest + retrieval), Cohere rerank-v3.5, Pinecone Vector Store (x2: insert + retrieve), Structured Output Parser, Buffer Window Memory
- **Canvas layout:** three clearly labelled sections with grey sticky-note backgrounds: Ingestion pipeline, AI Agent pipeline, Error Handling
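The IF-node branch on the parsed structured output reduces to a small routing function. A sketch under the response shape stated in the description ({ answer, cvRequested } in, { answer, cvSent } out); the side effects are represented by comments only:

```javascript
// Hedged sketch of the cvRequested branch from the chat agent pipeline.
function route(parsed) {
  if (parsed.cvRequested) {
    // In the workflow: download the CV PDF from Drive, then send it via
    // Gmail to the requester's email before responding.
    return { answer: parsed.answer, cvSent: true };
  }
  return { answer: parsed.answer, cvSent: false };
}

console.log(route({ answer: 'Here is my background...', cvRequested: false }));
console.log(route({ answer: 'Sending my CV now.', cvRequested: true }));
```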
by James Li
Summary
Onfleet is last-mile delivery software that provides end-to-end route planning, dispatch, communication, and analytics to handle the heavy lifting while you focus on your customers. This workflow template loads a spreadsheet from your local storage and automatically creates Onfleet tasks on a one-time basis when the workflow is triggered. You can use this workflow as a task importer.

Configurations
- Update the Read Binary File node with the absolute file path to the local spreadsheet of interest.
- Update the Onfleet node with your own Onfleet credentials. To register for an Onfleet API key, visit https://onfleet.com/signup to get started.
- You can easily change how the Onfleet task is created by mapping additional data from the spreadsheet.
- For import templates, visit Onfleet Support to learn more.
by Angel Menendez
This workflow is triggered by a parent workflow initiated via a Slack shortcut. Upon activation, it collects input from a modal window in Slack and initiates a vulnerability scan using the Qualys API.

Key Features
- **Trigger:** launched by a parent workflow through a Slack shortcut with modal input.
- **API integration:** utilizes the Qualys API for vulnerability scanning.
- **Data conversion:** converts XML scan results to JSON for further processing.
- **Loop mechanism:** continuously checks the scan status until completion.
- **Slack notifications:** posts the scan summary and detailed results to a specified Slack channel.

Workflow Nodes
- Start VM Scan in Qualys: initiates the scan with the specified parameters.
- Convert XML to JSON: converts the scan results from XML format to JSON.
- Fetch Scan Results: retrieves scan results from Qualys.
- Check if Scan Finished: verifies whether the scan is complete.
- Loop Mechanism: handles the repeated checking of the scan status.
- Slack Notifications: posts updates and results to Slack.

Relevant Links
- Qualys API Documentation
- Qualys Platform Documentation
- Parent workflow link
- Link to Report Generator Subworkflow
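The loop mechanism (check the scan status, wait, and retry until completion) can be sketched as a generic poll-until-finished function. This is a hedged illustration: `checkStatus` is a stand-in for the "Fetch Scan Results" call, and the status value "Finished" is an assumption, not a confirmed Qualys API field.

```javascript
// Hedged sketch of the status-polling loop described above.
async function waitForScan(checkStatus, { intervalMs = 1000, maxAttempts = 30 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const { status, results } = await checkStatus();
    if (status === 'Finished') return results; // assumed terminal status
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  throw new Error('Scan did not finish within the polling window');
}

// Usage with a fake status source: "Running" twice, then "Finished".
let calls = 0;
const fakeCheck = async () =>
  ++calls < 3 ? { status: 'Running' } : { status: 'Finished', results: ['summary'] };
waitForScan(fakeCheck, { intervalMs: 10 }).then(r => console.log(r)); // [ 'summary' ]
```

In the workflow, each poll would also pass through the XML-to-JSON conversion before the status check.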