by Mark Shcherbakov
## Video Guide

I prepared a comprehensive guide detailing how to automate the parsing of invoices using n8n and LlamaParse, seamlessly capturing and storing vital billing information.

Youtube Link

## Who is this for?

This workflow is ideal for finance teams, accountants, and business operations managers who need to streamline invoice processing. It is particularly helpful for organizations seeking to reduce manual entry errors and improve efficiency in managing billing information.

## What problem does this workflow solve?

Manually processing invoices is time-consuming and error-prone. This automation eliminates manual data entry by capturing invoice details directly from uploaded documents and storing structured data efficiently, enhancing productivity and accuracy across financial operations.

## What this workflow does

The workflow leverages n8n and LlamaParse to automatically detect new invoices in a designated Google Drive folder, parse essential billing details, and store the extracted data in a structured format. The key functionalities include:

- Real-time detection of new invoices via Google Drive triggers.
- Automated HTTP requests to initiate parsing through LlamaCloud.
- Structured storage of invoice details and line items in a database for future reference.

1. **Google Drive Integration:** Monitors a specific folder in Google Drive for new invoice uploads.
2. **Parsing with LlamaParse:** Automatically sends invoices for parsing and processes results through webhooks.
3. **Data Storage in Airtable:** Creates records for invoices and their associated line items, allowing for detailed tracking.

## Setup

### N8N Workflow

1. **Google Drive Trigger:** Set up a trigger to detect new files in a specified folder dedicated to invoices.
2. **File Upload to LlamaParse:** Create an HTTP request that sends the invoice file to LlamaParse for parsing, including relevant header settings and the webhook URL.
3. **Webhook Processing:** Establish a webhook node to handle parsed results from LlamaParse and extract the needed invoice details.
4. **Invoice Record Creation:** Create initial records for invoices in your database using the parsed details received from the webhook.
5. **Line Item Processing:** Transform string data into structured line item arrays and create individual records for each item linked to the main invoice (see the sketch below).
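As an illustration of the line-item step, a Code node along these lines could split a parsed line-item string into one record per invoice line. This is a minimal sketch: the field names (`lineItems`, `invoiceId`, `description`, `quantity`, `unitPrice`) and the delimiter format are assumptions, not the template's actual schema.

```javascript
// n8n Code node (Run Once for All Items): expand each invoice's
// line-item string into one output item per line.
const output = [];

for (const item of $input.all()) {
  const rawLines = (item.json.lineItems ?? '').split('\n').filter(l => l.trim());
  for (const line of rawLines) {
    // Assumed format per line: "description | quantity | unitPrice"
    const [description, quantity, unitPrice] = line.split('|').map(p => p.trim());
    output.push({
      json: {
        invoiceId: item.json.invoiceId, // link back to the parent invoice record
        description,
        quantity: Number(quantity),
        unitPrice: Number(unitPrice),
      },
    });
  }
}

return output;
```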
by Agniva Mahata
## How it Works

1. **Trigger:** The workflow is triggered by a webhook, initiated by an Airtable automation. This automation sends the Book or Chapter record ID and the desired action (e.g., "Generate Book Details," "Generate Chapters," "Generate Chapter Research," "Generate Chapter Content").
2. **Action Routing:** A Switch node directs the workflow based on the action query parameter received from the webhook. This determines which part of the book creation process is executed.
3. **Data Retrieval:** The workflow fetches the relevant book or chapter data from Airtable using the provided recordId.
4. **AI Processing:**
   - **Book Details Generation:** If the action is "Generate Book Details," an AI Agent (powered by a Large Language Model (LLM) such as Google Gemini and the Perplexity search tool) researches the book idea. It focuses on crafting a compelling book description, identifying the target audience, and conducting general book research to maximize bestseller potential. The research brief is then saved back to Airtable.
   - **Chapter Generation:** If the action is "Generate Chapters," an LLM generates 7-10 chapter titles and descriptions based on the book idea and previous research. A structured output parser ensures the chapter data is in the correct format. The chapters are then split into individual items and saved as separate records in the "Chapter" table in Airtable, linked to the main book record.
   - **Chapter Research Generation:** If the action is "Generate Chapter Research," another AI Agent conducts in-depth research on a specific chapter, using the Perplexity search tool multiple times. It focuses on finding stories, case studies, historical events, and expert perspectives to make the chapter engaging and credible. The research is saved back to the "Chapter" record in Airtable.
   - **Chapter Content Generation:** If the action is "Generate Chapter Content," an LLM writes the full content of the chapter, using the research gathered in the previous step, the overall book research, and the chapter description. The generated content is saved back to the "Chapter" record in Airtable.
5. **Airtable Updates:** In each AI processing step, the workflow updates the corresponding Airtable record (either "Book" or "Chapter") with the generated results (research, chapter details, or content) and sets the "Action" field back to "Idle."

## Set Up Steps

### Airtable Setup (Estimated time: 10-15 minutes)

1. Copy the Airtable base blueprint: https://airtable.com/appfkz4KUlKvOjtbp/shra78TlDfqLRdSfT. This creates the "Book" and "Chapter" tables with the necessary fields.
2. In the "Book" table, create three Airtable Automations:
   - Trigger: When a record matches conditions -> Action is Generate Book Details
   - Action: Run a script. Use the following script:

   ```javascript
   let autoRoute = input.config();
   await fetch(autoRoute.webhookUrl + "?recordId=" + autoRoute.recordId + "&action=" + autoRoute.action);
   ```

   - In the script action's configuration, add three "Input variables":
     - webhookUrl (map it to your n8n webhook URL, obtained in the next step)
     - recordId (map it to the Airtable record ID)
     - action (map it to Action)
   - Repeat this process to create the remaining automations in the "Book" table, identical except for the trigger condition (e.g., Action is Generate Chapters).
3. In the "Chapter" table, create two Airtable Automations:
   - Trigger: When a record matches conditions -> Action is Generate Chapter Research
   - Action: Run a script (use the same script as above, with the same input variables).
   - Create a second automation, identical except triggered when Action is Generate Chapter Content.
### n8n Setup (Estimated time: 15-20 minutes)

1. Import the provided JSON workflow into n8n.
2. **Webhook Node:** Copy the "Test URL" from the Webhook node. This is the webhookUrl you'll use in the Airtable automations. Important: once you've tested and are ready to go live, switch to the "Production URL."
3. **Airtable Nodes:** Configure all Airtable nodes (there are eight). You'll need to connect your Airtable account using OAuth 2. Select the correct Base ("Book Agency [v1] Cobuild" or whatever you named it) and Table ("Book" or "Chapter") for each node. The field mappings are already defined in the template, but double-check them.
4. **LLM Nodes (Google Gemini & OpenAI):** Connect your Google Gemini and OpenAI accounts to the respective LLM nodes. You'll need API keys for both. You may also configure different LLM models.
5. **Perplexity Nodes:** Connect your Perplexity AI API key to the Perplexity nodes.
6. Activate the workflow.

### Testing (Estimated time: 5-10 minutes)

1. Go to your Airtable "Book" table and create a new record.
2. Fill in the "Idea" field with a book concept.
3. Change the "Action" field to "Generate Book Details". The Airtable automation should trigger, sending a request to your n8n webhook.
4. Monitor the n8n execution log to see the workflow in action.
5. Check the Airtable record to see if the "Research" field is populated.
6. Repeat the testing for Generate Chapters, Generate Chapter Research, and Generate Chapter Content.
by Onur
# Turn BBC News Articles into Podcasts using Hugging Face and Google Gemini

Effortlessly transform BBC news articles into engaging podcasts with this automated n8n workflow.

## Who is this for?

This template is perfect for:

- **Content creators** who want to quickly produce podcasts from current events.
- **Students** looking for an efficient way to create audio content for projects or assignments.
- **Individuals** interested in generating their own podcasts without technical expertise.

## Setup Information

1. **Install n8n:** If you haven't already, download and install n8n from n8n.io.
2. **Import the Workflow:** Copy the JSON code for this workflow and import it into your n8n instance.
3. **Configure Credentials:**
   - **Gemini API:** Set up your Gemini API credentials in the workflow's LLM nodes.
   - **Hugging Face Token:** Obtain an access token from Hugging Face and add it to the HTTP Request node for the text-to-speech model.
4. **Customize (Optional):**
   - **Filtering Criteria:** Adjust the News Classifier node to fine-tune the selection of news articles based on your preferences.
   - **Output Options:** Modify the workflow to save the generated audio file to a cloud storage service or publish it to a podcast hosting platform.

## Prerequisites

- An active n8n instance.
- Basic understanding of n8n workflows (no coding required).
- API credentials for Gemini and a Hugging Face account with an access token.

## What problem does it solve?

This workflow eliminates the manual effort involved in creating podcasts from news articles. It automates the entire process, from fetching and filtering news to generating the final audio file.

## What are the benefits?

- **Time-saving:** Create podcasts in minutes, not hours.
- **Easy to use:** No coding or technical skills required.
- **Customizable:** Adapt the workflow to your specific needs and preferences.
- **Cost-effective:** Leverage free or low-cost services like Gemini and Hugging Face.

## How does it work?

1. The workflow fetches news articles from the BBC website.
2. It filters articles based on their suitability for a podcast.
3. It extracts the full content of the selected articles.
4. It uses the Gemini LLM to create a podcast script.
5. It converts the script to speech using Hugging Face's text-to-speech model (see the sketch below).
6. The final podcast audio is ready for use.

## Nodes in the Workflow

- **Fetch BBC News Page:** Retrieves the main BBC News page.
- **News Classifier:** Categorizes news articles using the Gemini LLM.
- **Fetch BBC News Detail:** Extracts detailed content from suitable articles.
- **Basic Podcast LLM Chain:** Generates a podcast script using the Gemini LLM.
- **HTTP Request:** Converts the script to speech using Hugging Face.

I'm excited to share this workflow with the n8n community and help content creators and students easily produce engaging podcasts!

## Additional Tips

- Explore the n8n documentation and community resources for more advanced customization options.
- Experiment with different filtering criteria and LLM prompts to achieve your desired podcast style.
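For reference, the text-to-speech HTTP Request step corresponds roughly to a call like the one below. This is a minimal sketch of the Hugging Face Inference API; the model ID is an assumption, and the returned audio format depends on the model you pick.

```javascript
// Sketch of a Hugging Face Inference API text-to-speech call.
// MODEL is an assumed placeholder; substitute the hosted TTS model you use.
const MODEL = 'facebook/mms-tts-eng';

async function scriptToSpeech(script, hfToken) {
  const response = await fetch(`https://api-inference.huggingface.co/models/${MODEL}`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${hfToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ inputs: script }),
  });
  if (!response.ok) throw new Error(`TTS request failed: ${response.status}`);
  // The API returns raw audio bytes; the container format depends on the model.
  return Buffer.from(await response.arrayBuffer());
}
```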
by Joseph LePage
# 🌐 Confluence Page AI Chatbot Workflow

This n8n workflow template enables users to interact with an AI-powered chatbot designed to retrieve, process, and analyze content from Confluence pages. By leveraging Confluence's REST API and an AI agent, the workflow facilitates seamless communication and contextual insights based on Confluence page data.

## 🌟 How the Workflow Works

- **🔗 Input Chat Message:** The workflow begins when a user sends a chat message containing a query or request for information about a specific Confluence page.
- **📄 Data Retrieval:** The workflow uses the Confluence REST API to fetch page details by ID, including its body in the desired format (e.g., storage, view). The retrieved HTML content is converted into Markdown for easier processing. (A request sketch appears at the end of this section.)
- **🤖 AI Agent Interaction:** An AI-powered agent processes the Markdown content and provides dynamic responses to user queries. The agent is context-aware, ensuring accurate and relevant answers based on the Confluence page's content.
- **💬 Dynamic Responses:** Users can interact with the chatbot to:
  - Summarize the page's content.
  - Extract specific details or sections.
  - Clarify complex information.
  - Analyze key points or insights.

## 🚀 Use Cases

- **📚 Knowledge Management:** Quickly access and analyze information stored in Confluence without manually searching through pages.
- **📊 Team Collaboration:** Facilitate discussions by summarizing or explaining page content during team chats.
- **🔍 Research and Documentation:** Extract critical insights from large documentation repositories for efficient decision-making.
- **♿ Accessibility:** Provide an alternative way to interact with Confluence content for users who prefer conversational interfaces.

## 🛠️ Resources for Getting Started

- **Confluence API Setup:** Generate an API token for authentication via Atlassian's account management portal. Refer to Confluence's REST API documentation for endpoint details and usage instructions.
- **n8n Installation:** Install n8n locally or on a server using the official installation guide.
- **AI Agent Configuration:** Set up OpenAI or other supported language models for natural language processing.
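As a sketch of the data-retrieval step, the Confluence Cloud REST API can return a page body in "storage" format like this; the domain and page ID are placeholders, and authentication uses an Atlassian email plus API token.

```javascript
// Fetch a Confluence Cloud page by ID with its body in "storage" format.
async function getConfluencePage(pageId, email, apiToken) {
  const auth = Buffer.from(`${email}:${apiToken}`).toString('base64');
  const url = `https://your-domain.atlassian.net/wiki/rest/api/content/${pageId}?expand=body.storage`;

  const response = await fetch(url, {
    headers: { Authorization: `Basic ${auth}`, Accept: 'application/json' },
  });
  if (!response.ok) throw new Error(`Confluence request failed: ${response.status}`);

  const page = await response.json();
  // page.body.storage.value holds the page HTML, ready for Markdown conversion.
  return { title: page.title, html: page.body.storage.value };
}
```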
by Ferenc Erb
## Overview

An automation workflow that creates a complete REST API for digitally signing PDF documents using n8n webhooks. This service demonstrates how to implement secure document-signing functionality through standardized API endpoints with file upload and download capabilities.

## Use Case

This workflow is designed for developers and automation specialists who need to implement digital document signing. It's particularly useful for:

- Integrating PDF signing capabilities into existing document workflows
- API-based automation of signature processes
- Creating proof-of-concept implementations for document verification systems
- Learning n8n's webhook capabilities and file handling techniques
- Testing PDF signing in development environments before production implementation

## What This Workflow Does

### API-Based Document Management

- Exposes RESTful webhook endpoints for all document operations
- Handles multipart/form-data uploads for PDF documents
- Processes JSON payloads for signing configuration
- Provides download functionality for completed documents

### Digital Certificate Handling

- Uploads existing PFX/PKCS#12 digital certificates
- Generates new certificates with customizable attributes
- Securely manages certificate storage and access
- Associates certificates with signing operations

### Cryptographic PDF Signing

- Applies digital signatures using industry-standard cryptographic methods
- Embeds signature information within the PDF document structure
- Validates document integrity through cryptographic verification
- Preserves the original document while adding signature elements

### Webhook Integration System

- Routes different API methods to appropriate handlers
- Validates request payloads and file content
- Manages authentication through webhook paths
- Returns structured responses for integration with other systems

## Technical Architecture

### Components

- **API Gateway:** n8n webhook nodes that receive external requests
- **Request Router:** Switch nodes that direct operations based on method parameters
- **Document Processor:** Function nodes for PDF manipulation and verification
- **Certificate Manager:** Specialized nodes for cryptographic key operations
- **Storage Interface:** File operation nodes for document persistence
- **Response Formatter:** Nodes that structure API responses

### Integration Flow

Client Request → Webhook Endpoint → Method Router → Processing Engine → Digital Signing → Storage → Response Generation → Client Response

## Setup Instructions

### Prerequisites

- n8n installation (minimum version 0.214.0)
- Node.js 14 or higher
- Required environment variable: NODE_FUNCTION_ALLOW_EXTERNAL: "node-forge,@signpdf/signpdf,@signpdf/signer-p12,@signpdf/placeholder-plain"

### Configuration Steps

1. **Import Workflow:** Import the workflow JSON into your n8n instance and activate the workflow to enable the webhooks.
2. **Configure Storage:** Set the storage path variables in the workflow and ensure proper permissions on the storage directories.
3. **Test API Endpoints:** Use the included test scripts to verify functionality; test PDF upload, certificate generation, and signing.
4. **Integration:** Document the webhook URLs for integration with other systems and configure error handling according to your requirements.

## Testing Methods

Test the workflow functionality using various HTTP requests and JSON data:

- Upload PDF documents to the document processing endpoint
- Upload or generate digital certificates
- Execute PDF signing operations
- Download signed documents from the download endpoint

## Webhook Endpoints

The workflow exposes two primary webhook endpoints that form a complete API for PDF digital signing operations.
### 1. Document Processing Endpoint (/webhook/docu-digi-sign)

This endpoint handles all document and certificate operations:

- **Upload PDF**
  - HTTP: POST
  - Content-Type: multipart/form-data
  - Parameters: method, uploadType, fileName, fileData
- **Upload Certificate**
  - HTTP: POST
  - Content-Type: multipart/form-data
  - Parameters: method, uploadType, fileName, fileData
- **Generate Certificate**
  - HTTP: POST
  - Content-Type: application/json
  - Parameters: method, subjectCN, issuerCN, serialNumber, validFrom, validTo, password
- **Sign PDF** (example call at the end of this section)
  - HTTP: POST
  - Content-Type: application/json
  - Parameters: method, inputPdf, pfxFile, pfxPassword

### 2. Document Download Endpoint (/webhook/docu-download)

This endpoint handles the retrieval of processed documents:

- **Download Signed PDF**
  - HTTP: GET
  - Content-Type: application/json
  - Parameters: method, fileType, fileName

## Key Workflow Sections

The workflow is organized into logical sections with clear responsibilities:

- **Request Processing:** Parses incoming webhook data
- **Method Routing:** Directs requests to appropriate handlers
- **Document Management:** Handles file operations and storage
- **Cryptographic Operations:** Manages signing and certificate functions
- **Response Formatting:** Structures and returns results
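As a usage sketch, a client could invoke the signing operation like this. The host, file names, and the exact `method` identifier are placeholders; only the parameter names come from the endpoint description above.

```javascript
// Example client call to the signing endpoint (values are placeholders).
async function signPdf() {
  const response = await fetch('https://your-n8n-host/webhook/docu-digi-sign', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      method: 'signPdf',        // assumed method identifier
      inputPdf: 'invoice.pdf',  // a previously uploaded document
      pfxFile: 'signer.pfx',    // a previously uploaded certificate
      pfxPassword: 'changeit',
    }),
  });
  return response.json();
}
```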
by Jaruphat J.
## Overview

This workflow automatically saves files received via the LINE Messaging API into Google Drive and logs the file details into a Google Sheet. It checks the file type against allowed types, organizes files into date-based folders and (optionally) file type-specific subfolders, and sends a reply message back to the LINE user with the file URL or an error message if the file type is not permitted.

## Who is this for?

- **Developers & IT Administrators:** Looking to integrate LINE with Google Drive and Sheets for automated file management.
- **Businesses & Marketing Teams:** That want to automatically archive media files and documents received from users via LINE.
- **Anyone Interested in No-Code Automation:** Users who want to leverage n8n's capabilities without heavy coding.

## What Problem Does This Workflow Solve?

- **Automated File Organization:** Files received from LINE are automatically checked for allowed file types, then stored in a structured folder hierarchy in Google Drive (by date and/or file type).
- **Data Logging:** Each file upload is recorded in a Google Sheet, providing an audit trail with file names, upload dates, URLs, and types.
- **Instant Feedback:** Users receive an immediate reply via LINE confirming the file upload, or an error message if the file type is not allowed.

## What This Workflow Does

1. **Receives Incoming Requests:** A webhook node ("LINE Webhook Listener") listens for POST requests from LINE, capturing file upload events and associated metadata.
2. **Configuration Loading:** A Google Sheets node ("Get Config") reads configuration data (e.g., parent folder ID, allowed file types, folder organization settings, and credentials) from a pre-defined sheet.
3. **Data Merging & Processing:** The "Merge Event and Config Data" and "Process Event and Config Data" nodes merge and structure the event data with configuration settings. A "Determine Folder Info" node calculates folder names based on the configuration. If Store by Date is enabled, it uses the current date (or a specified date) as the folder name. If Store by File Type is also enabled, it uses the file's type (e.g., image) for a subfolder.
4. **Folder Search & Creation:** The workflow searches for an existing date folder ("Search Date Folder"). If the date folder is not found, an IF node ("Check Existing Date Folder") routes to a "Create Date Folder" node. Similarly, for file type organization, the workflow uses a "Search FileType Folder" node (with appropriate conditions) to look for a subfolder, or creates it if not found. The "Set Date Folder ID" and "Set Image Folder ID" nodes capture and merge the resulting folder IDs. Finally, the "Config final ParentId" node sets the final target folder ID based on the configuration:
   - Store by Date: TRUE, Store by File Type: TRUE → use the file type folder (inside the date folder).
   - Store by Date: TRUE, Store by File Type: FALSE → use the date folder.
   - Store by Date: FALSE, Store by File Type: TRUE → use the file type folder.
   - Store by Date: FALSE, Store by File Type: FALSE → use the Parent Folder ID from the configuration.
5. **File Retrieval and Validation:** An HTTP Request node ("Get File Binary Content") fetches the file's binary data from the LINE API. A Function node ("Validate File Type") checks whether the file's MIME type is included in the allowed list (e.g., "audio|image|video"). If not, it throws an error that is captured for the reply. A minimal sketch of this check appears at the end of this section.
6. **File Upload and Logging:** The "Upload File to Google Drive" node uploads the validated binary file to the final target folder.
   After a successful upload, the "Log File Details to Google Sheet" node logs details such as the file name, upload date, Google Drive URL, and file type into a designated Google Sheet.
7. **User Feedback:** The "Check Reply Enabled Flag" node checks if the reply feature is enabled. Finally, the "Send LINE Reply Message" node sends a reply message back to the LINE user with either the file URL (if the upload was successful) or an error message (if the file type was not allowed).

## Setup Instructions

1. **Google Sheets Setup:** Create a Google Sheet with two sheets:
   - **config:** Include columns for Parent Folder Path, Parent Folder ID, Store by Date (boolean), Store by File Type (boolean), Allow File Types (e.g., "audio|image|video"), CurrentDate, Reply Enabled, and CHANNEL ACCESS TOKEN.
   - **fileList:** Create headers for File Name, Date Uploaded, Google Drive URL, and File Type.

   For an example of the required format, check this Google Sheets template: Google Sheet Template
2. **Google Drive Credentials:** Set up and authorize your Google Drive credentials in n8n.
3. **LINE Messaging API:** Configure your LINE Developers Console webhook to point to the n8n webhook URL ("Line Chat Bot" node). Ensure the proper Channel Access Token is stored in your Google Sheet.
4. **n8n Workflow Import:** Import the provided JSON file into your n8n instance. Verify node connections and update any credential references as needed.
5. **Test the Workflow:** Send a test message via LINE to confirm that files are properly validated, uploaded, and logged, and that reply messages are sent.

## How to Customize This Workflow

- **Allowed File Types:** Adjust the "Allow File Types" field in your config sheet to control which file types are accepted.
- **Folder Structure:** Modify the logic in the "Determine Folder Info" and subsequent folder nodes to change how folders are structured (e.g., use different date formats or add additional categorization).
- **Logging:** Update the "Log File Details to Google Sheet" node if you wish to log additional file metadata.
- **Reply Messages:** Customize the reply text in the "Send LINE Reply Message" node to include more detailed information or instructions.
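For illustration, the "Validate File Type" step could be implemented along these lines in an n8n Code node. This is a sketch under assumptions: the property names (`allowFileTypes`, the `data` binary key) are illustrative, not the template's exact schema.

```javascript
// n8n Code node (Run Once for Each Item): sketch of the MIME-type check.
// The allowed list comes from the config sheet as a pipe-separated string,
// e.g. "audio|image|video". Property names are illustrative.
const allowed = ($json.allowFileTypes || 'audio|image|video').split('|');
const mimeType = $binary.data ? $binary.data.mimeType : ''; // e.g. "image/jpeg"
const category = mimeType.split('/')[0];                    // e.g. "image"

if (!allowed.includes(category)) {
  // The thrown error is caught downstream and turned into the LINE error reply.
  throw new Error(`File type "${mimeType}" is not allowed (allowed: ${allowed.join(', ')})`);
}

return { json: $json, binary: $binary }; // pass the item through when valid
```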
by Jimleuk
Ever wanted to build your own RAG search over Youtube videos? Well, now you can! This n8n template shows how you can build a very capable Youtube search engine powered by Apify, Qdrant and your LLM of choice to quickly and efficiently browse over many videos for research.

I originally built this template to ask questions about the "n8n @ scale office-hours" livestream videos but then extended it to include the latest videos on the official channel.

Check out a demo here: https://jimleuk.app.n8n.cloud/webhook/n8n_videos

## How it works

- Stage 1 collects the Youtube video transcripts and pushes them into a vector database. For this, I've used Apify to scrape Youtube and Qdrant to store the embeddings. Transcripts are broken down into smaller chunks and carefully tagged with metadata to assist in later search and filtering.
- Stage 2 builds a web frontend for the user to query the vectorised transcripts. I'm using a webhook to serve a simple web app and API to dynamically fetch the results. When searching for a video, I've opted to use Qdrant's search groups API which, in this use case, performs better as it returns a wider range of video results (see the sketch at the end of this section). In the web frontend, when the user clicks on a result, the matching Youtube video plays in an embedded video player.

## How to use

- Once credentials are all set, first run steps 1-3 to populate your vector store.
- Next, set the workflow to active to expose the web frontend.
- Visit the webhook URL in your browser to use it.
- If only for personal use, you may want to remove the rate-limiting mechanism in step 4.

## Requirements

- Apify for Youtube channel and video scraping
- Qdrant for the vector store
- OpenAI for LLM and embeddings

## Customising the template

- Not interested in official n8n videos? Swap to a different channel - this template will work on many as long as videos are not private or set to prevent embeds.
- Technically any vector store should work, but it may not have the same grouping API. Use the simple vector store node and revert back to basic searching instead.
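To show why the search groups API helps here, a sketch of the call follows: grouping transcript-chunk matches by video ID returns several videos rather than many chunks of one video. The collection name, payload field, and host are assumptions.

```javascript
// Sketch of Qdrant's search groups call: group transcript-chunk matches by
// video ID so results span many videos instead of many chunks of one video.
async function searchVideos(queryEmbedding, qdrantApiKey) {
  const response = await fetch(
    'http://localhost:6333/collections/youtube_transcripts/points/search/groups',
    {
      method: 'POST',
      headers: { 'api-key': qdrantApiKey, 'Content-Type': 'application/json' },
      body: JSON.stringify({
        vector: queryEmbedding,
        group_by: 'metadata.videoId', // one group per video (assumed payload field)
        limit: 6,                     // number of videos to return
        group_size: 2,                // best-matching chunks per video
        with_payload: true,
      }),
    }
  );
  const { result } = await response.json();
  return result.groups; // each group: { id: <videoId>, hits: [...] }
}
```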
by Miquel Colomer
This n8n workflow template automates the process of finding LinkedIn profiles for a person based on their name and company. It scrapes Google search results via Bright Data, parses the results with GPT-4o-mini, and delivers a personalized follow-up email with insights and suggested outreach steps.

## 🚀 What It Does

- Accepts a user-submitted form with a person's full name and company.
- Performs a Google search using Bright Data to find LinkedIn profiles and company data.
- Uses GPT-4o-mini to parse HTML results and identify matching profiles.
- Filters and selects the most relevant LinkedIn entry.
- Analyzes the data to generate a buyer persona and follow-up strategy.
- Sends a styled email with insights and outreach steps.

## 🛠️ Step-by-Step Setup

1. Deploy the form trigger to accept person data (name, position, company).
2. Build a Google search query from user input (see the sketch at the end of this section).
3. Scrape search results using Bright Data.
4. Extract HTML content using the HTML node.
5. Use GPT-4o-mini to parse LinkedIn entries and company insights.
6. Filter for matches based on user input.
7. Merge relevant data and generate personalized outreach content.
8. Send the email to a predefined address.
9. Show a final confirmation message to the user.

## 🧠 How It Works: Workflow Overview

- **Trigger:** When User Completes Form
- **Search:** Edit Url LinkedIn, Get LinkedIn Entry on Google, Extract Body and Title, Parse Google Results
- **Matching:** Extract Parsed Results, Filter, Limit, IF LinkedIn Profile is Found?
- **Fallback:** Form Not Found if no match
- **Company Lookup:** Edit Company Search, Get Company on Google, Parse Results, Split Out
- **Content Generation:** Merge, Create a Followup for Company and Person
- **Email Delivery:** Send Email, Form Email Sent

## 📨 Final Output

An HTML-styled email (using Tailwind CSS) with:

- Matched LinkedIn profile
- Company insights
- Persona-based outreach strategy

## 🔐 Credentials Used

- **BrightData account** for scraping Google search results
- **OpenAI account** for GPT-4o-mini-powered parsing and content generation
- **SMTP account** for sending follow-up emails

## ❓ Questions?

Template and node created by Miquel Colomer and n8nhackers. Need help customizing or deploying? Contact us for consulting and support.
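As a hypothetical sketch of the query-building step, restricting Google results to LinkedIn profile pages for the submitted name and company might look like this; the exact query the template builds may differ.

```javascript
// Build a Google search URL scoped to LinkedIn profile pages.
function buildSearchUrl(fullName, company) {
  const query = `site:linkedin.com/in "${fullName}" "${company}"`;
  return `https://www.google.com/search?q=${encodeURIComponent(query)}`;
}

// Example:
// buildSearchUrl('Jane Doe', 'Acme Corp')
// -> "https://www.google.com/search?q=site%3Alinkedin.com%2Fin%20%22Jane%20Doe%22%20%22Acme%20Corp%22"
```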
by Joseph
# YouTube Transcript Extraction Workflow

This n8n workflow extracts and processes transcripts from YouTube videos using the YouTube Transcript API on RapidAPI. It allows users to retrieve subtitles from YouTube videos, clean them up, and return structured transcript data for further processing.

## Table of Contents

- Problem Statement & Target Audience
- Pre-conditions & API Requirements
- Step-by-Step Workflow Explanation
- Customization Guide
- How to Set Up This Workflow

## Problem Statement & Target Audience

**Who is this for?** This workflow is ideal for content creators, researchers, and developers who need to:

- **Extract** subtitles from YouTube videos automatically.
- **Format and clean** transcript data for readability.
- **Use** transcripts for summarization, content repurposing, or language analysis.

## Pre-conditions & API Requirements

**API Required**

- **YouTube Transcript API** (RapidAPI)

**n8n Setup Prerequisites**

- A running n8n instance (Installation Guide)
- A RapidAPI account to access the YouTube Transcript API
- An API key from RapidAPI to authenticate requests

## Step-by-Step Workflow Explanation

1. **Input YouTube Video URL (Trigger):** Provides a simple input form where users enter a YouTube video URL.
2. **HTTP Request Node (Retrieve Transcript Data):** Makes a POST request to the YouTube Transcript API via RapidAPI, passing the video URL received from the input form. Uses an environment variable to store the API key securely.
3. **Function Node (Process Transcript):** Receives the API response containing the raw transcript, then processes and cleans it: removes unwanted characters, formats the text for readability, and handles errors when no transcript is available. Outputs both the raw and cleaned transcript for further use (see the sketch at the end of this section).
4. **Set Field Node (Response Formatting):** Structures the processed transcript data into a user-friendly format and returns the final transcript data to the client.

## Customization Guide

1. **Modify Transcript Cleaning Rules:** Update the Function node to apply custom text processing, such as removing timestamps or changing the output format (e.g., JSON, plain text).
2. **Store Transcripts in a Database:** Add a database node (e.g., MySQL, PostgreSQL, or Firebase) to save transcripts.
3. **Generate Summaries from Transcripts:** Integrate AI services (e.g., OpenAI, Google Gemini) to summarize transcripts.
4. **Convert Transcripts into Speech:** Use the ElevenLabs API to generate an AI-powered voiceover from transcripts.

## How to Set Up This Workflow

1. **Import the Workflow into n8n:** Download or copy the workflow JSON file and import it into your n8n instance.
2. **Set Up the API Key:** Sign up for the YouTube Transcript API, subscribe to it on RapidAPI, and paste your API key where "your_api_key" appears.
3. **Activate the Workflow:** Start the workflow in n8n and enter a YouTube video URL in the input form. The workflow will return a cleaned transcript.

This workflow ensures seamless YouTube transcript extraction and processing with minimal manual effort. 🚀
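For illustration, the transcript-cleaning Function node could look like the sketch below. The response shape (an array of `{ text }` segments under a `transcript` key) is an assumption; adjust it to the actual payload returned by your RapidAPI subscription.

```javascript
// n8n Code node sketch: clean the raw transcript returned by the API.
const segments = $json.transcript || [];
const raw = segments.map(s => s.text).join(' ');

if (!raw) {
  throw new Error('No transcript available for this video.');
}

const cleaned = raw
  .replace(/\[.*?\]/g, '')  // drop bracketed cues like [Music]
  .replace(/&amp;/g, '&')   // unescape common HTML entities
  .replace(/\s+/g, ' ')     // collapse runs of whitespace
  .trim();

return { json: { raw, cleaned } };
```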
by Samir Saci
Tags: Sustainability, Web Scraping, OpenAI, Google Sheets, Newsletter, Marketing

## Context

Hey! I'm Samir, a Supply Chain Engineer and Data Scientist from Paris, and the founder of LogiGreen Consulting. We use AI, automation, and data to support sustainable business practices for small, medium and large companies.

I use this workflow to raise awareness about sustainability and promote my business by delivering automated daily news digests.

> Promote your business with a fully automated newsletter powered by AI!

This n8n workflow scrapes articles from the official EU news website and sends a daily curated digest, highlighting only the most relevant sustainability news.

📬 For business inquiries, feel free to connect with me on LinkedIn

## Who is this template for?

This workflow is useful for:

- **Business owners** who want to promote their services or products with a fully automated newsletter
- **Sustainability professionals** staying informed on EU climate news
- **Consultants and analysts** working on CSRD, Green Deal, or ESG initiatives
- **Corporate communications teams** tracking relevant EU activity
- **Media curators** building newsletters

## What does it do?

This n8n workflow:

- ⏰ Triggers automatically every morning
- 🌍 Scrapes articles from the EU Commission News Portal
- 🧠 Uses OpenAI GPT-4o to classify each article for sustainability relevance
- 📄 Stores the results in a Google Sheet for tracking
- 🧾 Generates a beautiful HTML digest email, including titles, summaries, and images
- 📬 Sends the digest via Gmail to your mailing list

## How it works

1. Trigger at 08:30 every morning
2. Scrape and extract article blocks from the EU news site
3. Use OpenAI to decide whether articles are sustainability-related (see the sketch at the end of this section)
4. Store relevant entries in Google Sheets
5. Generate an HTML email with a professional layout and logo
6. Send the digest via Gmail to a configured recipient list

## What do I need to get started?

You'll need:

- A Google Sheet connected to your n8n instance
- An OpenAI account with GPT-4 or GPT-4o access
- A Gmail OAuth credential setup

## Follow the Guide!

Follow the sticky notes inside the workflow or check out my step-by-step tutorial on how to configure and deploy it.

🎥 Watch My Tutorial

## Notes

- You can customize the system prompt to adjust how the AI classifies "sustainability"
- Works well for tracking updates relevant to climate action, green transition, and circular economy
- This workflow was built using n8n version 1.85.4

Submitted: April 24, 2025
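Conceptually, the classification step resembles the sketch below, written against the OpenAI Chat Completions API. The system prompt and yes/no output contract are illustrative; the template's actual prompt lives in its OpenAI node.

```javascript
// Sketch of the article-classification step (prompt and contract are illustrative).
async function isSustainabilityRelated(article, apiKey) {
  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: { Authorization: `Bearer ${apiKey}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'gpt-4o',
      messages: [
        {
          role: 'system',
          content: 'Answer YES or NO: is this EU news article about sustainability (climate, Green Deal, ESG, circular economy)?',
        },
        { role: 'user', content: `${article.title}\n\n${article.summary}` },
      ],
      temperature: 0,
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content.trim().toUpperCase().startsWith('YES');
}
```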
by PollupAI
This n8n workflow automates the import of your Google Keep notes into a structured Google Sheet, using Google Drive, OpenAI for AI-powered processing, and JSON file extraction. It's perfect for users who want to turn exported Keep notes into a searchable, filterable spreadsheet – optionally enhanced by AI summarization or transformation.

## Who is this for?

- Researchers, knowledge workers, and digital minimalists who rely on Google Keep and want to better organize or analyze their notes.
- Anyone who regularly exports Google Keep notes and wants a clean, automated workflow to store them in Google Sheets.
- Users looking to apply AI to process, summarize, or extract insights from raw notes.

## What problem is this workflow solving?

Exporting Google Keep notes via Google Takeout gives you unstructured .json files that are hard to read and manage. This workflow solves that by:

- Filtering relevant .json files
- Extracting note content
- (Optionally) applying AI to analyze or summarize each note
- Writing the result into a structured Google Sheet

## What this workflow does

1. **Google Drive Search:** Looks for .json files inside a specified "Keep" folder.
2. **Loop:** Processes files in batches of 10.
3. **File Filtering:** Filters by .json extension.
4. **Download + Extract:** Downloads each file and extracts note content from the JSON (see the sketch at the end of this section).
5. **Optional Filtering:** Only keeps non-archived notes or those meeting content criteria.
6. **AI Processing (optional):** Uses OpenAI to summarize or transform the note content.
7. **Prepare for Export:** Maps note fields to be written.
8. **Google Sheets:** Appends or updates the target sheet with the note data.

## Setup

1. Export your Google Keep notes using Google Takeout:
   - Deselect all, then choose only Google Keep.
   - Choose "Send download link via email".
2. Unzip the downloaded archive and upload the .json files to your Google Drive.
3. Connect Google Drive, OpenAI, and Google Sheets in n8n.
4. Set the correct folder path for your notes in the "Search in 'Keep' folder" node.
5. Point the Google Sheets node to your spreadsheet.

## How to customize this workflow to your needs

- **Skip AI processing:** If you don't need summaries or transformations, remove or disable the OpenAI Chat Model node.
- **Filter criteria:** Customize the Filter node to extract only recent notes, or those containing specific keywords.
- **AI prompts:** Edit the Tools Agent or Chat Model node to instruct the AI to summarize, extract tasks, categorize notes, etc.
- **Field mapping:** Adjust the "Set fields for export" node to control what gets written to the spreadsheet.

Use this template to build a powerful knowledge extraction tool from your Google Keep archive – ideal for backups, audits, or data-driven insights.
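For reference, a note-mapping step run after the file's JSON has been extracted could look like this sketch. The fields (`title`, `textContent`, `isArchived`, `labels`, `userEditedTimestampUsec`) match Google Takeout's Keep export format at the time of writing; verify them against your own export.

```javascript
// n8n Code node sketch: map one extracted Keep note to spreadsheet columns.
const note = $json;

return {
  json: {
    title: note.title || '(untitled)',
    content: note.textContent || '',
    archived: Boolean(note.isArchived),
    labels: (note.labels || []).map(l => l.name).join(', '),
    // Keep stores the edit time in microseconds since the Unix epoch.
    editedAt: new Date(Number(note.userEditedTimestampUsec) / 1000).toISOString(),
  },
};
```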
by Eduard
This workflow demonstrates three distinct approaches to chaining LLM operations using Claude 3.7 Sonnet. Connect to any section to experience the differences in implementation, performance, and capabilities.

## What you'll find

- **1️⃣ Naive Sequential Chaining:** The simplest but least efficient approach - connecting LLM nodes in a direct sequence. Easy to set up for beginners but becomes unwieldy and slow as your chain grows.
- **2️⃣ Agent-Based Processing with Memory:** Process a list of instructions through a single AI Agent that maintains conversation history. This structured approach provides better context management while keeping your workflow organized.
- **3️⃣ Parallel Processing for Maximum Speed:** Split your prompts and process them simultaneously for much faster results. Ideal when you need to run multiple independent tasks without shared context (illustrated in the sketch at the end of this section).

## Setup Instructions

- **API Credentials:** Configure your Anthropic API key in the credentials manager. This workflow uses Claude 3.7 Sonnet, but you can modify the model in each Anthropic Chat Model node, or pick an entirely different LLM.
- **For Cloud Users:** If using the parallel processing method (section 3), replace {{ $env.WEBHOOK_URL }} in the "LLM steps - parallel" HTTP Request node with your n8n instance URL.
- **Test Data:** The workflow fetches content from the n8n blog by default. You can modify this part to use a different content source.
- **Customization:** Each section contains a set of example prompts. Modify the "Initial prompts" nodes to change the questions asked to the LLM.

Compare these methods to understand the trade-offs between simplicity, speed, and context management in your AI workflows! Follow me on LinkedIn for more tips on AI automation and n8n workflows!
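To make the parallel idea concrete, here is a sketch in plain code: each prompt is an independent call to the Anthropic Messages API, fired concurrently with Promise.all. In the workflow itself this is achieved with parallel HTTP Request executions via the webhook; the model ID below is an assumption.

```javascript
// Sketch of parallel LLM calls against the Anthropic Messages API.
async function askClaude(prompt, apiKey) {
  const response = await fetch('https://api.anthropic.com/v1/messages', {
    method: 'POST',
    headers: {
      'x-api-key': apiKey,
      'anthropic-version': '2023-06-01',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'claude-3-7-sonnet-20250219', // assumed model ID; check your account
      max_tokens: 1024,
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  const data = await response.json();
  return data.content[0].text;
}

// Independent prompts run simultaneously instead of one after another.
const prompts = ['Summarize the article.', 'List key takeaways.', 'Suggest a title.'];
const results = await Promise.all(
  prompts.map(p => askClaude(p, process.env.ANTHROPIC_API_KEY))
);
```

The trade-off: parallel calls are faster but share no context, which is exactly why section 2's agent with memory exists for tasks that need a running conversation.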