by Zain Ali
# 🧾 Generate Project Summary from meeting transcript

## Who's it for 🤝

- Project managers looking to automate client meeting summaries
- Client success teams needing structured deliverables from transcripts
- Agencies and consultants who want consistent, repeatable documentation

## How it works / What it does ⚙️

1. **Trigger**: A manual or webhook trigger kicks off the workflow.
2. **Get meeting transcript**: Reads the raw transcript from a specified Google Docs file.
3. **Generate summary**: Sends the transcript plus instructions to OpenAI (gpt-4.1-mini) to produce a structured project summary.
4. **Convert to HTML**: Transforms the LLM-generated Markdown into styled HTML.
5. **Prepare request**: Wraps the HTML and metadata into a multipart request body (see the sketch below).
6. **Create Google Doc**: Uploads the new "Project Summary" document into your Drive folder.

## How to set up 🛠️

**Credentials**

- Google Docs & Drive OAuth2 credentials
- OpenAI API key (gpt-4.1-mini)

**Nodes configuration**

- Manual Trigger / Webhook node
- Google Docs "Get meeting transcript" node: set documentURL
- AI Chat Model node: select gpt-4.1-mini
- Markdown node: enable tables & emoji
- Google Drive "CreateGoogleDoc" node: set the target folder ID

**Paste in your IDs**

- Update documentURL to point to your transcript doc
- Update google_drive_folder_id in the Set node

**Execute**

- Click "Execute Workflow" or call the workflow via webhook

## Requirements 📋

- n8n
- Google OAuth2 scopes for Docs & Drive
- OpenAI account with GPT-4.1-mini access
- A Google Drive folder to store summaries

## How to customize ✨

- **Output format**: Edit the Markdown prompt in the ChainLlm node to adjust headings or tone
- **Timeline section**: Extend the LLM prompt template with your own phase table
- **Styling**: Tweak the inline CSS in the Code node (Prepare_Request) for fonts or margins
- **Trigger**: Swap the Manual Trigger for an HTTP/Webhook trigger to integrate with other tools
- **Language model**: Switch to a different model by changing model.value in the AI node
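For orientation, here is a minimal sketch of what the "Prepare request" Code node can produce: a multipart/related body pairing the Google Drive file metadata with the generated HTML. The boundary string and input field names are illustrative assumptions, not the exact node contents.

```javascript
// Hedged sketch of a "Prepare request" Code node (n8n).
// Assumes the previous node provides `html` and `google_drive_folder_id`;
// field names and the boundary string are placeholders.
const { html, google_drive_folder_id } = $input.first().json;
const boundary = 'n8n-summary-boundary';

const metadata = {
  name: 'Project Summary',
  mimeType: 'application/vnd.google-apps.document', // let Drive convert the HTML into a Google Doc
  parents: [google_drive_folder_id],
};

const body =
  `--${boundary}\r\n` +
  `Content-Type: application/json; charset=UTF-8\r\n\r\n` +
  `${JSON.stringify(metadata)}\r\n` +
  `--${boundary}\r\n` +
  `Content-Type: text/html\r\n\r\n` +
  `${html}\r\n` +
  `--${boundary}--`;

return [{
  json: {
    body,
    contentType: `multipart/related; boundary=${boundary}`,
  },
}];
```

The HTTP request node that follows can then send this body with `uploadType=multipart` to create the document in the target folder.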
by Niranjan G
## Who is this for?

NVD (National Vulnerability Database) data is essential for security analysts, vulnerability managers, and DevSecOps professionals who need to perform CVE lookups and monitor historical change logs. This workflow streamlines that effort by providing structured outputs for audit, triage, or compliance tracking.

> 📝 Note: While this example uses Google Sheets as the destination, you can easily swap the final destination node (e.g., Slack, email, a database) to suit your automation needs.

## What problem is this solving?

Security teams often look up CVE data manually and track changes across multiple tools. That process is inefficient and error-prone. This workflow automates CVE lookup and historical change tracking by logging enriched vulnerability data into Google Sheets in real time.

## What this workflow does

This workflow handles CVE API lookup and change history tracking. In many vulnerability automation pipelines, you need not only the metadata of a CVE but also how it has evolved over time. Whether the goal is enrichment, risk scoring, or remediation validation, this workflow surfaces both current and historical CVE data.

The template performs the following actions:

1. Accepts incoming webhook requests containing a CVE ID
2. Queries the NVD CVE Lookup API to fetch vulnerability metadata
3. Queries the NVD CVE History API to retrieve all historical changes (see the sketch below)
4. Flattens both datasets into a sheet-compatible structure
5. Appends vulnerability metadata to one sheet and change history to another within the same Google Spreadsheet

## Setup

### 🔑 Request an NVD API Key

To request an NVD API key, provide your organization name, a valid email address, and your organization type at NVD API Key Request. You must scroll to the end of the Terms of Use Agreement and check "I agree to the Terms of Use" to obtain an API key. After submission, you will receive a single-use hyperlink via email to activate and view your API key. If it is not activated within seven days, a new request must be submitted.

### 📊 API Rate Limits

Without an API key, you are limited to 5 requests per 30-second window. With an API key, you are allowed up to 50 requests in the same period. To prevent request throttling, it's recommended to introduce slight delays between consecutive API calls in production setups.

### Workflow configuration

1. Clone or import this workflow into your n8n instance.
2. Set up the following credentials:
   - Google Sheets OAuth2
   - NVD API Key (via HTTP Header Auth)
3. The workflow logs data to a Google Sheet titled NVD Database, with Sheet 1 named CVE Lookup and Sheet 2 named CVE History.
4. Trigger each workflow using its webhook URL, appending ?cveId=CVE-XXXX-XXXX as a query parameter.

### 🔍 Example Webhook Request (CVE Change History)

You can test this workflow with the following example:

GET https://your-domain.com/webhook/cve-history?cveId=CVE-2023-34362

## How to customize this workflow

- Use the Edit Fields node (optional) to centralize configuration such as sheet name or query input
- Extend the CVE flattening logic to include more nested metadata if needed
- Integrate notification systems (e.g., Slack or email) by branching from the processing nodes
- Modify webhook paths for better endpoint organization

## 🔐 Production Security Tips

Use HTTP Header Auth on the webhook for secure access.

> ⚠️ This template uses webhooks and NVD API access with authentication headers.
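As a rough illustration of what the two HTTP Request nodes do, the sketch below calls the NVD 2.0 lookup and history endpoints directly with the `apiKey` header. The endpoints and header name reflect NVD's public documentation at the time of writing; the response field names are assumptions you should verify against an actual response.

```javascript
// Hedged sketch: querying the NVD 2.0 APIs with an apiKey header,
// mirroring the workflow's HTTP Request nodes.
const cveId = 'CVE-2023-34362';
const headers = { apiKey: process.env.NVD_API_KEY }; // stored via HTTP Header Auth in n8n

const lookup = await fetch(
  `https://services.nvd.nist.gov/rest/json/cves/2.0?cveId=${cveId}`,
  { headers }
).then((r) => r.json());

const history = await fetch(
  `https://services.nvd.nist.gov/rest/json/cvehistory/2.0?cveId=${cveId}`,
  { headers }
).then((r) => r.json());

// Field names assume the documented 2.0 response layout.
console.log(lookup.vulnerabilities?.length, history.cveChanges?.length);
```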
This template uses two flows:

- **Webhook 1: NVD CVE Lookup** looks up CVE vulnerability metadata from NVD and syncs it to a Google Sheet
- **Webhook 2: NVD CVE Change History** tracks the change history for CVEs via NVD and logs each update

Each flow:

1. Hits NVD's respective endpoint
2. Uses a custom JS Code node to flatten the nested JSON (see the sketch below)
3. Syncs data to a dedicated Google Sheet tab

🧩 4 nodes: Webhook → API Call → Parse → Sheet Sync

Make sure both flows are activated and the webhooks are exposed for external access. Whether hosted internally or in a cloud environment, ensure you have a secure setup when running n8n in production.
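A rough sketch of what the flattening Code node might look like for the lookup flow follows. The field paths assume NVD's 2.0 response shape and the column names are placeholders, so adjust both to your sheet.

```javascript
// Hedged sketch of a flattening Code node (n8n). Field paths follow the
// NVD 2.0 response layout; column names should match your Google Sheet.
const rows = [];

for (const item of $input.all()) {
  const vulns = item.json.vulnerabilities ?? [];
  for (const { cve } of vulns) {
    rows.push({
      json: {
        cveId: cve.id,
        published: cve.published,
        lastModified: cve.lastModified,
        description: cve.descriptions?.find((d) => d.lang === 'en')?.value ?? '',
        cvssScore: cve.metrics?.cvssMetricV31?.[0]?.cvssData?.baseScore ?? null,
      },
    });
  }
}

return rows;
```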
by Praveena
## Idea

The idea for this app came about because I wanted to build a unique gift for my niece, who gets excited about her birthday (which I'm going to miss this year). The web app has a simple countdown (in HTML and JS), but more importantly there is an AI agent that answers some specific questions and knows her preferences.

## How it works

Questions from the app are sent via webhook to n8n (see the sketch below), which pulls a preferences file (her likes, dislikes, personality) from Postgres, and an AI Agent answers the questions. The current status is stored back in Postgres (especially the status of the cat and universe happenings) before responding.

## Features

- Integrated AI chatbot via n8n webhook
- Persistent conversation history
- Minimizable chat interface
- Fallback support for offline testing
- "Where's Mittens": a query to track her lost cat in the multiverse
- Multiverse updates, with the most recent update stored

## Prerequisites

- A PostgreSQL database. Alternatively, use any other database, but change the n8n nodes accordingly.
- An LLM API key.

## Step by Step Instructions

1. Export this n8n workflow.
2. Modify the LLM API key; I used OpenAI GPT-4.1.
3. For the web app scaffolding, you will need Node, HTML, and JavaScript. I've created a mini version using Node and JS, with the web app and n8n connection settings, here: <https://github.com/productiser/FiBirthdayAgent>
4. PostgreSQL database script (one table for memory and context storage):

```sql
CREATE TABLE fifi_world_context (
  id TEXT PRIMARY KEY,                              -- e.g., 'agent_fifi'
  cat_location TEXT,                                -- e.g., "Bubble Nebula"
  cat_activity TEXT,                                -- e.g., "Playing laser tag with moon mice"
  fifi_preferences JSONB,                           -- e.g., likes/dislikes/foods/shows
  world_history TEXT,                               -- Summary of narrative events
  last_updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
```

5. Modify the system prompt as per your needs.

## Built With

- n8n (self-hosted)
- Self-hosted web app, hosted on Vercel
- Total spend: <£1 (AI costs only)
- Total time: <1 day

## Support

Watch this video for a web app overview and how it looks: <https://youtu.be/e7PlrTdvwoM>

Contact me at info@pankstr.com or superllmuser@gmail.com for any queries.

Hope you enjoy!!
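As an illustration of how the web app can talk to the workflow, here is a minimal fetch call from the front end to the n8n webhook. The URL and payload field names are hypothetical placeholders, not values from the template or the linked repo.

```javascript
// Hedged sketch: front-end call from the birthday web app to the n8n webhook.
// The webhook URL and field names are placeholders.
async function askFifiAgent(question, sessionId) {
  const res = await fetch('https://your-n8n-domain.com/webhook/fifi-agent', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ question, sessionId }),
  });
  const data = await res.json();
  return data.answer; // display in the chat widget, with a local fallback if the call fails
}
```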
by Yang
## Who is this for?

This template is designed for content creators, marketing teams, educators, and media managers who want to repurpose video content into written blog posts with visuals. It's ideal for anyone looking to automate the process of transforming YouTube videos into professional blog articles and custom images.

## What problem is this workflow solving?

Creating written content from video material is time-consuming and manual. This workflow automates the entire pipeline: detecting new YouTube video uploads, transcribing the audio, turning it into an engaging blog post, generating a matching visual, and saving both in Airtable. It saves hours of work while keeping your blog or social feed active and consistent.

## What this workflow does

This automation listens for new YouTube videos added to a Google Drive folder, extracts the full transcript using Dumpling AI, and sends it to GPT-4o to generate a blog post and an image prompt. Dumpling AI then turns the prompt into a 16:9 visual. The blog post and visual are saved to Airtable for easy publishing or curation.

## Setup

1. **Google Drive Trigger**
   - Create a folder in Google Drive and upload your YouTube videos there.
   - Link this folder in the "Watch Folder for New YouTube Videos" node.
   - Enable polling every minute or adjust as needed.
2. **Download & prepare the video**
   - The video is downloaded and converted into base64 format by the next two nodes: Download Video File and Convert Downloaded Video to Base64 (see the sketch below).
3. **Transcription with Dumpling AI**
   - The base64 video is sent to Dumpling AI's extract-video endpoint.
   - You must have a Dumpling AI account and an API key with access to this endpoint: Dumpling AI Docs
4. **Generate blog content with GPT-4o**
   - GPT-4o takes the transcript and generates a human-like blog post and a descriptive prompt for AI image generation.
   - Make sure your OpenAI credentials are configured.
5. **Generate the visual**
   - The prompt is passed to Dumpling AI's generate-ai-image endpoint using the FLUX.1-pro model. The result is a clean 1024x576 image.
6. **Save to Airtable**
   - Blog content is stored under the Content field in Airtable.
   - The image prompt is also added to the Attachments column as a visual reference.
   - Ensure the Airtable base and table are preconfigured with the correct field names.

## How to customize this workflow to your needs

- Change the GPT prompt to alter the tone or format of the blog post (e.g., add bullet points or SEO tags).
- Modify the Dumpling AI prompt to generate different image styles.
- Add a scheduler or webhook trigger to run at different intervals or through other integrations.
- Connect the output to Ghost, Notion, or your CMS using additional nodes.

## 🧠 Sticky Note Summary

**Part 1: Transcription & Blog Prompt**

- Watches a Google Drive folder for new video uploads.
- Downloads and encodes the video.
- Transcribes the full audio with Dumpling AI.
- GPT-4o writes a blog post and a descriptive image prompt.

**Part 2: Image Generation & Airtable Save**

- Dumpling AI generates a visual from the image prompt.
- Blog content is saved to Airtable.
- The image prompt is patched into the Attachments field in the same record.

✅ Use this if you want to automate repurposing YouTube videos into blog content with zero manual work.
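For reference, a minimal sketch of what a "Convert Downloaded Video to Base64" Code node can look like in n8n. It assumes the downloaded file sits in the binary property named "data"; the property and output field names are assumptions, not the template's exact configuration.

```javascript
// Hedged sketch of a base64-conversion Code node (n8n).
// Assumes the previous node stored the video in the binary property "data".
const item = $input.first();
const buffer = await this.helpers.getBinaryDataBuffer(0, 'data');

return [{
  json: {
    fileName: item.binary?.data?.fileName ?? 'video.mp4',
    videoBase64: buffer.toString('base64'),
  },
}];
```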
by David Olusola
# AI Lead Capture System - Complete Setup Guide

## Prerequisites

- n8n instance (cloud or self-hosted)
- Google AI Studio account (free tier available)
- Google account for Sheets integration
- Website with chat widget capability

## Phase 1: Core Infrastructure Setup

### Step 1: Set Up Google AI Studio

1. Go to Google AI Studio
2. Create an account or sign in with Google
3. Navigate to "Get API Key"
4. Create a new API key for your project
5. Copy and securely store the API key

Free tier limits: 15 requests/minute, 1 million tokens/month

### Step 2: Configure Google Sheets

1. Create a new Google Sheet for lead storage
2. Add column headers (exact names):
   - Full Name
   - Company Name
   - Email Address
   - Phone Number
   - Project Intent/Needs
   - Project Timeline
   - Budget Range
   - Preferred Communication Channel
   - How they heard about DAEX AI
3. Copy the Google Sheet ID from the URL (between /d/ and /edit)
4. Ensure the sheet is accessible to your Google account

### Step 3: Import n8n Workflow

1. Open your n8n instance
2. Create a new workflow
3. Click the "..." menu → Import from JSON
4. Paste the provided workflow JSON
5. The workflow will appear with all nodes connected

## Phase 2: Credential Configuration

### Step 4: Set Up Google Gemini API

1. In n8n, go to Credentials → Add Credential
2. Search for "Google PaLM API"
3. Enter your API key from Step 1
4. Test the connection
5. Link it to the "Google Gemini Chat Model" node

### Step 5: Configure Google Sheets Access

1. Go to Credentials → Add Credential
2. Select "Google Sheets OAuth2 API"
3. Follow the OAuth flow to authorize your Google account
4. Test the connection with your sheet
5. Link it to the "Google Sheets" node

## Phase 3: Workflow Customization

### Step 6: Update Company Information

1. Open the AI Agent node
2. In the system message, replace all mentions of:
   - Company name and description
   - Service offerings and specializations
   - FAQ knowledge base
   - Typical project timelines and pricing ranges
3. Adjust the conversation tone to match your brand voice

### Step 7: Configure Lead Qualification Fields

1. In the AI Agent system message, modify the required information list:
   - Add/remove qualification questions
   - Adjust budget ranges for your services
   - Customize timeline options
   - Update communication channel preferences
2. In the Google Sheets node, update column mappings if you changed fields

### Step 8: Set Up Sheet Integration

1. Open the Google Sheets node
2. Click the Document ID dropdown
3. Select your lead capture sheet
4. Verify all column mappings match your sheet headers
5. Test with sample data

## Phase 4: Website Integration

### Step 9: Get Webhook URL

1. Open the Webhook node in n8n
2. Copy the webhook URL (starts with your n8n domain)
3. Note: the URL format is https://your-n8n-domain.com/webhook/[unique-id]

### Step 10: Connect Your Chat Widget

Choose your integration method:

**Option A: Direct JavaScript Integration**

```javascript
// Add to your website
function sendMessage(message, sessionId) {
  fetch('YOUR_WEBHOOK_URL', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      message: message,
      sessionId: sessionId || 'visitor-' + Date.now()
    })
  })
    .then(response => response.json())
    .then(data => {
      // Display AI response in your chat widget
      displayMessage(data.message);
    });
}
```

**Option B: Chat Platform Webhook**

1. Open your chat platform settings (Intercom, Crisp, etc.)
2. Find the webhook/integration section
3. Add a webhook URL pointing to your n8n endpoint
4. Configure it to send message and session data

**Option C: Zapier/Make.com Integration**

1. Create a new Zap/Scenario
2. Trigger: new chat message from your platform
3. Action: HTTP POST to your n8n webhook
4. Map the message content and session ID

## Phase 5: Testing & Optimization

### Step 11: Test Complete Flow

1. Send a test message through your chat widget
2. Verify the AI responds appropriately
3. Check that conversation context is maintained
4. Confirm lead data appears in Google Sheets
5. Test with various conversation scenarios

### Step 12: Monitor Performance

- Check n8n execution logs for errors
- Monitor Google Sheets for data quality
- Review conversation logs for improvement opportunities
- Track response times and conversion rates

### Step 13: Fine-Tune Conversations

- Analyze real conversation logs
- Update system prompts based on common questions
- Add new FAQ knowledge to the AI agent
- Adjust qualification questions based on lead quality
- Optimize for your specific customer patterns

## Phase 6: Advanced Features (Optional)

### Step 14: Add Lead Scoring

1. Create a new column in Google Sheets for "Lead Score"
2. Update the AI agent to calculate scores based on:
   - Budget range (higher budget = higher score)
   - Timeline urgency (sooner = higher score)
   - Project complexity (complex = higher score)
3. Add conditional formatting in Google Sheets to highlight high-value leads

A minimal scoring sketch is shown below, after the troubleshooting section.

### Step 15: Set Up Notifications

1. Add an email notification node after Google Sheets
2. Configure it to send alerts for high-priority leads
3. Include lead details and a conversation summary
4. Set up different notification rules for different lead scores

### Step 16: Analytics Dashboard

1. Connect Google Sheets to Google Data Studio or similar
2. Create a dashboard showing:
   - Daily lead volume
   - Conversion rates by source
   - Average qualification time
   - Lead quality scores
   - Revenue pipeline from captured leads

## Troubleshooting Common Issues

**AI Not Responding**

- Check Google Gemini API key validity
- Verify the API quota is not exceeded
- Review n8n execution logs for errors

**Data Not Saving to Sheets**

- Confirm Google Sheets permissions
- Check that column names match
- Verify the sheet ID is correct

**Chat Widget Not Connecting**

- Test the webhook URL directly with curl/Postman
- Verify the JSON format matches the expected structure
- Check CORS settings for browser-based integrations

**Conversation Context Lost**

- Ensure sessionId is unique per visitor
- Check the memory node configuration
- Verify sessionId is passed consistently
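For Step 14, a minimal scoring sketch is shown below. The weights, field names, and thresholds are illustrative assumptions; adapt them to your own budget ranges, timelines, and sheet columns.

```javascript
// Hedged sketch of a lead-scoring Code node (n8n). Weights and field names
// are placeholders, not part of the template.
const lead = $input.first().json;
let score = 0;

// Budget range: higher budget, higher score
if (lead['Budget Range'] === '$10k+') score += 3;
else if (lead['Budget Range'] === '$5k-$10k') score += 2;
else score += 1;

// Timeline urgency: sooner, higher score
if (lead['Project Timeline'] === 'ASAP') score += 3;
else if (lead['Project Timeline'] === '1-3 months') score += 2;
else score += 1;

// Project complexity: length of the intent text used as a crude proxy
if ((lead['Project Intent/Needs'] || '').length > 200) score += 2;

return [{ json: { ...lead, 'Lead Score': score } }];
```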
by Yar Malik (Asfandyar)
## How it works

- **Trigger:** Listens for an incoming chat message
- **Copy Assistant:** Feeds the message (plus memory) into an OpenAI Chat Model and exposes two "tools":
  - Cold Email Writer Tool
  - Sales Letter Tool
- **Tool execution:** Depending on the user's intent, the appropriate tool generates the copy
- **Save output:** Writes the generated email or sales letter into your target document via the Update a document node

## Set up steps

- Configure your OpenAI Chat Model credentials in n8n (no hard-coded keys!)
- Add and authenticate the Simple Memory credential (to keep context across messages)
- Create Google Docs (or MS Word) credentials for the Update a document node
- Ensure your Chat trigger is pointing at your incoming-message endpoint
- Mandatory: drop sticky-note annotations on each tool node explaining where to enter API keys and how to tweak prompts

Once everything's wired up, send a test chat message like "Write me a cold email for a fintech startup" and watch the workflow spin up a polished draft in your document.

## How to use

1. Import the workflow JSON into n8n.
2. Configure your Chat trigger (webhook or form) to receive incoming messages.
3. Send a chat prompt like: "Write me a cold email for a B2B SaaS offering."
4. The "Copy Assistant" custom GPT picks the right tool (Cold Email or Sales Letter).
5. The generated copy is written directly into your linked Google Doc or Word document.

## Requirements

- OpenAI API key (with Chat Completions & Custom GPTs enabled)
- Custom Assistant created in your ChatGPT dashboard (Assistant ID pasted into the Chat Model node)
- n8n instance (Cloud or self-hosted) with credentials set up for:
  - Simple Memory (to persist context)
  - Google Docs or Microsoft Word (for document output)

## Customising this workflow

- Tweak the system and user prompts inside the Copy Assistant node to fit your brand voice.
- Swap in Slack, Teams, or email nodes instead of a document writer to deliver copy where you need it.
- Add or remove tools (e.g., a "Follow-up Email Writer") by duplicating the existing tool pattern.
- Use sticky-note annotations on every node to explain where to enter API keys, Assistant IDs, or prompt tweaks.
by Angel Menendez
## Who is this for?

This workflow is designed for teams using Slack for communication and ServiceNow for incident management. It simplifies incident lookup by enabling team members to fetch incident details directly within Slack via a Slash Command.

## What problem is this workflow solving?

Manually switching between Slack and ServiceNow to retrieve incident details is time-consuming and disrupts workflow efficiency. This workflow bridges the two platforms, providing instant access to critical incident information in Slack, saving time and improving response efficiency.

## What this workflow does

The workflow listens for a Slash Command in Slack that includes an incident ID, extracts the ID from the incoming payload (see the sketch below), queries ServiceNow for the corresponding incident details, and sends a formatted response back to Slack. Depending on the query result, it can:

- Display incident details (e.g., ID, description, severity, and priority).
- Notify the user if no matching incident is found.
- Alert the user if there's an issue connecting to ServiceNow.

## Setup

**Slack setup**

1. Create a Slash Command in Slack with the appropriate endpoint URL.
2. Configure the command to send a POST request to the webhook endpoint of this workflow.
3. For details on how to set up the Slack app using Slash Commands and n8n, check out this video.

**ServiceNow setup**

1. Create or use an existing account with the necessary permissions to access incident data.
2. Configure the ServiceNow node with your ServiceNow credentials.

**n8n workflow activation**

1. Deploy and activate the workflow in your n8n instance.
2. Ensure all nodes are properly configured and connected.

## How to customize this workflow to your needs

- **Modify incident query parameters:** Adjust the query logic in the Search For Incident in ServiceNow node to include additional filters or data points based on your organization's needs.
- **Slack response customization:** Customize the Slack response template to display additional incident details or to match your team's tone and style.
- **Error handling:** Enhance the error handling nodes to include more detailed logs or send alerts to a dedicated Slack channel.
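For orientation, here is a rough sketch of extracting the incident ID from the Slash Command payload. Slack posts form-encoded fields such as text and user_name; in n8n these typically arrive under the webhook item's body. The regex is an illustrative guess at ServiceNow's INC-style numbering, not the template's exact logic.

```javascript
// Hedged sketch: pulling the incident ID out of a Slack Slash Command payload
// in an n8n Code node. Field paths and the regex are assumptions.
const body = $input.first().json.body ?? {};
const text = body.text ?? '';
const match = text.match(/INC\d+/i);

return [{
  json: {
    incidentId: match ? match[0].toUpperCase() : null,
    requestedBy: body.user_name,
  },
}];
```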
by Pavel Duchovny
## Who is this for?

This workflow is designed for:

- Database administrators and developers working with MongoDB
- Content managers handling movie databases
- Organizations looking to implement AI-powered search and recommendation systems
- Developers interested in combining LangChain, OpenAI, and MongoDB capabilities

## What problem does this workflow solve?

Traditional database queries can be complex and require specific MongoDB syntax knowledge. This workflow addresses:

- The complexity of writing MongoDB aggregation pipelines
- The need for natural language interaction with movie databases
- The challenge of maintaining user preferences and favorites
- The gap between AI language models and database operations

## What this workflow does

This workflow creates an intelligent agent that:

- Accepts natural language queries about movies
- Translates user requests into MongoDB aggregation pipelines (a sample pipeline is sketched below)
- Queries a movie database containing detailed information including:
  - Plot summaries
  - Genre classifications
  - Cast and director information
  - Runtime and release dates
  - Ratings and awards
- Provides contextual responses using OpenAI's language model
- Allows users to save favorite movies to the database
- Maintains conversation context using a window buffer memory

## Setup

**Required credentials**

- OpenAI API credentials
- MongoDB connection details

**Node configuration**

- Configure the MongoDB connection in the MongoDBAggregate node
- Set up the OpenAI Chat Model with your API key
- Ensure the webhook trigger is properly configured for receiving chat messages

**Database requirements**

- A MongoDB collection named "movies" with the specified document structure
- Proper indexes for efficient querying
- Appropriate user permissions for read/write operations

## How to customize this workflow

- **Modify the document structure:**
  - Update the tool description in the MongoDBAggregate node to match your collection schema
  - Adjust the aggregation pipeline templates for your specific use case
- **Enhance the AI agent:**
  - Customize the prompt in the "AI Agent - Movie Recommendation" node
  - Modify the window buffer memory size based on your context needs
  - Add additional tools for more functionality
- **Extend functionality:**
  - Add more MongoDB operations beyond aggregation
  - Implement additional workflows for different types of queries
  - Create custom error handling and validation
  - Add user authentication and rate limiting
- **Integration options:**
  - Connect to external APIs for additional movie data
  - Add webhook endpoints for different platforms
  - Implement caching mechanisms for frequent queries
  - Add data transformation nodes for specific output formats

This workflow serves as a foundation that can be adapted to use cases beyond movie recommendations, such as e-commerce product search, content management systems, or any scenario requiring intelligent database interaction.
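To make the agent's job concrete, here is the kind of aggregation pipeline it might generate for a request like "top-rated sci-fi movies from the 1990s". The field names assume a movies schema with genres, year, and imdb.rating; treat them as placeholders for your own collection.

```javascript
// Hedged sketch: an example pipeline the agent could produce for the
// MongoDBAggregate tool. Field names are assumptions about the movies schema.
const pipeline = [
  { $match: { genres: 'Sci-Fi', year: { $gte: 1990, $lte: 1999 } } },
  { $sort: { 'imdb.rating': -1 } },
  { $limit: 10 },
  { $project: { _id: 0, title: 1, year: 1, 'imdb.rating': 1, plot: 1 } },
];

// Roughly equivalent to: db.collection('movies').aggregate(pipeline).toArray()
console.log(JSON.stringify(pipeline, null, 2));
```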
by n8n Team
This workflow creates a Jira issue when a new ticket is created in Zendesk. Subsequent comments on the ticket in Zendesk are added as comments to the issue in Jira.

## Prerequisites

- Zendesk account and Zendesk credentials.
- Jira account and Jira credentials.
- Jira project to create issues in.

## How it works

1. The workflow listens for new tickets in Zendesk.
2. When a new ticket is created, the workflow creates a new issue in Jira.
3. The Jira issue key is then saved in one of the ticket's fields (in setup we call this "Jira Issue Key").
4. The next time a comment is added to the ticket, the workflow retrieves the Jira issue key from the ticket's field and adds the comment to the issue in Jira.

## Setup

This workflow requires that you set up a webhook in Zendesk. To do so, follow the steps below:

1. In the workflow, open the On new Zendesk ticket node and copy the webhook URL.
2. In Zendesk, navigate to Admin Center > Apps and integrations > Webhooks > Actions > Create Webhook.
3. Add all the required details, which can be retrieved from the On new Zendesk ticket node. The webhook URL gets added to the "Endpoint URL" field, and the "Request method" should match what is shown in n8n.
4. Save the webhook.
5. In Zendesk, navigate to Admin Center > Objects and rules > Business rules > Triggers > Add trigger.
6. Give the trigger a name such as "New tickets".
7. Under "Conditions" in "Meet ALL of the following conditions", add "Status is New".
8. Under "Actions", select "Notify active webhook" and select the webhook you created previously.
9. In the JSON body, add the following:

   { "id": "{{ticket.id}}", "comment": "{{ticket.latest_comment_html}}" }

10. Save the Zendesk trigger.

You will also need to set up a field in Zendesk to store the Jira issue key. To do so, follow the steps below:

1. In Zendesk, navigate to Admin Center > Objects and rules > Tickets > Fields > Add field.
2. Use the text field option and give the field a name such as "Jira Issue Key".
3. Save the field.
4. In n8n, open the Update ticket node and select the field you created in Zendesk.
by Ranjan Dailata
## Who this is for

Extract Amazon Best Seller Electronic Info is an automated workflow that extracts best seller data from Amazon's Electronics section using Bright Data Web Unlocker, transforms it into structured JSON using Google Gemini's LLM, and forwards the fully structured JSON response to a specified webhook for downstream use.

This workflow is tailored for:

- **eCommerce analysts** who need to monitor Amazon best-seller trends in the Electronics category and track changes in real time or on a schedule.
- **Product intelligence teams** who want structured insights on competitor offerings, including rankings, prices, ratings, and promotions.
- **AI-powered chatbot developers** who are building assistants capable of answering product-related queries with fresh, structured data from Amazon.
- **Growth hackers & marketers** looking to automate competitive research and surface trending product data to inform pricing strategies.
- **Data aggregators and price trackers** who need reliable and smart scraping of Amazon data enriched with AI-driven parsing.

## What problem is this workflow solving?

Keeping up with Amazon's best sellers in Electronics is a time-consuming, error-prone task when done manually. This workflow automates the process by:

- Automating data extraction from Amazon Best Sellers using Bright Data, ensuring reliable access to real-time, structured data.
- Enhancing raw data with Google Gemini, turning product lists into structured JSON using the Google Gemini LLM.
- Sending results to a webhook, enabling seamless integration into dashboards, databases, or chatbots.

## What this workflow does

The workflow performs the following steps:

1. Extracts the Amazon Best Seller Electronics page info using Bright Data's Web Unlocker API (see the sketch below).
2. Processes the unstructured content using Google Gemini's Flash Exp model to extract structured product data.
3. Sends the structured information to a webhook endpoint.

## Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token.
4. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Amazon URL with the Bright Data zone by navigating to the Amazon URL with the Bright Data Zone node.
6. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

## How to customize this workflow to your needs

This workflow is built to be flexible, whether you're a market researcher, e-commerce entrepreneur, or data analyst. Here's how you can adapt it to fit your specific use case:

- **Change the Amazon category:** Update the Amazon URL with the category of your interest, such as Computers & Accessories, Home Audio, etc.
- **Customize the Gemini prompt:** Update the Gemini prompt to get different styles of output, such as comparison tables, summaries, or feature highlights.
- **Send output to other destinations:** Replace the Webhook URL to forward output to:
  - Google Sheets
  - Airtable
  - Slack or Discord
  - Custom API endpoints
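For orientation, the sketch below shows roughly what the Web Unlocker call made by the HTTP Request node can look like. The endpoint, body fields, and zone name are based on Bright Data's documented Web Unlocker API at the time of writing; verify them against your account and the workflow's actual node configuration before relying on this.

```javascript
// Hedged sketch of a Web Unlocker request. The zone name is a placeholder;
// check Bright Data's current docs for the exact endpoint and body fields.
const response = await fetch('https://api.brightdata.com/request', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.BRIGHT_DATA_TOKEN}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    zone: 'web_unlocker1',
    url: 'https://www.amazon.com/gp/bestsellers/electronics',
    format: 'raw', // return the raw page content for the Gemini parsing step
  }),
});

const pageContent = await response.text();
console.log(pageContent.slice(0, 500));
```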
by Anurag
## Description

This workflow automates the extraction of structured data from invoices or similar documents using Docsumo's API. Users upload a PDF via an n8n form trigger, which is then sent to Docsumo for processing and structured parsing. The workflow fetches key document metadata and all line items, reconstructs each invoice row with combined header and item details, and finally exports all results as an Excel file. Ideal for automating invoice data entry, reporting, or integrating with accounting systems.

## How It Works

1. A user uploads a PDF document using the integrated n8n form trigger.
2. The workflow securely sends the document to Docsumo via REST API.
3. After uploading, it checks and retrieves the parsed document results.
4. Header information and table line items are extracted and mapped into structured records (see the sketch below).
5. The complete result is exported as an Excel (.xls) file.

## Setup Steps

1. **Docsumo account:** Register and obtain your API key from Docsumo.
2. **n8n credentials manager:** Add your Docsumo API key as an HTTP header credential (never hardcode the key in the workflow).
3. **Workflow configuration:**
   - In the HTTP Request nodes, set the authentication to your saved Docsumo credentials.
   - Update the file type or document type in the request (e.g., "type": "invoice") as needed for your use case.
4. **Testing:** Enable the workflow and use the built-in form to upload a sample invoice for extraction.

## Features

- Supports PDF uploads via n8n's built-in form or via API/webhook extension.
- Sends files directly to Docsumo for document data extraction using secure credentials.
- Extracts invoice-level metadata (number, date, vendor, totals) and full line item tables.
- Consolidates all data into an easy-to-use Excel format for download or integration.
- Modular node structure, easily extensible for further automation.

## Prerequisites

- Docsumo account with API access enabled.
- n8n instance with Form, HTTP Request, Code, and Excel/Convert to File nodes.
- Working Docsumo API key stored securely in n8n's credential manager.

## Example Use Cases

| Scenario | Benefit |
|---------------------|------------------------------------------|
| Invoice Automation | Extract line items and metadata rapidly |
| Receipts Processing | Parse and digitize business receipts |
| Bulk Bill Imports | Batch process bills for analytics |

## Notes

- **Credentials security:** Do not store your API key directly in HTTP Request nodes; always use the n8n credentials manager.
- **Sticky notes:** The workflow includes sticky notes for the setup, input, API call, extraction, and output steps to assist template users.
- **Custom columns:** You can customize header or line item extraction by editing the Code node as needed.
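As a rough illustration of the reconstruction step, here is a sketch of a Code node that merges invoice header fields with each line item before the Excel export. The response paths (data.header, data.line_items) and field names are illustrative assumptions; map them to Docsumo's actual parsed output for your document type.

```javascript
// Hedged sketch of the header/line-item merge (n8n Code node).
// Response paths and field names are placeholders, not Docsumo's exact schema.
const doc = $input.first().json.data ?? {};
const header = doc.header ?? {};
const lineItems = doc.line_items ?? [];

return lineItems.map((item) => ({
  json: {
    invoiceNumber: header.invoice_number,
    invoiceDate: header.invoice_date,
    vendor: header.vendor_name,
    description: item.description,
    quantity: item.quantity,
    unitPrice: item.unit_price,
    amount: item.amount,
  },
}));
```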
by Halfbit 🚀
# Jura Coffee Counter: Webhook API & Google Sheets Logger ☕️

Track how many coffees your Jura E8 espresso machine makes, fully automated via webhook and Google Sheets.

This workflow exposes a custom API endpoint that can be called by smart devices, such as an ESP8266 or ESP32 reading data from a Jura E8 coffee machine via Bluetooth Low Energy (BLE). The incoming data (including the total coffee count) is timestamped and appended to a Google Sheet, making it easy to visualize or analyze your machine usage.

☕ Originally built for a Jura E8, based on the AlexxIT/Jura reverse-engineering project.

> 📝 This workflow uses Google Sheets as a logging backend. You can easily switch it to Airtable, Notion, or a database of your choice.

Live example available at: https://halfbitstudio.com/o-nas/

> 🖥️ In our setup, this workflow provides real-time coffee consumption stats displayed directly on our website.

> 🔌 Some Jura machines require an accessory Bluetooth transmitter to enable connectivity. Communication is based on the Bluetooth Low Energy (BLE) protocol.

## Use Case

- Tracking usage of a Jura coffee machine
- Logging IoT sensor data into Google Sheets
- Creating dashboards for daily consumption
- Smart office setups with coffee stats!

## Features

- ☁️ Two webhook endpoints:
  - POST /{{WEBHOOK_POST_PATH}}: receives JSON from the ESP (coffee machine reader)
  - GET /{{WEBHOOK_GET_PATH}}: returns the latest records as JSON
- 📅 Timestamping via the Date & Time node
- 🔹 Coffee counter extraction from the incoming JSON
- 🧾 Appends structured rows to Google Sheets
- 📤 Webhook response for external status or dashboards

## Setup Instructions

### Jura Coffee Machine Integration (Hardware)

1. Use an ESP device (e.g. ESP8266 or ESP32) to connect to the Jura E8 via Bluetooth Low Energy (BLE).
2. Send POST requests with a JSON payload:

   { "total_coffees": 123 }

3. Reverse-engineered protocol reference: AlexxIT/Jura

### Google Sheets Configuration

1. Create a new Google Sheet with column headers like: date | time | coffee counter
2. Connect your Google account in n8n and authorize access to this sheet.
3. Replace the documentId and sheetName fields in the Google Sheets nodes:
   - Use the full URL to your spreadsheet
   - Use the actual sheet name (e.g. Sheet1)

### Environment Variables & Placeholders

| Placeholder | Description |
| ------------------------ | ----------------------------------------------- |
| {{WEBHOOK_POST_PATH}} | Endpoint to receive coffee counter data |
| {{WEBHOOK_GET_PATH}} | Endpoint to return latest data (for dashboards) |
| {{SHEET_ID}} | Google Spreadsheet ID |
| {{GOOGLE_CREDENTIALS}} | OAuth2 credentials for Google Sheets |
| {{DATA_COLUMNS}} | Column names in the target sheet |

## Testing the Workflow

1. **Send a test request:** Use Postman or the ESP to send a POST request to /{{WEBHOOK_POST_PATH}}. The body should include a total_coffees value (see the sketch below).
2. **Check the Google Sheet:** Open your sheet and verify that a new row was appended.
3. **Test the GET endpoint:** Access the second webhook URL (e.g. /{{WEBHOOK_GET_PATH}}) in a browser or fetch it via API.
4. **Optional:** Use the Respond to Webhook output in a dashboard or frontend.

## Customization Tips

- **Sheet format:** Add more columns if you want to track additional data (e.g. machine temperature, errors).
- **Output format:** Replace Google Sheets with any other storage (e.g. MySQL, Notion).
- **Auth layer:** Add basic auth or token verification if needed for public exposure.
- **Notifications:** Send alerts to Discord/Slack when reaching thresholds (e.g. 200 coffees brewed).

Tags: google-sheets, iot, webhook, jura, coffee, api, automation
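For a quick test without the hardware, the snippet below sends the same payload the ESP firmware would. The URL is a placeholder; substitute your own n8n domain and {{WEBHOOK_POST_PATH}} value.

```javascript
// Hedged sketch: test the POST endpoint from any JS runtime with fetch support.
// Replace the URL with your n8n webhook; the payload matches the JSON shown above.
const res = await fetch('https://your-n8n-domain.com/webhook/jura-coffee', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ total_coffees: 123 }),
});

console.log(res.status, await res.text()); // expect the workflow's webhook response
```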