by Abolfazl Akbarzadeh
What do we want to do? Let's look at the concern. In my experience, some developers don't check their Jira board to find out whether there are new updates on the issues, or whether some issues need to be addressed as soon as possible. So the developer, or anyone else in another role, needs to be informed about the task as soon as possible too. One way to send this immediate notification is through a Telegram bot.

Setup Guide

Telegram side: First of all, you need to register a Telegram bot on your account and obtain its token, so that we'll be able to send Telegram messages through the bot using this token. After getting your bot token, go to the workflow, click on one of the Telegram nodes, select the Telegram credential (or create one through the "Credential to connect with" field), and put the token in the "Access Token" field. OK, you're done with the Telegram side setup.

Then you need the Jira account IDs of your team users, and also their Telegram chat IDs, for the "telegram account" node, so that it can find the corresponding Telegram user from the assignee of the issue. Put this data in the "telegram account" node, following its guide comments (a sketch of the expected shape is shown below).

Jira side: You need to set up some automation rules according to your needs. Go to the Jira settings, open the Global automation section, and click the "Create Rule" button. Select the "Issue Created" trigger type in the "When" step, then add a "Send webhook request" action. In the action's settings, paste the Production URL (copied from the workflow's jira-webhook node) into the "Web request URL" field, set the "HTTP method" field to POST, and set "Web request body" to "Issue Data (Automation format)". In the header section, add a new header with the name "type" and the value "created" for the creation event. OK, the Jira side is also done!

Now it's time to test! If you've put your Jira account ID and Telegram chat ID in the "telegram account" node, and of course started the Telegram bot, then after creating an issue that is assigned to you, the creation notification will be sent to you on Telegram!
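The template's guide comments define the exact format, but conceptually the node holds a lookup from Jira accountId to Telegram chatId. A minimal sketch of such a lookup in an n8n Code node; the IDs and the incoming field paths are illustrative assumptions, not the template's exact schema:

```javascript
// Hypothetical lookup table: Jira accountId -> Telegram chatId.
// Replace the keys and values with your own team members' IDs.
const userMap = {
  '5b10a2844c20165700ede21g': 111111111, // Alice
  '5b10a0effa615349cb016cd8': 222222222, // Bob
};

// The Jira automation webhook sends the issue in the request body;
// the exact path to the assignee depends on the automation payload.
const accountId = $json.issue?.fields?.assignee?.accountId;
const chatId = userMap[accountId];

if (!chatId) {
  // No mapping found: stop this branch instead of messaging the wrong person.
  return [];
}

return [{ json: { chatId, issueKey: $json.issue?.key } }];
```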
by Marth
⚙️ How it works

The workflow starts from a manual trigger or a form submission with project details. It extracts key input data such as client name, email, project type, deadline, and brand folder (optional). A Google Drive folder is automatically created inside a designated parent folder, and a shareable link to the newly created folder is generated. Finally, a personalized email including the project details and folder link is composed and sent to the client using Gmail.

🛠️ Set up steps

1. Google Drive Setup: Connect your Google Drive credentials in n8n. Set the parent folder ID where all project folders should be created.
2. Gmail Setup: Connect a Gmail account with proper access. Customize the subject and message template in the Gmail node.
3. Input Data Preparation: Ensure the following input fields are provided (a sample payload is sketched below):
   - client_name
   - contact_email
   - project_type
   - deadline
   - brand_drive_folder (optional)
4. Test & Deploy: Use mock data or a test trigger to validate the workflow. Once confirmed, deploy it with the actual trigger (e.g. webhook, form submission).
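For reference, a submission carrying these fields might look like the following object; all values are illustrative:

```javascript
// Illustrative trigger payload for the workflow's input fields.
const sampleInput = {
  client_name: 'Acme GmbH',
  contact_email: 'hello@acme.example',
  project_type: 'Brand redesign',
  deadline: '2025-09-30',
  brand_drive_folder: 'Acme Brand Assets', // optional
};
```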
by Mauricio Perera
Overview: This workflow is designed to handle user inputs via a webhook, process them with the Google Gemini API (specifically the gemini-2.0-flash-thinking-exp-1219 model), and return a structured response to the user. The response includes three key elements: reasoning, the final answer, and citation URLs (if applicable). This workflow provides a robust solution for integrating AI reasoning into your processes. It can be utilized as a tool for AI-based agents, intelligent email drafting systems, or as a standalone intelligent automation solution.

Setup:
- Webhook Configuration: Ensure the webhook node is properly set up to accept GET requests with an input parameter. Verify that the webhook path matches your application requirements. Test the webhook using tools like Postman to ensure proper data formatting.
- Google Gemini API Credentials: Set up your Google Gemini API account credentials in the HTTP Request node. Ensure API access and permissions are valid.
- Parameter Adjustments: Customize the temperature, topK, topP, and maxOutputTokens parameters to fit your use case (see the request sketch below).

Customization:
- Input Parameters: Modify the webhook path or parameters based on the data your application will send.
- Response Formatting: Adjust the JavaScript code in the "Process API Response" node to fit your desired output structure.
- Output Expectations: Test the response returned by the "Return Response to User" node to ensure it meets your application requirements.

Workflow Steps:
1. Receive User Input (Webhook node): Captures a GET request containing a user-provided input parameter. Acts as the starting point for the workflow.
2. Send Request to Google Gemini (HTTP Request node): Sends the received input to the gemini-2.0-flash-thinking-exp-1219 model for processing. The API configuration includes parameters for customizing the response.
3. Process API Response (Code node): Extracts reasoning, the final answer, and citation URLs from the API response, and organizes the output for further use.
4. Return Response to User (Respond to Webhook node): Sends the processed and structured response back to the user via the webhook, ensuring the response format meets expectations.

Expected Outcomes:
- Input Handling: Successfully captures user input via a webhook.
- AI Processing: Generates a structured response using the gemini-2.0-flash-thinking-exp-1219 model, including reasoning, answers, and citations (if available).
- Output Delivery: Returns a user-friendly response formatted to your specifications.

Notes: The workflow is inactive by default. Each node is annotated with a Sticky Note to clarify its purpose. Ensure all API credentials are correctly configured before execution. Use this workflow to save time, improve accuracy, and automate repetitive tasks efficiently.

Tags: Automation, Google Gemini, AI Agents, Intelligent Automation, Content Generation, Workflow Integration
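For orientation, the HTTP Request node's call corresponds to Gemini's generateContent endpoint. A minimal sketch showing the tunable parameters named above; the prompt and parameter values are examples only:

```javascript
// Sketch of the request behind the "Send Request to Google Gemini" node.
// The generationConfig values are examples; tune them per use case.
const model = 'gemini-2.0-flash-thinking-exp-1219';
const apiKey = process.env.GEMINI_API_KEY; // keep keys in credentials, not code

const response = await fetch(
  `https://generativelanguage.googleapis.com/v1beta/models/${model}:generateContent?key=${apiKey}`,
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      contents: [{ parts: [{ text: 'Why is the sky blue?' }] }],
      generationConfig: {
        temperature: 0.7,
        topK: 40,
        topP: 0.95,
        maxOutputTokens: 2048,
      },
    }),
  }
);
const data = await response.json();
console.log(JSON.stringify(data, null, 2));
```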
by Vitali
Template Description

This n8n workflow template allows you to create a masked email address using the Fastmail API, triggered by a webhook. This is especially useful for generating disposable email addresses for privacy-conscious users or for testing purposes.

Workflow Details:
1. Webhook Trigger: The workflow is initiated by sending a POST request to a specific webhook. You can include state and description in your request body to customize the masked email's state and description.
2. Session Retrieval: The workflow makes an HTTP request to the Fastmail API to retrieve session information. It uses this data to authenticate further requests.
3. Create Masked Email: Using the retrieved session data, the workflow sends a POST request to Fastmail's JMAP API to create a masked email, using the state and description provided in the webhook payload.
4. Prepare Output: Once the masked email is successfully created, the workflow extracts the email address and attaches the description for further processing.
5. Respond to Webhook: Finally, the workflow responds to the original POST request with the newly created masked email and its description.

Requirements:
- Fastmail API Access: You will need valid API credentials for Fastmail, configured with HTTP Header Authentication.
- Authorization Setup: Optionally set up authorization if your webhook is exposed to the internet, to prevent misuse.
- Custom Webhook Request: Use a tool like curl, or create a shortcut on macOS/iOS, to send the POST request to the webhook with the necessary JSON payload, like so:

curl -X POST -H 'Content-Type: application/json' https://your-n8n-instance/webhook/87f9abd1-2c9b-4d1f-8c7f-2261f4698c3c -d '{"state": "pending", "description": "my mega fancy masked email"}'

This template simplifies the process of integrating masked email functionality into your projects or workflows and can be extended for various use cases. Feel free to use the companion shortcut I've also created. Please update the authorization header in the shortcut if needed. https://www.icloud.com/shortcuts/ac249b50eab34c04acd9fb522f9f7068
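For the curious, the "Create Masked Email" step boils down to a JMAP method call. A minimal standalone sketch, assuming Fastmail's MaskedEmail extension and a bearer token; this mirrors the idea, not the template's exact node configuration:

```javascript
// Minimal sketch of the JMAP calls behind steps 2 and 3.
const token = process.env.FASTMAIL_TOKEN;

// 1. Retrieve the session to find the API URL and account ID.
const session = await fetch('https://api.fastmail.com/jmap/session', {
  headers: { Authorization: `Bearer ${token}` },
}).then((r) => r.json());
const accountId =
  session.primaryAccounts['https://www.fastmail.com/dev/maskedemail'];

// 2. Create the masked email with the state/description from the webhook payload.
const result = await fetch(session.apiUrl, {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${token}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    using: ['urn:ietf:params:jmap:core', 'https://www.fastmail.com/dev/maskedemail'],
    methodCalls: [
      ['MaskedEmail/set', {
        accountId,
        create: {
          new: { state: 'pending', description: 'my mega fancy masked email' },
        },
      }, '0'],
    ],
  }),
}).then((r) => r.json());

// The server-assigned address comes back in the created object.
console.log(result.methodResponses[0][1].created.new.email);
```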
by James Francis
Overview Slack quietly released an update to their API that allows developers to build "AI Apps & Agents", which is a special classification of apps that have access to several special capabilities including: Multiple simultaneous chat threads with one user Loading "three dots" UI while your agent is thinking Option for users to pin your app to their top bar for quick chat access This workflow demonstrates how to build a Slack agent that takes advantage of all of these features. For a full video walkthrough of this workflow, watch this YouTube tutorial. Setup Instructions All of the below steps are required for this workflow to function properly unless otherwise noted. Create a Slack App Visit api.slack.com and click "Your Apps" Create a new app from scratch and follow the setup instructions In the Agents & AI Apps tab, enable the toggle and give your app a brief description In the OAuth & Permissions tab, enable the following bot token scopes: assistant:write chat:write channels:read im:history Install the app into your workspace and grant the requested permissions In your Slack workspace, right click your app's name in the sidebar, click "View app details", and make note of your apps Channel ID - you'll need this later. Copy your app's Bot User OAuth Token - you'll need that to create your n8n credentials In the Event Subscriptions tab, enable events and paste the workflows PRODUCTION webhook url (from this workflow's trigger node) into the input. In the same tab under "Susbcribe to bot events", select message.im Create a Postgres database In order to save the chat history and give your agent a working memory, you'll need your own Postgres database. You can use Supabase, Neon, or any other Postgres database provider. Once you've added your database's credentials to n8n, you can select those credentials in the Postgres Chat Memory node. This worklow saves all chat history in a table called chat_histories, but you name the table whatever you want. Create n8n Credentials You'll need to create the following credentials: Slack API. Use your Bot User OAuth Token referenced above. Bearer Auth. Use the same Bot User OAuth Token. Postgres. Use the connection string or config from your database provider. OpenRouter (or any other LLM model for the agent's model node) Wire Everything Up Now that you've created your Slack app, have your Postgres database, and have created credentials, follow these steps to wire up your workflow: In the "On Message Received" trigger, use your Slack API credential and enter your apps Channel ID in the "Channel To Watch" field. In the "Set Thinking Status" node, use your Bearer Auth credential. In the "Postgres Chat Memory" node, use your Postgres credential. In the "Send Reply" node, use your Slack API credential. Using the Chatbot Once you've completed the setup process and added in your credentials, you'll have a fully functional Slack chatbot complete with threads, loading UI, and the ability to pin your app to your workspace's top bar. Taking the Next Steps Now that this skeleton app is in place, it's up to you to add horsepower to the AI agent at the center of it all. Customize the prompts and add whatever tools you'd like. The sky is the limit! If you have any questions or feedback about this workflow, or would like me to build custom workflows for your business, email me at n8n@paperjam.agency.
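For reference, the "Set Thinking Status" node is an HTTP request to Slack's assistant.threads.setStatus method, which shows the "three dots" thinking UI. A minimal sketch; the channel and thread values come from the incoming message.im event, and the literals here are placeholders:

```javascript
// Sketch of the call behind the "Set Thinking Status" node.
const botToken = process.env.SLACK_BOT_TOKEN; // your Bot User OAuth Token

await fetch('https://slack.com/api/assistant.threads.setStatus', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${botToken}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    channel_id: 'D0123456789',        // the DM channel from the message.im event
    thread_ts: '1712345678.000100',   // the thread timestamp from the event
    status: 'is thinking...',         // shown under your app's name in Slack
  }),
});
```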
by ist00dent
This n8n template enables you to instantly generate high-quality screenshots of any specified public URL by simply sending a webhook request. It's an indispensable tool for developers, content creators, marketers, or anyone needing on-demand visual captures of web pages without manual intervention, all while including crucial security measures.

🔧 How it works
- Receive URL Webhook: This node acts as the entry point for the workflow. It listens for incoming POST requests and expects a JSON body containing a url property with the website you want to screenshot. You can trigger it from any application or service capable of sending an HTTP POST request.
- Validate URL for SSRF: This is a crucial security step. This Function node validates the incoming url to prevent Server-Side Request Forgery (SSRF) vulnerabilities. It checks for valid http:// or https:// protocols and, more importantly, ensures the URL does not attempt to access internal/private IP addresses or localhost. If the URL is deemed unsafe or invalid, it flags it for an error response.
- IF URL Valid: This IF node checks the isValidUrl flag set by the previous validation step. If the URL is valid (true), the workflow proceeds to take the screenshot. If the URL is invalid or flagged for security (false), the workflow branches to Respond with Validation Error.
- Take Screenshot: This node sends an HTTP GET request to the ScreenshotMachine API to capture an image of the validated URL. Remember to replace YOUR_API_KEY in the URL field of this node with your actual API key from ScreenshotMachine.
- Respond with Screenshot Data: This node sends the data received directly from the Take Screenshot node back to the original caller of the webhook. This response typically includes information about the generated screenshot, such as the URL to the image file, success status, and other metadata from the ScreenshotMachine API.
- Respond with Validation Error: If the IF URL Valid node determines the URL is unsafe or invalid, this node sends a descriptive error message back to the webhook caller, explaining why the request was denied due to security concerns or an invalid format.

🔒 Security Considerations

This template includes a dedicated Validate URL for SSRF node to mitigate Server-Side Request Forgery (SSRF) vulnerabilities. SSRF attacks occur when an attacker can trick a server-side application into making requests to an unintended location. Without validation, an attacker could potentially use your n8n workflow to scan internal networks, access sensitive internal resources, or attack other services from your n8n server. The validation checks for:
- Only http:// or https:// protocols.
- Prevention of localhost or common private IP ranges (e.g., 10.x.x.x, 172.16.x.x - 172.31.x.x, 192.168.x.x).

While this validation adds a significant layer of security, always ensure your n8n instance is properly secured and updated.
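A minimal sketch of the checks described above, written as an n8n Code/Function node; the isValidUrl and validatedUrl field names match the node descriptions, the rest is illustrative, and the template's own node may differ:

```javascript
// Minimal sketch of the "Validate URL for SSRF" node logic.
const raw = $json.body?.url ?? $json.url;
let isValidUrl = false;
let message = '';
let validatedUrl = null;

try {
  const url = new URL(raw);
  const host = url.hostname;
  const privatePatterns = [
    /^localhost$/i,
    /^127\./, /^10\./, /^192\.168\./,
    /^172\.(1[6-9]|2\d|3[01])\./,  // 172.16.0.0 - 172.31.255.255
    /^0\.0\.0\.0$/, /^\[?::1\]?$/, // IPv6 loopback
  ];
  if (!['http:', 'https:'].includes(url.protocol)) {
    message = 'Only http:// and https:// URLs are allowed.';
  } else if (privatePatterns.some((re) => re.test(host))) {
    message = 'Access to private IP addresses is not allowed for security reasons.';
  } else {
    isValidUrl = true;
    validatedUrl = url.toString();
  }
} catch {
  message = 'Invalid URL format.';
}

return [{ json: { isValidUrl, validatedUrl, message } }];
```

Note this hostname-based check is a first line of defense; it does not resolve DNS, so pair it with network-level controls where possible.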
👤 Who is it for?

This workflow is ideal for:
- Developers: Automate screenshot generation for testing, monitoring, or integrating visual content into applications.
- Content Creators: Quickly grab visuals for articles, presentations, or social media posts.
- Marketing Teams: Create dynamic visual assets for campaigns, ads, or competitive analysis.
- Automation Enthusiasts: Integrate powerful screenshot capabilities into existing automated workflows.
- Website Owners: Monitor how your website appears across different tools or over time.

📑 Prerequisites

To use this template, you will need:
- An n8n instance (cloud or self-hosted).
- An API key from ScreenshotMachine. You can obtain one by signing up on their website: https://www.screenshotmachine.com/

📑 Data Structure

When you trigger the webhook, send a POST request with a JSON body structured as follows:

{ "url": "https://www.example.com" }

If the URL is valid, the workflow will return the JSON response directly from the ScreenshotMachine API. This response typically includes information about the generated screenshot, such as the URL to the image file, success status, and other metadata:

{ "status": "success", "hash": "...", "url": "https://www.screenshotmachine.com/...", "size": 12345, "mimetype": "image/jpeg" }

If the URL is invalid or blocked by the security validation, the workflow will return an error response similar to this:

{ "status": "error", "message": "Access to private IP addresses is not allowed for security reasons." }

⚙️ Setup Instructions
1. Import Workflow: In your n8n editor, click "File" > "Import from JSON" and paste the provided workflow JSON.
2. Configure Webhook Path: Double-click the Receive URL Webhook node. In the 'Path' field, set a unique and descriptive path (e.g., /website-screenshot).
3. Add ScreenshotMachine API Key: Double-click the Take Screenshot node. In the 'URL' parameter, locate YOUR_API_KEY and replace it with your actual API key obtained from ScreenshotMachine. Example URL structure: http://api.screenshotmachine.com/?key=YOUR_API_KEY&url={{ $json.validatedUrl }}
4. Activate Workflow: Save and activate the workflow.

📝 Tips

Processing Screenshots: You're not limited to just responding with the screenshot data! You can insert additional nodes after the Take Screenshot node (and before the Respond with Screenshot Data node) to further process or utilize the generated image. Common extensions include:
- Saving to Cloud Storage: Use nodes for Amazon S3, Google Drive, or Dropbox to store the screenshots automatically, creating an archive.
- Sending via Email: Attach the screenshot to an email notification using an Email or Gmail node for automated alerts or reports.
- Posting to Chat Platforms: Share the screenshot directly in a Slack, Discord, or Microsoft Teams channel for team collaboration or visual notifications.
- Image Optimization: Use an image processing node (if available via an API or a custom function) to resize, crop, or compress the screenshot before saving or sending.

Custom Screenshot Parameters: The ScreenshotMachine API supports various optional parameters (e.g., width, height, quality, delay, fullpage). Upgrade: extend the Receive URL Webhook to accept these parameters in the incoming JSON body (e.g., {"url": "...", "width": 1024, "fullpage": true}). Leverage: dynamically pass these parameters to the Take Screenshot HTTP Request node's URL to customize your screenshots for different use cases.

Scheduled Monitoring: Upgrade: combine this workflow with a Cron or Schedule node, set to run periodically (e.g., daily, hourly). Leverage: automatically monitor your website or competitors' sites for visual changes. You could then save screenshots to cloud storage and even trigger a comparison tool if a change is detected.

Automated Visual Regression Testing: Upgrade: after taking a screenshot, store it with a unique identifier. In subsequent runs, take a new screenshot, then use an external image comparison API or a custom function to compare the new screenshot with a baseline. Leverage: get automated alerts if visual elements on your website change unexpectedly, which is critical for quality assurance.

Dynamic Image Generation for Social Media/Marketing: Upgrade: feed URLs (e.g., for new blog posts, product pages) into this workflow. After generating the screenshot, use it to create dynamic social media images or marketing assets. Leverage: streamline the creation of engaging visual content, saving design time.
by Gain FLow AI
Inquiry Form to Personalised WhatsApp Message

Overview

This workflow creates a smart, automated system for capturing leads from an inquiry form, initiating a personalized WhatsApp message via the Unipile API, and updating your Google Sheets CRM. It uses AI to craft initial outreach messages and logs the success or failure of each message sent, ensuring you track every lead effectively. This automation helps you engage leads quickly and efficiently, without manual effort.

Use Case

This workflow is ideal for:
- Sales Teams: Automate the first touchpoint with new leads, qualifying them and initiating conversations.
- Small Businesses: Provide immediate, personalized responses to inquiries, enhancing customer experience.
- Customer Support: Quickly gather more context from users after they fill out a help form.
- Lead Generation: Streamline the process from form submission to active lead engagement and CRM tracking.

How It Works
1. Form Submission Trigger: The workflow is activated when someone submits an "Inquiry Form." This form collects essential lead details: Full Name, Email, WhatsApp number, Company Name, and "How can we help you?" (a notes field).
2. AI Crafts Personalized Message: An OpenAI node, acting as "Alex" (a friendly, approachable human assistant), generates a short, personalized, and engaging opening message for the lead. This message addresses the lead by their first name and includes an open-ended question to encourage them to share more details about their needs.
3. WhatsApp Outreach: The workflow then uses the WhatsApp API (via Unipile) to send this personalized message directly to the lead's WhatsApp number. Unipile is key here, as it allows sending messages without prior chat history and can connect to your personal WhatsApp.
4. Log Success or Failure: The workflow checks the response from the WhatsApp API. If the message is sent successfully, the lead's details, along with the personalized message, WhatsApp chat ID, and message ID, are logged to a "Successful" sheet in your Google Sheets CRM. If the message fails to send, the lead's information, the attempted message, and the reason for failure are logged to a "Failed" sheet instead. This helps you identify and follow up on problematic leads.

How to Set It Up

To set up your Lead Capture Agent, follow these steps:

Google Sheet Setup:
- Copy the Template: Make a copy of the provided Google Sheet template ("Sales Agent", with "Successful" and "Failed" sheets) into your own Google Drive.
- Connect Google Sheets: Ensure your Google Sheets OAuth2 API credentials are set up in n8n and linked to the "Google Sheets" and "Google Sheets3" nodes.
- Update Sheet IDs: In both "Google Sheets" and "Google Sheets3" nodes, update the documentId with the ID of your copied "Sales Agent" Google Sheet.

Unipile (WhatsApp API) Credentials:
- Sign up for Unipile: Get your DSN and API key from Unipile (they offer a 7-day free trial).
- Replace Placeholders: In the "Whatsapp API" node, replace <YOUR_DSN>, <YOUR_API_KEY>, and <YOUR_ACCOUNT_ID> with your actual Unipile credentials. A hedged sketch of what that request can look like follows.
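The endpoint and field names below are assumptions about Unipile's chat API rather than the template's exact node settings; verify them against Unipile's documentation for your account:

```javascript
// Hedged sketch of the "Whatsapp API" HTTP request via Unipile.
const DSN = '<YOUR_DSN>';             // e.g. api1.unipile.com:13111 (assumed format)
const API_KEY = '<YOUR_API_KEY>';
const ACCOUNT_ID = '<YOUR_ACCOUNT_ID>'; // your connected WhatsApp account

await fetch(`https://${DSN}/api/v1/chats`, {
  method: 'POST',
  headers: {
    'X-API-KEY': API_KEY,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    account_id: ACCOUNT_ID,
    // Lead's WhatsApp number; the attendee ID format is an assumption.
    attendees_ids: ['4915123456789@s.whatsapp.net'],
    text: 'Hi Jane! Thanks for reaching out. What are you hoping to improve first?',
  }),
});
```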
OpenAI API Key:
- Connect your OpenAI API key as a credential in n8n and link it to the "OpenAI" node.

Inquiry Form Setup:
- The "Enquiry Form" node generates a public webhook URL. You can embed this form on your website or share the URL directly.
- Alternatively, if you use your own form solution, configure it to send data via a webhook to the URL provided by the "Enquiry Form" node.

Import the Workflow:
- Import the provided workflow JSON into your n8n instance.

Activate and Test:
- Once all settings are complete, activate the workflow. Test it by submitting a new entry through the "Inquiry Form." Check your Google Sheet to see the lead captured and the message status.

This workflow is designed to ensure no lead falls through the cracks, giving your sales or support team a powerful edge!
by JaredCo
This n8n workflow demonstrates how to transform natural language date and time expressions into structured data with 96%+ accuracy. Parse complex expressions like "early next July", "2 weeks after project launch", or "end of Q3" into precise datetime objects with confidence scoring, timezone intelligence, and business rules validation for any automation workflow.

Good to know
- Achieves 96%+ accuracy on complex natural language date expressions
- At time of writing, this is the most advanced open-source date parser available
- Includes AI learning that improves over time with user corrections
- Supports 6 languages with auto-detection (English, Spanish, French, German, Italian, Portuguese)
- Sub-millisecond response times with intelligent caching
- Enterprise-grade, with business intelligence and timezone handling

How it works
- Natural Language Input: Receives date expressions via webhook, form, email, or chat.
- AI-Powered Parsing: Your date parser processes the text through 50+ custom rule patterns for complex expressions, multi-language auto-detection and smart translation, confidence scoring (0.0-1.0) for AI decision-making, and ambiguity detection with helpful suggestions.
- Business Intelligence: Applies enterprise rules automatically: holiday calendar awareness (US + international), working hours validation and warnings, business day auto-adjustment, and timezone normalization (IANA format).
- Smart Scheduling: Creates calendar events with structured datetime objects (start/end times), confidence metadata for workflow decisions, alternative interpretations for ambiguous inputs, and rich context for follow-up actions.
- Integration Ready: Outputs connect seamlessly to Google Calendar, Outlook, Apple Calendar, CRM systems (HubSpot, Salesforce), project management tools (Notion, Asana), and communication platforms (Slack, Teams).

How to use
1. The webhook trigger receives natural language date requests from any source.
2. Replace the MCP server URL with your deployed date parser endpoint.
3. Configure timezone preferences for your organization.
4. Customize business rules (working hours, holidays) in the parser settings.
5. Connect calendar integration nodes for automatic event creation.
6. Add notification workflows for scheduling confirmations.

A sketch of how a client could call the webhook and gate on the returned confidence score is shown below.
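This sketch assumes a webhook path of /parse-date and the response shape from the sample output further below; both are illustrative:

```javascript
// Hedged sketch: call the workflow's webhook with a natural language
// expression and gate on the confidence score.
const res = await fetch('https://your-n8n-instance/webhook/parse-date', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ input: 'early next July', timezone: 'America/New_York' }),
});
const result = await res.json();

if (result.confidence >= 0.8) {
  // High confidence: schedule automatically.
  const { start, end } = result.parsed[0];
  console.log(`Scheduling between ${start} and ${end}`);
} else {
  // Low confidence: route to human review with the suggested alternatives.
  console.log('Needs clarification:', result.alternatives);
}
```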
Use Cases
- Meeting Scheduling: "Schedule our quarterly review for early Q3"
- Project Management: "Set deadline 2 weeks after product launch"
- Event Planning: "Book venue for the weekend before Labor Day"
- Personal Assistant: "Remind me about dentist appointment next Tuesday morning"
- International Teams: "Team standup tomorrow morning" (auto-timezone conversion)
- Seasonal Planning: "Launch campaign in late spring 2025"

Requirements
- Natural Language Date Parser MCP server (provided code)
- Webhook endpoint or form trigger
- Calendar integration (Google Calendar, Outlook, etc.)
- Optional: Slack/Teams for notifications
- Optional: Database for learning pattern storage

Customizing this workflow
- Multi-language Support: Enable auto-detection for global teams
- Business Rules: Configure company holidays and working hours
- Learning System: Enable AI learning from user corrections
- Integration Depth: Connect to your existing calendar and CRM systems
- Confidence Thresholds: Set minimum confidence levels for auto-scheduling
- Ambiguity Handling: Route unclear dates to human review or clarification requests

Sample Input/Output

Input examples:
- "early next July"
- "2 weeks after Thanksgiving"
- "next Wednesday evening"
- "Q3 2025"
- "mañana por la mañana" (Spanish)
- "first thing Monday"

Rich output:

{
  "parsed": [{
    "start": "2025-07-01T00:00:00Z",
    "end": "2025-07-10T23:59:59Z",
    "timezone": "America/New_York"
  }],
  "confidence": 0.95,
  "method": "custom_rules",
  "business_insights": [{
    "type": "business_warning",
    "message": "Selected date range includes July 4th holiday"
  }],
  "predictions": [{
    "type": "time_preference",
    "suggestion": "You usually schedule meetings at 10 AM"
  }],
  "ambiguities": [],
  "alternatives": [{
    "interpretation": "Early July 2026",
    "confidence": 0.15
  }],
  "performance": {
    "cache_hit": true,
    "response_time": "0.8ms"
  }
}

Why This Workflow is Unique
- World-Class Accuracy: 96%+ success rate on complex expressions
- AI Learning: Improves over time with user feedback
- Global Ready: Multi-language and timezone intelligence
- Business Smart: Enterprise rules and holiday awareness
- Performance Optimized: Sub-millisecond cached responses
- Context Aware: Provides confidence scores and alternatives for AI decision-making

Transform your scheduling workflows from rigid form inputs to natural, conversational date requests that your users will love!
by Halfbit 🚀
Jura Coffee Counter: Webhook API & Google Sheets Logger ☕️

Track how many coffees your Jura E8 espresso machine makes — fully automated via webhook and Google Sheets.

This workflow exposes a custom API endpoint that can be called by smart devices, such as an ESP8266 or ESP32 reading data from a Jura E8 coffee machine via Bluetooth Low Energy (BLE). The incoming data (including the total coffee count) is timestamped and appended to a Google Sheet, making it easy to visualize or analyze your machine usage.

☕ Originally built for a Jura E8, based on the AlexxIT/Jura reverse-engineering project.

> 📝 This workflow uses Google Sheets as a logging backend. You can easily switch it to Airtable, Notion, or a database of your choice.

Live example available at: https://halfbitstudio.com/o-nas/

> 🖥️ In our setup, this workflow is used to provide real-time coffee consumption stats displayed directly on our website.

> 🔌 Some Jura machines require an accessory Bluetooth transmitter to enable connectivity. Communication is based on the Bluetooth Low Energy (BLE) protocol.

Use Case
- Tracking usage of a Jura coffee machine
- Logging IoT sensor data into Google Sheets
- Creating dashboards for daily consumption
- Smart office setups with coffee stats!

Features
- ☁️ Two webhook endpoints:
  - POST /{{WEBHOOK_POST_PATH}} — receives JSON from the ESP (coffee machine reader)
  - GET /{{WEBHOOK_GET_PATH}} — returns the latest records as JSON
- 📅 Timestamping via a Date & Time node
- 🔹 Coffee counter extraction from the incoming JSON
- 🧾 Appends structured rows to Google Sheets
- 📤 Webhook response for external status or dashboards

Setup Instructions

Jura Coffee Machine Integration (Hardware)
Use an ESP device (e.g. ESP8266 or ESP32) to connect to the Jura E8 via Bluetooth Low Energy (BLE), and send POST requests with a JSON payload (see the test sketch at the end of this section):

{ "total_coffees": 123 }

Reverse-engineered protocol reference: AlexxIT/Jura

Google Sheets Configuration
1. Create a new Google Sheet with column headers like: date | time | coffee counter
2. Connect your Google account in n8n and authorize access to this sheet.
3. Replace the documentId and sheetName fields in the Google Sheets nodes:
   - Use the full URL to your spreadsheet
   - Use the actual sheet name (e.g. Sheet1)

Environment Variables & Placeholders

| Placeholder | Description |
| --- | --- |
| {{WEBHOOK_POST_PATH}} | Endpoint to receive coffee counter data |
| {{WEBHOOK_GET_PATH}} | Endpoint to return latest data (for dashboards) |
| {{SHEET_ID}} | Google Spreadsheet ID |
| {{GOOGLE_CREDENTIALS}} | OAuth2 credentials for Google Sheets |
| {{DATA_COLUMNS}} | Column names in the target sheet |

Testing the Workflow
1. Send a test request: use Postman or the ESP to send a POST request to /{{WEBHOOK_POST_PATH}}; the body should include a total_coffees value.
2. Check the Google Sheet: open your sheet and verify that a new row was appended.
3. Test the GET endpoint: access the second webhook URL (e.g. /{{WEBHOOK_GET_PATH}}) in a browser or fetch it via API.
4. Optional: use the Respond to Webhook output in a dashboard or frontend.

Customization Tips
- Sheet format: Add more columns if you want to track additional data (e.g. machine temperature, errors).
- Output format: Replace Google Sheets with any other storage (e.g. MySQL, Notion).
- Auth layer: Add basic auth or token verification if needed for public exposure.
- Notifications: Send alerts to Discord/Slack when reaching thresholds (e.g. 200 coffees brewed).

Tags: google-sheets, iot, webhook, jura, coffee, api, automation
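To exercise the POST endpoint end to end without the hardware, a stand-in for the ESP request could look like this; the hostname and path are placeholders for your own instance:

```javascript
// Hedged test sketch: post a coffee count to the workflow's webhook from
// Node.js (the same request an ESP8266/ESP32 would make over Wi-Fi).
const res = await fetch('https://your-n8n-instance/webhook/jura-coffee', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ total_coffees: 123 }),
});

// The workflow's Respond to Webhook node returns a status payload.
console.log(res.status, await res.text());
```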
by Anurag
Description

This workflow automates the extraction of structured data from invoices or similar documents using Docsumo's API. Users upload a PDF via an n8n form trigger, which is then sent to Docsumo for processing and structured parsing. The workflow fetches key document metadata and all line items, reconstructs each invoice row with combined header and item details, and finally exports all results as an Excel file. Ideal for automating invoice data entry, reporting, or integrating with accounting systems.

How It Works
1. A user uploads a PDF document using the integrated n8n form trigger.
2. The workflow securely sends the document to Docsumo via REST API.
3. After uploading, it checks and retrieves the parsed document results.
4. Header information and table line items are extracted and mapped into structured records (sketched at the end of this section).
5. The complete result is exported as an Excel (.xls) file.

Setup Steps
1. Docsumo Account: Register and obtain your API key from Docsumo.
2. n8n Credentials Manager: Add your Docsumo API key as an HTTP header credential (never hardcode the key in the workflow).
3. Workflow Configuration: In the HTTP Request nodes, set the authentication to your saved Docsumo credentials. Update the file type or document type in the request (e.g., "type": "invoice") as needed for your use case.
4. Testing: Enable the workflow and use the built-in form to upload a sample invoice for extraction.

Features
- Supports PDF uploads via n8n's built-in form or via API/webhook extension.
- Sends files directly to Docsumo for document data extraction using secure credentials.
- Extracts invoice-level metadata (number, date, vendor, totals) and full line item tables.
- Consolidates all data into an easy-to-use Excel format for download or integration.
- Modular node structure, easily extensible for further automation.

Prerequisites
- Docsumo account with API access enabled.
- n8n instance with Form, HTTP Request, Code, and Convert to File nodes.
- Working Docsumo API key stored securely in n8n's credentials manager.

Example Use Cases

| Scenario | Benefit |
| --- | --- |
| Invoice Automation | Extract line items and metadata rapidly |
| Receipts Processing | Parse and digitize business receipts |
| Bulk Bill Imports | Batch process bills for analytics |

Notes
- Credentials Security: Do not store your API key directly in HTTP Request nodes; always use n8n's credentials manager.
- Sticky Notes: The workflow includes sticky notes for the setup, input, API call, extraction, and output steps to assist template users.
- Custom Columns: You can customize header or line item extraction by editing the Code node as needed.
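A minimal sketch of that row-reconstruction step as an n8n Code node; the field names (invoice_number, line_items, and so on) are illustrative assumptions, since Docsumo's actual response keys depend on your document type:

```javascript
// Combine invoice-level header fields with each line item so every
// output row is self-contained and ready for the Excel export node.
const doc = $json;

const header = {
  invoice_number: doc.invoice_number,
  invoice_date: doc.invoice_date,
  vendor: doc.vendor_name,
  total: doc.total_amount,
};

// One n8n item per line item, with the header repeated on every row.
return (doc.line_items ?? []).map((item) => ({
  json: {
    ...header,
    description: item.description,
    quantity: item.quantity,
    unit_price: item.unit_price,
    amount: item.amount,
  },
}));
```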
by Dmytro
AI-Powered Product Assistant for E-commerce

Transform your online store's customer service with an intelligent AI assistant that automatically processes customer inquiries, searches your product database, and provides personalized responses about product availability, pricing, and specifications. Perfect for shoe stores, fashion retailers, and any business with extensive product catalogs - this workflow eliminates manual customer service work while increasing response speed and accuracy.

How it works
1. A customer sends a product inquiry via webhook (Instagram DM, website chat, or messaging app).
2. AI extracts key product details (brand, model, size, color) from the natural language text.
3. The system searches your Google Sheets product database with smart filtering (sketched at the end of this section).
4. AI generates a friendly, personalized response with availability, pricing, and stock information.
5. An automatic response is sent back to the customer with product details or alternatives.

Example exchange:
- Customer inquiry: "Do you have Nike Air Max 40 size?"
- AI response: "Nike Air Max 90, size 40 - in stock: 3 pieces, price $120"

Set up steps
1. Prepare your product database - create a Google Sheet with columns: Brand, Model, Size, Color, Price, Quantity.
2. Configure AI settings - connect the OpenAI API for natural language processing.
3. Set up the webhook endpoint - configure the trigger for your messaging platform (Instagram, Telegram, website chat).
4. Test with sample inquiries - verify the AI correctly parses requests and finds products.
5. Deploy and monitor - launch your automated assistant and track performance.

Time investment: 30-45 minutes of setup; works immediately with any product catalog up to 1000+ items.
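A minimal sketch of the filtering step as an n8n Code node, assuming the Google Sheets rows arrive as input items and the AI node has already produced a query object; the query literal here is just an example:

```javascript
// Match AI-extracted attributes against product rows. Column names follow
// the sheet described above: Brand, Model, Size, Color, Price, Quantity.
const query = { brand: 'Nike', model: 'Air Max', size: '40', color: null }; // from the AI node

const matches = $input.all()
  .map((item) => item.json)
  .filter((row) =>
    (!query.brand || String(row.Brand).toLowerCase() === query.brand.toLowerCase()) &&
    (!query.model || String(row.Model).toLowerCase().includes(query.model.toLowerCase())) &&
    (!query.size || String(row.Size) === String(query.size)) &&
    (!query.color || String(row.Color).toLowerCase() === query.color.toLowerCase()) &&
    Number(row.Quantity) > 0 // only offer items actually in stock
  );

return matches.map((row) => ({ json: row }));
```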
by phil
This workflow automates web scraping of Amazon search result pages: it retrieves the raw HTML, cleans it to retain only the relevant product elements, and then uses an LLM to extract structured product data (name, description, rating, reviews, and price) before saving the results back to Google Sheets. It integrates Google Sheets to supply and collect URLs, BrightData to fetch page HTML, a custom n8n Function node to sanitize the HTML, LangChain (OpenRouter GPT-4) to parse product details, and Google Sheets again to store the output.

(Screenshots: the "URL to scrape" input sheet and the "Result" output sheet.)

Who Needs Amazon Search Result Scraping?

This scraping workflow is ideal for teams and businesses that need to monitor Amazon product listings at scale:
- E-commerce Analysts: Track competitor pricing, ratings, and inventory trends.
- Market Researchers: Collect data on product popularity and reviews for market analysis.
- Data Teams: Automate ingestion of product metadata into BI pipelines or data lakes.
- Affiliate Marketers: Keep affiliate catalogs up to date with the latest product details and prices.

If you need reliable, structured data from Amazon search results delivered directly into your spreadsheets, this workflow saves you hours of manual copy-and-paste.

Why Use This Workflow?
- End-to-End Automation: From URL list to clean JSON output in Sheets.
- Robust HTML Cleaning: Strips scripts, styles, unwanted tags, and noise.
- Accurate Structured Parsing: Leverages GPT-4 via LangChain for reliable extraction.
- Scalable & Repeatable: Processes thousands of URLs in batches.

Step-by-Step: How This Workflow Scrapes Amazon
1. Get URLs from Google Sheets – Reads a list of search result URLs.
2. Loop Over Items – Iterates through each URL in controlled batches.
3. Fetch Raw HTML – Uses BrightData's Web Unlocker proxy to retrieve the page.
4. Clean HTML – A Function node removes the doctype, scripts, styles, head, comments, classes, and non-whitelisted tags, collapsing extra whitespace (a sketch of this node is shown below).
5. Extract with LLM – Passes the cleaned HTML into LangChain → GPT-4 to output JSON for each product: name, description, rating, reviews, price.
6. Save Results – Appends the JSON fields as columns back into a "results" sheet in Google Sheets.
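A minimal sketch of that Function node; the allowedTags whitelist mirrors the array mentioned under Customization below, and the input field name is an assumption about the upstream request node:

```javascript
// Hedged sketch of the "Clean HTML" Function node described above.
// Adjust the allowedTags list to keep additional elements.
const allowedTags = ['div', 'span', 'a', 'p', 'ul', 'li', 'h1', 'h2', 'h3', 'img'];

let html = $json.data ?? ''; // raw HTML from the BrightData request node (assumed field)

html = html
  .replace(/<!DOCTYPE[^>]*>/gi, '')
  .replace(/<script[\s\S]*?<\/script>/gi, '')
  .replace(/<style[\s\S]*?<\/style>/gi, '')
  .replace(/<head[\s\S]*?<\/head>/gi, '')
  .replace(/<!--[\s\S]*?-->/g, '')
  .replace(/\sclass="[^"]*"/gi, '');

// Drop any tag not on the whitelist, keeping its inner text.
html = html.replace(/<\/?([a-z][a-z0-9]*)\b[^>]*>/gi, (match, tag) =>
  allowedTags.includes(tag.toLowerCase()) ? match : ''
);

// Collapse runs of whitespace left behind by the removals.
html = html.replace(/\s+/g, ' ').trim();

return [{ json: { cleanedHtml: html } }];
```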
Customization: Tailor to Your Needs
- Adaptable Sites: This workflow can be adapted to any e-commerce or other website, for example Walmart or eBay.
- Whitelist Tags: Modify the allowedTags array in the Code node to keep additional HTML elements.
- Schema Changes: Update the Structured Output Parser schema to include more fields (e.g., availability, SKU).
- Alternate Data Sink: Instead of Sheets, route output to a database, CSV file, or webhook.

🔑 Prerequisites
- Google Sheets Credentials: OAuth credentials configured in n8n.
- BrightData API Token: Stored in n8n credentials as BRIGHTDATA_TOKEN.
- OpenRouter API Key: Configured for the LangChain node to call GPT-4.
- n8n Instance: Self-hosted or cloud, with sufficient quota for HTTP requests and LLM calls.

🚀 Installation & Setup

1. Configure Credentials
   - In n8n, set up Google Sheets OAuth under "Credentials."
   - Add the BrightData token as a new HTTP Request credential.
   - Create an OpenRouter API key credential for the LangChain node.
2. Import the Workflow
   - Copy the JSON workflow into n8n's "Import" dialog.
   - Map your Google Sheet IDs and GIDs to the {{WEB_SHEET_ID}}, {{TRACK_SHEET_GID}}, and {{RESULTS_SHEET_GID}} placeholders.
   - Ensure the BRIGHTDATA_TOKEN credential is selected on the HTTP Request node.
3. Test & Run
   - Add a few Amazon search URLs to your "track" sheet.
   - Execute the workflow and verify that product data appears in your "results" sheet.
   - Tweak the batch size or parser schema as needed.

⚠ Important
- API Rate Limits: Monitor your BrightData and OpenRouter usage to avoid throttling.
- Amazon's Terms: Ensure your scraping complies with Amazon's policies and legal requirements.

Summary

This workflow delivers a fully automated, scalable solution for extracting structured product data from Amazon search pages directly into Google Sheets, streamlining your competitive analysis and data collection. 🚀

Phil | Inforeole