by Vincent
Automate Actions After PDF Generation with PDFMonkey in n8n

Overview
This n8n workflow template allows you to automatically react to PDF generation events from PDFMonkey. When a new PDF is successfully created, this workflow retrieves the file and processes it based on your needs, whether that's sending it via email, saving it to cloud storage, or integrating it with other apps.

How It Works
1. Trigger: The workflow listens for a PDFMonkey webhook event when a new PDF is generated.
2. Retrieve PDF: It fetches the newly generated PDF file from PDFMonkey.
3. Process & Action: Depending on the outcome (see the sketch at the end of this description):
   - ✅ On success: the workflow downloads the PDF and can distribute or store it.
   - ❌ On failure: it handles errors accordingly (e.g., sending alerts, retrying, or logging the issue).

Configuration
To set up this workflow, follow these steps:
1. Copy the Webhook URL generated by n8n.
2. Go to your PDFMonkey Webhooks dashboard and paste the URL into the appropriate field to define the callback URL.
3. Save your settings and trigger a test to ensure proper integration.

📖 For detailed setup instructions, visit the PDFMonkey Webhooks Documentation.

Use Cases
This workflow is ideal for:
- **Automating invoice processing** (e.g., sending PDFs to customers via email).
- **Archiving reports** in cloud storage (e.g., Google Drive, Dropbox, or AWS S3).
- **Sending notifications** via Slack, Microsoft Teams, or WhatsApp when a new PDF is available.
- **Logging generated PDFs** in Airtable, Notion, or a database for tracking.

Customization
You can customize this workflow to:
- **Add conditional logic** (e.g., different actions based on the document type).
- **Enhance security** (e.g., encrypting PDFs before sharing).
- **Extend integrations** by connecting with CRM tools, task managers, or analytics platforms.

Need Help?
If you need assistance setting up or customizing this workflow, feel free to reach out to us via chat on pdfmonkey.io, and we'll be happy to help! 🚀
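To make the trigger-and-branch step concrete, here is a minimal Code node sketch for handling the PDFMonkey callback. It assumes the webhook body carries a `document` object with `status` and `download_url` fields; verify the exact payload schema in the PDFMonkey documentation before relying on it.

```typescript
// n8n Code node sketch: route a PDFMonkey callback by status.
// Assumes the webhook body contains a `document` object with
// `status` and `download_url` fields; verify against the payload
// your PDFMonkey account actually sends.
const body = $input.first().json.body ?? {};
const doc = body.document ?? {};

if (doc.status === 'success' && doc.download_url) {
  // Hand the URL to a downstream HTTP Request node for download.
  return [{ json: { outcome: 'success', id: doc.id, url: doc.download_url } }];
}

// Failure branch: surface enough context for an alert or log entry.
return [{ json: { outcome: 'failure', id: doc.id, status: doc.status ?? 'unknown' } }];
```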
by Edoardo Guzzi
This template integrates OpenAI's image generation and editing endpoints via the GPT-Image-1 model to visually create and manipulate images based on prompts. It features base64 conversion, binary handling, and prompt chaining. Perfect for marketing, design, product visuals, and creative workflows.

🛠️ Requirements
- OpenAI account with access to gpt-image-1 (you may need organization verification to access this model)
- OpenAI API credentials configured in n8n
- A self-hosted or cloud n8n instance
- Basic familiarity with the n8n UI (no programming required)

🔧 Step-by-step Instructions

Step 1: Manual Trigger
Starts the workflow on click. Ideal for testing the generation and edit logic.

Step 2: Generate Image
The Create image call node sends a prompt to OpenAI and returns a base64 image.
Example prompt: A cyberpunk city at night with flying cars and neon lights

Step 3: Convert to Binary
The base64 image is converted into a usable binary PNG file with the Convert json binary to File node (see the sketch after this description).

Step 4: Edit the Image
The binary file is passed to OpenAI's /images/edits endpoint, where a new prompt applies changes to the image.
Example: Add a glowing robot in the foreground with a neon sword
✅ Supported model: gpt-image-1
⚠️ Requires a binary file (not base64)

Step 5: Final Conversion
Converts the final edited image from base64 to a file so it can be downloaded or used in other nodes.

🎯 Real-World Use Cases
- 🎨 Artists & Creators: concept art and illustration variations
- 🛍️ E-commerce: auto-generate product mockups
- 📰 Marketing: create eye-catching blog or social visuals

💡 Bonus Ideas
- Add a Telegram or Slack node to generate or edit images via chat
- Use a Webhook to feed prompts from a form or frontend
- Add a mask to restrict edits to specific areas (e.g., background only)
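For reference, the base64-to-binary conversion from Step 3 could also be done in a Code node. This is a hedged sketch assuming the OpenAI images response shape `data[0].b64_json`; the built-in conversion node remains the simpler route.

```typescript
// n8n Code node sketch: turn a base64 image string into a binary PNG
// item, mirroring what the "Convert json binary to File" node does.
// Assumes the OpenAI response shape { data: [{ b64_json: "..." }] };
// adjust the path if your node outputs a different structure.
const b64 = $input.first().json.data[0].b64_json;

return [{
  json: { note: 'image converted to binary' },
  binary: {
    // "data" is the default binary property downstream nodes look for.
    data: await this.helpers.prepareBinaryData(
      Buffer.from(b64, 'base64'),
      'generated.png',
      'image/png',
    ),
  },
}];
```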
by DIGITAL BIZ TECH
Knowledge RAG & AI Chat Agent: Google Drive to Qdrant

Description
This workflow transforms a Google Drive folder into an intelligent, searchable knowledge base and provides a chat agent to query it. It's composed of two distinct flows:
1. An ingestion pipeline to process documents.
2. A live chat agent that uses RAG (Retrieval-Augmented Generation) and optional web search to answer user questions.

This system fully automates the creation of a "Chat with your docs" solution and enhances it with external web-searching capabilities.

Quick Implementation Steps
1. Import the workflow JSON into your n8n instance.
2. Set up credentials for Google Drive, Mistral AI, OpenAI, and Qdrant.
3. Open the Web Search node and add your Tavily AI API key to the Authorization header.
4. In the Google Drive (List Files) node, set the Folder ID you want to ingest.
5. Run the workflow manually once to populate your Qdrant database (Flow 1).
6. Activate the workflow to enable the chat trigger (Flow 2).
7. Copy the public webhook URL from the When chat message received node and open it in a new tab to start chatting.

What It Does
The workflow is divided into two primary functions:

1. Knowledge Base Ingestion (Manual Trigger)
This flow populates your vector database.
- **Scans Google Drive:** Lists all files from a specified folder.
- **Processes Files Individually:** Downloads each file.
- **Extracts Text via OCR:** Uses the **Mistral AI OCR API** for text extraction from PDFs, images, etc.
- **Generates Smart Metadata:** A Mistral LLM assigns metadata like document_type, project, and assigned_to.
- **Chunks & Embeds:** Text is cleaned, chunked, and embedded via **OpenAI's text-embedding-3-small** model.
- **Stores in Qdrant:** Text chunks, embeddings, and metadata are stored in a Qdrant collection (docaiauto).

2. AI Chat Agent (Chat Trigger)
This flow powers the conversational interface.
- **Handles User Queries:** Triggered when a user sends a chat message.
- **Internal RAG Retrieval:** Searches the **Qdrant Vector Store** first for answers.
- **Web Search Fallback:** If the answer isn't available internally, the agent offers to perform a **Tavily AI** web search.
- **Contextual Responses:** Combines internal and external info for comprehensive answers.

Who's It For
Ideal for:
- Teams building internal AI knowledge bases from Google Drive.
- Developers creating AI-powered support, research, or onboarding bots.
- Organizations implementing RAG pipelines.
- Anyone making unstructured Google Drive documents searchable via chat.

Requirements
- **n8n instance** (self-hosted or cloud).
- **Google Drive credentials** (to list and download files).
- **Mistral AI API key** (for OCR & metadata extraction).
- **OpenAI API key** (for embeddings and chat LLM).
- **Qdrant instance** (cloud or self-hosted).
- **Tavily AI API key** (for web search).

How It Works
The workflow runs two independent flows in parallel:

Flow 1: Ingestion Pipeline (Manual Trigger)
1. List Files: Fetch files from Google Drive using the Folder ID.
2. Loop & Download: Each file is processed one by one.
3. OCR Processing: Upload the file to Mistral, retrieve a signed URL, and extract text using Mistral DOC OCR.
4. Metadata Extraction: Analyze the text using a Mistral LLM.
5. Text Cleaning & Chunking: Split into 1000-character chunks.
6. Embeddings Creation: Use OpenAI embeddings.
7. Vector Insertion: Push chunks + metadata into Qdrant.

Flow 2: AI Chat Agent (Chat Trigger)
1. Chat Trigger: Starts when a chat message is received.
2. AI Agent: Uses OpenAI + Simple Memory to process context.
3. RAG Retrieval: Queries Qdrant for related data.
4. Decision Logic: Found → form an answer. Not found → ask if the user wants a web search.
5. Web Search: Performs a Tavily web lookup.
6. Final Response: Synthesizes internal + external info.

How To Set Up
1. Import the Workflow: Upload the provided JSON into your n8n instance.
2. Configure Credentials. Create and assign:
   - **Google Drive** → Google Drive nodes
   - **Mistral AI** → Upload, Signed URL, DOC OCR, Cloud Chat Model
   - **OpenAI** → Embeddings + Chat Model nodes
   - **Qdrant** → Vector Store nodes
3. Add Tavily API Key: Open the Web Search node → Parameters → Headers and add your key under Authorization (e.g., tvly-xxxx).
4. Node Configuration:
   - **Google Drive (List Files):** Set the Folder ID.
   - **Qdrant nodes:** Ensure both use the same collection name (docaiauto).
5. Run Ingestion (Flow 1): Click Test workflow to populate Qdrant with your Drive documents.
6. Activate Chat (Flow 2): Toggle the workflow ON to enable real-time chat.
7. Test: Open the webhook URL and start chatting!

How To Customize
- **Change LLMs:** Swap models in the OpenAI or Mistral nodes (e.g., GPT-4o, Claude 3).
- **Modify Prompts:** Edit the system message in the ai chat agent to alter tone or logic.
- **Chunking Strategy:** Adjust chunkSize and chunkOverlap in the Code node (see the sketch at the end of this description).
- **Different Sources:** Replace Google Drive with AWS S3, a local folder, etc.
- **Automate Updates:** Add a **Cron** node for scheduled ingestion.
- **Validation:** Add post-processing steps after metadata extraction.
- **Expand Tools:** Add more functional nodes like Google Calendar or Calculator.

Use Case Examples
- **Internal HR Bot:** Answer HR-related queries from stored policy docs.
- **Tech Support Assistant:** Retrieve troubleshooting steps for products.
- **Research Assistant:** Summarize and compare market reports.
- **Project Management Bot:** Query document ownership or project status.

Troubleshooting Guide

| Issue | Possible Solution |
|-------|-------------------|
| Chat agent doesn't respond | Check the OpenAI API key and model availability (e.g., gpt-4.1-mini). |
| Known documents not found | Ensure the ingestion flow ran and both Qdrant nodes use the same collection name. |
| OCR node fails | Verify the Mistral API key and input file integrity. |
| Web search not triggered | Re-check the Tavily API key in the Web Search node headers. |
| Incorrect metadata | Tune the Information Extractor prompt or use a stronger Mistral model. |

Need Help or More Workflows?
Want to customize this workflow for your business or integrate it with your existing tools? Our team at Digital Biz Tech can tailor it precisely to your use case, from automation logic to AI-powered enhancements.
💡 We can help you set it up for free, from connecting credentials to deploying it live.
Contact: shilpa.raju@digitalbiz.tech
Website: https://www.digitalbiz.tech
LinkedIn: https://www.linkedin.com/company/digital-biz-tech/
You can also DM us on LinkedIn for any help.
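As a reference for the Chunking Strategy customization above, here is a hedged sketch of what the chunking Code node might look like. The chunkSize value mirrors the 1000-character chunks mentioned in Flow 1; the overlap value and the `text` field name are assumptions.

```typescript
// Hypothetical chunking step: a sliding window with overlap, matching
// the chunkSize/chunkOverlap knobs mentioned in the customization list.
const chunkSize = 1000;
const chunkOverlap = 200;

const text = $input.first().json.text ?? '';
const chunks = [];
for (let start = 0; start < text.length; start += chunkSize - chunkOverlap) {
  chunks.push(text.slice(start, start + chunkSize));
}

// Emit one n8n item per chunk so the embedding node processes each.
return chunks.map((chunk, i) => ({ json: { chunk, chunkIndex: i } }));
```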
by Tom
This workflow shows a low-code approach to creating an HTML table based on Google Sheets data. It's similar to this workflow, but allows fully customizing the HTML output.

To run the workflow:
1. Make sure you have a Google Sheet with a header row and some data in it.
2. Grab your sheet ID.
3. Add it to the Google Sheets node.
4. Activate the workflow or execute it manually.
5. Visit the URL provided by the webhook node in your browser (production URL if the workflow is active, test URL if the workflow is executed manually).
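The fully customizable HTML step could look something like this hedged Code node sketch, assuming one n8n item per sheet row with column names as JSON keys; adjust the markup to taste.

```typescript
// Render the Google Sheets items (one item per row) into an HTML table.
// Column names come from the keys of the first row, so any header works.
const rows = $input.all().map((item) => item.json);
const columns = Object.keys(rows[0] ?? {});

const header = columns.map((c) => `<th>${c}</th>`).join('');
const body = rows
  .map((row) => `<tr>${columns.map((c) => `<td>${row[c] ?? ''}</td>`).join('')}</tr>`)
  .join('\n');

// The webhook response node can return this string as text/html.
const html = `<table><thead><tr>${header}</tr></thead><tbody>${body}</tbody></table>`;
return [{ json: { html } }];
```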
by tanaypant
This workflow automatically follows the steps in a custom incident response playbook: it manages incidents in PagerDuty, creates Jira tickets, and notifies the on-call team in Mattermost. This workflow consists of three sub-workflows, each automating specific steps in the playbook. Read more about this use case and learn how to set up the workflows step-by-step in the blog tutorial How to automate every step of an incident response workflow.

Prerequisites
- A PagerDuty account and credentials
- A Mattermost account and credentials
- A Jira account and credentials

Nodes
- Webhook nodes trigger the workflows when an incident is created in PagerDuty, and when the incident is acknowledged and resolved.
- Mattermost nodes create an auxiliary channel for the on-call team to discuss the incident, with buttons to acknowledge the incident and mark it as resolved (see the sketch below).
- PagerDuty nodes update the status of the incident.
- Jira nodes create an issue about the incident and update its status when it's resolved.
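For readers curious how the acknowledge/resolve buttons work, here is a hedged sketch of a Mattermost post with interactive actions that call back into n8n webhooks. The channel ID, webhook URLs, and context fields are placeholders, not the template's exact values.

```typescript
// Sketch of the Mattermost message the workflow could post: an
// attachment whose buttons POST back to n8n webhooks on click.
const message = {
  channel_id: 'abc123channelid', // assumed: the auxiliary channel created earlier
  message: 'New PagerDuty incident: database latency spike',
  props: {
    attachments: [
      {
        text: 'Choose an action:',
        actions: [
          {
            name: 'Acknowledge',
            integration: {
              url: 'https://your-n8n.example.com/webhook/incident-ack',
              context: { incidentId: 'PD-1234', action: 'acknowledge' },
            },
          },
          {
            name: 'Resolve',
            integration: {
              url: 'https://your-n8n.example.com/webhook/incident-resolve',
              context: { incidentId: 'PD-1234', action: 'resolve' },
            },
          },
        ],
      },
    ],
  },
};

return [{ json: message }];
```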
by Catalina Kuo
Overview
Do you often forget to record expenses? Let Spending Tracker Bot help you!
This AI image/text Spending Tracker LINE Bot workflow lets you quickly create a customized spending-tracker robot without writing a line of code. At any time, you can speak or send a photo, and the AI will parse it and automatically log the expense to your cloud ledger.

Preparation
1. Enable the Google Sheets API in GCP and complete the OAuth setup.
2. Create the Google Sheet and populate the field names (feel free to modify based on your own needs).
3. Configure the Webhook URL in the LINE Developers Console.
4. Get an OpenAI API key.

Node Configurations

Webhook
Purpose: The URL is used to receive incoming requests from LINE.
Configuration: Paste this URL into the Webhook URL field in your LINE Developers Console.

Switch based on Expense Type & Set/Https
Purpose: To distinguish whether the incoming message is text or an image.
Configuration: Use a Switch node to route the flow accordingly.

AI Agent
Purpose: To extract and organize the required fields.
Configuration: Chat Model & Structured Output Parser.

Create a deduplication field
Purpose: To prevent duplicate entries by creating a unique "for_deduplication" field.
Configuration: Join multiple field names using hyphens (-) as separators. See the sketch after this description.

Aggregate & Merge_all
Purpose: To prevent duplicate entries in the data table.
Configuration: Read the Google Sheet, extract the existing "for_deduplication" column into a dedupeList, and compare it against the newly generated "for_deduplication" value from the previous step.

Response Switch
Purpose: To route data and send appropriate replies based on the content.
Configuration: Use the replyToken to respond after the branching logic. Depending on the result, either write to the data table or return a message:
- ✅ Expense recorded successfully: <for_deduplication>
- Irrelevant details or images will not be logged.
- ⚠️ This entry has already been logged and will not be duplicated.

Step-by-step teaching notes: 【Auto Expense Tracker from LINE Messages with GPT-4 and Google Sheets】【AI 圖片文字記帳 Line Bot，自動記帳寫入 Google Sheet】
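A hedged sketch of the deduplication logic described above: the expense field names and the `Aggregate` node reference are assumptions, so adapt them to your own sheet columns and node names.

```typescript
// Build the "for_deduplication" key by joining fields with hyphens,
// then flag the entry if it already exists in the sheet.
const entry = $input.first().json;
const forDeduplication = [entry.date, entry.item, entry.amount].join('-');

// dedupeList is assumed to have been collected from the existing
// "for_deduplication" column of the Google Sheet in a prior node.
const dedupeList = $('Aggregate').first().json.dedupeList ?? [];

const isDuplicate = dedupeList.includes(forDeduplication);
return [{ json: { ...entry, for_deduplication: forDeduplication, isDuplicate } }];
```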
by CustomJS
This n8n template shows how to extract selected pages from a generated PDF with the PDF Toolkit by www.customjs.space (@custom-js/n8n-nodes-pdf-toolkit).

Notice
Community nodes can only be installed on self-hosted instances of n8n.

What this workflow does
- **Downloads** each PDF using an HTTP Request.
- **Extracts** pages from the PDF file as needed.

Requirements
- **Self-hosted** n8n instance
- **CustomJS API key** for extracting PDF pages
- **PDF files** to extract pages from

Workflow Steps
1. Manual Trigger: Runs with user interaction.
2. Download PDF File: Passes URLs of the PDF files to download.
3. Extract Pages from PDF: Extracts selected pages from the downloaded PDF.

Usage
Get an API key from CustomJS:
1. Sign up to the CustomJS platform.
2. Navigate to your profile page.
3. Press the "Show" button to get your API key.

Set credentials for the CustomJS API in n8n:
- Copy and paste the API key generated from CustomJS.

Design the workflow:
1. A Manual Trigger for starting the workflow.
2. HTTP Request nodes for downloading PDF files.
3. Extract Pages from PDF.

You can replace the logic for triggering and returning results. For example, you can trigger this workflow by calling a webhook and get the result as a response from the webhook. Simply replace the Manual Trigger and Write to Disk nodes.

Perfect for
- Taking note of specific pages from PDF files.
- Splitting a PDF file into multiple parts.
by Oneclick AI Squad
This automated n8n workflow detects and manages fraudulent booking transactions through comprehensive AI-powered analysis and multi-layered security checks. The system processes incoming travel booking data, performs IP geolocation verification, enriches transaction details with AI insights, calculates dynamic risk scores, and executes automated responses based on threat levels. All transactions are logged, and appropriate notifications are sent to relevant stakeholders.

Good to Know
- The workflow combines multiple detection methods, including IP geolocation, AI analysis, and risk-scoring algorithms.
- The Google Gemini Chat Model provides advanced natural language processing for transaction analysis.
- Risk levels are dynamically calculated and categorized as CRITICAL, HIGH, or standard risk.
- An automated blocking and flagging system protects against fraudulent transactions in real time.
- All transaction data is logged to Google Sheets for audit trails and pattern analysis.
- The system respects API rate limits and includes proper error handling mechanisms.

How It Works

1. Initial Data Ingestion & Extraction
- Monitors and captures incoming booking transaction data from various sources.
- Extracts key booking details including user information, payment data, booking location, and transaction metadata.
- Performs initial data validation and formatting for downstream processing.

2. IP Geolocation and AI Analysis
- **IP Geolocation Check**: Validates booking IP addresses by checking geolocation details and comparing against expected user locations.
- **AI Agent Integration**: Utilizes the Google Gemini Chat Model to analyze booking patterns, user behavior, and transaction anomalies.
- **Enhanced Data Processing**: Enriches transaction data with geographical context and AI-driven risk indicators.

3. Risk Calculation and Decision Logic
- **Enhanced Risk Calculator**: Combines AI-generated risk scores with geolocation-based factors, payment method analysis, and historical patterns (see the sketch at the end of this description).
- **Critical Risk Check**: Flags transactions with risk levels marked as CRITICAL for immediate action.
- **High Risk Check**: Identifies HIGH-risk transactions requiring additional verification steps.
- **Dynamic Scoring**: Adjusts risk calculations based on real-time threat intelligence and pattern recognition.

4. Action & Notification
- **Block User Account**: Automatically blocks user accounts for CRITICAL-risk transactions to prevent immediate fraud.
- **Flag for Review**: Marks HIGH-risk transactions for manual review by fraud prevention teams.
- **Send Notifications**: Dispatches real-time alerts via email and messaging systems to security teams.
- **Automated Responses**: Sends appropriate messages to users based on transaction status and risk level.

5. Logging & Response
- **Log to Google Sheets**: Records all transaction details, risk scores, and actions taken for comprehensive audit trails.
- **Flag for Review**: Maintains detailed logs of flagged transactions for pattern analysis and machine-learning improvements.
- **Response Tracking**: Monitors and logs all automated responses and manual interventions.

How to Use
1. Import the workflow into your n8n instance.
2. Configure Google Gemini Chat Model API credentials for AI analysis.
3. Set up IP geolocation service API access for location verification.
4. Configure the Google Sheets integration for transaction logging.
5. Establish Gmail/email credentials for notification delivery.
6. Define risk thresholds and scoring parameters based on your fraud tolerance levels.
7. Test the workflow with sample booking data to verify all components function correctly.
8. Monitor initial deployments closely to fine-tune the risk-scoring algorithms.
9. Establish manual review processes for flagged transactions.
10. Set up regular monitoring and maintenance schedules for optimal performance.

Requirements
- Google Gemini Chat Model API access
- IP geolocation service API credentials
- Google Sheets API integration
- Gmail API or SMTP email service for notifications
- n8n instance with the appropriate node modules installed

Customizing This Workflow
- **Risk Scoring Parameters**: Adjust risk calculation algorithms and thresholds based on your specific fraud patterns and business requirements.
- **AI Model Configuration**: Fine-tune Google Gemini prompts and analysis parameters for improved accuracy in your use case.
- **Notification Channels**: Add or modify notification methods including Slack, SMS, or webhook integrations.
- **Data Sources**: Extend input methods to accommodate additional booking platforms or payment processors.
- **Logging Destinations**: Configure alternative or additional logging systems such as databases or external SIEM platforms.
- **Geographic Rules**: Customize geolocation validation rules based on your service areas and customer base.
- **Automated Actions**: Modify or expand automated response actions based on your fraud prevention policies.
- **Review Workflows**: Integrate with existing fraud review systems or ticketing platforms for seamless manual review processes.
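A hedged sketch of the risk-scoring idea referenced under "Risk Calculation and Decision Logic" above. The thresholds, weights, and field names are illustrative assumptions, not the template's exact values.

```typescript
// Enhanced Risk Calculator sketch: combine an AI-assigned score with
// geolocation and payment signals, then map the total to a risk level.
const tx = $input.first().json;

let score = Number(tx.aiRiskScore ?? 0); // e.g., 0-50 from the Gemini analysis

// Geolocation mismatch between the booking IP and the user's profile country.
if (tx.ipCountry && tx.userCountry && tx.ipCountry !== tx.userCountry) {
  score += 30;
}

// Payment-method heuristics: prepaid cards and very new accounts score higher.
if (tx.paymentMethod === 'prepaid_card') score += 15;
if (tx.accountAgeDays !== undefined && tx.accountAgeDays < 7) score += 10;

const riskLevel = score >= 80 ? 'CRITICAL' : score >= 50 ? 'HIGH' : 'STANDARD';

return [{ json: { ...tx, riskScore: score, riskLevel } }];
```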
by Oneclick AI Squad
This guide walks you through setting up an AI-driven workflow to automate flight and hotel reservation processes using a conversational travel booking system. The workflow accepts booking requests, processes them via APIs, and sends confirmations, enabling a seamless travel booking experience.

What's the Goal?
- Automatically accept and process booking requests for flights and hotels via HTTP POST.
- Use AI to understand natural language requests and route them to the appropriate data processors.
- Search for flights and hotels using external APIs and process booking confirmations.
- Send confirmation emails and return structured booking data to users.
- Enable an automated system for efficient travel reservations.

By the end, you'll have a self-running system that handles travel bookings effortlessly.

Why Does It Matter?
Manual booking processes are time-consuming and prone to errors. This workflow offers:
- **Zero Human Error**: AI ensures accurate request parsing and booking processing.
- **Time-Saving Automation**: Automates the entire booking lifecycle, boosting efficiency.
- **Seamless Confirmation**: Sends automated emails and responses without manual intervention.
- **Enhanced User Experience**: Provides a conversational interface for bookings.

Think of it as your reliable travel booking assistant that keeps the process smooth and efficient.

How It Works
Here's the step-by-step flow of the automation:

Step 1: Trigger the Workflow
- **Webhook Trigger**: Accepts incoming booking requests via HTTP POST, initiating the workflow.

Step 2: Parse the Request
- **AI Request Parser**: Uses AI to understand natural language booking requests (e.g., flight or hotel) and extracts the relevant details.

Step 3: Route Booking Type
- **Booking Type Router**: Determines whether the request is for a flight or a hotel and routes it to the respective data processor (see the sketch after this description).

Step 4: Process Flight Data
- **Flight Data Processor**: Handles flight-specific data and prepares it for the search API.

Step 5: Search Flight API
- **Flight Search API**: Searches for available flights based on parameters (e.g., https://api.aviationstack.com) and returns results.

Step 6: Process Hotel Data
- **Hotel Data Processor**: Handles hotel-specific data and prepares it for the search API.

Step 7: Search Hotel API
- **Hotel Search API**: Searches for available hotels based on parameters (e.g., https://api.booking.com) and returns results.

Step 8: Process Flight Booking
- **Flight Booking Processor**: Processes flight bookings and generates confirmation details.

Step 9: Process Hotel Booking
- **Hotel Booking Processor**: Processes hotel bookings and generates confirmation details.

Step 10: Generate Confirmation Message
- **Confirmation Message Generator**: Creates structured confirmation messages for the user.

Step 11: Send Confirmation Email
- **Send Confirmation Email**: Sends the booking confirmation via email to the user.

Step 12: Send Response
- **Send Response**: Returns structured booking data to the user, completing the workflow.

How to Use the Workflow
Importing the workflow in n8n is a straightforward process. Follow these steps to import the Conversational Travel Booker workflow:
1. Download the Workflow: Obtain the workflow file (e.g., the JSON export from n8n).
2. Open n8n: Log in to your n8n instance.
3. Import Workflow: Navigate to the workflows section, click "Import," and upload the workflow file.
4. Configure Nodes: Adjust settings (e.g., API keys, webhook URLs) as needed.
5. Execute Workflow: Test and activate the workflow to start processing bookings.

Requirements
- n8n account and instance setup.
- Access to flight and hotel search APIs (e.g., Aviationstack, Booking.com).
- Email service integration for sending confirmations.
- Webhook URL for receiving booking requests.

Customizing this Workflow
- Modify the AI Request Parser to handle additional languages or booking types.
- Update the API endpoints in the Flight Search API and Hotel Search API nodes to match your preferred providers.
- Adjust the Send Confirmation Email node to include custom email templates or additional recipients.
- Schedule the Webhook Trigger to align with your business hours or demand peaks.
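A hedged sketch of the routing logic from Step 3, written as a Code node for illustration; the template itself likely uses a Switch node, and the `bookingType` field name is an assumption about what the AI parser emits.

```typescript
// Branch on the booking type the AI Request Parser extracted.
const parsed = $input.first().json;

if (parsed.bookingType === 'flight') {
  return [{ json: { ...parsed, route: 'flight' } }];
}
if (parsed.bookingType === 'hotel') {
  return [{ json: { ...parsed, route: 'hotel' } }];
}

// Unknown type: fail fast so the error can be surfaced to the caller.
throw new Error(`Unsupported booking type: ${parsed.bookingType}`);
```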
by Shahrear
📜 AI-Powered Contract Management Pipeline (Google Drive + VLM Run + Sheets + Calendar + Slack)

⚙️ What This Workflow Does
This workflow automatically extracts, organizes, and tracks legal contract details from documents uploaded to Google Drive. Using VLM Run's Execute Agent, it parses key metadata such as contract ID, parties, dates, and terms, then stores, alerts, and schedules reminders through Google Sheets, Calendar, and Slack.

🧩 Requirements
- **Google Drive OAuth2** for monitoring and downloads
- **VLM Run API credentials** with Execute Agent access
- **Google Sheets OAuth2** for structured record storage
- **Google Calendar OAuth2** for key-date reminders
- **Slack API credentials** for team notifications
- A reachable webhook URL (for receiving parsed contract data)

⚡ Quick Setup
1. Configure Google Drive OAuth2 and create an upload folder and a folder for saving extracted images.
2. Install the verified VLM Run node by searching for VLM Run in the node list, then click Install. Once installed, you can start using it in your workflows.
3. Add VLM Run API credentials for document parsing.
4. Configure Google Sheets and Calendar. For Google Sheets, pick your spreadsheet from the document list (e.g., test), then select the sheet inside it (e.g., Sheet1). Set the operation to Append Row, which adds new contract details as new rows. Turn on Map Each Column Manually and match each contract field (like Contract ID, Title, Parties, Effective Date, Termination Date) to its corresponding column in your Google Sheet.
5. Configure Slack for notifications.

⚙️ How It Works
1. Monitor Contract Uploads – Watches a target Google Drive folder for new file uploads (PDFs, images, or scans).
2. Download Contract File – Automatically downloads new contracts for AI analysis.
3. VLM Run ContractParser – Sends the file to the VLM Run Execute Agent, which extracts structured contract data, including:
   - Contract ID
   - Title
   - Parties (with roles)
   - Property address
   - Effective date
   - Termination date
   - Rent, deposit, payment terms, and governing law
4. Receive Contract Data – The webhook endpoint receives the structured JSON response.
5. Format Contract Data – Normalizes fields, formats dates, and prepares the record for storage (see the sketch at the end of this description).
6. Save to Expense Database (Google Sheets) – Appends the extracted data to a master Google Sheet for centralized contract tracking.
7. Notify via Slack – Posts a concise summary to a Slack channel, showing key contract details for visibility.
8. Create Calendar Events – Automatically schedules Google Calendar events for:
   - Effective Date
   - Termination Date
   - Renewal Reminder (60 days before termination)

💡 Why Use This Workflow
Manual contract management is error-prone and time-consuming; key details like renewal dates, payment terms, or termination clauses often get lost in email threads or folders. This workflow ensures:
- **Zero missed deadlines**: automatic Google Calendar reminders keep your team on track.
- **Instant team visibility**: Slack notifications keep legal, finance, and operations aligned.
- **End-to-end automation**: no need for manual parsing, data entry, or follow-ups.

🧠 Perfect For
- Legal teams automating contract intake and tracking
- Real estate or lease management workflows
- Finance or procurement teams needing expiration alerts
- Organizations centralizing contract metadata in Sheets

🛠️ How to Customize
- Modify Extraction Fields: Edit the VLM Run Execute Agent schema to add fields like contract value, payment schedule, department, or contact email.
- Change Storage: Swap Google Sheets for Airtable, Notion, or BigQuery if you manage large datasets or need relational tracking.
- Customize Notifications: Send Slack alerts only for high-value or expiring contracts, and tag relevant teams (e.g., @legal, @finance).
- Add Calendar Events: Auto-create events for reviews or payment milestones using extra date fields.
- Add Approvals or Signatures: Insert a Google Form or Slack approval step, or trigger DocuSign for e-signature automation.

⚠️ Community Node Disclaimer
This workflow uses community nodes (VLM Run) that may need additional permissions and custom setup.
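A hedged sketch of the date handling behind the Renewal Reminder calendar event: compute a date 60 days before termination. The field names are illustrative assumptions, not the template's exact schema.

```typescript
// Compute the renewal reminder 60 days before the termination date.
const contract = $input.first().json;

const termination = new Date(contract.terminationDate); // e.g., "2026-03-31"
const reminder = new Date(termination);
reminder.setDate(reminder.getDate() - 60);

return [{
  json: {
    ...contract,
    renewalReminderDate: reminder.toISOString().slice(0, 10), // YYYY-MM-DD
  },
}];
```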
by Obsidi8n
How it Works
This n8n template makes it possible to send emails directly from your Obsidian notes. It leverages the Obsidian Post Webhook plugin, allowing seamless integration between your notes and the email workflow.

What it does:
- Receives note content and metadata from Obsidian via a Webhook.
- Parses YAML frontmatter to define email recipients, subject, and more (see the sketch below).
- Automatically processes attachments, encoding them into an email-friendly format.
- Sends emails via Gmail and confirms the status back to Obsidian.
- Includes a testing feature to verify everything works before going live.

Set-up Steps
1. Webhook Configuration: Set your n8n POST Webhook URL in the Obsidian Post Webhook plugin settings.
2. Email Integration: Add your Gmail credentials to the n8n email nodes.
3. Test the Workflow: Run a test from Obsidian to ensure the template functions correctly.
4. Activate and Enjoy: Start sending customized emails with attachments from your notes in no time!
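A hedged sketch of the frontmatter-parsing step. A production Code node might use a YAML library instead; this minimal version only handles simple `key: value` lines, and the `to`/`subject` field names are assumptions rather than plugin requirements.

```typescript
// Pull recipient and subject out of a note's YAML frontmatter block.
const note = $input.first().json.content ?? '';

const match = note.match(/^---\n([\s\S]*?)\n---/);
const frontmatter = {};
if (match) {
  for (const line of match[1].split('\n')) {
    const idx = line.indexOf(':');
    if (idx > 0) {
      frontmatter[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
    }
  }
}

// Everything after the frontmatter becomes the email body.
const body = match ? note.slice(match[0].length).trim() : note;
return [{ json: { to: frontmatter.to, subject: frontmatter.subject, body } }];
```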
by Priya Jain
This workflow provides an OAuth 2.0 auth-token refresh process for better control. Developers can use it as an alternative to n8n's built-in OAuth flow to achieve improved control and visibility. In this template I've used the Pipedrive API, but you can apply it to any app that requires the authorization_code grant for token access. This resolves the issue of manually refreshing the OAuth 2.0 token when it expires, or when n8n's native OAuth stops working.

What you need to replicate this
1. Your database with a pre-existing table for storing authentication tokens and associated information. I'm using Supabase in this example, but you can also employ a self-hosted MySQL. Here's a quick video on setting up the Supabase table.
2. A client app for your chosen application that you want to access via the API.
3. After duplicating the template:
   a. Add credentials to your database and connect the DB nodes in all 3 workflows.
4. Enable/publish the first workflow, "1. Generate and Save Pipedrive tokens to Database."
5. Open your client app and follow the Pipedrive instructions to authenticate.
6. Click on Install and test. This will save your initial refresh token and access token to the database.

Please watch the YouTube video for a detailed demonstration of the workflow.

How it operates
1. Workflow 1 captures the authorization_code, generates the access_token and refresh_token, and saves the tokens to the database.
2. Workflow 2 is your primary workflow that fetches or posts data to/from your application. Note the logic that includes an if condition for when an error occurs with an invalid token; this triggers the third workflow to refresh the token.
3. Workflow 3 handles the token refresh (sketched below). Remember to send the unique ID to the webhook to fetch the necessary tokens from your table.

Detailed demonstration of the workflow: https://youtu.be/6nXi_yverss
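A hedged sketch of the refresh exchange in Workflow 3, using the standard OAuth 2.0 refresh_token grant. Verify the Pipedrive token endpoint and authentication style against their docs before relying on this; other apps will use their own token URLs.

```typescript
// n8n Code node sketch: exchange a stored refresh token for a new
// access token. Credentials are assumed to arrive from the DB lookup.
const { refreshToken, clientId, clientSecret } = $input.first().json;

const response = await this.helpers.httpRequest({
  method: 'POST',
  url: 'https://oauth.pipedrive.com/oauth/token', // assumed endpoint; check the docs
  headers: {
    Authorization:
      'Basic ' + Buffer.from(`${clientId}:${clientSecret}`).toString('base64'),
    'Content-Type': 'application/x-www-form-urlencoded',
  },
  body: new URLSearchParams({
    grant_type: 'refresh_token',
    refresh_token: refreshToken,
  }).toString(),
});

// Persist these back to your token table so Workflow 2 can retry.
return [{ json: { accessToken: response.access_token, refreshToken: response.refresh_token } }];
```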