by Billy Christi
## Who is this for?

This workflow is ideal for:

- **Finance teams** that need to process incoming invoices faster with minimal errors
- **Small to mid-sized businesses** that want to automate invoice intake, review, and storage
- **Operations managers** who require approval workflows and centralized record-keeping

## What problem is this workflow solving?

Manually processing invoices is time-consuming, error-prone, and often lacks structure. This workflow solves those challenges by:

- **Automating the intake of invoices** from multiple sources (email, Google Drive, web form)
- **Extracting invoice data using AI**, eliminating manual data entry
- **Implementing an email-based approval system** to add human oversight
- **Automatically storing approved invoice data** in Google Sheets for easy access and reporting
- **Notifying stakeholders** when invoices are approved or rejected

## What this workflow does

This end-to-end invoice processing workflow includes:

- Three invoice input methods: Google Drive folder monitor, Gmail attachments, and web form uploads
- PDF-to-text extraction for each input method using native PDF parsing
- AI-powered invoice analysis with GPT-4 to extract structured fields such as vendor, total, and due date
- Dynamic categorization of invoice type (e.g., Travel, Software, Utilities) via AI
- Email-based approval workflow with embedded forms to collect decisions and notes
- Automated Google Sheets logging of all invoice data, approval status, and reviewer feedback
- Rejection notifications sent automatically to your finance team for transparency and follow-up

## Setup

1. Copy the Google Sheet template here: 👉 PDF Invoice Parser with Approval Workflow – Google Sheet Template
2. Connect your Google Drive account and specify the invoice folder ID
3. Set up Gmail to monitor incoming invoices with PDF attachments
4. Enable your form trigger to accept direct uploads from your internal or external users
5. Enter your OpenAI API key in the AI processing node for data extraction
6. Configure Google Sheets with a target spreadsheet to store invoice data
7. Set recipient email addresses for invoice approvals and rejection notifications
8. Test with a sample invoice to ensure the end-to-end flow is working

## How to customize this workflow to your needs

- **Change input sources**: Replace Gmail with Outlook or use Slack uploads instead
- **Add validation steps**: Include regex or keyword checks before AI analysis
- **Customize the AI schema**: Modify the expected JSON structure based on your internal finance system (see the sketch below)
- **Integrate with accounting tools**: Add Xero, QuickBooks, or custom API nodes to push data
- **Route based on category**: Add conditional logic to handle invoices differently based on vendor or category
- **Multi-level approvals**: Add additional email steps if higher-level signoff is needed
- **Audit logging**: Use a database or Google Sheets to maintain a historical log of approvals and rejections
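For reference when adjusting the AI schema, here is a minimal sketch of the kind of structured output the extraction step can return. The field names and sample values are illustrative assumptions, not the template's fixed contract:

```json
{
  "vendor": "Acme Office Supplies",
  "invoice_number": "INV-2025-0042",
  "invoice_date": "2025-05-01",
  "due_date": "2025-05-31",
  "total": 412.50,
  "currency": "USD",
  "category": "Software"
}
```

Keeping the schema flat like this makes it straightforward to map each field to a Google Sheets column without extra transformation steps.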
by InfraNodus
This template can be used to upload the files in your Google Drive to an InfraNodus knowledge graph. The InfraNodus graph will then reveal the main topics and ideas in your collection of documents and show the content gaps in them. You can also use the built-in AI to converse with the documents. You can also access the InfraNodus graphs via its GraphRAG API to re-use them in your other n8n workflows for high-quality content retrieval and knowledge base optimization.

The template showcases the use of multiple n8n nodes and processes:

- Extracting documents from a Google Drive folder
- Text extraction from files
- Optional: high-quality PDF conversion using ConvertAPI
- InfraNodus knowledge graph generation

Note: if you want to **sync** your Google Drive to an InfraNodus graph, check out our other workflow.

## How it works

Here's a description of this workflow step by step:

1. Find all the files in a specific Google Drive folder.
2. For each file found, identify its type (TXT, PDF, Markdown).
3. For TXT and Markdown files, extract the text data directly.
4. For PDF files, use a PDF-to-text converter to extract the text data (optional: use ConvertAPI for better-quality PDF conversion).
5. Forward everything to the InfraNodus `graphAndStatements` API endpoint with the name of the new graph, the `text` field with the text data, the text settings, and `doNotSave=false` to create a new graph (a sketch of this request appears below).
6. Iterate to the next file.

## How to use

You need an InfraNodus GraphRAG API account and key to use this workflow:

1. Create an InfraNodus account.
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes.
3. Use that API key to set up authorization for the InfraNodus tool in the workflow.

If you want to upload the files to an existing graph, copy its name from InfraNodus. Otherwise, you can specify any name you want.

## Requirements

- An InfraNodus account and API key
- A Google Drive account and authorization (you will need to set it up via Google Cloud using the n8n instructions provided in the Google Drive node)

## Customizing this workflow

You can use Dropbox instead of Google Drive. You can also modify this workflow slightly to make it sync with a Google Drive folder when new files appear in it. Check out the complete guide at https://support.noduslabs.com/hc/en-us/articles/20267019838108-Upload-Sync-Your-Google-Drive-Folder-with-InfraNodus-using-n8n
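As a rough sketch, the body sent to the `graphAndStatements` endpoint looks like the following (check the exact endpoint URL and any extra fields against the InfraNodus API documentation):

```json
{
  "name": "my-drive-documents",
  "text": "Full extracted text of the current file...",
  "doNotSave": false
}
```

The request is authorized with the Bearer key created at https://infranodus.com/api-access; sending the same `name` for every file accumulates all documents into a single graph.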
by Ranjan Dailata
Notice: Community nodes can only be installed on self-hosted instances of n8n.

## Who this is for

The Brave Search Structured Data Extractor workflow is designed for professionals and teams that need high-quality, structured insights from Brave search results in real time. Whether you're performing market research, tracking competitors, training AI models, or powering content engines, this workflow offers a robust and automated solution.

This workflow is tailored for:

- **Market researchers** who analyze trends across multimedia channels
- **AI developers** who require clean, structured datasets for model fine-tuning
- **SEO & content analysts** looking to monitor visibility across news, images, and videos
- **Media researchers** curating timely and relevant information across formats
- **Automation engineers** integrating search insights into downstream workflows

## What problem is this workflow solving?

Traditional web scraping and search result parsing is fragmented, inconsistent, and prone to errors, especially when dealing with multimedia (images, videos, news) data from search engines. This workflow provides:

- Centralized Brave search data extraction across all content types
- Search execution that switches based on the search type that is set (e.g., news, images, videos, all)
- Automated structured data transformation using Google Gemini
- Unified output persistence and notification across disk, webhook, and Google Sheets

## What this workflow does

**Input Configuration**

- Define your Brave search query
- Set the search type: videos, images, news, or all
- Configure your Bright Data MCP zone

**Bright Data MCP Search Execution**

- Initiates a Brave search via Bright Data MCP using the correct URL pattern for each search type
- Returns the raw HTML of the search results

**Google Gemini LLM Structured Data Extraction**

- Transforms raw results into structured data (e.g., title, URL, source, snippet)

**Output Handling**

- Save to disk (e.g., a JSON or CSV file)
- Send a webhook notification with the structured data (e.g., to Slack or internal dashboards)
- Store in Google Sheets for team-wide access or dashboarding

## Pre-conditions

- Knowledge of the Model Context Protocol (MCP) is essential. Please read this blog post: model-context-protocol
- You need a Bright Data account and the setup described in the Setup section below
- You need a Google Gemini API key. Visit Google AI Studio
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install the n8n-nodes-mcp community node

## Setup

1. Set up n8n locally with MCP servers by navigating to n8n-nodes-mcp.
2. Install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Create a Web Unlocker proxy zone called `mcp_unlocker` in the Bright Data control panel: navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
6. In n8n, configure the credentials for the MCP Client (STDIO) account to launch the Bright Data MCP Server, and copy the Bright Data API_TOKEN into the Environments textbox as `API_TOKEN=<your-token>` (sketched at the end of this description).

## How to customize this workflow to your needs

- **Enhance output analysis**: Add additional LLM prompts for topic classification, sentiment scoring, or trend forecasting.
- **Output format options**: Choose to output CSV, Markdown, or HTML reports based on your integration target.
- **Schedule automation**: Trigger the workflow on a schedule (daily/weekly) to keep monitoring topical content.
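As referenced in the Setup section, the MCP Client (STDIO) credential typically ends up looking like this sketch (a rough outline under the assumption that field labels match the n8n-nodes-mcp version you installed):

```
Command:      npx
Arguments:    -y @brightdata/mcp
Environments: API_TOKEN=<your-bright-data-api-token>
```

With the credential in place, the MCP client node can invoke the Bright Data search tools, and the raw HTML they return flows into the Gemini extraction step.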
by InfraNodus
This template can be used to sync the files in your Google Drive to a new or existing InfraNodus knowledge graph. The InfraNodus graph will then reveal the main topics and ideas in your collection of documents and show the content gaps in them. You can also use the built-in AI to converse with the documents. You can also access the InfraNodus graphs via its GraphRAG API to re-use them in your other n8n workflows for high-quality content retrieval and knowledge base optimization.

The template showcases the use of multiple n8n nodes and processes:

- Syncing documents from a Google Drive folder
- Text extraction from files
- Optional: high-quality PDF conversion using ConvertAPI
- InfraNodus knowledge graph generation

Note: if you want to **upload** files from your Google Drive to an InfraNodus graph, check out our other workflow.

## How it works

Here's a description of this workflow step by step:

1. Wait for new file(s) to appear in the Google Drive folder.
2. Iterate through each new file and retrieve it from Google Drive.
3. For each file, identify its type (TXT, PDF, Markdown) (see the routing sketch at the end of this description).
4. For TXT and Markdown files, extract the text data directly.
5. For PDF files, use a PDF-to-text converter to extract the text data (optional: use ConvertAPI for better-quality PDF conversion).
6. Forward everything to the InfraNodus `graphAndStatements` API endpoint with the name of the graph, the `text` field with the text data, the text settings, and `doNotSave=false` to add the data to the graph.
7. Iterate to the next file.

## How to use

You need an InfraNodus GraphRAG API account and key to use this workflow:

1. Create an InfraNodus account.
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes.
3. Use that API key to set up authorization for the InfraNodus tool in the workflow.

If you want to upload the files to an existing graph, copy its name from InfraNodus. Otherwise, you can specify any name you want.

## Requirements

- An InfraNodus account and API key
- A Google Drive account and authorization (you will need to set it up via Google Cloud using the n8n instructions provided in the Google Drive node)

## Customizing this workflow

You can use Dropbox instead of Google Drive. You can also modify this workflow slightly to make it upload the existing files from a Google Drive folder rather than waiting for new ones to appear. Check out the complete guide at https://support.noduslabs.com/hc/en-us/articles/20267019838108-Upload-Sync-Your-Google-Drive-Folder-with-InfraNodus-using-n8n
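For reference, here is a minimal sketch of the file-type routing from step 3 as an n8n Code node. In the template this may be a Switch/If node instead, and the `name` field is an assumption about the Google Drive item shape:

```javascript
// Route each file to the right text-extraction path by extension
// (illustrative; the template may use a Switch node instead of code).
return items.map(item => {
  const ext = (item.json.name || '').split('.').pop().toLowerCase();
  let route;
  if (['txt', 'md', 'markdown'].includes(ext)) route = 'extract-text'; // read contents directly
  else if (ext === 'pdf') route = 'pdf-to-text'; // send through the PDF converter / ConvertAPI
  else route = 'skip'; // unsupported file type
  return { json: { ...item.json, route } };
});
```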
by Nasser
## For Who?

- Content creators
- YouTube automation channels
- Marketing teams

## How it works

1. Enter your content idea in the Edit Fields node in a "raw" format, e.g., "Boil Eggs Perfectly".
2. An LLM creates 3 keyword queries based on the idea, and Apify scrapes the YouTube search results for them.
3. Wait until the dataset is completed in Apify.
4. Retrieve the dataset from Apify, calculate an approximation of CTR, and filter the top-performing videos.
5. An LLM analyzes patterns in the best-performing titles and creates a prompt based on them. Another LLM creates 5 titles based on these criteria.
6. An LLM analyzes patterns in the best-performing thumbnails and creates a prompt based on them. Another LLM creates 1 thumbnail based on these criteria.
7. Return the titles and thumbnail in an HTML page.

📺 YouTube Video Tutorial

## Setup

**Input (content idea):**

- Enter a keyword related to the niche you want. The trigger can be replaced with anything, as long as you retrieve a content idea. For example: a form submission, a database entry, etc.
- If you want to change the number of keywords, update the data accordingly in the "Create Keywords" LLM Chain node ➡️ Structured Output Parser AND in the "YTB Search Scrape" HTTP Request node in Body ➡️ JSON ➡️ searchQueries.
- If you want to change the number of scraped videos for each keyword, update the data accordingly in the "Create Videos Dataset" HTTP Request node in Body ➡️ JSON ➡️ maxResults.
- If you want to adjust the CTR calculation, update it in the Code node ➡️ follow the comments (after "//") to find what you're looking for (one possible formula is sketched at the end of this description).
- If you want to adjust the level of virality of the videos kept for analysis, go to the Filter node ➡️ Value.

**Output (HTML page):**

You can also replace this part with any type of storage. For example: an Airtable database, Google Drive/Google Sheets, sending to an email, etc.

**APIs:**

For the following third-party integrations, replace ==[YOUR_API_TOKEN]== with your API token, or connect your account via Client ID / Secret to your n8n instance:

- Apify: https://docs.apify.com/api/v2/getting-started
- OpenAI: https://platform.openai.com/docs/overview (base URL: https://api.openai.com/v1) OR OpenRouter: https://openrouter.ai/docs/quickstart (base URL: https://openrouter.ai/api/v1)
- HuggingFace (FLUX.1): https://huggingface.co/docs

👨‍💻 More Workflows: https://n8n.io/creators/nasser/
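Since scraping does not expose true impression data, the CTR figure can only be a proxy. Here is a minimal sketch of one plausible approximation as an n8n Code node, assuming the Apify dataset exposes view counts, subscriber counts, and an upload date (the field names and formula are assumptions; the template's own Code node defines the real calculation):

```javascript
// Approximate per-video performance: views per subscriber, damped by video age,
// so newer videos that outperform their channel size score highest.
return items.map(item => {
  const v = item.json;
  const views = Number(v.viewCount) || 0;
  const subs = Math.max(1, Number(v.numberOfSubscribers) || 1); // avoid division by zero
  const uploaded = new Date(v.date).getTime();
  const ageDays = Number.isFinite(uploaded)
    ? Math.max(1, (Date.now() - uploaded) / 86400000) // 86400000 ms per day
    : 1;
  const ctrApprox = (views / subs) / Math.log2(ageDays + 1);
  return { json: { ...v, ctrApprox } };
});
```

Downstream, the Filter node keeps only items whose score clears your virality threshold.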
by Dvir Sharon
🛒 Monitor Google Shopping Prices with Bright Data & Email Alerts

This template requires a self-hosted n8n instance to run.

A comprehensive n8n automation that monitors product prices daily using Bright Data's Google Shopping dataset and sends smart email alerts when price conditions are met.

## 📋 Overview

This workflow provides an automated price monitoring solution that tracks product prices from Google Shopping daily and sends intelligent email notifications. Perfect for e-commerce monitoring, competitor analysis, deal hunting, and inventory management.

## ✨ Key Features

- 🕘 **Scheduled Monitoring**: Daily automated price checks at 9 AM
- 🛍️ **Google Shopping Integration**: Uses Bright Data's dataset for accurate pricing
- 📊 **Smart Price Comparison**: Compares current prices with historical data
- 📧 **Intelligent Alerts**: Sends emails only when prices meet criteria
- 📈 **Data Storage**: Updates Google Sheets with the latest pricing data
- 🔄 **Batch Processing**: Handles multiple products with rate limiting
- ⚡ **Fast & Reliable**: Built-in error handling
- 🎯 **Customizable Filters**: Advanced price comparison logic

## 🎯 What This Workflow Does

1. **Schedule Trigger**: Runs daily at 9 AM
2. **Data Retrieval**: Fetches the product list from Google Sheets
3. **Price Extraction**: Scrapes current prices using Bright Data
4. **Data Update**: Updates Google Sheets with new prices
5. **Price Comparison**: Compares new vs. old prices
6. **Smart Filtering**: Filters products that meet the alert criteria
7. **Email Notifications**: Sends alerts for qualifying changes
8. **Rate Limiting**: Adds a delay between emails

### Output Data Points

| Field | Description | Example |
| :--- | :--- | :--- |
| Product URL | Original Google Shopping URL | https://shopping.google.com/product/... |
| Product Name | Product title | iPhone 15 Pro Max 256GB |
| Ratings | Product rating score | 4.5 |
| Reviews | Number of reviews | 1,247 |
| Old Price | Previous price | $1,199.00 |
| New Price | Current scraped price | $1,199.00 |
| Timestamp | When the check occurred | 2025-05-30T09:00:00Z |

## 🚀 Setup Instructions

**Prerequisites**

- n8n instance (self-hosted or cloud)
- Google account with Sheets access
- Bright Data account with Google Shopping dataset access
- Gmail account for notifications

**Steps**

1. Import the workflow JSON into n8n
2. Configure Bright Data credentials and dataset access
3. Set up Google Sheets with the required columns
4. Configure Gmail OAuth2 credentials
5. Update sheet IDs and schedule settings
6. Test with sample products and activate

## 📖 Usage Guide

### Google Sheet Structure

Your Google Sheet should have the following columns to ensure the workflow functions correctly:

- **Product URL** (Text): The direct URL to the Google Shopping product page. This is the primary identifier for the product.
- **Product Name** (Text): The name of the product. This will be automatically populated or updated by the workflow.
- **Old Price** (Number/Currency): The price of the product from the previous check. This column is crucial for price comparison.
- **New Price** (Number/Currency): The most recently scraped price of the product.
- **Ratings** (Number): The star rating of the product.
- **Reviews** (Number): The total number of reviews for the product.
- **Timestamp** (Datetime): The date and time when the price check was performed.

### Adding Products

Add Google Shopping URLs to your Google Sheet. The workflow will fetch product details and track prices. Historical price data builds over time.

### Understanding Price Alerts

The default setting for this workflow is to send an email alert when the new price equals the old price. This might seem counterintuitive, but it's useful for specific scenarios, such as:

- **Monitoring stable pricing**: If you are tracking a product and want to be notified when its price has remained consistent over time, indicating a potential stable buying opportunity or a benchmark.
- **Verifying data consistency**: To confirm that the scraping process is working correctly and consistently retrieving the same price when no changes are expected.

You can easily customize the alert logic to trigger on different conditions as described below.

### Customizing Alert Logic

- **Price drops**: `new_price < old_price`
- **Significant drops**: `new_price < (old_price * 0.9)` (e.g., price dropped by more than 10%)
- **Price increases**: `new_price > old_price`
- **Any change**: `new_price != old_price`

(A runnable sketch of the "significant drop" filter appears at the end of this template.)

### Reading the Results

- Real-time pricing data
- Historical tracking
- Product metadata
- Timestamps for each check

## 🔧 Customization Options

- **Add more data**: Descriptions, availability, seller info, shipping, images
- **Modify email templates**: Customize subject and body
- **Multiple recipients**: Duplicate the email node and change recipients
- **Webhook integration**: Add real-time triggers or Slack alerts

## 🚨 Troubleshooting

- **Bright Data connection failed**: Check API credentials and dataset access
- **No price data extracted**: Verify URLs and test with different products
- **Google Sheets permission denied**: Re-authenticate and check sharing
- **Emails not sending**: Re-authenticate Gmail OAuth and verify recipients
- **Filter not working**: Check price formats and logic
- **Workflow failed**: Check logs, retry logic, and network status

## 📊 Use Cases & Examples

- **E-commerce monitoring**: Track competitor pricing and trends
- **Deal hunting**: Get alerts for price drops on wishlist items
- **Inventory management**: Monitor supplier pricing for procurement
- **Market research**: Analyze pricing trends and generate reports

## ⚙️ Advanced Configuration

- **Batch processing**: Increase batch size, add delays, use parallel processing
- **Price history**: Store historical data, calculate averages, forecast trends
- **Tool integration**: CRM, Slack, databases, BI tools (Tableau, Power BI)

## 📈 Performance & Limits

- **Single URL**: 2–5 seconds
- **Concurrent requests**: 3–5 (depends on Bright Data plan)
- **Data accuracy**: 95%+
- **Success rate**: 90%+
- **Daily capacity**: 100–500 products
- **Memory**: ~100 MB per execution
- **API calls**: 1 Bright Data + 2 Google Sheets per product

## 🤝 Support & Community

- **n8n Forum**: https://community.n8n.io
- **Documentation**: https://docs.n8n.io
- **Bright Data support**: Via your Bright Data dashboard
- **GitHub issues**: Report bugs and request features

## 🎯 Ready to Use!

Your workflow provides a solid foundation for automated price monitoring. Customize it to fit your specific needs and use cases for maximum effectiveness in tracking Google Shopping prices with intelligent email notifications.

Please note that this template uses Community Nodes. Ensure you understand the risks before using community nodes.
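To make the alert-logic customization above concrete, here is a minimal sketch of the "significant drop" variant as an n8n Code node. It assumes the sheet columns are named `Old Price` and `New Price` as in the structure above; the currency-stripping regex is illustrative:

```javascript
// Keep only products whose new price is more than 10% below the old price.
const toNumber = value => parseFloat(String(value).replace(/[^0-9.]/g, '')) || 0;

return items.filter(item => {
  const oldPrice = toNumber(item.json['Old Price']);
  const newPrice = toNumber(item.json['New Price']);
  return oldPrice > 0 && newPrice < oldPrice * 0.9;
});
```

Swapping the final comparison for `newPrice !== oldPrice` or `newPrice > oldPrice` gives the other alert conditions listed above.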
by Samir Saci
Tags: Scraping, Events, European Union, Networking

## Context

Hey! I'm Samir, a Supply Chain Engineer and Data Scientist from Paris, and the founder of LogiGreen Consulting. We use AI, automation, and data to support sustainable and data-driven operations across all types of organizations.

This workflow is part of our networking strategy (as a business) to track official EU events that may relate to topics we cover.

> Want to stay ahead of critical EU meetings and events without checking the website every day? This n8n workflow automatically scrapes the EU's official event portal and logs the latest entries with clean metadata including date, location, category, and link.

📬 For collaborations, feel free to connect with me on LinkedIn

## Who is this template for?

This workflow is useful for:

- **Policy & public affairs teams** following institutional activities
- **Sustainability teams** watching for relevant climate-related summits
- **NGOs and researchers** interested in event calendars
- **Data teams** building dashboards on public event trends

## What does it do?

This n8n workflow:

- 🌐 Scrapes the EU events portal for new meetings and conferences
- 📅 Extracts event metadata (title, date, location, type, and link)
- 🔁 Handles pagination across multiple pages
- 🚫 Checks for duplicates already stored
- 📊 Saves new records into a connected Google Sheet

## How it works

1. Triggered daily via cron
2. An HTTP node loads the event listing HTML
3. Extracts the HTML block for each event article
4. Parses the event name, link, type, location, and full date
5. Concatenates and cleans dates for easy tracking
6. Stores non-duplicate entries in Google Sheets

The workflow uses static data to track pagination and ensure only new events are stored, making it ideal for building up a clean dataset over time (a sketch of this mechanism appears at the end of this description).

## What do I need to get started?

You'll need:

- A Google Sheet connected to your n8n instance
- No code or AI tools needed — just n8n and this template

## Follow the Guide!

Sticky notes are included directly inside the workflow to guide you step-by-step through setup and customisation.

🎥 Watch My Tutorial

## Notes

- This is ideal for analysts and consultants who want clean, structured data from the EU portal
- You can add filtering, email alerts, or AI classifiers later
- This workflow was built using n8n version 1.93.0

Submitted: June 1, 2025
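For readers curious how static-data deduplication can work, here is a minimal sketch using n8n's `$getWorkflowStaticData()` inside a Code node. The `link` field and the key name are assumptions, and note that static data only persists when the workflow runs as an activated (production) workflow, not in manual test runs:

```javascript
// Remember which event links were already stored, and pass through only new ones.
const staticData = $getWorkflowStaticData('global');
staticData.seenLinks = staticData.seenLinks || [];

const fresh = items.filter(item => !staticData.seenLinks.includes(item.json.link));
fresh.forEach(item => staticData.seenLinks.push(item.json.link));

return fresh;
```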
by Yaron Been
## Workflow Overview

This sophisticated n8n automation is a powerful lead generation and outreach tool designed to transform YouTube channel research into actionable marketing opportunities. By intelligently connecting multiple services and APIs, this workflow:

1. **Discovers Targeted Channels**: Scrapes YouTube channels based on specific keywords, extracts comprehensive channel metadata, and identifies potential business opportunities
2. **Qualifies Leads Intelligently**: Filters channels with contact emails, validates email authenticity, and ensures high-quality lead generation
3. **Personalizes Outreach**: Sends customized cold emails, leverages channel-specific personalization, and automates the initial contact process

## Key Benefits

- 🕵️ **Automated Lead Discovery**: Find potential collaborators or clients
- 🧠 **Smart Filtering**: Eliminate invalid or irrelevant leads
- 📧 **Personalized Outreach**: Contextual, channel-specific communication
- ⏱️ **Time-Saving**: Eliminate manual research and email hunting

## Workflow Architecture

### 🔍 Stage 1: Channel Scraping

- **Apify Integration**: Scrapes YouTube channels
- **Keyword-Based Search**: Target specific niches
- **Metadata Extraction**: Collect channel details and emails

### 🧩 Stage 2: Lead Qualification

- **Email Existence Check**: Filter channels with contact info
- **ZeroBounce Verification**: Validate email authenticity
- **Quality Control**: Ensure only valid leads proceed (sketched at the end of this description)

### 📬 Stage 3: Personalized Outreach

- **Gmail Integration**: Send customized cold emails
- **Dynamic Personalization**: Use channel-specific details
- **Automated Communication**: Streamline initial contact

## Potential Use Cases

- **Marketing agencies**: Find potential clients
- **Influencer marketers**: Discover collaboration opportunities
- **Content creators**: Network and expand professional connections
- **Sales teams**: Generate targeted lead lists
- **Recruitment specialists**: Identify industry professionals

## Setup Requirements

- **Apify account**: API token, YouTube Scraper Actor, configured search keywords
- **ZeroBounce account**: Email verification API, validation credits
- **Gmail account**: OAuth2 authentication, configured sending profile
- **n8n installation**: Cloud or self-hosted instance; import the workflow configuration and configure API credentials

## Future Enhancement Suggestions

- 🤖 AI-powered email personalization
- 📊 Advanced lead scoring mechanisms
- 🔄 Automated follow-up sequences
- 📈 Integration with CRM platforms
- 🌐 Multi-platform lead generation

## Ethical Considerations

- Respect email communication guidelines
- Comply with anti-spam regulations
- Provide clear opt-out mechanisms
- Maintain professional, value-driven outreach

## Connect With Me

Ready to supercharge your lead generation?

- 📧 Email: Yaron@nofluff.online
- 🎥 YouTube: @YaronBeen
- 💼 LinkedIn: Yaron Been

Transform your outreach strategy with intelligent, automated workflows!
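As a rough sketch, Stage 2 boils down to two checks: the scraped channel has an email at all, and the verification service reports it deliverable. In n8n Code-node terms (field names like `email` and `zeroBounceStatus` are assumptions about how the upstream nodes shape the data):

```javascript
// Keep only leads that have an email address AND passed verification.
return items.filter(item => {
  const email = item.json.email;
  const verdict = item.json.zeroBounceStatus; // e.g., merged in from the verification call
  return Boolean(email) && verdict === 'valid';
});
```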
by Miha
Combine Tech News in a Personalized Weekly Newsletter

This n8n template automates the collection, storage, and summarization of technology news from top sites, turning it into a concise, personalized weekly newsletter. If you like staying informed but want to reduce daily distractions, this workflow is perfect for you. It leverages RSS feeds, vector databases, and LLMs to read and curate tech content on your behalf—so you only receive what truly matters.

## How it works

1. A daily scheduled trigger fetches articles from multiple popular tech RSS feeds like Wired, TechCrunch, and The Verge.
2. Fetched articles are normalized to extract titles, summaries, and publish dates (a normalization sketch appears at the end of this description), then converted to vector embeddings via OpenAI and stored in memory for fast semantic querying.
3. A weekly scheduled trigger activates the AI summarization flow:
   - The AI is provided with your interests (e.g., AI, games, gadgets) and the desired number of items (e.g., 15).
   - It queries the vector store to retrieve relevant articles and summarizes the most newsworthy stories.
4. The summary is converted into a clean, email-friendly format and sent to your inbox.

## How to use

1. Connect your OpenAI and Gmail accounts to n8n.
2. Customize the list of RSS feeds in the "Set Tech News RSS Feeds" node.
3. Update your interests and the number of desired news items in the "Your Topics of Interest" node.
4. Activate the workflow and let the automation run on schedule.

## Requirements

- **OpenAI** credentials for embeddings and summarization
- **Gmail** (or another email service) for sending the newsletter

## Customizing this workflow

- Want to use different sources? Swap in your own RSS feeds, or use an API-based news aggregator.
- Replace the in-memory vector store with Pinecone, Weaviate, or another persistent vector DB for longer-term storage.
- Adjust the agent's summarization style to suit internal updates, industry-specific briefings, or even entertainment recaps.
- Prefer chat over email? Replace the email node with a Telegram bot to receive your personalized tech newsletter directly in a Telegram chat.
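A minimal sketch of the normalization step as an n8n Code node, assuming the RSS Read node's usual output fields (`title`, `contentSnippet`, `content`, `isoDate`, `pubDate`, `link`; exact fields vary by feed):

```javascript
// Reduce each RSS item to the fields the embedding step needs.
return items.map(item => {
  const j = item.json;
  return {
    json: {
      title: j.title || '',
      summary: j.contentSnippet || j.content || '',
      publishedAt: j.isoDate || j.pubDate || null,
      link: j.link || '',
    },
  };
});
```

Normalizing up front means every feed, however messy, produces the same document shape for the vector store.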
by Ranjan Dailata
## Who this is for

The LinkedIn Profile Extract and JSON Resume Builder is a powerful workflow that scrapes professional profile data from LinkedIn using Bright Data's infrastructure, then transforms that data into a clean, structured JSON resume using Google Gemini. The workflow is ideal for automating resume parsing, candidate profiling, or integrating into recruiting platforms.

This workflow is tailored for:

- **HR professionals & recruiters** automating resume screening
- **Talent acquisition platforms** enriching candidate profiles
- **Developers & AI builders** creating resume-parsing AI pipelines
- **Data scientists** working on labor market analytics
- **Growth hackers** profiling prospects via public data

## What problem is this workflow solving?

Parsing resumes or LinkedIn profiles into machine-readable formats is often a manual, error-prone process. Most scraping tools either fail due to anti-bot protections or return unstructured HTML that's hard to work with. This workflow solves that by:

- Using Bright Data's Web Unlocker for reliable, CAPTCHA-free LinkedIn scraping
- Extracting clean text and structured profile data via the Google Gemini LLM
- Automatically generating a standards-compliant JSON resume and skills list
- Sending the resume to webhooks or storing it for downstream usage

## What this workflow does

1. Accepts a LinkedIn profile URL and required metadata (Bright Data zone, webhook)
2. Scrapes the LinkedIn profile using Bright Data Web Unlocker
3. Extracts clean content and skills using the Google Gemini LLM
4. Builds a JSON-formatted resume following the JSON Resume schema (a minimal example appears at the end of this description)
5. Sends the JSON resume via webhook notification
6. Persists the output by saving the file to disk

## Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to `Bearer XXXXXXXXXXXXXX`, where XXXXXXXXXXXXXX is replaced by your Web Unlocker token.
4. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the "Set URL and Bright Data Zone" node with the LinkedIn profile, the Bright Data zone, and the webhook notification URL. For testing purposes, you can obtain a webhook URL using https://webhook.site/

## How to customize this workflow to your needs

- **Add language translation**: Insert a translation LLM node to support multilingual profiles.
- **Generate PDF resumes**: Convert the JSON to formatted PDF resumes using an HTML-to-PDF module.
- **Push to ATS or CRM**: Add integration nodes to pipe data into applicant tracking systems (ATS), CRMs, or databases.
- **Use alternative LLMs**: Swap Gemini for OpenAI or Anthropic Claude if preferred.
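For orientation, here is a minimal instance of the JSON Resume schema (https://jsonresume.org/schema) that the workflow targets; the sample values are invented:

```json
{
  "basics": {
    "name": "Jane Doe",
    "label": "Data Engineer",
    "summary": "Builds data pipelines and analytics platforms."
  },
  "work": [
    {
      "name": "Acme Corp",
      "position": "Senior Data Engineer",
      "startDate": "2021-03",
      "summary": "Led the migration to a streaming data platform."
    }
  ],
  "skills": [
    { "name": "Data Engineering", "keywords": ["Python", "SQL", "Airflow"] }
  ]
}
```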
by Sobek
📝 DESCRIPTION OF THE WORKFLOW

This workflow connects Salesforce and Geotab to streamline fleet tracking for field service jobs (Work Orders). When a new Work Order is created in Salesforce (with a 'New' status and valid coordinates), it creates a circular geofence zone in Geotab and updates the Work Order with the zone ID (a sketch of the underlying Geotab call appears at the end of this description). If geolocation is missing, an alert email is sent to a dedicated address.

The workflow uses a Salesforce Outbound Message to trigger an n8n webhook. It includes robust credential handling and optional logic to skip or notify on bad data.

Use Cases:

- Automating vehicle geofence setup for service visits
- Enhancing CRM-to-fleet system synchronisation
- Enforcing work order data quality via alerts

Integrations Used:

- Salesforce
- Geotab API
- Microsoft Outlook (or any SMTP-compatible service)

Tags: geotab, salesforce, fleet management, gps tracking, field service, crm, automation, webhook, integration

ADDITIONAL RESOURCES

🔗 Salesforce

- Salesforce Login
- Salesforce Setup (Admin Console): https://login.salesforce.com/ → click the "Setup" gear icon
- Outbound Messages Documentation
- Salesforce Developer Documentation
- Salesforce Workbench (API Testing Tool)

🔗 Geotab

- Geotab Login (MyGeotab)
- Geotab Developer Portal
- Geotab API Explorer
- Geotab SDK (JavaScript Samples)
- Geotab Support Centre
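For a feel of the Geotab side, the zone-creation step amounts to a MyGeotab API `Add` call for a `Zone` entity, roughly like the sketch below. Field names follow my reading of the Geotab SDK; the circular geofence is approximated by polygon vertices around the Work Order's coordinates (`x` is longitude, `y` is latitude), and all values shown are placeholders. Check the Geotab API Explorer listed above for the exact entity shape:

```json
{
  "method": "Add",
  "params": {
    "typeName": "Zone",
    "entity": {
      "name": "WO-00012345",
      "points": [
        { "x": -0.1290, "y": 51.5074 },
        { "x": -0.1278, "y": 51.5082 },
        { "x": -0.1266, "y": 51.5074 },
        { "x": -0.1278, "y": 51.5066 }
      ],
      "zoneTypes": [{ "id": "ZoneTypeCustomerId" }]
    },
    "credentials": {
      "database": "<your-database>",
      "userName": "<api-user>",
      "sessionId": "<session-id>"
    }
  }
}
```

The zone `id` that Geotab returns is what the workflow writes back onto the Salesforce Work Order.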
by Sergey Skorobogatov
Accept YooKassa payments and log transactions in Google Sheets

## 🧾 Summary

This workflow allows you to accept online payments via YooKassa and log both orders and transactions in Google Sheets — all without writing a single line of code. It supports the full payment flow: product selection, payment initiation, webhook processing, refund updates, and payment status checks.

## 👥 Who is this for?

This template is ideal for:

- Online stores with simple checkout flows
- Sellers of digital products or info-courses
- Entrepreneurs using Telegram bots or web forms
- Anyone needing quick payment integration with Google Sheets tracking

## 🎯 What problem does this workflow solve?

Setting up online payments usually requires backend infrastructure. This no-code solution automates the entire payment flow:

- Handles product listing and price retrieval
- Initiates payments with email and return URL
- Listens for payment.succeeded and refund.succeeded events
- Records every action into structured Google Sheets

## ⚙️ What this workflow does

### 1. GET /products

Returns a sorted list of products from a Google Sheet (products).

### 2. POST /payment

- Validates required fields (product_id, email, return_url)
- Checks the email format
- Fetches product data from products
- Generates a unique idempotence key
- Sends a request to the YooKassa API (sketched at the end of this description)
- Saves the order into the orders sheet
- Returns a payment confirmation link

### 3. POST /yoomoney

Webhook to process payment/refund events:

- On payment.succeeded, adds an entry to transactions
- On refund.succeeded, updates the transaction status

### 4. GET /status/:id

Returns the real-time payment status from YooKassa.

## 🚀 Setup

1. Connect credentials:
   - Google Sheets (OAuth2)
   - YooKassa (Basic Auth using shopId and secretKey)
2. Update the following Google Sheets:
   - products: should contain product_id, title, price
   - orders: for saving confirmed purchases
   - transactions: for logging all successful or refunded payments
3. Test the endpoints using any HTTP client. Example payload for /payment:

```json
{
  "product_id": "abc123",
  "email": "user@example.com",
  "return_url": "https://your.site/success"
}
```

## 🔧 How to customize this workflow

- Add delivery logic (e.g., email with a product link after successful payment)
- Replace Google Sheets with a database (e.g., PostgreSQL)
- Connect Telegram or other messengers for post-payment notifications
- Add promo codes, discounts, or subscription logic

## 💼 Use cases

- Simple online checkouts
- Telegram bots selling access
- Educational product sales
- MVP e-commerce flows
- Donation or membership payments

## 📎 Notes

- ✅ Includes sticky notes for sections
- ✅ Includes error handling and validation
- ✅ No custom code needed except UUID generation
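For reference, the request the /payment branch sends to YooKassa looks roughly like this, per YooKassa's v3 API (the amount, description, and metadata values are illustrative; confirm field details against the current YooKassa docs):

```
POST https://api.yookassa.ru/v3/payments
Authorization: Basic <shopId:secretKey>
Idempotence-Key: <generated UUID>
Content-Type: application/json

{
  "amount": { "value": "990.00", "currency": "RUB" },
  "capture": true,
  "confirmation": {
    "type": "redirect",
    "return_url": "https://your.site/success"
  },
  "description": "Order for product abc123",
  "metadata": { "email": "user@example.com" }
}
```

YooKassa's response includes a `confirmation.confirmation_url`, which is the payment link the workflow returns to the buyer. The Idempotence-Key header is why the workflow generates a UUID per request: retries with the same key won't create duplicate payments.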