by Grigory Frolov
WordPress Blog to Google Sheets Sync: Posts • Categories • Tags • Media

🧩 Overview
This n8n workflow automatically syncs your WordPress website content — including posts, categories, tags, and media — into Google Sheets. It helps automate content reporting, SEO analysis, and data backups. The workflow can run on a schedule or on demand via a webhook.

💡 Use cases
Maintain a live database of blog posts in Google Sheets.
Create dashboards in Google Data Studio or Looker Studio.
Track new articles for newsletters or social media scheduling.
Back up all WordPress content and media outside of your CMS.

⚙️ Prerequisites
Before importing the workflow, ensure you have:
A WordPress website with the REST API enabled (default in WP 4.7+).
Authentication: either Application Passwords or Basic Auth credentials.
A Google Sheet with the following tabs: Posts, Categories, Tags, Media.
The following credentials configured in n8n: HTTP Basic Auth (for WordPress) and Google Sheets OAuth2.

🚀 Setup instructions
Import the workflow into your n8n instance.
Replace all example WordPress API URLs with your domain, for example: https://yourdomain.com/wp-json/wp/v2/
Connect your HTTP Basic Auth credentials (WordPress username + Application Password).
Connect your Google Sheets OAuth2 account.
Update the spreadsheet ID in each Google Sheets node with your own.
Adjust the Schedule Trigger (e.g. run daily at 2:00 AM).
Run once manually to verify the data sync.

🧠 Workflow structure
| Section | Description |
|----------|--------------|
| Schedule / Webhook Trigger | Starts the workflow manually or automatically |
| Variables & Loop Vars | Initialize pagination for REST API requests |
| Get Posts → Split Out → Update Posts | Fetch and update all WordPress posts |
| Get Categories → Update Categories | Sync WordPress categories |
| Get Tags → Update Tags | Sync WordPress tags |
| Get Media → Split Out → Update Media | Sync media library (images, videos, etc.) |
| IF Loops | Handle pagination logic until all items are retrieved |

⚠️ Notes & Limitations
Works with standard WordPress REST API endpoints only. Custom post types require editing the endpoint URLs.
The per_page value defaults to 10; increase it for faster syncs.
For large sites, consider increasing n8n memory or adding execution logs.
Avoid running the workflow too frequently to prevent API rate limits.
A sketch of the pagination approach is shown after this description.

🎥 Video Tutorial
A step-by-step setup guide is available here:
👉 https://www.youtube.com/watch?v=czSMWyD6f-0
Please subscribe to my YouTube channel to support me:
👉 https://www.youtube.com/@gregfrolovpersonal

👨‍💻 Author
Created by: Grigory Frolov
SEO & Automation Specialist — helping businesses integrate WordPress, AI, and data tools with n8n.

🧾 License
This workflow is provided under the MIT License. Feel free to use, modify, and share improvements with the community.
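For reference, here is a minimal sketch of the paginated fetch that the Get Posts / IF Loop section performs, written in the style of an n8n Code node. Endpoint paths, query parameters, and headers follow the standard WordPress REST API; the helper itself is illustrative, not the workflow's actual node code.

```typescript
// Minimal sketch: fetch all posts from the WordPress REST API, page by page.
// Assumes standard WP endpoints; the domain and auth header are placeholders.
type WpPost = { id: number; date: string; link: string; title: { rendered: string } };

async function fetchAllPosts(baseUrl: string, authHeader: string): Promise<WpPost[]> {
  const all: WpPost[] = [];
  const perPage = 100;                       // raise from the default 10 for faster syncs
  for (let page = 1; ; page++) {
    const res = await fetch(
      `${baseUrl}/wp-json/wp/v2/posts?per_page=${perPage}&page=${page}`,
      { headers: { Authorization: authHeader } },
    );
    if (res.status === 400) break;           // WP returns 400 when the page is out of range
    if (!res.ok) throw new Error(`WP API error: ${res.status}`);
    const batch = (await res.json()) as WpPost[];
    all.push(...batch);
    const totalPages = Number(res.headers.get('X-WP-TotalPages') ?? page);
    if (page >= totalPages) break;           // same stop condition the IF Loop checks
  }
  return all;
}
```

The same pattern applies to the categories, tags, and media endpoints; only the path changes.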
by Lucas Perret
This workflow monitors G2 review URLs. When a new review is published, it will:
trigger a Slack notification
record the review in Google Sheets

To install it, you'll need access to Slack, Google Sheets, and ScrapingBee.

Full guide here: https://lempire.notion.site/Scrape-G2-reviews-with-n8n-3f46e280e8f24a68b3797f98d2fba433?pvs=4
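As a rough illustration of the scraping step, the workflow calls ScrapingBee's HTTP API with the G2 page URL and receives the rendered HTML back; a minimal TypeScript sketch (the parsing itself is illustrative; the actual extraction lives in the workflow's nodes):

```typescript
// Sketch: fetch a rendered G2 reviews page through ScrapingBee.
// The api_key / url / render_js query parameters are ScrapingBee's documented interface;
// everything after the fetch (parsing, diffing against the sheet) is illustrative.
async function fetchG2Page(g2Url: string, apiKey: string): Promise<string> {
  const endpoint = new URL('https://app.scrapingbee.com/api/v1/');
  endpoint.searchParams.set('api_key', apiKey);
  endpoint.searchParams.set('url', g2Url);
  endpoint.searchParams.set('render_js', 'true');   // G2 pages need JS rendering
  const res = await fetch(endpoint);
  if (!res.ok) throw new Error(`ScrapingBee error: ${res.status}`);
  return res.text();                                 // raw HTML, parsed downstream
}
```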
by Jorge Martínez
Automating WhatsApp replies in GoHighLevel with Redis and Anthropic

Description
Integrates GHL + Wazzap with Redis and an AI Agent that uses ClientInfo to process messages, generate accurate replies, and send them via a custom-field trigger.

Who's it for
This workflow is for businesses using GoHighLevel (GHL), including the Wazzap plugin for WhatsApp, who want to automate inbound SMS/WhatsApp replies with AI. It's ideal for teams that need accurate, data-driven responses from a predefined ClientInfo source and want to send them back to customers without paying for extra inbound automations.

How it works / What it does
Receive the message in n8n via a Webhook from GHL (Customer Replied (SMS) automation). WhatsApp messages arrive the same way using the Wazzap plugin.
Filter the message type: if audio → skip processing and send a fallback asking for text; if text → sanitize by fixing escaped quotes, escaping line breaks/carriage returns/tabs, and removing invalid fields.
Buffer messages in Redis to group multiple messages sent in a short window (see the buffering sketch after this description).
Run the AI Agent using the ClientInfo tool to answer only with accurate service/branch data.
Sanitize the AI output before sending it back.
Update the GHL contact custom field (IA_answer) with the AI's response.
Send the SMS reply automatically via GHL's outbound automation, triggered by the updated custom field.

How to set up
In GHL, create:
An inbound automation: trigger on Customer Replied (SMS) → send to your n8n Webhook.
An outbound automation: trigger when IA_answer is updated → send the SMS to the contact.
Create a custom field named IA_answer.
Connect Wazzap in GHL to handle WhatsApp messages.
Configure Redis in n8n (host, port, DB index, password).
Add your AI model credentials (Anthropic, OpenAI, etc.) in n8n.
(Optional) Set up the Google Drive Excel Merge sub-workflow to enrich ClientInfo with external data.

Requirements
GoHighLevel sub-account API key.
Anthropic (Claude) API key or another supported LLM provider.
Redis database for temporary message storage.
GHL automations: one for inbound messages to n8n, one for outbound replies when IA_answer is updated.
GHL custom field IA_answer to store and trigger replies.
Wazzap plugin in GHL for WhatsApp message handling.

How to customize the workflow
Add more context or business-specific data to the AI Agent prompt so replies match your brand tone and policies.
Expand the ClientInfo dataset with additional services, branches, or product details.
Adjust the Redis wait time to control how long the workflow buffers messages before replying.
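A minimal sketch of the Redis buffering step, assuming the ioredis client and a simple list-per-contact key scheme (the key names, TTL, and wait window are illustrative, not the workflow's exact configuration):

```typescript
import Redis from 'ioredis';

// Sketch: buffer rapid-fire messages per contact, then read them back as one batch.
// Key naming and the TTL are assumptions for illustration.
const redis = new Redis({ host: 'localhost', port: 6379 });

async function bufferMessage(contactId: string, text: string): Promise<void> {
  const key = `ghl:buffer:${contactId}`;
  await redis.rpush(key, text);        // append the message to the contact's buffer
  await redis.expire(key, 120);        // safety TTL so stale buffers don't linger
}

async function drainBuffer(contactId: string): Promise<string> {
  const key = `ghl:buffer:${contactId}`;
  const messages = await redis.lrange(key, 0, -1);   // read everything buffered so far
  await redis.del(key);                               // clear it before the agent replies
  return messages.join('\n');                         // single prompt for the AI Agent
}
```

Grouping messages this way means the agent answers once per burst instead of once per message, which keeps GHL automation and LLM costs down.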
by sato rio
This workflow streamlines the entire inventory replenishment process by leveraging AI for demand forecasting and intelligent logic for supplier selection. It aggregates data from multiple sources — POS systems, weather forecasts, SNS trends, and historical sales — to predict future demand. Based on these predictions, it calculates shortages, requests quotes from multiple suppliers, selects the optimal vendor based on cost and lead time, and executes the order automatically.

🚀 Who is this for?
Retail & E-commerce Managers aiming to minimize stockouts and reduce overstock.
Supply Chain Operations looking to automate procurement and vendor selection.
Data Analysts wanting to integrate external factors (weather, trends) into inventory planning.

💡 How it works
Data Aggregation: Fetches data from POS systems, MySQL (historical sales), OpenWeatherMap (weather), and SNS trend APIs.
AI Forecasting: Formats the data and sends it to an AI prediction API to forecast demand for the next 7 days.
Shortage Calculation: Compares the forecast against current stock and safety stock to determine necessary order quantities.
Supplier Optimization: For items needing replenishment, the workflow requests quotes from multiple suppliers (A, B, C) in parallel. It selects the best supplier based on the lowest total cost within a 7-day lead time (a selection sketch follows this description).
Execution & Logging: Places the order via API, updates the inventory system, and logs the transaction to MySQL.
Anomaly Detection: If the AI's confidence score is low, it skips the auto-order and sends an alert to Slack for manual review.

⚙️ Setup steps
Configure Credentials: Set up credentials for MySQL and Slack in n8n.
API Keys: You will need an API key for OpenWeatherMap (or a similar service).
Update Endpoints: The HTTP Request nodes use placeholder URLs (e.g., pos-api.example.com, ai-prediction-api.example.com). Replace these with your actual internal APIs, ERP endpoints, or AI service (like OpenAI).
Database Prep: Ensure your MySQL database has a table named forecast_order_log to store the order history.
Schedule: The workflow is set to run daily at 03:00. Adjust the Schedule Trigger node as needed.

📋 Requirements
n8n (self-hosted or Cloud)
MySQL database
Slack workspace
External APIs for POS, Inventory, and Supplier communication (or mock endpoints for testing).
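Here is a hedged sketch of the supplier-selection rule described above (lowest total cost among quotes that meet the 7-day lead time); the Quote type and its field names are assumptions for illustration, not the workflow's actual data shape:

```typescript
// Sketch of the vendor-selection logic: cheapest total cost with lead time <= 7 days.
// Field names (unitPrice, shippingCost, leadTimeDays) are illustrative assumptions.
interface Quote {
  supplier: 'A' | 'B' | 'C';
  unitPrice: number;
  shippingCost: number;
  leadTimeDays: number;
}

function selectSupplier(quotes: Quote[], quantity: number): Quote | null {
  const eligible = quotes.filter((q) => q.leadTimeDays <= 7);   // enforce the lead-time cap
  if (eligible.length === 0) return null;                        // no viable supplier → manual review
  return eligible.reduce((best, q) =>
    q.unitPrice * quantity + q.shippingCost < best.unitPrice * quantity + best.shippingCost
      ? q
      : best,
  );
}
```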
by Jamot
How it works
Your WhatsApp AI Assistant automatically handles customer inquiries by linking your Google Docs knowledge base to incoming WhatsApp messages. The system instantly processes customer questions, references your business documentation, and delivers AI-powered responses through OpenAI or Gemini - all without you lifting a finger. Works seamlessly in individual chats and WhatsApp groups, where the assistant can respond on your behalf.

Set up steps
Time to complete: 15-30 minutes
Step 1: Create your WhapAround account and connect your WhatsApp number (5 minutes)
Step 2: Prepare your Google Doc with business information and add the document ID to the system (5 minutes)
Step 3: Configure the WhatsApp webhook and map message fields (10 minutes); a rough field-mapping sketch follows this description
Step 4: Connect your OpenAI or Gemini API key (3 minutes)
Step 5: Send a test message to verify everything works (2 minutes)
Optional: Set up a PostgreSQL database for conversation memory and configure custom branding/escalation rules (additional 15-20 minutes)

Detailed technical configurations, webhook URLs, and API parameter settings are provided within each workflow step to guide you through the exact setup process.
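The exact webhook payload depends on WhapAround, so treat the following only as an illustration of what "mapping message fields" means in practice (all field names here are hypothetical):

```typescript
// Hypothetical inbound payload shape; WhapAround's real fields may differ.
// This only illustrates the kind of mapping done in the webhook step.
interface IncomingWebhook {
  from?: string;        // sender's WhatsApp number (assumed field name)
  message?: string;     // message text (assumed field name)
  isGroup?: boolean;    // whether it came from a group chat (assumed)
}

function mapMessageFields(payload: IncomingWebhook) {
  return {
    sender: payload.from ?? 'unknown',
    text: (payload.message ?? '').trim(),
    channel: payload.isGroup ? 'group' : 'direct',
  };
}
```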
by Chandan Singh
This workflow synchronizes MySQL database table schemas with a vector database in a controlled, idempotent manner. Each database table is indexed as a single vector to preserve complete schema context for AI-based retrieval and reasoning. The workflow prevents duplicate vectors and automatically handles schema changes by detecting differences and re-indexing only when required.

How it works
The workflow starts with a manual trigger and loads global configuration values.
All database tables are discovered and processed one by one inside a loop.
For each table, a normalized schema representation is generated, and a deterministic hash is calculated (a small sketch of this follows the description).
A metadata table is checked to determine whether a vector already exists for the table.
If a vector exists, the stored schema hash is compared with the current hash to detect schema changes.
When a schema change is detected, the existing vector and metadata are deleted.
The updated table schema is embedded as a single vector (without chunking) and upserted into the vector database.
Vector identifiers and schema hashes are persisted for future executions.

Setup steps
Set the MySQL database name using mysql_database_name.
Configure the Pinecone index name using pinecone_index.
Set the vector namespace using vector_namespace.
Configure the Pinecone index host using vector_index_host.
Add your Pinecone API key using pinecone_apikey.
Select the embedding model using embedding_model.
Configure text processing options: chunk_size and chunk_overlap.
Set the metadata table identifier using dataTable_Id.
Save and run the workflow manually to perform the initial schema synchronization.

Limitations
This workflow indexes database table schemas only. Table data (rows) are not embedded or indexed.
Each table is stored as a single vector. Very large or highly complex schemas may approach model token limits depending on the selected embedding model.
Schema changes are detected using a hash-based comparison. Non-structural changes that do not affect the schema representation will not trigger re-indexing.
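To make the hash-based change detection concrete, here is a minimal sketch of normalizing a table's column definitions and computing a deterministic hash (the column shape and normalization rules are assumptions; the workflow's own schema representation may differ):

```typescript
import { createHash } from 'node:crypto';

// Sketch: build a stable text representation of a table schema and hash it.
// Column fields loosely mirror MySQL's information_schema; the exact normalization
// used by the workflow is not shown here, so treat this as illustrative only.
interface Column { name: string; type: string; nullable: boolean }

function schemaHash(table: string, columns: Column[]): string {
  const normalized = columns
    .slice()
    .sort((a, b) => a.name.localeCompare(b.name))   // order-independent representation
    .map((c) => `${c.name}:${c.type.toLowerCase()}:${c.nullable ? 'null' : 'not null'}`)
    .join('|');
  return createHash('sha256').update(`${table}|${normalized}`).digest('hex');
}

// Re-index only when the stored hash differs from the freshly computed one.
function needsReindex(storedHash: string | undefined, currentHash: string): boolean {
  return storedHash !== currentHash;
}
```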
by Mantaka Mahir
How it works
This workflow automates the process of converting Google Drive documents into searchable vector embeddings for AI-powered applications:
• Takes a Google Drive folder URL as input
• Initializes a Supabase vector database with the pgvector extension
• Fetches all files from the specified Drive folder
• Downloads and converts each file to plain text
• Generates 768-dimensional embeddings using Google Gemini (see the sketch after this description)
• Stores documents with embeddings in Supabase for semantic search

Built for the Study Agent workflow to power document-based Q&A, but it also works perfectly for any RAG system, AI chatbot, knowledge base, or semantic search application that needs to query document collections.

Set up steps
Prerequisites:
• Google Drive OAuth2 credentials
• Supabase account with Postgres connection details
• Google Gemini API key (free tier available)

Setup time: ~10 minutes

Steps:
Add your Google Drive OAuth2 credentials to the Google Drive nodes
Configure Supabase Postgres credentials in the SQL node
Add Supabase API credentials to the Vector Store node
Add the Google Gemini API key to the Embeddings node
Update the input with your Drive folder URL
Execute the workflow

Note: The SQL query will drop any existing "documents" table, so back up data if needed. Detailed node-by-node instructions are in the sticky notes within the workflow.

Works with: Study Agent (main use case), custom AI agents, chatbots, documentation search, customer support bots, or any RAG application.
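As a rough sketch of the embed-and-store step (assuming the @google/generative-ai and @supabase/supabase-js clients, Gemini's text-embedding-004 model for 768-dimensional vectors, and a documents table with content/metadata/embedding columns; adjust names to match your actual setup):

```typescript
import { GoogleGenerativeAI } from '@google/generative-ai';
import { createClient } from '@supabase/supabase-js';

// Sketch only: embed one document with Gemini and store it in Supabase/pgvector.
// Table and column names ("documents", "content", "embedding") are assumptions.
const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);

async function storeDocument(text: string, metadata: Record<string, unknown>) {
  const model = genAI.getGenerativeModel({ model: 'text-embedding-004' }); // 768-dim embeddings
  const { embedding } = await model.embedContent(text);

  const { error } = await supabase.from('documents').insert({
    content: text,
    metadata,
    embedding: embedding.values,   // stored in the pgvector column
  });
  if (error) throw error;
}
```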
by Jimleuk
On my never-ending quest to find the best embeddings model, I was intrigued to come across Voyage-Context-3 by MongoDB and was excited to give it a try. This template applies the embedding model to an Arxiv research paper and stores the results in a vector store. It was only fitting to use MongoDB Atlas from the same parent company. The template also includes a RAG-based Q&A agent which taps into the vector store, as a test to help qualify whether the embeddings are any good and whether the difference is even noticeable.

How it works
This template is split into 2 parts. The first part is the import of a research document, which is then chunked and embedded into our vector store. The second part builds a RAG-based Q&A agent to test vector store retrieval on the research paper (a retrieval sketch follows this description). Read the steps for more details.

How to use
First ensure you create a Voyage account at voyageai.com and have a MongoDB database ready.
Start with Step 1, fill in the "Set Variables" node, and click on the Manual Execute Trigger. This will take care of populating the vector store with the research paper.
To use the Q&A agent, you need to publish the workflow to access the public chat interface. This is because "Respond to Chat" works best in this mode and not in editor mode.
To use your own document, edit the "Set Variables" node to define the URL to your document. This embeddings approach should work best on larger documents.

Requirements
Voyageai.com account for embeddings. You may need to add credit to get a reasonable RPM for this workflow.
MongoDB database, either self-hosted or online at https://www.mongodb.com.
OpenAI account for the RAG Q&A agent.

Customising this workflow
The Voyage embeddings work with any vector store, so feel free to swap out MongoDB Atlas for another such as Qdrant or Pinecone if you're not a fan of it.
If you're feeling brave, instead of the 3 sequential pages setup I have, why not try the whole document! Fair warning that you may hit memory problems if your instance isn't sufficiently sized - but if it is, go ahead and share the results!
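For readers curious what the retrieval side looks like under the hood, here's a minimal sketch of a MongoDB Atlas Vector Search query using the official Node.js driver (the index, database, collection, and field names are assumptions; the template's vector store node configures its own):

```typescript
import { MongoClient } from 'mongodb';

// Sketch: query Atlas Vector Search with a precomputed query embedding.
// "vector_index", "rag_demo", "papers", and the "embedding"/"text" fields are illustrative names.
async function similaritySearch(uri: string, queryVector: number[]) {
  const client = new MongoClient(uri);
  try {
    const collection = client.db('rag_demo').collection('papers');
    return await collection
      .aggregate([
        {
          $vectorSearch: {
            index: 'vector_index',
            path: 'embedding',
            queryVector,
            numCandidates: 100,   // candidates considered before ranking
            limit: 5,             // top-k chunks returned to the agent
          },
        },
        { $project: { text: 1, score: { $meta: 'vectorSearchScore' } } },
      ])
      .toArray();
  } finally {
    await client.close();
  }
}
```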
by Kevin Meneses
How it works
This workflow takes a list of links from Google Sheets, visits each page, extracts the main text using Decodo, and creates a summary with the help of artificial intelligence. It helps you turn research articles or web pages into clear, structured insights you can reuse for your projects, content ideas, or newsletters.

Input: A Google Sheet named input with one column called url.

Output: Another Google Sheet named output, where all the processed data is stored (a type sketch of this record follows the description):
URL: original article link
Title: article title
Source: website or domain
Published Date: publication date (if found)
Main Topic: main theme of the article
Key Ideas: three main takeaways or insights
Summary: short text summary
Text Type: type of content (e.g., article, blog, research paper)

Setup steps
Connect your Google Sheets account.
Add your links to the input sheet.
In the Decodo node, insert your API key.
Configure the AI model (for example, Gemini).
Run the workflow and check the results in the output sheet.
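To make the output schema concrete, here's a small sketch of the record the workflow appends per URL, expressed as a TypeScript type (the structure mirrors the columns listed above; how the AI node actually populates it is defined inside the workflow):

```typescript
// One row in the "output" sheet, mirroring the columns described above.
interface ArticleSummaryRow {
  url: string;            // original article link
  title: string;          // article title
  source: string;         // website or domain
  publishedDate?: string; // publication date, if found
  mainTopic: string;      // main theme of the article
  keyIdeas: [string, string, string]; // three main takeaways
  summary: string;        // short text summary
  textType: 'article' | 'blog' | 'research paper' | string;
}

// Example of the shape a summarization step might return for one link.
const example: ArticleSummaryRow = {
  url: 'https://example.com/post',
  title: 'Example post',
  source: 'example.com',
  mainTopic: 'automation',
  keyIdeas: ['idea one', 'idea two', 'idea three'],
  summary: 'A short summary of the page.',
  textType: 'blog',
};
```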
by Feras Dabour
AI X (Twitter) Threads Bot with Approval Loop

This n8n workflow transforms your Telegram messenger into a personal assistant for creating and publishing X threads. You can simply send an idea as a text or voice message, collaboratively edit the AI's suggestion in a chat, and then publish the finished thread directly to X just by saying "Okay."

What You'll Need to Get Started
Before you can use this workflow, you'll need a few prerequisites set up. This workflow connects three different services, so you will need API credentials for each:
Telegram Bot API Key: You can get this by talking to the "BotFather" on Telegram. It will guide you through creating your new bot and provide you with the API token. (New Chat with Telegram BotFather)
OpenAI API Key: This is required for the "Speech to Text" and "AI Agent" nodes. You'll need an account with OpenAI to generate this key. (OpenAI API Platform)
Blotato API Key: This service is used to publish the final post to X. You'll need a Blotato account and to connect your X profile there to get the key. (Blotato platform for social media publishing)
Once you have these keys, you can add them to the corresponding credentials in your n8n instance.

How the Workflow Operates, Step-by-Step
Here is a detailed breakdown of how the workflow processes your request and handles the publishing.

1. Input & Initial Processing
This phase captures your idea and converts it into usable text.
| Node Name | Role in Workflow |
|-----------|------------------|
| Start: Telegram Message | This Telegram Trigger node initiates the entire process upon receiving any message from you in the bot. |
| Prepare Input | Consolidates the message content, ensuring the AI receives only one clean text input. |
| Check: ist it a Voice? | Checks the incoming message for text. If the text is empty, it proceeds to voice handling. |
| Get Voice File | If a voice note is detected, this node downloads the raw audio file from Telegram. |
| Speech to Text | This node uses the OpenAI Whisper API to convert the downloaded audio file into a text string. |

2. AI Core & Iteration Loop
This is the central dialogue system where the AI drafts the content and engages in the feedback loop.
| Node Name | Role in Workflow |
|-----------|------------------|
| AI: Draft & Revise Post | The main logic agent. It analyzes your request, applies the "System Prompt" rules, drafts the post, and handles revisions based on your feedback. |
| OpenAI Chat Model | Defines the large language model (LLM) used for generating and revising the post. |
| Window Buffer Memory | A memory buffer that stores the last turns of the conversation, allowing the AI to maintain context when you request changes (e.g., "Make it shorter"). |
| Check if Approved | This crucial node detects the specific JSON structure the AI outputs only when you provide an approval keyword (like "ok" or "approved"). A sketch of this check follows the description. |
| Post Suggestion Or Ask For Approval | Sends the AI's post draft back to your Telegram chat for review and feedback. |

AI Agent System Prompt (Internal Instructions - English): The agent operates under a strict prompt that dictates its behavior and formatting (found within the AI: Draft & Revise Post node).

3. Publishing & Status Check
Once approved, the workflow handles the publication and monitors the post's status in real time.
| Node Name | Role in Workflow |
|-----------|------------------|
| Approval: Extract Final Thread Posts | Parses the incoming JSON, extracting only the clean text ready for publishing. |
| Create post with Blotato | Uses the Blotato API to upload the finalized content to your connected X account. |
| Give Blotat 5s :) | A brief pause to allow the publishing service to start processing the request. |
| Check post status | Checks back with Blotato to determine if the post is published, in progress, or failed. |
| Published? | Checks if the status is "published" to send the success message. |
| In Progress? | Checks if the post is still being processed. If so, it loops back to the next wait period. |
| Give Blotat other 5s :) | Pauses the workflow before re-checking the post status, preventing unnecessary API calls. |

Final Notification
| Node Name | Role in Workflow |
|-----------|------------------|
| Send a confirmation message | Sends a confirmation message and the direct link to the published X post. |
| Send an error message | Sends a notification if the post failed to upload or encountered an error during processing. |

🛠️ Personalizing Your Content Bot
The true power of this n8n workflow lies in its flexibility. You can easily modify key components to match your unique brand voice and technical preferences.

1. Tweak the Content Creator Prompt
The personality, tone, and formatting rules for your X content are all defined in the System Prompt.
Where to find it: Inside the AI: Draft & Revise Post node, under the System Message setting.
What to personalize: Adjust the tone, change the formatting rules (e.g., number of hashtags, required emojis), or insert specific details about your industry or target audience.

2. Switch the AI Model or Provider
You can easily swap the language model used for generation.
Where to find it: The OpenAI Chat Model node.
What to personalize:
Model: Swap out the default model for a more powerful or faster alternative (e.g., the gpt-4 family, or models from other providers if you change the node).
Provider: You can replace the entire Langchain block (including the AI Model and Window Buffer Memory nodes) with an equivalent block using a different provider's Chat/LLM node (e.g., Anthropic, Cohere, or Google Gemini), provided you set up the corresponding credentials and context flow.

3. Modify Publishing Behavior (Schedule vs. Post)
The final step is currently set to publish immediately, but you might prefer to schedule posts.
Where to find it: The Create post with Blotato node.
What to personalize: Consult the Blotato documentation for alternative operations. Instead of choosing the "Create Post" operation (which often posts immediately), you can typically select a "Schedule Post" or "Add to Queue" operation within the Blotato node. If scheduling, you will need to add a step (e.g., a Set node or another agent prompt) before publishing to calculate and pass a Scheduled Time parameter to the Blotato node.
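As a rough illustration of the "Check if Approved" logic (detecting whether the agent returned its approval JSON rather than another draft), here's a sketch of what a Code-node check might look like; the approved and posts field names are assumptions, since the actual JSON structure is defined by the agent's system prompt:

```typescript
// Sketch: decide whether the agent's reply is the final approval payload or another draft.
// The { approved, posts } shape is an illustrative assumption, not the workflow's exact schema.
interface AgentReply {
  approved?: boolean;
  posts?: string[];   // the individual tweets of the thread
  draft?: string;     // a revised suggestion to send back for feedback
}

function isApproved(raw: string): { approved: boolean; posts: string[] } {
  try {
    const parsed = JSON.parse(raw) as AgentReply;
    if (parsed.approved === true && Array.isArray(parsed.posts)) {
      return { approved: true, posts: parsed.posts };
    }
  } catch {
    // Not JSON: the agent is still in the drafting/revision loop.
  }
  return { approved: false, posts: [] };
}
```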
by Khairul Muhtadin
This workflow auto-ingests Google Drive documents, parses them with LlamaIndex, and stores Azure OpenAI embeddings in an in-memory vector store—cutting manual update time from ~30 minutes to under 2 minutes per doc.

Why Use This Workflow?
Cost Reduction: Eliminates paying a monthly cloud fee just to store knowledge.

Ideal For
Knowledge Managers / Documentation Teams: Automatically keep product docs and SOPs in sync when source files change on Google Drive.
Support Teams: Ensure the searchable KB is always up-to-date after doc edits, speeding agent onboarding and resolution time.
Developer / AI Teams: Populate an in-memory vector store for experiments, rapid prototyping, or local RAG demos.

How It Works
Trigger: A Google Drive Trigger watches a specific document or folder for updates.
Data Collection: The updated file is downloaded from Google Drive.
Processing: The file is uploaded to LlamaIndex cloud via an HTTP Request to create a parsing job.
Intelligence Layer: The workflow polls the LlamaIndex job status (Wait + Monitor loop). If the parsing status equals SUCCESS, the result is retrieved as markdown.
Output & Delivery: The parsed markdown is loaded into LangChain's Default Data Loader, passed to Azure OpenAI embeddings (deployment "3small"), then inserted into an in-memory vector store.
Storage & Logging: The vector store holds embeddings in memory (good for prototyping). Optionally persist to an external vector DB for production.

Setup Guide

Prerequisites
| Requirement | Type | Purpose |
|-------------|------|---------|
| n8n instance | Essential | Execute and import the workflow |
| Google Drive OAuth2 | Essential | Watch and download documents from Google Drive |
| LlamaIndex Cloud API | Essential | Parse and convert documents to structured markdown |
| Azure OpenAI Account | Essential | Generate embeddings (deployment configured to model name "3small") |
| Persistent Vector DB (e.g., Pinecone) | Optional | Persist embeddings for production-scale search |

Installation Steps
Import the workflow JSON into your n8n instance: open your n8n instance and import the file.
Configure credentials:
Azure OpenAI: Provide the endpoint, API key, and deployment name.
LlamaIndex API: Create an HTTP Header Auth credential in n8n. Header Name: Authorization. Header Value: Bearer YOUR_API_KEY.
Google Drive OAuth2: Create OAuth 2.0 credentials in Google Cloud Console, enable the Drive API, and configure the Google Drive OAuth2 credential in n8n.
Update environment-specific values: Replace the workflow's Google Drive fileId with the file or folder ID you want to watch (do not commit public IDs).
Customize settings:
Polling interval (Wait node): adjust for faster or slower job status checks.
Target file or folder: toggled on the Google Drive Trigger node.
Embedding model: change the Azure OpenAI deployment if needed.
Test execution: Save changes and trigger a sample file update on Drive. Verify each node runs and the vector store receives embeddings.
Technical Details

Core Nodes
| Node | Purpose | Key Configuration |
|------|---------|-------------------|
| Knowledge Base Updated Trigger (Google Drive Trigger) | Triggers on file/folder changes | Set trigger type to specific file or folder; configure OAuth2 credential |
| Download Knowledge Document (Google Drive) | Downloads file binary | Operation: download; ensure the OAuth2 credential is selected |
| Parse Document via LlamaIndex (HTTP Request) | Uploads file to LlamaIndex parsing endpoint | POST multipart/form-data to /parsing/upload; use HTTP Header Auth credential |
| Monitor Document Processing (HTTP Request) | Polls parsing job status | GET /parsing/job/{{jobId}}; check status field |
| Check Parsing Completion (If) | Branches on job status | Condition: {{$json.status}} equals SUCCESS |
| Retrieve Parsed Content (HTTP Request) | Fetches parsed markdown result | GET /parsing/job/{{jobId}}/result/markdown |
| Default Data Loader (LangChain) | Loads parsed markdown into document format | Use as document source for embeddings |
| Embeddings Azure OpenAI | Generates embeddings for documents | Credentials: Azure OpenAI; Model/Deployment: 3small |
| Insert Data to Store (vectorStoreInMemory) | Stores documents + embeddings | Use memory store for prototyping; switch to a DB for persistence |

Workflow Logic
On a Drive change, the file binary is downloaded and sent to LlamaIndex.
The workflow enters a monitor loop: Monitor Document Processing fetches the job status, and the If node checks it. If not SUCCESS, the Wait node delays before re-checking. A sketch of this polling loop follows this section.
When parsing completes, the workflow retrieves the markdown, loads the documents, creates embeddings via Azure OpenAI, and inserts the data into an in-memory vector store.

Customization Options
Basic Adjustments:
Poll Delay: Set the Wait node (default: every minute) to balance speed vs. API quota.
Target Scope: Switch the trigger from a single file to a folder to auto-handle many docs.
Embedding Model: Swap the Azure deployment for a different model name as needed.
Advanced Enhancements:
Persistent Vector DB Integration: Replace vectorStoreInMemory with Pinecone or Milvus for production search.
Notification: Add Slack or email nodes to notify when parsing completes or fails.
Summarization: Add an LLM summarization step to generate chunk-level summaries.
Scaling option: Batch uploads and chunking to reduce embedding calls; use a queue (Redis or n8n queue patterns) and horizontal workers for high throughput.
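A minimal sketch of the upload-and-poll loop against the LlamaIndex parsing endpoints named in the table above; the base URL and response field names are assumptions, so verify them against your LlamaIndex cloud account before relying on them:

```typescript
// Sketch of the parse-and-poll loop used by the HTTP Request / Wait / If nodes.
// Base URL and the response field names ("id", "status", "markdown") are assumptions;
// verify against your LlamaIndex cloud account.
const BASE = 'https://api.cloud.llamaindex.ai/api/parsing';

async function parseToMarkdown(file: Blob, apiKey: string): Promise<string> {
  const headers = { Authorization: `Bearer ${apiKey}` };

  // 1. Upload the document to create a parsing job (POST multipart/form-data).
  const form = new FormData();
  form.append('file', file, 'document.pdf');
  const upload = await fetch(`${BASE}/upload`, { method: 'POST', headers, body: form });
  const { id: jobId } = (await upload.json()) as { id: string };

  // 2. Poll the job status until it reports SUCCESS (the Wait + If loop).
  for (let attempt = 0; attempt < 30; attempt++) {
    const job = await fetch(`${BASE}/job/${jobId}`, { headers });
    const { status } = (await job.json()) as { status: string };
    if (status === 'SUCCESS') break;
    await new Promise((r) => setTimeout(r, 60_000));   // default Wait node: every minute
  }

  // 3. Retrieve the parsed result as markdown.
  const result = await fetch(`${BASE}/job/${jobId}/result/markdown`, { headers });
  const { markdown } = (await result.json()) as { markdown: string };
  return markdown;
}
```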
Performance & Optimization
| Metric | Expected Performance | Optimization Tips |
|--------|----------------------|-------------------|
| Execution time (per doc) | ~10s–2min (depends on file size & LlamaIndex processing) | Chunk large docs; run embeddings in batches |
| API calls (per doc) | 3–8 (upload, poll(s), retrieve, embedding calls) | Increase the poll interval; consolidate requests |
| Error handling | Retries via the Wait loop and If checks | Add exponential backoff, failure notifications, and retry limits |

Troubleshooting
| Problem | Cause | Solution |
|---------|-------|----------|
| Authentication errors | Invalid/missing credentials | Reconfigure n8n credentials; do not paste API keys directly into nodes |
| File not found | Incorrect fileId or permissions | Verify the Drive fileId and OAuth scopes; share the file with the service account if needed |
| Parsing stuck in PENDING | LlamaIndex processing delay or rate limit | Increase the Wait node interval, monitor the LlamaIndex dashboard, add retry limits |
| Embedding failures | Model/deployment mismatch or quota limits | Confirm the Azure deployment name (3small) and subscription quotas |

Created by: khmuhtadin
Category: Knowledge Management
Tags: google-drive, llamaindex, azure-openai, embeddings, knowledge-base, vector-store
Need custom workflows? Contact us
by Automate With Marc
📥 Invoice Intake & Notification Workflow

This automated n8n workflow monitors a Google Drive folder for newly uploaded invoice PDFs, extracts essential information (like client name, invoice number, amount, due date), logs the data into a Google Sheet for recordkeeping, and sends a formatted Telegram message to notify the billing team.

For step-by-step video builds of workflows like this: https://www.youtube.com/@automatewithmarc

✅ What This Workflow Does
🕵️ Watches a Google Drive folder for new invoice files
📄 Extracts data from PDF invoices using AI (LangChain Information Extractor)
📊 Appends extracted data into a structured Google Sheet
💬 Notifies the billing team via Telegram with invoice details
🤖 Optionally uses the Claude Sonnet AI model to format human-friendly summaries

⚙️ How It Works – Step-by-Step
Trigger: The workflow starts when a new PDF invoice is added to a specific Google Drive folder.
Download & Parse: The file is downloaded and its content extracted.
Data Extraction: An AI-powered extractor pulls invoice details (invoice number, client, date, amount, etc.); a sketch of the extracted fields follows this description.
Log to Google Sheets: All extracted data is appended to a predefined Google Sheet.
AI Notification Formatting: An Anthropic Claude model formats a clear invoice notification message.
Telegram Alert: The formatted summary is sent to a Telegram channel or group to alert the billing team.

🧠 AI & Tools Used
Google Drive Trigger & File Download
PDF Text Extraction Node
LangChain Information Extractor
Google Sheets Node (Append Data)
Anthropic Claude (Telegram Message Formatter)
Telegram Node (Send Notification)

🛠️ Setup Instructions
Google Drive: Set up OAuth2 credentials and specify the folder ID to watch.
Google Sheets: Link the workflow to your invoice tracking sheet.
Telegram: Set up your Telegram bot and obtain the chat ID.
Anthropic & OpenAI: Add your Claude/OpenAI credentials if formatting is enabled.

💡 Use Cases
Automated bookkeeping and invoice tracking
Real-time billing alerts for accounting teams
AI-powered invoice ingestion and summaries
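To illustrate the kind of structured output the Information Extractor produces before it is appended to the sheet, here's a sketch of a plausible invoice schema and a Telegram summary formatter (the field names are assumptions based on the description above, not the workflow's actual extractor schema):

```typescript
// Illustrative invoice record; field names are assumed from the description,
// not copied from the workflow's actual extractor configuration.
interface InvoiceRecord {
  invoiceNumber: string;
  clientName: string;
  issueDate: string;   // e.g. "2024-05-01"
  dueDate: string;
  amount: number;
  currency: string;
}

// Rough equivalent of the Telegram notification the Claude formatter produces.
function formatTelegramMessage(inv: InvoiceRecord): string {
  return [
    `🧾 New invoice received`,
    `Client: ${inv.clientName}`,
    `Invoice #: ${inv.invoiceNumber}`,
    `Amount: ${inv.amount.toFixed(2)} ${inv.currency}`,
    `Due: ${inv.dueDate}`,
  ].join('\n');
}
```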