by Dataki
BigQuery RAG with OpenAI Embeddings

This workflow demonstrates how to use Retrieval-Augmented Generation (RAG) with BigQuery and OpenAI. By default, you cannot directly use OpenAI cloud models within BigQuery; this workflow bridges that gap.

Try it

This template comes with access to a **public BigQuery table** that stores part of the n8n documentation (about nodes and triggers), allowing you to try the workflow right away: `n8n-docs-rag.n8n_docs.n8n_docs_embeddings`

⚠️ **Important:** BigQuery uses the **requester pays** model. The table is small (~40 MB), and BigQuery provides **1 TB of free processing per month**. Running 3–4 queries for testing should remain within the free tier, unless your project has already consumed its quota. More info here: BigQuery Pricing

Why this workflow?

Many organizations already use BigQuery to store enterprise data, and OpenAI for LLM use cases. When it comes to RAG, the common approach is to rely on dedicated vector databases such as Qdrant, Pinecone, Weaviate, or PostgreSQL with pgvector. Those are good choices, but when an organization already uses and is familiar with BigQuery, it can be more efficient to leverage its built-in vector capabilities for RAG.

Then comes the question of the LLM. If OpenAI is the chosen provider, teams are often frustrated that it is not directly compatible with BigQuery. This workflow solves that limitation.
Prerequisites

To use this workflow, you will need:

- A good understanding of BigQuery and its vector capabilities
- A BigQuery table containing documents and an embeddings column
  - The embeddings column must be of type FLOAT and mode REPEATED (to store arrays)
- A data pipeline that generates embeddings with the OpenAI API and stores them in BigQuery

This template comes with a public table that stores part of the n8n documentation (about nodes and triggers), so you can try it out: `n8n-docs-rag.n8n_docs.n8n_docs_embeddings`

How it works

The system consists of two workflows:

- **Main workflow** → Hosts the AI Agent, which connects to a subworkflow for RAG
- **Subworkflow** → Queries the BigQuery vector table; the retrieved documents are then used by the AI Agent to generate an answer for the user
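The subworkflow's retrieval step can be sketched as a BigQuery `VECTOR_SEARCH` query assembled in a Code node. The column names (`content`, `embedding`), `top_k`, and the distance type below are assumptions; adjust them to match your own table schema.

```javascript
// Sketch: build the VECTOR_SEARCH query the RAG subworkflow might run.
// "content" and "embedding" are assumed column names.
function buildVectorSearchQuery(table, queryEmbedding, topK = 5) {
  // BigQuery expects the probe vector as an ARRAY<FLOAT64> value.
  const vectorLiteral = `[${queryEmbedding.join(', ')}]`;
  return `
    SELECT base.content, distance
    FROM VECTOR_SEARCH(
      TABLE \`${table}\`,
      'embedding',
      (SELECT ${vectorLiteral} AS embedding),
      top_k => ${topK},
      distance_type => 'COSINE'
    )
    ORDER BY distance ASC`;
}

const sql = buildVectorSearchQuery(
  'n8n-docs-rag.n8n_docs.n8n_docs_embeddings',
  [0.12, -0.03, 0.57] // in practice, the OpenAI embedding of the user question
);
console.log(sql);
```

In the workflow, the array literal would come from an OpenAI embeddings call on the user's question, and the query would be executed by the BigQuery node.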
by 荒城直也
Weather Monitoring Across Multiple Cities with OpenWeatherMap, GPT-4o-mini, and Discord

This workflow provides an automated, intelligent solution for global weather monitoring. It goes beyond simple data fetching by calculating a custom "Comfort Index" and using AI to provide human-like briefings and activity recommendations. Whether you are managing remote teams or planning travel, this template centralizes complex environmental data into actionable insights.

Who's it for

- **Remote Team Leads:** Keep an eye on environmental conditions for team members across different time zones.
- **Frequent Travelers & Event Planners:** Monitor weather risks and comfort levels for multiple destinations simultaneously.
- **Smart Home/Life Enthusiasts:** Receive daily morning briefings on air quality and weather alerts directly in Discord.

How it works

1. Schedule Trigger: The workflow runs every 6 hours (customizable) to ensure data is up to date.
2. Data Collection: It loops through a list of cities, fetching current weather, 5-day forecasts, and Air Quality Index (AQI) data via the OpenWeatherMap node and HTTP Request node.
3. Smart Processing: A Code node calculates a "Comfort Index" (based on temperature and humidity) and flags specific alerts (e.g., extreme heat, high winds, or poor AQI).
4. AI Analysis: The OpenAI node (using GPT-4o-mini) analyzes the aggregated data to compare cities and recommend the best location for outdoor activities.
5. Conditional Routing: An If node checks for active weather alerts. Urgent alerts are routed to a dedicated Discord notification, while routine briefings are sent normally.
6. Archiving: All processed data is appended to Google Sheets for historical tracking and future analysis.

How to set up

1. Credentials: Connect your OpenWeatherMap, OpenAI, Discord (Webhook), and Google Sheets accounts.
2. Locations: Open the 'Set Monitoring Locations' node and edit the JSON array with the cities, latitudes, and longitudes you wish to track.
3. Google Sheets: Configure the 'Log to Google Sheets' node with your specific Spreadsheet ID and Sheet Name.
4. Discord: Ensure your Webhook URL is correctly pasted into the Discord nodes.

Requirements

- **OpenWeatherMap API Key** (free tier is sufficient)
- **OpenAI API Key** (configured for GPT-4o-mini)
- **Discord Webhook URL**
- **Google Sheet** with headers ready for logging

How to customize

- **Adjust Alert Thresholds:** Modify the logic in the 'Process and Analyze Data' Code node to change what triggers a "High Wind" or "Extreme Heat" alert.
- **Refine AI Persona:** Edit the System Prompt in the 'AI Weather Analysis' node to change the tone or focus of the weather briefing.
- **Change Frequency:** Adjust the Schedule Trigger to run once a day or every hour depending on your needs.
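The 'Process and Analyze Data' Code node can be sketched like this. The exact comfort formula and alert thresholds below are assumptions (the template lets you tune them), but they illustrate the pattern of scoring each city and flagging alerts:

```javascript
// Sketch of the Comfort Index + alert-flagging Code node.
// Formula and thresholds are illustrative assumptions — tune to your needs.
function analyzeCity(city) {
  const { temp, humidity, windSpeed, aqi } = city;

  // Penalize distance from an "ideal" 21 °C and 45 % relative humidity.
  const comfortIndex = Math.max(
    0,
    Math.round(100 - Math.abs(temp - 21) * 3 - Math.abs(humidity - 45) * 0.5)
  );

  const alerts = [];
  if (temp >= 35) alerts.push('Extreme Heat');
  if (windSpeed >= 15) alerts.push('High Wind');        // m/s
  if (aqi >= 4) alerts.push('Poor Air Quality');        // OpenWeatherMap AQI scale: 1–5

  return { ...city, comfortIndex, alerts, hasAlert: alerts.length > 0 };
}

const result = analyzeCity({ name: 'Tokyo', temp: 36, humidity: 60, windSpeed: 4, aqi: 2 });
console.log(result.comfortIndex, result.alerts); // → 48 [ 'Extreme Heat' ]
```

The `hasAlert` flag is what the downstream If node would check to decide between the urgent and routine Discord routes.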
by Yusuke Yamamoto
This n8n template demonstrates a multi-modal AI recipe assistant that suggests delicious recipes based on user input, delivered via Telegram. The workflow can uniquely handle two types of input: a photo of your ingredients or a simple text list.

Use cases are many: get instant dinner ideas by taking a photo of your fridge contents, reduce food waste by finding recipes for leftover ingredients, or create a fun and interactive service for a cooking community or food delivery app!

Good to know

- This workflow uses two different AI models (one for vision, one for text generation), so costs will be incurred for each execution. See OpenRouter Pricing or your chosen model provider's pricing page for updated info.
- The AI prompts are in English, but the final recipe output is configured to be in Japanese. You can easily change the language by editing the prompt in the Recipe Generator node.

How it works

1. The workflow starts when a user sends a message or an image to your bot on Telegram via the Telegram Trigger.
2. An IF node checks whether the input is text or an image.
3. If an image is sent, the AI Vision Agent analyzes it to identify ingredients. A Structured Output Parser then forces this data into a clean JSON list.
4. If text is sent, a Set node directly prepares the user's text as the ingredient list.
5. Both paths converge, providing a standardized ingredient list to the Recipe Generator agent. This AI acts as a professional chef to create three detailed recipes.
6. Crucially, a second Structured Output Parser takes the AI's creative text and formats it into a reliable JSON structure (with name, difficulty, instructions, etc.). This ensures the output is always predictable and easy to work with.
7. A final Set node uses a JavaScript expression to transform the structured recipe data into a beautiful, emoji-rich, and easy-to-read message.
8. The formatted recipe suggestions are sent back to the user on Telegram.
How to use

1. Configure the Telegram Trigger with your own bot's API credentials.
2. Add your AI provider credentials in the OpenAI Vision Model and OpenAI Recipe Model nodes (this template uses OpenRouter, but it can be swapped for a direct OpenAI connection).

Requirements

- A Telegram account and a bot token.
- An AI provider account that supports vision and text models, such as OpenRouter or OpenAI.

Customising this workflow

- Modify the prompt in the Recipe Generator to include dietary restrictions (e.g., "vegan," "gluten-free") or to change the number of recipes suggested.
- Swap the Telegram nodes for Discord, Slack, or a Webhook to integrate this recipe bot into a different platform or your own application.
- Connect to a recipe database API to supplement the AI's suggestions with existing recipes.
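The final formatting step described above can be sketched as a small function, standing in for the Set node's JavaScript expression. The field names (`name`, `difficulty`, `instructions`) mirror the structured output described; the emoji and layout are assumptions:

```javascript
// Sketch: turn the parsed recipe JSON into an emoji-rich Telegram message.
function formatRecipes(recipes) {
  return recipes
    .map((r, i) =>
      [
        `🍳 Recipe ${i + 1}: ${r.name}`,
        `⭐ Difficulty: ${r.difficulty}`,
        `📝 Steps:`,
        ...r.instructions.map((step, n) => `  ${n + 1}. ${step}`),
      ].join('\n')
    )
    .join('\n\n');
}

const message = formatRecipes([
  { name: 'Veggie Stir-Fry', difficulty: 'Easy', instructions: ['Chop vegetables', 'Stir-fry for 5 minutes'] },
]);
console.log(message);
```

Because the Structured Output Parser guarantees this JSON shape, the formatting expression never has to guard against free-form AI text.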
by Adem Tasin
✔ Short Description

Automate your lead qualification pipeline — capture Typeform leads via webhook, enrich with APIs, score intelligently, and route to HubSpot, Slack, and Sheets in real time.

🧩 Description

Automate your lead management pipeline from form submission to CRM enrichment and routing. This workflow intelligently processes Typeform submissions, enriches leads using Hunter.io and Abstract API, scores them with dynamic logic, and routes them into HubSpot while keeping your sales team and tracking sheets up to date. It's a full-stack automation designed to turn raw form submissions into prioritized, qualified, CRM-ready leads — without manual intervention.

💡 Who's it for

- Marketing teams managing inbound leads from web forms
- Sales operations teams that qualify and route leads
- CRM administrators automating lead data entry and scoring
- Automation professionals building data enrichment systems

⚙️ How it works / What it does

1. Trigger: Receives new Typeform submissions via Webhook.
2. Data Extraction: Parses name, email, and company info.
3. Email Verification: Validates email deliverability with Hunter.io.
4. Company Enrichment: Fetches company data (industry, size, country) using Abstract API.
5. Lead Scoring Logic: Calculates a lead score and assigns a tier (Hot / Warm / Cold).
6. Conditional Routing: Hot leads (score ≥ 70) are sent to HubSpot as Qualified; Warm/Cold leads (score < 70) are sent to HubSpot in the Nurture stage.
7. Revalidation Loop: Waits (e.g., 3 days), then rechecks Nurture leads in HubSpot, logs them to Google Sheets, and alerts your Slack channel.

🧰 How to set up

1. Connect accounts:
   - Typeform (webhook for inbound lead capture)
   - Hunter.io (API key for email verification)
   - Abstract API (for company enrichment)
   - HubSpot (via OAuth2 credentials)
   - Slack (for notifications)
   - Google Sheets (for logging)
2. Customize the webhook URL inside your Typeform integration.
3. Replace API keys with your own (Hunter.io, Abstract).
4. Adjust the scoring logic inside the Lead Scoring & Routing Logic node to fit your business.
5. Set the Wait duration (default: 10 seconds for testing → change to 3 days for production).
6. Activate the workflow and test it with a sample form submission.

🔧 Requirements

- Typeform account with webhook capability
- Hunter.io account + API key
- Abstract API account + API key
- HubSpot account with OAuth2 credentials
- Slack workspace & channel
- Google Sheets integration

🎨 How to customize the workflow

- **Scoring rules:** Modify the "Lead Scoring & Routing Logic" node to adjust how points are calculated (e.g., country, industry, employee size).
- **CRM target:** Replace HubSpot nodes with another CRM (e.g., Pipedrive, Salesforce).
- **Notification channel:** Swap Slack for Email, Discord, or MS Teams.
- **Data source:** Replace the Typeform webhook with another trigger like Webflow Forms, Airtable, or custom API input.
- **Tracking:** Add Google Analytics or Notion API for additional reporting.

🧭 Summary

End-to-end lead automation workflow that combines form data, enrichment APIs, CRM updates, and Slack alerts into one intelligent system. Ideal for any team looking to centralize and qualify leads automatically — from submission to sales.

🧑‍💻 Creator Information

Developed by: Adem Tasin
🌐 Website: ademtasin
💼 LinkedIn: Adem Tasin
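The Lead Scoring & Routing Logic node can be sketched as follows. The 70-point Hot threshold and the tier names come from the description above; the individual point values and criteria are assumptions you would replace with your own rules:

```javascript
// Sketch of lead scoring and tier assignment.
// Point values and criteria are illustrative assumptions.
function scoreLead(lead) {
  let score = 0;
  if (lead.emailDeliverable) score += 30;                 // verified by Hunter.io
  if (lead.companySize >= 50) score += 25;                // enriched via Abstract API
  if (['SaaS', 'Finance'].includes(lead.industry)) score += 25;
  if (lead.country === 'US') score += 20;

  const tier = score >= 70 ? 'Hot' : score >= 40 ? 'Warm' : 'Cold';
  return {
    ...lead,
    score,
    tier,
    // Hot → Qualified; Warm/Cold → Nurture (per the routing rules above)
    hubspotStage: tier === 'Hot' ? 'Qualified' : 'Nurture',
  };
}

const lead = scoreLead({
  email: 'jane@example.com',  // hypothetical sample lead
  emailDeliverable: true,
  companySize: 120,
  industry: 'SaaS',
  country: 'DE',
});
console.log(lead.score, lead.tier); // → 80 Hot
```

The `hubspotStage` field is what the conditional-routing step would read when deciding which HubSpot branch to take.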
by Rohit Dabra
WooCommerce AI Agent — n8n Workflow (Overview)

Description: Turn your WooCommerce store into a conversational AI assistant — create products, place orders, run reports, and manage coupons using natural language via n8n + an MCP Server.

Key features

- Natural-language commands mapped to WooCommerce actions (products, orders, reports, coupons).
- Structured JSON outputs + lightweight mapping to avoid schema errors.
- Calls routed through your MCP Server for secure, auditable tool execution.
- Minimal user prompts — the agent auto-fetches context and asks only when necessary.
- Extensible: add new tools or customize prompts/mappings easily.

Demo of the workflow: YouTube Video

🚀 Setup Guide: WooCommerce + AI Agent Workflow in n8n

1. Prerequisites
   - Running n8n instance
   - WooCommerce store with REST API keys
   - OpenAI API key
   - MCP server (production URL)
2. Import Workflow
   - Open the n8n dashboard
   - Go to Workflows → Import
   - Upload/paste the workflow JSON
   - Save as WooCommerce AI Agent
3. Configure Credentials
   - OpenAI: create a new credential → OpenAI API, add your API key → save & test
   - WooCommerce: create a new credential → WooCommerce API, enter Base URL, Consumer Key & Secret → save & test
   - MCP Client: in the MCP Client node, set Server URL to your MCP server production URL; add authentication if required
4. Test Workflow
   - Open the workflow in the editor
   - Run a sample request (e.g., create a test product)
   - Verify the product appears in WooCommerce
5. Activate Workflow
   - Once tested, click Activate in n8n; the workflow is now live 🎉
6. Troubleshooting
   - **Schema errors** → Ensure fields match WooCommerce node requirements
   - **Connection issues** → Re-check credentials and the MCP URL
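The "lightweight mapping to avoid schema errors" step can be sketched as a coercion function that normalizes the agent's structured JSON into the types the WooCommerce REST API expects. The field names below follow the WooCommerce products endpoint; the defaults are assumptions:

```javascript
// Sketch: coerce agent output into a WooCommerce product payload.
// Note: WooCommerce expects prices as strings, not numbers.
function toWooProduct(agentOutput) {
  return {
    name: String(agentOutput.name ?? '').trim(),
    type: 'simple',                                    // assumed default product type
    regular_price: String(agentOutput.price ?? '0'),
    description: agentOutput.description ?? '',
    stock_quantity: Number.isFinite(agentOutput.stock) ? agentOutput.stock : null,
  };
}

const payload = toWooProduct({ name: ' Blue Mug ', price: 12.5, stock: 40 });
console.log(payload);
```

Normalizing types here, before the MCP tool call, is what keeps loosely typed LLM output from triggering schema errors in the WooCommerce node.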
by Rahul Joshi
Description

Automatically compare candidate resumes to job descriptions (PDFs) from Google Drive, generate a 0–100 fit score with gap analysis, and update Google Sheets — powered by Azure OpenAI (GPT-4o-mini). Fast, consistent screening with saved reports in Drive. 📈📄

What This Template Does

- Fetches job descriptions and resumes (PDF) from Google Drive. 📥
- Extracts clean text from both PDFs for analysis. 🧼
- Generates an AI evaluation (score, must-have gaps, nice-to-have bonuses, summary). 🤝
- Parses the AI output to structured JSON. 🧩
- Delivers a saved text report in Drive and updates a Google Sheet. 🗂️

Key Benefits

- Saves time with automated, consistent scoring. ⏱️
- Clear gap analysis for quick decisions. 🔍
- Audit-ready reports stored in Drive. 🧾
- Centralized tracking in Google Sheets. 📊
- No-code operation after initial setup. 🧑‍💻

Features

- Google Drive search and download for JDs and resumes. 📂
- PDF-to-text extraction for reliable parsing. 📝
- Azure OpenAI (GPT-4o-mini) comparison and scoring. 🤖
- Robust JSON parsing and error handling. 🛡️
- Automatic report creation in Drive. 💾
- Append or update candidate data in Google Sheets. 📑

Requirements

- n8n instance (cloud or self-hosted).
- Google Drive credentials in n8n with access to the JD and resume folders (e.g., "JD store", "Resume_store").
- Azure OpenAI access with a deployed GPT-4o-mini model and credentials in n8n.
- Google Sheets credentials in n8n to append or update candidate rows.
- PDFs for job descriptions and resumes stored in the designated Drive folders.

Target Audience

- Talent acquisition and HR operations teams. 🧠
- Recruiters (in-house and agencies). 🧑‍💼
- Hiring managers seeking consistent shortlisting. 🧭
- Ops teams standardizing candidate evaluation records. 🗃️

Step-by-Step Setup Instructions

1. Connect Google Drive and Google Sheets credentials in n8n and verify folder access. 🔑
2. Add Azure OpenAI credentials and select GPT-4o-mini in the AI node. 🧠
3. Import the workflow and assign credentials to all nodes (Drive, AI, Sheets). 📦
4. Set folder references for JDs ("JD store") and resumes ("Resume_store"). 📁
5. Run once to validate extraction, scoring, report creation, and sheet updates. ✅
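The "robust JSON parsing and error handling" step can be sketched like this. The expected fields (`score`, `mustHaveGaps`, `summary`) mirror the evaluation described above; the markdown-fence cleanup and the error shape are assumptions about how the model responds:

```javascript
// Sketch: parse the AI evaluation into structured JSON without crashing the run.
function parseEvaluation(raw) {
  // Strip a ```json … ``` fence if the model wrapped its output in markdown.
  const cleaned = raw.replace(/^```(?:json)?\s*/i, '').replace(/\s*```$/, '').trim();
  try {
    const data = JSON.parse(cleaned);
    // Clamp the score to the documented 0–100 range.
    const score = Math.min(100, Math.max(0, Number(data.score) || 0));
    return { ok: true, score, mustHaveGaps: data.mustHaveGaps ?? [], summary: data.summary ?? '' };
  } catch (err) {
    // Surface a structured error instead of throwing, so the row can be
    // flagged in Sheets rather than aborting the whole batch.
    return { ok: false, score: 0, error: `Invalid AI output: ${err.message}` };
  }
}

const result = parseEvaluation(
  '```json\n{"score": 82, "mustHaveGaps": ["Kubernetes"], "summary": "Strong fit"}\n```'
);
console.log(result.ok, result.score); // → true 82
```

Returning `{ ok: false, … }` instead of throwing is what makes the screening audit-friendly: malformed AI responses become visible rows rather than silent failures.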
by Moka Ouchi
How it works

This workflow automates the creation and management of a daily space-themed quiz in your Slack workspace. It's a fun way to engage your team and learn something new about the universe every day!

- **Triggers Daily:** The workflow automatically runs at a scheduled time every day.
- **Fetches NASA's Picture of the Day:** It starts by fetching the latest Astronomy Picture of the Day (APOD) from the official NASA API, including its title, explanation, and image URL.
- **Generates a Quiz with AI:** Using the information from NASA, it prompts a Large Language Model (LLM) like OpenAI's GPT to create a unique, multiple-choice quiz question.
- **Posts to Slack:** The generated quiz is then posted to a designated Slack channel. The bot automatically adds numbered reactions (1️⃣, 2️⃣, 3️⃣, 4️⃣) to the message, allowing users to vote.
- **Waits and Tallies Results:** After a configurable waiting period, the workflow retrieves all reactions on the quiz message. A custom Code node then tallies the votes, identifies the users who answered correctly, and calculates the total number of participants.
- **Announces the Winner:** Finally, it posts a follow-up message in the same channel, revealing the correct answer and a detailed explanation, and mentioning all the users who got it right.

Set up steps

This template should take about 10–15 minutes to set up.

1. Credentials:
   - NASA: Add your NASA API credentials in the Get APOD node. You can get a free API key from the NASA API website.
   - OpenAI: Add your OpenAI API credentials in the OpenAI: Create Quiz node.
   - Slack: Add your Slack API credentials to all the Slack nodes. You'll need to create a Slack App with the following permissions: chat:write, reactions:read, and reactions:write.
2. Configuration: In the Workflow Configuration node, set channelId to the Slack channel where you want the quiz to be posted. You can also customize quizDifficulty, llmTone, and answerTimeoutMin to fit your audience.
3. Activate Workflow: Once configured, simply activate the workflow. It will run automatically at the time specified in the Schedule Trigger node (default is 21:00 daily).

Requirements

- An n8n instance
- A NASA API Key
- An OpenAI API Key
- A Slack App with the appropriate permissions and API credentials
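The tally Code node described above can be sketched as follows. Slack's `reactions.get` returns each reaction's emoji name (e.g. `"one"`, `"two"`) plus the user IDs who reacted; the emoji-to-choice mapping and the filtering of the bot's own seed reactions are assumptions about how the template wires this up:

```javascript
// Sketch: tally quiz votes from Slack reactions.
const EMOJI_TO_CHOICE = { one: 1, two: 2, three: 3, four: 4 };

function tallyQuiz(reactions, correctChoice, botUserId) {
  const votes = {};
  const voters = new Set();

  for (const r of reactions) {
    const choice = EMOJI_TO_CHOICE[r.name];
    if (!choice) continue; // ignore unrelated emoji
    // Exclude the bot's own seed reactions from the count.
    const users = r.users.filter((u) => u !== botUserId);
    votes[choice] = users.length;
    users.forEach((u) => voters.add(u));
  }

  // Users who reacted with the correct answer's emoji.
  const winners = (reactions.find((r) => EMOJI_TO_CHOICE[r.name] === correctChoice)?.users ?? [])
    .filter((u) => u !== botUserId);

  return { votes, totalParticipants: voters.size, winners };
}

const result = tallyQuiz(
  [
    { name: 'one', users: ['BOT', 'U1'] },
    { name: 'two', users: ['BOT', 'U2', 'U3'] },
  ],
  2,     // hypothetical correct choice
  'BOT'  // hypothetical bot user ID
);
console.log(result);
```

The `winners` array is what feeds the follow-up message that mentions everyone who answered correctly.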
by koichi nagino
Description

Start your day with the perfect outfit suggestion tailored to the local weather. This workflow runs automatically every morning, fetches the current weather forecast for your city, and uses an AI stylist to generate a practical, gender-neutral outfit recommendation. It then designs a clean, vertical image card with all the details — date, temperature, weather conditions, and the complete outfit advice — and posts it directly to your Slack channel. It's like having a personal stylist and weather reporter deliver a daily briefing right where your team communicates.

Who's it for

- Teams working in a shared office location who want a fun, daily update.
- Individuals looking to automate their morning routine and take the guesswork out of getting dressed.
- Community managers wanting to add engaging, automated content to their Slack workspace.
- Anyone interested in a practical example of combining weather data, AI, and dynamic image generation.

How it works / What it does

1. Triggers Daily: The workflow automatically runs every day at 6 AM.
2. Fetches Weather: It gets the current weather forecast for a specified city (default is Tokyo) using the OpenWeatherMap node.
3. Consults AI Stylist: The weather data is sent to an AI model, which acts as a stylist and returns a practical, gender-neutral outfit suggestion.
4. Designs an Image Card: It dynamically creates a vertical image and writes the date, detailed weather info, and the AI's full recommendation onto it.
5. Posts to Slack: Finally, it uploads the completed image card to your designated Slack channel with a friendly morning greeting.

Requirements

- An n8n instance.
- An OpenWeatherMap API Key.
- An OpenRouter API Key (or credentials for another compatible AI model).
- A Slack workspace and the necessary permissions to connect an app.

How to set up

1. Set Weather Location: In the Get Weather Data node, add your OpenWeatherMap API Key and change the city name if you wish.
2. Configure AI Model: In the OpenRouter Chat Model node, add your API Key.
3. Configure Slack: In the Upload a file node, add your Slack credentials and, most importantly, select the channel where you want the forecast to be posted.
4. Adjust Schedule (Optional): You can change the trigger time in the Daily 6AM Trigger node.

How to customize the workflow

- Change the AI's Personality: Edit the system message in the Generate Outfit Advice node. You could ask the AI to be a pirate, a 90s fashion icon, or a formal stylist.
- Customize the Image: In the Create Image Card node, you can change the background color, font sizes, colors, and the layout of the text.
- Use a Different Platform: Swap the Slack node for a Discord, Telegram, or Email node to send the forecast to your preferred platform.
by Guillaume Duvernay
Create truly authoritative articles that blend your unique, internal expertise with the latest, most relevant information from the web. This template orchestrates an advanced "hybrid research" content process that delivers unparalleled depth and credibility.

Instead of a simple prompt, this workflow first uses an AI planner to deconstruct your topic into key questions. Then, for each question, it performs a dual-source query: it searches your trusted Lookio knowledge base for internal facts and simultaneously uses Linkup to pull fresh insights and sources from the live web. This comprehensive "super-brief" is then handed to a powerful AI writer to compose a high-quality article, complete with citations from both your own documents and external web pages.

👥 Who is this for?

- **Content Marketers & SEO Specialists:** Scale the creation of authoritative content that is both grounded in your brand's facts and enriched with timely, external sources for maximum credibility.
- **Technical Writers & Subject Matter Experts:** Transform complex internal documentation into rich, public-facing articles by supplementing your core knowledge with external context and recent data.
- **Marketing Agencies:** Deliver exceptional, well-researched articles for clients by connecting the workflow to their internal materials (via Lookio) and the broader web (via Linkup) in one automated process.

💡 What problem does this solve?

- **The Best of Both Worlds:** Combines the factual reliability of your own knowledge base with the timeliness and breadth of a web search, resulting in articles with unmatched depth.
- **Minimizes AI "Hallucinations":** Grounds the AI writer in two distinct sets of factual, source-based information — your internal documents and credible web pages — dramatically reducing the risk of invented facts.
- **Maximizes Credibility:** Automates the inclusion of source links from **both** your internal knowledge base and external websites, boosting reader trust and demonstrating thorough research.
- **Ensures Comprehensive Coverage:** The AI-powered "topic breakdown" ensures a logical structure, while the dual-source research for each point guarantees no stone is left unturned.
- **Fully Automates an Expert Workflow:** Mimics the entire process of an expert research team (outline, internal review, external research, consolidation, writing) in a single, scalable workflow.

⚙️ How it works

This workflow orchestrates a sophisticated, multi-step "Plan, Dual-Research, Write" process:

1. Plan (Decomposition): You provide an article title and guidelines via the built-in form. An initial AI call acts as a "planner," breaking down the main topic into an array of logical sub-questions.
2. Dual Research (Knowledge Base + Web Search): The workflow loops through each sub-question and performs two research actions in parallel:
   - It queries your Lookio assistant to retrieve relevant information and source links from your uploaded documents.
   - It queries Linkup to perform a targeted web search, gathering up-to-date insights and their source URLs.
3. Consolidate (Brief Creation): All the retrieved information — internal and external — is compiled into a single, comprehensive research brief for each sub-question.
4. Write (Final Generation): The complete, source-rich brief is handed to a final, powerful AI writer (e.g., GPT-5). Its instructions are clear: write a high-quality article based only on the provided research and integrate all source links as hyperlinks.

🛠️ Setup

1. Set up your Lookio assistant: Sign up at Lookio, upload your documents to create a knowledge base, and create a new assistant. In the Query Lookio Assistant node, paste your Assistant ID in the body and add your Lookio API Key for authentication (we recommend a Bearer Token credential).
2. Connect your Linkup account: In the Query Linkup for AI web-search node, add your Linkup API key for authentication (we recommend a Bearer Token credential). Linkup's free plan is very generous.
3. Connect your AI provider: Connect your AI provider (e.g., OpenAI) credentials to the two Language Model nodes.
4. Activate the workflow: Toggle the workflow to "Active" and use the built-in form to generate your first hybrid-research article!

🚀 Taking it further

- **Automate Publishing:** Connect the final **Article result** node to a **Webflow** or **WordPress** node to automatically create draft posts in your CMS.
- **Generate Content in Bulk:** Replace the **Form Trigger** with an **Airtable** or **Google Sheet** trigger to generate a batch of articles from your content calendar.
- **Customize the Writing Style:** Tweak the system prompt in the final **New content - Generate the AI output** node to match your brand's tone of voice, prioritize internal vs. external sources, or add SEO keywords.
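The consolidation step (merging each sub-question's Lookio and Linkup results into one brief) can be sketched as below. The field names (`text`, `url`) are assumptions about the two APIs' response shapes; the markdown layout of the brief is illustrative:

```javascript
// Sketch: merge internal (Lookio) and web (Linkup) results into a research brief.
function buildBrief(subQuestions) {
  return subQuestions
    .map(({ question, internal, web }) =>
      [
        `## ${question}`,
        `### Internal knowledge base`,
        ...internal.map((s) => `- ${s.text} (source: ${s.url})`),
        `### Web research`,
        ...web.map((s) => `- ${s.text} (source: ${s.url})`),
      ].join('\n')
    )
    .join('\n\n');
}

const brief = buildBrief([
  {
    question: 'What is hybrid research?',
    internal: [{ text: 'Defined in our playbook', url: 'https://example.com/playbook' }],
    web: [{ text: 'Recent industry overview', url: 'https://example.com/overview' }],
  },
]);
console.log(brief);
```

Keeping internal and web sources under separate headings per sub-question is what lets the final AI writer cite both kinds of sources and lets you tune the prompt to prioritize one over the other.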
by Cheng Siong Chin
How It Works

Daily triggers automatically fetch fleet data and simulate key performance metrics for each vehicle. An AI agent analyzes maintenance requirements, detects potential issues, and routes alerts according to urgency levels. Fleet summaries are aggregated, logged into the database for historical tracking, and AI-enhanced insights are parsed to provide actionable information. Slack notifications are then sent to the relevant teams, ensuring timely monitoring, informed decisions, and proactive fleet management.

Setup Steps

1. Configure daily triggers to automatically fetch, process, and update fleet data.
2. Connect Slack, the database, and AI APIs to enable notifications and analytical processing.
3. Set AI parameters and provide API keys for accessing the models and ensuring proper scoring.
4. Configure PostgreSQL to log all fleet data, summaries, and alerts for historical tracking.
5. Define Slack channels to receive real-time alerts, summaries, and actionable insights for the team.

Prerequisites

Slack workspace, database access, AI account (OpenRouter or compatible), fleet data source, n8n instance

Use Cases

Fleet monitoring, predictive maintenance, multi-vehicle management, cost optimization, emergency alerts, compliance tracking

Customization

Adjust AI parameters, alert thresholds, and Slack message formatting; integrate alternative data sources; add email notifications; expand logging

Benefits

Prevent breakdowns, reduce manual monitoring, enable data-driven decisions, centralize alerts, scale across vehicles, gain AI-powered insights
by Santhej Kallada
In this tutorial, I'll show how to create UGC (User-Generated Content) videos automatically using n8n and Sora 2. This workflow uses OpenAI to generate detailed prompts and Sora 2 to produce realistic UGC-style videos that look natural and engaging.

Who is this for?

- Marketers and social media managers scaling short-form video content
- Agencies producing branded or influencer-style content
- Content creators and freelancers automating their video workflows
- Anyone exploring AI-driven video generation and automation

What problem is this workflow solving?

Creating authentic, human-like UGC videos manually takes time and effort. This workflow automates the entire process by:

- Generating engaging scripts or prompts via OpenAI
- Sending those prompts to Sora 2 for automatic video generation
- Managing rendering and delivery inside n8n
- Eliminating manual editing and production steps

What this workflow does

This workflow connects n8n, OpenAI, and Sora 2 to fully automate the creation of short-form UGC videos. The steps include:

1. Taking user input (topic, tone, niche).
2. Using OpenAI to create a detailed video prompt.
3. Sending the prompt to Sora 2 via HTTP Request to generate the video.
4. Handling video rendering and storing or sending results automatically.

By the end, you'll have a complete UGC video pipeline running on autopilot — producing content for under $1.50 per video.

Setup

1. Create accounts: Sign up for n8n.io (cloud or self-hosted), and get access to the OpenAI API and Sora 2.
2. Generate API keys: Retrieve API keys from OpenAI and Sora 2, and store them securely in n8n credentials.
3. Create the workflow:
   - Add a Form Trigger or Webhook Trigger for input (topic, target audience).
   - Add an OpenAI node to generate script prompts.
   - Connect an HTTP Request node to send the prompt to Sora 2.
   - Use a Wait node or delay logic for video rendering completion.
   - Store or send the output video file via Gmail, Telegram, or Google Drive.
4. Test the workflow: Run a test topic and confirm that Sora 2 generates and returns a video automatically.

How to customize this workflow to your needs

- Adjust OpenAI prompts for specific video styles (tutorials, product demos, testimonials).
- Integrate video output with social media platforms via n8n nodes.
- Add text-to-speech layers for voiceover automation.
- Schedule automatic content creation using Cron triggers.
- Connect with Notion or Airtable to manage content ideas.

Notes

- You'll need valid API keys for both OpenAI and Sora 2.
- Sora 2 may charge per render (approx. $1–$1.50 per video).
- Ensure your workflow includes sufficient delay/wait handling for video rendering.
- Works seamlessly on n8n Cloud or self-hosted setups.

Want a video tutorial on how to set up this automation? 👉 Watch on YouTube
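The "delay/wait handling for video rendering" mentioned above can be sketched as a polling loop with exponential backoff. The status values (`completed`, `failed`) and the idea of re-checking a job endpoint are assumptions about the Sora 2 API; the retry schedule itself is the reusable part:

```javascript
// Sketch: exponential-backoff schedule for polling a video render job.
// Returns the delay in seconds before each attempt, capped at 5 minutes.
function pollSchedule(maxAttempts = 10, baseDelaySec = 30) {
  return Array.from({ length: maxAttempts }, (_, i) =>
    Math.min(300, baseDelaySec * 2 ** i)
  );
}

// checkStatus and sleep are injected so this logic can live in a Code node
// or be replaced by n8n's Wait node between HTTP Request retries.
async function waitForRender(checkStatus, sleep) {
  for (const delay of pollSchedule()) {
    await sleep(delay);
    const status = await checkStatus(); // e.g. GET the render-job endpoint
    if (status === 'completed') return true;
    if (status === 'failed') return false;
  }
  return false; // timed out
}

console.log(pollSchedule(4)); // → [ 30, 60, 120, 240 ]
```

Backing off rather than polling at a fixed interval keeps API usage low for long renders while still picking up fast ones quickly.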
by Antonio Gasso
Process multiple invoices automatically using Mistral's dedicated OCR model — at approximately $0.002 per page. Upload batches of PDF, PNG, or JPG invoices through a simple form, extract structured financial data with AI, validate results with confidence scoring, and save everything to Google Sheets.

What this workflow does

1. Accepts multiple invoice uploads via the n8n Form Trigger
2. Processes files in batch with rate limiting
3. Converts each file to base64 and sends it to the Mistral OCR API
4. Extracts 9 standard fields using the GPT-4o-mini Information Extractor
5. Validates data and assigns confidence scores (high/medium/low)
6. Saves all results to Google Sheets with status tracking

Fields extracted

Invoice Number, Date, Vendor Name, Tax ID, Subtotal, Tax Rate, Tax Amount, Total Amount, Currency

Use cases

- Accountants processing client invoices in bulk
- Small businesses digitizing paper receipts
- Bookkeepers automating repetitive data entry
- Finance teams building searchable invoice databases

Setup requirements

- Mistral API Key (console.mistral.ai) — HTTP Header Auth credential
- OpenAI API Key (platform.openai.com)
- Google Sheets OAuth connection
- Google Sheet with 15 columns (template in workflow notes)
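The base64-conversion step can be sketched as a payload builder for the OCR call. The request shape below follows Mistral's document-OCR API (`model` + `document` with a data URL), but treat the exact field names as assumptions and verify them against the current API reference:

```javascript
// Sketch: convert an invoice file to base64 and build the Mistral OCR request body.
function buildOcrRequest(fileBuffer, mimeType = 'application/pdf') {
  const base64 = fileBuffer.toString('base64');
  return {
    model: 'mistral-ocr-latest',
    document: {
      type: 'document_url',
      // Embed the file inline as a data URL instead of hosting it.
      document_url: `data:${mimeType};base64,${base64}`,
    },
  };
}

// In n8n, the buffer would come from the Form Trigger's binary data.
const payload = buildOcrRequest(Buffer.from('%PDF-1.4 ...'));
console.log(payload.model);
```

This object would be the JSON body of the HTTP Request node, with the Mistral API key supplied via the HTTP Header Auth credential; for PNG/JPG uploads the `mimeType` argument changes accordingly.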