by InfyOm Technologies
✅ What problem does this workflow solve?

Accounting teams spend hours manually entering purchase bills into accounting systems—copying vendor details, creating items, checking duplicates, and reconciling totals. This workflow removes that manual effort entirely. With OCR + AI + QuickBooks integration, this automation converts uploaded purchase bills into fully reconciled QuickBooks bills—accurately, consistently, and without human intervention.

⚙️ What does this workflow do?

- Accepts multiple purchase bills in a single upload
- Extracts structured invoice data using OCR + AI
- Automatically syncs vendors and items with QuickBooks
- Creates missing vendors or items when needed
- Generates clean, validated bills inside QuickBooks
- Prevents duplicate vendors or line items

🧠 How It Works – Step-by-Step

1. 📤 Upload Purchase Bills
- Users upload one or multiple PDF bills using an n8n form
- Each bill is automatically split and processed individually

2. 🔍 OCR & Invoice Data Extraction
- The workflow extracts text from each PDF
- An AI extraction engine powered by OpenAI/OpenRouter identifies:
  - Invoice number & dates
  - Vendor details
  - Line items (name, quantity, price, amount)
  - Subtotal, tax, and total

3. 🔄 Item & Vendor Reconciliation (QuickBooks)
- Fetches existing items from QuickBooks
- If an item does not exist, automatically creates it
- Checks if the vendor exists and creates a new vendor if missing
- Ensures zero duplicates in QuickBooks

4. 🧾 Bill Payload Creation
- Builds a clean QuickBooks-compatible bill payload (sketched at the end of this description)
- Maps items, vendor, dates, taxes, and totals
- Handles edge cases like missing quantities or unit prices

5. 💰 Bill Creation in QuickBooks
- Creates a finalized bill inside QuickBooks
- Each bill is immediately ready for reconciliation and reporting

🛠 Tools & Integrations Used

- **n8n Form Trigger** – Bill upload
- **PDF Extractor** – Text extraction
- **AI Invoice Parser** – Structured data extraction
- **QuickBooks API** – Vendor, item, and bill creation
- **OpenAI / OpenRouter** – Intelligent field mapping

💡 Key Benefits

- ⏱ Eliminates hours of manual bill entry
- 🧠 Intelligent OCR with structured extraction
- 🚫 No duplicate vendors or items
- ⚡ Instant QuickBooks synchronization
- 📊 Accurate accounting data every time

👤 Who can use this?

Perfect for:
- 🧾 Accounting teams
- 🏢 Finance departments
- 📈 SMBs using QuickBooks
- 🚀 SaaS platforms automating bookkeeping

If you're processing large volumes of purchase bills, this workflow turns documents into structured accounting data—automatically.
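To make step 4 concrete, here is a minimal sketch of a QuickBooks-compatible bill payload built in an n8n Code node. The field names follow the QuickBooks Online Bill API, but the incoming property names (`vendorId`, `lineItems`, etc.) are illustrative assumptions about what the AI parser emits:

```javascript
// n8n Code node: build a QuickBooks Bill payload from extracted invoice data.
// Vendor/item IDs are resolved in earlier reconciliation steps; names below are assumed.
const invoice = $input.first().json;

const bill = {
  VendorRef: { value: invoice.vendorId },   // QuickBooks vendor ID
  TxnDate: invoice.invoiceDate,             // e.g. "2024-05-01"
  DocNumber: invoice.invoiceNumber,
  Line: invoice.lineItems.map((item) => ({
    DetailType: 'ItemBasedExpenseLineDetail',
    // Edge-case fallback: derive the amount when quantity or unit price is missing.
    Amount: item.amount ?? (item.quantity ?? 1) * (item.unitPrice ?? 0),
    ItemBasedExpenseLineDetail: {
      ItemRef: { value: item.quickbooksItemId },
      Qty: item.quantity ?? 1,
      UnitPrice: item.unitPrice ?? item.amount,
    },
  })),
};

return [{ json: bill }];
```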
by Rosh Ragel
Automatically Upload Expenses to QuickBooks from Google Sheets

What It Does

This n8n workflow template automates the process of uploading categorized expenses from Google Sheets into QuickBooks Online. It uses your Google Sheets data to create expense entries in QuickBooks with minimal manual effort, streamlining the accounting process.

Prerequisites

- **QuickBooks Online Credential**: Set up your QuickBooks Online connection in n8n for expense creation.
- **Google Sheets Credential**: Set up your Google Sheets connection in n8n to read and write data.

How It Works

1. **Refresh Google Sheets Data**: The workflow first refreshes the list of vendors and chart of accounts from your Google Sheets template.
2. **Import Bank Transactions**: Open the provided Google Sheets template and copy-paste your transactions from your online banking CSV file.
3. **Categorize Transactions**: Quickly categorize the transactions in Google Sheets, or assign this task to a team member.
4. **Run the Workflow**: Once the transactions are categorized, run the workflow again, and each expense will be created automatically in QuickBooks Online.

Example Use Cases

- **Small Business Owners**: Automatically track and upload monthly expenses to QuickBooks Online without manually entering data.
- **Accountants**: Automate the transfer of bank transactions to QuickBooks, streamlining the financial process.
- **Bookkeepers**: Quickly categorize and upload business expenses to QuickBooks with minimal effort.

Setup Instructions

1. **Connect Your Google Sheets and QuickBooks Credentials**: In n8n, connect your Google Sheets and QuickBooks accounts. Follow the credential setup instructions for both services.
2. **Set Up the Google Sheets Node**: Link the specific Google Sheet that contains your expense data. Make sure the sheet includes the correct columns for transactions, vendors, and accounts.
3. **Set Up the QuickBooks Node**: Configure the QuickBooks Online node to create expense entries in QuickBooks from the data in your Google Sheets.
4. **Set Up the HTTP Node for API Calls**: Use the HTTP node to make custom API calls to QuickBooks (a sketch of such a call appears at the end of this description).
5. **Configure the QuickBooks Realm ID**: Obtain the QuickBooks Realm ID from your QuickBooks Online Developer account to use for custom API calls. This ensures the workflow targets the correct QuickBooks instance.

How to Use

1. **Import Transactions**: Copy and paste your bank transactions from the CSV into the provided Google Sheets template.
2. **Categorize Transactions**: Manually categorize the transactions in the sheet, or delegate this task to another person to ensure they're correctly tagged (e.g., Utilities, Office Supplies, Travel).
3. **Run the Workflow**: Execute the workflow to automatically upload the categorized expenses into QuickBooks.
4. **Verify in QuickBooks**: After the workflow runs, log into QuickBooks Online to confirm the expenses have been created and categorized correctly.

Free Google Sheets Template

To get started quickly, download my free Google Sheets template that includes pre-configured sheets for bank transactions, vendors, and chart of accounts. This template makes it easier to import and categorize your expenses before running the n8n workflow.

Download the Free Google Sheets Template

Customization Options

- **Category Mapping**: Customize how categories in Google Sheets are mapped to QuickBooks expense types.
- **Additional API Calls**: Add custom API calls if you need extra functionality, such as creating custom reports or syncing additional data.
- **Notifications**: Configure email or Slack notifications to alert you when the expenses have been successfully uploaded.

Why It's Useful

- **Time-Saving**: Automatically upload and categorize expenses in QuickBooks without needing to enter them manually.
- **Error Reduction**: Minimize human error by automating the process of uploading and categorizing transactions.
- **Efficiency**: Connects Google Sheets to QuickBooks, making it easy to manage expenses in one place without toggling between multiple apps.
- **Accuracy**: Syncs data between Google Sheets and QuickBooks in a structured, automated way for consistent and reliable financial reporting.
- **Flexibility**: Allows external users or lower-permission employees to categorize financial transactions without giving them direct access to QuickBooks Online.
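For setup step 4, here is a hedged sketch of a custom expense call expressed as n8n Code-node style JavaScript. The Purchase endpoint and field names are real QuickBooks Online API shapes, but the realm ID, account references, and sheet column names are placeholder assumptions you must replace:

```javascript
// Hypothetical custom QuickBooks call, expressed as code for clarity.
// Replace realmId and the AccountRef/EntityRef values with IDs from your company file.
const realmId = 'YOUR_REALM_ID';
const url = `https://quickbooks.api.intuit.com/v3/company/${realmId}/purchase`;

const expense = {
  PaymentType: 'Cash',                  // or 'CreditCard' / 'Check'
  AccountRef: { value: '35' },          // bank or credit account the money left from
  TxnDate: $json.date,                  // row data read from the Google Sheet
  EntityRef: { value: $json.vendorId, type: 'Vendor' },
  Line: [{
    DetailType: 'AccountBasedExpenseLineDetail',
    Amount: Number($json.amount),
    AccountBasedExpenseLineDetail: {
      AccountRef: { value: $json.categoryAccountId }, // mapped expense category
    },
  }],
};

// In the HTTP Request node: POST `url` with JSON body `expense`,
// authenticated with your QuickBooks OAuth2 credential.
return [{ json: { url, expense } }];
```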
by Marth
How It Works: The 5-Node Monitoring Flow

This concise workflow efficiently captures, filters, and delivers crucial cybersecurity-related mentions.

1. Monitor: Cybersecurity Keywords (X/Twitter Trigger)

This is the entry point of your workflow. It actively searches X (formerly Twitter) for tweets containing the specific keywords you define.

- **Function:** Continuously polls X for tweets that match your specified queries (e.g., your company name, "Log4j", "CVE-2024-XXXX", "ransomware").
- **Process:** As soon as a matching tweet is found, it triggers the workflow to begin processing that information.

2. Format Notification (Code Node)

This node prepares the raw tweet data, transforming it into a clean, actionable message for your alerts.

- **Function:** Extracts key details from the raw tweet and structures them into a clear, concise message.
- **Process:** It pulls out the tweet's text, the user's handle (@screen_name), and the direct URL to the tweet. These pieces are combined into a user-friendly notificationMessage. You can also include basic filtering logic here if needed (a sketch of this Code node appears at the end of this description).

3. Valid Mention? (If Node)

This node acts as a quick filter to reduce noise and prevent irrelevant alerts from reaching your team.

- **Function:** Serves as a simple conditional check to validate the mention's relevance.
- **Process:** It evaluates the notificationMessage against specific criteria (e.g., ensuring it doesn't contain common spam words like "bot"). If the mention passes this basic validation, the workflow continues; otherwise, it quietly ends for that particular tweet.

4. Send Notification (Slack Node)

This is the delivery mechanism for your alerts, ensuring your team receives instant, visible notifications.

- **Function:** Delivers the formatted alert message directly to your designated communication channel.
- **Process:** The notificationMessage is sent straight to your specified **Slack channel** (e.g., #cyber-alerts or #security-ops).

5. End Workflow (No-Op Node)

This node simply marks the successful completion of the workflow's execution path.

- **Function:** Indicates the end of the workflow's process for a given trigger.

How to Set Up

Implementing this simple cybersecurity monitor in your n8n instance is quick and straightforward.

1. Prepare Your Credentials

Before building the workflow, ensure all necessary accounts are set up and their respective credentials are ready for n8n.

- **X (Twitter) API:** You'll need an X (Twitter) developer account to create an application and obtain your Consumer Key/Secret and Access Token/Secret. Use these to set up your **Twitter credential** in n8n.
- **Slack API:** Set up your **Slack credential** in n8n. You'll also need the **Channel ID** of the Slack channel where you want your security alerts to be posted (e.g., #security-alerts or #it-ops).

2. Import the Workflow JSON

Get the workflow structure into your n8n instance.

- **Import:** In your n8n instance, go to the "Workflows" section. Click the "New" or "+" icon, then select "Import from JSON." Paste the provided JSON code into the import dialog and import the workflow.

3. Configure the Nodes

Customize the imported workflow to fit your specific monitoring needs.

- **Monitor: Cybersecurity Keywords (X/Twitter):** Click on this node. Select your newly created Twitter credential. CRITICAL: Modify the "Query" parameter to include your specific brand names, relevant CVEs, or general cybersecurity terms. For example: "YourCompany" OR "CVE-2024-1234" OR "phishing alert". Use OR to combine multiple terms.
- **Send Notification (Slack):** Click on this node. Select your Slack credential. Replace "YOUR_SLACK_CHANNEL_ID" with the actual Channel ID you noted earlier for your security alerts. (Optional: You can adjust the "Valid Mention?" node's condition if you find specific patterns of false positives in your search results that you want to filter out.)

4. Test and Activate

Verify that your workflow is working correctly before setting it live.

- **Manual Test:** Click the "Test Workflow" button (usually in the top right corner of the n8n editor). This executes the workflow once.
- **Verify Output:** Check your specified Slack channel to confirm that any detected mentions are sent as notifications in the correct format. If no matching tweets are found, you won't see a notification, which is expected.
- **Activate:** Once you're satisfied with the test results, toggle the "Active" switch (usually in the top right corner of the editor) to ON. Your workflow will now automatically monitor X (Twitter) at the specified polling interval.
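As promised in node 2 above, here is a minimal sketch of the Format Notification Code node. It assumes the trigger outputs classic Twitter API v1.1 field names (`text`, `user.screen_name`, `id_str`); adjust these to match what your X trigger actually returns:

```javascript
// n8n Code node sketch for "Format Notification".
// Field names below are assumptions based on Twitter API v1.1 output.
const items = $input.all();

return items.map((item) => {
  const tweet = item.json;
  const handle = tweet.user?.screen_name ?? 'unknown';
  const url = `https://twitter.com/${handle}/status/${tweet.id_str}`;

  // Build the user-friendly message the If and Slack nodes consume downstream.
  return {
    json: {
      notificationMessage:
        `🚨 New mention by @${handle}:\n${tweet.text}\n🔗 ${url}`,
    },
  };
});
```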
by DIGITAL BIZ TECH
AI-Powered LinkedIn Post Generator Workflow

Overview

This workflow is a two-part intelligent content creation system built in n8n, designed to generate professional and on-brand LinkedIn posts. It combines a conversational frontend agent that interacts naturally with users and a backend post-generation engine powered by structured templates and Mistral Cloud AI models.

Workflow Structure

- **Frontend:** Conversational "LinkedIn Agent" that guides the user.
- **Backend:** "Post Generator" engine that produces final, high-quality content using dynamic templates.

LinkedIn Agent (Frontend Flow)

- **Trigger:** When chat message received — starts the workflow whenever a user sends a message to the chatbot or embedded interface.
- **Agent:** LinkedIn Agent — welcomes the user and lists 7 available post templates:
  1. Educational
  2. Promotional
  3. Discussion
  4. Case Study & Testimonial
  5. News
  6. Personal
  7. General
  It prompts the user to select a template number, asks for a topic after the user's choice, and sends both template number and topic to the backend using a Tool call.
- **Memory:** Simple Memory1 — stores the last 10 messages to maintain conversational context.
- **LLM Model:** Mistral Cloud Chat Model1 — used for reasoning, conversational responses, and user guidance.
- **Tool Used:** template — invokes another trigger in the same workflow (When Executed by Another Workflow) and passes the user's chosen template and topic to the backend.

Post Generation Engine (Backend Flow)

- **Trigger:** When Executed by Another Workflow — receives the payload from the template tool (template ID + topic).
- **Router Node:** Switch between templates — directs flow to the correct post template logic based on the user's choice (1–7). Example: 1 → Knowledge & Educational, 2 → Promotion, 3 → Discussion, 4 → Case Study & Testimonial, etc.
- **Prompt Template Nodes:** Each Set node defines a large, structured prompt containing (a hedged sketch appears at the end of this description):
  - Specific tone, audience, and purpose rules
  - Example hooks and CTAs
  - Layout and line formatting instructions
  - A "FORBIDDEN PHRASES" list (e.g., no "game-changer", "revolutionary")
- **Expert Writer Agent:** post generator — a specialized agent node that receives the selected prompt template and generates the final LinkedIn post text using strict formatting and tone rules. Model: Mistral Cloud Chat Model.
- **Output:** The generated post text is sent back to the template tool and displayed to the user in chat.

Integrations Used

| Service | Purpose | Credential |
|----------|----------|-------------|
| Mistral Cloud | LLM & post generation | Mistral Cloud account dbt |
| n8n Agent Framework | Multi-agent orchestration | Native |
| Chat UI / Webhook | Frontend interaction | Custom embedded UI or webhook trigger |

Agent System Prompt Summary

> "You are an intelligent LinkedIn assistant that helps users craft posts. List available templates, guide them to select one, and collect a topic. Then use the provided template tool to request the backend writer to generate a final post."

Backend writer's system prompt:

> "You are an expert LinkedIn marketing leader. Generate structured, professional posts for AI/automation topics. Avoid hype, buzzwords, and clichés. Keep sentences short, tone confident, and use strong openers."

Key Features

- Dual-agent architecture (Frontend Assistant + Backend Writer)
- 7 dynamic content templates for flexibility
- Conversational chat interface for ease of use
- Strict brand tone enforcement with style rules
- Fully automated generation and return of the final post in chat

Summary

> A modular, agent-based n8n workflow for automated LinkedIn post creation, featuring conversational input, structured templates, and AI-generated output powered by Mistral Cloud. Perfect for content teams, social media managers, and AI automation startups.

Need Help or More Workflows?

Want to customize this workflow for your business? Our team at Digital Biz Tech can tailor it precisely to your use case — from automation logic to AI-powered content engines. We can help you set it up for free — from connecting credentials to deploying it live.

Contact: rajeet.nair@digitalbiz.tech
Website: https://www.digitalbiz.tech
LinkedIn: https://www.linkedin.com/company/digital-biz-tech/

You can also DM us on LinkedIn for any help.
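To round out the Prompt Template Nodes described above, here is a hedged sketch of how one template (1: Knowledge & Educational) can be defined in an n8n Code/Set-style node. The concrete rules and forbidden phrases below are illustrative assumptions, not the workflow's actual prompt:

```javascript
// Sketch of one prompt-template definition; the rule text is assumed, not verbatim.
const topic = $json.topic;

const educationalPrompt = `
You are an expert LinkedIn marketing leader writing an EDUCATIONAL post about: ${topic}

RULES:
- Audience: professionals in AI and automation
- Open with a strong one-line hook, then 3-5 short insight lines
- End with a question-style CTA to invite discussion
- Keep every sentence short and confident

FORBIDDEN PHRASES: "game-changer", "revolutionary", "unlock", "in today's fast-paced world"
`;

return [{ json: { prompt: educationalPrompt } }];
```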
by Cong Nguyen
📄 What this workflow does

This workflow turns your n8n into an automated product-video generator powered by Google Sheets. When a new row is added with status = run, it:

1. Downloads the product image from Google Drive.
2. Converts the image to base64 and sends it to Gemini, which creates a branded ad-style variant.
3. Saves the generated image back into a designated Google Drive folder.
4. Sends the image to FAL (image-to-video) to generate a short promotional video clip.
5. Polls FAL's response_url until the video is ready.
6. Uploads the video to Google Drive (videos folder).
7. Updates the original Google Sheet row with the video link and sets status = finished.
8. Handles API latency via wait/polling and logs failures into the sheet if needed.

👤 Who is this for

- Marketing teams automating creative asset production.
- E-commerce businesses needing quick product promo videos.
- Agencies creating branded ad content at scale.

✅ Requirements

- An n8n instance.
- A Google Sheet with at least these columns: STT, link_image, note, status, link_video.
- Google Sheets & Google Drive OAuth2 credentials connected in n8n.
- Gemini API key (for ad-style image generation).
- FAL API key (for image-to-video).

⚙️ How to set up

1. Import the provided workflow JSON into n8n.
2. Connect Google Sheets credentials and point to your sheet (documentId + gid).
3. Connect Google Drive credentials and update folder IDs in the two Upload File nodes (images/videos).
4. Add Gemini and FAL API keys in the respective HTTP Request headers (via Credentials).
5. Test: add a row with link_image, note, and status = run. The workflow should generate and save a video, then update the sheet with the link.

🔁 How it works

- Trigger → Google Sheets Trigger fires on rowAdded where status = run.
- Pre-processing → Download the product image from Google Drive → extract base64.
- LLM Image Generation → Gemini generates an ad-style variant based on note.
- Storage → Upload the generated image into the "images" Drive folder.
- Video Creation → FAL converts the branded image into a short video.
- Polling → Wait node + HTTP Request check job status until the video is completed (see the sketch below).
- Write-back → Upload the final video into the "videos" Drive folder, update the sheet with the link_video, and set status = finished.
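Here is a standalone Node.js (18+) sketch of the polling step the Wait + HTTP Request nodes implement. The `response_url` comes from FAL's submit response; the `status` and `video.url` field names are assumptions, so check your FAL model's documentation:

```javascript
// Poll FAL's response_url until the video job completes (field names assumed).
async function waitForVideo(responseUrl, falApiKey, intervalMs = 10_000) {
  for (;;) {
    const res = await fetch(responseUrl, {
      headers: { Authorization: `Key ${falApiKey}` },
    });
    const body = await res.json();

    if (body.status === 'COMPLETED') return body.video?.url; // ready for Drive upload
    if (body.status === 'FAILED') throw new Error('FAL video generation failed');

    // Not ready yet: this sleep plays the role of the Wait node in the workflow.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```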
by Camille Roux
Create a reusable “photos to post” queue from your Lightroom Cloud album—ideal for Lightroom-to-Instagram automation with n8n. It discovers new photos, stores clean metadata in a Data Table, and generates AI alt text to power on-brand captions and accessibility. Use it together with “Lightroom Image Webhook (Direct JPEG for Instagram)” and “Instagram Auto-Publisher for Lightroom Photos (AI Captions).”

What it’s for

Automate Lightroom to Instagram; centralize photo data for scheduled IG posting; prep AI-ready alt text and metadata for consistent, hands-free publishing.

Parameters to set

- Lightroom Cloud credentials (client/app + API key)
- Album/collection ID to monitor in Lightroom Cloud
- Data Table name for the posting queue (e.g., Photos)
- AI settings: language/tone for alt text (concise, brand-aware)
- Image analysis URL: public endpoint of Workflow 2 (Lightroom Image Webhook)

Works best with

- Workflow 2: Lightroom Image Webhook (Direct JPEG for Instagram)
- Workflow 3: Instagram Auto-Publisher for Lightroom Photos (AI Captions)

Learn more & stay in the loop

Want the full story (decisions, trade-offs, and tips) behind this Lightroom Cloud → Instagram automation?
👉 Read the write-up on my blog: camilleroux.com

If you enjoy street & urban photography or you’re curious how I use these n8n workflows day-to-day:
👉 Follow my photo account on Instagram: @camillerouxphoto
👉 Follow me on other networks: links available on my site (X, Bluesky, Mastodon, Threads)
by Salman Mehboob
🎬 What This Workflow Does

This workflow turns your Google Sheet into a fully automated AI video factory powered by Google Veo 3 via Vertex AI. Simply fill in your prompts, choose your video settings, tick a checkbox — and walk away. The workflow handles everything: sending the prompt to Veo 3, waiting for the video to generate, downloading it, uploading it to Google Drive, and writing the link back to your sheet automatically.

No manual downloading. No checking APIs. No copy-pasting links. Just tick and go. Whether you need 5 videos or 500, this workflow runs each one through the same reliable pipeline — with full error handling so you always know what succeeded and what failed, and exactly why.

🧠 Why This Workflow Exists

Google Veo 3 is one of the most powerful AI video generation models in the world — but the Google Cloud console only lets you generate videos one at a time. If you have a client who needs 50 product videos, or a content team producing videos at scale, doing it manually through the UI is completely impractical.

This workflow solves that. It gives you a spreadsheet-driven production pipeline for Veo 3 — where each row is one video job, fully configurable with its own prompt, resolution, aspect ratio, and duration.

✅ What You Get

✅ Checkbox trigger — tick a row in Google Sheets to start generation
✅ Per-row settings — each video can have its own resolution, aspect ratio, and duration
✅ Smart polling loop — automatically checks every 60 seconds until the video is ready
✅ Google Drive upload — finished videos land in your Drive folder automatically
✅ Sheet auto-update — Drive link written back to the exact row when done
✅ Full error handling — content blocks and API errors are logged to the sheet with the exact reason
✅ No wasted credits — a validation gate blocks empty or incomplete rows before any API call is made

🗂️ Google Sheet Template

Use this ready-made Google Sheet template to get started immediately:

👉 Click here to open the Google Sheet Template (File → Make a copy)

Your sheet must have these columns in this exact order:

| Column | Description |
|---|---|
| prompt | Your video description — what Veo 3 should generate |
| resolution | Video quality: 720p or 1080p |
| aspectRatio | 16:9 for landscape, 9:16 for portrait/vertical |
| durationSeconds | Video length: 4, 6, or 8 seconds only |
| send_for_generation | Checkbox column — tick this to trigger the workflow |
| video_drive_link | Auto-filled — Drive link on success, error reason on failure |
| row_number | Auto-filled by Apps Script — do not edit manually |

⚙️ Full Setup Guide — Step by Step

STEP 1 — Create a Google Cloud Project & Enable Vertex AI

1. Go to console.cloud.google.com
2. Click the project dropdown at the top → New Project
3. Give it a name (e.g. veo-automation) → click Create
4. Once created, make sure this project is selected
5. In the left menu go to APIs & Services → Library
6. Search for Vertex AI API → click it → click Enable
7. Now go to Billing in the left menu
8. Link a billing account to your project — Vertex AI requires billing to be enabled even if you're within free tier limits

STEP 2 — Get Your Vertex AI API Key

1. Go to https://console.cloud.google.com/vertex-ai/studio
2. In Vertex AI Studio, create the API key
3. Save the key somewhere safe — you will paste it into the Data Collection node in n8n

STEP 3 — Get Your Project ID

1. In Google Cloud Console, look at the top bar — your Project ID is shown next to the project name
2. It looks like: project-xxxxxxx-xxxx-xxxx-ada or similar
3. Copy this — you will paste it into the Data Collection node in n8n

STEP 4 — Set Up n8n Credentials

In n8n you need two Google credential connections:

Google Sheets OAuth2
1. In n8n go to Settings → Credentials → New
2. Search for Google Sheets OAuth2 API
3. Follow the OAuth flow to connect your Google account
4. This credential is used by the Update video Link in sheet and Update Error in Sheet nodes

Google Drive OAuth2
1. Same process — search for Google Drive OAuth2 API
2. Connect the same or a different Google account
3. This credential is used by the Upload Video to Drive node

STEP 5 — Configure the Data Collection Node

Open the Data Collection node in n8n and update these values:

| Field | What to put |
|---|---|
| project_id | Your Google Cloud Project ID from Step 3 |
| vertex_api | Your Vertex AI API Key from Step 2 |
| veo_3_model | veo-3.0-generate-preview (or newer model if available) |
| api_endpoint | Leave as us-central1-aiplatform.googleapis.com |
| region | Leave as us-central1 |

STEP 6 — Set Up the Google Drive Folder

1. Go to drive.google.com
2. Create a new folder where your videos will be saved (e.g. Veo Generated Videos)
3. Open the folder and copy the folder ID from the URL. The URL looks like https://drive.google.com/drive/folders/xxxxxxxxxxxxxx-xxxxxxxxxxx — the folder ID is the last part
4. Paste this into the Upload Video to Drive node → Folder ID field

STEP 7 — Connect Your Google Sheet

1. Open the Update video Link in sheet node
2. Click the Document field → select your copy of the Google Sheet template
3. Make sure the sheet tab selected is VEO_3
4. Do the same in the Update Error in Sheet node

STEP 8 — Set Up the Apps Script Webhook Trigger

This is what connects your Google Sheet checkbox to n8n. Without this, ticking the checkbox does nothing. (A hedged sketch of what this script can look like appears at the end of this description.)

📄 Download the Apps Script file here

How to install it:
1. Open your Google Sheet
2. Click Extensions → Apps Script
3. Delete any existing code in the editor
4. Paste the entire Apps Script code provided with this template
5. Find the url field in the config block and replace YOUR_N8N_WEBHOOK_URL_HERE with your n8n webhook URL — found in the Google Sheet Trigger node in n8n. Use the Production URL, not the test URL.
6. Click Save (floppy disk icon at the top)

Now add the installable trigger — this step is critical:
1. In Apps Script, click the clock icon on the left sidebar (Triggers)
2. Click + Add Trigger in the bottom right corner
3. Set these exact options:
   - Choose which function to run: handleEdit
   - Choose which deployment to run: Head
   - Select event source: From spreadsheet
   - Select event type: On edit
4. Click Save
5. Google will ask you to authorise the script — click through and allow all permissions
6. Go back to your sheet and tick a checkbox in column E to test — open the Apps Script Executions tab to confirm it fired successfully

⚠️ Why the installable trigger is required: The basic onEdit function that Google runs automatically does NOT have permission to make external HTTP requests. The installable trigger runs under your Google account permissions and can call external URLs like your n8n webhook. This is a Google limitation — not an n8n one.

STEP 9 — Activate the Workflow in n8n

1. In n8n open the workflow
2. Toggle the workflow to Active using the switch in the top right corner
3. Copy the Production webhook URL from the Google Sheet Trigger node
4. Paste it into your Apps Script config in Step 8 where it says YOUR_N8N_WEBHOOK_URL_HERE

🔄 How the Workflow Runs — Full Flow Explained

Google Sheet checkbox ticked
↓
Apps Script fires webhook → n8n
↓
Validation Gate (If node) — checks prompt, resolution, aspectRatio are all filled — if anything is missing, the workflow stops and no API call is made
↓
Data Collection node — bundles all settings into one clean item
↓
Vertex AI — Send for Generation — POST to the predictLongRunning endpoint — returns an operation name (like a tracking number)
↓
Wait 60 seconds
↓
Fetch / Check Video — POST to fetchPredictOperation — checks if done: true
↓
IF raiMediaFilteredCount === 0
→ TRUE: video exists → Convert Base64 to MP4 file → Upload to Google Drive → Write Drive link to sheet ✅
→ FALSE: check for error
→ IF raiMediaFilteredCount > 0 OR error exists → Write error reason to sheet ❌
→ ELSE: still processing → Wait 60 seconds → check again ⏳

⚙️ Video Settings Reference

Resolution
- 720p — standard quality, faster, lower cost
- 1080p — high quality, uses more credits

Aspect Ratio
- 16:9 — landscape (YouTube, ads, presentations)
- 9:16 — vertical (Instagram Reels, TikTok, YouTube Shorts)

Duration
- 4, 6, or 8 seconds — Veo 3 only accepts these exact values, no other numbers

❌ Error Handling — What Gets Logged

If a video fails, the exact reason is written to the video_drive_link column — for example, a prompt that violates the Veo safety policy is logged along with the full AI reason returned by Google. This means your client can see exactly which rows failed and why — without needing to check any logs.

💡 Tips for Best Results

- Keep prompts on a single line — multi-line prompts are handled automatically by the workflow
- Use descriptive cinematic prompts: "Slow motion close-up of ocean waves at sunset, golden hour, 4K"
- Start with 720p to test — switch to 1080p for final production runs
- Use 9:16 for Instagram Reels, TikTok, YouTube Shorts
- Use 16:9 for YouTube, presentations, ads

🔧 Requirements

- n8n (self-hosted or cloud) — version 1.0+
- Google Cloud account with billing enabled
- Vertex AI API enabled on your project
- Google Sheets + Google Drive OAuth credentials in n8n
- Google Apps Script access on your sheet
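As referenced in STEP 8, here is a minimal sketch of the installable trigger's logic. It assumes the checkbox lives in column E of a tab named VEO_3 and that your n8n webhook accepts a JSON POST; the script bundled with the template is the authoritative version:

```javascript
// Google Apps Script sketch of the installable on-edit trigger.
// Assumptions: checkbox in column E (col 5), tab name "VEO_3".
const CONFIG = { url: 'YOUR_N8N_WEBHOOK_URL_HERE' };

function handleEdit(e) {
  const range = e.range;
  const sheet = range.getSheet();

  // Only react when the send_for_generation checkbox is ticked.
  if (sheet.getName() !== 'VEO_3') return;
  if (range.getColumn() !== 5 || e.value !== 'TRUE') return;

  const row = range.getRow();
  const [prompt, resolution, aspectRatio, durationSeconds] =
    sheet.getRange(row, 1, 1, 4).getValues()[0];

  // An installable trigger is required: plain onEdit cannot call UrlFetchApp.
  UrlFetchApp.fetch(CONFIG.url, {
    method: 'post',
    contentType: 'application/json',
    payload: JSON.stringify({
      prompt, resolution, aspectRatio, durationSeconds, row_number: row,
    }),
  });
}
```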
by Davide
This workflow is a beginner-friendly tutorial demonstrating how to use the Evaluation tool to automatically score the AI’s output against a known correct answer (“ground truth”) stored in a Google Sheet.

Advantages

✅ Beginner-friendly – Provides a simple and clear structure to understand AI evaluation.
✅ Flexible input sources – Works with both Google Sheets datasets and manual test entries.
✅ Integrated with Google Gemini – Leverages a powerful AI model for text-based tasks.
✅ Tool usage – Demonstrates how an AI agent can call external tools (e.g., a calculator) for accurate answers.
✅ Automated evaluation – Outputs are automatically compared against ground truth data for factual correctness.
✅ Scalable testing – Can handle multiple dataset rows, making it useful for structured AI model evaluation.
✅ Result tracking – Saves both answers and correctness scores back to Google Sheets for easy monitoring.

How it Works

The workflow operates in two distinct modes, determined by the trigger:

1. Manual Test Mode: Triggered by "When clicking 'Execute workflow'". It sends a fixed question ("How much is 8 * 3?") to the AI agent and returns the answer to the user. This mode is for quick, ad-hoc testing.

2. Evaluation Mode: Triggered by "When fetching a dataset row". This mode reads rows of data from a linked Google Sheet. Each row contains an input (a question) and an expected_output (the correct answer). It processes each row as follows:
- The input question is sent to the AI Agent node.
- The AI Agent, powered by a Google Gemini model and equipped with a Calculator tool, processes the question and generates an answer (output).
- The workflow then checks if it's in evaluation mode. Instead of just returning the answer, it passes the AI's actual_output and the sheet's expected_output to another Evaluation node.
- This node uses a second Google Gemini model as a "judge" to evaluate the factual correctness of the AI's answer compared to the expected one, generating a Correctness score on a scale from 1 to 5 (an illustrative sketch appears at the end of this description).
- Finally, both the AI's actual_output and the automated correctness score are written back to a new column in the same row of the Google Sheet.

Set up Steps

To use this workflow, you need to complete the following setup steps:

1. Credentials Configuration:
- Set up the Google Sheets OAuth2 API credentials (named "Google Sheets account"). This allows n8n to read from and write to your Google Sheet.
- Set up the Google Gemini (PaLM) API credentials (named "Google Gemini(PaLM) (Eure)"). This provides the AI language model capabilities for both the agent and the evaluator.

2. Prepare Your Google Sheet:
- The workflow is pre-configured to use a specific Google Sheet. You must clone the provided template sheet (the URL is in the Sticky Note) to your own Google Drive.
- In your cloned sheet, ensure you have at least two columns: one for the input/question (e.g., input) and one for the expected correct answer (e.g., expected_output).
- You may need to update the node parameters that reference $json.input and $json.expected_output to match your column names exactly.

3. Update Document IDs: After cloning the sheet, get its new Document ID from its URL and update the documentId field in all three Evaluation nodes ("When fetching a dataset row", "Set output Evaluation", and "Set correctness") to point to your new sheet instead of the original template.

4. Activate the Workflow: Once the credentials and sheet are configured, toggle the workflow to Active. You can then trigger a manual test run or set the "When fetching a dataset row" node to poll your sheet automatically to evaluate all rows.

Need help customizing?

Contact me for consulting and support or add me on LinkedIn.
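To make the judge step concrete, here is a hedged sketch of the kind of prompt the correctness evaluator can send to the second Gemini model. n8n's Evaluation node builds this internally, so the wording below is an illustrative assumption, not the node's actual prompt:

```javascript
// Illustrative judge prompt built in an n8n Code node (field names match the sheet).
const { input, expected_output, actual_output } = $json;

const judgePrompt = `
You are grading an AI answer for factual correctness.
Question: ${input}
Expected answer: ${expected_output}
Actual answer: ${actual_output}

Return only an integer from 1 (completely wrong) to 5 (fully correct).
`;

return [{ json: { judgePrompt } }];
```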
by Avkash Kakdiya
How it works

This workflow automates LinkedIn community engagement by monitoring post comments, filtering new ones, generating AI-powered replies, and posting responses directly on LinkedIn. It also logs all interactions into Google Sheets for tracking and analytics.

Step-by-step

1. Trigger & Fetch
- A Schedule Trigger runs the workflow every 10 minutes.
- The workflow fetches the latest comments on a specific LinkedIn post using LinkedIn’s API with token-based authentication.

2. Filter for New Comments
- Retrieves the timestamp of the last processed comment from Google Sheets.
- Filters out previously handled comments, ensuring only fresh interactions are processed (see the sketch at the end of this description).

3. AI-Powered Reply Generation
- Sends the new comment to OpenAI GPT-3.5 Turbo with a structured prompt.
- The AI generates a professional, concise, and engaging LinkedIn-appropriate reply (max 2–3 sentences).

4. Post Back to LinkedIn
- Automatically posts the AI-generated reply under the original comment thread.
- Maintains consistent formatting and actor identity.

5. Data Logging
- Appends the original comment, AI response, and metadata into Google Sheets.
- Enables tracking, review, and future engagement analysis.

Benefits

- Saves time by automating LinkedIn comment replies.
- Ensures responses are timely, professional, and on-brand.
- Maintains authentic engagement without manual effort.
- Prevents duplicate replies by filtering with timestamps.
- Creates a structured log in Google Sheets for auditing and analytics.
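Here is a minimal sketch of the timestamp filter from step 2, written as an n8n Code node. The upstream node name 'Get Last Timestamp' and the `created` epoch-milliseconds field are assumptions; map them to your actual node names and LinkedIn API response:

```javascript
// n8n Code node sketch: keep only comments newer than the last processed one.
// 'Get Last Timestamp' is a hypothetical node name for the Google Sheets read step.
const lastProcessed = Number($('Get Last Timestamp').first().json.lastTimestamp ?? 0);
const comments = $input.all();

// Comments older than (or equal to) the stored timestamp were already replied to.
const fresh = comments.filter((item) => Number(item.json.created) > lastProcessed);

return fresh.map((item) => ({ json: item.json }));
```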
by Asfandyar Malik
Short Description

Automatically scrape new Upwork job listings, save them to Google Sheets, and get real-time WhatsApp alerts when new matching jobs appear. This workflow helps freelancers and agencies track new opportunities instantly — without checking Upwork manually.

Who’s it for

For freelancers, agencies, and automation enthusiasts who want to monitor Upwork jobs automatically and receive instant notifications for relevant projects.

How it works

This workflow connects with RapidAPI to fetch new Upwork job listings, filters relevant ones, stores them in a Google Sheet, and sends WhatsApp alerts for matching results. It includes:

- **Trigger node** for scheduled or webhook-based execution
- **HTTP Request node** connected to RapidAPI for scraping
- **Google Sheets node** to store job data
- **Filter (IF) node** to select relevant jobs (see the sketch at the end of this description)
- **WhatsApp API node** to send alerts automatically

How to set up

1. Get an API key from RapidAPI and subscribe to an Upwork scraper API.
2. Create a Google Sheet with columns like Title, Budget, Category, Link, and Description.
3. Connect your Google account to n8n using Google Sheets credentials.
4. Set up your WhatsApp API endpoint (e.g., via Waha API or WhatsApp Cloud API).
5. Paste your API keys into the HTTP Request nodes and test the workflow.
6. Schedule the workflow to run automatically (e.g., every hour or once daily).

Requirements

- RapidAPI account (for an Upwork scraper API)
- Google Sheets account
- WhatsApp API access (Waha / Cloud API)
- n8n cloud or self-hosted instance

How to customize

You can modify this workflow to:

- Track specific job categories or keywords (e.g., “automation”, “AI”, “n8n”)
- Send alerts to Telegram, Discord, or Slack instead of WhatsApp
- Add budget or client rating filters for higher-quality job leads
- Connect it with Airtable or Notion for advanced job tracking
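Here is a hedged sketch of the relevance filter, written as an n8n Code node alternative to the IF node. The keyword list, minimum budget, and field names (`title`, `description`, `budget`) are assumptions; match them to whatever your RapidAPI scraper actually returns:

```javascript
// n8n Code node sketch of the job-relevance filter (field names assumed).
const KEYWORDS = ['automation', 'ai', 'n8n'];
const MIN_BUDGET = 100; // illustrative threshold

return $input.all().filter((item) => {
  const job = item.json;
  const text = `${job.title} ${job.description}`.toLowerCase();

  const matchesKeyword = KEYWORDS.some((kw) => text.includes(kw));
  const budgetOk = !job.budget || Number(job.budget) >= MIN_BUDGET;

  return matchesKeyword && budgetOk;
});
```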
by M Sayed
Automate Egyptian gold and currency price monitoring with beautiful Telegram notifications! 🚀

This workflow scrapes live gold prices and official exchange rates from the Egyptian market every hour and sends professionally formatted updates to your Telegram channel/group.

✨ Features:

- 🕐 Smart Scheduling: Runs hourly between 10 AM - 10 PM (Cairo timezone)
- 🥇 Gold Prices: Tracks different gold types with buy/sell rates
- 💱 Currency Rates: Official exchange rates (USD, EUR, SAR, AED, GBP, etc.)
- 🎨 Beautiful Formatting: Emoji-rich messages with proper Arabic text formatting (see the sketch below)
- ⚡ Reliable: Built-in retry mechanisms and error handling
- 🇪🇬 Localized: Tailored specifically for the Egyptian market
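To illustrate the message formatting, here is a hedged sketch of an n8n Code node that assembles the Telegram text. The input field names (`goldTypes`, `currencies`, `updatedAt`) and the exact layout are illustrative assumptions, not the workflow's actual structure:

```javascript
// n8n Code node sketch of the Telegram message formatting (field names assumed).
const { goldTypes, currencies, updatedAt } = $json;

const goldLines = goldTypes
  .map((g) => `🥇 ${g.name}: شراء ${g.buy} / بيع ${g.sell} جنيه`)
  .join('\n');

const fxLines = currencies
  .map((c) => `💱 ${c.code}: ${c.rate} جنيه`)
  .join('\n');

const message = `📊 أسعار الذهب والعملات\n🕐 ${updatedAt}\n\n${goldLines}\n\n${fxLines}`;

return [{ json: { message } }]; // feed this into the Telegram node's text field
```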
by Robert Breen
This n8n workflow automatically generates a custom YouTube thumbnail using OpenAI’s DALL·E based on a YouTube video’s transcript and title. It uses Apify actors to extract video metadata and transcript, then processes the data into a prompt for DALL·E and creates a high-resolution image for use as a thumbnail.

✅ Key Features

- 📥 **Form Trigger**: Accepts a YouTube URL from the user.
- 🧠 **GPT-4o Prompt Creation**: Summarizes transcript and title into a descriptive DALL·E prompt.
- 🎨 **DALL·E Image Generation**: Produces a clean, minimalist YouTube thumbnail with OpenAI’s image model.
- 🪄 **Automatic Image Resizing**: Resizes the final image to YouTube specs (1280x720).
- 🔍 **Apify Integration**: Uses two Apify actors:
  - Youtube-Transcript-Scraper to extract the transcript
  - youtube-scraper to get video metadata like title, channel, etc.

🧰 What You'll Need

- **OpenAI API Key**
- **Apify Account & API Token**
- **YouTube video URL**
- **n8n instance (cloud or self-hosted)**

🔧 Step-by-Step Setup

1️⃣ Form & Parameter Assignment

- **Node**: Form Trigger
- **How it works**: Collects the YouTube URL via a form embedded in your n8n instance.
- **API Required**: None
- **Additional Node**: Set — converts the single input URL into the format Apify expects: an array of { url } objects (sketched at the end of this description).

2️⃣ Apify Actors for Data Extraction

- **Node**: HTTP Request (Query Metadata)
  - URL: https://api.apify.com/v2/acts/streamers~youtube-scraper/run-sync-get-dataset-items
  - Payload: JSON with a startUrls array and filtering options like maxResults, isHD, etc.
- **Node**: HTTP Request (Query Transcript)
  - URL: https://api.apify.com/v2/acts/topaz_sharingan~Youtube-Transcript-Scraper/run-sync-get-dataset-items
  - Payload: startUrls array
- **API Required**: Apify API Token (via HTTP Query Auth)
- **Notes**: You must have an Apify account and actor credits to use these actors.

3️⃣ OpenAI GPT-4o & DALL·E Generation

- **Node**: OpenAI (Prompt Creator) — uses the transcript and title to generate a DALL·E-compatible visual prompt.
- **Node**: OpenAI (Image Generator)
  - Resource: image
  - Model: DALL·E (default with GPT-4o key)
- **API Required**: OpenAI API Key
- **Prompt Strategy**: Create a minimalist YouTube thumbnail in an illustration style. The background should be a very simple, uncluttered setting with soft, ambient lighting that subtly reflects the essence of the transcript. The overall mood should be professional and non-cluttered, ensuring that the text overlay stands out without distraction. Do not include any text.

4️⃣ Resize for YouTube Format

- **Node**: Edit Image
- **Purpose**: Resize the final image to 1280x720 with ignoreAspectRatio set to true.
- **No API required** — this runs entirely in n8n.

👤 Created By

Robert Breen
Automation Consultant | AI Workflow Designer | n8n Expert
📧 robert@ynteractive.com
🌐 ynteractive.com
🔗 LinkedIn

🏷️ Tags

openai, dalle, youtube, thumbnail generator, apify, ai automation, image generation, illustration, prompt engineering, gpt-4o
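As referenced in step 1️⃣, here is a minimal sketch of the Set step expressed as an n8n Code node: it wraps the form's single URL into the `startUrls` array shape both Apify actors expect. The form field name `youtubeUrl` is an assumption; match it to your Form Trigger's actual field:

```javascript
// n8n Code node sketch: convert the single form URL into Apify's startUrls format.
const url = $json.youtubeUrl; // hypothetical form field name

return [{
  json: {
    startUrls: [{ url }], // e.g. [{ url: "https://www.youtube.com/watch?v=..." }]
  },
}];
```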