by Jay Emp0
# Ebook to Audiobook Converter

▶️ Watch Full Demo Video

## What It Does

Turn any PDF ebook into a professional audiobook automatically. Upload a PDF, get an MP3 audiobook in your Google Drive. Perfect for listening to books, research papers, or documents on the go.

Example: Input PDF → Output Audiobook

## Key Features

- Upload PDF via web form → Get MP3 audiobook in Google Drive
- Natural-sounding AI voices (MiniMax Speech-02-HD)
- Automatic text extraction, chunking, and audio merging
- Customizable voice, speed, and emotion settings
- Processes long books in batches with smart rate limiting

## Perfect For

- **Students**: Turn textbooks into study audiobooks
- **Professionals**: Listen to reports and documents while commuting
- **Content Creators**: Repurpose written content as audio
- **Accessibility**: Make content accessible to visually impaired users

## Requirements

| Component | Details |
|-----------|---------|
| n8n | Self-hosted ONLY (cannot run on n8n Cloud) |
| FFmpeg | Must be installed in your n8n environment |
| Replicate API | For MiniMax TTS (Sign up here) |
| Google Drive | OAuth2 credentials + "Audiobook" folder |

⚠️ **Important**: This workflow does NOT work on n8n Cloud because FFmpeg installation is required.

## Quick Setup

### 1. Install FFmpeg

Docker users:

```
docker exec -it <n8n-container-name> /bin/bash
apt-get update && apt-get install -y ffmpeg
```

Native installation:

```
sudo apt-get install ffmpeg   # Linux
brew install ffmpeg           # macOS
```

### 2. Get API Keys

- **Replicate**: Sign up at replicate.com and copy your API token
- **Google Drive**: Set up OAuth2 in n8n and create an "Audiobook" folder in Drive

### 3. Import & Configure

1. Import n8n.json into your n8n instance
2. Replace the Replicate API token in the "MINIMAX TTS" node
3. Configure Google Drive credentials and select your "Audiobook" folder
4. Activate the workflow

## Cost Estimate

| Component | Cost |
|-----------|------|
| MiniMax TTS API | $0.15 per 1000 characters ($3-5 for average book) |
| Google Drive Storage | Free (up to 15GB) |
| Processing Time | ~1-2 minutes per 10 pages |

## How It Works

PDF Upload → Extract Text → Split into Chunks → Convert to Speech (batches of 5) → Merge Audio Files (FFmpeg) → Upload to Google Drive

The workflow uses four main modules:

- **Extraction**: PDF text extraction and intelligent chunking
- **Conversion**: MiniMax TTS processes text in batches
- **Merging**: FFmpeg combines all audio files seamlessly
- **Upload**: Final audiobook saved to Google Drive

## Voice Settings (Customizable)

```json
{
  "voice_id": "Friendly_Person",
  "emotion": "happy",
  "speed": 1,
  "pitch": 0
}
```

Available emotions: happy, neutral, sad, angry, excited

## Limitations

- ⚠️ Self-hosted n8n ONLY (not compatible with n8n Cloud)
- PDF files only (not EPUB, MOBI, or scanned images)
- Large books (500+ pages) take longer to process
- Requires FFmpeg installation (see setup above)

## Troubleshooting

**FFmpeg not found?**
- Docker: Run `docker exec -it <container> /bin/bash` then `apt-get install ffmpeg`
- Native: Run `sudo apt-get install ffmpeg` (Linux) or `brew install ffmpeg` (macOS)

**Rate limit errors?**
- Increase wait time in the "WAITS FOR 5 SECONDS" node to 10-15 seconds

**Google Drive upload fails?**
- Make sure you created the "Audiobook" folder in your Google Drive
- Reconfigure OAuth2 credentials in n8n

Created by emp0 | More workflows: n8n Gallery
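The "Split into Chunks" step above can be sketched in Python. This is an illustrative re-implementation, not the workflow's actual Code-node logic, and the 1000-character limit is an assumption tied to MiniMax's per-1000-character pricing:

```python
import re

def chunk_text(text: str, max_chars: int = 1000) -> list[str]:
    """Greedily pack whole sentences into chunks of at most max_chars."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        candidate = f"{current} {sentence}".strip()
        if len(candidate) <= max_chars:
            current = candidate
            continue
        if current:
            chunks.append(current)
            current = ""
        # hard-split any single sentence longer than the limit
        while len(sentence) > max_chars:
            chunks.append(sentence[:max_chars])
            sentence = sentence[max_chars:]
        current = sentence
    if current:
        chunks.append(current)
    return chunks
```

Each chunk then becomes one TTS request, which is what makes the batch-of-5 processing and the 5-second wait node meaningful for rate limiting.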
by InfyOm Technologies
## ✅ What problem does this workflow solve?

Sending a plain PDF resume doesn't stand out anymore. This workflow allows candidates to convert their resume and photo into a personalized video resume. Recruiters get a more engaging first impression, while candidates showcase their profile in a modern, impactful way.

## ⚙️ What does this workflow do?

- Presents a form for uploading:
  - 📄 Resume (PDF)
  - 🖼 Photo (headshot)
- Extracts key details from the resume (education, experience, skills).
- Detects gender from the photo to choose a suitable voice/avatar.
- Generates a script (spoken resume summary) based on the extracted information.
- Uploads the photo to HeyGen to create an avatar.
- Requests video generation on HeyGen:
  - Uses the avatar photo
  - Uses gender-specific settings
  - Uses the generated script as narration
- Monitors video generation status until completion.
- Stores the final video URL in a Google Sheet for easy access and tracking.

## 🔧 Setup Instructions

**Google Services**
- Connect Google Sheets to n8n to store records with: candidate name, resume link, video link

**HeyGen Setup**
- Get an API key from HeyGen.
- Configure:
  - Avatar upload endpoint (image upload)
  - Video generation endpoint (image ID + script)

**Form Setup**
- Use the n8n Form Trigger to allow candidates to upload:
  - Resume (PDF)
  - Photo (JPEG/PNG)

## 🧠 How it Works – Step-by-Step

**1. Candidate Submission**
A candidate fills out a form and uploads a resume (PDF) and a photo.

**2. Extract Resume Data**
The resume PDF is processed using OCR/AI to extract: name, experience, skills, and education highlights.

**3. Gender Detection**
The uploaded photo is analyzed to detect gender (used for voice/avatar selection).

**4. Script Generation**
Based on the extracted resume info, a concise, natural script is generated automatically.

**5. Avatar Upload & Video Creation**
The photo is uploaded to HeyGen to create a custom avatar. A video generation request is made using:
- The script
- The avatar (image ID)
- A matching voice for the detected gender

**6. Video Status Monitoring**
The workflow polls HeyGen's API until the video is ready.

**7. Save Final Video URL**
Once complete, the video link is added to a Google Sheet alongside the candidate's details.

## 👤 Who can use this?

This workflow is ideal for:
- 🧑‍🎓 Students and job seekers looking to stand out
- 🧑‍💼 Recruitment agencies offering modern resume services
- 🏢 HR teams wanting engaging candidate submissions
- 🎥 Portfolio builders for professionals

## 🚀 Impact

Instead of a static PDF, you can now send a dynamic video resume that captures attention, adds personality, and makes a lasting impression.
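The status-monitoring step is the only non-trivial control flow here. A minimal sketch, assuming a `fetch_status` callable that wraps HeyGen's status endpoint and returns a dict with `status` and `video_url` fields (the real field names should be verified against HeyGen's API docs):

```python
import time

def poll_until_ready(fetch_status, interval_s: float = 10, max_attempts: int = 30) -> str:
    """Call fetch_status() repeatedly until the video is ready or fails."""
    for _ in range(max_attempts):
        result = fetch_status()
        if result.get("status") == "completed":
            return result["video_url"]
        if result.get("status") == "failed":
            raise RuntimeError("HeyGen reported video generation failure")
        time.sleep(interval_s)  # n8n's Wait node plays this role in the workflow
    raise TimeoutError("video not ready within the polling window")
```

In n8n this loop is typically built from an HTTP Request node, an IF node on the status field, and a Wait node looping back until completion.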
by Automate With Marc
# Gemini 3 Image & PDF Extractor (Google Drive → Gemini 3 → Summary)

Automatically summarize newly uploaded images or PDF reports using Google Gemini 3, triggered directly from a Google Drive folder. Perfect for anyone who needs fast AI-powered analysis of financial reports, charts, screenshots, or scanned documents.

🎥 Watch the full step-by-step video tutorial: https://www.youtube.com/watch?v=UuWYT_uXiw0

## What this template does

This workflow watches a Google Drive folder for new files and automatically:

1. **Detects new uploaded files**
   - Uses Google Drive Trigger
   - Watches a specific folder for fileCreated events
   - Filters by MIME type: image/png, image/webp, application/pdf
2. **Downloads the file automatically**, depending on the file type:
   - Images → Download via HTTP Request → Send to Gemini 3 Vision
   - PDFs → Download via HTTP Request → Extract content → Send to Gemini 3
3. **Analyzes content using Gemini 3** in two separate processing lanes:

### 🖼️ Image Lane
- Image is sent to Gemini 3 (Vision / Image Analyze)
- Extracts textual + visual meaning from charts, diagrams, or screenshots
- Passes structured output to an AI Analyst Agent
- Agent summarizes and highlights top 3 findings

### 📄 PDF Lane
- PDF is downloaded
- Text is extracted using Extract From File
- Processed using Gemini 3 via OpenRouter Chat Model
- AI Analyst Agent summarizes charts/tables and extracts insights

## Why this workflow is useful

- Save hours manually reading PDFs, charts, and screenshots
- Convert dense financial or operational documents into digestible insights
- Great for: financial analysts, operations teams, market researchers, content & reporting teams, anyone receiving frequent reports via Drive

## Requirements

Before using this template, you will need:

- Google Drive OAuth credential (for Drive trigger + file download)
- Gemini 3 / PaLM or OpenRouter API key
- (Optional) Update folder ID to your own Google Drive target folder

⚠️ No credentials are included in this template. Add them manually after importing it.

## Node Overview

- **Google Drive Trigger** — Watches a specific Drive folder for newly added files; provides metadata like webContentLink and MIME type
- **Filter by Type (IF Node)** — Routes files to the Image lane or PDF lane: png or webp → Image, pdf → PDF

### 🖼️ Image Processing Lane
- Download Image (HTTP Request)
- Analyze Image (Gemini Vision)
- Analyzer Agent — summarizes findings and highlights actionable insights; powered by OpenRouter Gemini 3

### 📄 PDF Processing Lane
- Download PDF (HTTP Request)
- Extract From File → PDF
- Analyzer Agent (PDF) — summarizes extracted chart/report information and highlights key takeaways

## Setup Guide

1. Import the template into your n8n workspace
2. Open Google Drive Trigger: select your Drive OAuth credential and replace the folder ID with your target folder
3. Open the Gemini 3 / OpenRouter AI Model nodes and add your API credentials
4. Test by uploading a PNG/WebP chart screenshot and a multi-page PDF report
5. Check the execution to view summary outputs

## Customization Ideas

- Add email delivery (send the summary to yourself daily)
- Save summaries into Google Sheets, Notion, Slack channels, or n8n Data Tables
- Add a second agent to convert summaries into weekly reports, PowerPoint slides, or Slack-ready bullet points
- Add classification logic: revenue reports, marketing analytics, product dashboards, financial charts

## Troubleshooting

- **Trigger not firing?** Confirm your Drive OAuth credential has read access to the folder.
- **Gemini errors?** Ensure your model ID matches your API provider: models/gemini-3-pro-preview or google/gemini-3-pro-preview
- **PDF extraction empty?** Check if the file contains selectable text or only images. (You can add OCR if needed.)
by Chris Pryce
## Overview

This workflow streamlines the process of setting up a chat-bot using the Signal Messenger API.

## What this is for

Chat-bot applications have become very popular on WhatsApp and Telegram. However, security-conscious people may be hesitant to connect their AI agents to these applications. Compared to WhatsApp and Telegram, the Signal messaging app is more secure and end-to-end encrypted by default. Partly because of this, it is more difficult to create a chat-bot application for Signal. It is still possible, however, if you host your own Signal API endpoint.

This workflow requires the installation of a community-node package. Some additional setup for the locally hosted Signal API endpoint is also necessary. As such, it will only work with self-hosted instances of n8n. You may use any AI model you wish for this chat-bot, and connect different tools and APIs depending on your use-case.

## How to set up

### Step 1: Set up the REST API

Before implementing this workflow, you must set up a local Signal client REST API. This can be done using a Docker container based on this project: bbernhard/signal-cli-rest-api.

```yaml
version: "3"
services:
  signal-cli-rest-api:
    image: bbernhard/signal-cli-rest-api:latest
    environment:
      - MODE=normal # supported modes: json-rpc, native, normal
      #- AUTO_RECEIVE_SCHEDULE=0 22 * * * # enable this parameter on demand (see description below)
    ports:
      - "8080:8080" # map docker port 8080 to host port 8080
    volumes:
      # map the "signal-cli-config" folder on the host system into the docker container;
      # the folder contains the password and cryptographic keys when a new number is registered
      - "./signal-cli-config:/home/.local/share/signal-cli"
```

After starting the Docker container, you will be able to interact with a local Signal client over a REST API at http://localhost:8080 (by default; this setting can be modified in the docker-compose file).

### Step 2: Install the Node Package

This workflow requires the community-node package developed by ZBlaZe: n8n-nodes-signal-cli-rest-api.

Navigate to ++your-n8n-server-address/settings/community-nodes++, click the 'Install' button, and paste in the community-node package name '++n8n-nodes-signal-cli-rest-api++' to install this community node.

### Step 3: Register and Verify the Account

This step requires a phone number. You may use your own phone number, a pre-paid SIM card, or (if you are a US resident) a free Google Voice digital phone number. An n8n web-form has been created in this workflow to make headless setup easier.

1. In the Form nodes, replace the URL with the address of your local Signal REST API endpoint.
2. Open the web-form and enter the phone number you are using to register your bot's Signal account.
3. Signal needs to verify you are human before registering an account. Visit this page to complete the captcha challenge. Then right-click the 'Open Signal' button and copy the link address. Paste this into the second form field and hit 'Submit'.
4. At this point you should receive a verification token as an SMS message to the phone number you are using. Copy this and paste it into the second web-form.

Your bot's Signal account should now be set up. To use this account in n8n, add the REST API address and account number (phone number) as a new n8n credential.

### Step 4: Optional

For extra security, it is recommended to restrict communication with this chat-bot. In the 'If' workflow node, enter your own Signal account phone number. You may also provide a UUID, an identifier unique to your mobile device. You can find this by sending a test message to your bot's Signal account and copying it from the workflow execution data.
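Once the account is registered, the bot's replies go back out through the same local REST API. A minimal sketch using the project's `/v2/send` endpoint (payload shape per the bbernhard/signal-cli-rest-api docs; verify against your installed version):

```python
import json
from urllib import request

API_BASE = "http://localhost:8080"  # default port from the compose file above

def build_send_payload(bot_number: str, recipient: str, text: str) -> dict:
    """Body for POST /v2/send: sender account, recipient list, message text."""
    return {"number": bot_number, "recipients": [recipient], "message": text}

def send_message(bot_number: str, recipient: str, text: str) -> None:
    data = json.dumps(build_send_payload(bot_number, recipient, text)).encode()
    req = request.Request(
        f"{API_BASE}/v2/send",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)  # raises HTTPError on non-2xx responses
```

The community node wraps calls like this for you; the sketch is only to show what the credential (REST API address + account number) is used for under the hood.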
by Oneclick AI Squad
Monitors brand mentions across Twitter/X, Reddit, and News APIs in real-time (or on a schedule), fetches mentions in parallel, normalizes the data, uses AI to analyze sentiment, urgency, and topics, detects duplicates, filters critical mentions, logs everything to Airtable, posts alerts to Slack, and emails daily HTML digest reports to the marketing team.

## Good to Know

- Runs every hour (configurable) to provide near-real-time brand monitoring
- Pulls mentions from multiple platforms in parallel: Twitter/X, Reddit, News sources
- Uses AI (OpenAI/Grok/etc.) for advanced sentiment classification, urgency detection, topic extraction, and duplicate deduplication
- Focuses on actionable insights: flags negative/urgent mentions for immediate response
- Generates a polished HTML daily digest with summarized mentions, sentiment trends, and key highlights
- Stores historical data in Airtable for tracking, analytics, and long-term reporting
- Sends real-time Slack alerts for high-priority/negative mentions
- Reduces manual social monitoring time dramatically and helps catch reputation issues early

## How It Works

### 1. Trigger & configure
- **Schedule Trigger** — Runs every hour (or custom interval) to check for new brand mentions
- **Set brand monitoring config** — Defines brand name, keywords, excluded terms, and monitoring parameters (via Set node or variables)

### 2. Fetch & collect mentions
- **Fetch Twitter/X mentions** — Uses the Twitter/X node or HTTP Request to search recent tweets (mentions, keywords)
- **Fetch Reddit mentions** — Searches relevant subreddits or Reddit-wide for brand keywords/posts
- **Fetch news article mentions** — Queries news APIs (e.g. NewsAPI, Google News via RSS/HTTP) for brand coverage
- **Merge platform mentions** — Combines results from all sources into a unified stream
- **Normalize mentions into unified schema** — Standardizes fields (text, author, platform, timestamp, URL, etc.) for consistent processing

### 3. AI analyze & deduplicate
- **AI sentiment and urgency analysis** — Sends mentions to the AI model (OpenAI node) with a prompt to classify: sentiment (positive / neutral / negative), urgency/severity (low / medium / high / critical), topics/themes, and key excerpts
- **Wait For Result** — Ensures AI responses are complete
- **Process analysis results** — Parses the structured JSON output from the AI
- **Filter mentions requiring alerts** — Routes based on sentiment/urgency thresholds
- **Deduplicate** — Removes near-duplicate mentions (e.g. the same content reposted)

### 4. Store, alert & report
- **Log mention to Airtable** — Appends/updates records with full details, sentiment, AI analysis, timestamp
- **Route by sentiment and urgency** — Critical/negative → immediate action path
- **Send mention alert** — Posts a formatted message to **Slack** (or Discord/Teams) with link, text snippet, sentiment badge
- **Generate HTML daily digest report** — Compiles a summary: total mentions, sentiment breakdown, top issues, trends
- **Email HTML digest** — Sends the polished report to the marketing team via the **Email** node (SMTP/Gmail)
- **Log success and update listings** — Records workflow completion and stats for monitoring

## Data Sources

- **Twitter/X** — Recent search for mentions/keywords (via Twitter node or HTTP Request with API)
- **Reddit** — Subreddit or site-wide search for brand mentions
- **News APIs** — NewsAPI.org, Google News RSS, or similar for article mentions
- **AI Model** — OpenAI (GPT-4o / GPT series), Grok, or other LLM for sentiment/urgency analysis
- **Storage** — Airtable base (tables for mentions, daily summaries)
- **Notifications** — Slack (webhook or app), Email (SMTP)

## How to Use

1. Import the workflow JSON into your n8n instance
2. Configure credentials:
   - Twitter/X API (OAuth or Bearer token for search)
   - Reddit API (if using the official API; or RSS/HTTP for subreddits)
   - News API key (e.g. NewsAPI.org)
   - OpenAI API key (or Grok/other LLM)
   - Airtable API key + base/table
   - Slack webhook or app token
   - Email SMTP credentials
3. Set monitoring parameters — Edit brand name, keywords, and exclude lists in the Set monitoring config node
4. Customize the AI prompt — In the AI sentiment node, tweak for brand-specific tone, industry terms, and urgency criteria
5. Adjust the schedule — Change the interval in the Monitor mentions every hour trigger
6. Tune filters — Set thresholds for alerts (e.g. only negative + high urgency)
7. Test manually — Use Execute Workflow to simulate with known mentions
8. Activate — Turn it on and watch Executions + Airtable/Slack for results

## Requirements

- n8n instance (self-hosted or cloud)
- API access/keys for Twitter/X, Reddit (optional), News source
- OpenAI (or compatible LLM) API key with a good token limit
- Airtable workspace/base for logging
- Slack workspace for alerts
- Email account for daily digests

## Customizing This Workflow

- **Add more platforms** — Include Facebook/Instagram (via Meta API), LinkedIn, Discord mentions
- **Enhance AI analysis** — Add topic clustering, competitor comparison, virality scoring
- **Improve deduplication** — Use fuzzy matching or embeddings for better duplicate detection
- **Visual dashboard** — Export Airtable data to Google Looker Studio / Grafana for sentiment trends
- **Auto-response** — For low-risk positive mentions, generate draft replies
- **Language support** — Add multilingual sentiment detection
- **Hourly vs. real-time** — Switch to webhook triggers if platforms support them (e.g. Twitter webhooks if available)
- **Daily/weekly reports** — Aggregate more stats and charts in the HTML email
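The Deduplicate step above can be approximated with an exact-match-after-normalization pass. The workflow's own Code node may use fuzzier logic (the customization list suggests embeddings), so treat this as the simplest baseline:

```python
import re

def dedupe_mentions(mentions: list[dict]) -> list[dict]:
    """Drop mentions whose normalized text was already seen, keeping the first."""
    seen, unique = set(), []
    for mention in mentions:
        # lowercase and collapse punctuation/whitespace so reposts match
        key = re.sub(r"\W+", " ", mention["text"].lower()).strip()
        if key not in seen:
            seen.add(key)
            unique.append(mention)
    return unique
```

This catches verbatim reposts with differing punctuation or casing; paraphrased duplicates need the fuzzy-matching or embedding approach mentioned under "Improve deduplication".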
by Natnail Getachew
## How it works

1. New Google Form response triggers the workflow
2. Checks if employee was already onboarded (prevents duplicates)
3. Adds user to department-specific Slack channel
4. If in Software department, grants GitHub repo access
5. Invites user to Jira and creates an onboarding task
6. Updates Google Sheet status to "Completed"

## Set up steps

Estimated setup time: 10-15 minutes

1. Connect Google Sheets (2 min) - Update sheet ID in trigger and update nodes
2. Configure Slack (3 min) - Add channel IDs and admin user ID to Code node config
3. Set up Jira (3 min) - Add project keys and component IDs to Code node config
4. Configure GitHub (2 min) - Add org name and repo names to Code node config

Detailed setup instructions are included in the sticky notes within the workflow.
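The duplicate check in step 2 amounts to looking the new submission up in the rows already read from the Google Sheet. A sketch, where the "email" and "Status" field names are assumptions about the sheet layout rather than the template's actual column names:

```python
def already_onboarded(rows: list[dict], email: str) -> bool:
    """True if a sheet row for this email is already marked Completed."""
    needle = email.strip().lower()
    return any(
        row.get("email", "").strip().lower() == needle
        and row.get("Status") == "Completed"
        for row in rows
    )
```

If this returns True, the workflow short-circuits and skips the Slack, GitHub, and Jira steps, which is what makes the flow safe to re-trigger on duplicate form submissions.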
by WeblineIndia
Cash Reconciliation Checker with Google Sheets, OpenAI & n8n This workflow automatically compares internal cash balances with custodian or bank balances using Google Sheets, detects mismatches by account_id, calculates balance differences, logs matched records and sends mismatched records through OpenAI for a short explanation before saving them for exception review. It is designed to help teams reduce manual reconciliation work and quickly identify balance issues. Quick Implementation Steps Import the workflow into n8n. Connect your Google Sheets OAuth2 credentials. Point the three Google Sheets nodes to: Internal balances sheet Custodian balances sheet Reconciliation / exception log sheets Ensure both source sheets use the same account_id values. Make sure balance fields are numeric: internal_balance custodian_balance Connect your OpenAI credentials. Adjust the Schedule Trigger frequency if needed. Run the workflow once and verify: matched records are logged mismatched records are analyzed and appended correctly What It Does The Cash Reconciliation Checker automates a common finance operations task: comparing balances between two separate data sources. In this workflow, one Google Sheet holds internal balances, while another holds custodian balances. The workflow fetches both datasets, standardizes the required fields and matches records using the shared account_id. After matching the accounts, the workflow calculates the difference between internal and custodian balances and checks whether the difference exceeds a built-in tolerance. If the balances match, the record is written to a reconciliation log as a successful result. If they do not match, the workflow routes the record into an exception path. For mismatches, the workflow uses OpenAI (gpt-4o-mini) to generate a short possible explanation based on the values in the record. That enriched mismatch record is then prepared and appended to a separate logging sheet for investigation and follow-up. 
Who’s It For This workflow is useful for teams and professionals who regularly compare balances across systems, such as: Finance operations teams Fund administration teams Treasury teams Accounting teams Reconciliation analysts Back-office operations teams Internal controls and audit support teams It is especially useful for organizations that currently reconcile balances manually in spreadsheets and want a faster, more consistent process. Requirements to Use This Workflow Before using this workflow, make sure you have the following: Required Platforms & Accounts n8n account** Google Sheets** OpenAI API access** Required n8n Credentials You will need to configure: Google Sheets OAuth2 credentials** OpenAI credentials** Required Google Sheets Structure This workflow expects the following source data structure based on the JSON: 1) Internal Balance Sheet Must contain at least: account_id currency internal_balance 2) Custodian Balance Sheet Must contain at least: account_id currency custodian_balance 3) Reconciliation Log Sheet Should support these columns: account_id currency internal_balance custodian_balance difference abs_difference mismatch checked_at recon_status 4) Exception / Alert Sheet Should support these columns: account_id currency internal_balance custodian_balance difference abs_difference mismatch checked_at ai_explanation recon_status Data Expectations To avoid processing issues: account_id should be consistent across both source sheets Balance fields should contain numeric values only currency should be present where relevant Empty or invalid balance values may be flagged as mismatches How It Works & Set Up Step 1 — Import the Workflow into n8n Import the provided JSON file into your n8n workspace. 
After import, you will see the workflow named: **Cash Reconciliation Checker**

Step 2 — Review the Flow

The workflow follows this sequence:

```
Schedule Trigger
  → Fetch Internal Balances
  → Fetch Custodian Balances
  → Edit Internal Fields / Edit Custodian Fields
  → Match Accounts by Account ID
  → Calculate Balance Difference
  → Check for Balance Mismatch
      ├── Matched → Log Matched Records
      └── Mismatched → Generate AI Mismatch Explanation → Prepare Exception Record → Append The Data In The Sheet
```

Step 3 — Configure the Schedule Trigger

Node: Run Reconciliation on Schedule. This node starts the workflow automatically using a schedule interval.

What to do: open the node and set your preferred execution frequency. Example options: every 15 minutes, hourly, daily, or an end-of-day reconciliation schedule. Use a timing pattern that fits your reconciliation process.

Step 4 — Connect the Internal Balance Source

Node: Fetch Internal Balances. This Google Sheets node pulls records from the internal balance sheet.

What to do: connect your Google Sheets OAuth2 account, then select the correct spreadsheet and sheet tab. Required fields expected from this source: account_id, currency, internal_balance

Step 5 — Connect the Custodian Balance Source

Node: Fetch Custodian Balances. This Google Sheets node pulls records from the custodian or bank balance sheet.

What to do: connect your Google Sheets OAuth2 account, then select the correct spreadsheet and sheet tab. Required fields expected from this source: account_id, currency, custodian_balance

Step 6 — Standardize Both Datasets

The workflow uses two Set nodes to clean and normalize fields before matching.

Node: Edit Internal Fields — maps and formats account_id, currency, and internal_balance, and converts internal_balance into a numeric value.

Node: Edit Custodian Fields — maps and formats account_id, currency, and custodian_balance, and converts custodian_balance into a numeric value.
Why this matters This step helps ensure both datasets use a consistent field structure before comparison. Step 7 — Match Records by Account ID Node: Match Accounts by Account ID This Merge node combines both sources using: account_id What it does It aligns internal and custodian records so each account can be compared side by side. Important setup note This will only work properly if: both sheets contain matching account_id values the values are formatted consistently there are no accidental extra spaces or mismatched IDs Step 8 — Calculate the Balance Difference Node: Calculate Balance Difference This Code node performs the main reconciliation logic. What it calculates For each matched account, it creates: internal_balance custodian_balance difference abs_difference mismatch checked_at Logic used in this node The workflow uses a built-in tolerance: const TOLERANCE = 0.01; Reconciliation rule A record is treated as a mismatch if: either balance is invalid / not numeric, or the absolute difference is greater than 0.01 Output fields created account_id currency internal_balance custodian_balance difference abs_difference mismatch checked_at This is the core decision-making step in the workflow. Step 9 — Route Matched vs Mismatched Records Node: Check for Balance Mismatch This IF node checks: mismatch == true Routing behavior If mismatch = false The record is considered matched and goes to: Log Matched Records If mismatch = true The record is treated as an exception and goes to: Generate AI Mismatch Explanation This split keeps normal reconciliations separate from exception handling. Step 10 — Log Matched Records Node: Log Matched Records This Google Sheets node appends matched records to a reconciliation log sheet. 
Logged values include: account_id currency internal_balance custodian_balance difference abs_difference mismatch checked_at recon_status Fixed value used Matched records are saved with: recon_status = Matched This gives you a clean audit trail of successfully reconciled accounts. Step 11 — Generate AI Explanation for Exceptions Node: Generate AI Mismatch Explanation This node sends mismatch data to OpenAI (gpt-4o-mini) and asks for a short explanation. Prompt behavior in the workflow The AI is instructed to review: account ID currency internal balance custodian balance difference check timestamp It is then asked to provide the most likely cause of the mismatch from the following categories already defined in the workflow: settlement delay (T+1/T+2) pending fees or accrued interest FX conversion timing failed corporate actions bank charges not yet booked data entry error It also ends with: top 1–2 likely causes one recommended next action Why this is useful This adds context to exceptions and helps operations teams review mismatches faster. Step 12 — Prepare the Exception Record Node: Prepare Exception Record This Code node combines the AI output with the original mismatch data and formats it for logging. Fields included in the final exception record account_id currency internal_balance custodian_balance difference abs_difference mismatch checked_at ai_explanation recon_status Fixed value used Mismatch records are saved with: recon_status = Mismatch This creates a structured exception record ready for reporting or review. Step 13 — Append Mismatch Records to the Exception Sheet Node: Append The Data In The Sheet This Google Sheets node appends the prepared mismatch records into a separate sheet for follow-up. Logged values include: account_id currency internal_balance custodian_balance difference abs_difference mismatch checked_at ai_explanation recon_status This acts as your exception register for unresolved or suspicious balance breaks. 
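The mismatch rule from Step 8 is simple enough to state precisely. A Python rendition for illustration (the workflow's actual Code node is JavaScript; field names follow the schema above):

```python
TOLERANCE = 0.01  # same cent-level tolerance as the Code node

def check_record(internal_balance, custodian_balance, tolerance=TOLERANCE) -> dict:
    """Mismatch if either balance is non-numeric or |difference| > tolerance."""
    try:
        internal = float(internal_balance)
        custodian = float(custodian_balance)
    except (TypeError, ValueError):
        # blank or invalid balances are flagged rather than silently skipped
        return {"mismatch": True, "difference": None, "abs_difference": None}
    difference = internal - custodian
    return {
        "mismatch": abs(difference) > tolerance,
        "difference": difference,
        "abs_difference": abs(difference),
    }
```

Note the strict inequality: a difference of exactly 0.01 still counts as matched, while blank or textual balances always route to the exception path.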
Step 14 — Test Before Going Live Before enabling the workflow, run a few controlled tests. Recommended test scenarios Test 1 — Perfect match Use the same values in both sheets for one account. Expected result: record goes to Log Matched Records Test 2 — Small tolerance-safe difference Use a difference within 0.01. Expected result: record should still be treated as matched Test 3 — True mismatch Use a larger difference. Expected result: record goes through AI explanation path gets appended to exception sheet Test 4 — Invalid numeric value Use a blank or non-numeric balance. Expected result: record should be flagged as mismatch Once tests pass, you can safely activate the workflow. How To Customize Nodes This workflow is already useful as-is, but it can be adapted for different reconciliation processes. 1) Customize the Schedule Trigger Node: Run Reconciliation on Schedule You can change: frequency execution window time of day reconciliation cycle Useful if you want: intraday reconciliation end-of-day checks batch finance controls 2) Change Matching Logic Node: Match Accounts by Account ID Currently matches on: account_id You can modify your data model and workflow if you want to include additional matching dimensions such as: account + currency account + region account + entity Only do this if your sheet structure supports it. 3) Adjust the Tolerance Threshold Node: Calculate Balance Difference Current tolerance: const TOLERANCE = 0.01; You can change this if your business allows different variance thresholds. Example customizations 0 → exact reconciliation only 0.01 → cent-level tolerance 1 → whole-unit tolerance custom threshold based on asset class or currency 4) Expand the AI Explanation Logic Node: Generate AI Mismatch Explanation You can customize the prompt to include: business rules escalation notes internal SOP references suggested ownership routing severity classification This is helpful if you want the AI output to be more operationally specific. 
5) Add More Fields to Logging Nodes: Log Matched Records Append The Data In The Sheet You can extend the output to include additional columns such as: legal entity desk custodian name region portfolio ID reviewer status resolution notes Only add fields that exist in your upstream data or are intentionally created in the workflow. 6) Improve Exception Classification Node: Prepare Exception Record You can enhance this node to add labels like: low severity medium severity high severity requires same-day review possible FX issue possible operational break This can help organize exception handling more efficiently. Add-ons This workflow can be extended with additional automation features depending on your operational needs. 1) Slack Alerts for Mismatches Send a real-time alert whenever a mismatch is detected. Useful for: finance ops teams treasury teams urgent exception monitoring 2) Email Notification Summary Send a daily or hourly summary of all mismatches to stakeholders. Useful for: finance managers controllers operations leads 3) Severity Scoring Add logic to classify mismatches by size or business impact. Useful for: prioritization faster review queues escalation workflows 4) Auto-Assignment to Reviewers Automatically assign mismatch cases to specific team members based on: currency entity account range custodian Useful for structured exception management. 5) Dashboard Reporting Push matched and mismatched records into a reporting dashboard. Useful for: reconciliation KPIs trend monitoring operational oversight 6) Multi-Currency or Multi-Entity Expansion Extend the workflow to support more entities, accounts or balance sources. Useful for: growing operations teams fund administrators larger finance environments Use Case Examples Below are some of the main ways this workflow can be used. There can absolutely be many more use cases depending on how your reconciliation process is structured. 
1) Daily Internal vs Custodian Cash Reconciliation
Automatically compare daily internal records against custodian balances and flag any balance breaks for investigation.

2) End-of-Day Treasury Balance Checks
Run the workflow at the end of each business day to ensure treasury balances match external sources before close.

3) Exception Monitoring for Fund Operations
Identify mismatched fund cash balances and create a structured exception sheet with AI-generated review notes.

4) Reconciliation Logging for Audit Trail
Maintain a consistent log of matched and mismatched records for reporting, controls, and audit readiness.

5) Early Warning for Data Quality Issues
Use mismatches to spot operational problems such as missing values, incorrect balances, or inconsistent source data.

6) Lightweight Finance Automation for Spreadsheet-Based Teams
Support teams that still work mainly in spreadsheets but want to reduce repetitive reconciliation effort through automation.

Troubleshooting Guide

| Issue | Possible Cause | Solution |
|---|---|---|
| No records are being compared | One or both Google Sheets nodes are not returning data | Check that both source sheets contain rows and the correct sheet tabs are selected |
| Records are not matching correctly | account_id values differ between the two source sheets | Make sure account_id values are identical and formatted consistently in both sheets |
| All rows are being flagged as mismatches | Balance fields contain text, blanks, or invalid values | Ensure internal_balance and custodian_balance contain numeric values only |
| Small rounding differences are creating mismatches | Tolerance is too strict for your use case | Update the tolerance value in Calculate Balance Difference |
| Matched records are not being logged | Google Sheets append node is not configured correctly | Verify the target spreadsheet, sheet tab, and credentials in Log Matched Records |
| Mismatch records are not being saved | Exception logging sheet is missing expected columns | Confirm the target sheet includes all mapped fields, including ai_explanation and recon_status |
| AI explanation is blank | OpenAI credentials or model configuration issue | Reconnect your OpenAI credentials and verify the model is available |
| Workflow fails after import | Credentials are not connected in your environment | Reassign all credential-dependent nodes after importing the workflow |
| Workflow does not run automatically | Schedule Trigger is not active or the workflow is disabled | Activate the workflow and confirm the schedule settings |
| Numeric values look wrong in output | Source sheet values are stored with symbols or formatting | Remove currency symbols, commas, or text formatting from balance columns |

Need Help?

If you want help setting up, customizing, or extending this workflow, our n8n workflow automation team at WeblineIndia can help you move faster. We can help you with:
- n8n workflow setup and deployment
- Google Sheets and finance operations automations
- OpenAI-powered exception handling
- Slack / email alert integrations
- dashboard and reporting extensions
- custom reconciliation logic
- production-grade workflow improvements
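For the last troubleshooting row (numeric values stored with symbols or formatting), a small cleaning helper can be added before the comparison instead of editing the sheet by hand. This is a sketch under the assumption that balances arrive as strings like `"$1,234.56"`; it is not part of the original template.

```javascript
// Illustrative helper: strip currency symbols, thousands separators,
// and stray text from a balance cell before converting it to a number.
function parseBalance(raw) {
  if (raw === null || raw === undefined) return NaN;
  // Keep only digits, the minus sign, and the decimal point.
  const cleaned = String(raw).replace(/[^0-9.\-]/g, '');
  return cleaned === '' ? NaN : Number(cleaned);
}
```

Values that cannot be recovered (e.g. pure text) come back as `NaN`, which the mismatch path then flags rather than treating them as zero.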
by Avkash Kakdiya
How it works

This workflow runs daily to collect the latest funding round data from Crunchbase. It retrieves up to 100 recent funding events, including company, investors, funding amount, and industry details. The data is cleaned and filtered to include only rounds announced in the last 30 days. Finally, the results are saved to both Google Sheets for reporting and Airtable for structured database management.

Step-by-step

Trigger & Data Fetching
- Schedule Trigger node – Runs the workflow once a day.
- HTTP Request node – Calls the Crunchbase API to fetch the latest 100 funding rounds with relevant details.

Data Processing
- Code node – Parses the raw API response into clean fields such as company name, funding type, funding amount, investors, industry, and Crunchbase URL.
- Filter node – Keeps only funding rounds from the last 30 days so the dataset stays fresh and relevant.

Storage & Outputs
- Google Sheets node – Appends or updates the filtered funding records in a Google Sheet for easy sharing and reporting.
- Airtable node – Stores the same records in Airtable for more structured, database-style organization and management.

Why use this?
- Automates daily collection of startup funding data from Crunchbase.
- Keeps only the most recent and relevant records for faster insights.
- Ensures data is consistently stored in both Google Sheets and Airtable.
- Supports reporting, collaboration, and database management in one flow.
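The 30-day freshness check performed by the Filter node can be expressed as a small predicate. This is a sketch only: the announcement-date field name (`announced_on`) and ISO date format are assumptions about the Crunchbase response, not guaranteed by the template.

```javascript
// Illustrative 30-day filter, as the Filter node's condition might be
// written in a Code node. Dates in the future or unparseable dates fail.
function isRecent(announcedOn, now = new Date()) {
  const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;
  const announced = new Date(announcedOn);
  if (Number.isNaN(announced.getTime())) return false;
  return announced <= now && now - announced <= THIRTY_DAYS_MS;
}
```

Running the filter daily with this predicate keeps the sheet as a rolling 30-day window rather than an ever-growing archive.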
by MUHAMMAD SHAHEER
Overview

This workflow automates the process of turning your video transcripts into platform-specific social media posts using AI. It reads any uploaded transcript file, analyzes the text, and automatically generates full-length, engaging posts with image prompts for Facebook, LinkedIn, Instagram, Reddit, and WhatsApp. Perfect for creators, marketers, and automation builders who want to repurpose long-form content into viral posts, all in one click.

How it Works
- The Manual Trigger starts the workflow.
- The Read Binary File node imports your video transcript (TXT format).
- The Move Binary Data and Set nodes convert it into a text string for processing.
- The AI Agent (LangChain), powered by Groq AI, analyzes the transcript and generates human-like social media posts with realistic image prompts.
- The Function node parses and structures the output by platform.
- The Google Sheets node automatically saves all content, ready for scheduling or publishing.
- The SerpAPI integration enhances contextual awareness by referencing real-time search trends.

Set Up Steps

Setting up this workflow typically takes 5–10 minutes.
1. Connect your Google Sheets account (OAuth2).
2. Connect your Groq AI and SerpAPI credentials.
3. Upload your transcript file (e.g., from YouTube or a podcast).
4. Run the workflow to instantly generate platform-specific posts and prompts.
5. View all results automatically saved in Google Sheets.

Detailed instructions are included as sticky notes inside the workflow.

Use Cases
- Turn YouTube videos or podcasts into multi-platform social content
- Auto-generate daily social posts from transcripts
- Build AI-powered repurposing systems for agencies or creators
- Save creative teams hours of manual copywriting work

Requirements
- n8n account (self-hosted or cloud)
- Groq AI API key
- SerpAPI key (for optional trend enhancement)
- Google Sheets connection
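The Function node's "parse and structure the output by platform" step could look like the sketch below. It assumes the AI Agent is prompted to return a single JSON object keyed by platform, each entry carrying a `post` and an `image_prompt`; that shape is an assumption for illustration, not the template's exact contract.

```javascript
// Hypothetical parser for the Function node: one output row per platform,
// with empty strings for platforms the model did not fill in.
const PLATFORMS = ['facebook', 'linkedin', 'instagram', 'reddit', 'whatsapp'];

function splitByPlatform(aiOutput) {
  const parsed = JSON.parse(aiOutput);
  return PLATFORMS.map((platform) => ({
    platform,
    post: parsed[platform]?.post ?? '',
    imagePrompt: parsed[platform]?.image_prompt ?? '',
  }));
}
```

Emitting one item per platform makes the Google Sheets node a straightforward append: each row carries the platform name, the post text, and its image prompt.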
by Dhruv Mali
Description

This workflow acts as your automated HR assistant, scanning for employee milestones and posting AI-generated celebration messages to Google Chat.

How it works
- **Daily Scan:** Checks your Google Sheet every morning to identify birthdays and work anniversaries.
- **AI Drafting:** Uses **Google Gemini** to write unique, warm messages for each employee, ensuring wishes never sound robotic or repetitive.
- **Delivery:** Automatically posts the message to your team's **Google Chat** space and logs the activity.

Set up steps
1. Connect Accounts: Set up credentials for Google Sheets and Google PaLM/Gemini.
2. Configure Settings: Open the SET-BIRTHDAY and SET - ANNIVERSARY nodes to enter your Agency Name and Google Chat API details (Space ID, Key, Token).
3. Prepare Data: Ensure your Google Sheet contains columns for employee names, dates of birth, and joining dates.
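The daily milestone scan boils down to comparing the month and day of each stored date against today. A minimal sketch, assuming dates in the sheet are ISO-formatted strings (the column contents and format are assumptions about your data, not fixed by the template):

```javascript
// Returns true when the stored date (birthday or joining date) falls on
// today's month and day, regardless of year. Uses UTC to avoid timezone
// drift on ISO date strings.
function isMilestoneToday(dateString, today = new Date()) {
  const stored = new Date(dateString);
  if (Number.isNaN(stored.getTime())) return false;
  return stored.getUTCMonth() === today.getUTCMonth() &&
         stored.getUTCDate() === today.getUTCDate();
}
```

The same check works for both the SET-BIRTHDAY and SET - ANNIVERSARY branches, since only the source column differs.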
by Philflow
This n8n template lets you run prompts against 350+ LLM models and see exactly what each request costs, with real-time pricing from OpenRouter.

Use cases are many: compare costs across different models, plan your AI budget, optimize prompts for cost efficiency, or track expenses for client billing!

Good to know
- OpenRouter charges a platform fee on top of model costs. See OpenRouter Pricing for details.
- You need an OpenRouter account with API credits. Free signup is available, with some free models included.
- Pricing data is fetched live from OpenRouter's API, so costs are always up-to-date.

How it works
1. All available models are fetched from OpenRouter's API when you start.
2. You select a model and enter your prompt via the form (or just use the chat).
3. The prompt is sent to OpenRouter and the response is captured.
4. Token usage (input/output) is extracted from the response using a LangChain Code node.
5. Real-time pricing for your selected model is fetched from OpenRouter.
6. The exact cost is calculated and displayed alongside your AI response.

How to use
- Chat interface: Quick testing - just type a prompt and get the response with costs.
- Form interface: Select from all available models via a dropdown, enter your prompt, and get a detailed cost breakdown.
- Click "Show Details" on the result form to see the full breakdown (input tokens, output tokens, cost per type).

Requirements
- OpenRouter account with API key (Get one here)

Customising this workflow
- Add a database node to log all requests and costs over time
- Connect to Google Sheets for cost tracking and reporting
- Extend with LLM-as-Judge evaluation to also check response quality
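Step 6 above, the cost calculation, is simple arithmetic once token counts and per-token prices are available. A sketch, assuming usage comes back as `prompt_tokens`/`completion_tokens` and pricing as per-token USD strings (a reasonable reading of OpenRouter's API, but verify against their docs):

```javascript
// Multiply input and output token counts by their respective per-token
// prices and sum. Prices arrive as strings, so convert before multiplying.
function calculateCost(usage, pricing) {
  const inputCost = usage.prompt_tokens * Number(pricing.prompt);
  const outputCost = usage.completion_tokens * Number(pricing.completion);
  return { inputCost, outputCost, totalCost: inputCost + outputCost };
}
```

The three returned values map directly onto the "Show Details" breakdown: input cost, output cost, and total per request.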
by Niels Berkhout
How it works

This template uses a LinkedIn user profile together with your detailed Ideal Customer Profile (ICP) to create a score, including reasoning and outreach messages. It is triggered manually and uses a Google Sheet as the entry point, where each row contains only a LinkedIn profile URL. The SourceGeek node is then triggered and retrieves the complete profile info. That info is sent to an AI Agent along with a long-form description of your Ideal Customer Profile. The AI Agent processes all that data and returns three values:
- ICP rating (between 0 and 100)
- ICP reasoning: where the score comes from
- A 1st, 2nd, and 3rd outreach message that you can use later on

Afterwards, the original Google Sheet row is updated with the data created by the AI Agent.

How to use
- Populate a Google Sheet with LinkedIn profile URLs of people who could become your customers.
- Let SourceGeek fetch their data from LinkedIn and enrich it with soft skills and much more.
- Describe your ICP in detail and let the AI Agent determine the profile's score.
- Update the initial Google Sheet with the ICP score and the reasoning behind it.

Requirements
- A Google Sheet with LinkedIn profile URLs
- The verified SourceGeek node
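Before the sheet row is updated, it can help to validate that the AI Agent actually returned the three expected values. The sketch below is a hedged illustration: the field names (`icp_rating`, `icp_reasoning`, `outreach_1`..`outreach_3`) are hypothetical, not the template's actual output schema.

```javascript
// Hypothetical guard for the AI Agent's structured output. Rejects
// ratings outside 0-100 and missing or empty outreach messages.
function validateIcpResult(result) {
  const rating = Number(result.icp_rating);
  if (!Number.isFinite(rating) || rating < 0 || rating > 100) {
    throw new Error('icp_rating must be a number between 0 and 100');
  }
  const messages = [result.outreach_1, result.outreach_2, result.outreach_3];
  if (messages.some((m) => typeof m !== 'string' || m.trim() === '')) {
    throw new Error('all three outreach messages are required');
  }
  return { rating, reasoning: String(result.icp_reasoning ?? ''), messages };
}
```

Failing fast here keeps malformed AI output from silently overwriting rows in the source Google Sheet.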