by Cheng Siong Chin
How It Works
This workflow streamlines financial operations for accounting teams, finance departments, and tax professionals managing business expenses. It addresses the challenge of reconciling expenses with revenue data, accurately categorizing deductions, and ensuring tax compliance across complex transactions.

The system triggers on a schedule to fetch expense receipts and revenue data from financial systems simultaneously. An AI-powered receipt-matching agent uses OpenAI models to intelligently pair receipts with corresponding revenue entries, handling variations in formatting, dates, and vendor names. A deduction categorization agent analyzes the matched transactions using structured output parsing to classify expenses into appropriate tax categories based on IRS guidelines and business rules (see the sketch below). The workflow then calculates optimized tax deductions, taking category limits and compliance requirements into account. Finally, a report generation agent compiles comprehensive tax packets with supporting documentation, which are finalized and automatically delivered to tax agents via email for review and filing.

Setup Steps
- Configure financial system API credentials in "Fetch Expense Receipts"
- Set up the OpenAI API key in all AI agent nodes for intelligent processing
- Define the schedule frequency in "Schedule Trigger" based on your accounting period requirements
- Customize deduction categories and rules in "Deduction Categorization Agent"
- Configure tax calculation parameters in the "Calculate Tax Deductions" node per your regulations

Prerequisites
Financial system API access with read permissions; OpenAI API access.

Use Cases
Monthly expense reconciliation, quarterly tax preparation, annual tax filing automation.

Customization
Add approval workflows for high-value expenses; integrate additional financial systems.

Benefits
Reduces tax preparation time by 70%; maximizes legitimate deductions through intelligent categorization.
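The categorization step depends on the agent returning machine-parseable JSON. Below is a minimal TypeScript sketch of how such structured output might be validated before the deduction calculation; the field names, category list, and deductiblePct field are illustrative assumptions, not the template's actual schema.

```typescript
// Minimal sketch: validating the categorization agent's structured output.
// Field names and categories are illustrative assumptions.
interface CategorizedExpense {
  receiptId: string;
  vendor: string;
  amount: number;        // USD
  category: string;      // e.g. "travel", "meals", "office_supplies"
  deductiblePct: number; // 0-100, per category rules
}

const ALLOWED_CATEGORIES = new Set([
  "travel", "meals", "office_supplies", "software", "professional_services",
]);

function parseAgentOutput(raw: string): CategorizedExpense[] {
  const items = JSON.parse(raw) as CategorizedExpense[];
  // Keep only entries with a known category and sane numeric fields.
  return items.filter(
    (e) =>
      ALLOWED_CATEGORIES.has(e.category) &&
      e.amount > 0 &&
      e.deductiblePct >= 0 &&
      e.deductiblePct <= 100,
  );
}
```

Rejected entries can be routed to a manual-review branch rather than silently dropped.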
by Romuald Członkowski
Social Media Intelligence Workflow with Bright Data and OpenAI
Get a 360° social media presence report for a person.

Who's it for
Business development professionals, recruiters, sales teams, and market researchers who need comprehensive social media intelligence on individuals for lead qualification, due diligence, partnership evaluation, or candidate assessment.

How it works
1. Enter the target person's details through the web form (name, company, location)
2. The AI Discovery Agent searches across the selected platforms using name variations
3. The profile validator verifies discovered profiles with confidence scoring
4. Platform-specific agents analyze each profile using Bright Data MCP tools
5. GPT-4 synthesizes all the data into a comprehensive intelligence report
6. The report is automatically generated as a formatted Google Doc with a direct link

Requirements
- Bright Data MCP account with PRO access (get your Bright Data API key here)
- OpenAI API key (or an alternative LLM provider)
- Google Drive OAuth connection for report delivery
- n8n self-hosted instance or cloud account

How to set up
1. Update the Bright Data credentials: find the "Bright Data MCP" node (look for the red warning note), replace YOUR_BRIGHT_DATA_TOKEN_HERE with your actual token, and update UNLOCKER_CODE_HERE with your unlocker code
2. Update the Google Drive settings: find the "Create Empty Google Doc" node and select the target folder there
3. Configure your LLM credentials (OpenAI or an alternative)
4. Test with your own name using the "Basic" search depth

Watch the YouTube tutorial.

How to customize the workflow
- **Add platforms**: Extend the Switch node with new cases and create corresponding prompt builders
- **Modify analysis depth**: Edit the platform-specific prompt builders to focus on different metrics
- **Change report format**: Update the final LLM Chain prompt to adjust the report structure
- **Add notifications**: Insert Slack or email nodes after report generation
- **Adjust confidence thresholds**: Modify the validators to change profile verification requirements (see the sketch below)
- **Alternative outputs**: Replace Google Docs with PDF, Excel, or a webhook to your CRM
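For the confidence-threshold customization above, the check amounts to filtering discovered profiles against a minimum score. A hypothetical sketch, assuming a 0–1 confidence field and a 0.7 default threshold (both placeholders):

```typescript
// Hypothetical profile-verification filter; field names and the threshold
// are assumptions, not the template's actual validator logic.
interface DiscoveredProfile {
  platform: string;
  url: string;
  confidence: number; // 0-1, assigned by the validator agent
}

const MIN_CONFIDENCE = 0.7; // raise for stricter verification

function acceptProfiles(profiles: DiscoveredProfile[]): DiscoveredProfile[] {
  // Keep only profiles above the threshold, and at most one per platform
  // (the highest-confidence match wins).
  const best = new Map<string, DiscoveredProfile>();
  for (const p of profiles) {
    if (p.confidence < MIN_CONFIDENCE) continue;
    const prev = best.get(p.platform);
    if (!prev || p.confidence > prev.confidence) best.set(p.platform, p);
  }
  return [...best.values()];
}
```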
by Cheng Siong Chin
How It Works
This workflow delivers intelligent multilingual audio content creation for global marketing teams, e-learning providers, and content production studios. It solves the complex challenge of generating culturally adapted, professionally voiced translations optimized for each target language.

The system begins with AI-powered localization that adapts the source content for cultural context, idioms, and regional preferences rather than translating literally. Specialized AI agents then optimize speech parameters (pace, tone, emphasis) and voice characteristics (pitch, timbre, style) for each language's phonetic requirements. The workflow prepares the language array and loops through each target language, generating optimized audio via ElevenLabs with customized voice parameters (see the sketch below). All audio files are processed, formatted with metadata, and aggregated into a complete deliverable package, transforming single-source content into publication-ready multilingual audio assets.

Setup Steps
- Configure OpenAI API credentials in all AI agent nodes
- Set up an ElevenLabs account and obtain an API key
- Define the target languages in the "Workflow Configuration" node using ISO language codes
- Customize the localization prompts in the AI agents to match your brand voice and content type
- Adjust the voice parameter ranges and optimization criteria to your audio requirements
- Configure the output formatting in the "Aggregate Results" node

Prerequisites
OpenAI API access with GPT-4 capabilities; an active ElevenLabs subscription with multi-voice access.

Use Cases
Global product launch campaigns, international e-learning course production.

Customization
Modify the AI prompts for industry-specific terminology; add quality validation checkpoints.

Benefits
Achieves native-quality audio across languages; reduces production time by 80%.
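The per-language loop conceptually boils down to one text-to-speech request per target language. A minimal sketch assuming the ElevenLabs REST text-to-speech endpoint; the voice IDs, model name, and voice settings shown are placeholders rather than the template's tuned values:

```typescript
// Sketch of the language loop; VOICE_ID_* and the settings are placeholders.
const TARGETS = [
  { lang: "es", voiceId: "VOICE_ID_ES", text: "Texto localizado..." },
  { lang: "ja", voiceId: "VOICE_ID_JA", text: "ローカライズ済みテキスト..." },
];

async function generateAudio(apiKey: string): Promise<void> {
  for (const t of TARGETS) {
    const res = await fetch(
      `https://api.elevenlabs.io/v1/text-to-speech/${t.voiceId}`,
      {
        method: "POST",
        headers: { "xi-api-key": apiKey, "Content-Type": "application/json" },
        body: JSON.stringify({
          text: t.text,
          model_id: "eleven_multilingual_v2", // assumed multilingual model
          voice_settings: { stability: 0.5, similarity_boost: 0.75 },
        }),
      },
    );
    const audio = new Uint8Array(await res.arrayBuffer());
    // ...write `audio` to storage, tag it with t.lang metadata, aggregate
    console.log(`${t.lang}: ${audio.byteLength} bytes of audio`);
  }
}
```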
by Cheng Siong Chin
How It Works
This workflow automates hospital emergency department triage by intelligently processing patient intake information through multiple AI-powered assessment stages. Designed for emergency departments, urgent care centers, and hospital admission teams, it solves the critical challenge of rapid, accurate patient prioritization during high-volume periods.

The system captures initial patient data through a chat interface, uses specialized AI agents to analyze medical history and current symptoms, validates business rules for priority assignment, performs stability checks, calculates priority scores (see the sketch below), and determines the required actions. It then routes patients to the appropriate care pathways, notifies the relevant medical teams, and logs all interactions for audit compliance. The workflow leverages OpenAI models and structured JSON parsing to ensure consistent, protocol-driven triage decisions.

Setup Steps
- Configure OpenAI credentials with an API key for AI agent access
- Set up the Hospital Triage Agent node with your clinical triage protocols
- Configure the Patient Consent and Structured JSON checkers with your validation rules
- Connect notification endpoints for the Execute Appointment and Send Notification nodes
- Integrate your audit logging system in the Log Interactions node
- Customize the business rule validation parameters for your facility's triage categories

Prerequisites
Active OpenAI API account; hospital system API access for appointments and notifications.

Use Cases
Emergency department patient intake, urgent care prioritization, virtual triage for telehealth.

Customization
Modify the triage agent prompts to reflect your clinical protocols; adjust the priority scoring algorithms.

Benefits
Accelerates triage processing by 60%; ensures standardized clinical assessment.
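As an illustration only, a rule-based priority score could look like the sketch below. The inputs, weights, and routing thresholds are invented for the example; they are not the template's clinical protocol, which you supply in the Hospital Triage Agent node.

```typescript
// Illustrative priority score; all weights and thresholds are assumptions.
interface TriageAssessment {
  vitalsStable: boolean;
  painLevel: number;       // 0-10, self-reported
  redFlagSymptoms: number; // count flagged by the AI agent
  age: number;
}

function priorityScore(a: TriageAssessment): number {
  let score = 0;
  if (!a.vitalsStable) score += 50;      // instability dominates
  score += a.redFlagSymptoms * 15;
  score += a.painLevel * 2;
  if (a.age >= 65 || a.age <= 2) score += 10; // age-based adjustment
  return score;
}

// e.g. route score >= 50 to immediate care, 20-49 to urgent, else standard
```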
by Janak Patel
Who's it for
This template is ideal for YouTube video creators who spend a lot of time manually generating SEO assets such as descriptions, tags, titles, keywords, and thumbnails. If you're looking to automate your YouTube SEO workflow, this is the solution for you.

How it works / What it does
1. Connect a Google Sheet to n8n and pull in the Hindi script (or any language).
2. Use OpenAI to generate the SEO content: video description, tags, keywords, titles, thumbnail titles, and so on (see the sketch below).
3. Use the generated description as input to create a thumbnail image via an image generation API.
4. Store all outputs in the same Google Sheet in separate columns.
5. Optionally, use tools like VidIQ or TubeBuddy to test the SEO strength of the generated titles, tags, and keywords.

💡 Note: This example uses Runway's image generation API, but you can plug in any other image-generation service of your choice.

Requirements
- A Google Sheet with clearly named columns
- Hindi, English, or other-language scripts in the sheet
- OpenAI API key
- Runway API key (or any other image generation API)

How to set up
You can set up this workflow in 15 minutes by following the pre-defined steps. Replace the manual Google Sheets trigger with a scheduled trigger for daily or timed automation. You may also swap Google Sheets for any database or data source of your choice. No Google Sheets API is required. Advanced customizations require minimal JavaScript or Python knowledge.
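The SEO-generation step (step 2 above) can be pictured as a single structured-output call. A sketch assuming the OpenAI chat completions endpoint; the prompt wording and model choice are illustrative:

```typescript
// Sketch of the SEO-asset generation call; output keys mirror the sheet
// columns described above but the exact prompt is an assumption.
interface SeoAssets {
  description: string;
  tags: string[];
  keywords: string[];
  titles: string[];
  thumbnailTitle: string;
}

async function generateSeo(apiKey: string, script: string): Promise<SeoAssets> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // illustrative model choice
      response_format: { type: "json_object" },
      messages: [
        {
          role: "user",
          content:
            "From this video script, return JSON with keys description, " +
            `tags, keywords, titles, thumbnailTitle:\n${script}`,
        },
      ],
    }),
  });
  const data = await res.json();
  return JSON.parse(data.choices[0].message.content) as SeoAssets;
}
```

Each returned field then maps to one column when the row is written back to the sheet.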
by Robert Breen
Give business users a chat box; get back valid BigQuery SQL and live query results. The workflow:

1. Captures a plain-language question from a chat widget or internal portal.
2. Fetches the current table and column schema from your BigQuery dataset (via INFORMATION_SCHEMA).
3. Feeds both the schema and the question to GPT-4o so it can craft a syntactically correct SQL query using only fields that truly exist.
4. Executes the AI-generated SQL in BigQuery and returns the results.
5. Stores a short-term memory by session, enabling natural follow-up questions.

Perfect for analysts, customer-success teams, or any stakeholder who needs data without writing SQL.

⚙️ Setup Instructions
1. Import the workflow: n8n → Workflows → Import from File (or Paste JSON) → Save
2. Add credentials:

| Service | Where to create credentials | Node(s) to update |
|---------|----------------------------|-------------------|
| OpenAI | <https://platform.openai.com> → Create API key | OpenAI Chat Model |
| Google BigQuery | Google Cloud Console → IAM & Admin → Service Account JSON key | Google BigQuery (schema + query) |

3. Point the schema fetcher to your dataset. In Google BigQuery1 you'll see:

SELECT table_name, column_name, data_type
FROM n8nautomation-453001.email_leads_schema.INFORMATION_SCHEMA.COLUMNS

Replace n8nautomation-453001.email_leads_schema with YOUR_PROJECT.YOUR_DATASET. Keep the rest of the query the same—BigQuery's INFORMATION_SCHEMA always surfaces table_name, column_name, and data_type.
4. Update the execution node: open Google BigQuery (the second BigQuery node) and select your project in Project ID. The SQL Query field is already {{ $json.output.query }}, so it will run whatever the AI returns.
5. (Optional) Embed the chat interface.
6. Test end-to-end: open the embedded chat widget and ask, "How many distinct email leads were created last week?" After a few seconds the workflow will return a table of results—or an error if the schema lacks the requested fields. Ask specific questions about your data.
7. Activate: toggle Active so the chat assistant is available 24/7.

🧩 Customization Ideas
- **Row-limit safeguard**: automatically append LIMIT 1000 to every query (see the sketch below).
- **Chart rendering**: send query results to Google Sheets + Looker Studio for instant visuals.
- **Slack bot**: forward both the question and the SQL result to a Slack channel for team visibility.
- **Schema caching**: store the INFORMATION_SCHEMA result for 24 hours to cut BigQuery costs.

Contact
- **Email:** rbreen@ynteractive.com
- **Website:** https://ynteractive.com
- **YouTube:** https://www.youtube.com/@ynteractivetraining
- **LinkedIn:** https://www.linkedin.com/in/robertbreen
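The row-limit safeguard mentioned above can be a one-line transform between the AI output and the BigQuery node. A sketch (pure string handling, no assumptions about your schema):

```typescript
// Append LIMIT <max> to AI-generated SQL unless the model already set one.
function enforceRowLimit(sql: string, max = 1000): string {
  const trimmed = sql.trim().replace(/;+$/, ""); // drop trailing semicolons
  return /\blimit\s+\d+\s*$/i.test(trimmed)
    ? trimmed
    : `${trimmed} LIMIT ${max}`;
}

// enforceRowLimit("SELECT * FROM leads") -> "SELECT * FROM leads LIMIT 1000"
```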
by Ranjan Kumar
Who's it for
This template is ideal for creators, bloggers, and automation enthusiasts who want to auto-generate blog posts from AI-generated content — without lifting a finger. Whether you're running a tech blog or an AI newsletter, or just want to keep your WordPress site fresh, this workflow does the heavy lifting.

How it works
This n8n workflow automatically publishes WordPress posts using trending content from Reddit RSS feeds (such as /r/artificial and /r/MachineLearning), enhanced with AI writing and royalty-free images.

1. RSS Feed Trigger: fetches new Reddit posts every minute from multiple AI-related subreddits.
2. AI Blog Writer: uses an LLM (Groq / GPT-4o) to turn Reddit titles and content into a full blog article (title, content, category, tags, image keyword).
3. Image Generator: queries the Pexels API with the keyword provided by the AI to fetch a relevant blog image (see the sketch below).
4. Category & Tag Manager: automatically creates or reuses categories and tags in WordPress.
5. WordPress Publisher: posts the article in draft or published form — complete with featured image and metadata.

Everything is dynamically generated — no hardcoded text or API keys!

How to set up
Estimated time: 15–20 minutes. You'll need:
- 🧠 Groq or OpenAI API key (for AI article generation)
- 🖼️ Pexels API key (for fetching featured images)
- 📰 WordPress API credentials (with media + post permissions)

Customization via sticky notes:
- Choose your own RSS feeds (or subreddit URLs)
- Modify the AI prompt to match your writing style
- Set the post status (draft or publish)
- Add your WordPress API URL and credentials

Requirements
- Free n8n account (or self-hosted instance)
- API credentials (Groq/OpenAI, Pexels, WordPress)
- Working WordPress site with REST API access
- Sticky notes in the workflow explain the setup instructions, the AI prompt format, and the required credential names
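The image step (step 3 above) reduces to one Pexels search per keyword. A sketch assuming the Pexels /v1/search endpoint; the chosen photo size and per_page value are arbitrary:

```typescript
// Fetch one royalty-free image URL for the AI-provided keyword.
// Returns null when no photo matches.
async function fetchImage(
  pexelsKey: string,
  keyword: string,
): Promise<string | null> {
  const res = await fetch(
    `https://api.pexels.com/v1/search?query=${encodeURIComponent(keyword)}&per_page=1`,
    { headers: { Authorization: pexelsKey } },
  );
  const data = await res.json();
  return data.photos?.[0]?.src?.large ?? null; // URL for the featured image
}
```

The returned URL is then uploaded to the WordPress media library before the post is created.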
by Don Jayamaha Jr
Get real-time MEXC Spot Market data instantly in Telegram! This workflow connects the MEXC REST v3 API with Telegram and optional GPT-4.1-mini formatting, providing users with the latest prices, 24h stats, order book depth, trades, and candlesticks in structured, Telegram-ready messages.

🔎 How It Works
1. A Telegram Trigger node listens for commands.
2. User Authentication ensures only authorized Telegram IDs can access the bot.
3. A Session ID is generated from chat.id for lightweight memory.
4. The MEXC AI Agent coordinates multiple API calls via HTTP nodes:
   - Ticker (Latest Price) → /api/v3/ticker/price?symbol=BTCUSDT
   - 24h Stats → /api/v3/ticker/24hr?symbol=BTCUSDT
   - Order Book Depth → /api/v3/depth?symbol=BTCUSDT&limit=50
   - Best Bid/Ask Snapshot → /api/v3/ticker/bookTicker?symbol=BTCUSDT
   - Candlesticks (Klines) → /api/v3/klines?symbol=BTCUSDT&interval=15m&limit=200
   - Recent Trades → /api/v3/trades?symbol=BTCUSDT&limit=100
5. Utility nodes refine the data:
   - Calculator → spreads, averages, mid-prices (see the sketch below)
   - Think → formats raw JSON into human-readable summaries
   - Simple Memory → saves the symbol, sessionId, and context across turns
6. A Message Splitter prevents Telegram messages from exceeding 4000 characters.
7. Results are sent back to Telegram as structured, readable reports.

✅ What You Can Do with This Agent
- Get the latest prices and 24h stats for any spot pair.
- Retrieve order book depth (customizable levels).
- Monitor best bid/ask quotes for spreads.
- View candlestick OHLCV data for multiple timeframes.
- Check recent trades (up to 100).
- Receive clean Telegram reports — no raw JSON.

🛠️ Setup Steps
1. Create a Telegram bot: use @BotFather to create a bot and copy its API token.
2. Configure in n8n: import MEXC AI Agent v1.02.json, update the User Authentication node with your Telegram ID, add Telegram API credentials (bot token), add your OpenAI API key, and optionally add a MEXC API key.
3. Deploy & test: activate the workflow in n8n, send a query like BTCUSDT to your bot, and instantly receive structured MEXC Spot Market data in Telegram.

📤 Output Rules
- Output is grouped into Price, 24h Stats, Order Book, Candlesticks, and Trades.
- No raw JSON — formatted summaries only.
- Complies with Telegram's 4000-character message limit (auto-split).

📺 Setup Video Tutorial
Watch the full setup guide on YouTube.

⚡ Unlock real-time MEXC Spot Market insights in Telegram — clean, fast, and API-key free.

🧾 Licensing & Attribution
© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted.
🔗 For support: Don Jayamaha – LinkedIn
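The Calculator step's spread and mid-price math, using the bookTicker endpoint listed above, might look like the sketch below; the response field names (bidPrice, askPrice) follow MEXC's v3 conventions but treat them as assumptions to verify:

```typescript
// Spread/mid-price math from the best bid/ask snapshot.
async function bookStats(symbol: string) {
  const res = await fetch(
    `https://api.mexc.com/api/v3/ticker/bookTicker?symbol=${symbol}`,
  );
  const { bidPrice, askPrice } = await res.json(); // assumed field names
  const bid = parseFloat(bidPrice);
  const ask = parseFloat(askPrice);
  return {
    mid: (bid + ask) / 2,
    spread: ask - bid,
    spreadPct: ((ask - bid) / bid) * 100,
  };
}

// bookStats("BTCUSDT").then(console.log);
```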
by Don Jayamaha Jr
Instantly access Upbit Spot Market data in Telegram with AI automation. This workflow integrates the Upbit REST API with GPT-4o-mini and Telegram, giving you real-time price data, order books, trades, and candles directly in chat. Perfect for crypto traders, market analysts, and investors who want structured Upbit data at their fingertips—no manual API calls required.

⚙️ How It Works
1. A Telegram bot listens for user queries like upbit KRW-BTC 15m.
2. The Upbit AI Agent parses the request and fetches live data from the official Upbit REST API:
   - Price & 24h stats (/v1/ticker)
   - Order book depth & best bid/ask (/v1/orderbook)
   - Recent trades (/v1/trades/ticks)
   - Dynamic OHLCV candles across all timeframes (/v1/candles/{seconds|minutes|days|weeks|months|years})
3. A built-in Calculator tool computes spreads, % change, and midpoints (see the sketch below).
4. A Think module reshapes the raw JSON into simplified, clean fields.
5. The agent formats the results into concise, structured text and sends them back via Telegram.

📊 What You Can Do with This Agent
✅ Get real-time prices and 24h change for any Upbit trading pair.
✅ View order book depth and best bid/ask snapshots.
✅ Fetch multi-timeframe OHLCV candles (from 1s to 1y).
✅ Track recent trades with price, volume, side, and timestamp.
✅ Calculate midpoints, spreads, and percentage changes.
✅ Receive clean, human-readable reports in Telegram—no JSON parsing needed.

🛠 Set Up Steps
1. Create a Telegram bot: use @BotFather and save your bot token.
2. Configure Telegram and OpenAI in n8n: add your bot token under Telegram credentials, and replace the Telegram ID in the authentication node to restrict access.
3. Import the workflow: load Upbit AI Agent v1.02.json into n8n and check the connections to the tools (Ticker, Orderbook, Trades, Klines).
4. Deploy and test:
   - Example query: upbit KRW-BTC 15m → returns price, order book, candles, and trades.
   - Example query: upbit USDT-ETH trades 50 → returns the 50 latest trades.

📺 Setup Video Tutorial
Watch the full setup guide on YouTube.

⚡ Unlock clean, structured Upbit Spot Market data instantly—directly in Telegram!

🧾 Licensing & Attribution
© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted.
🔗 For support: Don Jayamaha – LinkedIn
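For the Calculator tool above, a /v1/ticker call with a 24h-change computation might look like the sketch below; the response field names (trade_price, signed_change_rate) follow Upbit's public docs, but verify them against the live API:

```typescript
// Fetch one market's latest price and 24h change from Upbit.
async function upbitTicker(market: string) {
  const res = await fetch(
    `https://api.upbit.com/v1/ticker?markets=${market}`,
  );
  const [t] = await res.json(); // the API returns an array of tickers
  return {
    price: t.trade_price,                   // last trade price
    change24hPct: t.signed_change_rate * 100, // signed 24h change, %
  };
}

// upbitTicker("KRW-BTC").then(console.log);
```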
by Dayong Huang
How it works
This template creates a fully automated Twitter content system that discovers trending topics, analyzes why they're trending using AI, and posts intelligent commentary about them. The workflow uses MCP (Model Context Protocol) with the twitter154 MCP server from MCPHub to connect with Twitter APIs, and leverages OpenAI GPT models to generate brand-safe, engaging content about current trends.

Key Features:
- 🔍 Smart Trend Discovery: automatically finds US trending topics with engagement scoring
- 🤖 AI-Powered Analysis: uses GPT to explain "why it's trending" in 30–60 words
- 📊 Duplicate Prevention: a MySQL database tracks posted trends with 3-day cooldowns (see the sketch below)
- 🛡️ Brand Safety: filters out NSFW content and low-quality hashtags
- ⚡ Rate Limiting: built-in delays to respect API limits
- 🐦 Powered by twitter154: uses the robust "Old Bird" MCP server for comprehensive Twitter data access

Set up steps
Setup time: ~10 minutes

Prerequisites:
- OpenAI API key for GPT models
- Twitter API access for posting
- MySQL database for trend tracking
- **MCP server access**: twitter154 from aigeon-ai via MCPHub

Configuration:
1. Set up the MCP integration with the twitter154 server endpoint: https://api.mcphub.com/mcp/aigeon-ai-twitter154
2. Configure credentials for the OpenAI, Twitter, and MySQL connections
3. Set up authentication for the twitter154 MCP server (header auth required)
4. Create the MySQL table for the keyword registry (schema provided in the workflow)
5. Test the workflow with a manual execution before enabling automation
6. Set the schedule for automatic trend discovery (recommended: every 2–4 hours)

MCP Server Features Used:
- **Search Tweets**: core functionality for trend analysis
- **Get Trends Near Location**: discovers trending topics by geographic region
- **AI Tools**: leverages sentiment analysis and topic classification capabilities

Customization Options:
- Modify the trend scoring criteria in the AI agent prompts
- Adjust the cooldown periods in the database queries
- Change the target locale from US to other regions (WOEID configuration)
- Customize the tweet formatting and content style
- Configure different MCP server endpoints if needed

Perfect for: social media managers, content creators, and businesses that want to stay current with trending topics while maintaining consistent, intelligent posting schedules.

Powered by: the twitter154 MCP server ("The Old Bird"), which provides robust access to Twitter data including tweets, user information, trends, and AI-powered text analysis tools.
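The duplicate-prevention check above comes down to a cooldown query against the keyword registry. A sketch using the mysql2 client; the table and column names are assumptions, since the actual schema ships with the workflow:

```typescript
// Hypothetical keyword-registry cooldown check (3-day window).
import { type Connection } from "mysql2/promise";

const DDL = `CREATE TABLE IF NOT EXISTS trend_registry (
  keyword VARCHAR(255) PRIMARY KEY,
  last_posted_at DATETIME NOT NULL
)`;

async function shouldPost(conn: Connection, keyword: string): Promise<boolean> {
  await conn.execute(DDL);
  const [rows] = await conn.execute(
    `SELECT 1 FROM trend_registry
      WHERE keyword = ? AND last_posted_at > NOW() - INTERVAL 3 DAY`,
    [keyword],
  );
  return (rows as unknown[]).length === 0; // true if outside the cooldown
}
```

After a successful tweet, the workflow would upsert the keyword with the current timestamp so the cooldown restarts.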
by Shinji Watanabe
Who's it for
Teams that care about space-weather impact—SRE/infra, satellite ops, aviation, power utilities, researchers—or anyone who wants timely, readable alerts when NASA publishes significant solar events.

How it works / What it does
Every 30 minutes a Cron trigger runs, the NASA DONKI node fetches the past 24 hours of space-weather notifications, and a code step de-duplicates, labels the event types, and assigns a severity (CRITICAL / HIGH / OTHER; see the sketch below). A Switch then routes the items:
- **CRITICAL/HIGH** → an LLM ("AI Agent") produces a concise Japanese alert → Slack posts it with the local time and a source link.
- **OTHER** → an LLM creates a short summary for record-keeping → a small merge step prepares the fields → Google Sheets appends a new row.

Sticky notes on the canvas explain the schedule, data source, and overall flow.

How to set up
1. Add credentials for Slack, Google Sheets, and OpenAI (or a compatible LLM).
2. Open the Slack nodes and select your workspace and target channel.
3. Select your Google Sheet and worksheet for logging.
4. (Optional) Adjust the Cron interval and the NASA lookback window.
5. Test with a manual execution, then activate.

Requirements
- Slack bot with permission to post to the chosen channel
- Google account with access to the target Sheet
- OpenAI (or API-compatible) credentials for the LLM nodes
- Internet access to NASA DONKI (no API key required)

How to customize the workflow
- Tweak the severity rules inside the Analyze & Prioritize code node.
- Edit the prompt tone and length in each AI Agent node.
- Change the Slack formatting or mention style (@channel vs none).
- Add filters (e.g., alert only on CME/FLR) or extend the logging fields in the merge step.
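The severity assignment in the Analyze & Prioritize code node can be pictured as a small mapping over DONKI event types. An illustrative sketch, not the template's exact rules; the X-class heuristic is an assumption:

```typescript
// Illustrative severity rules over DONKI notification types (e.g. FLR, CME).
type Severity = "CRITICAL" | "HIGH" | "OTHER";

function classify(messageType: string, body: string): Severity {
  // Treat X-class solar flares as critical (assumed heuristic).
  if (messageType === "FLR" && /X-class/i.test(body)) return "CRITICAL";
  // Flares and coronal mass ejections are high priority by default.
  if (messageType === "CME" || messageType === "FLR") return "HIGH";
  return "OTHER";
}
```

Adjusting these rules (or adding types such as GST for geomagnetic storms) is exactly the customization described above.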
by Toshiki Hirao
Managing contracts manually is time-consuming and prone to human error, especially when documents need to be shared, tracked, and stored across different tools. This workflow automates the entire process by capturing contract PDF and Word files uploaded to Slack, extracting key information with GPT, and organizing the data into a structured format inside Google Sheets. Essential fields such as client, service provider, contract value, and important dates are automatically parsed and logged, eliminating repetitive manual entry. Once the data is saved, a confirmation message is posted back to Slack so your team can quickly verify that everything has been recorded accurately.

Who's it for
This workflow is ideal for operations teams, legal departments, and growing businesses that manage multiple contracts and want to maintain accuracy without spending hours on administration. By integrating Slack, GPT, and Google Sheets, you gain a simple but powerful contract management system that reduces risk, improves visibility, and keeps everyone aligned. Instead of scattered files and manual spreadsheets, you have a single automated pipeline that keeps your contract data up to date and accessible.

How it works
1. The workflow is triggered when a contract in PDF or Word format is shared in the designated Slack channel.
2. The uploaded file is automatically retrieved for processing, and its content is extracted and converted into plain text. If the file is not in PDF or Word format, an error message is sent.
3. GPT interprets the extracted text and structures the essential fields (e.g., Client, Service Provider, Effective Date, Expiration Date, Signature Date, Contract Value; see the sketch below).
4. The structured contract information is appended as a new row in the contract tracker spreadsheet on Google Sheets.
5. A summary of the saved data is posted back to Slack for quick validation.

How to set up
1. Import this workflow into your n8n instance.
2. Authenticate your Slack account and select the target channel for contract submissions.
3. Link your Google account and specify the spreadsheet where the contract data will be stored. In this template, the required columns are Client, Service Provider, Effective Date, Expiration Date, Signature Date, and Contract Value.
4. Adjust the GPT parsing prompt to match the specific fields that your organization requires.
5. Upload a sample contract in PDF or Word format to Slack and verify that the extracted data is correctly recorded in Google Sheets.

Requirements
- An active n8n instance in the cloud
- A Slack account with permission to upload files and send messages
- A Google Sheets account with edit access to the target spreadsheet
- A GPT integration (e.g., OpenAI) for AI-powered text parsing

How to customize the workflow
You can modify this workflow to fit your organization's unique contract needs. For example, update the GPT parsing prompt to capture additional fields, change the target Google Sheets structure, or integrate notifications into other tools. You have full flexibility to expand or simplify the steps so the workflow matches your team's processes and compliance requirements.
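The structured fields GPT returns (step 3 above) map one-to-one onto the spreadsheet columns. A sketch of that shape; the camelCase field names and ISO date format are assumptions:

```typescript
// Assumed shape of the parsed contract data, matching the sheet columns.
interface ContractRecord {
  client: string;
  serviceProvider: string;
  effectiveDate: string;  // ISO 8601, e.g. "2025-01-15" (assumed format)
  expirationDate: string;
  signatureDate: string;
  contractValue: number;
}

// One row per contract, in the same column order as the sheet:
function toSheetRow(c: ContractRecord): (string | number)[] {
  return [
    c.client, c.serviceProvider, c.effectiveDate,
    c.expirationDate, c.signatureDate, c.contractValue,
  ];
}
```

Keeping the GPT prompt's output keys aligned with this shape is what makes the append-to-sheet step reliable.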