by Vigh Sandor
# Telegram AI Channel Bot - Text & Image Response Generator with TGPT

## Overview
This n8n workflow creates an automated Telegram channel bot that responds to messages with AI-generated text or images using TGPT. The bot monitors a specific Telegram channel and generates responses based on message prefixes.

## Features
- Automated text response generation using TGPT
- Image generation with customizable dimensions (1920x1080)
- Duplicate message prevention
- Time-window filtering (15 seconds) so only recent messages are processed
- Continuous polling at 10-second intervals

## Setup Instructions

### Prerequisites
- **n8n instance**: Ensure you have n8n installed and running
- **Telegram bot**: Create a new bot via @BotFather on Telegram
- **Telegram channel**: Create or have admin access to a Telegram channel
- **Linux environment**: The workflow requires Linux for command execution

### Configuration Steps

**1. Obtain a Telegram Bot Token**
- Open Telegram and search for @BotFather
- Send /newbot and follow the prompts
- Save the bot token provided (format: 1234567890:ABCdefGHIjklMNOpqrsTUVwxyz)

**2. Get the Channel ID**
- Add your bot as an administrator to your Telegram channel
- Send a test message to the channel
- Visit: https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getUpdates
- Look for "chat":{"id":-100XXXXXXXXXX}; this is your channel ID

**3. Configure the Workflow**
- Import the workflow JSON into n8n
- Open the Config node
- Replace your_telegram_token with your actual bot token
- Replace your_telegram_channel_id with your channel ID
- Save the changes

**4. Set Up Telegram Credentials in n8n**
- Navigate to the Send Telegram Text Response node
- Click on the credentials field
- Create new Telegram credentials using your bot token
- Apply the same credentials to the Send Telegram Image Response node

**5. System Requirements**
The workflow automatically installs the required packages:
- util-linux-misc (for the script command)
- curl (for downloading TGPT)
- the TGPT binary (downloaded automatically from GitHub)

### Activation
1. Save all configuration changes
2. Toggle the workflow to Active status
3. The bot will start polling every 10 seconds

## How to Use

### Text Generation
To generate a text response, send a message to your Telegram channel in this format:

`am# Your prompt here`

Example: `am# Explain quantum computing in simple terms`

The bot will:
1. Remove the am# prefix
2. Send the prompt to TGPT with the GPT-4 model
3. Generate a response with temperature 0.3 (more focused/deterministic)
4. Reply with the generated text in the channel

### Image Generation
To generate an image, send a message in this format:

`ami# Your image description here`

Example: `ami# A futuristic city with flying cars at sunset`

The bot will:
1. Remove the ami# prefix
2. Use TGPT to generate an image (1920x1080 resolution)
3. Save the image temporarily
4. Send the generated image to the channel

## Important Usage Notes

### Response Time
- The bot checks for new messages every 10 seconds
- Messages older than 15 seconds are ignored
- Expect a delay of 10-30 seconds for responses, depending on generation time

### Message Processing
- Only messages from the configured channel are processed
- The bot maintains a list of processed message IDs to avoid duplicates
- A maximum of 15 messages is retrieved per polling cycle

### Limitations
- Text generation uses temperature 0.3 (less creative, more accurate)
- Image generation uses temperature 0.7 (more creative)
- Images are generated at 1920x1080 resolution
- The bot requires continuous n8n execution

## Troubleshooting

**Bot not responding**
- Verify the workflow is active
- Check that the bot is an admin in the channel
- Confirm the channel ID is correct (a negative number for channels)
- Ensure messages start with the exact prefixes: am# or ami#

**Duplicate responses**
- The workflow includes duplicate prevention; if issues persist, restart the workflow to clear the processed-IDs cache

**Missing dependencies**
- The workflow automatically downloads TGPT on first run
- If errors occur, check the Execute nodes' output for installation issues

**Performance issues**
- Consider increasing the polling interval if server resources are limited
- Monitor the n8n execution logs for timeout errors

## Advanced Configuration
- **Polling interval**: Edit the Schedule node to change the 10-second interval
- **Time window**: In the Process Offset node, modify the timeWindowSeconds variable (default: 15)
- **AI model parameters**: Edit --temperature "0.3" in the Execute - Text node and --temperature "0.7" in the Execute - Image node; both use --model "gpt-4" by default
- **Image dimensions**: In the Execute - Image node, modify --height=1080 --width=1920

## Security Considerations
- Keep your bot token private
- Use private channels to prevent unauthorized access
- Regularly monitor bot activity through Telegram's BotFather
- Consider implementing rate limiting for production use

## Maintenance
- Regularly check n8n logs for errors
- Update the TGPT version URL in the Execute nodes when new versions are released
- Clear the /tmp/ directory periodically to remove temporary files
- Monitor disk space used by image-generation temporary files
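The 15-second time window and duplicate prevention described above can be sketched as a Code-node-style filter. This is a minimal illustration, not the workflow's actual code; the `message_id` and `date` fields follow the Telegram Bot API, and `processedIds` stands in for however the workflow stores processed IDs:

```javascript
// Keep only recent, not-yet-processed channel messages.
// Telegram timestamps are Unix seconds; the window is 15 seconds.
const TIME_WINDOW_SECONDS = 15;

function filterRecentMessages(updates, processedIds, nowSeconds) {
  const fresh = [];
  for (const update of updates) {
    const msg = update.channel_post;                 // channel messages arrive as channel_post
    if (!msg) continue;
    if (processedIds.has(msg.message_id)) continue;  // duplicate prevention
    if (nowSeconds - msg.date > TIME_WINDOW_SECONDS) continue; // too old, ignore
    processedIds.add(msg.message_id);
    fresh.push(msg);
  }
  return fresh;
}
```

Anything that survives this filter is then split on the `am#` / `ami#` prefix and handed to the Execute nodes.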
by 1Shot API
# Get Paid in Stablecoins for Reposting from Your X Account

The x402 payment standard is growing in popularity and has enabled new monetization opportunities for internet resources. This workflow lets you automate the monetization of your following on X by receiving payment in stablecoins in return for reposting content from your X account via the X API node. Selling impressions can be good business: it's what has powered the largest internet companies in the world so far. This workflow gives you a starting point for selling impressions from the social media following you worked hard to build over time.

## Setup

### 1Shot API
1. Create a free 1Shot API account, provision a wallet on your target network, and fund it with some gas tokens.
2. Import the stablecoin you want to get paid in into your 1Shot API account. Use the 1Shot Prompts tab to find x402-compatible tokens.
3. Configure the x402 Gateway node in the workflow to use the stablecoin you imported in step 2.
4. Configure the refund nodes to point to the transfer function of the stablecoin you selected.

### X API
1. Create an X Developer account and generate an API Key and Secret.
2. Use the key and secret to authenticate the X node in the workflow.

### Telegram
1. Create a Telegram bot and use its API key to authenticate the Telegram nodes in the workflow.
2. Get the Chat ID for your bot and input it into the Telegram nodes so that you can receive moderation requests.
by Dean Pike
Transform LinkedIn profile URLs into comprehensive enriched lead profiles, quickly and automatically. Add URLs to your sheet, run the workflow, and get fully enriched contact data: names, job titles, company details, career history, recent activity, and more, all written back to your spreadsheet.

## What It Does
- Reads unenriched rows from Google Sheets (detects an empty "First Name" field)
- Scrapes LinkedIn profiles via Apify (the dev_fusion~linkedin-profile-scraper actor)
- Polls for completion with smart retry logic (15-second intervals, max 20 attempts)
- Extracts comprehensive profile data:
  - Personal info (name, location, headline, bio)
  - Current role (title, description, company, industry, size, website)
  - Additional concurrent positions (for people with multiple roles)
  - Most recent previous employer
  - Last 2 LinkedIn posts with links
- Writes enriched data back to the same Google Sheet row
- Handles errors gracefully with status updates

## Requirements
- Apify account + API token
- Google Sheets OAuth2 credentials
- A Google Sheet with columns: LinkedIn, First Name, Last Name, Job Position, Location, Industry, Company Name, Company URL, Company Size, LI Other Profile Information, Status, Apify ID, Add date, row_number

## Setup
1. Create your Google Sheet with the required columns (or duplicate the template structure)
2. Replace YOUR_APIFY_API_KEY in three HTTP Request nodes: "Start Apify Scrape", "Check Status", and "Fetch LinkedIn Data"
3. Connect your Google Sheets OAuth2 credentials to the two Google Sheets nodes
4. Update the document ID if using your own sheet (it currently points to a specific sheet)
5. Add LinkedIn profile URLs to the "LinkedIn" column, leaving "First Name" blank
6. Run manually; the workflow processes all unenriched rows sequentially

## Sample Output
Google Sheet row output from one successfully enriched lead profile via LinkedIn URL: Link to Google Sheets sample file

## Next Steps
Use the enriched data for sales outreach, recruiting pipelines, or lead scoring. The "LI Other Profile Information" column contains a rich text summary ideal for AI-powered lead qualification or personalized messaging.

Tip: Process small batches (5-10 profiles) first to verify Apify results and check for rate limiting. The Apify dataset ID is stored in each row, so you can retrieve the raw JSON data later if needed for deeper analysis.
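The polling loop (15-second intervals, at most 20 attempts) can be sketched as below. This is an illustration under assumptions: `checkStatus` stands in for the workflow's "Check Status" HTTP request, and the status strings mirror Apify run states; the injectable `sleep` just makes the loop testable:

```javascript
// Poll an async scrape run until it finishes or the attempt budget runs out.
// intervalMs and maxAttempts mirror the workflow's 15s / 20-attempt settings.
async function pollForCompletion(checkStatus, { intervalMs = 15000, maxAttempts = 20, sleep } = {}) {
  const wait = sleep || (ms => new Promise(resolve => setTimeout(resolve, ms)));
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const status = await checkStatus();            // e.g. the Apify run's status field
    if (status === 'SUCCEEDED') return { done: true, attempts: attempt };
    if (status === 'FAILED') throw new Error('Scrape run failed');
    await wait(intervalMs);                        // wait before the next check
  }
  return { done: false, attempts: maxAttempts };   // give up; mark the row's Status for review
}
```

Capping attempts like this is what keeps a stuck Apify run from blocking the rest of the batch.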
by Avkash Kakdiya
## How It Works
This workflow automates the job curation process by retrieving pending job-search inputs from a spreadsheet, querying the JSearch API for relevant job listings, and writing the curated results back to another sheet. It is designed to streamline job discovery and reduce manual data entry.

## Step by Step

### 1. Trigger & Input
- The workflow starts on a defined schedule (e.g., once per day).
- It reads a row from the Job Scraper sheet where the status is marked as "Pending".
- The selected row includes fields like Position and Location, which are used to build the search query.

### 2. Job Search & Processing
- Sends a search request to the JSearch API using the Position and Location from the spreadsheet.
- Parses the API response and extracts individual job listings.
- Filters out empty, irrelevant, or invalid entries to ensure clean, relevant job data.

### 3. Output & Status Update
- Writes valid job listings to the Job Listing output sheet with fields such as job title, company name, location, and more.
- Updates the original row in the source sheet to mark it as Scraped, ensuring it will not be processed again in future runs.

## Benefits
- Reduces manual effort in job research and listing.
- Ensures only valid, structured data is stored and used.
- Prevents duplicate processing with automatic status updates.
- Simple to expand by adding more job sources or filters.
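The filtering step in stage 2 can be sketched as a Code-node function. The exact checks are an assumption for illustration; the field names `job_title` and `employer_name` follow JSearch's response shape, but verify them against your own API output:

```javascript
// Drop listings that are missing the fields the output sheet needs,
// so only clean, usable rows reach the Job Listing sheet.
function filterValidListings(listings) {
  return (listings || []).filter(job =>
    job &&
    typeof job.job_title === 'string' && job.job_title.trim() !== '' &&
    typeof job.employer_name === 'string' && job.employer_name.trim() !== ''
  );
}
```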
by Beex
## Summary
This workflow detects ticket classification events in Beex where the communication channel was WhatsApp, extracts the messages from the interaction, and logs them as an activity on the corresponding HubSpot contact.

## How It Works
1. **Beex Trigger**: Receives the ticket classification event On Management Create via a pre-configured callback.
2. **Filter by Channel**: The automation only considers classification events where the communication channel was a WhatsApp message.
3. **Get Phone**: The phone number is used to find the contact the activity should be assigned to in HubSpot. The country code must be configured manually.
4. **Search Contact**: Finds the contact in HubSpot using the phone number.
5. **Get Messages**: When a ticket is categorized, its ID and all messages from the interaction can be retrieved from the trigger node.
6. **Routing, Formatting, and Consolidation**: Messages are routed based on their content type (text, image, or audio), and each message is formatted as HTML compatible with HubSpot activities.
7. **Sort Messages**: Messages are sorted by their created_at field.
8. **Consolidate Chats**: Individual messages are consolidated into a single record (all in HTML format), and the full chat content is sent to hs_communication_body via the HubSpot API.

## Setup Instructions
1. **Install Beex nodes**: Before importing the template, install the Beex trigger and node packages using the package name n8n-beex-nodes.
2. **Configure HubSpot credentials**: Set up your HubSpot connection with an access token (typically from a private application) and read/write permissions for the Contacts objects.
3. **Configure Beex credentials**: For Beex users with platform access (for testing requests, contact frank@beexcc.com): go to Platform Settings → API Key and Callback, copy your API key, and paste it into the Beex node (Get Messages) in n8n. Activate Typing Registry in the Callback Integration option.
4. **Configure the webhook URL**: Copy the Webhook URL (Test/Production) from the Beex activation node, paste it into the Callback Integration section in Beex, and save your changes.

## Requirements
- **HubSpot**: An account with a private-application token and read/write permissions for the Contacts objects.
- **Beex**: An account with permission to receive Typing Registry events in Callback Integration.

## Customization Options
You can customize the HTML formatting applied to text, audio, or image messages.
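The sort-and-consolidate steps can be sketched as follows. This is a minimal illustration; the message shape with `created_at`, `type`, and `content` fields is an assumption about the Beex payload, and the HTML wrappers are just examples of per-type formatting:

```javascript
// Sort messages chronologically and merge them into one HTML body
// suitable for a HubSpot activity's hs_communication_body field.
function consolidateChat(messages) {
  return [...messages]
    .sort((a, b) => new Date(a.created_at) - new Date(b.created_at))
    .map(m => {
      if (m.type === 'image') return `<p><i>[image]</i> <a href="${m.content}">view</a></p>`;
      if (m.type === 'audio') return `<p><i>[audio]</i> <a href="${m.content}">listen</a></p>`;
      return `<p>${m.content}</p>`;   // plain text message
    })
    .join('\n');
}
```

Consolidating first means one HubSpot activity per ticket instead of one per message, which keeps the contact timeline readable.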
by Robert Breen
This n8n workflow template automatically monitors your Google Sheets for new entries and uses AI to generate detailed descriptions for each topic. Perfect for content creators, researchers, project managers, or anyone who needs automatic content generation based on simple topic inputs.

## What This Workflow Does
This automated workflow:
1. Monitors a Google Sheet for new rows added to the "data" tab
2. Takes the topic from each new row
3. Uses OpenAI GPT to generate a detailed description of that topic
4. Updates the same row with the AI-generated description
5. Logs all activity in a separate "actions" tab for tracking

The workflow runs every minute, checking for new entries and processing them automatically.

## Tools & Services Used
- **n8n**: Workflow automation platform
- **OpenAI API**: AI-powered description generation (GPT-4.1-mini)
- **Google Sheets**: Data input, storage, and activity logging
- **Google Sheets Trigger**: Real-time monitoring for new rows

## Prerequisites
Before implementing this workflow, you'll need:
- n8n instance (self-hosted or cloud)
- OpenAI API account for AI description generation
- Google account for Google Sheets integration
- Google Sheets API access for both reading and writing to sheets

## Step-by-Step Setup Instructions

### Step 1: Set Up OpenAI API Access
1. Visit OpenAI's API platform
2. Create an account or log in
3. Navigate to the API Keys section
4. Generate a new API key
5. Copy and securely store your API key

### Step 2: Set Up Your Google Sheets
**Option 1: Use Our Pre-Made Template (Recommended)**
1. Copy our template: AI Description Generator Template
2. Click "File" → "Make a copy" to create your own version
3. Rename it as desired (e.g., "My AI Content Generator")
4. Note your new sheet's URL; you'll need it for the workflow

**Option 2: Create From Scratch**
1. Go to Google Sheets and create a new spreadsheet
2. Set up the main "data" tab: rename "Sheet1" to "data" and set column headers in row 1 (A1: topic, B1: description)
3. Create an "actions" tab: add a new sheet named "actions" with the column header A1: Update
4. Copy your sheet's URL

### Step 3: Configure Google API Access
**Enable the Google Sheets API**
1. Go to the Google Cloud Console
2. Create a new project or select an existing one
3. Enable the "Google Sheets API" and the "Google Drive API"

**Create a Service Account (for n8n)**
1. In the Google Cloud Console, go to "IAM & Admin" → "Service Accounts"
2. Create a new service account
3. Download the JSON credentials file
4. Share your Google Sheet with the service account email address

### Step 4: Import and Configure the n8n Workflow
**Import the workflow**
1. Copy the workflow JSON from the template
2. In your n8n instance, go to Workflows → Import from JSON
3. Paste the JSON and import

**Configure OpenAI credentials**
1. Click on the "OpenAI Chat Model" node
2. Set up credentials using your OpenAI API key
3. Test the connection to ensure it works

**Configure the Google Sheets integration**
- For the trigger node: click the "Row added - Google Sheet" node, set up Google Sheets Trigger OAuth2 credentials, select your spreadsheet from the dropdown, choose the "data" sheet, and leave polling set to "Every Minute" (already configured).
- For the update node: click the "Update row in sheet" node, use the same Google Sheets credentials, select your spreadsheet and the "data" sheet, and verify the column mapping (topic → topic, description → AI output).
- For the actions log node: click the "Append row in sheet" node, use the same Google Sheets credentials, and select your spreadsheet and the "actions" sheet.

### Step 5: Customize the AI Description Generator
The workflow uses a simple prompt that can be customized:
1. Click on the "Description Writer" node
2. Modify the system message to change the AI behavior: `write a description of the topic. output like this. { "description": "description" }`

## Need Help with Implementation?
For professional setup, customization, or troubleshooting of this workflow, contact:

Robert - Ynteractive Solutions
- **Email**: robert@ynteractive.com
- **Website**: www.ynteractive.com
- **LinkedIn**: linkedin.com/in/robert-breen-29429625/

Specializing in AI-powered workflow automation, business process optimization, and custom integration solutions.
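Because the Description Writer is prompted to return JSON like `{ "description": "description" }`, a downstream Code node may need to parse that reply defensively before writing it to the sheet. This is a sketch, not part of the template; the fence-stripping handles the common case where a model wraps its JSON in a markdown code block:

```javascript
// Extract the "description" field from a model reply that should be JSON,
// tolerating surrounding markdown code fences and falling back to raw text.
function parseDescription(reply) {
  const cleaned = reply.replace(new RegExp('`{3}(?:json)?', 'g'), '').trim();
  try {
    const obj = JSON.parse(cleaned);
    if (typeof obj.description === 'string') return obj.description;
  } catch (e) {
    // not valid JSON: fall through and treat the whole reply as plain text
  }
  return cleaned;
}
```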
by Muhammad Asadullah
## Short Description (for listing)
Import products from Google Sheets to Shopify with automatic handling of single products and multi-variant products (sizes, colors, etc.). Includes SKU management, inventory tracking, and image uploads via the GraphQL API.

## Category
E-commerce, Productivity, Data Import/Export

## Overview
This workflow automates the process of importing products from a Google Sheet into your Shopify store. It intelligently detects and handles both simple products and products with multiple variants (like different sizes or colors), creating them with proper SKU management, pricing, inventory levels, and images.

## Key Features
- ✅ **Dual product support**: Handles single products and multi-variant products automatically
- ✅ **Smart SKU parsing**: Automatically groups variants by parsing the SKU format (e.g., 12345-SM, 12345-MD)
- ✅ **Inventory management**: Sets stock levels for each variant at your default location
- ✅ **Image upload**: Attaches product images from URLs
- ✅ **GraphQL API**: Uses Shopify's modern GraphQL API for reliable product creation
- ✅ **Batch processing**: Processes multiple products in one workflow run

## Use Cases
- Initial store setup with bulk product import
- Regular inventory updates from a spreadsheet
- Migrating products from another platform
- Managing seasonal product catalogs
- Synchronizing products with external systems

## Requirements
- Shopify store with Admin API access
- Google Sheets API credentials
- n8n version 1.0+
- Basic understanding of GraphQL (helpful but not required)

## What You'll Need to Configure
- Shopify Admin API token
- Your Shopify store URL (in the 'set store url' node)
- Google Sheets connection
- (Optional) Vendor name and product type defaults

## Input Format
Your Google Sheet should contain these columns:
- Product Name
- SKU (format: BASESKU-VARIANT for variants)
- Size (or other variant option)
- Price
- On hand Inventory
- Product Image (URL)

Products with the same name are automatically grouped as variants.

## How It Works
1. Reads product data from your Google Sheet
2. Groups products by name and detects whether they have variants
3. Switches to the appropriate creation path (single or variant)
4. Creates the product in Shopify with options and variants
5. Updates each variant with its SKU and pricing
6. Sets inventory levels at your location
7. Uploads product images

## Technical Details
- Uses the Shopify GraphQL Admin API (2025-04)
- Handles up to 100 variants per product
- Processes variants individually for accurate data mapping
- Includes error handling for missing data
- Supports one inventory location per run

## Common Modifications
- Change the vendor name and product type
- Add more variant options (color, material, etc.)
- Customize product status (draft vs. active)
- Modify inventory location selection
- Add product descriptions

## Perfect For
- Shopify store owners managing large catalogs
- E-commerce managers doing bulk imports
- Agencies setting up client stores
- Developers building automated product workflows

**Difficulty**: Intermediate
**Estimated setup time**: 15-30 minutes
**Nodes used**: 16
**External services**: Shopify, Google Sheets
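The variant grouping described above can be sketched like this. It is an illustration only (the workflow itself groups by product name and parses BASESKU-VARIANT SKUs such as 12345-SM); here rows are keyed by the base SKU before the hyphen:

```javascript
// Group sheet rows into products: rows sharing a base SKU become
// variants of one product (SKU format BASESKU-VARIANT, e.g. "12345-SM").
function groupRowsBySku(rows) {
  const products = new Map();
  for (const row of rows) {
    const [baseSku, variantCode] = row.SKU.split('-');
    if (!products.has(baseSku)) {
      products.set(baseSku, { title: row['Product Name'], baseSku, variants: [] });
    }
    products.get(baseSku).variants.push({
      sku: row.SKU,
      option: row.Size,                        // variant option, e.g. size
      price: row.Price,
      hasVariantCode: Boolean(variantCode),    // false means a simple, single product
    });
  }
  return [...products.values()];
}
```

A product ending up with one variant and no variant code takes the single-product creation path; anything else goes down the multi-variant path.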
by iamvaar
Workflow explanation: Watch on YouTube

# Automated Missed Call Recovery with GoHighLevel + Twilio + Gemini

## Prerequisites
Before setting up the workflow in n8n, ensure you have completed the following foundational steps:

- **Twilio call status webhook**: Set the webhook of Sub-workflow 1 in the Twilio Voice section for "Call Status Changes".
- **GoHighLevel custom fields**: Create two custom fields in GoHighLevel (GHL): "called phone number" and "call sid".
- **Twilio API integration**: Ensure your Twilio API credentials are ready and configured in n8n.
- **GoHighLevel developer app**: Create a free GoHighLevel developer app with the scopes contacts.readonly, contacts.write, opportunities.readonly, opportunities.write, and locations.readonly. Generate the Client ID and Secret within the developer app, enter them into the n8n GHL OAuth credentials, copy the OAuth redirect URL from n8n into the app's OAuth redirection settings, and complete the authentication process.
- **GoHighLevel automation workflow**: Create a workflow inside GHL that triggers when a "New Appointment is Created" and fires a POST webhook to the URL generated by Sub-workflow 3 in n8n.
- **GoHighLevel pipeline setup**: Create a pipeline in GHL named "Missed call to appointment" with three stages: "SMS sent No Reply", "Engaged | Appointment Link Sent", and "BOOKED".
- **Scheduling link**: Note down your GoHighLevel scheduling link and keep it handy to insert into the Twilio SMS node.

## Workflow Breakdown
This n8n architecture is divided into three distinct sub-workflows. Here is the node-by-node explanation.

### Sub-Workflow 1: Automated Missed Call Follow-Up
Goal: Detect a missed call, log it in GoHighLevel, and immediately text the prospect.

1. **When Webhook Received** (n8n-nodes-base.webhook): The entry point. Receives incoming POST call data from your telephony provider (Twilio) whenever a call status changes.
2. **Filter Valid Call Statuses** (n8n-nodes-base.if): Evaluates the incoming webhook payload and only lets the workflow continue if the CallStatus contains busy, no-answer, or canceled.
3. **Prepare Lead Data** (n8n-nodes-base.set): Cleans and maps the incoming JSON: extracts the caller's phone number, removes the + sign for clean formatting, grabs the called number and CallSid, and attaches tags like missed-call-lead.
4. **Create Lead in HighLevel** (n8n-nodes-base.highLevel): Pushes the cleaned data into GHL to create a new contact, mapping the custom fields you created (called phone number and call sid) and assigning the hvac-inbound-missed tag.
5. **Create Opportunity in HighLevel** (n8n-nodes-base.highLevel): Creates a pipeline opportunity for the newly generated lead, named dynamically (e.g., "Missed Call.... [Phone].... [Date/Time]").
6. **Send SMS via Twilio** (n8n-nodes-base.twilio): Sends the initial outreach text message to the caller (e.g., "Hi, I believe you missed a call with us... Please state your issue directly here").
7. **Update Opportunity Status** (n8n-nodes-base.highLevel): Moves the GHL opportunity to the first stage in your pipeline ("SMS sent No Reply") to track that the initial outreach has occurred.

### Sub-Workflow 2: AI-Powered SMS Lead Qualification & Booking
Goal: Process replies to the initial SMS, use AI to determine whether it is a valid HVAC opportunity, and send a booking link.

1. **When SMS Received** (n8n-nodes-base.twilioTrigger): Listens for incoming SMS messages on your Twilio number.
2. **Check If Lead** (n8n-nodes-base.highLevel): Searches GHL to see whether the sender's phone number already exists as a contact.
3. **Check Pipeline State** (n8n-nodes-base.highLevel): Looks up the opportunity associated with this contact in the "Missed call to appointment" pipeline.
4. **Lead Analyzer Agent** (@n8n/n8n-nodes-langchain.agent): The core AI brain of this sub-workflow, made of three integrated parts:
   - The agent, prompted to act as an HVAC Opportunity Finder, evaluates the user's SMS context to determine whether they need HVAC services and whether it is appropriate to send a booking link.
   - The Gemini chat model (Google's gemini-3.1-flash-lite-preview) processes the prompt and context.
   - A structured output parser forces the AI to return a clean JSON response (e.g., {"HVAC_oppurtunity?": "yes"}).
5. **If HVAC Opportunity Found** (n8n-nodes-base.if): Checks the parsed JSON output from the AI; if the AI answered "yes" or "yeah", the workflow proceeds.
6. **Send Response SMS** (n8n-nodes-base.twilio): Sends a text message containing your GHL scheduling link, prompting the prospect to book a visit.
7. **Update Lead Opportunity** (n8n-nodes-base.highLevel): Moves the GHL opportunity stage forward to "Engaged | Appointment Link Sent".

### Sub-Workflow 3: GoHighLevel Appointment Sync & Pipeline Advancement
Goal: Finalize the pipeline sequence once the prospect actually books an appointment through your scheduling link.

1. **When Appointment Booked** (n8n-nodes-base.webhook): Receives the payload from the GHL automation workflow you created in the prerequisites (fired when an appointment is booked).
2. **Check Lead SMS Origin** (n8n-nodes-base.highLevel): Queries GHL using the phone number from the appointment payload to match it with the correct existing contact record.
3. **Check Pipeline State1** (n8n-nodes-base.highLevel): Retrieves the opportunity linked to this phone number that is currently sitting in the "Engaged" stage.
4. **Update Contact in HighLevel** (n8n-nodes-base.highLevel): Fills in the missing data. Since the initial missed call only provided a phone number, this node uses the booking-form data to update the contact's first name, last name, and email address.
5. **Update Opportunity in HighLevel** (n8n-nodes-base.highLevel): Moves the opportunity to its final stage: "BOOKED".
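The status filter and lead-data preparation in Sub-workflow 1 can be sketched as below. The CallStatus, From, To, and CallSid fields come from Twilio's status callbacks, and the tag names are the ones listed above; the output shape is just an illustration of what the HighLevel nodes consume:

```javascript
// Decide whether a Twilio status callback represents a missed call,
// and shape the payload the HighLevel create-contact node expects.
const MISSED_STATUSES = ['busy', 'no-answer', 'canceled'];

function prepareLeadData(payload) {
  if (!MISSED_STATUSES.includes(payload.CallStatus)) return null; // answered or still in progress
  return {
    phone: payload.From.replace('+', ''),   // caller number, with the + stripped
    calledNumber: payload.To,
    callSid: payload.CallSid,
    tags: ['missed-call-lead', 'hvac-inbound-missed'],
  };
}
```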
by Yatharth Chauhan
# Feedback Sentiment Workflow (Typeform → GCP → Notion/Slack/Trello)

This template ingests feedback from Typeform, runs Google Cloud Natural Language sentiment analysis, routes based on sentiment, and then creates a Notion database page and posts a Slack notification for positive items, or creates a Trello card for negative items. The flow is designed for quick setup and safe sharing, using placeholders for IDs and credentials.

## How It Works
1. **Typeform Trigger**: Captures each new submission and exposes answers like Name and the long-text Feedback field.
2. **Google Cloud Natural Language**: Analyzes the feedback text and returns a sentiment score in documentSentiment.score.
3. **Check Sentiment Score (IF)**: True branch: score > 0 → positive. False branch: score ≤ 0 → non-positive.
4. **Add Feedback to Notion** (true branch): Creates a new page in a Notion database with mapped properties.
5. **Notify Slack** (after Notion): Posts the feedback, author, and score to a Slack channel for visibility.
6. **Create Trello Card** (false branch): Logs non-positive items to a Trello list for follow-up.

## Required Accounts
- **Google Cloud Natural Language API** enabled (OAuth2 or service credentials)
- **Notion integration** with database access to create pages
- **Slack app/bot token** with permission to post to the target channel
- **Typeform account** with a form including a long-text feedback question and a Name field

## Notion Database Columns
- **Name** (title): Person name or responder label
- **Feedback** (rich_text): Full feedback text
- **Sentiment Score** (number): Numeric score from GCP ∈ [-1, 1]
- **Source** (select/text): "Typeform" for provenance
- **Submitted At** (date): Timestamp from the trigger

## Customization Options
- **Sentiment threshold**: Adjust the IF condition (e.g., ≥ 0.25) for stricter positivity.
- **Slack routing**: Change the channel, or add blocks/attachments for richer summaries.
- **Trello path**: Point to a triage list and include labels for priority.
- **Field mapping**: Update the expression for the feedback question to match the Typeform label.
- **Database schema**: Add tags, product area, or customer tier for reporting.

## Setup Steps
1. Connect credentials: Typeform, GCP Natural Language, Notion, Slack, Trello.
2. Replace the placeholders in the workflow JSON: Form ID, Database ID, Slack Channel, Trello List ID.
3. Map fields: set the Feedback and Name expressions from the Typeform Trigger output into Notion and Slack.
4. Adjust the IF threshold for your definition of "positive".
5. Test with a sample response and confirm Notion page creation, the Slack notification, and Trello card logging.
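The IF-node routing can be sketched as below; the default threshold of 0 matches the branch logic described above, and 0.25 is the stricter example from the customization options:

```javascript
// Route a feedback item based on its GCP sentiment score (range [-1, 1]).
function routeFeedback(score, threshold = 0) {
  return score > threshold
    ? 'notion+slack'   // positive: create Notion page, then notify Slack
    : 'trello';        // non-positive: create a Trello card for follow-up
}
```

Raising the threshold shrinks what counts as "positive", pushing borderline feedback onto the Trello triage path instead.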
by Rajeet Nair
## Overview
This workflow intelligently routes incoming user requests using AI-powered task classification. It determines whether a task is simple or complex, assigns a confidence score, and dynamically delegates execution to the appropriate agent. If the confidence score is too low, the workflow triggers a fallback email alert for manual review, ensuring reliability and preventing incorrect automation. This design improves response accuracy, enables scalable automation, and introduces human-in-the-loop safety for uncertain scenarios.

## How It Works
1. **Webhook Trigger**: Receives incoming user requests.
2. **Workflow Configuration**: Stores the user request and confidence threshold.
3. **Supervisor Agent**: Analyzes the request, classifies it as simple or complex, and returns a confidence score and reasoning.
4. **Structured Output Parser**: Ensures the classification follows a strict JSON format.
5. **Confidence Check (IF node)**: Compares the confidence score with the threshold.
6. **Routing Logic**: If confidence is high, the task is passed to the Executor Agent, which selects the Simple Agent Tool for basic tasks or the Complex Agent Tool for advanced tasks.
7. **Agent Execution**: Each agent uses an OpenAI model to process the task.
8. **Fallback Handling**: If confidence is low, an email alert is sent for human review.

## Setup Instructions
1. **OpenAI credentials**: Add credentials for all OpenAI nodes (Supervisor, Executor, Simple Agent, Complex Agent).
2. **Webhook configuration**: Set the webhook path and connect it to your frontend or API source.
3. **Email node setup**: Configure the sender and recipient email addresses, using SMTP or a supported email service.
4. **Adjust threshold**: Modify confidenceThreshold in the Set node if needed.
5. **Customize prompts**: Update the system messages in the Supervisor Agent, Executor Agent, and Simple/Complex Agents.

## Use Cases
- AI-powered task routing systems
- Customer support automation with fallback safety
- Intelligent chatbot orchestration
- Workflow automation with human-in-the-loop validation
- Multi-agent AI systems with decision control

## Requirements
- OpenAI API credentials
- Email (SMTP or service integration)
- n8n instance (cloud or self-hosted)

## Key Features
- AI-based task classification
- Confidence scoring for safe automation
- Dynamic agent routing
- Human fallback for low-confidence decisions
- Modular and scalable architecture

## Summary
A smart AI routing workflow that classifies tasks, routes them to specialized agents, and ensures reliability through confidence scoring and fallback alerts. Ideal for building safe, scalable automation systems in n8n.
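The confidence check and routing described above can be sketched as follows. This is a minimal illustration; the classification shape with `complexity` and `confidence` fields is an assumption about what the Structured Output Parser enforces, and 0.7 is a placeholder for the Set node's confidenceThreshold:

```javascript
// Route a parsed classification to an agent, or fall back to human review
// when the Supervisor's confidence is below the configured threshold.
function routeTask(classification, confidenceThreshold = 0.7) {
  const { complexity, confidence } = classification;
  if (confidence < confidenceThreshold) {
    return 'email-fallback';                                  // low confidence: alert a human
  }
  return complexity === 'complex' ? 'complex-agent' : 'simple-agent';
}
```

Checking confidence before choosing an agent is what keeps an uncertain classification from being executed automatically.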
by Robert Breen
This n8n workflow template creates an efficient data analysis system that uses Google Gemini AI to interpret user questions about spreadsheet data and processes them through a specialized sub-workflow for optimized token usage and faster responses. What This Workflow Does Smart Query Parsing**: Uses Gemini AI to understand natural language questions about your data Efficient Processing**: Routes calculations through a dedicated sub-workflow to minimize token consumption Structured Output**: Automatically identifies the column, aggregation type, and grouping levels from user queries Multiple Aggregation Types**: Supports sum, average, count, count distinct, min, and max operations Flexible Grouping**: Can aggregate data by single or multiple dimensions Token Optimization**: Processes large datasets without overwhelming AI context limits Tools Used Google Gemini Chat Model** - Natural language query understanding and response formatting Google Sheets Tool** - Data access and column metadata extraction Execute Workflow** - Sub-workflow processing for data calculations Structured Output Parser** - Converts AI responses to actionable parameters Memory Buffer Window** - Basic conversation context management Switch Node** - Routes to appropriate aggregation method Summarize Nodes** - Performs various data aggregations 📋 MAIN WORKFLOW - Query Parser What This Workflow Does The main workflow receives natural language questions from users and converts them into structured parameters that the sub-workflow can process. It uses Google Gemini AI to understand the intent and extract the necessary information. Prerequisites for Main Workflow Google Cloud Platform account with Gemini API access Google account with access to Google Sheets n8n instance (cloud or self-hosted) Main Workflow Setup Instructions 1. 
Import the Main Workflow Copy the main workflow JSON provided In your n8n instance, go to Workflows → Import from JSON Paste the JSON and click Import Save with name: "Gemini Data Query Parser" 2. Set Up Google Gemini Connection Go to Google AI Studio Sign in with your Google account Go to Get API Key section Create a new API key or use an existing one Copy the API key Configure in n8n: Click on Google Gemini Chat Model node Click Create New Credential Select Google PaLM API Paste your API key Save the credential 3. Set Up Google Sheets Connection for Main Workflow Go to Google Cloud Console Create a new project or select existing one Enable the Google Sheets API Create OAuth 2.0 Client ID credentials In n8n, click on Get Column Info node Create Google Sheets OAuth2 API credential Complete OAuth flow 4. Configure Your Data Source Option A: Use Sample Data The workflow is pre-configured for: Sample Marketing Data Make a copy to your Google Drive Option B: Use Your Own Sheet Update Get Column Info node with your Sheet ID Ensure you have a "Columns" sheet for metadata Update sheet references as needed 5. Set Up Workflow Trigger Configure how you want to trigger this workflow (webhook, manual, etc.) The workflow will output structured JSON for the sub-workflow ⚙️ SUB-WORKFLOW - Data Processor What This Workflow Does The sub-workflow receives structured parameters from the main workflow and performs the actual data calculations. It handles fetching data, routing to appropriate aggregation methods, and formatting results. Sub-Workflow Setup Instructions 1. Import the Sub-Workflow Create a new workflow in n8n Copy the sub-workflow JSON (embedded in the Execute Workflow node) Import as a separate workflow Save with name: "Data Processing Sub-Workflow" 2. 
Configure Google Sheets Connection for Sub-Workflow Apply the same Google Sheets OAuth2 credential you created for the main workflow Update the Get Data node with your Sheet ID Ensure it points to your data sheet (e.g., "Data" sheet) 3. Configure Google Gemini for Output Formatting Apply the same Gemini API credential to the Google Gemini Chat Model1 node This handles final result formatting 4. Link Workflows Together In the main workflow, find the Execute Workflow - Summarize Data node Update the workflow reference to point to your sub-workflow Ensure the sub-workflow is set to accept execution from other workflows Sub-Workflow Components **When Executed by Another Workflow**: Trigger that receives parameters **Get Data**: Fetches all data from Google Sheets **Type of Aggregation**: Switch node that routes based on aggregation type **Multiple Summarize Nodes**: Handle different aggregation types (sum, avg, count, etc.) **Bring All Data Together**: Combines results from different aggregation paths **Write into Table Output**: Formats final results using Gemini AI Example Usage Once both workflows are set up, you can ask questions like: Overall Metrics: "Show total Spend ($)" "Show total Clicks" "Show average Conversions" Single Dimension: "Show total Spend ($) by Channel" "Show total Clicks by Campaign" Two Dimensions: "Show total Spend ($) by Channel and Campaign" "Show average Clicks by Channel and Campaign" Data Flow Between Workflows Main Workflow: User question → Gemini AI → Structured JSON output Sub-Workflow: Receives JSON → Fetches data → Performs calculations → Returns formatted table Contact Information For support, customization, or questions about this template: **Email**: robert@ynteractive.com **LinkedIn**: Robert Breen Need help implementing these workflows, want to remove limitations, or require custom modifications? Reach out for professional n8n automation services and AI integration support.
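The sub-workflow's Switch and Summarize nodes can be pictured as a single aggregation function. The sketch below is illustrative, not code from the template; the parameter shape `{ column, aggregation, groupBy }` is an assumption about the Structured Output Parser's JSON, and the field names mirror the sample marketing data:

```javascript
// Illustrative sketch of the sub-workflow's aggregation step:
// group the sheet rows, then apply the requested aggregation per group.
function aggregate(rows, { column, aggregation, groupBy = [] }) {
  const groups = new Map();
  for (const row of rows) {
    // Composite key across one or more grouping dimensions; "All" when none.
    const key = groupBy.map(g => row[g]).join(" / ") || "All";
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key).push(row[column]);
  }
  const result = {};
  for (const [key, values] of groups) {
    const nums = values.map(Number);
    switch (aggregation) {
      case "sum": result[key] = nums.reduce((a, b) => a + b, 0); break;
      case "average": result[key] = nums.reduce((a, b) => a + b, 0) / nums.length; break;
      case "count": result[key] = values.length; break;
      case "count distinct": result[key] = new Set(values).size; break;
      case "min": result[key] = Math.min(...nums); break;
      case "max": result[key] = Math.max(...nums); break;
    }
  }
  return result;
}
```

For example, `aggregate(rows, { column: "Spend ($)", aggregation: "sum", groupBy: ["Channel"] })` answers "Show total Spend ($) by Channel"; in the real workflow this fan-out happens across the Switch and Summarize nodes rather than one function.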
by Margo Rey
Generate and send MadKudu Account Brief into Outreach This workflow generates an account brief tailored to your company using MadKudu MCP and OpenAI and syncs it to a custom field in Outreach. It's for sales teams who want to give reps rich account context right inside Outreach and to draft Outreach emails with the Outreach Revenue Agent based on the MadKudu account brief. ✨ Who it's for RevOps or GTM teams using MadKudu + Salesforce + Outreach Sales teams needing dynamic, AI-generated context for target accounts 🔧 How it works 1. Select Accounts: Use a Salesforce node to define which accounts to brief. Filter logic can be updated to match ICP or scoring rules (e.g., MadKudu Fit + LTB). 2. Generate Brief with MadKudu MCP & AI: MadKudu MCP provides the account brief instructions, researches recent company news online, and provides structured account context from the integrations connected to MadKudu plus external signals (firmographics, past opportunities, active contacts, job openings...). The AI agent (OpenAI model) turns this into a readable account brief. 3. Send to Outreach: Match the account in Outreach via domain. Update a custom field (e.g., custom49) with the brief text. 📋 How to set up Connect your Salesforce account Used to pull accounts that need a brief. Set your OpenAI credentials Required for the AI Agent to generate the brief. Create an n8n Variable named madkudu_api_key to store your MadKudu API key, used by the MadKudu MCP tool The AI Agent pulls the account brief instructions and all the context necessary to generate the briefs. Create an OAuth2 API credential to connect your Outreach account Used to sync the brief to Outreach. Customize the Salesforce filter In the "Get accounts" node, define which accounts should get a brief (e.g., Fit > 90). Map your Outreach custom field Update the JSON body of the request with your actual custom field ID (e.g., custom49).
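The "update a custom field" step above boils down to building a small JSON:API request body. The sketch below assumes Outreach's standard JSON:API shape for account updates; verify the custom field name (custom49 here, as in the template) against your own Outreach setup:

```javascript
// Hedged sketch of the request body the HTTP node sends to Outreach.
// Assumes the JSON:API format Outreach uses for account updates.
function buildBriefPatch(accountId, briefText) {
  return {
    data: {
      type: "account",
      id: accountId,
      attributes: {
        custom49: briefText, // swap in your actual custom field ID
      },
    },
  };
}
// Sent as: PATCH https://api.outreach.io/api/v2/accounts/{accountId}
```

If you use a different custom field (custom48, custom32, etc.), only the attribute key changes.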
🔑 How to connect Outreach In n8n, add a new OAuth2 API credential and copy the callback URL Now go to the Outreach developer portal Click "Add" to create a new app In Feature selection, add Outreach API (OAuth) In API Access (OAuth), set the redirect URI to the n8n callback URL Select the following scopes: accounts.read, accounts.write Save in Outreach Now enter the Outreach Application ID into the n8n Client ID and the Outreach Application Secret into the n8n Client Secret Save in n8n and connect your Outreach account via OAuth ✅ Requirements MadKudu account with access to an API Key Salesforce OAuth Outreach Admin permissions to create an app OpenAI API Key 🛠 How to customize the workflow **Change the targeting logic**: Edit the Salesforce filter to control which accounts are eligible. **Rewrite the prompt**: Tweak the prompt in the AI Agent node to adjust format, tone, or insights included in the brief. **Change the Outreach account field**: Update the Outreach field where the brief is synced if you're using a different custom field (e.g., custom48, custom32, etc.). **Use a different trigger**: Swap the manual trigger for a Schedule or Webhook trigger to automate the flow end-to-end.
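To see how the OAuth2 pieces above fit together, here is a sketch of the authorization URL n8n constructs from the Client ID, callback URL, and scopes. The endpoint follows the standard OAuth2 flow described in Outreach's developer docs; treat the exact URL as something to verify, since n8n builds this for you and you never write it by hand:

```javascript
// Hedged sketch of the OAuth2 authorize URL for the Outreach app.
// n8n assembles this internally from the credential fields.
function buildAuthorizeUrl(clientId, redirectUri) {
  const params = new URLSearchParams({
    client_id: clientId,        // the Outreach Application ID
    redirect_uri: redirectUri,  // the n8n callback URL
    response_type: "code",
    scope: "accounts.read accounts.write",
  });
  return `https://api.outreach.io/oauth/authorize?${params}`;
}
```

If the connection fails, mismatches between this redirect URI and the one saved in the Outreach app, or missing scopes, are the usual causes.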