by Feras Dabour
# AI LinkedIn Content Bot with Approval Loop

This n8n workflow turns your Telegram messenger into a personal assistant for creating and publishing LinkedIn posts. You can send an idea as a text or voice message, collaboratively edit the AI's suggestion in chat, and then publish the finished post directly to LinkedIn simply by saying "Okay."

## What You'll Need to Get Started

This workflow connects three different services, so you will need API credentials for each:

- **Telegram Bot API Key:** You can get this by talking to the "BotFather" on Telegram. It will guide you through creating your new bot and provide you with the API token. *(Image: New Chat with Telegram BotFather)*
- **OpenAI API Key:** Required for the "Speech to Text" and "AI Agent" nodes. You'll need an OpenAI account to generate this key. [OpenAI API Platform](https://platform.openai.com)
- **Blotato API Key:** This service is used to publish the final post to LinkedIn. You'll need a Blotato account with your LinkedIn profile connected there to get the key. [Blotato platform for social media publishing]

Once you have these keys, add them to the corresponding credentials in your n8n instance.

## How the Workflow Operates, Step by Step

Here is a detailed breakdown of how the workflow processes your request and handles the publishing.

### 1. Input & Initial Processing

This phase captures your idea and converts it into usable text.

| Node Name | Role in Workflow |
| :--- | :--- |
| Start: Telegram Message | This Telegram Trigger node initiates the entire process upon receiving any message from you in the bot. |
| Prepare Input | Consolidates the message content, ensuring the AI receives only one clean text input. |
| Check: ist it a Voice? | Checks the incoming message for text. If the text is empty, it proceeds to voice handling. |
| Get Voice File | If a voice note is detected, this node downloads the raw audio file from Telegram. |
| Speech to Text | Uses the OpenAI Whisper API to convert the downloaded audio file into a text string. |

### 2. AI Core & Iteration Loop

This is the central dialogue system where the AI drafts the content and engages in the feedback loop.

| Node Name | Role in Workflow |
| :--- | :--- |
| AI: Draft & Revise Post | The main logic agent. It analyzes your request, applies the "System Prompt" rules, drafts the post, and handles revisions based on your feedback. |
| OpenAI Chat Model | Defines the large language model (LLM) used for generating and revising the post. |
| Window Buffer Memory | A memory buffer that stores the last turns of the conversation, allowing the AI to maintain context when you request changes (e.g., "Make it shorter"). |
| Check if Approved | This crucial node detects the specific JSON structure the AI outputs only when you provide an approval keyword (like "ok" or "approved"). |
| Post Suggestion Or Ask For Approval | Sends the AI's post draft back to your Telegram chat for review and feedback. |
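The "Check if Approved" step keys off the JSON contract defined in the system prompt below: the agent returns plain suggestion text until approval, and a bare JSON object afterwards. A minimal standalone sketch of that detection logic (illustrative; the actual node uses n8n expressions):

```javascript
// Sketch of the "Check if Approved" logic. The field name "Post" matches
// the system prompt shown below; everything else is illustrative.
function extractApprovedPost(agentOutput) {
  try {
    const parsed = JSON.parse(agentOutput.trim());
    if (parsed && typeof parsed.Post === 'string') {
      return { approved: true, post: parsed.Post };
    }
  } catch (e) {
    // Not JSON: the agent is still in the suggestion/revision loop.
  }
  return { approved: false, post: null };
}
```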
### AI Agent System Prompt (Internal Instructions, English)

The agent operates under a strict prompt that dictates its behavior and formatting (found within the AI: Draft & Revise Post node):

> You are a LinkedIn Content Creator Agent for Telegram.
> Keep the confirmation process, but change the output format as follows:
>
> Your Task
> Analyze the user's message:
>
> * Topic
> * Goal (e.g., reach, show expertise, recruiting, personal branding, leads)
> * Target Audience
> * Tonality (e.g., factual, personal, bold, inspiring)
>
> Create a LinkedIn post as ONE continuous text:
>
> * Strong hook in the first 1–2 lines.
> * Clear main part with added value, story, example, or insight.
> * Optional Call-to-Action (e.g., question to the community, invitation to exchange).
> * Integrate hashtags at the end of the post (5–12 suitable hashtags, mix of niche + somewhat broader).
> * Readable on LinkedIn: short paragraphs, emojis only sparingly.
>
> Present the suggestion to the user in the following format:
>
> Headline: Post Proposal:
> Below that, the complete LinkedIn post (incl. hashtags at the end in the same text).
>
> Ask for feedback, for example:
> "Any changes? (Tone, length, formality, personal vs. professional, more technical content, different hashtags?)"
>
> If the user requests changes:
> Adjust the post specifically based on the feedback.
> Again, output only:
> Post Proposal:
> the revised complete post.
>
> If the user says "approved", "ok", "sounds good", or similar:
> Return exclusively this JSON, without additional text, without Markdown:
>
> { "Post": "The final LinkedIn post as one text, including hashtags at the end" }
>
> Important:
>
> * Never output JSON before approval, only normal suggestion text.
> * The final output after approval consists of only one field: Post.

### 3. Publishing & Status Check

Once approved, the workflow handles the publication and monitors the post's status in real time (a standalone sketch of this polling pattern follows section 4).

| Node Name | Role in Workflow |
| :--- | :--- |
| Approval: Extract Final Post Text | Parses the incoming JSON, extracting only the clean text ready for publishing. |
| Create post with Blotato | Uses the Blotato API to upload the finalized content to your connected LinkedIn account. |
| Give Blotat 5s :) | A brief pause to allow the publishing service to start processing the request. |
| Check post status | Checks back with Blotato to determine whether the post is published, in progress, or failed. |
| Published? | Checks whether the status is "published" in order to send the success message. |
| In Progress? | Checks whether the post is still being processed. If so, it loops back to the next wait period. |
| Give Blotat other 5s :) | Pauses the workflow before re-checking the post status, preventing unnecessary API calls. |

### 4. Final Notification

| Node Name | Role in Workflow |
| :--- | :--- |
| Send a confirmation message | Sends a confirmation message and the direct link to the published LinkedIn post. |
| Send an error message | Sends a notification if the post failed to upload or encountered an error during processing. |
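The wait-and-recheck loop in section 3 amounts to simple polling. A minimal standalone sketch of the same pattern, assuming Node 18+ (the status URL and the response field `status` are placeholders, not the real Blotato API; consult Blotato's documentation for the actual endpoint and payload):

```javascript
// Sketch of the publish-then-poll pattern from section 3.
// statusUrl and the "status" response field are placeholders.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function waitUntilPublished(statusUrl, maxAttempts = 12) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    await sleep(5000); // "Give Blotato 5s :)"
    const res = await fetch(statusUrl);
    const { status } = await res.json();
    if (status === 'published') return true;    // -> confirmation message
    if (status !== 'in_progress') return false; // -> error message
  }
  return false; // give up after maxAttempts polls
}
```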
## 🛠️ Personalizing Your Content Bot

The true power of this n8n workflow lies in its flexibility. You can easily modify key components to match your unique brand voice and technical preferences.

### 1. Tweak the Content Creator Prompt

The personality, tone, and formatting rules for your LinkedIn content are all defined in the System Prompt.

- **Where to find it:** Inside the AI: Draft & Revise Post node, under the System Message setting.
- **What to personalize:** Adjust the tone, change the formatting rules (e.g., number of hashtags, required emojis), or insert specific details about your industry or target audience.

### 2. Switch the AI Model or Provider

You can easily swap the language model used for generation.

- **Where to find it:** The OpenAI Chat Model node.
- **What to personalize:**
  - **Model:** Swap out the default model for a more powerful or faster alternative (e.g., the gpt-4 family, or models from other providers if you change the node).
  - **Provider:** You can replace the entire Langchain block (including the AI Model and Window Buffer Memory nodes) with an equivalent block using a different provider's Chat/LLM node (e.g., Anthropic, Cohere, or Google Gemini), provided you set up the corresponding credentials and context flow.

### 3. Modify Publishing Behavior (Schedule vs. Post)

The final step currently publishes immediately, but you might prefer to schedule posts.

- **Where to find it:** The Create post with Blotato node.
- **What to personalize:** Consult the Blotato documentation for alternative operations. Instead of the "Create Post" operation (which often posts immediately), you can typically select a "Schedule Post" or "Add to Queue" operation within the Blotato node. If scheduling, you will need to add a step (e.g., a Set node or another agent prompt) before publishing to calculate and pass a Scheduled Time parameter to the Blotato node, as sketched below.
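A minimal sketch of such a pre-publishing step, written as an n8n Code node. The output field name `scheduledTime` is hypothetical; map it to whatever parameter your Blotato scheduling operation actually expects:

```javascript
// n8n Code node sketch: compute a publish time of 9:00 the next morning.
// "scheduledTime" is a hypothetical field name for illustration only.
const next = new Date();
next.setDate(next.getDate() + 1);
next.setHours(9, 0, 0, 0);

return [{ json: { scheduledTime: next.toISOString() } }];
```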
by Gabriela Macovei
# WhatsApp Receipt OCR & Data Extraction Suite

Categories: Accounting Automation • OCR Processing • AI Data Extraction • Business Tools

This workflow transforms WhatsApp into a fully automated receipt-processing system using advanced OCR, multi-model AI parsing, and structured data storage. By combining LlamaParse, Claude (OpenRouter), Gemini, Google Sheets, and Twilio, it eliminates manual data entry and delivers instant, reliable receipt digitization for any business.

## What This Workflow Does

When a user sends a receipt photo or PDF via WhatsApp, the automation:

1. Receives the file through Twilio WhatsApp
2. Uploads and parses it with LlamaParse (high-res OCR + invoice preset)
3. Extracts structured data using Claude + Gemini + a strict JSON parser
4. Cleans and normalizes the data (dates, ABN, vendor, tax logic)
5. Uploads the receipt to Google Drive
6. Logs the extracted fields into a Google Sheet
7. Replies to the user on WhatsApp with the extracted details
8. Asks for confirmation via quick-reply buttons
9. Updates the Google Sheet based on user validation

The result is a fast, scalable, human-free system for converting raw receipt photos into clean, structured accounting data.

## Key Benefits

- **No friction for users:** receipts are submitted simply by sending a WhatsApp message.
- **High-accuracy OCR:** LlamaParse extracts text, tables, totals, vendors, tax, and ABN with impressive reliability.
- **Enterprise-grade data validation:** complex logic ensures the correct interpretation of GST, included taxes, or unidentified tax amounts.
- **Multi-model extraction:** Claude and Gemini both analyse the OCR output for more reliable results; one LLM is primary and the other secondary.
- **Hands-off accounting:** every receipt becomes a standardized row in Google Sheets.
- **Two-way WhatsApp communication:** users can confirm or reject extracted data instantly.
- **Scalable architecture:** perfect for businesses handling dozens or thousands of receipts monthly.

## How It Works (Technical Overview)

1. **Twilio → Webhook Trigger.** The workflow starts when a WhatsApp message containing a media file hits your Twilio webhook.
2. **Initial Google Sheets Logging.** The MessageSid is appended to your tracking sheet to ensure every receipt is traceable.
3. **LlamaParse OCR.** The file is sent to LlamaParse with the invoice preset, high-resolution OCR, and table extraction enabled. The workflow checks job completion before moving further.
4. **LLM Data Extraction.** The OCR markdown is analyzed using Claude Sonnet 4.5 (via OpenRouter), Gemini 2.5 Pro, a strict structured JSON output parser, and custom JS cleanup logic. The system extracts: Vendor, Cost, Tax (with multi-rule Australian GST logic), Currency, Date (parsed + normalized), and ABN (validated and digit-normalized).
5. **Google Drive Integration.** The uploaded receipt is stored, shared, and linked back to the record in Sheets.
6. **Google Sheets Update.** Fields are appended/updated following a clean schema: Vendor, Cost, Tax, Date, Currency, ABN, public Drive link, and Status (Confirmed / Not confirmed).
7. **User Response Flow.** The user receives a summary of extracted data via WhatsApp. Buttons allow them to approve or reject accuracy, and the Google Sheet updates accordingly.
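For orientation, here is a minimal sketch of the ABN cleanup described in step 4, as it might look in an n8n Code node. The checksum follows the published ATO algorithm (subtract 1 from the first digit, apply the weights, and the weighted sum must be divisible by 89); the workflow's actual cleanup code may differ:

```javascript
// Sketch of ABN digit-normalization and validation (ATO checksum).
const ABN_WEIGHTS = [10, 1, 3, 5, 7, 9, 11, 13, 15, 17, 19];

function normalizeAbn(raw) {
  const digits = String(raw ?? '').replace(/\D/g, ''); // digit-normalize
  if (digits.length !== 11) return null;
  const sum = ABN_WEIGHTS.reduce((acc, weight, i) => {
    const digit = Number(digits[i]) - (i === 0 ? 1 : 0); // subtract 1 from first digit
    return acc + digit * weight;
  }, 0);
  return sum % 89 === 0 ? digits : null; // null = failed validation
}

// normalizeAbn('51 824 753 556') -> '51824753556' (a valid ABN)
```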
## Target Audience

This workflow is ideal for:

- Accounting & bookkeeping firms
- Outsourced finance departments
- Small businesses tracking expenses
- Field workers submitting receipts
- Automation agencies offering DFY systems
- CFOs wanting real-time expense visibility

## Use Cases

- Expense reconciliation
- Automated bookkeeping
- Receipt digitization & compliance
- Real-time employee expense submission
- Multi-client automation at accounting agencies

## Required Integrations

- **Twilio WhatsApp** (Business API number + webhook)
- **LlamaParse API**
- **OpenRouter (Claude Sonnet)**
- **Google Gemini API**
- **Google Drive**
- **Google Sheets**

## Setup Instructions (High-Level)

1. Import the n8n workflow.
2. Connect your Twilio WhatsApp account.
3. Add API credentials for LlamaParse, OpenRouter, Google Gemini, Google Drive, and Google Sheets.
4. Create your target Google Sheet.
5. Configure your WhatsApp webhook URL in Twilio.
6. Test with a sample receipt.

## Why This System Works

Users send receipts using a tool they already use daily (WhatsApp). LlamaParse provides state-of-the-art OCR for low-quality receipts. Using multiple LLMs drastically increases accuracy for vendor, ABN, and tax extraction. Advanced normalization logic ensures data is clean and accounting-ready. Google Sheets enables reliable storage, reporting, and future integrations. End-to-end automation replaces hours of manual work with instant processing.

## Watch My Complete Build Process

Want to see exactly how I built this entire AI design system from scratch? I walk through the complete development process on my YouTube channel.
by Intuz
This n8n template from Intuz provides a complete solution to automate a powerful, AI-driven "Chat with your PDF" bot on Telegram. It uses Retrieval-Augmented Generation (RAG) to let users upload documents, which are then indexed into a vector database, enabling the bot to answer questions based only on the provided content.

## Who's this workflow for?

- Researchers & Students
- Legal & Compliance Teams
- Business Analysts & Financial Advisors
- Anyone needing to quickly find information within large documents

## How it works

This workflow has two primary functions: indexing a new document and answering questions about it.

### 1. Uploading & Indexing a Document

1. A user sends a PDF file to the Telegram bot.
2. n8n downloads the document, extracts the text, and splits it into small, manageable chunks.
3. Using Google Gemini, each text chunk is converted into a numerical representation (an "embedding").
4. These embeddings are stored in a Pinecone vector database, making the document's content searchable.
5. The bot sends a confirmation message to the user that the document has been successfully saved.

### 2. Asking a Question (RAG)

1. A user sends a regular text message (a question) to the bot.
2. n8n converts the user's question into an embedding using Google Gemini.
3. It then searches the Pinecone database for the text chunks from the uploaded PDF most relevant to the question.
4. These relevant chunks (the "context") are sent to the Gemini chat model along with the original question.
5. Gemini generates a new, accurate answer based only on the provided context and sends it back to the user in Telegram.

## Key Requirements to Use This Template

1. **n8n Instance & Required Nodes:** An active n8n account (Cloud or self-hosted). This workflow uses the official n8n LangChain integration (@n8n/n8n-nodes-langchain). If you are using a self-hosted version of n8n, please ensure this package is installed.
2. **Telegram Account:** A Telegram bot created via the BotFather, along with its API token.
3. **Google Gemini AI Account:** A Google Cloud account with the Vertex AI API enabled and an associated API key.
4. **Pinecone Account:** A Pinecone account with an API key. You must have a vector index created in Pinecone. For use with Google Gemini's embedding-001 model, the index must be configured with 768 dimensions (see the sketch at the end of this template).

## Setup Instructions

1. **Telegram Configuration:** In the "Telegram Message Trigger" node, create a new credential and add your Telegram bot's API token. Do the same for the "Telegram Response" and "Telegram Response about Database" nodes.
2. **Pinecone Configuration:** In both "Pinecone Vector Store" nodes, create a new credential and add your Pinecone API key. In the "Index" field of both nodes, enter the name of your pre-configured Pinecone index (e.g., telegram).
3. **Google Gemini Configuration:** In all three Google Gemini nodes (Embeddings Google Gemini, Embeddings Google Gemini1, and Google Gemini Chat Model), create a new credential and add your Google Gemini (PaLM) API key.
4. **Activate and Use:** Save the workflow and toggle the "Active" switch to ON. To use it: first, send a PDF document to your bot and wait for the confirmation message. Then you can start asking questions about the content of that PDF.

## Connect with us

Website: https://www.intuz.com/services
Email: getstarted@intuz.com
LinkedIn: https://www.linkedin.com/company/intuz
Get Started: https://n8n.partnerlinks.io/intuz
For custom workflow automation, click here: Get Started
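As a reference for the Pinecone requirement above, here is a minimal sketch of creating the 768-dimension index with the Pinecone Node.js SDK (@pinecone-database/pinecone). The cloud and region values are assumptions; adjust them to your Pinecone project:

```javascript
// Sketch: create the 768-dimension index required for Gemini's
// embedding-001 model. Cloud/region are example values.
import { Pinecone } from '@pinecone-database/pinecone';

const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY });

await pc.createIndex({
  name: 'telegram',   // must match the "Index" field in both Pinecone nodes
  dimension: 768,     // embedding-001 output size
  metric: 'cosine',
  spec: { serverless: { cloud: 'aws', region: 'us-east-1' } },
});
```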
by Jasurbek
## Overview

Automatically anonymize CVs/resumes while preserving professional information. Perfect for recruitment agencies ensuring GDPR compliance and bias-free hiring.

## Features

- Supports multiple file formats (PDF, DOCX, etc.)
- Multi-language support (preserves the original language)
- Removes PII: names, emails, phones, addresses
- Preserves: skills, experience, dates, achievements
- Outputs a professionally formatted PDF

## Requirements

- OpenAI API key (GPT-4 recommended)
- Stirling PDF service (self-hosted or cloud)
- n8n version 1.0+

## Setup Instructions

1. Configure OpenAI credentials
2. Set up the Stirling PDF API endpoint
3. Update the API key in the HTTP Request nodes
4. Activate the workflow
5. Test with a sample CV

## Usage

POST to the webhook endpoint with the CV file as the UploadCV field, as sketched below.

## Use Cases

- Recruitment agencies (GDPR compliance)
- HR departments (bias-free screening)
- Job boards (candidate privacy)
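A minimal client-side sketch of that upload, assuming Node 18+ (the webhook URL is a placeholder; use the URL shown in your n8n Webhook node):

```javascript
// Sketch: send a CV to the workflow's webhook as the "UploadCV" field.
// WEBHOOK_URL is a placeholder; copy the real URL from your Webhook node.
import { readFile } from 'node:fs/promises';

const WEBHOOK_URL = 'https://your-n8n-host/webhook/anonymize-cv';

const form = new FormData();
form.append(
  'UploadCV',
  new Blob([await readFile('cv.pdf')], { type: 'application/pdf' }),
  'cv.pdf',
);

const res = await fetch(WEBHOOK_URL, { method: 'POST', body: form });
console.log(res.status, await res.text()); // workflow response
```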
by Fayzul Noor
This workflow is built for digital marketers, sales professionals, influencer agencies, and entrepreneurs who want to automate Instagram lead generation. If you're tired of manually searching for profiles, copying email addresses, and updating spreadsheets, this automation will save you hours every week. It turns your process into a smart system that finds, extracts, and stores leads while you focus on growing your business.

## How it works / What it does

This n8n automation completely transforms how you collect Instagram leads using AI and API integrations. Here's a simple breakdown of how it works:

1. **Set your targeting parameters** using the Edit Fields node. You can specify your platform (Instagram), field of interest such as "beauty & hair," and target country such as "USA."
2. **Generate intelligent search queries** with an AI Agent powered by GPT-4o-mini. It automatically creates optimized Google search queries to find relevant Instagram profiles in your chosen niche and location.
3. **Extract results from Google** using Apify's Google Search Scraper, which collects hundreds of Instagram profile URLs that match your search criteria.
4. **Fetch detailed Instagram profile data** using Apify's Instagram Scraper. This includes usernames, follower counts, and profile bios, where contact information usually appears.
5. **Use AI to extract emails** from the profile biographies with the Information Extractor node powered by GPT-3.5-turbo. It identifies emails even when they are hidden or creatively formatted.
6. **Store verified leads in a PostgreSQL database.** The workflow automatically adds new leads or updates existing ones with fields like username, follower count, email, and niche.

Once everything is set up, the system runs on autopilot and keeps building your database of quality leads around the clock.

## How to set up

Follow these steps to get your Instagram Lead Generation Machine running:

1. Import the JSON file into your n8n instance.
2. Add your API credentials: an Apify token for the Google and Instagram scrapers, an OpenAI API key for the AI-powered nodes, and PostgreSQL credentials for storing leads.
3. Open the Edit Fields node and set your platform, field of interest, and target country.
4. Run the workflow manually using the Manual Trigger node to test it.
5. Once confirmed, replace the manual trigger with a schedule or webhook to run it automatically.
6. Check your PostgreSQL database to ensure the leads are being saved correctly.

## Requirements

Before running the workflow, make sure you have the following:

- An n8n account or instance (self-hosted or n8n Cloud)
- An Apify account for accessing the Google and Instagram scrapers
- OpenAI API access for generating smart search queries and extracting emails
- A PostgreSQL database to store your leads
- A basic understanding of how n8n workflows and nodes operate

## How to customize the workflow

This workflow is flexible and can be customized to fit your business goals:

- **Change your niche or location** by updating the Edit Fields node. You can switch from "beauty influencers in the USA" to "fitness coaches in Canada" in seconds.
- **Add more data fields** to collect additional information such as engagement rates, bio keywords, or profile categories. Just modify the PostgreSQL node and database schema.
- **Connect to your CRM or email system** to automatically send introduction emails or add new leads to your marketing pipeline.
- **Use different triggers,** such as a scheduled cron trigger for daily runs or a webhook trigger to start the workflow through an API call.
- **Filter higher-quality leads** by adding logic to capture only profiles with a minimum number of followers or verified emails, as sketched below.
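A minimal sketch of such a filter, written as an n8n Code node. The threshold and field names mirror the fields described above but are illustrative, and the regex is a cheap complement to the AI extractor, not a replacement for it:

```javascript
// n8n Code node sketch: keep only leads with enough followers and a
// plausible email. MIN_FOLLOWERS and the field names are illustrative.
const MIN_FOLLOWERS = 5000;
const EMAIL_RE = /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/;

return items.filter((item) => {
  const { followersCount = 0, email = '', biography = '' } = item.json;
  const candidate = email || (biography.match(EMAIL_RE) || [''])[0];
  return followersCount >= MIN_FOLLOWERS && EMAIL_RE.test(candidate);
});
```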
by Muhammad Asadullah
# Daily Blog Automation Workflow

Fully automated blog creation system using n8n + AI Agents + Image Generation.

## Overview

This workflow automates the entire blog creation pipeline, from topic research to final publication. Three specialized AI agents collaborate to produce publication-ready blog posts with custom images, all saved directly to your Supabase database.

## How It Works

### 1. Research Agent (Topic Discovery)

- **Triggers:** Runs on a schedule (default: daily at 4 AM)
- **Process:**
  - Fetches existing blog titles from Supabase to avoid duplicates
  - Uses Google Search + RSS feeds to identify trending topics in your niche
  - Scrapes competitor content to find content gaps
  - Generates detailed topic briefs with SEO keywords, search intent, and differentiation angles
- **Output:** Comprehensive research document with SERP analysis and content strategy

### 2. Writer Agent (Content Creation)

- **Triggers:** Receives research from Agent 1
- **Process:**
  - Writes the full blog article based on the research brief
  - Follows strict SEO and readability guidelines (no AI fluff, natural tone, actionable content)
  - Structures content with proper HTML markup
  - Includes key sections: hook, takeaways, frameworks, FAQs, CTAs
  - Places image placeholders with mock URLs (https://db.com/image_1, etc.)
- **Output:** Complete JSON object with title, slug, excerpt, tags, category, and full HTML content

### 3. Image Prompt Writer (Visual Generation)

- **Triggers:** Receives blog content from Agent 2
- **Process:**
  - Analyzes the blog content to determine the number and type of images needed
  - Generates detailed 150-word prompts for each image (feature image + content images)
  - Creates prompts optimized for the Nano-Banana image model
  - Names each image descriptively for SEO
- **Output:** Structured prompts for 3-6 images per blog post

### 4. Image Generation Pipeline

- **Process:**
  - Loops through each image prompt
  - Generates images via the Nano-Banana API (Wavespeed.ai)
  - Downloads and converts images to PNG
  - Uploads them to a Supabase storage bucket
  - Generates permanent signed URLs
  - Replaces mock URLs in the HTML with real image URLs
- **Output:** Blog HTML with all images embedded

### 5. Publication

The final blog post is saved to the Supabase blogs table as a draft, ready for immediate publishing or review.

## Key Features

- ✅ **Duplicate Prevention:** Checks existing blogs before researching new topics
- ✅ **SEO Optimized:** Natural language, proper heading structure, keyword integration
- ✅ **Human-Like Writing:** No robotic phrases, varied sentence structure, actionable advice
- ✅ **Custom Images:** Generated specifically for each blog's content
- ✅ **Fully Structured:** JSON output with all metadata (tags, category, excerpt, etc.)
- ✅ **Error Handling:** Automatic retries with wait periods between agent calls
- ✅ **Tool Integration:** Google Search, URL scraping, RSS feeds for research

## Setup Requirements

### 1. API Keys Needed

- **Google Gemini API:** For Gemini 2.5 Pro/Flash models (content generation/writing)
- **Groq API (optional):** For the Kimi-K2-Instruct model (research/writing)
- **Serper.dev API:** For Google Search (2,500 free searches/month)
- **Wavespeed.ai API:** For Nano-Banana image generation
- **Supabase Account:** For database and image storage

### 2. Supabase Setup

- Create a blogs table with fields: title, slug, excerpt, category, tags, featured_image, status, featured, content
- Create a storage bucket for blog images
- Configure the bucket as public or use signed URLs
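For orientation, here is a minimal sketch of the final save step using the Supabase JS client (@supabase/supabase-js), with the field list from the Supabase setup above; the actual workflow performs this through its database nodes:

```javascript
// Sketch of the "Final Assembly -> Database Insert" step.
// URL and key come from your Supabase project settings.
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_SERVICE_KEY);

const { error } = await supabase.from('blogs').insert({
  title: 'Your SEO-Optimized Title',
  slug: 'your-seo-optimized-title',
  excerpt: 'Compelling 2-3 sentence summary with key benefits.',
  category: 'Your Category',
  tags: ['tag1', 'tag2'],
  featured_image: 'https://.../signed-url.png',
  featured: false,
  status: 'draft', // saved as draft for human review
  content: '<p>...complete HTML with embedded images...</p>',
});
if (error) throw error;
```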
### 3. Workflow Configuration

Update these placeholders:

- **RSS Feed URLs:** Replace [your website's rss.xml] with your site's RSS feed
- **Storage URLs:** Update the Supabase storage paths in the "Upload object" and "Generate presigned URL" nodes
- **API Keys:** Add your credentials to all HTTP Request nodes
- **Niche/Brand:** Customize the Research Agent system prompt with your industry keywords
- **Writing Style:** Adjust the Writer Agent prompt for your brand voice

## Customization Options

### Change Image Provider

Replace the "nano banana" node with: Gemini Imagen 3/4, DALL-E 3, the Midjourney API, or any Wavespeed.ai model.

### Adjust Schedule

Modify the "Schedule Trigger" to run multiple times daily, on specific days of the week, or on demand via webhook.

### Alternative Research Tools

Replace Serper.dev with the Perplexity API (included as an alternative node), custom web scraping, or a different search provider.

## Output Format

```json
{
  "title": "Your SEO-Optimized Title",
  "slug": "your-seo-optimized-title",
  "excerpt": "Compelling 2-3 sentence summary with key benefits.",
  "category": "Your Category",
  "tags": ["tag1", "tag2", "tag3", "tag4"],
  "author_name": "Your Team Name",
  "featured": false,
  "status": "draft",
  "content": "...complete HTML with embedded images..."
}
```

## Performance Notes

- **Average runtime:** 15-25 minutes per blog post
- **Cost per post:** ~$0.10-0.30 (depending on API usage)
- **Image generation:** 10-15 seconds per image with Nano-Banana
- **Retry logic:** Automatically handles API timeouts with 5-15 minute wait periods

## Best Practices

1. **Review Before Publishing:** The workflow saves posts with "draft" status for human review
2. **Monitor API Limits:** Track Serper.dev searches and image generation quotas
3. **Test Custom Prompts:** Adjust the Research/Writer prompts to match your brand
4. **Image Quality:** Review generated images; regenerate if needed
5. **SEO Validation:** Check slugs and meta descriptions before going live

## Workflow Architecture

Three main phases:

1. Research → Writer → Image Prompts (sequential AI agent chain)
2. Image Generation → Upload → URL Replacement (loop-based processing)
3. Final Assembly → Database Insert (single save operation)

Error handling: wait nodes between agents prevent rate limiting, retry logic runs on agent failures (max 2 retries), and conditional checks ensure content quality before proceeding.

**Result:** Hands-free blog publishing that maintains quality while saving 3-5 hours per post.
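As a reference for the URL-replacement step in phase 2 above, a minimal n8n Code node sketch (the `urlMap` variable is illustrative; in the workflow the mapping is built while looping over the generated images):

```javascript
// Sketch of the mock-URL replacement step: swap the Writer Agent's
// placeholder image URLs for the real signed Supabase URLs.
const urlMap = {
  'https://db.com/image_1': 'https://xyz.supabase.co/.../feature.png?token=...',
  'https://db.com/image_2': 'https://xyz.supabase.co/.../diagram.png?token=...',
};

let html = $json.content;
for (const [mockUrl, realUrl] of Object.entries(urlMap)) {
  html = html.split(mockUrl).join(realUrl); // global replace without regex escaping
}
return [{ json: { ...$json, content: html } }];
```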
by explorium
# Explorium Agent for Slack

AI-powered Slack bot for business intelligence queries using the Explorium API through MCP.

## Prerequisites

- Slack workspace with admin access
- Anthropic API key (you can replace this with another LLM chat model)
- Explorium API key

## 1. Create Slack App

### Create App

1. Go to api.slack.com/apps
2. Click Create New App → From scratch
3. Give it a name (e.g., "Explorium Agent") and select your workspace

### Bot Permissions (OAuth & Permissions)

Add these Bot Token Scopes: app_mentions:read, channels:history, channels:read, chat:write, emoji:read, groups:history, groups:read, im:history, im:read, mpim:history, mpim:read, reactions:read, users:read

### Enable Events

1. Event Subscriptions → Enable
2. Add the Request URL (from the n8n Slack Trigger node)
3. Subscribe to bot events: app_mention, message.channels, message.groups, message.im, message.mpim, reaction_added

### Install App

1. Install App → Install to Workspace
2. Copy the Bot User OAuth Token (xoxb-...)

## 2. Configure n8n

### Import & Setup

1. Import this JSON template.
2. **Slack Trigger node:** Add a Slack credential with the Bot Token, copy the webhook URL, and paste it into the Slack Event Subscriptions Request URL.
3. **Anthropic Chat Model node:** Add your Anthropic API credential. Model: claude-haiku-4-5-20251001 (you can replace it with another chat model).
4. **MCP Client node:** Endpoint: https://mcp.explorium.ai/mcp. Header Auth: add your Explorium API key.

## Usage Examples

@ExploriumAgent find tech companies in SF with 50-200 employees
@ExploriumAgent show Microsoft's technology stack
@ExploriumAgent get CMO contacts at healthcare companies
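Note on the Request URL step: when you save the URL, Slack first sends a one-time url_verification event whose challenge must be echoed back. n8n's Slack Trigger handles this for you; the sketch below shows the handshake only for the case where you front the webhook yourself:

```javascript
// Sketch of Slack's url_verification handshake (normally handled
// automatically by the n8n Slack Trigger). Node 18+.
import { createServer } from 'node:http';

createServer((req, res) => {
  let body = '';
  req.on('data', (chunk) => (body += chunk));
  req.on('end', () => {
    const event = JSON.parse(body || '{}');
    if (event.type === 'url_verification') {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      return res.end(event.challenge); // echo the challenge back to Slack
    }
    res.writeHead(200);
    res.end(); // acknowledge normal event callbacks quickly
  });
}).listen(3000);
```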
by Don Jayamaha Jr
Instantly access live OKX Spot Market data directly in Telegram! This workflow integrates the OKX REST v5 API with Telegram and optional GPT-4.1-mini formatting, delivering real-time insights such as latest prices, order book depth, candlesticks, trades, and mark prices, all in clean, structured reports.

## 🔎 How It Works

1. A Telegram Trigger node listens for incoming user commands.
2. The User Authentication node validates the Telegram ID to allow only authorized users.
3. The workflow creates a Session ID from chat.id to manage session memory.
4. The OKX AI Agent orchestrates data retrieval via HTTP requests to OKX endpoints:
   - Latest Price (/api/v5/market/ticker?instId=BTC-USDT)
   - 24h Stats (/api/v5/market/ticker?instId=BTC-USDT)
   - Order Book Depth (/api/v5/market/books?instId=BTC-USDT&sz=50)
   - Best Bid/Ask (book ticker snapshot)
   - Candlesticks / Klines (/api/v5/market/candles?instId=BTC-USDT&bar=15m)
   - Average / Mark Price (/api/v5/market/mark-price?instType=SPOT&instId=BTC-USDT)
   - Recent Trades (/api/v5/market/trades?instId=BTC-USDT&limit=100)
5. Utility tools refine the data:
   - Calculator → spreads, % change, normalized volumes (sketched below).
   - Think → reshapes raw JSON into clean text.
   - Simple Memory → stores sessionId, symbol, and state for multi-turn interactions.
6. A message splitter keeps Telegram output under 4000 characters.
7. Final results are sent to Telegram in a structured, human-readable format.

## ✅ What You Can Do with This Agent

- Get the latest price and 24h stats for any Spot instrument.
- Retrieve order book depth with configurable size (up to 400 levels).
- View best bid/ask snapshots instantly.
- Fetch candlestick OHLCV data across intervals (1m → 1M).
- Monitor recent trades (up to 100).
- Check the mark price as a fair average reference.
- Receive clean, Telegram-ready reports (auto-split if too long).

## 🛠️ Setup Steps

1. **Create a Telegram Bot:** Use @BotFather to generate a bot token.
2. **Configure in n8n:**
   - Import OKX AI Agent v1.02.json.
   - Replace the placeholder in the User Authentication node with your Telegram ID.
   - Add Telegram API credentials (bot token).
   - Add your OpenAI API key for GPT-4.1-mini.
   - Optionally, add your OKX API key.
3. **Deploy and Test:**
   - Activate the workflow in n8n.
   - Send a query like BTC-USDT to your bot.
   - Instantly get structured OKX Spot data back in Telegram.

## 📺 Setup Video Tutorial

Watch the full setup guide on YouTube.

⚡ Unlock real-time OKX Spot Market insights directly in Telegram, no private API keys required!

## 🧾 Licensing & Attribution

© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted.

🔗 For support: Don Jayamaha – LinkedIn
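As a reference for the Calculator step in "How It Works" above, a minimal sketch using OKX's public ticker endpoint (no API key required for this call; assumes Node 18+):

```javascript
// Sketch: fetch the public ticker and derive spread and 24h % change,
// mirroring what the agent's Calculator tool computes.
const instId = 'BTC-USDT';
const res = await fetch(`https://www.okx.com/api/v5/market/ticker?instId=${instId}`);
const { data } = await res.json();
const t = data[0];

const bid = Number(t.bidPx);
const ask = Number(t.askPx);
const spreadPct = ((ask - bid) / ((ask + bid) / 2)) * 100;
const change24hPct = ((Number(t.last) - Number(t.open24h)) / Number(t.open24h)) * 100;

console.log(`${instId} last=${t.last} spread=${spreadPct.toFixed(4)}% 24h=${change24hPct.toFixed(2)}%`);
```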
by Intuz
This n8n template from Intuz provides a complete solution to automate the extraction of critical information from PDF documents such as faxes. It uses the power of Google Gemini's multimodal capabilities to read the document, identify key fields, and organize the data into a structured format, saving it directly to a Google Sheet.

## Who's this workflow for?

- Healthcare Administrators
- Medical Billing Teams
- Legal Assistants
- Data Entry Professionals
- Office Managers

## How it works

1. **Upload via Web Form:** The process starts when a user uploads a fax (as a PDF file) through a simple, secure web form generated by n8n.
2. **AI Document Analysis:** The PDF is sent directly to Google Gemini's advanced multimodal model, which reads the entire document, including text, tables, and form fields, and extracts all relevant information based on a detailed prompt.
3. **AI Data Structuring:** The raw extracted text is then passed to a second AI step, which cleans the information and strictly structures it into a predictable JSON format (e.g., Patient ID, Name, DOB, etc.).
4. **Save to Google Sheets:** The final, structured data is automatically appended as a new, clean row in your designated Google Sheet, creating an organized and usable dataset from the unstructured fax.

## Key Requirements to Use This Template

1. **n8n Instance & Required Nodes:** An active n8n account (Cloud or self-hosted). This workflow uses the official n8n LangChain integration (@n8n/n8n-nodes-langchain). If you are using a self-hosted version of n8n, please ensure this package is installed.
2. **Google Accounts:**
   - **Google Drive Account:** For temporarily storing the uploaded file.
   - **Google Gemini AI Account:** A Google Cloud account with the Vertex AI API (for Gemini models) enabled and an associated API key.
   - **Google Sheets Account:** A pre-made Google Sheet with columns that match the data you want to extract.

## Customer Setup Guide

Here is a detailed, step-by-step guide to help you configure and run this workflow.

### 1. Before You Begin: Prerequisites

Please ensure you have the following ready:

- The FAX-Content-Extraction.json file we provided.
- Active accounts for n8n, Google Drive, Google Cloud (for Gemini AI), and Google Sheets.
- A Google Sheet created with header columns that match the data you want to extract (e.g., Patient ID, Patient Name, Date of Birth, etc.).

### 2. Step-by-Step Configuration

#### Step 1: Import the Workflow

1. Open your n8n canvas.
2. Click "Import from File" and select the FAX-Content-Extraction.json file. The workflow will appear on your canvas.

#### Step 2: Set Up the Form Trigger

1. The workflow starts with the "On form submission" node. Click on this node.
2. In the settings panel, you will see a "Form URL". Copy this URL; it is the link to the web form where you will upload your fax files.

#### Step 3: Configure the Google Drive Node

1. Click on the "Upload file" (Google Drive) node.
2. **Credentials:** Select your Google Drive account from the "Credentials" dropdown or click "Create New" to connect your account.
3. **Folder ID:** In the "Folder ID" field, choose the specific Google Drive folder where you want the uploaded faxes to be saved.

#### Step 4: Configure the Google Gemini AI Nodes (Very Important)

This workflow uses AI in two places, and both need to be connected.

1. **First AI Call (PDF Reading):**
   - Click on the "Call Gemini 2.0 Flash with PDF Capabilities" (HTTP Request) node.
   - Under "Authentication", make sure "Predefined Credential Type" is selected.
   - For "Credential Type", choose "Google Palm API".
   - In the "Credentials" dropdown, select your Google Gemini API key or click "Create New" to add it.
2. **Second AI Call (Data Structuring):**
   - Click on the "Google Gemini Chat Model" node (connected below the "Basic LLM Chain" node).
   - In the "Credentials" dropdown, select the same Google Gemini API key you used before.
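For context, the first AI call sends the PDF to Gemini inline as base64. A rough sketch of that call shape (the model name and prompt here are examples; mirror the HTTP Request node's actual configuration and the Gemini API docs):

```javascript
// Sketch of the PDF-reading call: Gemini receives the fax inline as base64.
import { readFile } from 'node:fs/promises';

const pdfBase64 = (await readFile('fax.pdf')).toString('base64');

const res = await fetch(
  'https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent',
  {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-goog-api-key': process.env.GEMINI_API_KEY,
    },
    body: JSON.stringify({
      contents: [{
        parts: [
          { text: 'Extract all relevant fields from this fax (Patient ID, Name, DOB, ...).' },
          { inline_data: { mime_type: 'application/pdf', data: pdfBase64 } },
        ],
      }],
    }),
  },
);
const json = await res.json();
console.log(json.candidates?.[0]?.content?.parts?.[0]?.text); // raw extracted text
```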
In the "Credentials" dropdown, select your Google Gemini API key or click "Create New" to add it. Second AI Call (Data Structuring): Click on the "Google Gemini Chat Model" node (it's connected below the "Basic LLM Chain" node). In the "Credentials" dropdown, select the same Google Gemini API key you used before. Step 5: (Optional) Customize What Data is Extracted You have full control over what information the AI looks for. To change the extraction rules: Click on the "Define Prompt" node. You can edit the text in the "Value" field to tell the AI what to look for (e.g., "Extract only the patient's name and medication list"). To change the final output columns: Click on the "Basic LLM Chain" node. In the "Text" field, you can edit the JSON schema to add, remove, or rename the fields you want in your final output. The keys here MUST match the column headers in your Google Sheet. Step 6: Configure the Final Google Sheets Node Click on the "Append row in sheet" node. Credentials: Select your Google Sheets account from the "Credentials" dropdown. Document ID: Select your target spreadsheet from the "Document" dropdown list. Sheet Name: Select the specific sheet within that document. Columns: Ensure that the fields listed here match the columns in your sheet and the schema from the "Basic LLM Chain" node. 4. Running the Workflow Save and Activate: Click "Save" and then toggle the workflow to "Active". Open the Form: Open the Form URL you copied in Step 2 in a new browser tab. Upload a File: Upload a sample fax PDF and submit the form. Check Your Sheet: After a minute, a new row with the extracted data should appear in your Google Sheet. Connect with us Website: https://www.intuz.com/services Email: getstarted@intuz.com LinkedIn: https://www.linkedin.com/company/intuz Get Started: https://n8n.partnerlinks.io/intuz For Custom Worflow Automation Click here- Get Started
by Denis
## How it works

- Multi-modal AI image generator powered by Google's Nano Banana (Gemini 2.5 Flash Image), the latest state-of-the-art image generation model
- Accepts text, images, voice messages, and PDFs via Telegram for maximum flexibility
- Uses OpenAI GPT models for conversation and image analysis, then Nano Banana for stunning image generation
- Features conversation memory for iterative image modifications ("make it darker", "change to blue")
- Processes different input types: analyzes uploaded images, transcribes voice messages, extracts PDF text
- All inputs are converted to optimized prompts specifically tuned for Nano Banana's capabilities

## Set up steps

1. Create a Telegram bot via @BotFather and get the API token
2. Set up a Google Gemini API key from Google AI Studio for Nano Banana image generation (~$0.04/image)
3. Configure an OpenAI API key for the GPT models (conversation, image analysis, voice transcription)
4. Import the workflow and configure all three API credentials in n8n
5. Update the bot tokens in the HTTP Request nodes for file downloads
6. Test with text prompts, image uploads, voice messages, and PDF documents
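For reference, a rough sketch of calling Nano Banana directly over REST. The model name and response shape follow Google's Gemini API docs at the time of writing; verify both before relying on them, since preview model names change:

```javascript
// Rough sketch: generate an image with Nano Banana (Gemini 2.5 Flash Image)
// and save the returned base64 PNG. Verify the exact model name against
// the current Gemini API docs.
import { writeFile } from 'node:fs/promises';

const res = await fetch(
  'https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-image:generateContent',
  {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-goog-api-key': process.env.GEMINI_API_KEY,
    },
    body: JSON.stringify({
      contents: [{ parts: [{ text: 'A cozy cabin at dusk, watercolor style' }] }],
    }),
  },
);
const json = await res.json();
const imagePart = json.candidates?.[0]?.content?.parts?.find((p) => p.inlineData);
if (imagePart) await writeFile('out.png', Buffer.from(imagePart.inlineData.data, 'base64'));
```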
by Vinay Gangidi
# Cash Reconciliation with AI

This template automates daily cash reconciliation by comparing your open invoices against bank statement transactions. Instead of manually scanning statements line by line, the workflow uses AI to:

- Match transactions to invoices and assign confidence scores
- Flag unapplied or review-needed payments
- Produce a reconciliation table with clear metrics (match %, unmatched count, etc.)

The end result: faster cash application, fewer errors, and better visibility into your cash flow.

## Good to know

- Each AI transaction-match call consumes credits from your OpenAI account; check OpenAI pricing for costs.
- OCR is used to extract data from PDF bank statements, so you'll need a Mistral OCR API key.
- This workflow assumes invoices are stored in an Excel or CSV file. You may need to tweak column names to match your file headers.

## How it works

1. **Import files:** The workflow pulls your invoice file (Excel/CSV) and daily bank statement (from OneDrive, Google Drive, or local storage).
2. **Extract and normalize data:** OCR is applied to bank statements if needed. Both data sources are cleaned and aligned into comparable formats.
3. **AI matching:** The AI agent compares statement transactions against invoice records, assigns a confidence score, and flags items that require manual review.
4. **Reconciliation output:** A ready-made table shows matched invoices (with amounts and confidence), unmatched items, and summary stats.

## How to use

- Start with the manual trigger node to test the flow. Once validated, replace it with a schedule trigger to run daily.
- Adjust thresholds (like date tolerances or amount variances) in the code nodes to fit your business rules, as sketched below.
- Review the reconciliation table each day; most of the work is automated, so you only handle exceptions.

## Requirements

- OpenAI API key
- Mistral OCR API key (for PDF bank statements)
- Microsoft OneDrive API key and Microsoft Excel API key
- Access to your invoice file (Excel/CSV) and daily bank statement source

## Setup steps

1. **Connect accounts:** Enter your API keys (OpenAI, Mistral OCR, OneDrive, Excel).
2. **Configure input nodes:** Point the Excel/CSV node to your invoice file and connect the Get Bank Statement node to your statement storage.
3. **Configure the AI agent:** Add your OpenAI API credentials to the AI node.
4. **Customize if needed:** Update column mappings if your file uses different headers, and adjust matching thresholds and tolerance logic.
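A minimal sketch of the kind of tolerance logic those code nodes apply before the AI agent scores candidates. Field names and thresholds here are illustrative; adapt them to your file headers and business rules:

```javascript
// Illustrative pre-match rule: a statement transaction is a candidate match
// for an invoice when the amount is within tolerance and the date falls
// within a window. Thresholds and field names are examples.
const AMOUNT_TOLERANCE = 0.01; // allow 1 cent rounding variance
const DATE_WINDOW_DAYS = 7;    // payment may land up to a week after the due date

function isCandidateMatch(txn, invoice) {
  const amountOk = Math.abs(txn.amount - invoice.amountDue) <= AMOUNT_TOLERANCE;
  const daysApart =
    Math.abs(new Date(txn.date) - new Date(invoice.dueDate)) / 86_400_000;
  return amountOk && daysApart <= DATE_WINDOW_DAYS;
}
```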
by Joseph
Transform meeting transcripts into fully customized, AI-powered presentations automatically. This comprehensive 5-workflow automation system analyzes client conversations and generates professional slide decks complete with personalized content and AI-generated illustrations.

## 🎯 What This Automation Does

This end-to-end solution takes a meeting transcript (Google Docs) and client information as input, then automatically:

1. Creates a presentation from your custom template
2. Generates a strategic presentation plan tailored to the client's needs
3. Creates custom illustrations using AI image generation
4. Populates slides with personalized text content
5. Inserts generated images into the appropriate slides
6. Delivers a client-ready presentation

Perfect for sales teams, consultants, agencies, and anyone who needs to create customized presentations at scale.

## 🔧 How It Works

The automation is split into 5 interconnected workflows:

### Workflow 1: Clone Presentation & Database Setup

- A form trigger captures the client name, transcript URL, and submission time
- Clones your presentation template via the Google Slides API
- Saves presentation details to Google Sheets for tracking

### Workflow 2: AI Presentation Plan Generation

- Analyzes the meeting transcript to understand client pain points
- Generates a comprehensive presentation structure and content strategy
- Saves the plan to Google Docs for review and tracking
- Uses a company profile (customizable) to match solutions to client needs

### Workflow 3: AI Illustration Generation

- An AI agent creates image prompts based on the presentation plan
- Generates illustrations using the Flux model via OpenRouter (nanobanana)
- Uploads images to Google Drive for slide insertion
- Tracks all generated assets in the database

### Workflow 4: Text Content Population

- An AI agent generates the final presentation text from the plan
- Replaces template placeholders with personalized content
- Uses Object IDs to target specific text elements in slides
- Updates slides using the native n8n Google Slides node

### Workflow 5: Image Insertion

- Retrieves image Object IDs from the presentation structure
- Downloads illustrations from Google Drive
- Converts images for ImgBB hosting (resolves Google Drive URL limitations)
- Updates slide images via the Google Slides API

## 📋 Prerequisites

**Required Accounts & API Keys:**

- Google Workspace (Drive, Slides, Docs)
- OpenAI API (for AI agents)
- OpenRouter API (for Flux image generation)
- ImgBB API (free tier available)
- Gemini API (optional, for additional AI tasks)

**Setup Requirements:**

- Google Sheets database (template provided in the article and inside the workflow)
- Google Slides presentation template with standard Object IDs
- Meeting transcript in Google Docs format

## 🎨 Customization Options

This automation is designed to be flexible:

- **Template Flexibility:** Use any slide template structure
- **Company Profile:** Customize the business context for your use case
- **AI Models:** Swap the OpenAI/Gemini agents for your preferred LLM
- **Image Generation:** Replace Flux with DALL-E, the Midjourney API, or other models
- **Slide Logic:** Extend to dynamically select slides based on content needs

## 💡 Key Technical Insights

- **Structured Output Handling:** Uses JavaScript for reliable JSON parsing when the AI output structure is complex (see the sketch below)
- **Object ID System:** Template placeholders use unique IDs for precise element targeting
- **Image Hosting Workaround:** ImgBB resolves Google Drive direct-URL limitations in API calls
- **HTTP Request Nodes:** Used for API operations not covered by native n8n nodes (copying presentations, image updates)

## 🔗 Full Documentation

For a detailed breakdown of each workflow, configuration steps, and best practices, read the complete guide in the Medium article.
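A minimal sketch of the JSON-parsing approach mentioned under Key Technical Insights, written as an n8n Code node. The input field name `output` is an assumption; match it to your agent node's output:

```javascript
// n8n Code node sketch: robustly parse an AI agent's JSON output, which is
// sometimes wrapped in a markdown fence or surrounded by stray prose.
// The "output" field name is an assumption.
const raw = $json.output ?? '';

function parseAgentJson(text) {
  const fenced = text.match(/`{3}(?:json)?\s*([\s\S]*?)`{3}/); // strip code fences
  const candidate = fenced ? fenced[1] : text;
  const start = candidate.indexOf('{');
  const end = candidate.lastIndexOf('}');
  if (start === -1 || end === -1) throw new Error('No JSON object found in agent output');
  return JSON.parse(candidate.slice(start, end + 1));
}

return [{ json: parseAgentJson(raw) }];
```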
## 🚀 Use Cases

- **Sales Teams:** Auto-generate pitch decks from discovery calls
- **Consulting Firms:** Create client proposals from needs assessments
- **Marketing Agencies:** Build campaign presentations from strategy sessions
- **Product Teams:** Transform user research into stakeholder presentations
- **Training & Education:** Convert session notes into learning materials

## ⚠️ Important Notes

- The template must use consistent Object IDs for the automation to work
- Google Drive images require ImgBB hosting for reliable URL access
- The AI agent output structure is complex; JavaScript parsing is recommended
- Rate limits apply for API services (especially image generation)

## 📦 Resources & Templates

**API Services (Get Your Keys Here):**

- **OpenRouter** for Flux (nanobanana) AI image generation
- **ImgBB API** for free image hosting
- **OpenAI API** for AI agents and text generation
- **Google Cloud Console** to enable the Google Slides, Drive, and Docs APIs
- **Google AI Studio** for a Gemini API key

**Templates & Examples:**

- **Meeting Transcript Sample:** example transcript structure
- **Google Sheets Database Template:** copy this to track your presentations
- **Presentation Template:** base slide deck with Object IDs

💡 Tip: Make copies of all templates before using them in your workflows!

Have questions or improvements? Connect with me:

- X (Twitter): @juppfy
- Email: joseph@uppfy.com

P.S.: I'd love to hear how you adapt this for your workflow!