by Parag Javale
The AI Blog Creator with Gemini, Replicate Image, Supabase Publishing & Slack is a fully automated content generation and publishing workflow designed for modern marketing and SaaS teams. It automatically fetches the latest industry trends, generates SEO-optimized blogs using AI, creates a relevant featured image, publishes the post to your CMS (e.g., Supabase or a custom API), and notifies your team via Slack, all on a daily schedule. This workflow connects multiple services (NewsAPI, Google Gemini, Replicate, Supabase, and Slack) into one intelligent content pipeline that runs hands-free once set up.

✨ Features

- 📰 Fetch Trending Topics — pulls the latest news or updates from your selected industry (via NewsAPI).
- 🤖 AI Topic Generation — Gemini suggests trending blog topics relevant to AI, SaaS, and Automation.
- 📝 AI Blog Authoring — Gemini then writes a full 1200-1500 word SEO-optimized article in Markdown.
- 🧹 Smart JSON Cleaner — a resilient Code node parses Gemini's output and ensures clean, structured data.
- 🖼️ Auto-Generated Image — Replicate's Ideogram model creates a blog cover image based on the content prompt.
- 🌐 Automatic Publishing — posts are automatically published to your Supabase or custom backend.
- 💬 Slack Notification — notifies your team with blog details and the live URL.
- ⏰ Fully Scheduled — runs automatically every day at your preferred time (default 10 AM IST).

⚙️ Workflow Structure

| Step | Node | Purpose |
| ---- | ---- | ------- |
| 1 | Schedule Trigger | Runs daily at 10 AM |
| 2 | Fetch Industry Trends (NewsAPI) | Retrieves trending articles |
| 3 | Message a model (Gemini) | Generates trending topic ideas |
| 4 | Message a model1 (Gemini) | Writes full SEO blog content |
| 5 | Code in JavaScript | Cleans, validates, and normalizes Gemini output |
| 6 | HTTP Request (Replicate) | Generates an image using Ideogram |
| 7 | HTTP Request1 | Retrieves the generated image URL |
| 8 | Wait + If | Polls until image generation succeeds |
| 9 | Edit Fields | Assembles blog fields into the final JSON |
| 10 | Publish to Supabase | Posts to your CMS |
| 11 | Slack Notification | Sends a message to your Slack channel |

🔧 Setup Instructions

1. Import the workflow into n8n and enable it.
2. Create the following credentials:
   - NewsAPI (Query Auth) — from https://newsapi.org
   - Google Gemini (PaLM API) — use your Gemini API key
   - Replicate (Bearer Auth) — API key from https://replicate.com/account
   - Supabase (Header Auth) — endpoint to your /functions/v1/blog-api (set your key in the header)
   - Slack API — create a Slack App token with chat:write permission
3. Edit the NewsAPI URL query parameter to match your industry (e.g., q=AI automation SaaS).
4. Update the Supabase publish URL to your project endpoint if needed.
5. Adjust the Slack channel name under "Slack Notification".
6. (Optional) Change the Schedule Trigger time to match your timezone.

💡 Notes & Tips

- The Code in JavaScript node is robust against malformed or extra text in Gemini output — it sanitizes Markdown and reconstructs clean JSON safely (a short sketch follows below).
- You can replace Supabase with any CMS or webhook endpoint by editing the "Publish to Supabase" node.
- The Replicate model used is ideogram-ai/ideogram-v3-turbo — you can swap it with Stable Diffusion or another model for different aesthetics.
- Use the slug field in your blog URLs for SEO-friendly links.
- Test with one manual execution before activating scheduled runs.
- If the Slack notification fails, verify the token scopes and channel permissions.
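For reference, here is a minimal sketch of what the sanitizing logic inside such a Code node can look like. The input field (`text`) and the output fields (`title`, `slug`, `content`, `image_prompt`) are illustrative assumptions, not the exact implementation shipped with the template.

```javascript
// Minimal sketch of a Gemini-output sanitizer for an n8n Code node.
// Assumes the model's reply arrives in a `text` field; adjust to your node's output.
const raw = $input.first().json.text || '';

// Strip Markdown code fences (```json ... ```) and any chatter around the JSON object.
const withoutFences = raw.replace(/```(?:json)?/gi, '').trim();
const start = withoutFences.indexOf('{');
const end = withoutFences.lastIndexOf('}');
if (start === -1 || end === -1) {
  throw new Error('No JSON object found in the model output');
}

let blog;
try {
  blog = JSON.parse(withoutFences.slice(start, end + 1));
} catch (e) {
  throw new Error('Model output is not valid JSON: ' + e.message);
}

// Normalize the fields the rest of the pipeline expects.
return [{
  json: {
    title: (blog.title || '').trim(),
    slug: (blog.slug || '').toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/(^-|-$)/g, ''),
    content: blog.content || '',
    image_prompt: blog.image_prompt || '',
  },
}];
```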
🧩 Tags #AI #Automation #ContentMarketing #BlogGenerator #n8n #Supabase #Gemini #Replicate #Slack #WorkflowAutomation
by Lidia
Who's it for

Teams who want to automatically generate structured meeting minutes from uploaded transcripts and instantly share them in Slack. Perfect for startups, project teams, or any company that collects meeting transcripts in Google Drive.

How it works / What it does

This workflow automatically turns raw meeting transcripts into well-structured minutes in Markdown and posts them to Slack:

1. Google Drive Trigger – watches a specific folder. Any new transcript file added will start the workflow.
2. Download File – grabs the transcript.
3. Prep Transcript – converts the file into plain text and passes the transcript downstream.
4. Message a Model – sends the transcript to OpenAI GPT for summarization using a structured system prompt (action items, decisions, N/A placeholders).
5. Make Minutes – formats GPT's response into a Markdown file (a rough sketch of this step appears at the end of this description).
6. Slack: Send a message – posts a Slack message announcing the auto-generated minutes.
7. Slack: Upload a file – uploads the full Markdown minutes file into the chosen Slack channel.

End result: your Slack channel always has clear, standardized minutes right after a meeting.

How to set up

1. Google Drive: create a folder where you'll drop transcript files and configure the folder ID in the Google Drive Trigger node.
2. OpenAI: add your OpenAI API credentials in the Message a Model node and select a supported GPT model (e.g., gpt-4o-mini or gpt-4).
3. Slack: connect your Slack account and set the target channel ID in the Slack nodes.
4. Run the workflow and drop a transcript file into Drive. Minutes will appear in Slack automatically.

Requirements

- Google Drive account (for transcript upload)
- OpenAI API key (for text summarization)
- Slack workspace (for message posting and file upload)

How to customize the workflow

- **Change summary structure**: adjust the system prompt inside Message a Model (e.g., shorter summaries, a language other than English).
- **Different output format**: modify Make Minutes to output plain text, PDF, or HTML instead of Markdown.
- **New destinations**: add more nodes to send minutes to email, Notion, or Confluence in parallel.
- **Multiple triggers**: replace the Google Drive trigger with a Webhook if you want to integrate with Zoom or MS Teams transcript exports.

Good to know

- OpenAI API calls are billed separately. See OpenAI pricing.
- Files must be text-based (.txt or .md). For PDFs or docs, add a conversion step before summarization.
- Slack requires the bot user to be a member of the target channel, otherwise you'll see a not_in_channel error.
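As a rough illustration of the Make Minutes step, the Code-node sketch below wraps the model's reply in a dated Markdown template. The `message.content` input field and the filename pattern are assumptions; adjust them to your node's actual output, and add a Convert to File node afterwards if you need a binary .md file for the Slack upload.

```javascript
// Minimal sketch of a "Make Minutes" Code node.
// Assumes the model's summary arrives in `message.content`; adjust to your node's output shape.
const summary = $input.first().json.message?.content || $input.first().json.text || '';
const meetingDate = new Date().toISOString().slice(0, 10);

// Wrap the model output in a standard minutes template.
const markdown = [
  `# Meeting Minutes – ${meetingDate}`,
  '',
  summary.trim(),
  '',
  '_Auto-generated from the uploaded transcript._',
].join('\n');

// Downstream, a "Convert to File" node can turn `minutes_markdown` into the .md file Slack uploads.
return [{
  json: {
    filename: `minutes-${meetingDate}.md`,
    minutes_markdown: markdown,
  },
}];
```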
by Paul Roussel
Automated workflow that generates custom AI image backgrounds from text prompts using Gemini's Nano Banana (native image generation), removes video backgrounds, and composites videos onto AI-generated scenes. Create any background you can imagine without needing stock images.

How it works

• Describe background: provide a video URL and a text prompt describing the desired background scene (e.g., "modern office with city skyline at golden hour")
• AI generates image: Gemini creates a background image from your prompt in ~10-20 seconds
• Upload to Drive: the generated background is saved to Google Drive and made publicly accessible
• Remove & composite: the video background is removed and the video is composited onto the AI-generated scene with a centered template
• Save final video: the completed video is uploaded to Google Drive with a shareable link

Set up steps

⏱️ Total setup time: ~5 minutes

• Get a Gemini API key (~1 min): visit https://aistudio.google.com/apikey, create a new API key, and add it to n8n Settings → Variables as GEMINI_KEY
• Get a VideoBGRemover API key (~2 min): visit https://videobgremover.com/n8n, sign up, and add the key to n8n as VIDEOBGREMOVER_KEY
• Connect Google Drive (~2 min): click the "Save Background Image to Drive" node, click "Connect", and authorize n8n

Use cases

- Marketing videos with custom branded environments tailored to your message
- Product demos with unique AI-generated backgrounds that match your product aesthetic
- Social media content with creative scenes you can't find in stock libraries
- AI avatars placed in AI-generated worlds
- Presentations with custom backgrounds generated for specific topics
- A/B testing different background variations for the same video

Pricing

- Gemini: ~$0.03 per generated image
- VideoBGRemover: $0.50-$2.00 per minute of video
- Total: ~$0.53-$2.03 per video

Triggers: Webhook (for automation) or Manual (for testing); an example webhook call appears at the end of this description.
Processing time: typically 5-7 minutes total

Prompt tips: be descriptive and specific. Instead of "office," try: "A modern minimalist office with floor-to-ceiling windows overlooking a city skyline at golden hour. Warm sunlight, polished concrete floors, sleek wooden desks, green plants."
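If you trigger the workflow via the webhook, the call might look roughly like the sketch below. The webhook path and the field names (`video_url`, `background_prompt`) are assumptions for illustration; match them to the Webhook node and downstream fields in your copy of the workflow.

```javascript
// Hypothetical example of triggering the workflow's webhook from your own code.
const response = await fetch('https://your-n8n-instance.example.com/webhook/ai-background', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    video_url: 'https://example.com/uploads/talking-head.mp4', // publicly reachable source video
    background_prompt:
      'A modern minimalist office with floor-to-ceiling windows overlooking a city skyline at golden hour',
  }),
});
console.log(await response.json()); // e.g., a link to the final composited video in Google Drive
```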
by Rapiwa
Who is this for?

This workflow is designed for online store owners, customer-success teams, and marketing operators who want to automatically verify customers' WhatsApp numbers and deliver order updates or invoice links via WhatsApp. It is built around a WooCommerce Trigger listening for order.updated events, but is easily adaptable to Shopify or other platforms that provide billing and line_items in the trigger payload.

What this Workflow Does / Key Features

- Listens for WooCommerce order events (example: order.updated) via a webhook or a WooCommerce Trigger.
- Filters only orders with status "completed" and maps the payload into a normalized object: { data: { customer, products, invoice_link } } using the Code node "Order Completed check".
- Iterates over line items using SplitInBatches to control throughput.
- Cleans phone numbers (the "Clean WhatsApp Number" Code node) by removing all non-digit characters (see the sketch after the notes below).
- Verifies whether the cleaned phone number is registered on WhatsApp using Rapiwa's verify endpoint (POST https://app.rapiwa.com/api/verify-whatsapp).
- If verified, sends a templated WhatsApp message via Rapiwa (POST https://app.rapiwa.com/api/send-message).
- Appends an audit row to a "Verified & Sent" Google Sheet for successful sends, or to an "Unverified & Not Sent" sheet for unverified numbers.
- Uses Wait and batching to throttle requests and avoid API rate limits.

Requirements

- HTTP Bearer credential for Rapiwa (example name in the flow: Rapiwa Bearer Auth)
- WooCommerce API credential for the trigger (example: WooCommerce (get customer))
- Running n8n instance with these nodes: WooCommerce Trigger, Code, SplitInBatches, HTTP Request, IF, Google Sheets, Wait
- Rapiwa account and a valid Bearer token
- Google account with Sheets access and OAuth2 credentials configured in n8n
- WooCommerce store (or any trigger source) that provides billing and line_items in the payload

How to Use — step-by-step Setup

1. Credentials
   - Rapiwa: create an HTTP Bearer credential in n8n and paste your token (flow example name: Rapiwa Bearer Auth).
   - Google Sheets: add an OAuth2 credential (flow example name: Google Sheets).
   - WooCommerce: add the WooCommerce API credential or configure a webhook on your store.
2. Configure Google Sheets
   - The exported flow uses spreadsheet ID 1S3RtGt5xxxxxxxXmQi_s (Sheet gid=0) as an example. Replace it with your spreadsheet ID and sheet gid.
   - Ensure your sheet column headers exactly match the mapping keys listed below (case and trailing spaces must match or be corrected in the mapping).
3. Verify HTTP Request nodes
   - Verify endpoint: POST https://app.rapiwa.com/api/verify-whatsapp — sends { number } (uses the HTTP Bearer credential).
   - Send endpoint: POST https://app.rapiwa.com/api/send-message — sends number, message_type=text, and a templated message that uses fields from the Clean WhatsApp Number output.

Google Sheet Column Structure

The Google Sheets nodes in the flow append rows with these column keys.
Make sure the spreadsheet headers match. A Google Sheet formatted like this ➤ sample

| Name | Number | Email | Address | Product Title | Product ID | Total Price | Invoice Link | Delivery Status | Validity | Status |
|------|--------|-------|---------|---------------|------------|-------------|--------------|-----------------|----------|--------|
| Abdul Mannan | 8801322827799 | contact@spagreen.net | mirpur dohs | Air force 1 Fossil 1:1 - 44 | 238 | BDT 5500.00 | Invoice link | completed | verified | sent |
| Abdul Mannan | 8801322827799 | contact@spagreen.net | mirpur dohs h#1168 rd#10 av#10 mirpur dohs dhaka | Air force 1 Fossil 1:1 - 44 | 238 | BDT 5500.00 | Invoice link | completed | unverified | not sent |

Important Notes

- Do not hard-code API keys or tokens; always use n8n credentials.
- Google Sheets column header names must match the mapping keys used in the nodes. Trailing spaces are common accidental problems — trim them in the spreadsheet or adjust the mapping.
- The IF node in the exported flow compares to the string "true". If the verify endpoint returns boolean true/false, convert to string or boolean consistently before the IF.
- Message templates in the flow reference $('Clean WhatsApp Number').item.json.data.products[0] — update the templates if you need multiple-product support.

Useful Links

- Dashboard: https://app.rapiwa.com
- Official Website: https://rapiwa.com
- Documentation: https://docs.rapiwa.com

Support & Help

- WhatsApp: Chat on WhatsApp
- Discord: SpaGreen Community
- Facebook Group: SpaGreen Support
- Website: https://spagreen.net
- Developer Portfolio: Codecanyon SpaGreen
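For illustration, a Clean WhatsApp Number Code node along the lines described above could look like this. The `data.customer.phone` input and `whatsapp_number` output names are assumptions; adapt them to the normalized object produced by your Order Completed check node.

```javascript
// Minimal sketch of a "Clean WhatsApp Number" Code node, assuming the normalized order
// object from the previous step exposes data.customer.phone.
const items = $input.all();

return items.map((item) => {
  const data = item.json.data || {};
  const rawPhone = (data.customer && data.customer.phone) || '';

  // Keep digits only, e.g. "+880 1322-827799" -> "8801322827799"
  const cleaned = rawPhone.replace(/\D/g, '');

  return {
    json: {
      ...item.json,
      data: {
        ...data,
        customer: { ...data.customer, whatsapp_number: cleaned },
      },
    },
  };
});
```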
by Khairul Muhtadin
This AI-powered workflow transforms n8n workflow JSON files into publication-ready, SEO-optimized markdown posts for the n8n community. Simply upload your workflow's JSON, and let Google Gemini 2.5 Pro, guided by a LlamaIndex-powered knowledge base of best practices, automatically generate compelling content.

Why Use This Workflow?

- Time Savings: reduces the time to create a detailed workflow post from over an hour of manual writing to under 2 minutes.
- Cost Reduction: eliminates the need for separate AI content subscriptions or outsourcing content creation tasks.
- Error Prevention: enforces content quality and structural consistency by using a knowledge base of n8n's official guidelines, minimizing formatting errors.

Ideal For

- **n8n Workflow Creators**: to quickly document and share their creations on the community platform without the tedious, time-consuming writing process.
- **Developer Advocates**: to standardize and accelerate the production of technical tutorials and workflow showcases.
- **Content & Marketing Teams**: to streamline the content pipeline for n8n-related blog posts, tutorials, and community engagement initiatives.

How It Works

1. Trigger: the process starts when you upload an n8n workflow JSON file via a simple web form.
2. Data Extraction: the workflow automatically extracts the JSON content from the uploaded file (a small sketch of this step appears at the end of this description).
3. Intelligence Layer: an advanced AI agent, powered by Google Gemini 2.5 Pro, analyzes the structure, nodes, and metadata of your workflow.
4. Knowledge Retrieval: the agent consults a specialized, in-memory knowledge base built from n8n's content guidelines. This knowledge base is created by parsing documents with LlamaIndex and refined with a Cohere Reranker for maximum accuracy.
5. Content Generation: the AI agent synthesizes the technical details from your JSON with the best practices from the knowledge base to write a complete, benefit-driven markdown post.
6. Output & Delivery: the final, polished markdown content is generated as the workflow's output, ready to be copied and pasted into the n8n community platform.

Setup Guide

Prerequisites

| Requirement | Type | Purpose |
|-------------|------|---------|
| n8n instance | Essential | Workflow execution platform |
| Google Gemini API Key | Essential | Powers the core AI content generation |
| LlamaIndex Cloud API Key | Essential | Parses documents for the knowledge base |
| Cohere API Key | Optional | Improves knowledge base search results |
| Google Drive Account | Optional | For automatically updating the knowledge base from a Google Doc |

Installation Steps

1. Import the JSON file to your n8n instance.
2. Configure credentials:
   - Google Gemini: in the "GEmini 2.5 pro" node, create and add your Google Gemini API credential.
   - LlamaIndex: in the three HTTP Request nodes named "Parse Document...", "Monitor Document...", and "Retrieve Parsed...", create an HTTP Header Auth credential. The header name is Authorization and the value is Bearer YOUR_LLAMA_INDEX_API_KEY.
   - Cohere (optional): in the "Reranker Cohere" node, create and add your Cohere API credential.
   - Google Drive (optional): if you plan to auto-update the knowledge base, configure Google Drive OAuth2 credentials for the "Knowledge Base Updated Trigger" and "Download Knowledge Document" nodes.
3. Update environment-specific values: to use the knowledge base auto-update feature, go to the "Knowledge Base Updated Trigger" node and select the Google Drive file containing your content guidelines.
4. Customize settings: the primary system prompt in the "n8ncreator" agent node can be modified to adjust the tone, style, or structure of the generated content.
5. Test execution: run the workflow manually and use the form to upload a sample n8n workflow JSON file to verify that all connections work correctly.

Technical Details

Core Nodes

| Node | Purpose | Key Configuration |
|------|---------|-------------------|
| Form Trigger | Initiates the workflow via a file upload. | Set the "Input Json Workflow" field to required. |
| Langchain Agent | Orchestrates the entire content creation process. | The system prompt contains all instructions for the AI. |
| ChatGoogleGemini | Provides the core generative AI capabilities. | Select your Gemini model of choice (e.g., gemini-2.5-pro). |
| VectorStoreInMemory | Acts as the agent's knowledge base tool. | Configured to use embeddings from a Google Gemini model. |
| HTTPRequest | Interacts with the LlamaIndex API to parse documents. | Set up with the LlamaIndex API endpoint and authentication. |

Customization Options

Basic Adjustments:

- **Change AI Model**: replace the ChatGoogleGemini node with another LLM node (e.g., OpenAI, Anthropic) to use a different provider.
- **Adjust System Prompt**: modify the prompt in the "n8ncreator" node to tailor the output for different platforms (e.g., blog, internal wiki) or change the writing style.

Advanced Enhancements:

- **Automated Publishing**: connect the output of the "n8ncreator" node to a Ghost, WordPress, or GitHub node to automatically publish the generated post.
- **Add Web Search**: equip the Langchain Agent with a web search tool to allow it to fetch live information about new n8n nodes or services.
- **Batch Processing**: replace the Form Trigger with a Read Binary Files node to process an entire folder of workflow JSON files in a single run.

Performance & Optimization

| Metric | Expected Performance | Optimization Tips |
|--------|---------------------|-------------------|
| Execution time | ~1 minute per run | Largely dependent on the Gemini API response time. |
| API calls | 1 LLM call per post | Knowledge base updates trigger LlamaIndex/Google calls separately. |
| Error handling | Built-in retry logic for document parsing | Add an error workflow path after the "n8ncreator" node to handle AI generation failures. |

Troubleshooting

Common Issues:

| Problem | Cause | Solution |
|---------|-------|----------|
| AI output is generic or incomplete | The input JSON file is invalid or lacks key information (e.g., no node names). | Ensure you are uploading a valid, exported n8n workflow JSON. Verify the workflow has been saved with descriptive node names. |
| LlamaIndex parsing fails | The LlamaIndex API key is incorrect or the source document is inaccessible. | Double-check your LlamaIndex API credential. Ensure the Google Doc sharing settings allow access. |
| Credential error | API keys are missing or incorrect for Gemini, LlamaIndex, or Cohere. | Go to the specified nodes and verify that the correct credentials have been created and selected. |

Created by: khaisa Studio
Category: AI
Tags: AI, Content Generation, Google Gemini, LlamaIndex, Automation

Need custom workflows? Contact us
Connect with the creator: Portfolio • Workflows • LinkedIn • Medium • Threads
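As an illustration of the Data Extraction step, the sketch below parses the uploaded workflow JSON and surfaces the node names and count that the agent writes about. It assumes the file content arrives as text in a `data` field (for example after an Extract from File node); all field names here are illustrative only.

```javascript
// Illustrative sketch of the "Data Extraction" step as a Code node.
const workflow = JSON.parse($input.first().json.data);

// Collect the details the content agent will describe.
const nodes = (workflow.nodes || []).map((n) => ({ name: n.name, type: n.type }));

return [{
  json: {
    workflow_name: workflow.name || 'Untitled workflow',
    node_count: nodes.length,
    nodes,
  },
}];
```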
by Jaruphat J.
⚠️ Note: This template requires a community node and works only on self-hosted n8n installations. It uses the Typhoon OCR Python package, pdfseparate from poppler-utils, and custom command execution. Make sure to install all required dependencies locally.

Who is this for?

This template is designed for developers, back-office teams, and automation builders (especially in Thailand or Thai-speaking environments) who need to process multi-file, multi-page Thai PDFs and automatically export structured results to Google Sheets. It is ideal for:

- Government and enterprise document processing
- Thai-language invoices, memos, and official letters
- AI-powered automation pipelines that require Thai OCR

What problem does this solve?

Typhoon OCR is one of the most accurate OCR tools for Thai text, but integrating it into an end-to-end workflow usually requires manual scripting and handling multi-page PDFs. This template solves that by:

- Splitting PDFs into individual pages
- Running Typhoon OCR on each page
- Aggregating text back into a single file
- Using AI to extract structured fields
- Automatically saving structured data into Google Sheets

What this workflow does

- **Trigger**: manual execution or any n8n trigger node
- **Load Files**: read PDFs from a local doc/multipage folder
- **Split PDF Pages**: use pdfinfo and pdfseparate to break PDFs into pages
- **Typhoon OCR**: run OCR on each page via Execute Command
- **Aggregate**: combine per-page OCR text
- **LLM Extraction**: use AI (e.g., GPT-4, OpenRouter) to extract fields into JSON
- **Parse JSON**: convert structured JSON into a tabular format (see the sketch at the end of this description)
- **Google Sheets**: append one row per file into a Google Sheet
- **Cleanup**: delete temp split pages and move processed PDFs into a Completed folder

Setup

1. Install requirements:
   - Python 3.10+
   - typhoon-ocr: pip install typhoon-ocr
   - poppler-utils: provides pdfinfo, pdfseparate
   - qpdf: backup page counting
2. Create folders:
   - /doc/multipage for incoming files
   - /doc/tmp for split pages
   - /doc/multipage/Completed for processed files
3. Google Sheet: create a Google Sheet with column headers like:
   book_id | date | subject | to | attach | detail | signed_by | signed_by2 | contact_phone | contact_email | contact_fax | download_url
4. API keys: export your TYPHOON_OCR_API_KEY and OPENAI_API_KEY (or use credentials in n8n)

How to customize this workflow

- Replace the LLM provider in the "Structure Text to JSON with LLM" node (supports OpenRouter, OpenAI, etc.)
- Adjust the JSON schema and parsing logic to match your documents
- Update the Google Sheets mapping to fit your desired fields
- Add trigger nodes (Dropbox, Google Drive, Webhook) to automate file ingestion

About Typhoon OCR

Typhoon is a multilingual LLM and NLP toolkit optimized for Thai. It includes typhoon-ocr, a Python OCR package designed for Thai-centric documents. It is open source, highly accurate, and works well in automation pipelines. Perfect for government paperwork, PDF reports, and multi-language documents in Southeast Asia.

Deployment Option

You can also deploy this workflow easily using the Docker image provided in my GitHub repository: https://github.com/Jaruphat/n8n-ffmpeg-typhoon-ollama
This Docker setup already includes n8n, ffmpeg, Typhoon OCR, and Ollama combined, so you can run the whole environment without installing each dependency manually.
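A possible shape for the Parse JSON step is sketched below: it flattens the LLM's structured output into a single row matching the sheet headers above. The `extracted` input field and the assumption that the LLM returns a JSON string are illustrative, not the exact node configuration.

```javascript
// Minimal sketch of the "Parse JSON" step in a Code node.
const raw = $input.first().json.extracted || '{}';
const doc = typeof raw === 'string' ? JSON.parse(raw) : raw;

// One flat object per document so the Google Sheets node can append a single row.
return [{
  json: {
    book_id: doc.book_id || '',
    date: doc.date || '',
    subject: doc.subject || '',
    to: doc.to || '',
    attach: doc.attach || '',
    detail: doc.detail || '',
    signed_by: doc.signed_by || '',
    signed_by2: doc.signed_by2 || '',
    contact_phone: doc.contact_phone || '',
    contact_email: doc.contact_email || '',
    contact_fax: doc.contact_fax || '',
    download_url: doc.download_url || '',
  },
}];
```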
by Pedro Entringer
🧠 Export Tawk.to Help Center Articles to Google Drive as Markdown Files

Transform the way you manage your knowledge base with this fully automated n8n workflow! This automation connects directly to your Tawk.to Help Center, reads all published categories and articles, converts them to Markdown (.md) format, and uploads each file to Google Drive.

🔹 Key Benefits

- 🚀 Complete Extraction: automatically captures all categories and articles from your Tawk.to Help Center, even without direct API integration.
- 🧩 Automatic Conversion: transforms HTML content into clean Markdown files — perfect for editing, version control, or migration to another CMS.
- ☁️ Native Google Drive Integration: saves each article with a structured filename, avoids duplicates, and organizes them by category (see the filename sketch at the end of this description).
- 🔁 Fully Customizable: easily adapt the workflow to export to Notion, GitHub, Dropbox, or any other platform supported by n8n.

💡 Ideal Use Cases

- Migrating your Tawk.to Help Center
- Creating automated content backups
- Integrating documentation across multiple systems

⚙️ Prerequisites

Before running this workflow, make sure you have:

- An active Tawk.to account with access to your Help Center.
- A Google Drive account (personal or workspace).
- Access to n8n (self-hosted or cloud).

🧰 Setup Instructions

1. Import the workflow: download the JSON file from the provided link or your n8n community instance, then click Import Workflow in n8n and upload the file.
2. Authenticate Google Drive: open the Google Drive node, click Connect, choose your Google account, and allow access.
3. Configure the output folder: choose or create a target folder in your Google Drive where articles will be saved.
4. Run the workflow: click Execute Workflow. The automation will read all Help Center articles, convert them to Markdown, and save them to your Drive.
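As an example of the structured-filename idea mentioned under Key Benefits, a small Code node could build duplicate-safe names like this. The `category_title` and `article_title` fields are assumptions; map them to whatever your Tawk.to Help Center requests actually return.

```javascript
// Illustrative sketch: build a category-prefixed, slugified filename per article
// before the Google Drive upload.
function slugify(text) {
  return text
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')   // non-alphanumerics -> dashes
    .replace(/(^-|-$)/g, '');      // trim leading/trailing dashes
}

return $input.all().map((item) => {
  const category = item.json.category_title || 'uncategorized';
  const title = item.json.article_title || 'untitled';
  return {
    json: {
      ...item.json,
      filename: `${slugify(category)}--${slugify(title)}.md`,
    },
  };
});
```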
by Oneclick AI Squad
Automatically creates complete videos from a text prompt—script, voiceover, stock footage, and subtitles all assembled and ready.

How it works

Send a video topic via webhook (e.g., "Create a 60-second video about morning exercise"). The workflow uses OpenAI to generate a structured script with scenes, converts the text to natural-sounding speech, searches Pexels for matching B-roll footage, and downloads everything. Finally, it merges audio with video, generates SRT subtitles (a sketch of this step appears below), and prepares all components for final assembly. The workflow handles parallel processing—while generating the voiceover, it simultaneously searches and downloads stock footage to save time.

Setup steps

1. Add OpenAI credentials for script generation and text-to-speech
2. Get a free Pexels API key from pexels.com/api for stock footage access
3. Connect Google Drive for storing the final video output
4. Install FFmpeg (optional) for automated video assembly, or manually combine the components
5. Test the webhook by sending a POST request with your video topic

Input format:

{ "prompt": "Your video topic here", "duration": 60, "style": "motivational" }

What you get

✅ AI-generated script broken into scenes
✅ Professional voiceover audio (MP3)
✅ Downloaded stock footage clips (MP4)
✅ Timed subtitles file (SRT)
✅ All components ready for final editing

Note: the final video assembly requires FFmpeg or a video editor. All components are prepared and organized by scene number for easy manual editing if needed.
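To illustrate the subtitle step, the sketch below builds an SRT string from scene timings. It assumes each scene carries `text` plus `start` and `end` times in seconds; those field names are illustrative and may differ from the template's own script structure.

```javascript
// Minimal sketch of SRT generation in a Code node.
const scenes = $input.first().json.scenes || [];

function toTimestamp(seconds) {
  const ms = Math.round((seconds % 1) * 1000);
  const s = Math.floor(seconds) % 60;
  const m = Math.floor(seconds / 60) % 60;
  const h = Math.floor(seconds / 3600);
  const pad = (n, w = 2) => String(n).padStart(w, '0');
  return `${pad(h)}:${pad(m)}:${pad(s)},${pad(ms, 3)}`;
}

// Each SRT cue: index, "start --> end", subtitle text, blank line.
const srt = scenes
  .map((scene, i) => [
    i + 1,
    `${toTimestamp(scene.start)} --> ${toTimestamp(scene.end)}`,
    scene.text,
    '',
  ].join('\n'))
  .join('\n');

return [{ json: { subtitles_srt: srt } }];
```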
by iamvaar
Youtube Video: https://youtu.be/dEtV7OYuMFQ?si=fOAlZWz4aDuFFovH

Workflow Pre-requisites

Step 1: Supabase Setup

First, replace the keys in the "Save the embedding in DB" and "Search Embeddings" nodes with your new Supabase keys. After that, run the following code snippets in your Supabase SQL editor.

Create the table to store chunks and embeddings:

CREATE TABLE public."RAG" (
  id bigserial PRIMARY KEY,
  chunk text NULL,
  embeddings vector(1024) NULL
) TABLESPACE pg_default;

Create a function to match embeddings:

DROP FUNCTION IF EXISTS public.matchembeddings1(integer, vector);

CREATE OR REPLACE FUNCTION public.matchembeddings1(
  match_count integer,
  query_embedding vector
)
RETURNS TABLE (
  chunk text,
  similarity float
)
LANGUAGE plpgsql
AS $$
BEGIN
  RETURN QUERY
  SELECT
    R.chunk,
    1 - (R.embeddings <=> query_embedding) AS similarity
  FROM public."RAG" AS R
  ORDER BY R.embeddings <=> query_embedding
  LIMIT match_count;
END;
$$;

Step 2: Create a Jotform with these fields

- Your full name
- Email address
- Upload PDF Document (the field where you upload the knowledge base as a PDF)

Step 3: Get a Together AI API Key

Get a Together AI API key and paste it into the "Embedding Uploaded document" node and the "Embed User Message" node.

Here is a detailed, node-by-node explanation of the n8n workflow, which is divided into two main parts.

Part 1: Ingesting Knowledge from a PDF

This first sequence of nodes runs when you submit a PDF through a Jotform. Its purpose is to read the document, process its content, and save it in a specialized database for the AI to use later.

1. JotForm Trigger (Trigger): starts the entire workflow. It is configured to listen for new submissions on a specific Jotform. When someone uploads a file and submits the form, this node activates and passes the submission data to the next step.
2. Grab New knowledgebase (HTTP Request): the initial trigger from Jotform only contains basic information. This node makes a follow-up call to the Jotform API using the submissionID to get the complete details of that submission, including the link to the uploaded file.
3. Grab the uploaded knowledgebase file link (HTTP Request): using the file link obtained from the previous node, this step downloads the actual PDF file. It is set to receive the response as a file, not as text.
4. Extract Text from PDF File (Extract From File): this utility node takes the binary PDF file downloaded in the previous step and extracts all the readable text content from it. The output is a single block of plain text.
5. Splitting into Chunks (Code): runs a small JavaScript snippet that takes the large block of text from the PDF and chops it into smaller, more manageable pieces, or "chunks," each of a predefined length. This is critical because AI models work more effectively with smaller, focused pieces of text (a minimal sketch follows below).
6. Embedding Uploaded document (HTTP Request): a key AI step. It sends each individual text chunk to an embeddings API, where a specified AI model converts the semantic meaning of the chunk into a numerical list called an embedding or vector. This vector is like a mathematical fingerprint of the text's meaning.
7. Save the embedding in DB (Supabase): connects to your Supabase database. For every chunk, it creates a new row in the specified table and stores two important pieces of information: the original text chunk and its corresponding numerical embedding (its "fingerprint") from the previous step.
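A minimal version of the Splitting into Chunks snippet could look like the following. The `text` input field and the 1000-character chunk size are assumptions; the template's own code may use different values.

```javascript
// Minimal sketch of the "Splitting into Chunks" Code node: slice the extracted PDF text
// into fixed-size chunks.
const text = $input.first().json.text || '';
const chunkSize = 1000;

const chunks = [];
for (let i = 0; i < text.length; i += chunkSize) {
  chunks.push(text.slice(i, i + chunkSize));
}

// Emit one n8n item per chunk so the embedding request runs once per chunk.
return chunks.map((chunk) => ({ json: { chunk } }));
```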
Part 2: Answering Questions via Chat

This second sequence starts when a user sends a message. It uses the knowledge stored in the database to find relevant information and generate an intelligent answer.

1. When chat message received (Chat Trigger): starts the second part of the workflow. It listens for any incoming message from a user in a connected chat application.
2. Embed User Message (HTTP Request): takes the user's question and sends it to the exact same embeddings API and model used in Part 1. This converts the question's meaning into the same kind of numerical vector or "fingerprint."
3. Search Embeddings (HTTP Request): the "retrieval" step. It calls the custom database function in Supabase, sends the question's embedding to that function, and asks it to search the knowledge base table for the specified number of top text chunks whose embeddings are mathematically most similar to the question's embedding (a request sketch follows below).
4. Aggregate (Aggregate): the search from the previous step returns multiple separate items. This utility node bundles those items into a single, combined piece of data, which makes it easier to feed all the context into the final AI model at once.
5. AI Agent & Google Gemini Chat Model (LangChain Agent & AI Model): the "generation" step where the final answer is created. The AI Agent node is given a detailed set of instructions (a prompt) that tells the Google Gemini Chat Model to act as a professional support agent. Crucially, it provides the AI with the user's original question and the aggregated text chunks from the Aggregate node as its only source of truth. It then instructs the AI to formulate an answer based only on that provided context, format it for a specific chat style, and to say "I don't know" if the answer cannot be found in the chunks. This prevents the AI from making things up.
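To make the retrieval step concrete, the sketch below shows what the Search Embeddings request effectively does: call the matchembeddings1 function defined above through Supabase's REST RPC endpoint. The project URL, key placeholder, and the `embedding` field name are assumptions; in the actual workflow these live in the HTTP Request node's fields and credentials.

```javascript
// Sketch of the Supabase RPC call behind the "Search Embeddings" step.
const SUPABASE_URL = 'https://YOUR-PROJECT.supabase.co'; // placeholder
const SUPABASE_KEY = 'YOUR_SUPABASE_KEY';                // placeholder

const questionEmbedding = $input.first().json.embedding; // vector produced by "Embed User Message"

const response = await fetch(`${SUPABASE_URL}/rest/v1/rpc/matchembeddings1`, {
  method: 'POST',
  headers: {
    apikey: SUPABASE_KEY,
    Authorization: `Bearer ${SUPABASE_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    match_count: 5,                     // number of top chunks to retrieve
    query_embedding: questionEmbedding, // must come from the same 1024-dim model as ingestion
  }),
});

const matches = await response.json(); // [{ chunk, similarity }, ...]
return matches.map((m) => ({ json: m }));
```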
by Sridevi Edupuganti
Try It Out!

Use n8n to extract medical test data from diagnostic reports uploaded to Google Drive, automatically detect abnormal values, and generate personalized health advice.

How it works

1. Upload a medical report (PDF or image) to a monitored Google Drive folder
2. Mistral AI extracts the text using OCR while preserving document structure
3. GPT-4 parses the extracted text into structured JSON (patient info, test names, results, units, reference ranges)
4. All test results are saved to the "All Values" sheet in Google Sheets
5. JavaScript code compares each result against its reference range to detect abnormalities (see the sketch at the end of this description)
6. For out-of-range values, GPT-4 generates personalized dietary, lifestyle, and exercise advice based on patient age and gender
7. Abnormal results with recommendations are saved to the "Out of Range Values" sheet

How to use

1. Set up Google Drive folder monitoring and a Google Sheet with two tabs: "All Values" and "Out of Range Values"
2. Configure API credentials for Google Drive, Mistral AI, and OpenAI (GPT-4)
3. Upload medical reports to your monitored folder
4. Review the extracted data and personalized health advice in Google Sheets

Requirements

- Google Drive and Sheets with OAuth2 authentication
- Mistral AI API key for OCR
- OpenAI API key (GPT-4 access required) for intelligent extraction and advice generation

Need Help?

- See the detailed Read Me file at https://drive.google.com/file/d/1Wv7dfcBLsHZlPcy1QWPYk6XSyrS3H534/view?usp=sharing
- Join the n8n community forum for support
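For reference, the range-comparison logic can be as simple as the sketch below. It assumes each test row carries a numeric `result` and a `reference_range` string such as "4.0-11.0"; those field names are illustrative and may differ from the actual node.

```javascript
// Minimal sketch of the abnormality check in a Code node.
return $input.all().map((item) => {
  const { result, reference_range } = item.json;
  const value = parseFloat(result);
  const match = String(reference_range || '').match(/([\d.]+)\s*-\s*([\d.]+)/);

  let status = 'unknown';
  if (match && !Number.isNaN(value)) {
    const low = parseFloat(match[1]);
    const high = parseFloat(match[2]);
    status = value < low ? 'low' : value > high ? 'high' : 'normal';
  }

  // Flag out-of-range rows for the "Out of Range Values" sheet and the advice step.
  return { json: { ...item.json, status, out_of_range: status === 'low' || status === 'high' } };
});
```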
by Margo Rey
AI-Powered Email Generation with MadKudu sent via Outreach.io

This workflow researches prospects using MadKudu MCP, generates personalized emails with OpenAI, and syncs them to Outreach with automatic sequence enrollment. It's for SDRs and sales teams who want to scale personalized outreach by automating research and email generation while maintaining quality.

✨ Who it's for

- Sales Development Representatives (SDRs) doing cold outreach
- Business Development teams needing personalized emails at scale
- RevOps teams wanting to automate prospect research workflows
- Sales teams using Outreach for email sequences

🔧 How it works

1. Input Email & Research: enter the prospect email via the chat trigger. The workflow extracts the email and generates a comprehensive account brief using MadKudu MCP account-brief-instructions (a small extraction sketch follows after the customization notes).
2. Deep Research & Email Generation: the AI Agent performs 6 research steps using MadKudu MCP tools:
   - Account details (hiring, partnerships, tech stack, sales motion, risk)
   - Top users in the account (for name-dropping opportunities)
   - Contact details (role, persona, engagement)
   - Contact web search (personal interests, activities)
   - Contact picture web search (LinkedIn profile insights)
   - Company value prop research
   The AI then generates 5 different email angles and selects the best one based on relevance.
3. Outreach Integration: checks if the prospect exists in Outreach by email. If the prospect exists, it updates a custom field (custom49) with the generated email; if new, it creates a new prospect with the email in that custom field. It then enrolls the prospect in the specified email sequence (ID 781) using the mailbox (ID 51), waits 30 seconds, and verifies successful enrollment.

📋 How to set up

1. Set your OpenAI credentials (required for AI research and email generation).
2. Create an n8n Variable named madkudu_api_key to store your MadKudu API key. It is used by the MadKudu MCP tool to access account research capabilities.
3. Create an n8n Variable named my_company_domain to store your company domain. It is used for context in email generation and value prop research.
4. Create an OAuth2 API credential to connect your Outreach account, used to create/update prospects and enroll them in sequences.
5. Configure Outreach settings:
   - Update the Outreach Mailbox ID (currently set to 51) in the "Configure Outreach Settings" node.
   - Update the Outreach Sequence ID (currently set to 781) in the same node.
   - Adjust the custom field name if you use a field other than custom49.

🔑 How to connect Outreach

1. In n8n, add a new OAuth2 API credential and copy the callback URL.
2. Go to the Outreach developer portal.
3. Click "Add" to create a new app.
4. In Feature selection, add Outreach API (OAuth).
5. In API Access (OAuth), set the redirect URI to the n8n callback URL.
6. Select the following scopes: accounts.read, accounts.write, prospects.read, prospects.write, sequences.read. Save in Outreach.
7. Enter the Outreach Application ID into the n8n Client ID and the Outreach Application Secret into the n8n Client Secret.
8. Save in n8n and connect your Outreach account via OAuth.

✅ Requirements

- MadKudu account with access to an API key
- Outreach Admin permissions to create an app
- OpenAI API key

🛠 How to customize the workflow

- Change the research steps: modify the AI Agent prompt to adjust the 6 research steps or add additional MadKudu MCP tools.
- Update the Outreach configuration: change the Mailbox ID (51) and Sequence ID (781) in the "Configure Outreach Settings" node, and update the custom field mapping if using a field other than custom49.
- Modify email generation: adjust the prompt guidelines, tone, or angle priorities in the "AI Email Generator" node.
- Change the trigger: swap the chat trigger for a Schedule or Webhook trigger, or integrate with your CRM to automate prospect input.
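As a small illustration of the email-extraction step mentioned under "How it works", a Code node after the chat trigger could look like this. The `chatInput` field matches n8n's Chat Trigger output; the `prospect_email` output name is an assumption.

```javascript
// Minimal sketch: pull the first email address out of the chat message
// before the research agent runs.
const message = $input.first().json.chatInput || '';
const match = message.match(/[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/);

if (!match) {
  throw new Error('No email address found in the chat message');
}

return [{ json: { prospect_email: match[0].toLowerCase() } }];
```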
by Oussama
This n8n template creates an intelligent expense tracking system 🤖 that processes text, voice, and receipt images through Telegram. The assistant automatically categorizes expenses, handles currency conversions 🌍, and maintains financial records in Google Sheets while providing smart spending insights 💡.

Use Cases

- 🗣️ Personal expense tracking via Telegram chat
- 🧾 Receipt scanning and data extraction
- 💱 Multi-currency expense management
- 📂 Automated financial categorization
- 🎙️ Voice-to-expense logging
- 📊 Daily/weekly/monthly spending analysis

How it works

1. Multi-Input Processing: a Telegram trigger captures text messages, voice notes, and receipt images.
2. Content Analysis: a Switch node routes different input types (text, audio, images) to the appropriate processors.
3. Voice Processing: ElevenLabs converts voice messages to text for expense extraction.
4. Receipt OCR: Google Gemini analyzes receipt images to extract amounts and descriptions.
5. Expense Classification: an LLM determines whether the input is an expense or a general query.
6. Expense Parsing: for multiple expenses, the AI splits and normalizes each item.
7. Currency Conversion: an exchange rate API converts foreign currencies to USD (a sketch of the conversion and alert logic appears at the end of this description).
8. Smart Categorization: the AI agent assigns expenses to predefined categories with emojis.
9. Data Storage: Google Sheets stores all expense records with automatic totals.
10. Intelligent Responses: the agent provides spending summaries, alerts, and financial insights.

Requirements

- 🌐 Telegram Bot API access
- 🤖 OpenAI, Gemini, or any other AI model
- 🗣️ ElevenLabs API for voice processing
- 📝 Google Sheets API access
- 💹 Exchange rate API access

Good to know

- ⚠️ Daily spending alerts trigger when expenses exceed 100 USD.
- 🏷️ Supports 12 predefined expense categories with emoji indicators.
- 🔄 Automatic currency detection and conversion to USD.
- 🎤 Voice messages are processed through speech-to-text.
- 📸 Receipt images are analyzed using computer vision.

Customizing this workflow

- ✏️ Modify expense categories in the system prompt.
- 📈 Adjust spending alert thresholds.
- 💵 Change the base currency from USD to your preferred currency.
- ✅ Add additional expense validation rules.
- 🔗 Integrate with other financial platforms.
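To sketch the conversion and alert logic, the example below assumes each parsed expense carries `amount` and `currency`, and that a prior exchange-rate lookup produced a `usd_rates` map (rate to USD per currency). All field names and the threshold source are assumptions for illustration.

```javascript
// Minimal sketch: convert parsed expenses to USD and flag the 100 USD daily alert.
const DAILY_ALERT_THRESHOLD_USD = 100;
const rates = $input.first().json.usd_rates || {}; // e.g., { EUR: 1.08, GBP: 1.27, USD: 1 }

let dailyTotalUsd = 0;
const expenses = $input.all().map((item) => {
  const { amount, currency = 'USD' } = item.json;
  const rate = rates[currency] ?? 1;        // fall back to 1 if the currency is unknown
  const amountUsd = Number((amount * rate).toFixed(2));
  dailyTotalUsd += amountUsd;
  return { ...item.json, amount_usd: amountUsd };
});

return [{
  json: {
    expenses,
    daily_total_usd: Number(dailyTotalUsd.toFixed(2)),
    alert: dailyTotalUsd > DAILY_ALERT_THRESHOLD_USD,
  },
}];
```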