by Olaf Titel
Setup & Instructions — fluidX: Create Session, Analyze & Notify

Goal: This workflow demonstrates the full fluidX THE EYE integration — starting a live session, inviting both the customer (via SMS) and the service agent (via email), and then accessing the media (photos and videos) created during the session. Captured images are automatically analyzed with AI, uploaded to external storage (such as Google Drive), and a media summary for the session is generated at the end.

The agent receives an email with a link to join the live session. The customer receives an SMS with a link to start sharing their camera. Once both are connected, the agent can view the live feed, and the system automatically stores uploaded images and videos in Google Drive. When the session ends, the workflow collects all media and creates a complete AI-powered session summary (stored and updated in Google Drive). Below is an example screenshot from the customer's phone.

Prerequisites
**Developer account:** https://live.fluidx.digital (activate the **TEST plan**, €0)
**API docs (Swagger):** fluidX.digital API

🔐 Required Credentials
1️⃣ fluidX API key (HTTP Header Auth)
• Credential name in n8n: fluidx API key
• Header name: x-api-key
• Header value: YOUR_API_KEY
2️⃣ SMTP account (for outbound email)
• Credential name in n8n: SMTP account
• Configure host, port, username, and password according to your provider
• Enable TLS/SSL as required
3️⃣ Google Drive account
• Used to store photos, videos, and automatically update the session summary files.
4️⃣ OpenAI API (for AI analysis & summary)
• Used in the Analyze Images (AI) and Generate Summary parts of the workflow.
• Credential type: OpenAI
• Credential name (suggested): OpenAI account
• API Key: your OpenAI API key
• Model: e.g. gpt-4.1, gpt-4o, or similar (choose in the OpenAI node settings)

⚙️ Configuration (in the "Set Config" node)
BASE_URL: https://live.fluidx.digital
company / project / billingcode / sku: adjust as needed
emailAgent: set before running (empty in template)
phoneNumberUser: set before running (empty in template)

Flow Overview
Form Trigger → Create Session → Set Session Vars → Send SMS (User) → Send Email (Agent) → Monitor Media → Analyze Images (AI) → Upload Files to Google Drive → Generate Summary → Update Summary File

The workflow starts automatically when a form submission is received. Users enter the customer's phone number and the agent's email, and the system creates a new fluidX THE EYE session (a hedged sketch of this request follows this entry). As media is uploaded during the session, the workflow automatically retrieves, stores, analyzes, and summarizes it — providing a complete end-to-end automation example for remote inspection, support, or field-service use cases.

Notes
Do not store real personal data inside the template.
Manage API keys and secrets via n8n Credentials or environment variables.
Log out of https://live.fluidx.digital in the agent's browser before testing, to ensure a clean invite flow and session creation.
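For readers who want to see what the "Create Session" step boils down to, here is a minimal TypeScript sketch. Only the x-api-key header, the BASE_URL, and the config field names come from the template above; the endpoint path, payload shape, and response handling are assumptions, so check the fluidX Swagger docs before reusing any of it.

```typescript
// Minimal sketch of the "Create Session" call. The /api/v1/sessions route is a
// placeholder; take the real path and payload from the fluidX Swagger docs.
const BASE_URL = "https://live.fluidx.digital";
const API_KEY = process.env.FLUIDX_API_KEY ?? "YOUR_API_KEY";

interface SessionRequest {
  company: string;
  project: string;
  billingcode: string;
  sku: string;
  emailAgent: string;      // agent invited by email
  phoneNumberUser: string; // customer invited by SMS
}

async function createSession(config: SessionRequest): Promise<unknown> {
  const res = await fetch(`${BASE_URL}/api/v1/sessions`, {
    method: "POST",
    headers: {
      "x-api-key": API_KEY, // HTTP Header Auth credential from the template
      "Content-Type": "application/json",
    },
    body: JSON.stringify(config),
  });
  if (!res.ok) throw new Error(`fluidX returned ${res.status}`);
  return res.json();
}

// Example call with the values normally supplied via the "Set Config" node.
createSession({
  company: "acme",
  project: "demo",
  billingcode: "default",
  sku: "the-eye",
  emailAgent: "agent@example.com",
  phoneNumberUser: "+491700000000",
}).then(console.log).catch(console.error);
```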
by Madame AI
Generate visual resumes from Telegram inputs using Google Gemini

This workflow transforms text-based resume data into visually stunning images by leveraging Google Gemini's reasoning and vision capabilities. It autonomously analyzes the candidate's profile, selects an appropriate design template based on their industry, and renders a high-quality resume image directly in Telegram.

Target Audience
Job seekers, career coaches, resume writers, and recruitment agencies looking to automate design generation.

How it works
Classify Input: The workflow starts with a Telegram trigger. A Google Gemini agent analyzes the incoming message to determine if it is a casual chat or a resume generation request.
Fetch Context: If it is a resume request, a BrowserAct node triggers a workflow (using the "AI Resume Replicant" template) to fetch necessary external context or data.
Ingest Designs (Optional): If a reference image is provided, CloudConvert standardizes the file, and Google Gemini Vision reverse-engineers the layout and style, saving the "Visual DNA" to Google Sheets.
Draft Blueprint: The "Resume Writer" AI agent selects a stored design template that matches the candidate's industry (e.g., "Corporate" for Finance, "Creative" for Design) and maps the text content to the layout.
Generate Prompt: A "Visualizer" AI agent converts the structured blueprint into a highly detailed natural language prompt for image generation (a hedged sketch of this intermediate data follows this entry).
Render & Deliver: Google Gemini generates the final resume image, which is then sent back to the user via Telegram.

How to set up
Configure Credentials: Connect your Telegram, Google Gemini, Google Sheets, CloudConvert, and BrowserAct accounts in n8n.
Prepare BrowserAct: Ensure the AI Resume Replicant template is saved in your BrowserAct account.
Setup Google Sheet: Create a new Google Sheet with the required header (listed below).
Connect Sheet: Open the Google Sheets nodes (Clear, Get, Append) and select your new spreadsheet.
Configure Telegram: Ensure your Telegram Bot is connected to the Trigger and Message nodes.

Google Sheet Headers
To use this workflow, create a Google Sheet with the following header: Resume Details

Requirements
**BrowserAct** account (Template: **AI Resume Replicant**)
**Google Gemini** account
**Telegram** account (Bot Token)
**CloudConvert** account
**Google Sheets** account

How to customize the workflow
Refine Design Logic: Modify the system prompt in the "Resume Writer" agent to change how the AI matches industries to design styles (e.g., force specific colors for specific roles).
Change Output Format: Replace the Telegram response node with a Google Drive node to save the generated images as PDF or PNG files instead of sending them.
Switch Image Model: Update the "Generate an image" node to use a different image generation model if preferred (e.g., OpenAI DALL-E).

Need Help?
How to Find Your BrowserAct API Key & Workflow ID
How to Connect n8n to BrowserAct
How to Use & Customize BrowserAct Templates

Workflow Guidance and Showcase Video
I Built a Resume Bot that CLONES Any Template! 🤖 (BrowserAct + n8n + Gemini Tutorial)
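To make the "Draft Blueprint" and "Generate Prompt" steps more concrete, here is a hedged TypeScript sketch of what the intermediate data might look like. The ResumeBlueprint type, the industry keywords, and the prompt wording are illustrative assumptions, not the template's actual schema or prompts.

```typescript
// Hypothetical shape of the blueprint the "Resume Writer" agent hands to the
// "Visualizer" agent; field names are illustrative only.
interface ResumeBlueprint {
  candidateName: string;
  industry: string; // e.g. "Finance", "Design"
  designStyle: "Corporate" | "Creative" | "Minimal";
  sections: { title: string; bullets: string[] }[];
}

// Sketch of the industry-to-style matching described in the "Draft Blueprint" step.
function pickDesignStyle(industry: string): ResumeBlueprint["designStyle"] {
  const corporate = ["finance", "law", "consulting"];
  const creative = ["design", "marketing", "media"];
  const key = industry.toLowerCase();
  if (corporate.some((i) => key.includes(i))) return "Corporate";
  if (creative.some((i) => key.includes(i))) return "Creative";
  return "Minimal";
}

// The Visualizer step then flattens the blueprint into one detailed image prompt.
function toImagePrompt(bp: ResumeBlueprint): string {
  const body = bp.sections
    .map((s) => `${s.title}: ${s.bullets.join("; ")}`)
    .join(" | ");
  return `A ${bp.designStyle.toLowerCase()} one-page resume for ${bp.candidateName} (${bp.industry}). Content: ${body}`;
}

console.log(
  toImagePrompt({
    candidateName: "Jane Doe",
    industry: "Finance",
    designStyle: pickDesignStyle("Finance"),
    sections: [{ title: "Experience", bullets: ["Analyst at ExampleCorp, 2020-2024"] }],
  })
);
```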
by Dr. Firas
💥 Generate product images with NanoBanana Pro to Veo videos and Blotato

Who is this for?
This workflow is designed for:
Content creators and marketers
E-commerce and product-based businesses
Agencies producing social media visuals and videos
Automation builders looking for AI-powered creative pipelines
It is ideal for anyone who wants to automate product image and video creation using AI and publish content without manual work.

What problem is this workflow solving? / Use case
Creating product visuals and marketing videos usually requires multiple tools, manual prompt writing, and repetitive steps. This workflow solves:
Manual image and video creation
Inconsistent visual quality across assets
Time-consuming prompt iteration
Manual video publishing to social platforms
The workflow automates the entire process from image generation to video publishing using AI.

What this workflow does
This workflow provides an end-to-end automation pipeline:
Generates high-quality product images using NanoBanana Pro
Applies Contact Sheet Prompting to explore multiple visual variations
Converts selected images into short marketing videos using Veo 3.1
Automatically publishes the final videos via BLOTATO
The result is a fully automated creative workflow that turns AI prompts into ready-to-publish video content.

Setup
To use this workflow, you need the following services and credentials (a hedged sketch of the fal.ai calls follows this entry):
**OpenAI API**: used for image analysis and prompt generation
**NanoBanana Pro (fal.ai)**: product image generation. API: https://fal.ai/models/fal-ai/nano-banana-pro/edit/api
**Veo 3.1 (fal.ai)**: video generation. API: https://fal.ai/models/fal-ai/veo3.1/first-last-frame-to-video
**Blotato**: video publishing to social platforms. Sign up at BLOTATO.
All credentials must be added in n8n before running the workflow.

How to customize this workflow to your needs
You can easily adapt this workflow by:
Modifying AI prompts to match your brand style
Adjusting image composition and realism parameters in NanoBanana Pro
Changing video motion, pacing, and aspect ratio in Veo 3.1
Selecting different social platforms or publishing rules in Blotato
Replacing or extending individual steps while keeping the same architecture
The workflow is modular and can be reused for multiple products or campaigns.

🎥 Watch This Tutorial

👋 Need help or want to customize this?
📩 Contact: LinkedIn
📺 YouTube: @DRFIRASS
🚀 Workshops: Mes Ateliers n8n
📄 Documentation: Notion Guide
Need help customizing? Contact me for consulting and support: LinkedIn / YouTube / 🚀 Mes Ateliers n8n
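As a reference for the HTTP Request nodes that call fal.ai, here is a hedged TypeScript sketch of the submit-then-poll pattern. The queue URL layout and the `Authorization: Key ...` header follow fal.ai's queue API conventions; the payload fields (prompt, image_urls) and response fields are assumptions, so confirm them against the model pages linked in the Setup section.

```typescript
// Sketch of submitting a fal.ai job and polling until it completes, roughly
// what the image/video HTTP Request and Wait nodes do in this workflow.
const FAL_KEY = process.env.FAL_KEY ?? "YOUR_FAL_KEY";

async function submitFalJob(model: string, payload: Record<string, unknown>) {
  const res = await fetch(`https://queue.fal.run/${model}`, {
    method: "POST",
    headers: { Authorization: `Key ${FAL_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`fal.ai returned ${res.status}`);
  return res.json() as Promise<{ request_id: string }>;
}

async function pollFalResult(model: string, requestId: string) {
  // Poll the queued job until it finishes, then fetch the result payload.
  for (;;) {
    const status = await fetch(
      `https://queue.fal.run/${model}/requests/${requestId}/status`,
      { headers: { Authorization: `Key ${FAL_KEY}` } }
    ).then((r) => r.json());
    if (status.status === "COMPLETED") break;
    await new Promise((r) => setTimeout(r, 5000)); // same idea as an n8n Wait node
  }
  return fetch(`https://queue.fal.run/${model}/requests/${requestId}`, {
    headers: { Authorization: `Key ${FAL_KEY}` },
  }).then((r) => r.json());
}

// Example: generate a product image edit; the output URL can then feed Veo 3.1.
submitFalJob("fal-ai/nano-banana-pro/edit", {
  prompt: "Studio shot of the product on a marble surface, soft daylight",
  image_urls: ["https://example.com/product.jpg"], // assumed field name
})
  .then(({ request_id }) => pollFalResult("fal-ai/nano-banana-pro/edit", request_id))
  .then(console.log)
  .catch(console.error);
```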
by Automate With Marc
🎨 Instagram Carousel & Caption Generator on Autopilot (GPT-5 + Nano Banana + Blotato + Google Sheets)

Description
Watch the full step-by-step tutorial on YouTube: https://youtu.be/id22R7iBTjo
Disclaimer (self-hosted requirement): This template assumes you have valid API credentials for OpenAI, Wavespeed/Nano Banana, Blotato, and Google. If using n8n Self-Hosted, ensure HTTPS access and credentials are set in your instance.

How It Works
Chat Trigger – Receive a topic/idea (e.g. "5 best podcast tips").
Image Prompt Generator (GPT-5) – Creates 5 prompts using the "Hook → Problem → Insight → Solution → CTA" framework.
Structured Output Parser – Formats output into a JSON array (a sketch of the expected shape follows this entry).
Generate Images (Nano Banana) – Converts prompts into high-quality visuals.
Wait for Render – Ensures image generation completes.
Fetch Rendered Image URLs – Retrieves image links.
Upload to Blotato – Hosts and prepares images for posting.
Collect Media URLs – Gathers all uploaded image URLs.
Log to Google Sheets – Stores image URLs + timestamps for tracking.
Caption Generator (GPT-5) – Writes an SEO-friendly caption.
Merge Caption + Images – Combines data.
Post Carousel (Blotato) – Publishes directly to Instagram.

Step-by-Step Setup Instructions
1) Prerequisites
n8n (Cloud or Self-Hosted)
OpenAI API Key (GPT-5)
Wavespeed API Key (Nano Banana)
Blotato API credentials (connected to Instagram)
Google Sheets OAuth credentials

2) Add Credentials in n8n
OpenAI: Settings → Credentials → Add "OpenAI API"
Wavespeed: HTTP Header Auth (e.g. Authorization: Bearer <API_KEY>)
Blotato: Add "Blotato API"
Google Sheets: Add "Google Sheets OAuth2 API"

3) Configure & Test
Run with an idea like "Top 5 design hacks".
Check generated images, caption, and logged sheet entry.
Confirm posting works via Blotato.

4) Optional
Add a Schedule Trigger for weekly automation.
Insert a Slack approval loop before posting.

Customization Guide
✏️ Change design style: Modify adjectives in the Image Prompt Generator.
📑 Adjust number of slides: Change the Split node loop count.
💬 Tone of captions: Edit the Caption Generator's system prompt.
⏱️ Adjust render wait time: If image generation takes longer, increase the Wait node duration from 30 seconds to 60 seconds or more.
🗂️ Log extra data: Add columns in Google Sheets for campaign or topic.
🔁 Swap posting tool: Replace Blotato with your scheduler or email node.

Requirements
OpenAI API key (GPT-5 or compatible)
Wavespeed API key (Nano Banana)
Blotato API credentials
Google Sheets OAuth credentials
n8n account (Cloud or Self-Hosted)
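To illustrate what the Structured Output Parser step is guarding, here is a small TypeScript sketch of a five-slide prompt array and the caption/media merge. Field names such as `prompt` and `mediaUrls` are assumptions for illustration, not the template's exact schema.

```typescript
// The prompt generator is expected to return exactly five slides following the
// Hook → Problem → Insight → Solution → CTA framework.
type SlideRole = "Hook" | "Problem" | "Insight" | "Solution" | "CTA";

interface SlidePrompt {
  role: SlideRole;
  prompt: string; // text passed to the Nano Banana image model
}

const ROLES: SlideRole[] = ["Hook", "Problem", "Insight", "Solution", "CTA"];

// Validate the model output before it reaches image generation, roughly what
// the output parser node enforces.
function parseSlides(raw: string): SlidePrompt[] {
  const data = JSON.parse(raw);
  if (!Array.isArray(data) || data.length !== 5) {
    throw new Error("Expected exactly 5 slide prompts");
  }
  return data.map((item, i) => ({ role: ROLES[i], prompt: String(item.prompt ?? item) }));
}

// After rendering, the caption and hosted image URLs are merged for posting.
function buildCarouselPayload(caption: string, mediaUrls: string[]) {
  return { caption, mediaUrls };
}

const sample = JSON.stringify([
  { prompt: "Bold title card: 5 podcast tips" },
  { prompt: "Frustrated creator at a messy desk" },
  { prompt: "Clean waveform graphic with one key insight" },
  { prompt: "Checklist of the 5 tips" },
  { prompt: "Follow for more, handle on a gradient background" },
]);
console.log(parseSlides(sample));
console.log(
  buildCarouselPayload("5 podcast tips you can use today", [
    "https://example.com/slide-1.png", // URLs returned by the hosting step
  ])
);
```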
by Growth AI
N8N UGC Video Generator - Setup Instructions

Transform Product Images into Professional UGC Videos with AI
This powerful n8n workflow automatically converts product images into professional User-Generated Content (UGC) videos using cutting-edge AI technologies including Gemini 2.5 Flash, Claude 4 Sonnet, and VEO3 Fast.

Who's it for
**Content creators** looking to scale video production
**E-commerce businesses** needing authentic product videos
**Marketing agencies** creating UGC campaigns for clients
**Social media managers** requiring quick video content

How it works
The workflow operates in 4 distinct phases:
Phase 0: Setup - Configure all required API credentials and services
Phase 1: Image Enhancement - AI analyzes and optimizes your product image
Phase 2: Script Generation - Creates authentic dialogue scripts based on your input
Phase 3: Video Production - Generates and merges professional video segments

Requirements
Essential Services & APIs
**Telegram Bot Token** (create via @BotFather)
**OpenRouter API** with Gemini 2.5 Flash access
**Anthropic API** for Claude 4 Sonnet
**KIE.AI Account** with VEO3 Fast access
**N8N Instance** (cloud or self-hosted)

Technical Prerequisites
Basic understanding of n8n workflows
API key management experience
Telegram bot creation knowledge

How to set up
Step 1: Service Configuration
Create Telegram Bot: message @BotFather on Telegram, use the /newbot command and follow the instructions, then save the bot token for later use.
OpenRouter Setup: sign up at openrouter.ai, purchase credits for Gemini 2.5 Flash access, then generate and save an API key.
Anthropic Configuration: create an account at console.anthropic.com, add credits to your account, then generate a Claude API key.
KIE.AI Access: register at kie.ai, subscribe to the VEO3 Fast plan, then obtain a bearer token.

Step 2: N8N Credential Setup
Configure these credentials in your n8n instance:
Telegram API (credential name: telegramApi, bot token: your Telegram bot token)
OpenRouter API (credential name: openRouterApi, API key: your OpenRouter key)
Anthropic API (credential name: anthropicApi, API key: your Anthropic key)
HTTP Bearer Auth (credential name: httpBearerAuth, token: your KIE.AI bearer token)

Step 3: Workflow Configuration
Import the Workflow: copy the provided JSON workflow and import it into your n8n instance.
Update Telegram Token: locate the "Edit Fields" node and replace "Your Telegram Token" with your actual bot token.
Configure Webhook URLs: ensure all Telegram nodes have proper webhook configurations and test webhook connectivity.

Step 4: Testing & Validation
Test Individual Nodes: verify each API connection, check credential configurations, and confirm node responses.
End-to-End Testing: send a test image to your Telegram bot, follow the complete workflow process, and verify the final video output.

How to customize the workflow
Modify Image Enhancement Prompts: edit the HTTP Request node for Gemini, adjust the prompt text to match your style preferences, and test different aspect ratios (current: 1:1 square format).
Customize Script Generation: modify the Basic LLM Chain node prompt, adjust the video segment duration (current: 7-8 seconds each), and change the dialogue style and tone requirements.
Video Generation Settings: update the VEO3 API parameters in the HTTP Request1 node, modify the aspect ratio (current: 16:9), and adjust model settings and seeds for consistency.
Output Customization: change the final video format in the MediaFX node, modify the Telegram message templates, and add additional processing steps before delivery.

Workflow Operation
Phase 1: Image Reception and Enhancement
User sends a product image via Telegram
System prompts for enhancement instructions
Gemini AI analyzes and optimizes the image
Enhanced square-format image is returned

Phase 2: Analysis and Script Creation
System requests a dialogue concept from the user
AI analyzes image details and environment
Claude generates a realistic 2-segment script
Scripts respect the physical constraints of the original image

Phase 3: Video Generation
Two separate videos are generated using VEO3
System monitors the generation status (a bounded polling sketch follows this entry)
Videos are merged into a single flowing sequence
Final video is delivered via Telegram

Troubleshooting
Common Issues
**API Rate Limits**: implement delays between requests
**Webhook Failures**: verify URL configurations and SSL certificates
**Video Generation Timeouts**: increase the wait node duration
**Credential Errors**: double-check all API keys and permissions

Error Handling
The workflow includes automatic error detection:
Failed video generation triggers an error message
Status checking prevents infinite loops
Alternative outputs for different scenarios

Advanced Features
Batch Processing: modify the trigger to handle multiple images, add queue management for high-volume usage, and implement user session tracking.
Custom Branding: add watermarks or logos to generated videos, customize color schemes and styling, and include brand-specific dialogue templates.
Analytics Integration: track usage metrics and success rates, monitor API costs and optimization opportunities, and implement user behavior analytics.

Cost Optimization
API Usage Management: monitor token consumption across services, implement caching for repeated requests, and use lower-cost models for testing phases.
Efficiency Improvements: optimize image sizes before processing, implement smart retry mechanisms, and use batch processing where possible.

This workflow transforms static product images into engaging, professional UGC videos automatically, saving hours of manual video creation while maintaining high-quality output that is perfect for social media platforms.
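The "monitors generation status" step in Phase 3 is essentially a bounded polling loop. The sketch below shows the idea in TypeScript; the KIE.AI URL, route, and response fields are placeholders (only the bearer-token authentication style comes from the setup instructions), so substitute the real endpoints from your KIE.AI documentation.

```typescript
// Bounded status polling for a VEO3 render task. Endpoint and fields are
// placeholders; only the Bearer auth style mirrors the credential setup above.
const KIE_TOKEN = process.env.KIE_BEARER_TOKEN ?? "YOUR_BEARER_TOKEN";

async function waitForVideo(taskId: string, maxAttempts = 30): Promise<string> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    // Placeholder status route; use the path from your KIE.AI docs.
    const res = await fetch(`https://api.kie.ai/v1/tasks/${taskId}`, {
      headers: { Authorization: `Bearer ${KIE_TOKEN}` },
    });
    const body = await res.json();
    if (body.status === "completed") return body.videoUrl as string; // assumed fields
    if (body.status === "failed") throw new Error("Video generation failed");
    await new Promise((r) => setTimeout(r, 10_000)); // mirrors the Wait node
  }
  // Capping the attempts is what prevents the infinite loops mentioned in Troubleshooting.
  throw new Error("Timed out waiting for VEO3 render");
}

waitForVideo("example-task-id")
  .then((url) => console.log("Video ready:", url))
  .catch(console.error);
```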
by Sk developer
🚀 LinkedIn Video to MP4 Automation with Google Drive & Sheets | RapidAPI Integration

This n8n workflow automatically converts LinkedIn video URLs into downloadable MP4 files using the LinkedIn Video Downloader API, uploads them to Google Drive with public access, and logs both the original URL and the Google Drive link into Google Sheets. It leverages the LinkedIn Video Downloader service for fast and secure video extraction. A hedged sketch of the downloader request follows this entry.

📝 Node Explanations (Single-Line)
1️⃣ On form submission → Captures the LinkedIn video URL from the user via a web form.
2️⃣ HTTP Request → Calls LinkedIn Video Downloader to fetch downloadable MP4 links.
3️⃣ If → Checks for API errors and routes the workflow accordingly.
4️⃣ Download mp4 → Downloads the MP4 video file from the API response URL.
5️⃣ Upload To Google Drive → Uploads the downloaded MP4 file to Google Drive.
6️⃣ Google Drive Set Permission → Makes the uploaded file publicly accessible.
7️⃣ Google Sheets → Logs successful conversions with the LinkedIn URL and sharable Drive link.
8️⃣ Wait → Delays execution before logging failed attempts.
9️⃣ Google Sheets Append Row → Logs failed video downloads with an N/A Drive link.

📄 Google Sheets Columns
**URL** → Original LinkedIn video URL entered in the form.
**Drive_URL** → Publicly sharable Google Drive link to the converted MP4 file. For failed downloads, Drive_URL will display N/A.

💡 Use Case
Automate LinkedIn video downloading and sharing using LinkedIn Video Downloader for social media managers, marketers, and content creators without manual file handling.

✅ Benefits
**Time-saving** (auto-download & upload), **centralized tracking** in Sheets, **easy sharing** via Drive links, and **error logging** for failed downloads—all powered by the **RapidAPI LinkedIn Video Downloader**.
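For orientation, here is a hedged TypeScript sketch of what the HTTP Request and Download mp4 nodes do. The X-RapidAPI-Key and X-RapidAPI-Host headers are the standard RapidAPI convention; the host name, path, query parameter, and response field (mp4Url) are placeholders you should replace with the values shown on the API's RapidAPI page.

```typescript
// Sketch of calling a RapidAPI-hosted LinkedIn downloader and fetching the MP4.
const RAPIDAPI_KEY = process.env.RAPIDAPI_KEY ?? "YOUR_RAPIDAPI_KEY";
const RAPIDAPI_HOST = "linkedin-video-downloader.p.rapidapi.com"; // placeholder host

async function fetchMp4Link(linkedinUrl: string): Promise<string> {
  const res = await fetch(
    `https://${RAPIDAPI_HOST}/download?url=${encodeURIComponent(linkedinUrl)}`,
    {
      headers: {
        "X-RapidAPI-Key": RAPIDAPI_KEY,
        "X-RapidAPI-Host": RAPIDAPI_HOST,
      },
    }
  );
  const body = await res.json();
  if (body.error) throw new Error(body.error); // routed to the "failed" branch in the flow
  return body.mp4Url as string;                // assumed response field
}

// Download the MP4 bytes; in the workflow this binary is handed to Google Drive.
async function downloadMp4(url: string): Promise<ArrayBuffer> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Download failed: ${res.status}`);
  return res.arrayBuffer();
}

fetchMp4Link("https://www.linkedin.com/posts/example-video")
  .then(downloadMp4)
  .then((buf) => console.log(`Downloaded ${buf.byteLength} bytes`))
  .catch(console.error);
```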
by NanaB
Description
This n8n workflow automates the entire process of creating and publishing AI-generated videos, triggered by a simple message from a Telegram bot (YTAdmin). It transforms a text prompt into a structured video with scenes, visuals, and voiceover, stores assets in MongoDB, renders the final output using Creatomate, and uploads the video to YouTube. Throughout the process, YTAdmin receives real-time updates on the workflow's progress. This is ideal for content creators, marketers, or businesses looking to scale video production using automation and AI.

You can see a video demonstrating this template in action here: https://www.youtube.com/watch?v=EjI-ChpJ4xA&t=200s

How it Works
Trigger: Message from YTAdmin (Telegram Bot). The flow starts when YTAdmin sends a content prompt.
Generate Structured Content: A Mistral language model processes the input and outputs structured content, typically broken into scenes.
Split & Process Content into Scenes: The content is split into categorized parts for scene generation.
Generate Media Assets: For each scene, images are generated using OpenAI's image model and voiceovers are created using OpenAI's text-to-speech. Audio files are encoded and stored in MongoDB.
Scene Composition: Assets are grouped into coherent scenes.
Render with Creatomate: A complete payload is generated and sent to the Creatomate rendering API to produce the video. Progress messages are sent to YTAdmin, and the flow pauses briefly to avoid rate limits.
Render Callback: Once Creatomate completes rendering, it sends a callback to the flow. If the render fails, an error message is sent to YTAdmin. If the render succeeds, the flow proceeds to post-processing.
Generate Title & Description: A second Mistral prompt generates a compelling title and description for YouTube.
Upload to YouTube: The rendered video is retrieved from Creatomate and uploaded to YouTube with the AI-generated metadata.
Final Update: A success message is sent to YTAdmin, confirming upload completion.

Set Up Steps (Approx. 10–15 Minutes)
Step 1: Set Up the YTAdmin Bot
Create a Telegram bot via BotFather and get your API token. Add this token to n8n's Telegram credentials and link it to the "Receive Message from YTAdmin" trigger.
Step 2: Connect Your AI Providers
Mistral: add your API key under the HTTP Request or AI Model nodes.
OpenAI: create an account at platform.openai.com and obtain an API key. Use it for both image generation and voiceover synthesis.
Step 3: Configure Audio File Storage with MongoDB via a Custom API
The API receives the Base64-encoded audio data in the request body, connects to the configured MongoDB instance (connection details are managed securely within the API), and uses the MongoDB driver and GridFS to store the audio data. It returns the unique _id (ObjectId) of the stored file in GridFS as a response. This _id is crucial, as it is used in subsequent steps to generate the download URL for the audio file. My API code can be found here for reference: https://github.com/nanabrownsnr/YTAutomation.git (a minimal sketch of such an endpoint follows this entry).
Step 4: Set Up Creatomate
Create a Creatomate account, define your video templates, and retrieve your API key. Configure the HTTP Request node to match your Creatomate payload requirements.
Step 5: Connect YouTube
In n8n, add OAuth2 credentials for your YouTube account. Make sure your Google Cloud project has the YouTube Data API enabled.
Step 6: Deploy and Test
Send a message to YTAdmin and monitor the flow in n8n. Verify that content is generated, media is created, and the final video is rendered and uploaded.

Customization Options
Change the AI Prompts: Modify the generation prompts to adjust tone, voice, or content type (e.g., news recaps, product videos, educational summaries).
Switch Messaging Platform: Replace Telegram (YTAdmin) with Slack, Discord, or WhatsApp by swapping out the trigger and response nodes.
Add Subtitles or Effects: Integrate Whisper or another speech-to-text tool to generate subtitles, and add overlay or transition effects in the Creatomate video payload.
Use Local File Storage Instead of MongoDB: Swap out the MongoDB upload HTTP nodes for filesystem or S3-compatible storage.
Repurpose for Other Platforms: Swap the YouTube upload for TikTok, Instagram, or Vimeo endpoints for broader publishing.

Need Help or Want to Customize This Workflow?
If you'd like assistance setting this up or adapting it for a different use case, feel free to reach out to me at nanabrownsnr@gmail.com. I'm happy to help!
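As a rough companion to Step 3, here is a minimal sketch of what a base64-to-GridFS endpoint can look like, assuming Express and the official MongoDB Node.js driver. The route, request fields, and database name are illustrative; the author's actual implementation lives in the linked GitHub repository.

```typescript
// Minimal sketch: accept base64 audio, store it in GridFS, return the ObjectId.
import express from "express";
import { MongoClient, GridFSBucket } from "mongodb";

const app = express();
app.use(express.json({ limit: "25mb" })); // base64 audio payloads can be large

const client = new MongoClient(process.env.MONGO_URI ?? "mongodb://localhost:27017");

app.post("/audio", async (req, res) => {
  try {
    const { audioBase64, filename = "voiceover.mp3" } = req.body; // assumed fields
    const buffer = Buffer.from(audioBase64, "base64");

    await client.connect();
    const bucket = new GridFSBucket(client.db("ytautomation"));

    // Stream the decoded audio into GridFS; the returned id is later used by the
    // workflow to build a download URL for Creatomate.
    const upload = bucket.openUploadStream(filename);
    upload.end(buffer, () => res.json({ id: upload.id.toString() }));
  } catch (err) {
    res.status(500).json({ error: (err as Error).message });
  }
});

app.listen(3000, () => console.log("Audio storage API listening on :3000"));
```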
by Dmitry Mikheev
Telegram Rich Output Helper Workflow

Who is this for?
Builders of Telegram chat-bots, AI assistants, or notification services who already run n8n and need to convert long, mixed-media answers from an LLM (or any upstream source) into Telegram-friendly messages.

Prerequisites
A Telegram bot created with @BotFather.
The bot's HTTP API token saved as a Telegram API credential in n8n.
n8n ≥ 1.0 with the built-in Telegram node still installed.
A parent workflow that calls this one via Execute Workflow and passes:
chatId — the destination chat ID (integer).
output — a string that can contain plain text and HTTP links to images, audio, or video.

What the workflow does
Extract Links – A JavaScript Code node scans output, deduplicates URLs, and classifies each by file extension (a sketch of this logic follows this entry).
Link Path – If no media links exist, the text path is used. Otherwise, each link is routed through a Switch node that triggers the correct Telegram call (sendPhoto, sendAudio, sendVideo) so users get inline previews or players.
Text Path – An IF node checks whether the remaining text exceeds Telegram's 1,000-character limit. When it does, a Code node slices the text at line boundaries; SplitInBatches then sends the chunks sequentially so nothing is lost.
All branches converge, keeping the whole exchange inside one execution.

Customisation tips
**Adjust the character limit** – edit the first expression in "If text too long".
**Filter/enrich links** – extend the regex or add MIME checks before dispatch.
**Captions & keyboards** – populate additionalFields in the three "Send back" nodes.
**Throughput vs. order** – tweak the batch size in both **SplitInBatches** nodes.

With this template in place, your users receive the complete message, playable media, and zero manual formatting – all within Telegram's API limits.
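The "Extract Links" and chunking logic described above translates naturally into plain code. Below is an illustrative TypeScript sketch, not the workflow's actual Code node: the bucket names, regex, and 1,000-character limit mirror the description, but treat the details as assumptions.

```typescript
// Pull URLs out of the LLM output, deduplicate them, and classify each by file
// extension so the right Telegram operation (sendPhoto / sendAudio / sendVideo)
// can be chosen downstream.
type MediaKind = "photo" | "audio" | "video" | "other";

function classifyUrl(url: string): MediaKind {
  const ext = url.split("?")[0].split(".").pop()?.toLowerCase() ?? "";
  if (["jpg", "jpeg", "png", "webp"].includes(ext)) return "photo";
  if (["mp3", "ogg", "m4a", "wav"].includes(ext)) return "audio";
  if (["mp4", "mov", "webm"].includes(ext)) return "video";
  return "other";
}

function extractLinks(output: string) {
  const urls = [...new Set(output.match(/https?:\/\/\S+/g) ?? [])];
  const media = urls.map((url) => ({ url, kind: classifyUrl(url) }));
  // The remaining text (links stripped) goes down the text path, chunked if too long.
  const text = urls.reduce((t, u) => t.replace(u, ""), output).trim();
  return { media, text };
}

// Mirrors the "If text too long" branch: slice at line boundaries, never mid-line.
function chunkText(text: string, limit = 1000): string[] {
  const chunks: string[] = [];
  let current = "";
  for (const line of text.split("\n")) {
    if (current && (current + "\n" + line).length > limit) {
      chunks.push(current);
      current = line;
    } else {
      current = current ? `${current}\n${line}` : line;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}

const parsed = extractLinks(
  "Here is the chart https://example.com/chart.png and audio https://example.com/voice.mp3"
);
console.log(parsed, chunkText(parsed.text));
```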
by David Harvey
iMessage AI-Powered Smart Calorie Tracker

> 📌 What it looks like in use: this image shows a visual of the workflow in action. Use it for reference when replicating or customizing the template.

This n8n template transforms a user-submitted food photo into a detailed, friendly, AI-generated nutritional report — sent back seamlessly as a chat message. It combines OpenAI's visual reasoning, Postgres-based memory, and real-time messaging with Blooio to create a hands-free calorie and nutrition tracker.

🧠 Use Cases
Auto-analyze meals based on user-uploaded images.
Daily/weekly/monthly diet summaries with no manual input.
Virtual food journaling integrated into messaging apps.
Nutrition companion for healthcare, fitness, and wellness apps.

📌 Good to Know
⚠️ This uses GPT-4 with image capabilities, which may incur higher usage costs depending on your OpenAI pricing tier. Review OpenAI's pricing.
The model uses visual reasoning and estimation to determine nutritional info — results are estimates and should not replace medical advice.
Blooio is used for sending/receiving messages. You will need a valid API key and a project set up with webhook delivery.
A Postgres database is required for long-term memory (the memory itself is optional but recommended); you can substitute any memory node.

⚙️ How It Works
Webhook Trigger – The workflow begins when a message is received via Blooio. This webhook listens for user-submitted content, including any image attachments.
Image Validation and Extraction – A conditional check verifies the presence of attachments. If images are found, their URLs are extracted using a Code node and prepared for processing (see the sketch after this entry).
Image Analysis via AI Agent – Images are passed to an OpenAI-based agent using a custom system prompt that identifies the meal, estimates portion sizes, calculates calories, macros, fiber, sugar, and sodium, scores the meal with a health and confidence rating, and responds in a chatty, human-like summary format.
Memory Integration – A Postgres memory node stores user interactions for recall and contextual continuity, allowing day/week/month reports to be generated from cumulative messages.
Response Aggregation & Summary – Messages are aggregated and summarized by a second AI agent into a single concise message sent back to the user via Blooio.
Message Dispatch – The final message is posted back to the originating conversation using the Blooio Send Message API.

🚀 How to Use
The included webhook can be triggered manually or programmatically by linking Blooio to a frontend chat UI.
You can test the flow using a manual POST request containing mock Blooio payloads.
Want to use a different messaging app? Replace the Blooio nodes with your preferred messaging API (e.g., Twilio, Slack, Telegram).

✅ Requirements
OpenAI API access with GPT-4 Vision or equivalent multimodal support.
Blooio account with access to incoming and outgoing message APIs.
Optional: Postgres DB (e.g., via Neon) for tracking message context over time.

🛠️ Customising This Workflow
**Prompt Tuning** – Tailor the system prompt in the AI Agent node to fit specific diets (e.g., keto, diabetic), age groups, or regionally-specific foods.
**Analytics Dashboards** – Hook up your Postgres memory to a data visualization tool for nutritional trends over time.
**Multilingual Support** – Adjust the response prompt to translate messages into other languages or regional dialects.
**Image Preprocessing** – Insert a preprocessing node before sending images to the model to resize, crop, or enhance clarity for better results.
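To show what the "Image Validation and Extraction" step amounts to, here is a hedged TypeScript sketch. The IncomingMessage shape is an assumption standing in for Blooio's real webhook payload, so map the field names to whatever your webhook actually delivers.

```typescript
// Check for attachments and collect the image URLs that the AI agent should analyze.
interface IncomingMessage {
  conversationId: string;
  text?: string;
  attachments?: { url: string; contentType?: string }[];
}

function extractImageUrls(msg: IncomingMessage): string[] {
  return (msg.attachments ?? [])
    .filter(
      (a) =>
        (a.contentType ?? "").startsWith("image/") ||
        /\.(jpe?g|png|webp)$/i.test(a.url)
    )
    .map((a) => a.url);
}

const sample: IncomingMessage = {
  conversationId: "conv_123",
  text: "What did I just eat?",
  attachments: [{ url: "https://example.com/lunch.jpg", contentType: "image/jpeg" }],
};

const images = extractImageUrls(sample);
if (images.length === 0) {
  console.log("No photo attached; ask the user to send a meal picture.");
} else {
  // These URLs are what the AI Agent node receives for nutritional analysis.
  console.log("Images to analyze:", images);
}
```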
by Cooper
Chat with thing

This n8n template lets you build a smart AI chat assistant that can handle text, images, and PDFs — using OpenAI's GPT-4o multimodal model. It supports dynamic conversations and file analysis, making it great for AI-driven support bots, personal assistants, or embedded chat widgets.

🔍 How it Works
The chat trigger node kicks off a session using n8n's hosted chat UI.
Users can send text or upload images or PDFs — the workflow checks whether a file was included.
If an image is uploaded, the file is converted to base64 and analyzed using GPT-4o's vision capabilities (see the sketch after this entry).
GPT-4o generates a natural language description of the image and responds to the user's question in context.
A memory buffer keeps track of the conversation thread, so follow-up questions are handled intelligently.
OpenAI's chat model handles both text-only and mixed-media input seamlessly.

🧪 How to Use
You can embed this in a website or use it with your own webhook/chat interface.
The logic is modular — just swap out the chatTrigger node for another input (e.g., a form or API).
To use it with documents, modify the logic to pass PDF content to GPT-4 directly.
You can extend it with action nodes, e.g., saving results to Notion or Airtable, or sending replies via email or Slack.

🔐 Requirements
Your OpenAI GPT-4o API key
File uploads enabled on the chat trigger

🚀 Use Cases
PDF explainer bot
Internal knowledge chat with media support
Personal assistant for mixed content
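Under the hood, the image branch amounts to base64-encoding the upload and sending it to GPT-4o alongside the user's question. The sketch below shows that call directly against the OpenAI chat completions endpoint; the file path and question are example values, since in the workflow the binary arrives from the chat trigger rather than from disk.

```typescript
// Read an image file, base64-encode it, and ask GPT-4o about it.
import { readFileSync } from "node:fs";

const OPENAI_KEY = process.env.OPENAI_API_KEY ?? "YOUR_OPENAI_KEY";

async function describeImage(path: string, question: string): Promise<string> {
  const base64 = readFileSync(path).toString("base64");
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { Authorization: `Bearer ${OPENAI_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gpt-4o",
      messages: [
        {
          role: "user",
          content: [
            { type: "text", text: question },
            { type: "image_url", image_url: { url: `data:image/png;base64,${base64}` } },
          ],
        },
      ],
    }),
  });
  const body = await res.json();
  return body.choices[0].message.content as string;
}

describeImage("./upload.png", "What is shown in this image?")
  .then(console.log)
  .catch(console.error);
```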
by David Roberts
The workflow first populates a Pinecone index with vectors from a Bitcoin whitepaper. Then, it waits for a manual chat message. When received, the chat message is turned into a vector and compared to the vectors in Pinecone. The most similar vectors are retrieved and passed to OpenAI for generating a chat response. Note that to use this template, you need to be on n8n version 1.19.4 or later.
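In code, the chat branch corresponds to roughly the following retrieval step: embed the incoming message, query the Pinecone index, and hand the matched chunks to the chat model as context. This is a hedged TypeScript sketch, not the template's node configuration; the embedding model, index host, and metadata.text field are assumptions to adapt to your setup.

```typescript
// Embed the question, query Pinecone for the closest whitepaper chunks, and
// return their stored text for use as chat context.
const OPENAI_KEY = process.env.OPENAI_API_KEY ?? "YOUR_OPENAI_KEY";
const PINECONE_KEY = process.env.PINECONE_API_KEY ?? "YOUR_PINECONE_KEY";
const INDEX_HOST = "https://your-index-abc123.svc.us-east-1.pinecone.io"; // placeholder from the Pinecone console

async function embed(text: string): Promise<number[]> {
  const res = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: { Authorization: `Bearer ${OPENAI_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({ model: "text-embedding-3-small", input: text }),
  });
  const body = await res.json();
  return body.data[0].embedding;
}

async function retrieveContext(question: string, topK = 4): Promise<string[]> {
  const vector = await embed(question);
  const res = await fetch(`${INDEX_HOST}/query`, {
    method: "POST",
    headers: { "Api-Key": PINECONE_KEY, "Content-Type": "application/json" },
    body: JSON.stringify({ vector, topK, includeMetadata: true }),
  });
  const body = await res.json();
  // Assumes each vector was stored with its source text in metadata.text.
  return body.matches.map((m: { metadata?: { text?: string } }) => m.metadata?.text ?? "");
}

retrieveContext("How does Bitcoin prevent double spending?")
  .then((chunks) => console.log(chunks.join("\n---\n")))
  .catch(console.error);
```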
by Gulfiia
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Automated Data-Driven UX Persona Creation – Try It Out!

About
You can create personas based on your website, region, and industry. Unlike traditional persona creation, this process uses reliable data sources and can estimate the market size for each persona. UX personas have a wide range of applications: use them to better define your target users during product development, align your team around user goals during workshops, or inspire new features and ideas by deeply understanding user needs and behaviors.

How It Works
The flow is triggered via a web form.
Perplexity analyzes the market and creates a data foundation for the personas.
An AI agent transforms the data into detailed persona descriptions and publishes them in a Google Doc.
DALL·E 3 generates an image for each persona, which is saved to your Google Drive (see the sketch after this entry).

How To Use
Import the package into your n8n interface.
Set up the credentials in each node to access the necessary tools.
Wait for the process to run (it takes just a few seconds).
Check the final output in Google Docs and your Google Drive.

Requirements
Perplexity for research
OpenAI for LLM and image generation
Google Docs
Google Drive to upload images
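For reference, the image step corresponds to a call like the following against OpenAI's image generation endpoint with DALL·E 3. The persona fields and prompt wording are illustrative assumptions; in the workflow the prompt is assembled from the AI agent's persona description.

```typescript
// Generate a persona portrait with DALL·E 3 and return the image URL, which is
// then downloaded and saved to Google Drive in the workflow.
const OPENAI_KEY = process.env.OPENAI_API_KEY ?? "YOUR_OPENAI_KEY";

async function generatePersonaImage(persona: { name: string; role: string; traits: string }) {
  const prompt =
    `Realistic portrait of ${persona.name}, a ${persona.role}. ` +
    `Personality and context: ${persona.traits}. Neutral studio background.`;

  const res = await fetch("https://api.openai.com/v1/images/generations", {
    method: "POST",
    headers: { Authorization: `Bearer ${OPENAI_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({ model: "dall-e-3", prompt, n: 1, size: "1024x1024" }),
  });
  const body = await res.json();
  return body.data[0].url as string;
}

generatePersonaImage({
  name: "Maria",
  role: "operations manager at a mid-sized logistics firm",
  traits: "pragmatic, time-poor, compares tools by total cost of ownership",
}).then(console.log).catch(console.error);
```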