by Evoort Solutions
🖼️ Text-to-Image Generator using n8n + Flux AI

This n8n workflow automates image generation from text prompts using the Text-to-Image Flux AI API. It reads prompts from Google Sheets, generates images via the API, uploads them to Google Drive, and logs the outcome.

🌟 Key Features

Integrates with Text-to-Image Flux AI on RapidAPI
Converts base64 image data to downloadable files
Stores images on Google Drive
Updates logs and errors back into Google Sheets
Skips prompts already processed

📄 Google Sheet Column Structure

Your source Google Sheet should include the following columns:

| Column Name | Description |
|-------------------|--------------------------------------------------|
| Prompt | The text prompt to generate an image from |
| drive path | (Optional) File path or URL of saved image |
| Generated Date | Date/time the image was generated |
| Base64 | Base64 string or error message (for logging) |

Only rows with a non-empty Prompt and an empty drive path will be processed.

📌 Use Case

Perfect for:

Bulk AI image generation for content marketing
Creative automation with prompt-based image creation
Building image assets based on structured datasets
Any workflow where prompts are tracked via Google Sheets

Uses the Text-to-Image Flux AI API to generate high-quality images on demand.
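The processing rule described above can be sketched as a small predicate. This is a minimal illustration, not the workflow's actual implementation; the If2 node performs the equivalent check inside n8n:

```javascript
// Process a row only when "Prompt" is filled and "drive path" is still
// empty (the field names match the sheet columns in this template).
function shouldProcess(row) {
  const prompt = (row["Prompt"] || "").trim();
  const drivePath = (row["drive path"] || "").trim();
  return prompt.length > 0 && drivePath.length === 0;
}

// Illustrative rows: only the first one should pass the filter.
const rows = [
  { "Prompt": "a red fox in snow", "drive path": "" },
  { "Prompt": "a red fox in snow", "drive path": "https://drive.google.com/file/abc" },
  { "Prompt": "", "drive path": "" },
];
const pending = rows.filter(shouldProcess);
```

This mirrors the "skip prompts already processed" behavior: once drive path is written back, the row is never picked up again.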
🔧 Workflow Summary

| Step | Node | Description |
|------|------|-------------|
| 1 | Manual Trigger | Manually start the workflow |
| 2 | Google Sheets2 | Reads prompts from Google Sheets |
| 3 | Loop Over Items | Processes rows one by one |
| 4 | If2 | Skips rows that already have images |
| 5 | HTTP Request1 | Calls Text-to-Image Flux AI via RapidAPI |
| 6 | Code1 | Converts base64 image to binary file |
| 7 | Google Drive1 | Uploads the image file to a Drive folder |
| 8 | Google Sheets1 | Logs base64 result and timestamp back |
| 9 | If1 | Handles errors from the API |
| 10 | Google Sheets4 | Logs errors to the sheet |
| 11 | Wait | Adds delay between batches to prevent rate-limiting |

🚀 RapidAPI: Text-to-Image Flux AI

This flow is powered by Text-to-Image Flux AI. Be sure to:

Sign up at RapidAPI and subscribe to the API.
Copy your API Key.
Replace "your key" in the HTTP Request1 node’s x-rapidapi-key header.

You can test the API directly here before connecting it to n8n.

✅ Tips for Setup

Ensure you’ve set up a Google Service Account with access to both Sheets and Drive.
Fill only the Prompt column — leave drive path and Base64 empty for new prompts.
Monitor your RapidAPI dashboard for usage and quota.

Create your free n8n account and set up the workflow in just a few minutes using the link below:

👉 Start Automating with n8n

Save time, stay consistent, and grow your image library effortlessly!
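The base64-to-binary conversion performed in step 6 (the Code1 node) can be sketched as follows. This is a hedged example: the exact response field name returned by the Flux AI API is an assumption, so check the actual API output before wiring it up:

```javascript
// Decode a base64 string (optionally carrying a data-URL prefix such as
// "data:image/png;base64,") into a binary Buffer suitable for upload.
function base64ToBuffer(b64) {
  const cleaned = b64.replace(/^data:image\/\w+;base64,/, "");
  return Buffer.from(cleaned, "base64");
}

// "aGVsbG8=" is base64 for the ASCII string "hello".
const buf = base64ToBuffer("data:image/png;base64,aGVsbG8=");
```

In the real workflow, the resulting buffer is attached to the item as binary data so the Google Drive node can upload it as a file.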
by Inderjeet Bhambra
Who is this for? This workflow is designed for travel bloggers, content creators, social media managers, and anyone who wants to transform their travel photos into engaging written narratives. It's perfect for travelers looking to create compelling stories from their photo collections without spending hours crafting content manually, families wanting to document memorable trips, and digital nomads who need to produce travel content efficiently. What problem is this workflow solving? Converting travel photos into engaging stories is time-consuming and requires both creative writing skills and the ability to analyze visual content meaningfully. This workflow solves the challenge of: Transforming visual memories into compelling written narratives Organizing photos chronologically to create logical story flow Generating professional-quality travel content without writing expertise Analyzing photo content to extract meaningful themes and emotions Creating day-by-day structured narratives from unorganized photo collections Reducing the time spent on manual content creation for travel documentation What this workflow does This AI-powered photo storyteller takes your travel photos and automatically generates immersive, first-person travel narratives. 
The workflow: Accepts multiple photos through a webhook endpoint Uses OpenAI Vision API (GPT-4o) to analyze each photo's content, emotions, and themes Automatically organizes photos chronologically by date and timestamp Groups photos by travel days and extracts daily themes Leverages GPT-4.1 (minimum required) to craft engaging, first-person travel stories with creative day titles Generates structured narratives with sensory details, cultural observations, and emotional insights Outputs JSON formatted content ready for formatting Creates day-by-day story structure with memorable moments and reflective conclusions Setup Required Credentials: OpenAI API key configured in n8n for both Vision Analysis and Story Generation nodes Ensure you have sufficient OpenAI credits for image analysis and text generation Webhook Configuration: The workflow creates a webhook endpoint at /tripteller-upload Configure your photo upload interface to POST photos array to this endpoint Photos should be sent as base64 encoded data with filename and metadata Photo Requirements: Supported formats: Standard image formats (JPEG, PNG, etc.) Photos should include timestamp metadata for chronological organization Caution Do not upload all photos at once. Start with a small number of photos, like 5 at a time. How to customize this workflow to your needs Story Style Customization: Modify the system prompt in the "Generate Travel Story" node to adjust writing tone (nostalgic, adventurous, poetic, etc.) 
Customize the story structure by editing the output format requirements Add specific cultural or geographical context prompts for location-specific storytelling Photo Analysis Enhancement: Adjust the Vision Analysis node prompt to focus on specific elements (architecture, food, people, landscapes) Modify the grouping logic in the "Group Photos by Day" node for different time-based organization Add location extraction from EXIF data for geographical context Output Format Adjustment: Customize the final response structure in the "Format Final Response" node Add integration with publishing platforms (blog APIs, social media, etc.) Include additional metadata like location tags, travel duration, or trip statistics Performance Optimization: Adjust the execution timeout based on your typical photo volume Modify the parallel processing approach for large photo collections Add progress tracking for longer processing workflows
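The chronological grouping described above (the "Group Photos by Day" node) can be sketched like this. It is a simplified stand-in: the "timestamp" field name is an assumption, and real EXIF parsing is not shown:

```javascript
// Sort photos chronologically, then bucket them by calendar date so each
// bucket becomes one "travel day" in the generated story.
function groupByDay(photos) {
  const days = {};
  const sorted = [...photos].sort((a, b) => a.timestamp.localeCompare(b.timestamp));
  for (const photo of sorted) {
    const day = photo.timestamp.slice(0, 10); // "YYYY-MM-DD"
    (days[day] = days[day] || []).push(photo);
  }
  return days;
}

const grouped = groupByDay([
  { filename: "b.jpg", timestamp: "2024-05-02T09:15:00" },
  { filename: "a.jpg", timestamp: "2024-05-01T18:40:00" },
  { filename: "c.jpg", timestamp: "2024-05-01T08:05:00" },
]);
```

Comparing ISO 8601 timestamps as strings sorts them chronologically, which is why no date parsing is needed here.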
by Muhammad Shahzaib Shahid
Who is this for? This template is designed for internal support teams, product specialists, and knowledge managers in technology companies who want to automate ingestion of product documentation and enable AI-driven, retrieval-augmented question answering via WhatsApp.

What problem is this workflow solving? Support agents often spend too much time manually searching through lengthy documentation, leading to inconsistent or delayed answers. This solution automates importing, chunking, and indexing product manuals, then uses retrieval-augmented generation (RAG) to answer user queries accurately and quickly with AI via WhatsApp messaging.

What these workflows do

Workflow 1: Document Ingestion & Indexing

Manually triggered to import product documentation from Google Docs.
Automatically splits large documents into chunks for efficient searching.
Generates vector embeddings for each chunk using OpenAI embeddings.
Inserts the embedded chunks and metadata into a MongoDB Atlas vector store, enabling fast semantic search.

Workflow 2: AI-Powered Query & Response via WhatsApp

Listens for incoming WhatsApp user messages, supporting various types:

Text messages: Plain text queries from users.
Audio messages: Voice notes transcribed into text for processing.
Image messages: Photos or screenshots analyzed to provide contextual answers.
Document messages: PDFs, spreadsheets, or other files parsed for relevant content.

Converts incoming queries to vector embeddings and performs similarity search on the MongoDB vector store.
Uses OpenAI’s GPT-4o-mini model with retrieval-augmented generation to produce concise, context-aware answers.
Maintains conversation context across multiple turns using a memory buffer node.
Routes different message types to appropriate processing nodes to maximize answer quality.

Setup

Setting up vector embeddings

1. Authenticate Google Docs and connect your Google Docs URL containing the product documentation you want to index.
2. Authenticate MongoDB Atlas and connect the collection where you want to store the vector embeddings. Create a search index on this collection to support vector similarity queries.
3. Ensure the index name matches the one configured in n8n (data_index). See the example MongoDB search index template below for reference.

Setting up chat

1. Authenticate the WhatsApp node with your Meta account credentials to enable message receiving and sending.
2. Connect the MongoDB collection containing embedded product documentation to the MongoDB Vector Search node used for similarity queries.
3. Set up the system prompt in the Knowledge Base Agent node to reflect your company’s tone, answering style, and any business rules, ensuring it references the connected MongoDB collection for context retrieval.

Make sure both MongoDB nodes (in the ingestion and chat workflows) are connected to the same collection with: an embedding field storing vector data, relevant metadata fields (e.g., document ID, source), and the same vector index name configured (e.g., data_index).
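For reference, the MongoDB Atlas search index described above can be defined as follows (1536 dimensions matches OpenAI's embedding size, with cosine similarity for semantic search):

```json
{
  "mappings": {
    "dynamic": false,
    "fields": {
      "_id": { "type": "string" },
      "text": { "type": "string" },
      "embedding": {
        "type": "knnVector",
        "dimensions": 1536,
        "similarity": "cosine"
      },
      "source": { "type": "string" },
      "doc_id": { "type": "string" }
    }
  }
}
```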
by Nadia Privalikhina
This n8n template offers a free and automated way to convert images from a Google Drive folder into a single PDF document. It uses Google Slides as an intermediary, allowing you to control the final PDF's page size and orientation. If you're looking for a no-cost solution to batch convert images to PDF and need flexibility over the output dimensions (like A4, landscape, or portrait), this template is for you! It's especially handy for creating photo albums, visual reports, or simple portfolios directly from your Google Drive.

How it works

The workflow first copies a Google Slides template you specify. The page setup of this template (e.g., A4 Portrait) dictates your final PDF's dimensions. It then retrieves all images from a designated Google Drive folder and sorts them by creation date. Each image is added to a new slide in the copied presentation. Finally, the entire Google Slides presentation is converted into a PDF and saved back to your Google Drive.

How to use

Connect your Google Drive and Google Slides accounts in the relevant nodes. In the "Set Pdf File Name" node, define the name for your output PDF. In the "CopyPdfTemplate" node: select your Google Slides template file (this sets the PDF page size/orientation) and choose the Google Drive folder containing your source images. Ensure your images are in the specified folder. For best results, images should have an aspect ratio similar to your chosen Slides template. Run the workflow by clicking 'Test Workflow' to generate your PDF.

Requirements

Google Drive account. Google Slides account. A Google Slides template stored on your Google Drive.

Customising this workflow

Adjust the "Filter: Only Images" node if you use image formats other than PNG (e.g., image/jpeg for JPGs). Modify the image sorting logic in the "Sort by Created Date" node if needed.
by Tomas Lubertino
This template monitors a Google Drive folder, converts PDF documents into clean text chunks with Unstructured, generates OpenAI embeddings, and upserts vectors into Pinecone. It’s a practical, production-ready starting point for Retrieval-Augmented Generation (RAG) that you can plug into a chatbot, semantic search, or internal knowledge tools.

How it works

1) Google Drive Trigger detects new files in a selected folder and downloads them.
2) The files are sent to Unstructured, where they are split into smaller pieces (chunks).
3) The chunks are sent to OpenAI, where they are converted into vectors (embeddings).
4) The embeddings are recombined with their original data and the payload is prepared for upsert into the Pinecone index.

Set up steps

1) In Pinecone, create an index with 1536 dimensions and configure it for text-embedding-3-small.
2) Copy the host URL and paste it into the 'Pinecone Upsert' node. It should look something like this: https://{your-index-name}.pinecone.io/vectors/upsert.
3) Add Google Drive, OpenAI and Pinecone credentials in n8n.
4) Point the trigger to your ingest folder (you can use this article for a demo).
5) Click the 'Open chat' button and enter the following: Which Git provider do the authors use?
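The upsert payload prepared in step 4 can be sketched like this. The body shape ({ vectors: [{ id, values, metadata }] }) follows Pinecone's /vectors/upsert REST endpoint; the field contents below are invented for illustration only:

```javascript
// Pair each text chunk with its embedding and build the Pinecone upsert body.
function buildUpsertPayload(chunks, embeddings) {
  return {
    vectors: chunks.map((chunk, i) => ({
      id: chunk.id,
      values: embeddings[i], // 1536 floats from text-embedding-3-small
      metadata: { text: chunk.text, source: chunk.source },
    })),
  };
}

const payload = buildUpsertPayload(
  [{ id: "doc-1-chunk-0", text: "example chunk text", source: "google-drive" }],
  [[0.01, 0.02, 0.03]] // truncated for readability; real vectors have 1536 dims
);
```

Storing the original chunk text in metadata is what lets a RAG query return readable context alongside the similarity score.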
by VEED
Create AI screencast videos with VEED and automated slides Overview This n8n workflow automatically generates presentation-style "screen recording" videos with AI-generated slides and a talking head avatar overlay. You provide a topic and intention, and the workflow handles everything: scriptwriting, slide generation, avatar creation, voiceover, and video composition. Output: Horizontal (16:9) AI-generated videos with animated slides as the main content and a lip-synced avatar in picture-in-picture, ready for YouTube, LinkedIn, or professional presentations. What It Does Topic + Intention → Claude writes script → Parallel processing: ├── OpenAI generates avatar → ElevenLabs voiceover → VEED lip-sync └── FAL Flux Pro generates slides → Creatomate composites everything → Saved to Google Drive + logged to Sheets Pipeline Breakdown | Step | Tool | What Happens | |------|------|--------------| | 1. Script Generation | Claude Sonnet 4 | Creates hook, script (25-40 sec), slide prompts, caption, and avatar description | | 2. Avatar Generation | OpenAI gpt-image-1 | Generates photorealistic portrait image (1024×1536) | | 3. Slide Generation | FAL Flux Pro | Creates 5-7 professional slides (1920×1080) with text overlays | | 4. Voiceover | ElevenLabs | Converts script to natural speech (multiple voice options) | | 5. Talking Head | VEED Fabric 1.0 | Lip-syncs avatar to audio, creates 9:16 talking head video | | 6. Video Composition | Creatomate | Combines slides + avatar in 16:9 PiP layout | | 7. Storage | Google Drive | Uploads final MP4 | | 8. 
Logging | Google Sheets | Records all metadata (script, caption, URLs, timestamps) | Required Connections API Keys (entered in Configuration node) | Service | Key Type | Where to Get | |---------|----------|--------------| | Anthropic | API Key | https://console.anthropic.com/settings/keys | | OpenAI | API Key | https://platform.openai.com/api-keys | | ElevenLabs | API Key | https://elevenlabs.io/app/settings/api-keys | | FAL.ai | API Key | https://fal.ai/dashboard/keys | | Creatomate | API Key | https://creatomate.com/dashboard/settings | > ⚠️ OpenAI Note: gpt-image-1 requires organization verification. Go to https://platform.openai.com/settings/organization/general to verify. n8n Credentials (connect in n8n) | Node | Credential Type | Purpose | |------|-----------------|---------| | 🎬 Generate Talking Head (VEED) | FAL.ai API | VEED video rendering | | 📤 Upload to Drive | Google Drive OAuth2 | Store final videos | | 📝 Log to Sheets | Google Sheets OAuth2 | Track all generated content | Configuration Options Edit the ⚙️ Workflow Configuration node to customize: { // 📝 CONTENT SETTINGS topic: "How AI is transforming content creation", intention: "informative", // informative, lead_generation, disruption brand_name: "YOUR_BRAND_NAME", target_audience: "sales teams and marketers", trending_hashtags: "#AIvideo #ContentCreation #VideoMarketing", // 🎨 SLIDE STYLE slide_style: "vibrant_colorful", // See slide styles below // 🎥 VIDEO SETTINGS video_resolution: "720p", // VEED only supports 720p seconds_per_slide: 6, // How long each slide shows // 🖼️ BACKGROUND (Optional) background: "", // URL, gradient array, or empty // 🔑 API KEYS (Required) anthropic_api_key: "YOUR_ANTHROPIC_API_KEY", openai_api_key: "YOUR_OPENAI_API_KEY", elevenlabs_api_key: "YOUR_ELEVENLABS_API_KEY", creatomate_api_key: "YOUR_CREATOMATE_API_KEY", fal_api_key: "YOUR_FAL_API_KEY", // 🎤 VOICE SELECTION voice_selection: "susie", // cristina, enrique, susie, jeff, custom // 🎨 AVATAR OPTIONS (Optional) 
custom_avatar_description: "", // Leave empty for AI-generated custom_avatar_image_url: "", // Direct URL to use existing image // 📝 CUSTOM SCRIPT (Optional) custom_script: "" // Leave empty for AI-generated } Slide Style Options | Style | Description | Best For | |-------|-------------|----------| | dark_professional | Dark gradients, white text, sleek look | Tech, SaaS, premium brands | | light_modern | Light backgrounds, dark text, clean | Corporate, educational | | vibrant_colorful | Bold colors, energetic, eye-catching | Social media, startups | | minimalist | Lots of whitespace, simple, elegant | Luxury, professional services | | tech_corporate | Blue tones, geometric shapes | Enterprise, B2B | Background Options | Type | Example | Description | |------|---------|-------------| | None | "" | Full bleed layout, slides take 78% width | | URL | "https://example.com/bg.jpg" | Image background with margins | | Gradient | ["#ff6b6b", "#feca57", "#48dbfb"] | Gradient background with margins | Voice Options | Voice | Language | Description | |-------|----------|-------------| | cristina | Spanish | Female voice | | enrique | Spanish | Male voice | | susie | English | Female voice (default) | | jeff | English | Male voice | | custom | Any | Use your ElevenLabs voice clone ID | Intention Types | Intention | Content Style | Best For | |-----------|---------------|----------| | informative | Educational, value-driven, builds trust | Thought leadership, tutorials | | lead_generation | Creates curiosity, soft CTA | Product awareness, funnels | | disruption | Bold, provocative, scroll-stopping | Viral potential, brand awareness | Custom Avatar & Script Options Custom Avatar Description Leave custom_avatar_description empty to let Claude decide, or provide your own: custom_avatar_description: "female marketing influencer, cool, working in tech" Examples: "a woman in her 20s with gym clothes" "a bearded man in his 30s wearing a hoodie" "a professional woman with glasses in 
business casual" Custom Avatar Image URL Skip avatar generation entirely by providing a direct URL: custom_avatar_image_url: "https://example.com/my-avatar.png" > Image should be portrait orientation, high quality, with the subject looking at camera. Custom Script Leave custom_script empty to let Claude write it, or provide your own: custom_script: "This is my custom script. AI is changing how we create content..." Guidelines for custom scripts: Keep it 25-40 seconds when read aloud (60-100 words) Avoid special characters for TTS compatibility Write naturally, as if speaking Behavior Matrix | custom_avatar_description | custom_avatar_image_url | custom_script | What Claude Generates | |---------------------------|-------------------------|---------------|----------------------| | Empty | Empty | Empty | Avatar + Script + Slides + Caption | | Provided | Empty | Empty | Script + Slides + Caption | | Empty | Provided | Empty | Script + Slides + Caption | | Empty | Empty | Provided | Avatar + Slides + Caption | | Provided | Provided | Provided | Slides + Caption only | Video Layout The final video uses a picture-in-picture (PiP) layout: Without Background (Full Bleed) ┌─────────────────────────────────┬──────┐ │ │ │ │ │ │ │ SLIDES (78%) │AVATAR│ │ │(22%) │ │ │ │ │ │ │ └─────────────────────────────────┴──────┘ With Background (Margins + Rounded Corners) ┌─────────────────────────────────────────┐ │ BG ┌───────────────────────────┐ ┌────┐ │ │ │ │ │ │ │ │ │ SLIDES (74%) │ │AVA │ │ │ │ │ │TAR │ │ │ │ │ │20% │ │ │ └───────────────────────────┘ └────┘ │ └─────────────────────────────────────────┘ Output Per Video Generated | Asset | Format | Location | |-------|--------|----------| | Final Video | MP4 (1920×1080, 60fps) | Google Drive folder | | Avatar Image | PNG (1024×1536) | tmpfiles.org (temporary) | | Slide Images | PNG (1920×1080) | FAL CDN (temporary) | | Voiceover | MP3 | tmpfiles.org (temporary) | | Metadata | Row entry | Google Sheets | Google Sheets Columns | 
Column | Description | |--------|-------------| | topic | Video topic | | intention | Content intention used | | brand_name | Brand mentioned | | slide_style | Visual style used | | content_theme | 2-3 word theme summary | | script | Full voiceover script | | caption | Ready-to-post caption with hashtags | | num_slides | Number of slides generated | | video_url | Google Drive link to final video | | avatar_video_url | VEED talking head video URL | | audio_url | Temporary audio URL | | status | done/error | | created_at | Timestamp | Estimated Costs Per Video | Service | Usage | Approximate Cost | |---------|-------|------------------| | Claude Sonnet 4 | 2K tokens | $0.01 | | OpenAI gpt-image-1 | 1 image (1024×1536) | ~$0.04-0.08 | | FAL Flux Pro | 5-7 images (1920×1080) | ~$0.10-0.15 | | ElevenLabs | 100 words | $0.01-0.02 | | VEED/FAL.ai | 1 video render | ~$0.10-0.20 | | Creatomate | 1 video composition | ~$0.10-0.20 | | Total | | ~$0.35-0.65 per video | > Costs vary based on script length and current API pricing. 
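As a rough illustration of how the seconds_per_slide setting interacts with script length to produce the 5-7 slide range this template targets: the formula below is an assumption for illustration, not the workflow's actual internal calculation.

```javascript
// Estimate slide count from script length: convert words to speaking time,
// divide by seconds per slide, and clamp to the template's 5-7 slide range.
function estimateSlideCount(wordCount, secondsPerSlide = 6, wordsPerSecond = 2.5) {
  const durationSeconds = wordCount / wordsPerSecond;
  return Math.min(7, Math.max(5, Math.round(durationSeconds / secondsPerSlide)));
}

// An 80-word script is roughly 32 seconds of voiceover.
const slides = estimateSlideCount(80);
```

This also explains the troubleshooting tip later in this template: raising seconds_per_slide lowers the slide count for the same script.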
Setup Checklist Step 1: Import Workflow [ ] Import create-ai-screencast-videos-with-veed-and-automated-slides.json into n8n Step 2: Configure API Keys [ ] Open the ⚙️ Workflow Configuration node [ ] Replace all YOUR_*_API_KEY placeholders with your actual API keys [ ] Verify your OpenAI organization at https://platform.openai.com/settings/organization/general Step 3: Connect n8n Credentials [ ] Click on 🎬 Generate Talking Head (VEED) node → Add FAL.ai credential [ ] Click on 📤 Upload to Drive node → Add Google Drive OAuth2 credential [ ] Click on 📝 Log to Sheets node → Add Google Sheets OAuth2 credential Step 4: Configure Storage [ ] Update the 📤 Upload to Drive node with your Google Drive folder URL [ ] Update the 📝 Log to Sheets node with your Google Sheets URL [ ] Create column headers in your Google Sheet (see Output section) Step 5: Customize Content [ ] Update topic, brand_name, target_audience, and trending_hashtags [ ] Choose your preferred slide_style and voice_selection [ ] Optionally configure background, custom_avatar_description, and/or custom_script Step 6: Test [ ] Execute the workflow [ ] Check Google Drive for the output video [ ] Verify metadata was logged to Google Sheets MCP Integration (Optional) This workflow can be exposed to Claude Desktop via n8n's Model Context Protocol (MCP) integration. 
To enable MCP: Add a Webhook Trigger node to the workflow (in addition to the Manual Trigger) Connect it to the ⚙️ Workflow Configuration node Go to Settings → Instance-level MCP → Enable the workflow Configure Claude Desktop with your n8n MCP server URL Claude Desktop Configuration (Windows): { "mcpServers": { "n8n-mcp": { "command": "supergateway", "args": [ "--streamableHttp", "https://YOUR_N8N_INSTANCE.app.n8n.cloud/mcp-server/http", "--header", "authorization:Bearer YOUR_MCP_ACCESS_TOKEN" ] } } } > Note: Install supergateway globally first: npm install -g supergateway Limitations & Notes Technical Limitations **tmpfiles.org**: Temporary file URLs expire after ~1 hour. Final videos are safe in Google Drive. **VEED processing**: Takes 1-3 minutes for the talking head. **Creatomate processing**: Takes 30-60 seconds for composition. **Total workflow time**: ~3-5 minutes per video. Content Considerations Scripts are optimized for 25-40 seconds (TTS-friendly) Avatar images are AI-generated (not real people) Slides are dynamically generated based on script length Slide count: 5-7 slides depending on script duration Best Practices Start simple: Test with default settings before customizing Review scripts: Claude generates good content but review before posting Monitor costs: Check API usage dashboards weekly Use backgrounds: Adding a background image creates a more polished look Match voice to content: Use Spanish voices for Spanish content Troubleshooting | Issue | Solution | |-------|----------| | "Organization must be verified" | Verify at platform.openai.com/settings/organization/general | | VEED authentication error | Re-add FAL.ai credential to VEED node | | Google Drive "no binary field" | Ensure Download Video outputs to binary field | | JSON parse error from Claude | Workflow has fallback content; check Claude node output | | Slides not matching script | Increase seconds_per_slide for fewer slides | | Avatar cut off in PiP | Avatar is designed for right-side
placement | | MCP "Server disconnected" | Install supergateway globally: npm install -g supergateway | | Render timeout | Increase wait time in "⏳ Wait for Render" node | Version History | Version | Date | Changes | |---------|------|---------| | 2.1 | Jan 2026 | Renamed workflow, improved documentation with section sticky notes, consolidated setup information | | 2.0 | Jan 2026 | Added dynamic slide count, background options, FAL Flux Pro for slides, improved PiP layout | | 1.0 | Jan 2026 | Initial release with fixed slide count, basic composition | Credits Built with: **n8n** - Workflow automation **Anthropic Claude** - Script & slide prompt generation **OpenAI** - Avatar image generation **FAL.ai** - Slide image generation (Flux Pro) **ElevenLabs** - Voice synthesis **VEED Fabric** - AI lip-sync video rendering **Creatomate** - Video composition **Google Workspace** - Storage & logging
by Lakshit Ukani
One-way sync between Telegram, Notion, Google Drive, and Google Sheets

Who is this for? This workflow is perfect for productivity-focused teams, remote workers, virtual assistants, and digital knowledge managers who receive documents, images, or notes through Telegram and want to automatically organize and store them in Notion, Google Drive, and Google Sheets—without any manual work.

What problem is this workflow solving? Managing Telegram messages and media manually across different tools like Notion, Drive, and Sheets can be tedious. This workflow automates the classification and storage of incoming Telegram content, whether it’s a text note, an image, or a document. It saves time, reduces human error, and ensures that media is stored in the right place with metadata tracking.

What this workflow does

**Triggers on a new Telegram message** using the Telegram Trigger node.
**Classifies the message type** using a Switch node: text messages are appended to a Notion block; images are converted to base64, uploaded to imgbb, and then added to Notion as toggle-image blocks; documents are downloaded, uploaded to Google Drive, and the metadata is logged in Google Sheets.
**Sends a completion confirmation** back to the original Telegram chat.

Setup

Telegram Bot: Set up a bot and get the API token.
Notion Integration: Share access to your target Notion page/block. Use the Notion API credentials and block ID where content should be appended.
Google Drive & Sheets: Connect the relevant accounts. Select the destination folder and spreadsheet.
imgbb API: Obtain a free API key from imgbb.

Replace placeholder credential IDs and asset URLs as needed in the imported workflow.

How to customize this workflow to your needs

**Change Storage Locations**: Update the Notion block ID or Google Drive folder ID. Switch the Google Sheet to log to a different file or sheet.
**Add More Filters**: Use additional Switch rules to handle other Telegram message types (like videos or voice messages).
**Modify Response Message**: Personalize the Telegram confirmation text based on the file type or sender.
**Use a different image hosting service** if you don’t want to use imgbb.
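The imgbb upload step for images can be sketched as follows. The free imgbb API accepts a base64-encoded image in a form field named "image", with the API key passed as a query parameter; only the request construction is shown here (no network call), and the key value is a placeholder:

```javascript
// Build the HTTP request the workflow would send to imgbb for an image.
function buildImgbbRequest(apiKey, base64Image) {
  const form = new URLSearchParams();
  form.append("image", base64Image);
  return {
    url: `https://api.imgbb.com/1/upload?key=${apiKey}`,
    method: "POST",
    body: form.toString(), // application/x-www-form-urlencoded
  };
}

const req = buildImgbbRequest("YOUR_IMGBB_KEY", "aGVsbG8=");
```

The JSON response from imgbb includes a hosted image URL, which is what the workflow embeds in the Notion toggle-image block.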
by Nasser
Who is this for? Content creators, YouTube automation builders, and marketing teams.

How it works

1 - Every week, retrieve the keywords you want to track.
2 - Using Apify, scrape videos from YouTube Search related to these keywords, filtered by relevance.
3 - Wait until the dataset is completed.
4 - Get the information contained in the dataset.
5 - For each video, clean and summarize the script.
6 - Upload everything to your Airtable database.

📺 YouTube Video Tutorial

Setup (~5min)

Scheduled Trigger: Select the frequency you want. If you change it, update the data accordingly in the "Create Videos Dataset" HTTP Request node in Body ➡️ JSON ➡️ dateFilter.
Setup Keywords: Enter keywords related to the niche you want. If you change the number of keywords, update the data accordingly in the "Create Videos Dataset" HTTP Request node in Body ➡️ JSON ➡️ searchQueries.
Create Videos Dataset: Refer to the Apify documentation for more: https://docs.apify.com/api/v2/getting-started
APIs: For all HTTP Request nodes, replace [YOUR_API_TOKEN] in the URL field with your API token.

👨‍💻 More Workflows: https://n8n.io/creators/nasser/
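The "Create Videos Dataset" request body referenced above carries the keywords and date filter. A minimal hedged example follows; searchQueries and dateFilter are the fields named in this template, while any other fields and the exact accepted values depend on the specific Apify actor you use:

```json
{
  "searchQueries": ["your keyword 1", "your keyword 2"],
  "dateFilter": "week",
  "sortingOrder": "relevance"
}
```

If you change the trigger frequency or the number of keywords, these are the two fields to update to match.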
by NovaNode
Who is this for? This template is designed for internal support teams, product specialists, and knowledge managers in technology companies who want to automate ingestion of product documentation and enable AI-driven, retrieval-augmented question answering via WhatsApp. What problem is this workflow solving? Support agents often spend too much time manually searching through lengthy documentation, leading to inconsistent or delayed answers. This solution automates importing, chunking, and indexing product manuals, then uses retrieval-augmented generation (RAG) to answer user queries accurately and quickly with AI via WhatsApp messaging. What these workflows do Workflow 1: Document Ingestion & Indexing Manually triggered to import product documentation from Google Docs. Automatically splits large documents into chunks for efficient searching. Generates vector embeddings for each chunk using OpenAI embeddings. Inserts the embedded chunks and metadata into a MongoDB Atlas vector store, enabling fast semantic search. Workflow 2: AI-Powered Query & Response via WhatsApp Listens for incoming WhatsApp user messages, supporting various types: Text messages: Plain text queries from users. Audio messages: Voice notes transcribed into text for processing. Image messages: Photos or screenshots analyzed to provide contextual answers. Document messages: PDFs, spreadsheets, or other files parsed for relevant content. Converts incoming queries to vector embeddings and performs similarity search on the MongoDB vector store. Uses OpenAI’s GPT-4o-mini model with retrieval-augmented generation to produce concise, context-aware answers. Maintains conversation context across multiple turns using a memory buffer node. Routes different message types to appropriate processing nodes to maximize answer quality. Setup Setting up vector embeddings Authenticate Google Docs and connect your Google Docs URL containing the product documentation you want to index. 
Authenticate MongoDB Atlas and connect the collection where you want to store the vector embeddings. Create a search index on this collection to support vector similarity queries. Ensure the index name matches the one configured in n8n (data_index). See the example MongoDB search index template below for reference.

Setting up chat

Authenticate the WhatsApp node with your Meta account credentials to enable message receiving and sending. Connect the MongoDB collection containing embedded product documentation to the MongoDB Vector Search node used for similarity queries. Set up the system prompt in the Knowledge Base Agent node to reflect your company’s tone, answering style, and any business rules, ensuring it references the connected MongoDB collection for context retrieval.

Make sure both MongoDB nodes (in the ingestion and chat workflows) are connected to the same collection with: an embedding field storing vector data, relevant metadata fields (e.g., document ID, source), and the same vector index name configured (e.g., data_index).

Search Index Example:

```json
{
  "mappings": {
    "dynamic": false,
    "fields": {
      "_id": { "type": "string" },
      "text": { "type": "string" },
      "embedding": {
        "type": "knnVector",
        "dimensions": 1536,
        "similarity": "cosine"
      },
      "source": { "type": "string" },
      "doc_id": { "type": "string" }
    }
  }
}
```
by Jimleuk
This n8n template demonstrates a simple approach to using AI to automate the generation of blog content that aligns with your organisation's brand voice and style, using examples of previously published articles. In a way, it's quick-and-dirty "training" that can get your automated content generation strategy up and running for very little effort and cost while you evaluate your AI content pipeline.

How it works
- In this demonstration, the n8n.io blog is used as the source of existing published content, and the 5 latest articles are imported via the HTTP node.
- The HTML node extracts the article bodies, which are then converted to markdown for our LLMs.
- We use LLM nodes to (1) understand the article structure and writing style and (2) identify the brand voice characteristics used in the posts. These are then used as guidelines in our final LLM node when generating new articles.
- Finally, a draft is saved to Wordpress for human editors to review or use as a starting point for their own articles.

How to use
- Update Step 1 to fetch data from your desired blog, or change it to fetch existing content in a different way.
- Update Step 5 to provide your new article instruction. For optimal output, theme topics relevant to your brand.

Requirements
- A source of text-heavy content is required to accurately break down the brand voice and article style. Don't have your own? Maybe try your competitors'.
- OpenAI for the LLM, though I recommend exploring other models that may give subjectively better results.
- Wordpress for the blog, but feel free to use other preferred publishing platforms.

Customising this workflow
- Ideally, you'd want to "train" your agent on material similar to your intended output, i.e. your social media posts may not get the best results from your blog content due to differing formats.
- Typically, this brand voice extraction exercise should run once and then be cached somewhere for reuse later. This saves on generation time and the overall cost of the workflow.
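The HTML-extraction step described above can be approximated outside n8n as follows. This is a rough, regex-based sketch only; the template's HTML node selects article bodies with proper CSS selectors, and `htmlToText` is a hypothetical helper name, not a node from the workflow.

```javascript
// Rough sketch of reducing a fetched article body to plain text for an
// LLM prompt. Regex stripping is a simplification of what the HTML
// node does; it is not robust against all real-world markup.
function htmlToText(html) {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, '') // drop inline scripts
    .replace(/<style[\s\S]*?<\/style>/gi, '')   // drop inline styles
    .replace(/<[^>]+>/g, ' ')                   // strip remaining tags
    .replace(/\s+/g, ' ')                       // collapse whitespace
    .trim();
}
```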
by Alexey from Mingles.ai
AI Image Generator & Editor with GPT-4 Vision - Complete Workflow Template

Description
Transform text prompts into stunning images or edit existing visuals using OpenAI's latest GPT-4 Vision model through an intuitive web form interface. This comprehensive n8n automation provides three powerful image generation modes:

🎨 Text-to-Image Generation
Simply enter a descriptive prompt and generate high-quality images from scratch using OpenAI's gpt-image-1 model. Perfect for creating original artwork, concepts, or visual content.

🖼️ Image-to-Image Editing
Upload an existing image file and transform it based on your text prompt. The AI analyzes your input image and applies modifications while maintaining the original structure and context.

🔗 URL-Based Image Editing
Provide a direct URL to any online image and edit it with AI. Great for quick modifications of web images or collaborative workflows.

Key Features

Smart Input Processing
- **Flexible Form Interface**: User-friendly web form with authentication
- **Multiple Input Methods**: File upload, URL input, or text-only generation
- **Quality Control**: Selectable quality levels (low, medium, high)
- **Format Support**: Accepts PNG, JPG, and JPEG formats

Advanced AI Integration
- **Latest GPT-4 Vision Model**: Uses gpt-image-1 for superior results
- **Intelligent Switching**: Automatically detects input type and routes accordingly
- **Context-Aware Editing**: Maintains image coherence during modifications
- **Customizable Parameters**: Control size (1024x1024), quality, and generation settings

Dual Storage Options
- **Google Drive Integration**: Automatic upload with public sharing permissions
- **ImgBB Hosting**: Alternative cloud storage for instant public URLs
- **File Management**: Organized storage with timestamp-based naming

Instant Telegram Delivery
- **Real-time Notifications**: Results sent directly to your Telegram chat
- **Rich Media Messages**: Includes generated image with prompt details
- **Quick Access Links**: Direct links to view and download results
- **Markdown Formatting**: Clean, professional message presentation

Technical Workflow
1. Form Submission → User submits a prompt and an optional image
2. Smart Routing → System detects the input type (text/file/URL)
3. AI Processing → OpenAI generates or edits the image based on mode
4. Binary Conversion → Converts the base64 response to a downloadable file
5. Cloud Upload → Stores in Google Drive or ImgBB with public access
6. Telegram Delivery → Sends the result with viewing links and metadata

Perfect For
- **Content Creators**: Generate unique visuals for social media and marketing
- **Designers**: Quick concept development and image variations
- **Developers**: Automated image processing for applications
- **Teams**: Collaborative image editing and sharing workflows
- **Personal Use**: Transform ideas into visual content effortlessly

Setup Requirements
- **OpenAI API Key**: Access to the GPT-4 Vision model
- **Google Drive API** (optional): For Google Drive storage
- **ImgBB API Key** (optional): For alternative image hosting
- **Telegram Bot**: For result delivery
- **Basic Auth Credentials**: For form security

What You Get
✅ Complete image generation and editing pipeline
✅ Secure web form with authentication
✅ Dual cloud storage options
✅ Instant Telegram notifications
✅ Professional result formatting
✅ Flexible input methods
✅ Quality control settings
✅ Automated file management

Start creating AI-powered images in minutes with this production-ready template!

Tags: #AI #ImageGeneration #OpenAI #GPT4 #ImageEditing #Telegram #GoogleDrive #Automation #ComputerVision #ContentCreation
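The Binary Conversion step of the technical workflow amounts to decoding the base64 payload the image API returns. A minimal Node.js sketch, with the data-URL handling added defensively as an assumption (OpenAI's `b64_json` field is plain base64 without a prefix):

```javascript
// Minimal sketch of the base64-to-binary conversion step. The data-URL
// prefix strip is defensive; a bare base64 string passes through as-is.
function base64ToBinary(data) {
  const b64 = data.replace(/^data:image\/\w+;base64,/, '');
  return Buffer.from(b64, 'base64');
}
```

In an n8n Code node the resulting buffer would then be attached as binary data for the upload node; plain Node's `Buffer` stands in for that here.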
by giangxai
Overview

This workflow automatically creates AI product review videos from a product image and a short description using n8n and Veo 3. It connects content generation, image creation, video rendering, video merging, and publishing into a single automated flow. Once configured, the workflow runs end to end with minimal manual input.

The workflow is designed for creators, marketers, and affiliate builders who want a reliable and repeatable way to produce short-form product review videos without manual editing.

What can this workflow do?
- Automatically generate AI product review videos from product images
- Create review scripts and structured prompts using an AI model
- Generate product images and video scenes with AI services
- Merge multiple video scenes into a single final video
- Publish videos automatically to social platforms
- Track publishing results and errors in Google Sheets

This workflow helps reduce manual work while keeping the video production process structured and scalable.

How it works

You start by submitting a product image and basic product information through a form. The workflow analyzes the image to understand visual context and key product features. An AI Agent then generates a review script along with structured image and video prompts. Next, image generation APIs create product visuals, and video generation APIs such as Veo 3 render short video scenes. All generated scenes are automatically merged into one final product review video. The finished video is then uploaded and published to platforms like TikTok, Facebook Reels, and YouTube Shorts. Publishing results are logged to Google Sheets for monitoring.

Setup steps
1. Connect an AI model (Gemini or OpenRouter) for script and prompt generation.
2. Add image and video generation API keys (Veo 3 or compatible providers).
3. Configure the video merge step (custom request or ffmpeg-based API).
4. Add Blotato API credentials for automated publishing.
5. Connect Google Sheets to log publishing results.

Once set up, the workflow runs automatically without manual intervention.

Documentation

For a full walkthrough and advanced customization ideas, watch the detailed tutorial on YouTube.
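If you take the ffmpeg route for the merge step, an ffmpeg-based service typically relies on the concat demuxer, which reads a small list file naming the scene files in order. A sketch of building that list; the scene file names are placeholders, and `buildConcatList` is a hypothetical helper, not a node from this template:

```javascript
// Build the list file ffmpeg's concat demuxer expects, e.g. for:
//   ffmpeg -f concat -safe 0 -i scenes.txt -c copy final.mp4
// Scene paths here are illustrative placeholders.
function buildConcatList(scenePaths) {
  return scenePaths.map((p) => `file '${p}'`).join('\n');
}
```

The `-c copy` form only works when all scenes share the same codec and resolution, which is usually the case when every scene comes from the same video generation API.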