by simonscrapes
Use Case
Automate image replacement in Google Docs:
- You need to update document images dynamically
- You want to create multiple versions of a template with different images
- You need to batch process document images from a URL database
- You want to generate shareable documents with custom images

What this Workflow Does
The workflow automates image replacement in Google Docs:
- Accepts image URLs from your database
- Finds and replaces images in template documents
- Creates new document copies with updated images
- Optionally converts to PDF and makes documents shareable

Setup
- Connect your image URL database (the column name must be "url")
- Set up Google Docs OAuth 2 API credentials
- Optional: Create a template document in Google Drive with placeholder images
- Optional: Configure Google Drive authentication for additional features

How to Adjust it to Your Needs
- Remove template copying for single document processing
- Adjust image ID selection for documents with multiple images
- Configure sharing settings and download formats
- Customize file naming and storage location

More templates and n8n workflows >>> @simonscrapes
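If you want to see what the Google Docs node is doing behind the scenes, the replacement maps to a single Docs API batchUpdate call with a replaceImage request. Below is a minimal Python sketch of that call; it assumes you already have an OAuth 2 access token, and the document ID, image object ID, and image URL are placeholders you would pull from your own database.

```python
import requests

ACCESS_TOKEN = "ya29.placeholder"        # OAuth 2 token with a Docs scope (assumption)
DOCUMENT_ID = "your-template-copy-id"    # ID of the copied template document (placeholder)
IMAGE_OBJECT_ID = "kix.abc123"           # ID of the placeholder image inside the doc (placeholder)
NEW_IMAGE_URL = "https://example.com/new-image.png"  # URL from your database (placeholder)

# The Docs API replaces an existing inline image by its object ID.
body = {
    "requests": [
        {
            "replaceImage": {
                "imageObjectId": IMAGE_OBJECT_ID,
                "uri": NEW_IMAGE_URL,
            }
        }
    ]
}

resp = requests.post(
    f"https://docs.googleapis.com/v1/documents/{DOCUMENT_ID}:batchUpdate",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=body,
    timeout=30,
)
resp.raise_for_status()
print("Image replaced:", resp.json())
```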
by Jonathan
How it works
This template uses a Slack app to connect with your Google Calendar, generate an instant Google Meet link, and post it as a message in a Slack channel.

Setup steps
- First, you'll need to create a Slack app
- Authenticate and connect your Slack account
- Connect and choose the Google Calendar you want to generate Google Meet links for
- Customize your Slack message
Then, using a /meet command in Slack, you can instantly generate and post your Google Meet links.
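For context, the Meet link itself comes from the Google Calendar API: creating an event with a conferenceData.createRequest returns a hangoutLink that the Slack message can post. Here is a minimal Python sketch of that call, assuming an OAuth 2 access token; the calendar ID and times are placeholders.

```python
import uuid
import requests

ACCESS_TOKEN = "ya29.placeholder"   # OAuth 2 token with a Calendar scope (assumption)
CALENDAR_ID = "primary"             # calendar chosen during setup (placeholder)

event = {
    "summary": "Instant meeting from Slack /meet",
    "start": {"dateTime": "2024-01-01T10:00:00Z"},
    "end": {"dateTime": "2024-01-01T10:30:00Z"},
    # Ask Calendar to attach a Google Meet conference to this event.
    "conferenceData": {
        "createRequest": {
            "requestId": str(uuid.uuid4()),
            "conferenceSolutionKey": {"type": "hangoutsMeet"},
        }
    },
}

resp = requests.post(
    f"https://www.googleapis.com/calendar/v3/calendars/{CALENDAR_ID}/events",
    params={"conferenceDataVersion": 1},
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=event,
    timeout=30,
)
resp.raise_for_status()
print("Meet link:", resp.json().get("hangoutLink"))
```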
by Jimleuk
This n8n workflow is a proof-of-concept template exploring how we might work with multimodal LLMs and their multi-image analysis capabilities. In this demo, we compare 2 screenshots of a webpage taken at different timestamps and pass both to our multimodal LLM for a visual comparison of differences. Handling multiple binary inputs (i.e. images) in an AI request is supported by n8n's basic LLM node.

How it works
This template is intended to run as 2 parts: first to generate the base screenshots, and next to run the visual regression test which captures fresh screenshots.
- Starting with a list of webpages captured in a Google Sheet, base screenshots are captured for each using an external web scraping service called Apify.com (I prefer Apify but feel free to use whichever web scraping service is available to you).
- These base screenshots are uploaded to Google Drive and will be referenced later when we run our testing.
- In phase 2 of the workflow, we'll use a scheduled trigger to fire sometime in the future, which will reuse our web scraping service to generate fresh screenshots of our desired webpages.
- Next, we re-download our base screenshots in parallel, and with both old and new captures, we pass these to our LLM node. In the LLM node's options, we define 2 "user message" inputs with the type of binary (data) for our images.
- Finally, we prompt our LLM with our testing criteria and capture the regressions detected. Note, results will vary depending on which LLM you use.
- A final report can be generated using the LLM's output and is uploaded to Linear.

Requirements
- Apify.com API key for the web screenshotting service
- Google Drive and Sheets access to store the list of webpages and captures

Customising this workflow
- Have your own preferred web screenshotting service? Feel free to swap out Apify with your service of choice.
- If the web screenshot is too large, it may prove difficult for the LLM to spot differences with precision. Try splitting captures into smaller images instead.
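Outside of n8n, the core of the visual comparison step can be reproduced directly against a multimodal chat API. The sketch below assumes the OpenAI Python SDK and two screenshot files saved locally; the file names, model choice, and prompt are illustrative rather than taken from the template.

```python
import base64
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def to_data_url(path: str) -> str:
    """Read a PNG screenshot and return it as a base64 data URL."""
    with open(path, "rb") as f:
        return "data:image/png;base64," + base64.b64encode(f.read()).decode()


# Both screenshots are sent as separate image inputs in a single user message.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Compare these two screenshots of the same page "
                                         "and list any visual regressions."},
                {"type": "image_url", "image_url": {"url": to_data_url("base.png")}},
                {"type": "image_url", "image_url": {"url": to_data_url("fresh.png")}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```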
by Hubschrauber
What this workflow does
This (set of) workflow(s) shows how to start multiple sub-workflows asynchronously, in parallel, and then wait for all of them to complete.

Normally sub-workflows need to be run synchronously, in series, or, if they are executed asynchronously (to run concurrently, in parallel), there is no easy way to merge/wait for an arbitrary number of them to complete. This is a "design pattern" template showing one approach for running multiple, data-driven instances of a sub-workflow asynchronously, in parallel (instead of running them one at a time in series), while still preventing the later steps in the workflow from continuing until all of the sub-workflows have reported back, via callback URL, that they are finished. There are other techniques involving messaging services, database tables, or other external "flow manager" helpers, but this technique accomplishes the goal fully within n8n.

Setup
To implement this pattern, examine the nodes in the template and modify the incoming data leading to:
- A split-out loop to asynchronously execute a sub-workflow multiple times, in parallel. For instance, each sub-workflow might process one of a list of incoming documents.
- The resumeUrl for the main/parent workflow is provided to all of the sub-workflow executions, along with a unique identifier that can be counted later (e.g. a document file name).
- A "wait-for-all" loop that checks whether all sub-workflows have reported back (If node) and builds a unique list of identifiers from the callbacks received from each execution of the sub-workflow.
- The sub-workflow should be designed to respond immediately (async) and later send a callback request when it has finished processing. The callback request should include the unique identifier value received when the sub-workflow was started (a minimal sketch of that callback is shown below).

This is meant to be a possible answer to questions like this one about running things in parallel, maybe this one about waiting for things to finish, this one about managing sub-batches of things by waiting for each batch, or this one about running things in parallel. The topic of how to do this comes up A LOT, and this is one of the only techniques that (so far) seems to work.
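To make the callback contract concrete, here is a minimal Python sketch of what the last step of each sub-workflow does (in n8n this would typically be an HTTP Request node rather than code). The URL and payload field names are illustrative; the essentials are the resumeUrl received from the parent and the unique identifier the parent counts.

```python
import requests


def notify_parent(resume_url: str, identifier: str) -> None:
    """Tell the waiting parent workflow that this sub-workflow has finished.

    resume_url  - the Wait node's resumeUrl passed in by the parent workflow
    identifier  - the unique id (e.g. a document file name) the parent tallies
    """
    resp = requests.post(
        resume_url,
        json={"identifier": identifier, "status": "done"},  # field names are assumptions
        timeout=30,
    )
    resp.raise_for_status()


# Example: called at the end of a sub-workflow that processed "invoice-042.pdf"
notify_parent("https://your-n8n-host/webhook-waiting/abc123", "invoice-042.pdf")
```

On the parent side, each callback's identifier is appended to a running list, and the wait-for-all loop only exits once that list covers every sub-workflow that was launched.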
by Jimleuk
This n8n template demonstrates a simple approach to using AI to automate the generation of blog content which aligns to your organisation's brand voice and style, by using examples of previously published articles. In a way, it's quick and dirty "training" which can get your automated content generation strategy up and running for very little effort and cost whilst you evaluate your AI content pipeline.

How it works
- In this demonstration, the n8n.io blog is used as the source of existing published content, and 5 of the latest articles are imported via the HTTP node.
- The HTML node extracts the article bodies, which are then converted to markdown for our LLMs.
- We use LLM nodes to (1) understand the article structure and writing style and (2) identify the brand voice characteristics used in the posts. These are then used as guidelines in our final LLM node when generating new articles.
- Finally, a draft is saved to Wordpress for human editors to review or use as a starting point for their own articles.

How to use
- Update Step 1 to fetch data from your desired blog, or change it to fetch existing content in a different way.
- Update Step 5 to provide your new article instruction. For optimal output, choose topics relevant to your brand.

Requirements
- A source of text-heavy content is required to accurately break down the brand voice and article style. Don't have your own? Maybe try your competitors?
- OpenAI for LLM - though I recommend exploring other models which may give subjectively better results.
- Wordpress for the blog, but feel free to use other preferred publishing platforms.

Customising this workflow
- Ideally, you'd want to "train" your agent on material which is similar to your output, i.e. your social media posts may not get the best results from your blog content due to differing formats.
- Typically, this brand voice extraction exercise should run once and then be cached somewhere for reuse later. This would save on generation time and the overall cost of the workflow.
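Conceptually, the LLM steps boil down to "extract guidelines from existing posts, then generate with those guidelines". The following Python sketch compresses that idea; the model name, prompts, and placeholder article text are assumptions, not the exact prompts used in the template.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ask(prompt: str) -> str:
    """Single-turn helper around the chat completions API."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


articles_markdown = "...markdown of the 5 most recent blog posts..."  # from the HTTP + HTML steps

# Steps (1) and (2): learn structure/style and brand voice from existing articles.
style_guide = ask("Describe the article structure and writing style of these posts:\n"
                  + articles_markdown)
brand_voice = ask("List the brand voice characteristics used in these posts:\n"
                  + articles_markdown)

# Final step: generate a new draft constrained by both sets of guidelines.
draft = ask(f"Write a blog post about <your topic>.\n"
            f"Follow this structure and style guide:\n{style_guide}\n"
            f"Follow these brand voice guidelines:\n{brand_voice}")
print(draft)
```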
by Grzegorz Hanus
Summarize YouTube Videos & Chat About Content with GPT-4o-mini via Telegram

Description
This n8n workflow automates the process of summarizing YouTube video transcripts and enables users to interact with the content through AI-powered question answering via Telegram. It leverages the GPT-4o-mini model to generate summaries and provide insights based on the video’s transcript.

How It Works
- Input: The workflow starts by receiving a YouTube video URL. This can be submitted through:
  - A Telegram chat message.
  - A webhook (e.g., triggered by a shortcut on Apple devices).
- Transcript Extraction: The URL is processed to extract the video transcript using the custom youtubeTranscripter community node (available here). The transcript is concatenated into a single text and stored in a Google Docs document.
- Summarization: The GPT-4o-mini AI model analyzes the transcript and generates a structured summary, including:
  - A general overview.
  - Key moments.
  - Instructions (if applicable).
  The summary is then sent back to the user via Telegram.
- Interactive Q&A: Users can ask questions about the video content via Telegram. The AI retrieves the stored transcript from Google Docs and provides accurate, context-based answers, which are sent back through Telegram.

Setup Instructions
To configure this workflow, follow these steps:
- Import the Workflow: Download the provided JSON template and import it into your n8n instance.
- Install the Community Node: Install the youtubeTranscripter community node via npm:
  npm install n8n-nodes-youtube-transcription-kasha
  Important: This node requires a self-hosted n8n instance due to its external dependencies.
- Configure Nodes:
  - Webhook: Set up the webhook to receive YouTube URLs. Alternatively, configure the Telegram node if using Telegram as the input method.
  - Google Docs: Provide valid credentials to enable writing the transcript to a Google Docs document.
  - AI Model: Set up the GPT-4o-mini model for summarization and Q&A functionality.
- Test the Workflow: Send a YouTube URL via your chosen input method (Telegram or webhook) and confirm that the summary is generated and delivered correctly.

Customization
- **Language**: Adjust the AI prompts to generate summaries and answers in any desired language.
- **Output Format**: Modify the summary structure by editing the prompt in the summarization node.
- **Input Methods**: Replace the Telegram node with another messaging or input node to adapt the workflow to different platforms.

Who Can Benefit?
This template is perfect for:
- **Content Creators**: Quickly summarize video content for repurposing or review.
- **Students and Researchers**: Extract key insights from educational or informational videos efficiently.
- **General Users**: Interact with video content via AI without needing to watch the full video.

Problem Solved
This workflow simplifies video content consumption by:
- Automating the extraction and summarization of key points.
- Enabling interactive Q&A to address specific questions without rewatching the video.

Additional Notes
- **Disclaimer**: The youtubeTranscripter community node is required and only works on self-hosted n8n instances due to its reliance on external services.
- **Apple Users**: Enhance your experience with a custom shortcut to share YouTube videos directly to the workflow. Download the shortcut here.
by Luke
Automatically backs up your workflows to GitHub and generates documentation in a Notion database.
- Runs weekly and uses the "internal-infra" tag to look for new or recently modified workflows
- Uses a Notion database page to hold the workflow summary, last updated date, and a link to the workflow
- Uses OpenAI's GPT-4o-mini to generate a summary of what the workflow does
- Stores a backup of the workflow in GitHub (a private repo is recommended)
- Sends a notification to a Slack channel for new or updated workflows

Who is this for
- Anyone seeking backup of their most important workflows
- Anyone seeking version control for their most important workflows

Credentials required
- N8N: You will need an N8N credential created so the workflow can query the N8N instance to find all active workflows with the "internal-infra" tag
- Notion: You will need a Notion credential created
- OpenAI: You will need an OpenAI credential, unless you intend on rewiring this with your AI of choice (ollama, openrouter, etc.)
- GitHub: You will need a GitHub credential
- Slack: You will need a Slack credential; a Bot / access token configuration is recommended

Setup
Notion
- Create a database with the following columns. Column type is specified in [type].
  - Workflow Name [text]
  - isActive (dev) [checkbox]
  - Error workflow setup [checkbox]
  - AI Summary [text]
  - Record last update [date/time]
  - URL (dev) [text/url]
  - Workflow created at [date/time]
  - Workflow updated at [date/time]
Slack
- Create a channel for updates to be posted into
GitHub
- Create a private repo for your workflows to be exported into
N8N
- Download & install the template
- Configure the blocks to use your own N8N, Notion, OpenAI & Slack credentials
- Edit the "Set Fields" block and change the URL to that of your N8N instance (cloud or self-hosted)
- Edit the "Add to Notion" action and specify the database page you wish to update
- Edit the Slack actions to specify the channel you want Slack notifications posted to
- Edit the GitHub actions to specify the Repository Owner & Repository Name

Sample output in Notion
Workflow diagram
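As a rough picture of what the backup leg does, the sketch below pulls workflows from the n8n REST API and pushes each one to a GitHub repo via the contents API. The instance URL, repo name, and the tag filter query parameter are assumptions to adapt to your own setup (check the API reference for your n8n version).

```python
import base64
import json
import requests

N8N_URL = "https://your-n8n-instance.example.com"   # your instance (placeholder)
N8N_API_KEY = "your-n8n-api-key"                     # n8n API credential (placeholder)
GH_TOKEN = "your-github-token"                       # GitHub personal access token (placeholder)
GH_REPO = "your-org/workflow-backups"                # private repo (placeholder)

# Fetch workflows; the "tags" filter parameter is an assumption - verify against your n8n version.
workflows = requests.get(
    f"{N8N_URL}/api/v1/workflows",
    headers={"X-N8N-API-KEY": N8N_API_KEY},
    params={"tags": "internal-infra"},
    timeout=30,
).json()["data"]

for wf in workflows:
    path = f"{wf['name']}.json"
    content = base64.b64encode(json.dumps(wf, indent=2).encode()).decode()
    # Note: updating an existing file also requires the current blob "sha" in the body.
    resp = requests.put(
        f"https://api.github.com/repos/{GH_REPO}/contents/{path}",
        headers={"Authorization": f"Bearer {GH_TOKEN}"},
        json={"message": f"Backup {wf['name']}", "content": content},
        timeout=30,
    )
    resp.raise_for_status()
```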
by Derek Cheung
How it works:
Using a crew of AI agents (Senior Researcher, Visionary, and Senior Editor), this crew will automatically determine the right questions to ask to produce a detailed fundamental stock analysis.

This application has two components: a front-end and a Stock Q&A engine. The front-end is the team of agents automatically figuring out the questions to ask, and the back-end part is the ability to answer those questions with the SEC 10-K data. This template implements the Stock Q&A engine.

For the front-end of the application, you can choose one of two options:
- using CrewAI with the Replit environment (code approach)
- a fully visual approach with an n8n template (AI-powered automated stock analysis)

Setup steps:
- Use the first workflow in the template to upsert a company annual report PDF (such as an SEC 10-K filing)
- Get the URL for the Webhook in the second workflow template

CrewAI front-end:
- Youtube overview video
- Fork this AI Agent environment: Crew Agent Environment
- Set the webhook URL into the N8N_WEBHOOK_URL variable
- Set the OpenAI_API_KEY variable
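Once the Stock Q&A engine is running, any front-end (CrewAI or otherwise) only needs to POST its questions to the webhook. A minimal Python sketch, with the webhook URL and payload field name as placeholder assumptions:

```python
import requests

# The URL copied from the Webhook node in the second workflow (placeholder).
N8N_WEBHOOK_URL = "https://your-n8n-instance.example.com/webhook/stock-qa"

# The agents would generate questions like this one automatically;
# the "question" payload key is an assumption - match it to your Webhook node.
question = "What were the main revenue drivers in the latest 10-K?"

resp = requests.post(N8N_WEBHOOK_URL, json={"question": question}, timeout=60)
resp.raise_for_status()
print(resp.json())
```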
by Lakshit Ukani
One-way sync between Telegram, Notion, Google Drive, and Google Sheets

Who is this for?
This workflow is perfect for productivity-focused teams, remote workers, virtual assistants, and digital knowledge managers who receive documents, images, or notes through Telegram and want to automatically organize and store them in Notion, Google Drive, and Google Sheets, without any manual work.

What problem is this workflow solving?
Managing Telegram messages and media manually across different tools like Notion, Drive, and Sheets can be tedious. This workflow automates the classification and storage of incoming Telegram content, whether it’s a text note, an image, or a document. It saves time, reduces human error, and ensures that media is stored in the right place with metadata tracking.

What this workflow does
- **Triggers on a new Telegram message** using the Telegram Trigger node.
- **Classifies the message type** using a Switch node:
  - Text messages are appended to a Notion block.
  - Images are converted to base64, uploaded to imgbb, and then added to Notion as toggle-image blocks.
  - Documents are downloaded, uploaded to Google Drive, and the metadata is logged in Google Sheets.
- **Sends a completion confirmation** back to the original Telegram chat.

Setup
- Telegram Bot: Set up a bot and get the API token.
- Notion Integration: Share access to your target Notion page/block. Use the Notion API credentials and the block ID where content should be appended.
- Google Drive & Sheets: Connect the relevant accounts. Select the destination folder and spreadsheet.
- imgbb API: Obtain a free API key from imgbb.
- Replace placeholder credential IDs and asset URLs as needed in the imported workflow.

How to customize this workflow to your needs
- **Change Storage Locations**: Update the Notion block ID or Google Drive folder ID. Switch the Google Sheet to log in a different file or sheet.
- **Add More Filters**: Use additional Switch rules to handle other Telegram message types (like videos or voice messages).
- **Modify Response Message**: Personalize the Telegram confirmation text based on the file type or sender.
- **Use a different image hosting service** if you don’t want to use imgbb.
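For reference, the image branch's "convert to base64 and upload to imgbb" step corresponds to a single API call. A minimal Python sketch, assuming a free imgbb API key and a local image file standing in for the Telegram download:

```python
import base64
import requests

IMGBB_API_KEY = "your-imgbb-key"   # free key from imgbb (placeholder)


def upload_to_imgbb(image_bytes: bytes) -> str:
    """Upload an image to imgbb and return its public URL."""
    resp = requests.post(
        "https://api.imgbb.com/1/upload",
        params={"key": IMGBB_API_KEY},
        data={"image": base64.b64encode(image_bytes).decode()},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]["url"]


# Example: in the workflow, image_bytes comes from the Telegram file download step.
with open("photo.jpg", "rb") as f:
    print(upload_to_imgbb(f.read()))
```

The returned URL is what gets embedded into the Notion toggle-image block.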
by Jimleuk
This n8n template takes a video and extracts frames from it which are used with a multimodal LLM to generate a script. The script is then passed to the same multimodal LLM to generate a voiceover clip. This template was inspired by "Processing and narrating a video with GPT's visual capabilities and the TTS API".

How it works
- The video is downloaded using the HTTP node.
- A Python code node is used to extract the frames using OpenCV.
- A Loop node is used to batch the frames for the LLM to generate partial scripts.
- All partial scripts are combined to form the full script, which is then sent to OpenAI to generate audio from it.
- The finished voiceover clip is uploaded to Google Drive.

Sample the finished product here: https://drive.google.com/file/d/1-XCoii0leGB2MffBMPpCZoxboVyeyeIX/view?usp=sharing

Requirements
- OpenAI for LLM
- Ideally, a mid-range (16GB RAM) machine for acceptable performance!

Customising this workflow
- For larger videos, consider splitting them into smaller clips for better performance.
- Use a multimodal LLM which fully supports video, such as Google's Gemini.
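A minimal sketch of the frame-extraction step is shown below. It is not the exact code node from the template, but it illustrates the same idea: sample frames with OpenCV at a fixed interval and base64-encode them so they can be batched to the multimodal LLM. The sampling interval and file name are assumptions.

```python
import base64
import cv2  # pip install opencv-python


def extract_frames(video_path: str, every_n_seconds: float = 2.0) -> list[str]:
    """Sample frames from a video and return them as base64-encoded JPEGs."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30
    step = max(1, int(fps * every_n_seconds))  # keep one frame every N seconds
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            ok, buffer = cv2.imencode(".jpg", frame)
            if ok:
                frames.append(base64.b64encode(buffer).decode())
        index += 1
    cap.release()
    return frames


# Each base64 frame can then be batched and passed to the multimodal LLM.
print(len(extract_frames("downloaded_video.mp4")), "frames extracted")
```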
by Yang
📽️ What this workflow does
This workflow turns a user-submitted form with country or animal names into a cinematic video with animated scenes and immersive ambient audio. Using GPT-4 for prompt generation, Dumpling AI for visual creation, Replicate for motion animation, ElevenLabs for sound generation, and Creatomate for video stitching, it fully automates video production, from raw idea to rendered file.

🎯 What problem is this solving?
Creating engaging multimedia content can take hours. This workflow automates the entire process of ideation, design, and rendering of high-quality cinematic clips, eliminating the need for manual video editing or audio production.

👥 Who is this for?
- Content creators and educators
- Digital artists and storytellers
- Marketers or YouTubers creating short-form visual content
- No-code/AI automation enthusiasts

⚙️ Setup Instructions
✅ Step 1: Google Sheet
- Create a Google Sheet with two columns:
  - Title
  - Generated videos
- Update the Sheet ID and tab name in the final node.
✅ Step 2: Google Drive
- Create two folders: one for ambient audio tracks, one for final generated videos.
- Update the folder IDs in both Google Drive nodes.
✅ Step 3: Credentials Setup
Make sure all your API tokens are saved as credentials in n8n. This workflow uses the following integrations:
- OpenAI (GPT-4)
- Dumpling AI (via HTTP header)
- Replicate.com
- ElevenLabs
- Google Drive
- Google Sheets
- Creatomate
✅ Step 4: Form Fields
Ensure your trigger form includes these fields:
- Title
- Country 1, Country 2, Country 3, Country 4
- Style (e.g., cinematic, epic, fantasy, noir, etc.)

🧩 How it works
- User Form Submission: Kicks off the workflow with the required inputs.
- Format Inputs: Combines all 4 countries/animals into a single array.
- GPT-4: Generate Visual Prompts: Uses GPT-4 to create rich cinematic descriptions per animal/country.
- Dumpling AI: Create Images: Each description becomes a high-quality visual.
- GPT-4: Create Motion Prompts: Each image prompt is rewritten into motion-based video prompts.
- Replicate: Animate: Prompts and images are sent to Replicate's model for animation.
- GPT-4: Generate Sound Prompt: Based on the style, GPT-4 creates an ambient sound idea.
- ElevenLabs: Create Ambient Audio: Audio is generated and uploaded to Google Drive.
- Creatomate: Stitch All Media: All 4 motion videos and the audio track are stitched into one cinematic output.
- Upload to Google Drive + Log to Sheet: The final video is saved in Drive and logged in Sheets with its title and link.

🛠️ How to Customize
- 🎨 Modify GPT prompts for different themes (e.g., horror, fantasy, sci-fi).
- 🧠 Swap animals for characters, objects, or locations.
- 🎧 Replace ambient sound with ElevenLabs voiceovers or music.
- 📂 Add metadata logging (generation time, duration, tags).
- 🧪 Try using alternative video tools like Pika Labs or Runway ML.

✅ Requirements
- n8n self-hosted or cloud instance
- Active accounts for: OpenAI, Dumpling AI, Replicate, ElevenLabs, Creatomate
- Google credentials set up for Drive + Sheets

This is a perfect end-to-end automation that showcases the power of AI + automation for video storytelling.
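To give a feel for the animation step, here is a minimal Python sketch of a Replicate prediction request. The model version hash and input field names are hypothetical placeholders; Replicate inputs differ per model, so adapt them to whichever image-to-video model your workflow uses.

```python
import requests

REPLICATE_TOKEN = "r8_your_token_here"                       # Replicate API token (placeholder)
MODEL_VERSION = "<image-to-video-model-version-hash>"        # hypothetical - use your chosen model

payload = {
    "version": MODEL_VERSION,
    "input": {
        # Field names are assumptions; they depend on the specific model.
        "image": "https://example.com/scene-1.png",                    # image from Dumpling AI
        "prompt": "slow cinematic pan across the landscape",           # GPT-4 motion prompt
    },
}

resp = requests.post(
    "https://api.replicate.com/v1/predictions",
    headers={"Authorization": f"Bearer {REPLICATE_TOKEN}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
# Replicate predictions run asynchronously; poll the returned URL until status is "succeeded".
print(resp.json()["urls"]["get"])
```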
by Nick Saraev
AI Facebook Ad Spy Tool with Apify, OpenAI, Gemini & Google Sheets

Categories: Competitive Intelligence, Marketing Automation, AI Analysis

This workflow creates a comprehensive Facebook ad spy tool that scrapes competitor ads from Facebook's ad library and generates detailed analysis with rewritten versions. The system processes text, image, and video ads using different AI models, providing strategic intelligence for PPC agencies and marketers. Built to be sold as a premium service for $2,000+, this tool combines web scraping, multi-modal AI analysis, and competitor intelligence into one powerful automation.

Benefits
- **Complete Competitive Intelligence** - Analyze competitor strategies across all ad formats (text, image, video)
- **Multi-Modal AI Analysis** - Uses GPT-4 Vision for images and Gemini for video content understanding
- **Automated Ad Rewriting** - Generates inspired variations of successful competitor ads
- **Quality Filtering** - Targets high-performing advertisers with significant page likes
- **Scalable Processing** - Handle hundreds of competitor ads with detailed strategic analysis
- **Premium Service Potential** - Easily sold to agencies and marketers for $2,000+ implementations

How It Works
Facebook Ad Library Scraping:
- Connects to Facebook's public ad library through Apify's specialized scraper
- Searches for active ads using customizable keywords and targeting parameters
- Extracts comprehensive ad data including creative assets, targeting info, and engagement metrics
- Filters results to focus on high-quality advertisers with substantial page followings

Intelligent Content Routing:
- Automatically categorizes ads into text-only, image-based, or video content types
- Routes each ad type to specialized processing pipelines optimized for that content format
- Ensures appropriate AI models are used for each type of creative analysis
- Maintains data integrity while processing different content formats simultaneously

Advanced Video Analysis Pipeline:
- Downloads video ads directly from Facebook's content delivery network
- Uploads videos to Google Drive for temporary storage and processing
- Initiates Gemini AI video upload sessions for multi-modal analysis
- Uses Gemini's advanced video understanding to generate detailed content descriptions
- Processes video narrative, visual elements, messaging strategy, and target audience insights

Image and Text Processing:
- Analyzes image ads using GPT-4 Vision for comprehensive visual content understanding
- Processes text-only ads using GPT-4 for messaging strategy and copywriting analysis
- Identifies key persuasion techniques, target demographics, and messaging frameworks
- Generates detailed competitive intelligence reports for each ad format

Strategic Intelligence Generation:
- Creates comprehensive summaries analyzing competitor messaging strategies and target audiences
- Generates rewritten ad copy that captures successful elements while avoiding direct copying
- Produces recreation prompts for images and videos that can be used with AI generation tools
- Organizes all insights in a structured Google Sheets database for easy analysis and reporting

Required Setup Configuration
Apify Integration:
- Sign up for an Apify account and obtain an API key
- Replace <your-apify-api-key-here> in the "Run Ad Library Scraper" node
- Customize Facebook Ad Library search URLs with your target keywords and regions

AI Service Configuration:
- **OpenAI API**: Set up for text analysis and image understanding with GPT-4 Vision
- **Gemini API**: Configure for advanced video content analysis and description. Replace <your-gemini-api-key-here> in all Gemini-related nodes

Google Services Setup:
- **Google Drive**: Configure OAuth for temporary video storage during Gemini processing
- **Google Sheets**: Create a results database with the proper column structure for ad intelligence storage

Facebook Ad Library Search Configuration:
- Customize the search parameters in the Apify scraper

Google Sheets Database Structure:
Create a sheet with these columns:
- ad_archive_id - Unique Facebook ad identifier
- page_id - Advertiser's Facebook page ID
- page_name - Advertiser's business name
- page_url - Link to advertiser's Facebook page
- type - Ad format (text, image, or video)
- date_added - When the ad was analyzed
- summary - Detailed competitive intelligence analysis
- rewritten_ad_copy - AI-generated inspired version
- image_prompt - Description for recreating image ads
- video_prompt - Description for recreating video ads

Business Use Cases
- PPC Agencies - Offer comprehensive competitor analysis services to clients for strategic advantage
- Marketing Teams - Research competitor strategies and messaging before launching new campaigns
- E-commerce Businesses - Analyze successful ads in your industry for creative inspiration
- SaaS Companies - Study how competitors position their products and target audiences
- Course Creators - Research educational content marketing approaches and messaging strategies
- Affiliate Marketers - Identify successful promotional strategies and high-converting ad formats

Difficulty Level: Advanced
Estimated Build Time: 3-4 hours
Monthly Operating Cost: ~$200 (Apify + OpenAI + Gemini + Google Workspace APIs)

Watch My Complete Build Process
Want to see exactly how I built this entire Facebook ad spy system from scratch? I walk through the complete development process live, including API integrations, multi-modal AI setup, error handling, and the exact business strategy for selling this as a premium service.

🎥 Watch My Live Build: "Build A Facebook Ads Spy Tool With N8N (Sell for $2k+)"

This comprehensive tutorial shows the real development process - including complex API orchestration, multi-modal AI integration, and proven strategies for monetizing competitive intelligence systems.
Set Up Steps
Apify Scraper Configuration:
- Set up an Apify account and configure the Facebook Ad Library scraper
- Customize search parameters for your target industries and regions
- Configure result limits and filtering parameters for quality control
- Test the scraper with sample searches to verify data quality

Multi-Modal AI Setup:
- Configure OpenAI API credentials for text and image analysis
- Set up Gemini API access for advanced video content understanding
- Configure appropriate rate limits and error handling for API stability
- Test AI analysis with sample ads to optimize prompt quality

Google Services Integration:
- Set up Google Drive OAuth for temporary video storage during processing
- Create the Google Sheets database with the proper column structure for intelligence storage
- Configure sharing permissions and access controls for team collaboration
- Test the complete data flow from scraping to final intelligence reports

Quality Control and Filtering:
- Configure the page likes threshold in the "Filter For Likes" node (1,000+ is recommended for quality)
- Adjust the content routing logic in the Switch node based on your analysis needs
- Set up error handling and retry logic for reliable large-scale processing
- Test the complete workflow with various ad types to ensure proper routing

Advanced Customization:
- Customize AI prompts for your specific industry analysis needs
- Configure additional filtering criteria beyond page likes
- Set up automated scheduling for regular competitor monitoring
- Add custom fields to the database for tracking specific competitive metrics

Advanced Features
Scale the system with additional capabilities:
- Industry-Specific Analysis - Customize prompts and filters for different verticals
- Trend Tracking - Monitor messaging changes over time for strategic insights
- Performance Correlation - Cross-reference ad engagement with business outcomes
- Alert Systems - Notify when competitors launch new campaign types
- Custom Reporting - Generate client-ready intelligence reports automatically
- Integration Extensions - Connect to CRM and marketing platforms for strategic workflow

Important Considerations
- API Rate Limits - Built-in delays and error handling prevent service interruptions
- Content Rights - The system generates inspired variations, not direct copies, for legal compliance
- Data Storage - Organize the intelligence database for easy client reporting and analysis
- Scalability - Batch processing handles hundreds of ads efficiently without blocking
- Quality Assurance - Filtering logic ensures analysis focuses on successful, high-quality advertisers

Why This System Works
The competitive advantage lies in comprehensive multi-modal analysis:
- Complete format coverage - analyzes text, image, and video ads with appropriate AI models
- Strategic depth - goes beyond basic scraping to provide actionable intelligence
- Automation scale - processes competitor research that would take weeks manually
- Premium positioning - advanced AI analysis justifies higher service pricing
- Immediate value - clients receive actionable insights within hours of setup

Check Out My Channel
For more advanced automation systems that generate real business results and premium service opportunities, explore my YouTube channel where I share proven strategies for building profitable automation businesses.
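As a reference for the scraping step, the sketch below runs an Apify actor synchronously and applies a page-likes quality filter to the returned items. The actor ID, input fields, and the page_like_count result field are placeholder assumptions; use the actor and field names from your own "Run Ad Library Scraper" node.

```python
import requests

APIFY_TOKEN = "your-apify-api-key"              # from your Apify account (placeholder)
ACTOR_ID = "someuser~facebook-ads-scraper"      # hypothetical actor ID - use the one in the workflow

run_input = {
    # Search URL copied from Facebook's Ad Library with your keywords/region (placeholder).
    "startUrls": [{"url": "https://www.facebook.com/ads/library/?q=your+keyword"}],
    "maxItems": 100,
}

# Runs the actor synchronously and returns the scraped ads as JSON dataset items.
resp = requests.post(
    f"https://api.apify.com/v2/acts/{ACTOR_ID}/run-sync-get-dataset-items",
    params={"token": APIFY_TOKEN},
    json=run_input,
    timeout=300,
)
resp.raise_for_status()
ads = resp.json()

# Quality filter equivalent to the "Filter For Likes" node (field name is an assumption).
high_quality = [ad for ad in ads if ad.get("page_like_count", 0) >= 1000]
print(len(high_quality), "ads passed the page-likes filter")
```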