by Lucas Walter
Transform simple ideas into viral-ready Bigfoot vlogs! This automated workflow creates charming 8-scene video content featuring "Sam" the Bigfoot - a lovable, outdoorsy character inspired by popular YouTube adventure channels.

How It Works

The workflow transforms your creative concept into professional video content through three automated stages:
1. Story Generation - AI creates an 8-scene narrative arc featuring Sam the Bigfoot, complete with character-consistent dialogue and engaging plot development
2. Human Approval - Review and approve the generated storyline via Slack before proceeding to video production
3. Video Production - Each scene is automatically converted into an 8-second video clip using Google's VEO 3 AI, then uploaded to Google Drive for easy access

Required Credentials

- Anthropic API - Add your Claude API key for story generation
- FAL API - Configure your FAL.ai key for VEO 3 video generation
- Slack OAuth - Set up a Slack app with channel permissions for approvals
- Google Drive OAuth - Connect your Google Drive for video storage

Configuration Steps

1. Import the workflow into your n8n instance
2. Update the Slack channel ID in the notification nodes to match your desired channel
3. Set the Google Drive folder - Update the folder ID where videos should be stored
4. Test the form trigger - The workflow starts with a web form for video ideas
5. Customize the character (optional) - Modify Sam's personality in the narrative prompts
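The fan-out from story to clips can be pictured as one request payload per scene. This is a hypothetical sketch only: the field names ("prompt", "duration") are illustrative assumptions, not the actual FAL VEO 3 API schema.

```python
# Hypothetical per-scene payload builder. Field names are assumptions for
# illustration -- check the FAL.ai VEO 3 docs for the real request schema.

def build_scene_payloads(scenes, clip_seconds=8):
    """Turn the approved scene descriptions into one request payload each."""
    return [
        {
            "prompt": f"Sam the Bigfoot, scene {i + 1} of {len(scenes)}: {scene}",
            "duration": clip_seconds,
        }
        for i, scene in enumerate(scenes)
    ]

payloads = build_scene_payloads(
    ["Sam wakes up in the forest", "Sam finds a river"]
)
```

In the real workflow, the 8 approved scenes flow through this step in a loop, and each payload becomes one 8-second clip.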
by n8n Team
Who this template is for

This template is for researchers, students, professionals, or content creators who need to quickly extract and summarize key insights from PDF documents using AI-powered analysis.

Use case

Converting lengthy PDF documents into structured, digestible summaries organized by topic with key insights. This is particularly useful for processing research papers, reports, whitepapers, or any document where you need to quickly understand the main topics and extract actionable insights without reading the entire document.

How this workflow works

1. Document Upload: Receives PDF files through a POST endpoint at /ai_pdf_summariser
2. File Validation: Checks that the PDF is under 10MB and has fewer than 20 pages to meet API limits
3. Content Extraction: Extracts text content from the PDF file
4. AI Analysis: Uses OpenAI's GPT-4o-mini to analyze the document and break it down into distinct topics
5. Insight Generation: For each topic, generates 3 key insights with titles and detailed explanations
6. Format Response: Converts the structured data into markdown format for easy reading
7. Return Results: Provides the formatted summary along with document metadata (file hash)

Set up steps

1. Configure OpenAI API: Set up your OpenAI credentials for the GPT-4o-mini model
2. Deploy Webhook: The workflow automatically creates a POST endpoint at /ai_pdf_summariser
3. Test Upload: Send a PDF file to the endpoint using a multipart/form-data request
4. Adjust Limits: Modify the file size (10MB) and page count (20) validation limits if needed based on your requirements
5. Customize Prompts: Update the system prompt in the Information Extractor node to change how topics and insights are generated

The workflow includes comprehensive error handling for file validation failures (400 error) and processing errors (500 error), ensuring reliable operation even with problematic documents.
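The validation step and its 400 branch can be sketched in a few lines. This is a minimal illustration of the limits described above, assuming a function shape of my own invention, not the workflow's actual node logic.

```python
# Minimal sketch of the file-validation step, mirroring the workflow's
# stated limits (10 MB, 20 pages). Function and message text are
# illustrative assumptions, not the template's actual implementation.

MAX_BYTES = 10 * 1024 * 1024  # 10MB upload limit
MAX_PAGES = 20                # page-count limit to stay within API limits

def validate_pdf(size_bytes, page_count):
    """Return (ok, http_status, message) like the workflow's 400 branch."""
    if size_bytes > MAX_BYTES:
        return (False, 400, "PDF exceeds 10MB limit")
    if page_count > MAX_PAGES:
        return (False, 400, "PDF exceeds 20-page limit")
    return (True, 200, "ok")
```

Raising or lowering the two constants corresponds to the "Adjust Limits" setup step.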
by Lugnicca
Spotify to YouTube Playlist Synchronization

A workflow that keeps a YouTube playlist in sync with a Spotify playlist, featuring smart video matching and persistent synchronization.

Key Features

- **One-way Sync**: Spotify playlist → YouTube playlist (additions and deletions)
- **Continuous Monitoring**: Automatic synchronization (every hour by default; any interval can be configured)
- **Smart Video Matching**: Considers video length and content relevance
- **Auto-Recovery**: Automatically handles deleted YouTube videos
- **Database Backup**: Persistent storage using Supabase

Prerequisites

- A Supabase project with the following table structure:

```sql
CREATE TABLE IF NOT EXISTS musics (
  id TEXT PRIMARY KEY,
  title TEXT NOT NULL,
  artist TEXT NOT NULL,
  duration INT8 NOT NULL,
  youtube_video_id TEXT,
  to_delete BOOLEAN DEFAULT FALSE
);
```

- An empty YouTube playlist (recommended, as duplicates are not handled)
- Configured credentials for the YouTube, Spotify, and Supabase APIs
- The same values set in every "variables" node (variables, variables1, variables2, variables3, variables4)

Activate the workflow!
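The "smart video matching" idea can be sketched as follows: among YouTube search results, prefer the one whose length is closest to the Spotify track's duration. The 5-second tolerance here is an assumption for the example, not a value taken from the template.

```python
# Illustrative duration-based matcher. The tolerance value is an assumed
# default, not the template's actual setting.

def pick_best_match(track_duration_s, candidates, tolerance_s=5):
    """candidates: list of (video_id, duration_s) tuples.
    Returns the closest-length video_id within tolerance, or None."""
    within = [
        (abs(dur - track_duration_s), vid)
        for vid, dur in candidates
        if abs(dur - track_duration_s) <= tolerance_s
    ]
    return min(within)[1] if within else None

# A 3:32 (212 s) track: the 210 s upload wins; the 5-minute one is rejected.
best = pick_best_match(212, [("a1", 305), ("b2", 216), ("c3", 210)])
```

Returning None when nothing is within tolerance is what lets the workflow flag unmatched tracks (the to_delete / youtube_video_id columns) instead of picking a bad video.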
by SalmonRK-AI
📘 Multi-Photo Facebook Post (Windows Directory) – How to Use

✅ Requirements

To run this automation, make sure you have the following:
- n8n installed on your local Windows machine
- Cloudinary or any other file hosting service for uploading image files
- Facebook Page Access Token with the required permissions (pages_manage_posts, pages_read_engagement, pages_show_list, etc.)

🚀 How to Use

1. Import the provided n8n workflow template into your n8n instance.
2. Verify the image directory path – ensure that the images you want to post are stored in a local folder (e.g. E:\Autopost-media\YourPage\Images).
3. Check the caption and hashtag files: description.txt (for the post message) and hashtag.txt (for additional tags).
4. Set your Facebook credentials – insert your Facebook Page Access Token in the designated credential field in the workflow.

⚙️ How It Works (Workflow Logic)

1. Read Text Files – The workflow reads description.txt and hashtag.txt from the local directory. These are combined to form the message body for the Facebook post.
2. Select Images to Post – The Limit node defines how many images to post per run (e.g. 3 images). Selected image files are uploaded to a file server (like Cloudinary) to obtain public URLs.
3. Post to Facebook (Multi-Photo) – A multi-photo post is created using the uploaded image URLs and the composed message.
4. Move Posted Images – After the post is successfully published, the original image files are moved to a new folder. The destination folder is automatically created using the current date (e.g. E:\Autopost-media\YourPage\Images\20250614).
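Two of the steps above are simple enough to sketch directly: combining the two text files into the post message, and building the date-stamped archive folder name. The joining format ("blank line between caption and hashtags") is an assumption for illustration.

```python
# Illustrative sketch of steps 1 and 4 of the workflow logic. The exact
# caption/hashtag join format is an assumption.
from datetime import date

def compose_message(description, hashtags):
    """Combine description.txt and hashtag.txt into one post body."""
    return f"{description.strip()}\n\n{hashtags.strip()}"

def archive_folder(base, posted_on):
    """Build the YYYYMMDD destination folder for posted images."""
    return f"{base}\\{posted_on.strftime('%Y%m%d')}"

msg = compose_message("New arrivals in store!\n", "#shop #new")
folder = archive_folder(r"E:\Autopost-media\YourPage\Images", date(2025, 6, 14))
```

The date-stamped folder keeps already-posted images out of the next run's selection, which is how the workflow avoids re-posting.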
by Marko
**Content engine that ships fresh, SEO-ready articles every single day.**

Workflow:

⸻

Layout Blueprint
• Purpose: Define content structure before writing begins.
• What’s Included: search intent mapping, internal link planning, and call-to-action (CTA) placement.
• Benefit: Ensures consistency, SEO alignment, and content goals are baked in early.

⸻

AI-Assisted Drafting
• Tool: GPT generates the first draft.
• Editor’s Role: Focus on depth and accuracy; align tone and style with existing site content.
• Context-Aware: Pulls insights from top-ranking articles already live on the site.

⸻

SEO Validation
• Automated checks for: keyword coverage, readability scoring, schema markup, and internal/external link quality.
• Outcome: Each piece is validated before hitting publish.

⸻

Media Production
• Process: AI auto-generates relevant images.
• Delivery: Visual assets are automatically added to the CMS library.

⸻

Optional Human Review
• Team feedback via Slack or Teams if needed.

⸻

Automated Publishing
• Action: Instantly publishes content to Webflow once approved.
• Result: A fully streamlined pipeline from draft to live with minimal manual steps.
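The SEO validation stage can be pictured with one of its checks, keyword coverage, in miniature. The 80% pass threshold is a made-up illustration, not the workflow's actual rule.

```python
# Rough sketch of a keyword-coverage check, one of the automated SEO
# validations. The 0.8 threshold is an assumption for the example.

def keyword_coverage(article_text, keywords, threshold=0.8):
    """Return (fraction of target keywords present, pass/fail)."""
    text = article_text.lower()
    hits = sum(1 for kw in keywords if kw.lower() in text)
    score = hits / len(keywords) if keywords else 1.0
    return score, score >= threshold

score, passed = keyword_coverage(
    "Our guide to email automation covers deliverability and scheduling.",
    ["email automation", "deliverability", "open rates"],
)
```

A draft failing a check like this would loop back to the drafting stage instead of proceeding to publish.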
by Chris Jadama
YouTube Chapter Auto-Description with AI

This n8n template automatically adds structured timestamp chapters to your latest YouTube video’s description using your RSS feed, SupaData for transcript extraction, and an AI tool for chapter generation. Ideal for creators who want every video to include chapter markers without doing it manually.

Good to Know

- SupaData extracts full transcripts from YouTube videos via URL.
- The AI chapter generator converts long transcripts into formatted timestamps with short titles.
- This workflow edits the existing video description and appends the chapters to the bottom.

How It Works

1. The RSS Feed Trigger detects new uploads from your YouTube channel.
2. The workflow checks Airtable to prevent duplicate processing.
3. The transcript is fetched using the SupaData API.
4. The total video duration is extracted from the transcript.
5. AI is prompted to generate well-formatted chapter timestamps.
6. The existing description is fetched from YouTube.
7. The chapters are appended and pushed back via the YouTube API.

How to Use

- Start with the Manual Trigger to test the setup.
- Replace it with the RSS Trigger once you're ready for automation.
- Chapters are added only if the video hasn't been processed before.

Requirements

- **YouTube OAuth2** credentials in n8n
- **SupaData API Key**
- **Airtable account** (for optional deduplication logic)

Customizing This Workflow

- Change the chapter format, or instruct the AI to use emojis, bold titles, or include sections like "sponsor" or "Q&A".
- Replace the RSS Trigger with a webhook if using a different publishing process.
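The chapter format the AI is asked to emit follows YouTube's convention: second offsets rendered as timestamps, one per line. A sketch of that formatting (titles and offsets are invented for the example):

```python
# Illustrative helper showing the timestamp format YouTube recognises for
# chapters. The chapter titles and offsets here are made up.

def format_chapters(chapters):
    """chapters: list of (start_seconds, title) -> newline-joined lines."""
    lines = []
    for start, title in chapters:
        h, rem = divmod(start, 3600)
        m, s = divmod(rem, 60)
        stamp = f"{h}:{m:02d}:{s:02d}" if h else f"{m}:{s:02d}"
        lines.append(f"{stamp} {title}")
    return "\n".join(lines)

block = format_chapters([(0, "Intro"), (95, "Setup"), (3700, "Q&A")])
```

Note the first chapter must start at 0:00 for YouTube to render chapter markers, which is why the AI prompt should enforce that.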
by Kean
How it works

1. Input your proposal basics - Manually enter the core details and key points for your proposal
2. Dual AI processing - OpenAI expands your inputs into a comprehensive draft, then Claude refines it for clarity and readability
3. Automated document output - The workflow copies your Google Doc template, replaces all variables with the AI-generated content, and delivers your finished proposal

Set up steps

Estimated time: 10-15 minutes

1. Create an OpenRouter account - Sign up at OpenRouter to get API access for Claude
2. Set up your Google Doc template - Create a template document with placeholder variables (variable names are listed in the 'Update proposal' node)
3. Configure API credentials - Add your OpenAI and OpenRouter API keys to the workflow
4. Connect Google Drive - Authenticate your Google account to enable document creation

💡 Detailed configuration instructions and variable naming conventions can be found in the sticky notes within the workflow.
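The variable-replacement step amounts to swapping placeholders in the copied template for AI-generated text. A minimal sketch, assuming a {{name}} placeholder style and invented variable names (the real list lives in the 'Update proposal' node):

```python
# Hypothetical placeholder replacement. The {{...}} syntax and the variable
# names are illustrative; check the 'Update proposal' node for the real ones.

def fill_template(template, values):
    """Replace every {{name}} placeholder with its generated text."""
    for name, text in values.items():
        template = template.replace("{{" + name + "}}", text)
    return template

doc = fill_template(
    "Proposal for {{client_name}}\n\nScope: {{scope_summary}}",
    {"client_name": "Acme Co", "scope_summary": "Website redesign"},
)
```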
by Michael A Putra
🧠 Automated Resume & Cover Letter Generator

This project is an automation workflow that generates a personalized resume and cover letter for each job listing.

🚀 Features

- Automated Resume Crafting – Generates an HTML resume from your data, hosts it live on GitHub Pages, converts it to PDF using Gotenberg, and saves it to Google Drive.
- Automated Cover Letter Generation – Uses an LLM to create a tailored cover letter for each job listing.
- Simple Input Database Agent – Stores your experience in an n8n Data Table with the following fields: role, summary, task, skills, tools, industry. The main agent pulls this data using RAG (Retrieval-Augmented Generation) to personalize the outputs.
- One-Time GitHub Setup – Initializes a blank GitHub repository to host HTML files online, allowing Gotenberg to access and convert them.

🧩 Tech Stack

- **Gotenberg** – Converts HTML to PDF
- **GitHub Pages** – Hosts live HTML files
- **n8n** – Handles data tables and workflow automation
- **LLM (OpenAI / Cohere / etc.)** – Generates cover letters
- **Google Drive** – Stores the final PDFs

⚙️ Installation & Setup

1. Create a GitHub Repository – This repo will host your HTML resume through GitHub Pages.
2. Set the Webhook URL – In the notify-n8n.yml file, replace the placeholder webhook URL with your own n8n webhook endpoint.
3. Create the n8n Data Table – Add the following columns: role | summary | task | skills | tools | industry
4. Create a Google Spreadsheet – Add these columns: company | cover_letter | resume
5. Install Gotenberg – Follow the installation instructions on the Gotenberg GitHub repository: https://github.com/thecodingmachine/gotenberg
6. Customize the HTML Template – Modify the HTML resume to your liking. You can use an LLM to locate and edit specific sections.
7. Add Authentication and Link Your GitHub Repo – Ensure your workflow has permission to push updates to your GitHub Pages branch.
8. Run the Workflow – Once everything is connected, trigger the workflow to automatically generate and save personalized resumes and cover letters.

📝 How to Use

1. Copy and paste the job listing description into the Telegram bot.
2. Wait for the "Done" notification before submitting another job. Do not use the bot again until the notification appears.
3. The process usually takes a few minutes to complete.

✅ Notes

This workflow is designed to save time and personalize your job applications efficiently. By combining n8n automation, LLMs, and open-source tools like Gotenberg, you can maintain full control over your data while generating high-quality resumes and cover letters for every job opportunity.
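The Data Table rows the agent retrieves can be pictured as plain records that get flattened into prompt context. The field names match the columns listed in the setup; the values and the flattening format are invented for the example.

```python
# Illustrative shape of one experience row from the n8n Data Table, and an
# assumed way of flattening it into LLM prompt context. Values are made up.

experience_row = {
    "role": "Backend Engineer",
    "summary": "Built payment services at scale",
    "task": "Designed an idempotent billing API",
    "skills": "Python, SQL, distributed systems",
    "tools": "n8n, PostgreSQL, Docker",
    "industry": "Fintech",
}

def to_context_snippet(row):
    """Flatten a row into a text snippet an LLM prompt can consume."""
    return "; ".join(f"{k}: {v}" for k, v in row.items())

snippet = to_context_snippet(experience_row)
```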
by Richard Black
Generate GitHub Release Notes with AI

Automatically generate GitHub release notes using AI. This workflow compares your latest two GitHub releases, summarises the changes, and produces a clean, ready-to-paste changelog entry. It’s ideal for automating GitHub Releases, versioning workflows, and keeping your documentation or CHANGELOG.md up to date without manual editing.

What this workflow does

- Listens for newly published GitHub Releases.
- Fetches and compares the latest two GitHub release versions.
- Uses an AI Chat Model to summarise changes and generate structured release notes.
- Outputs clean, reusable release note content for GitHub, documentation, or CI/CD pipelines.

How it works

1. GitHub Trigger detects a new published release.
2. Release detail nodes extract the latest tag, body, and repository metadata.
3. Comparison logic fetches the previous release and prepares a diff.
4. Chat Model nodes (via OpenRouter) generate both a summary and a final, formatted release note.

Requirements / Connections

- GitHub OAuth credential configured in n8n.
- OpenRouter API key connected to the Chat Model nodes.

Setup instructions

1. Import the template.
2. Select your GitHub OAuth connection in all GitHub nodes.
3. Add your OpenRouter credential to the Chat Model nodes.
4. (Optional) Adjust the AI prompts to customise tone or formatting.

Output

The workflow produces:
- A concise summary of differences between the last two GitHub releases.
- A polished AI-generated GitHub release note ready to publish.

Customisation ideas

- Push generated notes directly into a CHANGELOG.md or documentation repo.
- Send release summaries to Slack or Teams.
- Include commit messages, PR titles, or labels for deeper analysis.
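The comparison step maps naturally onto GitHub's REST compare endpoint, which diffs two tags. A sketch of building that URL, with placeholder owner, repo, and tag values:

```python
# Sketch of the comparison step. The owner/repo/tags are placeholders; the
# compare endpoint itself is GitHub's public REST API
# (GET /repos/{owner}/{repo}/compare/{base}...{head}).

def compare_url(owner, repo, previous_tag, latest_tag):
    """Build the GitHub compare-API URL for two release tags."""
    return (
        f"https://api.github.com/repos/{owner}/{repo}"
        f"/compare/{previous_tag}...{latest_tag}"
    )

url = compare_url("example-org", "example-repo", "v1.1.0", "v1.2.0")
```

The response includes commits and changed files between the two tags, which is the raw material the Chat Model summarises.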
by Nijan
This workflow turns Slack into your content control hub and automates the full blog creation pipeline — from sourcing trending headlines, validating topics, drafting posts, and preparing content for your CMS. With one command in Slack, you can source news from RSS feeds, refine them with Gemini AI, generate high-quality blog posts, and get publish-ready output — all inside a single n8n workflow.

⚙️ How It Works

1. Trigger in Slack – Type start in a Slack channel to fetch trending headlines. Headlines are pulled from your configured RSS feeds.
2. Topic Generation (Gemini AI) – Gemini rewrites RSS headlines into unique, non-duplicate topics. Slack displays these topics in a numbered list (e.g., reply with 2 to pick topic 2).
3. Content Validation – When you reply with a number, Gemini validates and slightly rewrites the topic to ensure originality. Slack confirms the selected topic back to you.
4. Content Creation – Gemini generates a LinkedIn/blog-style draft: a strong hook introduction, 3–5 bullet insights, and a closing takeaway and CTA. It optionally suggests asset ideas (e.g., image, infographic).
5. CMS-Ready Output – The final draft is structured for publishing (markdown or plain text). You can expand this workflow to automatically send the output to your CMS (WordPress, Ghost, Notion, etc.).

🛠 Setup Instructions

1. Connect your Slack Bot to n8n.
2. Configure your RSS Read nodes with feeds relevant to your niche.
3. Add your Gemini API credentials in the AI node.
4. Run the workflow: type start in Slack → see trending topics. Reply with a number (e.g., gen 3) → get a generated blog draft in the same Slack thread.

🎛 Customization Options

• Change RSS sources to match your industry.
• Adjust Gemini prompts for tone (educational, casual, professional).
• Add moderation filters (skip sensitive or irrelevant topics).
• Connect the final output step to your CMS, Notion, or Google Docs for publishing.

✅ Why Use This Workflow?

• One-stop flow: Sourcing → Validation → Writing → Publishing.
• Hands-free control: Everything happens from Slack.
• Flexible: Easily switch feeds, tone, or target CMS.
• Scalable: Extend to newsletters, social posts, or knowledge bases.
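The description shows two reply forms, a bare number ("2") and a "gen 3" command. A sketch of a parser that accepts both; the exact accepted forms are an assumption, not the workflow's actual matching logic:

```python
# Illustrative parser for the Slack replies the workflow reacts to. Whether
# the real workflow accepts both "2" and "gen 3" is an assumption.
import re

def parse_selection(text):
    """Return the selected topic number, or None if the reply isn't a pick."""
    m = re.fullmatch(r"(?:gen\s+)?(\d+)", text.strip(), re.IGNORECASE)
    return int(m.group(1)) if m else None

choice = parse_selection("gen 3")
```

Returning None for non-matching messages lets unrelated channel chatter pass through without triggering content generation.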
by Budi SJ
Automated Invoice Collection & Data Extraction Using Vision API and LLM

This workflow automates the process of collecting uploaded invoices, extracting text using the Google Vision API, and processing the extracted text with an LLM to produce structured data containing key transaction details such as date, voucher number, transaction detail, vendor, and transaction value. The final data is saved to Google Sheets and a notification is sent to Telegram in real time.

✨ Key Features

- **Invoice Upload Form** – Users can upload invoice images through a provided form.
- **Google Drive Integration** – Files are stored in a specified Google Drive folder with a shareable preview link.
- **OCR via Google Vision API** – Converts invoice images to text using TEXT_DETECTION.
- **Data Structuring via LLM** – Uses an LLM model to parse and structure data.
- **Structured Output Parser** – Ensures consistent output with required columns.
- **Data Cleaning** – Cleans and formats numeric values without currency symbols.
- **Google Sheets Sync** – Appends or updates transaction data in Google Sheets (matched by file ID). Template: Google Sheets
- **Telegram Notification** – Sends a transaction summary directly to a Telegram chat/group.

🔐 Required Credentials

- **Google Vision API Key** → for OCR processing.
- **OpenRouter API Key** → to access the Gemini Flash LLM.
- **Google Drive OAuth2** → to upload and download invoice files.
- **Google Sheets OAuth2** → to write or update spreadsheet data.
- **Telegram Bot Token** → to send notifications to Telegram.
- **Telegram Chat ID** → target chat/group for notifications.

🎁 Benefits

- **Fully automated** from invoice upload to structured reporting.
- **Time-saving** by eliminating manual transaction data entry.
- **Real-time integration** with Google Sheets for reporting and auditing.
- **Instant notifications** via Telegram for quick transaction monitoring.
- **Duplicate prevention** using file ID as a matching key.
- **Flexible** for accounting, finance, or administrative teams.
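The data-cleaning step, stripping currency symbols and separators from OCR'd amounts, can be sketched in a couple of lines. This handles only symbol-and-comma formats like "$1,250.50"; locale-specific decimal conventions would need more care.

```python
# Illustrative sketch of the data-cleaning step: stripping currency symbols
# and thousands separators from OCR'd amounts. Handles only period-decimal
# formats; this is an assumption about the template's cleaning rules.
import re

def clean_amount(raw):
    """'$1,250.50' -> 1250.5; 'USD 99' -> 99.0"""
    digits = re.sub(r"[^\d.]", "", raw)
    return float(digits) if digits else 0.0

value = clean_amount("$1,250.50")
```

Storing a plain number rather than "Rp 1.250.000"-style strings is what makes the Google Sheets column usable for sums and auditing.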
by Jay Emp0
AI-Powered Chart Generation from Web Data

This n8n workflow automates the process of:
- Scraping real-time data from the web using GPT-4o with browsing capability
- Converting markdown tables into Chart.js-compatible JSON
- Rendering the chart using QuickChart.io
- Uploading the resulting image directly to your WordPress media library

🚀 Use Case

Ideal for content creators, analysts, or automation engineers who need to:
- Automate generation of visual reports
- Create marketing-ready charts from live data
- Streamline research-to-publish workflows

🧠 How It Works

1. Prompt Input – Trigger the workflow manually or via another workflow with a prompt string, e.g.: "Generate a graph of Apple's market share in the mobile phone market in Q1 2025"
2. Web Search + Table Extraction – The Message a model node uses GPT-4o with search to perform a real-time query, extract data into a markdown table, and return the raw table + citation URLs.
3. Chart Generation via AI Agent – The Generate Chart AI Agent interprets the table, picks an appropriate chart type (bar, line, doughnut, etc.), and outputs valid Chart.js JSON using a strict schema.
4. QuickChart API Integration – The Create QuickChart node sends the Chart.js config to QuickChart.io, which renders the chart into a PNG image.
5. WordPress Image Upload – The Upload image node uploads the PNG to your WordPress media library via the REST API, using proper headers for filename and content-type, and returns the media GUID and full image URL.

🧩 Nodes Used

- Manual Trigger or Execute Workflow Trigger
- OpenAI Chat Model (GPT-4o)
- LangChain Agent (Chart Generator)
- LangChain OutputParserStructured
- HTTP Request (QuickChart API + WordPress Upload)
- Code (Final result formatting)

🗂 Output Format

The final Code node returns:

{
  "research": { ...raw markdown table + citations... },
  "graph_data": { ...Chart.js JSON... },
  "graph_image": { ...WordPress upload metadata... },
  "result_image_url": "https://your-wordpress.com/wp-content/uploads/...png"
}

⚙️ Requirements

- OpenAI credentials (GPT-4o or GPT-4o-mini)
- WordPress REST API credentials with media write access
- QuickChart.io (free tier works)
- n8n v1.25+ recommended

📌 Notes

- Chart style and format are determined dynamically based on your table structure and AI interpretation.
- Make sure your OpenAI and WordPress credentials are connected properly.
- Outputs are schema-validated to ensure reliable rendering.

🖼 Sample Output