by Nijan
This workflow turns Slack into your content control hub and automates the full blog creation pipeline — from sourcing trending headlines and validating topics to drafting posts and preparing content for your CMS. With one command in Slack, you can pull news from RSS feeds, refine headlines with Gemini AI, generate high-quality blog posts, and get publish-ready output — all inside a single n8n workflow.

⚙️ How It Works
1. **Trigger in Slack**: Type start in a Slack channel to fetch trending headlines. Headlines are pulled from your configured RSS feeds.
2. **Topic Generation (Gemini AI)**: Gemini rewrites RSS headlines into unique, non-duplicate topics. Slack displays these topics in a numbered list (e.g., reply with 2 to pick topic 2); a sketch of this formatting step is shown after this description.
3. **Content Validation**: When you reply with a number, Gemini validates and slightly rewrites the topic to ensure originality. Slack confirms the selected topic back to you.
4. **Content Creation**: Gemini generates a LinkedIn/blog-style draft with a strong hook introduction, 3–5 bullet insights, and a closing takeaway and CTA. It can optionally suggest asset ideas (e.g., image, infographic).
5. **CMS-Ready Output**: The final draft is structured for publishing (markdown or plain text). You can extend the workflow to send the output to your CMS (WordPress, Ghost, Notion, etc.) automatically.

🛠 Setup Instructions
- Connect your Slack bot to n8n.
- Configure your RSS Read nodes with feeds relevant to your niche.
- Add your Gemini API credentials in the AI node.
- Run the workflow: type start in Slack → see trending topics.
- Reply with a number (e.g., gen 3) → get a generated blog draft in the same Slack thread.

🎛 Customization Options
- Change RSS sources to match your industry.
- Adjust Gemini prompts for tone (educational, casual, professional).
- Add moderation filters (skip sensitive or irrelevant topics).
- Connect the final output step to your CMS, Notion, or Google Docs for publishing.

✅ Why Use This Workflow?
- One-stop flow: Sourcing → Validation → Writing → Publishing.
- Hands-free control: Everything happens from Slack.
- Flexible: Easily switch feeds, tone, or target CMS.
- Scalable: Extend to newsletters, social posts, or knowledge bases.
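The numbered topic list posted back to Slack can be assembled in a small Code node. This is a minimal sketch only; it assumes the Gemini node emits one item per topic with a `topic` field, which you should adapt to your actual node output.

```javascript
// n8n Code node (JavaScript): turn AI-generated topics into a numbered Slack message.
// Assumes the previous node outputs items with a `topic` field (illustrative name).
const topics = $input.all().map(item => item.json.topic);

const lines = topics.map((topic, index) => `${index + 1}. ${topic}`);
const message = [
  ':newspaper: *Trending topics* (reply with a number to pick one):',
  ...lines,
].join('\n');

return [{ json: { slackMessage: message, topics } }];
```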
by Budi SJ
Automated Invoice Collection & Data Extraction Using Vision API and LLM

This workflow automates the process of collecting uploaded invoices, extracting text with the Google Vision API, and processing the extracted text with an LLM to produce structured data containing key transaction details such as date, voucher number, transaction detail, vendor, and transaction value. The final data is saved to Google Sheets and a notification is sent to Telegram in real time.

✨ Key Features
- **Invoice Upload Form**: Users can upload invoice images through a provided form.
- **Google Drive Integration**: Files are stored in a specified Google Drive folder with a shareable preview link.
- **OCR via Google Vision API**: Converts invoice images to text using TEXT_DETECTION.
- **Data Structuring via LLM**: Uses an LLM to parse and structure the extracted text.
- **Structured Output Parser**: Ensures consistent output with the required columns.
- **Data Cleaning**: Cleans and formats numeric values, stripping currency symbols (a sketch is shown after this description).
- **Google Sheets Sync**: Appends or updates transaction data in Google Sheets (matched by file ID). Template: Google Sheets
- **Telegram Notification**: Sends a transaction summary directly to a Telegram chat or group.

🔐 Required Credentials
- **Google Vision API Key**: for OCR processing.
- **OpenRouter API Key**: to access the Gemini Flash LLM.
- **Google Drive OAuth2**: to upload and download invoice files.
- **Google Sheets OAuth2**: to write or update spreadsheet data.
- **Telegram Bot Token**: to send notifications to Telegram.
- **Telegram Chat ID**: target chat or group for notifications.

🎁 Benefits
- **Fully automated** from invoice upload to structured reporting.
- **Time-saving** by eliminating manual transaction data entry.
- **Real-time integration** with Google Sheets for reporting and auditing.
- **Instant notifications** via Telegram for quick transaction monitoring.
- **Duplicate prevention** using the file ID as a matching key.
- **Flexible** for accounting, finance, or administrative teams.
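The data-cleaning step can be handled by a short Code node like the sketch below. The field name `transaction_value` is an assumption; match it to whatever column your structured output parser actually emits.

```javascript
// n8n Code node (JavaScript): clean numeric values extracted by the LLM so they
// can be written to Google Sheets as plain numbers.
return $input.all().map(item => {
  const raw = String(item.json.transaction_value ?? '');

  // Strip currency symbols, thousands separators, and whitespace, keeping
  // digits, a decimal point, and an optional leading minus sign.
  const cleaned = raw.replace(/[^0-9.,-]/g, '').replace(/,/g, '');
  const value = parseFloat(cleaned);

  item.json.transaction_value = Number.isNaN(value) ? null : value;
  return item;
});
```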
by Jay Emp0
AI-Powered Chart Generation from Web Data

This n8n workflow automates the process of:
- Scraping real-time data from the web using GPT-4o with browsing capability
- Converting markdown tables into Chart.js-compatible JSON
- Rendering the chart using QuickChart.io
- Uploading the resulting image directly to your WordPress media library

🚀 Use Case
Ideal for content creators, analysts, or automation engineers who need to:
- Automate generation of visual reports
- Create marketing-ready charts from live data
- Streamline research-to-publish workflows

🧠 How It Works
1. **Prompt Input**: Trigger the workflow manually or via another workflow with a prompt string, e.g.: "Generate a graph of Apple's market share in the mobile phone market in Q1 2025".
2. **Web Search + Table Extraction**: The Message a model node uses GPT-4o with search to perform a real-time query, extract the data into a markdown table, and return the raw table plus citation URLs.
3. **Chart Generation via AI Agent**: The Generate Chart AI Agent interprets the table, picks an appropriate chart type (bar, line, doughnut, etc.), and outputs valid Chart.js JSON using a strict schema.
4. **QuickChart API Integration**: The Create QuickChart node sends the Chart.js config to QuickChart.io, which renders the chart into a PNG image (a sketch of the request body is shown after this description).
5. **WordPress Image Upload**: The Upload image node uploads the PNG to your WordPress media library via the REST API, using proper headers for filename and content type, and returns the media GUID and full image URL.

🧩 Nodes Used
- Manual Trigger or Execute Workflow Trigger
- OpenAI Chat Model (GPT-4o)
- LangChain Agent (Chart Generator)
- LangChain Structured Output Parser
- HTTP Request (QuickChart API + WordPress upload)
- Code (final result formatting)

🗂 Output Format
The final Code node returns:

  {
    "research": { ...raw markdown table + citations... },
    "graph_data": { ...Chart.js JSON... },
    "graph_image": { ...WordPress upload metadata... },
    "result_image_url": "https://your-wordpress.com/wp-content/uploads/...png"
  }

⚙️ Requirements
- OpenAI credentials (GPT-4o or GPT-4o-mini)
- WordPress REST API credentials with media write access
- QuickChart.io (free tier works)
- n8n v1.25+ recommended

📌 Notes
- Chart style and format are determined dynamically based on your table structure and the AI's interpretation.
- Make sure your OpenAI and WordPress credentials are connected properly.
- Outputs are schema-validated to ensure reliable rendering.

🖼 Sample Output
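The hand-off to QuickChart can be prepared with a small Code node that wraps the agent's Chart.js JSON into a request body for the HTTP Request node. This is a sketch under assumptions: the input field name `output` depends on how your structured output parser names its result, and the size and background values are illustrative defaults.

```javascript
// n8n Code node (JavaScript): wrap the agent's Chart.js config into a
// QuickChart request body. POST the resulting object to
// https://quickchart.io/chart with an HTTP Request node to get the PNG back.
const chartConfig = $input.first().json.output; // Chart.js JSON from the agent

return [{
  json: {
    chart: chartConfig,
    width: 800,
    height: 450,
    format: 'png',
    backgroundColor: 'white',
  },
}];
```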
by Dr. Christoph Schorsch
Rename Workflow Nodes with AI for Clarity

This workflow automates the tedious process of renaming nodes in your n8n workflows. Instead of manually editing each node, it uses an AI language model to analyze each node's function and assign a concise, descriptive new name. This keeps your workflows clean, readable, and easy to maintain.

Who's it for?
This template is perfect for n8n developers and power users who build complex workflows. If you often struggle to understand the purpose of different nodes at a glance, or spend too much time manually renaming them for documentation, this tool will save you significant time and effort.

How it works / What it does
The workflow operates in a simple, automated sequence:
1. **Configure Suffix**: A Set node at the beginning lets you define the suffix appended to the new workflow's name (e.g., "- new node names").
2. **Fetch Workflow**: It then fetches the JSON data of a specified n8n workflow using its ID.
3. **AI-Powered Renaming**: The workflow's JSON is sent to an AI model (such as Google Gemini or Anthropic Claude) prompted to act as an n8n expert. The AI analyzes the type and parameters of each node to understand its function.
4. **Generate New Names**: Based on this analysis, the AI proposes new, meaningful names and returns them in a structured JSON format.
5. **Update and Recreate**: A Code node processes these suggestions, updates all node names, and correctly rebuilds the connections and expressions (a sketch of this step is shown after this description).
6. **Create & Activate New Workflow**: Finally, it creates a new workflow with the updated name, deactivates the original to avoid confusion, and activates the new version.
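The rename step is essentially a mapping applied to both the `nodes` array and the `connections` object of the workflow JSON. The sketch below shows the idea under assumptions: the incoming item is assumed to carry the fetched workflow as `workflow` and the AI suggestions as a `renameMap` object (both names are illustrative), and rewriting expression references such as `$('Node Name')` is omitted for brevity.

```javascript
// n8n Code node (JavaScript): apply AI-suggested names to a workflow's JSON.
const workflow = $input.first().json.workflow;
const renameMap = $input.first().json.renameMap; // e.g. { "HTTP Request": "Fetch product data" }

// Rename the nodes themselves.
for (const node of workflow.nodes) {
  if (renameMap[node.name]) {
    node.name = renameMap[node.name];
  }
}

// Rebuild the connections object, whose keys and targets reference node names.
const newConnections = {};
for (const [source, outputs] of Object.entries(workflow.connections)) {
  const renamedOutputs = JSON.parse(JSON.stringify(outputs));
  for (const outputType of Object.values(renamedOutputs)) {
    for (const branch of outputType) {
      for (const conn of branch) {
        conn.node = renameMap[conn.node] ?? conn.node;
      }
    }
  }
  newConnections[renameMap[source] ?? source] = renamedOutputs;
}
workflow.connections = newConnections;

return [{ json: { workflow } }];
```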
by Guillaume Duvernay
Move beyond generic AI-generated content and create articles that are high-quality, factually reliable, and aligned with your unique expertise. This template orchestrates a "research-first" content creation process. Instead of simply asking an AI to write an article from scratch, it first uses an AI planner to break your topic down into logical sub-questions. It then queries a Super assistant—which you've connected to your own trusted knowledge sources like Notion, Google Drive, or PDFs—to build a comprehensive research brief. Only then is this fact-checked brief handed to a powerful AI writer to compose the final article, complete with source links. This is the ultimate workflow for scaling expert-level content creation.

Who is this for?
- **Content marketers & SEO specialists**: Scale the creation of authoritative, expert-level blog posts that are grounded in factual, source-based information.
- **Technical writers & subject matter experts**: Transform your complex internal documentation into accessible public-facing articles, tutorials, and guides.
- **Marketing agencies**: Quickly generate high-quality, well-researched drafts for clients by connecting the workflow to their provided brand and product materials.

What problem does this solve?
- **Reduces AI "hallucinations"**: By grounding the entire writing process in your own trusted knowledge base, the AI generates content based on facts you provide, not on potentially incorrect information from its general training data.
- **Ensures comprehensive topic coverage**: The initial AI-powered "topic breakdown" step acts like an expert outliner, ensuring the final article is well structured and covers all key sub-topics.
- **Automates source citation**: The workflow preserves source URLs from your knowledge base and integrates them into the final article as hyperlinks, boosting credibility and saving you manual effort.
- **Scales expert content creation**: It mimics the workflow of a human expert (outline, research, consolidate, write) in an automated, scalable, and incredibly fast way.

How it works
This workflow follows a multi-step process to ensure the highest quality output:
1. **Decomposition**: You provide an article title and guidelines via the built-in form. An initial AI call acts as a "planner", breaking the main topic down into an array of 5–8 logical sub-questions.
2. **Fact-based research (RAG)**: The workflow loops through each sub-question and queries your Super assistant. This assistant, which you have pre-configured and connected to your own knowledge sources (Notion pages, Google Drive folders, PDFs, etc.), finds the relevant information and source links for each point.
3. **Consolidation**: All the retrieved question-and-answer pairs are compiled into a single, comprehensive research brief (a sketch of this step is shown after the setup steps below).
4. **Final article generation**: This complete, fact-checked brief is handed to a final, powerful AI writer (e.g., GPT-5). Its instructions are clear: write a high-quality article using only the provided information and integrate the source links as hyperlinks where appropriate.

Implementing the template
1. Set up your Super assistant (prerequisite): First, go to Super, create an assistant, connect it to your knowledge sources (Notion, Drive, etc.), and copy its Assistant ID and your API Token.
2. Configure the workflow: Connect your AI provider (e.g., OpenAI) credentials to the two Language Model nodes (GPT 5 mini and GPT 5 chat).
3. In the Query Super Assistant (HTTP Request) node, paste your Assistant ID in the body and add your Super API Token for authentication (we recommend using a Bearer Token credential).
4. Activate the workflow: Toggle the workflow to "Active" and use the built-in form to generate your first fact-checked article!

Taking it further
- **Automate publishing**: Connect the final *Article result* node to a *Webflow* or *WordPress* node to automatically create a draft post in your CMS.
- **Generate content in bulk**: Replace the *Form Trigger* with an *Airtable* or *Google Sheets* trigger to automatically generate a whole batch of articles from your content calendar.
- **Customize the writing style**: Tweak the system prompt in the final *New content - Generate the AI output* node to match your brand's specific tone of voice, add SEO keywords, or include specific calls-to-action.
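The consolidation step can be a short Code node that merges the looped sub-question results into one research brief. This is a sketch only; the field names `question`, `answer`, and `sources` are assumptions and should be matched to the actual response of your Query Super Assistant node.

```javascript
// n8n Code node (JavaScript): consolidate the sub-question answers returned by
// the Super assistant into a single research brief for the writer model.
const sections = $input.all().map(item => {
  const { question, answer, sources = [] } = item.json;
  const sourceLines = sources.map(url => `- ${url}`).join('\n');
  return `## ${question}\n\n${answer}\n\nSources:\n${sourceLines}`;
});

return [{ json: { researchBrief: sections.join('\n\n') } }];
```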
by Michael A Putra
🧠 Automated Resume & Cover Letter Generator

This project is an automation workflow that generates a personalized resume and cover letter for each job listing.

🚀 Features
- **Automated Resume Crafting**: Generates an HTML resume from your data, hosts it live on GitHub Pages, converts it to PDF using Gotenberg (a sketch of the conversion call is shown after this description), and saves it to Google Drive.
- **Automated Cover Letter Generation**: Uses an LLM to create a tailored cover letter for each job listing.
- **Simple Input Database Agent**: Stores your experience in an n8n Data Table with the fields role, summary, task, skills, tools, and industry. The main agent pulls this data using RAG (Retrieval-Augmented Generation) to personalize the outputs.
- **One-Time GitHub Setup**: Initializes a blank GitHub repository to host HTML files online, allowing Gotenberg to access and convert them.

🧩 Tech Stack
- **Gotenberg**: Converts HTML to PDF
- **GitHub Pages**: Hosts live HTML files
- **n8n**: Handles data tables and workflow automation
- **LLM (OpenAI / Cohere / etc.)**: Generates cover letters
- **Google Drive**: Stores the final PDFs

⚙️ Installation & Setup
1. Create a GitHub repository. This repo will host your HTML resume through GitHub Pages.
2. Set the webhook URL. In the notify-n8n.yml file, replace the placeholder webhook URL with your n8n webhook endpoint.
3. Create the n8n Data Table with the columns: role | summary | task | skills | tools | industry
4. Create a Google Spreadsheet with the columns: company | cover_letter | resume
5. Install Gotenberg. Follow the installation instructions on the Gotenberg GitHub repository: https://github.com/thecodingmachine/gotenberg
6. Customize the HTML template. Modify the HTML resume to your liking; you can use an LLM to locate and edit specific sections.
7. Add authentication and link your GitHub repo. Ensure your workflow has permission to push updates to your GitHub Pages branch.
8. Run the workflow. Once everything is connected, trigger the workflow to automatically generate and save personalized resumes and cover letters.

📝 How to Use
1. Copy and paste the job listing description into the Telegram bot.
2. Wait for the "Done" notification before submitting another job. Do not use the bot again until the notification appears.
3. The process usually takes a few minutes to complete.

✅ Notes
This workflow is designed to save time and personalize your job applications efficiently. By combining n8n automation, LLMs, and open-source tools like Gotenberg, you keep full control over your data while generating high-quality resumes and cover letters for every job opportunity.
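For reference, the HTML-to-PDF conversion boils down to one request against Gotenberg's Chromium URL-conversion route. This standalone Node.js sketch (Node 18+, not an n8n node) assumes a Gotenberg instance on its default port and uses a placeholder GitHub Pages URL; in the workflow itself the same call is typically made with an HTTP Request node.

```javascript
// Ask a local Gotenberg instance to render the GitHub Pages resume as a PDF.
import fs from 'node:fs/promises';

const form = new FormData();
form.append('url', 'https://your-username.github.io/resume/index.html'); // placeholder

const response = await fetch('http://localhost:3000/forms/chromium/convert/url', {
  method: 'POST',
  body: form,
});

// Gotenberg streams back the rendered PDF.
await fs.writeFile('resume.pdf', Buffer.from(await response.arrayBuffer()));
```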
by Davide
This workflow is a simple yet effective automation that generates time-coded SRT subtitles directly from a video URL using ElevenLabs. With just a single video link, the workflow extracts the audio, transcribes it using AI speech recognition, and converts the transcription into a properly formatted SRT subtitle file with accurate timestamps.

It automates the creation of SRT subtitle files for YouTube videos, eliminating the need for manual captioning and saving creators hours of work. It is a fast, reliable, and fully automated solution, perfect for YouTube creators, video editors, and content producers who want to improve accessibility, engagement, and SEO with minimal effort.

With just one input (a video link), the workflow:
- Downloads the video
- Automatically transcribes the audio using AI speech-to-text
- Intelligently splits the transcription into readable subtitle segments
- Generates a properly formatted SRT file with accurate timestamps
- Uploads the final subtitle file to Google Drive, ready to use

Key Advantages
1. ✅ Extremely simple, yet powerful: A minimal number of nodes delivers a complete end-to-end subtitle generation process.
2. ✅ Automatic time-based SRT generation: Subtitles are not just plain text; they are properly time-aligned, making them immediately compatible with YouTube, video editors, and media players.
3. ✅ Smart subtitle splitting: The workflow splits text based on punctuation and length, producing subtitles that are easy to read, well paced, and aligned with natural speech flow.
4. ✅ Perfect for video creators: Ideal for YouTube creators, content marketers, educators, podcasters, and social video producers. It dramatically reduces the time needed to add subtitles, improving accessibility, engagement, SEO, and watch time.
5. ✅ Fully automatable and scalable: Once set up, it can be reused endlessly, for one video or hundreds, with a manual trigger or automated pipelines, and it is easy to extend with translations, publishing, or notifications.

How it works
The process begins when the workflow is manually triggered with a YouTube video URL as input. The system fetches the video content via HTTP request, then sends the audio to ElevenLabs for transcription. The AI returns timestamped text segments, which are split into readable subtitle chunks based on punctuation and length constraints. These segments are formatted into standard SRT (SubRip) format with precise timing, converted to a binary file, and finally uploaded to a specified Google Drive folder as a ready-to-use subtitle file.
Set up steps
1. Configure video source: In the "Set Video Url" node, replace the placeholder value with a valid YouTube video URL, or set up a method to provide URLs dynamically.
2. API credentials setup: Configure ElevenLabs API credentials in the "Transcribe audio or video" node with your API key, and set up Google Drive OAuth2 credentials in the "Upload file" node with appropriate folder permissions.
3. Customize output: Adjust the SRT generation parameters in the "From Elevenlabs to Srt" node if different subtitle formatting is needed (a sketch is included below).
4. Destination folder: Verify that the Google Drive folder ID in the upload node points to your desired destination.
5. Execution: Trigger the workflow manually and provide a video URL when prompted to generate and upload subtitles.

👉 Subscribe to my new YouTube channel, where I share videos and Shorts with practical tutorials and free templates for n8n.
Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
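For orientation, the timestamp-to-SRT conversion looks roughly like the Code node sketch below. It assumes each incoming item carries `start`, `end` (seconds), and `text` fields; adapt the names to the actual ElevenLabs response used by the "From Elevenlabs to Srt" node.

```javascript
// n8n Code node (JavaScript): turn timestamped transcript segments into SRT.
function toSrtTime(seconds) {
  const ms = Math.round(seconds * 1000);
  const h = String(Math.floor(ms / 3600000)).padStart(2, '0');
  const m = String(Math.floor((ms % 3600000) / 60000)).padStart(2, '0');
  const s = String(Math.floor((ms % 60000) / 1000)).padStart(2, '0');
  const millis = String(ms % 1000).padStart(3, '0');
  return `${h}:${m}:${s},${millis}`;
}

const blocks = $input.all().map((item, index) => {
  const { start, end, text } = item.json;
  return `${index + 1}\n${toSrtTime(start)} --> ${toSrtTime(end)}\n${text}\n`;
});

// Return a single item holding the full SRT body; a downstream
// "Convert to File" node can then write it out as subtitles.srt.
return [{ json: { srt: blocks.join('\n') } }];
```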
by Erfan Iranshad
Who is this for?
Content creators, media teams, and bloggers who run a YouTube channel and want to automatically repurpose video content into SEO-ready blog posts — without manual writing. Ideal for anyone publishing news or educational content in any language.

What it does
This workflow runs three fully automated pipelines that take a YouTube video all the way to a published WordPress post:
- **Pipeline 1 — Transcript Collector** runs on a schedule, fetches new videos from your YouTube playlist via the YouTube Data API, retrieves their full transcripts via RapidAPI, saves each transcript to a Google Doc, and logs metadata to Google Sheets.
- **Pipeline 2 — AI Blog Generator** picks up unprocessed transcripts, sends them to a Gemini AI Agent that reads the transcript and your existing published posts (for internal linking), then generates structured blog content: title, body (HTML), summary, tags, Telegram caption, image prompt, and publish priority. Results are saved to a second Google Sheet as pending.
- **Pipeline 3 — Publisher** runs every 3 hours, selects the highest-priority pending post (urgent > normal > evergreen), publishes it to WordPress, generates a featured image via an AI image API, uploads and attaches it to the post, then announces it to a Telegram channel.

How to set up
1. Import this workflow into n8n.
2. Create two Google Sheets tabs: youtubeVideos and blogsAndNewsUploaded (column structures are in the sticky notes).
3. Configure all credentials: Google (Sheets, Docs), YouTube API key, RapidAPI key (youtube-transcript3), Gemini API, WordPress, Telegram Bot, and your AI image generation API.
4. Set your YouTube Playlist ID in the first HTTP node.
5. Set your Google Drive Folder ID for transcript storage.
6. Activate all three schedule triggers independently.

Requirements
- YouTube Data API v3 key (Google Cloud Console)
- RapidAPI subscription to youtube-transcript3
- Google Gemini API key
- WordPress site with Application Password
- Telegram Bot token + channel
- AI image generation API (compatible with the OpenAI images format)

How to customize
- Adjust the Gemini system prompt in the AI Agent node to change content language, tone, or structure.
- Change the publish_priority logic in the JS node to control posting frequency (a sketch of this logic is shown after this description).
- Swap the image generation API for any provider (DALL-E, Stability AI, etc.).
- Add a Filter node before publishing to require manual approval of pending posts.
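The priority-selection step in the Publisher pipeline can be as simple as the Code node sketch below. It is an illustration only; it assumes each sheet row arrives as an item with `publish_priority` ("urgent" | "normal" | "evergreen") and `status` fields, which should be matched to your actual column names.

```javascript
// n8n Code node (JavaScript): pick the single highest-priority pending post per run.
const rank = { urgent: 0, normal: 1, evergreen: 2 };

const pending = $input.all()
  .filter(item => item.json.status === 'pending')
  .sort((a, b) =>
    (rank[a.json.publish_priority] ?? 99) - (rank[b.json.publish_priority] ?? 99)
  );

// Publish only the best candidate; return nothing if no posts are pending.
return pending.length ? [pending[0]] : [];
```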
by Jimleuk
Generating contextual summaries is a token-intensive approach to RAG embeddings, which can quickly rack up costs if your inference provider charges by token usage. Featherless.ai is an inference provider with a different pricing model: it charges a flat subscription fee (starting from $10) and allows unlimited token usage instead. If you typically spend over $10 - $25 a month, you may find Featherless a cheaper and more manageable option for your projects or team. For this template, Featherless's unlimited token usage is well suited to generating contextual summaries at the high volumes required by most RAG workloads.

LLM: moonshotai/Kimi-K2-Instruct
Embeddings: models/gemini-embedding-001

How it works
1. A large document is imported into the workflow using the HTTP node and its text is extracted via the Extract from File node. For this demonstration, the UK Highway Code is used as an example.
2. Each page is processed individually and a contextual summary is generated for it. The summary is produced by taking the current page together with the preceding and following pages and summarising the contents of the current page in that context (a sketch of this step is shown below).
3. This summary is then converted to embeddings using the gemini-embedding-001 model. Note: an HTTP Request node calls the Gemini embedding API directly because, at time of writing, n8n does not support the new API's schema.
4. These embeddings are stored in a Qdrant collection, which can then be queried via an agent, an MCP server, or another workflow.

How to use
- Replace the large document import with your own source of documents, such as Google Drive or an internal repo.
- Replace the manual trigger if you want the workflow to run as soon as documents become available. If you're using Google Drive, check out my Push notifications for Google Drive template.
- Expand and/or tune embedding strategies to suit your data. You may want to additionally embed the content itself and perform multi-stage queries using both.

Requirements
- Featherless.ai account and API key
- Gemini account and API key for embeddings
- Qdrant vector store

Customising this workflow
- Sparse vectors were not included in this template due to scope, but they should be the next step to getting the most out of contextual retrieval.
- Be sure to explore other models on the Featherless.ai platform, or host your own custom or fine-tuned models.
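Building the per-page context window can be done with a Code node like the sketch below. It is an assumption-laden illustration: it treats the incoming items as the document's pages in order, each with a `text` field, and simply pairs each page with its neighbours before the LLM summarisation step.

```javascript
// n8n Code node (JavaScript): pair each page with its neighbours so a later
// LLM node can summarise the current page using the surrounding pages as context.
const pages = $input.all().map(item => item.json.text);

return pages.map((text, i) => ({
  json: {
    pageNumber: i + 1,
    currentPage: text,
    precedingPage: pages[i - 1] ?? '',
    followingPage: pages[i + 1] ?? '',
  },
}));
```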
by AttenSys AI
🧥 Virtual Try-On Image & Video Generation (VLM Run)

📌 Overview
This n8n workflow enables a Virtual Try-On experience where users upload a dress image and the system:
- Combines it with a fashion model image
- Generates a realistic try-on image
- Generates a fashion walking video
- Automatically shares the results via Telegram, Discord, and YouTube

🚀 Use Cases
- Virtual fashion try-on
- AI fashion marketing
- Clothing e-commerce previews
- Social media fashion automation
- Influencer & brand demo pipelines

✨ Key Features
- 🖼️ Image-based virtual try-on (model wearing the dress)
- 🎥 AI-generated fashion video
- 🔗 Multi-platform publishing (Telegram, Discord, YouTube)
- 🧩 Modular, extensible workflow design

🧠 Workflow Architecture

🟨 Input
- **Dress Image**: Uploaded by the user (Form Trigger)
- **Model Image**: Downloaded from a predefined URL
- **Prompt**: Auto-constructed inside the workflow

🟦 Output
- 🖼️ Try-On Image
- 🎥 Fashion Walk Video
- 📤 Shared to Telegram (image/video), Discord (image), and YouTube (video upload)

🔐 Required Credentials
You must configure the following credentials in n8n:

| Service  | Credential Type  |
| -------- | ---------------- |
| VLM Run  | VLM Run API      |
| Telegram | Telegram Bot API  |
| Discord  | Discord OAuth2   |
| YouTube  | YouTube OAuth2   |

⚠️ Community Node Warning
> Important: This workflow uses a community node: @vlm-run/n8n-nodes-vlmrun

What this means:
- This node is NOT installed by default in n8n
- You must manually install it before using the workflow

📦 Installation
Run the following command in your n8n environment, then restart n8n:

npm install @vlm-run/n8n-nodes-vlmrun

📖 Community Nodes documentation: https://docs.n8n.io/integrations/community-nodes/
by Navneet Singh Arora
Automated Job Search & AI Relevance Evaluator

Overview
This n8n template automates the entire job hunting process by cross-referencing a candidate's PDF resume with live job listings from the JSearch API. It automatically filters for fresh, unapplied roles, uses Google Gemini AI to critically evaluate each job's relevance against the candidate's specific experience, and logs highly tailored matches directly into a Notion database for seamless tracking.

🚀 How it works
1. Context & extraction: The workflow fetches existing applications from your Notion database to prevent duplicate tracking, then reads and extracts plain text directly from a local PDF resume.
2. Role discovery: A Google Gemini node isolates the candidate's current job title to formulate a precise search query. This query is sent to the JSearch API (via RapidAPI) to pull live job listings.
3. Smart filtering: The workflow filters out jobs posted more than 14 days ago and jobs that already exist in your Notion tracker, ensuring only fresh, unseen postings are processed (a sketch of this step is shown after this description).
4. AI evaluation: The core of the workflow. Google Gemini acts as an expert technical recruiter, comparing the candidate's resume against each job description. It generates a "Relevance Score" (1-100) and a "Skill Match Score", extracts remote/salary info, and summarizes why the job is a good fit.
5. Notion logging: Structured insights for each matched role are formatted and pushed as a rich database page into your Notion tracking board.

🎮 How to use
1. API credentials: Add your Google Gemini API key and your RapidAPI key (subscribed to the JSearch API) in their respective nodes.
2. Notion setup: Connect your Notion credential and update the two Notion nodes with your specific target Database ID.
3. File path: Update the File Selector to point to your PDF resume (e.g., /home/node/.n8n-files/My-Resume.pdf).
4. Search customization: Open the "Search for Jobs via RapidAPI" node to manually tweak your target location, industry keywords, or pagination limits.

⚙️ Requirements
- Google Gemini API key
- RapidAPI key (for the JSearch API)
- Notion account (with a pre-configured Job Tracker database)
- n8n environment: designed for self-hosted instances with local file access

🎯 Use Cases
- Automated job hunting: Wake up to a pre-vetted, automatically scored list of highly relevant job openings matched to your exact resume.
- Recruiting pipelines: Scale candidate sourcing by automatically comparing an inbound candidate's resume against thousands of active job board posts.
- Freelance lead generation: Independent contractors or agencies can use this to find companies actively hiring for the exact technical skills they offer.
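The filtering step can be expressed as a short Code node like the sketch below. It is an illustration under assumptions: the field names `job_posted_at_datetime_utc` and `job_id` follow JSearch-style naming, and the node name used to read the Notion results is hypothetical; adjust all of these to your workflow.

```javascript
// n8n Code node (JavaScript): keep only fresh, unseen jobs.
const trackedIds = new Set(
  $('Fetch Notion Applications').all().map(item => item.json.job_id) // node name is illustrative
);

const maxAgeMs = 14 * 24 * 60 * 60 * 1000; // 14 days
const now = Date.now();

return $input.all().filter(item => {
  const postedAt = new Date(item.json.job_posted_at_datetime_utc).getTime();
  const isFresh = now - postedAt <= maxAgeMs;
  const isNew = !trackedIds.has(item.json.job_id);
  return isFresh && isNew;
});
```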
by Dahiana
Description

Who's it for: Content creators, marketers, and businesses who publish on both YouTube and blog platforms.

What it does: Monitors your YouTube channel for new videos and automatically creates SEO-optimized blog posts using AI, then publishes them to WordPress or Webflow.

How it works:
1. An RSS Feed Trigger polls for new YouTube videos (every X amount of time).
2. Extracts video metadata (title, description, thumbnail).
3. A YouTube node extracts the full description for extra context.
4. Uses OpenAI (you can choose any model) to generate a 600-800 word blog post.
5. Publishes to WordPress and/or Webflow with error handling.
6. Sends notifications to Telegram if publishing fails.

Requirements:
- YouTube channel ID (avoid tutorial channels for better results)
- OpenAI API key (or similar)
- WordPress or Webflow credentials
- Telegram bot (optional, for error notifications)

Setup steps:
1. Replace YOUR_CHANNEL_ID in the RSS Feed Trigger.
2. Add OpenAI credentials in the AI generation node.
3. Configure WordPress and/or Webflow credentials.
4. Add a Telegram bot for error notifications (optional). If you choose to set up Telegram, you need to input your channel ID.
5. Test with a manual execution first.

Customization:
- Modify the AI prompt for different content styles.
- Adjust the polling frequency (30-60 minutes recommended).
- Add more CMS platforms.
- Add content verification (e.g., is the generated content longer than 600 characters? If not, ask the model to improve it); a sketch is shown below.
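A minimal content-verification step could look like the Code node sketch below. It assumes the AI node writes the draft into a `post` field (an illustrative name); route items with `needsImprovement` set to true back to the AI node, for example with an IF node, to request a longer draft.

```javascript
// n8n Code node (JavaScript): flag drafts that are too short to publish.
return $input.all().map(item => {
  const post = item.json.post ?? '';
  item.json.needsImprovement = post.length < 600;
  return item;
});
```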