by Łukasz
## What Is This?

This workflow is an automated system that tracks End-of-Life (EOL) dates for software and technologies used across your projects. It eliminates the need to manually monitor EOL dates in spreadsheets or calendars by automatically fetching the latest EOL information and sending Slack notifications when software versions are approaching their end of support, end of life, or when new subversions are released.

## Who Is It For?

Designed for DevOps teams, IT managers, project managers, CTOs, and technical leads responsible for maintaining software infrastructure. This workflow empowers anyone who needs to ensure their projects stay secure and compliant by using supported software versions. Development agencies managing multiple client projects, internal IT teams overseeing infrastructure maintenance, and compliance officers tracking software support lifecycles will all find this solution invaluable.

By automatically monitoring EOL dates from endoflife.date and cross-referencing them with your project inventory, the workflow ensures you never miss critical upgrade windows. Whether you manage a handful of internal applications or oversee dozens of client projects with varied technology stacks, this automated pipeline delivers timely, actionable alerts without manual date tracking.

## How Does It Work?

This end-to-end EOL monitoring automation consists of five main stages:

1. **Software Inventory Collection:** Retrieves your list of software to monitor from the NocoDB EOL Software table, then processes each software entry individually through the pipeline.
2. **EOL Data Fetching:** Connects to the free endoflife.date API to fetch comprehensive EOL information for each software product, including version cycles, latest subversions, release dates, support end dates, and EOL dates for over 400 products.
3. **Data Normalization & Storage:** Renames API fields to human-readable formats, converts boolean values to proper nulls, and intelligently updates or creates records in the EOL Dates table based on existing data.
4. **Project Analysis:** Uses JavaScript to analyze all projects and their associated software versions, categorizing them into three priority groups: Past EOL (already end-of-life), EOL Today (ending today), and EOL in X days (approaching within the configured threshold). A sketch of this logic follows the configuration notes below.
5. **Intelligent Notifications:** Filters out projects without EOL concerns and sends beautifully formatted Slack messages to your designated channel, clearly organizing alerts by severity with complete version and date information.

## How To Set It Up?

**Prerequisites:**

- An active n8n account or self-hosted instance
- A NocoDB account with API access (may be self-hosted)
- A Slack workspace with appropriate permissions

**Required NocoDB Tables:** You need to create three tables with specific schemas. Luckily, the workflow automates the setup as much as possible to provide a better experience.

EOL Software Table (manually populated):
- Title: Single line text (must use valid software identifiers from endoflife.date)

EOL Dates Table (automatically populated):
- key: Single line text
- Software: Single line text
- Version: Single line text
- Latest Subversion: Single line text
- Subversion Release: Date field
- End Of Life: Date field
- End Of Support: Date field
- Long Term Support: Checkbox

EOL Projects Table (manually populated):
- Project Name: Single line text
- EOLDates: Link field (Many-to-Many relationship to the EOL Dates table)

**Configuration:** In the "Config" node, set Days before EOL to your desired notification threshold (default: 31 days).
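As referenced in the Project Analysis stage, here is a minimal sketch of the categorization logic as it could look in an n8n Code node. It assumes each item carries an `End Of Life` date string from the EOL Dates table; the variable names are illustrative, not the template's actual code:

```javascript
// Minimal sketch of the Project Analysis categorization (assumed field names).
const daysBeforeEol = 31; // mirrors the "Days before EOL" config value

const now = new Date();
const msPerDay = 24 * 60 * 60 * 1000;

const groups = { pastEol: [], eolToday: [], eolSoon: [] };

for (const item of $input.all()) {
  const eol = new Date(item.json['End Of Life']);
  const diffDays = Math.floor((eol - now) / msPerDay);

  if (diffDays < 0) groups.pastEol.push(item.json);        // already end-of-life
  else if (diffDays === 0) groups.eolToday.push(item.json); // ending today
  else if (diffDays <= daysBeforeEol) groups.eolSoon.push(item.json); // within threshold
  // anything further out produces no alert
}

return [{ json: groups }];
```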
**Credentials Setup:**

- Configure your NocoDB API token in the NocoDB nodes
- Set up Slack OAuth2 credentials in the "Send a message" node
- Update the Slack channel ID to your desired notification channel

**Scheduling:** The workflow runs automatically every day at 7:00 AM via the Schedule Trigger node. For initial setup, run it manually by pressing "Execute workflow" to populate the EOL Dates table.

**Initial Data Entry (first run):**

1. Manually populate the EOL Software table with the software you want to track
2. Execute the workflow manually to populate the EOL Dates table
3. Populate the EOL Projects table by linking your project names to the relevant software versions from EOL Dates
4. The workflow will now automatically monitor and notify you daily

## What's More?

**Rate Limiting Protection:** The workflow includes a Wait node between API calls to respect endoflife.date's rate limits, ensuring reliable long-term operation.

**Smart Update Logic:** The workflow intelligently checks whether EOL data already exists before deciding to update existing records or insert new ones, preventing duplicates and maintaining data integrity.

**Flexible Alerting:** Configure your notification window from 1 day to any number of days before EOL events, adapting to your team's upgrade planning cycles.

Thank you! Visit my profile for other free business automations. And if you're looking for dedicated software development or custom n8n workflow solutions, don't hesitate to reach out at developers@sailingbyte.com or on sailingbyte.com!
by Dr. Christoph Schorsch
# Rename Workflow Nodes with AI for Clarity

This workflow automates the tedious process of renaming nodes in your n8n workflows. Instead of manually editing each node, it uses an AI language model to analyze each node's function and assign a concise, descriptive new name. This ensures your workflows are clean, readable, and easy to maintain.

## Who's it for?

This template is perfect for n8n developers and power users who build complex workflows. If you often find yourself struggling to understand the purpose of different nodes at a glance, or spend too much time manually renaming them for documentation, this tool will save you significant time and effort.

## How it works / What it does

The workflow operates in a simple, automated sequence:

1. **Configure Suffix:** A "Set" node at the beginning allows you to easily define the suffix that will be appended to the new workflow's name (e.g., "- new node names").
2. **Fetch Workflow:** It then fetches the JSON data of a specified n8n workflow using its ID.
3. **AI-Powered Renaming:** The workflow's JSON is sent to an AI model (such as Google Gemini or Anthropic Claude), which has been prompted to act as an n8n expert. The AI analyzes the type and parameters of each node to understand its function.
4. **Generate New Names:** Based on this analysis, the AI proposes new, meaningful names and returns them in a structured JSON format.
5. **Update and Recreate:** A Code node processes these suggestions, updates all node names, and correctly rebuilds the connections and expressions (sketched below).
6. **Create & Activate New Workflow:** Finally, it creates a new workflow with the updated name, deactivates the original to avoid confusion, and activates the new version.
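A minimal sketch of the renaming step from point 5, assuming the AI returns a `{ oldName: newName }` map. n8n workflow JSON stores node names in `nodes[].name` and keys the `connections` object by node name, so both must be updated; expression rewriting is omitted here for brevity:

```javascript
// Sketch only: apply an AI-proposed rename map to nodes and connections.
const workflow = $json.workflow; // fetched workflow JSON (assumed shape)
const renames = $json.renames;   // e.g. { "HTTP Request": "Fetch workflow JSON" }

// Rename the nodes themselves
for (const node of workflow.nodes) {
  if (renames[node.name]) node.name = renames[node.name];
}

// Rebuild the connections object, whose keys and "node" targets are node names
const newConnections = {};
for (const [source, outputs] of Object.entries(workflow.connections)) {
  const renamedOutputs = JSON.parse(
    JSON.stringify(outputs),
    (key, value) => (key === 'node' && renames[value] ? renames[value] : value)
  );
  newConnections[renames[source] ?? source] = renamedOutputs;
}
workflow.connections = newConnections;

return [{ json: { workflow } }];
```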
by Guillaume Duvernay
Move beyond generic AI-generated content and create articles that are high-quality, factually reliable, and aligned with your unique expertise. This template orchestrates a sophisticated "research-first" content creation process. Instead of simply asking an AI to write an article from scratch, it first uses an AI planner to break your topic down into logical sub-questions. It then queries a Super assistant (which you've connected to your own trusted knowledge sources like Notion, Google Drive, or PDFs) to build a comprehensive research brief. Only then is this fact-checked brief handed to a powerful AI writer to compose the final article, complete with source links. This is the ultimate workflow for scaling expert-level content creation.

## Who is this for?

- **Content marketers & SEO specialists:** Scale the creation of authoritative, expert-level blog posts that are grounded in factual, source-based information.
- **Technical writers & subject matter experts:** Transform your complex internal documentation into accessible public-facing articles, tutorials, and guides.
- **Marketing agencies:** Quickly generate high-quality, well-researched drafts for clients by connecting the workflow to their provided brand and product materials.

## What problem does this solve?

- **Reduces AI "hallucinations":** By grounding the entire writing process in your own trusted knowledge base, the AI generates content based on facts you provide, not on potentially incorrect information from its general training data.
- **Ensures comprehensive topic coverage:** The initial AI-powered "topic breakdown" step acts like an expert outliner, ensuring the final article is well-structured and covers all key sub-topics.
- **Automates source citation:** The workflow is designed to preserve and integrate source URLs from your knowledge base directly into the final article as hyperlinks, boosting credibility and saving you manual effort.
- **Scales expert content creation:** It effectively mimics the workflow of a human expert (outline, research, consolidate, write) in an automated, scalable, and incredibly fast way.

## How it works

This workflow follows a sophisticated, multi-step process to ensure the highest quality output:

1. **Decomposition:** You provide an article title and guidelines via the built-in form. An initial AI call then acts as a "planner," breaking down the main topic into an array of 5-8 logical sub-questions.
2. **Fact-based research (RAG):** The workflow loops through each of these sub-questions and queries your Super assistant. This assistant, which you have pre-configured and connected to your own knowledge sources (Notion pages, Google Drive folders, PDFs, etc.), finds the relevant information and source links for each point.
3. **Consolidation:** All the retrieved question-and-answer pairs are compiled into a single, comprehensive research brief (sketched at the end of this section).
4. **Final article generation:** This complete, fact-checked brief is handed to a final, powerful AI writer (e.g., GPT-5). Its instructions are clear: write a high-quality article using only the provided information and integrate the source links as hyperlinks where appropriate.

## Implementing the template

1. **Set up your Super assistant (prerequisite):** First, go to Super, create an assistant, connect it to your knowledge sources (Notion, Drive, etc.), and copy its Assistant ID and your API Token.
2. **Configure the workflow:** Connect your AI provider (e.g., OpenAI) credentials to the two Language Model nodes (GPT 5 mini and GPT 5 chat).
3. In the Query Super Assistant (HTTP Request) node, paste your Assistant ID in the body and add your Super API Token for authentication (we recommend using a Bearer Token credential).
4. **Activate the workflow:** Toggle the workflow to "Active" and use the built-in form to generate your first fact-checked article!

## Taking it further

- **Automate publishing:** Connect the final **Article result** node to a **Webflow** or **WordPress** node to automatically create a draft post in your CMS.
- **Generate content in bulk:** Replace the **Form Trigger** with an **Airtable** or **Google Sheets** trigger to automatically generate a whole batch of articles from your content calendar.
- **Customize the writing style:** Tweak the system prompt in the final **New content - Generate the AI output** node to match your brand's specific tone of voice, add SEO keywords, or include specific calls-to-action.
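For the consolidation step referenced above, here is a minimal sketch of how the retrieved question-and-answer pairs could be compiled into one research brief in a Code node. The item shape (`question`, `answer`, `sources`) is an assumption for illustration:

```javascript
// Sketch only: merge each sub-question with the assistant's answer
// into a single research brief for the final AI writer.
const pairs = $input.all(); // items assumed to carry { question, answer, sources }

const brief = pairs
  .map(({ json }, i) => {
    const sources = (json.sources ?? []).join(', ');
    return `## ${i + 1}. ${json.question}\n${json.answer}\nSources: ${sources}`;
  })
  .join('\n\n');

return [{ json: { brief } }];
```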
by Guy
## General Principles

This workflow automates the import of leads into the Company table of a CRM built with Airtable. Its originality lies in leveraging the new "Data Table" node (an internal table within n8n) to generate an execution report.

**Why Data Tables:** This approach eliminates the need for read/write operations on a Google Sheet file or an external database.

The workflow is structured around 3 main steps:

1. Reading leads whose email address validity has been verified.
2. Creating or updating company information.
3. Generating the execution report.

This workflow enables precise tracking of marketing actions while maintaining a historical record of interactions with prospects and clients.

## Prerequisites

- **Leads file:** A prior validation check on email address accuracy is required.
- **Airtable:** Must contain at least a Company table with the following fields:
  - Company: company name
  - Business Leader: name of the executive
  - Activity: business sector (notary, accountant, plumber, electrician, etc.)
  - Address: main company address
  - Zip Code: postal code
  - City: city
  - Phone Number: phone number
  - Email: email address of a manager
  - URL Site: company website URL
  - Opt-in: company's consent for commercial prospecting
  - Campaign: reserved for future marketing campaigns
  - Valid Email: indicator confirming email verification

## Step-by-Step Description

### 1. Initialization and Lead Selection

- **Data Table Initialization:** An internal n8n table is created to build the execution report.
- **Lead Selection:** The workflow selects leads from the Google Sheet file (Sheet1 tab) where "Valid Email" equals OK.

### 2. Iterative Loop

**Company Existence Check:** The Search Company node is configured with Always Output Data enabled. A JavaScript Code node distinguishes three possibilities (a sketch appears at the end of this description):

- Company does not exist: create a new record and increment the created-records counter.
- Company exists once: update the record and increment the updated-records counter.
- Company appears multiple times: log the issue in the Leads file under the Logs tab, requiring a data quality procedure.

### 3. Execution Report Generation

An execution report is generated and emailed, in this format:

> Leads Import Report:
> Number of records read: 2392
> Number of records created: 2345
> Number of records updated: 42

If the sum of records created and updated differs from the total records read, it indicates the presence of duplicates. A counter for duplicated companies could be added.

## Benefits of this template

- **Exception Management and Logging:** Identification and traceability of inconsistencies during import, with dedicated logs for issues.
- **Data Quality and Structuring:** Built-in checks for duplicate detection, validation, and mapping to ensure accurate analysis and compliance.
- **Automated Reporting:** Systematic production and delivery of a detailed execution report covering records read, created, and updated.

## Contact

Need help customizing this (e.g., expanding Data Tables, connecting multiple surveys, or automating follow-ups)?

- Email: smarthome.smartelec@gmail.com
- guy.salvatore
- smarthome-smartelec.fr
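As referenced in the Iterative Loop step, here is a minimal sketch of the three-way branch after "Search Company". Because the node has Always Output Data enabled, a "not found" result arrives as an empty item; the field names are illustrative:

```javascript
// Sketch only: route each lead based on how many company matches were found.
const matches = $input.all().filter((i) => i.json.id); // empty item => not found

let route;
if (matches.length === 0) {
  route = 'create';        // new company: create record, increment created counter
} else if (matches.length === 1) {
  route = 'update';        // single match: update record, increment updated counter
} else {
  route = 'log-duplicate'; // multiple matches: log to the Leads file, Logs tab
}

return [{ json: { route, matchCount: matches.length } }];
```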
by Samir Saci
Tags: Logistics, Supply Chain, Warehouse Operations, Paperless Processes, Quality Management

## Context

Hi! I'm Samir, a Supply Chain Engineer and Data Scientist based in Paris, and founder of LogiGreen.

> Let us use n8n to help small companies digitalise their logistics and supply chain!

This workflow helps warehouse operators generate a complete damage report without writing anything manually. In warehouse operations, damaged pallets must be reported quickly and consistently. You can automate the entire process using AI to analyse photos of the damage.

For business inquiries, you can find me on LinkedIn.

## Example of damage report

The process starts with instructions sent to the operator. A photo of the damaged pallets is shared with the bot, and a complete report is generated and sent by email.

## Tutorial

A complete tutorial (with explanations of every node) is available on YouTube.

## Who is this template for?

This template is ideal for companies with limited IT resources:

- **Warehouse operators** who need a fast reporting tool
- **Quality teams** who want consistent and structured reports
- **3PLs and logistics providers** looking to digitalise damage claims
- **Manufacturers and retailers** with high inbound pallet volumes
- **Anyone using Telegram** on the warehouse floor for quick interactions

## What does this workflow do?

This workflow acts as an AI-powered damaged goods reporting assistant using Telegram, OpenAI Vision, and Gmail:

1. An operator sends a picture of the damaged pallet via Telegram.
2. The workflow downloads the image and sends it to GPT-4o for damage analysis.
3. The bot replies and asks for a photo of the pallet barcode.
4. The barcode picture is processed by GPT-4o Mini to extract the pallet number.
5. The workflow combines both results (damage analysis + pallet ID).
6. It generates an HTML email report with: damage summary, observed issues, severity level, and recommended actions (a sketch follows below).
7. The report is automatically sent via Gmail to the configured recipient.
8. The operator receives a confirmation message in Telegram.

The process does not require any data input from the operator, only taking pictures!

## Next Steps

Before running the workflow, follow the sticky notes and configure:

- Connect your Telegram Bot API
- Add your OpenAI API key in the AI nodes
- Connect your Gmail credentials
- Update the recipient email in the "Send Report by Email" node

Submitted: 20 November 2025
Template designed with n8n version 1.116.2
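As referenced in step 6, a minimal sketch of how the damage analysis and pallet ID could be assembled into the HTML email body in a Code node. The field names (`damageAnalysis`, `palletId`, etc.) are illustrative assumptions, not the template's actual schema:

```javascript
// Sketch only: combine damage analysis and pallet ID into an HTML report.
const { summary, issues = [], severity, actions = [] } = $json.damageAnalysis;
const palletId = $json.palletId;

const html = `
  <h2>Damage Report - Pallet ${palletId}</h2>
  <p><strong>Severity:</strong> ${severity}</p>
  <p>${summary}</p>
  <h3>Observed issues</h3>
  <ul>${issues.map((i) => `<li>${i}</li>`).join('')}</ul>
  <h3>Recommended actions</h3>
  <ul>${actions.map((a) => `<li>${a}</li>`).join('')}</ul>`;

return [{ json: { html } }];
```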
by txampa_n8n
# Insert Notion Database Fields from a Public URL via WhatsApp

## How it works

1. The WhatsApp Trigger receives a message containing a public URL.
2. The workflow extracts the URL and retrieves the page content (via Apify).
3. The content is parsed and transformed into structured fields.
4. A new record is created in Notion, mapping the extracted fields to your database properties.

## Setup steps

1. Configure your WhatsApp credentials in the WhatsApp Trigger node.
2. In the Search / URL Extraction step, adjust the input logic if your message format differs (see the sketch below).
3. Configure your Apify credentials (and actor/task) to scrape the target page.
4. Connect your Notion database and map the extracted values in Properties.

## Customization

- Default example: Amazon/Goodreads/Casa del Libro book pages.
- Update the scraping/parsing logic to match your target sources (e.g., books, products, articles, recipes, news, or LinkedIn profiles).
- If you change the data model in Notion, update the Properties mapping accordingly in the final node.
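A minimal sketch of the URL-extraction step, assuming the WhatsApp Trigger delivers the Meta Cloud API payload shape (`messages[0].text.body`); adjust the path to match your actual trigger output:

```javascript
// Sketch only: pull the first public URL out of the incoming WhatsApp message.
const text = $json.messages?.[0]?.text?.body ?? '';
const match = text.match(/https?:\/\/[^\s]+/);

if (!match) {
  // No URL found: return an error marker the next node can branch on
  return [{ json: { error: 'No URL found in message' } }];
}

return [{ json: { url: match[0] } }];
```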
by Davide
This workflow is a simple yet brilliant automation designed to generate time-coded SRT subtitles directly from a video URL using ElevenLabs. With just a single video link, the workflow automatically extracts the audio, transcribes it using AI speech recognition, and converts the transcription into a properly formatted SRT subtitle file with accurate timestamps.

It eliminates the need for manual captioning, saving creators hours of work. It's a fast, reliable, and fully automated solution, perfect for YouTube creators, video editors, and content producers who want to improve accessibility, engagement, and SEO with minimal effort.

With just one input (a video link), the workflow:

- Downloads the video
- Automatically transcribes the audio using AI speech-to-text
- Intelligently splits the transcription into readable subtitle segments
- Generates a perfectly formatted SRT file with accurate timestamps
- Uploads the final subtitle file to Google Drive, ready to use

It's a lightweight, no-friction workflow that turns a raw video into professional subtitles in a fully automated way.

## Key Advantages

1. **Extremely Simple, Yet Powerful:** This workflow proves that automation doesn't need to be complex to be effective. A minimal number of nodes delivers a complete end-to-end subtitle generation process.
2. **Automatic Time-Based SRT Generation:** Subtitles are not just plain text: they are properly time-aligned, making them immediately compatible with YouTube, video editors, and media players.
3. **Smart Subtitle Splitting:** The workflow intelligently splits text based on punctuation and length, producing subtitles that are easy to read, well paced, and aligned with natural speech flow.
4. **Perfect for Video Creators:** Ideal for YouTube creators, content marketers, educators, podcasters, and social video producers. It dramatically reduces the time needed to add subtitles, improving accessibility, engagement, SEO, and watch time.
5. **Fully Automatable & Scalable:** Once set up, it can be reused endlessly: one video or hundreds, manual trigger or automated pipelines, and it's easy to extend with translations, publishing, or notifications.

## How it works

The process begins when the workflow is manually triggered with a YouTube video URL as input. The system first fetches the video content via HTTP request, then sends the audio to ElevenLabs for transcription. The AI returns timestamped text segments, which are intelligently split into readable subtitle chunks based on punctuation and length constraints. These segments are formatted into standard SRT (SubRip) format with precise timing (sketched below), converted to a binary file, and finally uploaded to a specified Google Drive folder as a ready-to-use subtitle file.
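For orientation, here is a minimal sketch of the timestamp-to-SRT conversion performed by the "From Elevenlabs to Srt" step, assuming segments arrive as `{ text, start, end }` objects with times in seconds (an assumed shape, not ElevenLabs' exact response schema):

```javascript
// Sketch only: turn timestamped segments into SubRip (SRT) entries.
function toSrtTime(seconds) {
  const ms = Math.round(seconds * 1000);
  const h = String(Math.floor(ms / 3600000)).padStart(2, '0');
  const m = String(Math.floor((ms % 3600000) / 60000)).padStart(2, '0');
  const s = String(Math.floor((ms % 60000) / 1000)).padStart(2, '0');
  const milli = String(ms % 1000).padStart(3, '0');
  return `${h}:${m}:${s},${milli}`; // SRT uses a comma before milliseconds
}

const srt = $json.segments
  .map((seg, i) =>
    `${i + 1}\n${toSrtTime(seg.start)} --> ${toSrtTime(seg.end)}\n${seg.text}\n`)
  .join('\n');

return [{ json: { srt } }];
```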
## Set up Steps

1. **Configure Video Source:** In the "Set Video Url" node, replace the placeholder value with a valid YouTube video URL, or set up a method to provide URLs dynamically.
2. **API Credentials Setup:** Configure ElevenLabs API credentials in the "Transcribe audio or video" node with your API key, and set up Google Drive OAuth2 credentials in the "Upload file" node with appropriate folder permissions.
3. **Customize Output:** Adjust the SRT generation parameters in the "From Elevenlabs to Srt" node if different subtitle formatting is needed.
4. **Destination Folder:** Verify that the Google Drive folder ID in the upload node points to your desired destination.
5. **Execution:** Trigger the workflow manually and provide a video URL when prompted to generate and upload subtitles.

Subscribe to my new YouTube channel, where I share videos and Shorts with practical tutorials and FREE templates for n8n. Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
by Stephan Koning
Recruiter Mirror is a proof-of-concept ATS analysis tool for SDRs/BDRs. Compare your LinkedIn profile or CV to job descriptions and get recruiter-ready insights. By comparing candidate profiles against job descriptions, it highlights strengths, flags missing keywords, and generates actionable optimization tips. Designed as a practical proof of concept for breaking into tech sales, it shows how automation and AI prompts can turn LinkedIn into a recruiter-ready magnet.

Based on the workflow (Webhook → LinkedIn CV/JD fetch → GhostGenius API → n8n parsing/transform → Groq LLM → Output to Webhook), here is the list of tools and APIs required to set up the Recruiter Mirror proof of concept:

## Tools & APIs Required

1. **n8n (Automation Platform):** Either n8n Cloud or a self-hosted n8n instance. Used to orchestrate the workflow, manage nodes, and handle credentials securely.
2. **Webhook Node (Form Intake):** Captures the LinkedIn profile (LinkedIn_CV) and job posting (LinkedIn_JD) links submitted by the user. Acts as the starting point for the workflow.
3. **GhostGenius API:** Endpoints used: /v2/profile (scrapes and returns structured CV/LinkedIn data) and /v2/job (scrapes and returns structured job description data). **Auth:** Requires valid credentials (e.g., API key / header auth).
4. **Groq LLM API (via n8n node):** Model used: moonshotai/kimi-k2-instruct (via the Groq Chat Model node). Purpose: runs the ATS Recruiter Check, comparing the CV JSON against the JD JSON, then outputs structured JSON per the ATS schema. **Auth:** Groq account + saved API credentials in n8n.
5. **Code Node (JavaScript Transformation):** Parses Groq's JSON output safely (JSON.parse) and generates clean, recruiter-ready HTML summaries with structured sections: status, reasoning, recommendation, matched keywords, missing keywords, and optimization tips (sketched at the end of this section).
6. **n8n Native Nodes:**
   - **Set & Aggregate nodes:** Rebuild structured CV & JD objects.
   - **Merge node:** Combine CV data with the job description for comparison.
   - **If node:** Validates the LinkedIn URL before processing (with fallback error messaging).
   - **Respond to Webhook node:** Sends back the final recruiter-ready insights as JSON (or HTML).

## ⚠️ Important Notes

- **Credentials:** Store API keys and auth headers securely inside the n8n Credentials Manager (never hardcode them inside nodes).
- **Proof of Concept:** This workflow demonstrates feasibility but is **not production-ready** (scraping stability, LinkedIn's terms of use, and API limits should be considered before real deployments).
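As referenced in item 5, a minimal sketch of the safe-parse-and-render step. The response field (`$json.output`) and the report's property names are assumptions for illustration; align them with your actual Groq node output and ATS schema:

```javascript
// Sketch only: safely parse the LLM output and build an HTML summary.
let report;
try {
  report = JSON.parse($json.output);
} catch (err) {
  // Fall back to a structured error the Respond to Webhook node can return
  return [{ json: { status: 'error', reason: 'Model returned invalid JSON' } }];
}

const html = `
  <h3>Status: ${report.status}</h3>
  <p>${report.reasoning}</p>
  <p><strong>Recommendation:</strong> ${report.recommendation}</p>
  <p><strong>Matched:</strong> ${(report.matched_keywords ?? []).join(', ')}</p>
  <p><strong>Missing:</strong> ${(report.missing_keywords ?? []).join(', ')}</p>`;

return [{ json: { report, html } }];
```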
by Jimleuk
Generating contextual summaries is a token-intensive approach to RAG embeddings, which can quickly rack up costs if your inference provider charges by token usage. Featherless.ai is an inference provider with a different pricing model: it charges a flat subscription fee (starting from $10) and allows unlimited token usage instead. If you typically spend over $10-$25 a month, you may find Featherless a cheaper and more manageable option for your projects or team. For this template, Featherless's unlimited token usage is well suited to generating contextual summaries at high volume for the majority of RAG workloads.

- LLM: moonshotai/Kimi-K2-Instruct
- Embeddings: models/gemini-embedding-001

## How it works

- A large document is imported into the workflow using the HTTP node and its text extracted via the Extract from File node. For this demonstration, the UK Highway Code is used as an example.
- Each page is processed individually and a contextual summary is generated for it.
- The contextual summary generation takes the current page together with the preceding and following pages and summarises the contents of the current page (sketched at the end of this section).
- This summary is then converted to embeddings using the gemini-embedding-001 model. Note: an HTTP Request node is used to call the Gemini embedding API because, at the time of writing, n8n does not support the new API's schema.
- These embeddings are then stored in a Qdrant collection, from which they can be retrieved via an agent/MCP server or another workflow.

## How to use

- Replace the large-document import with your own source of documents, such as Google Drive or an internal repo.
- Replace the manual trigger if you want the workflow to run as soon as documents become available. If you're using Google Drive, check out my Push notifications for Google Drive template.
- Expand and/or tune embedding strategies to suit your data. You may want to additionally embed the content itself and perform multi-stage queries using both.

## Requirements

- Featherless.ai account and API key
- Gemini account and API key for embeddings
- Qdrant vector store

## Customising this workflow

- Sparse vectors were not included in this template due to scope, but they should be the next step to getting the most out of contextual retrieval.
- Be sure to explore other models on the Featherless.ai platform, or host your own custom/finetuned models.
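A minimal sketch of the context-windowing step referenced above: for each page, the previous, current, and following pages are bundled so the summarisation prompt can see the surrounding context. One input item per page is assumed, with the text in a `text` field (illustrative names):

```javascript
// Sketch only: build (previous, current, next) page windows for summarisation.
const pages = $input.all().map((i) => i.json.text); // one item per page

return pages.map((text, idx) => ({
  json: {
    pageNumber: idx + 1,
    currentPage: text,
    precedingPage: pages[idx - 1] ?? '', // empty at the document boundaries
    followingPage: pages[idx + 1] ?? '',
  },
}));
```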
by Yaron Been
Monitor CRM accounts for hiring spikes by enriching HubSpot companies with PredictLeads job data and alerting your team via Slack. This workflow pulls all companies from your HubSpot CRM, checks each one against the PredictLeads Job Openings API for target roles (sales, engineering, marketing, product, data), compares the current count to historical data stored in Google Sheets, and flags any company where hiring jumped more than 50%. Flagged companies get updated in HubSpot with a hiring signal and trigger a Slack alert so your sales team can act fast.

## How it works

1. A Schedule trigger runs the workflow daily at 9 AM.
2. Retrieves all companies from HubSpot CRM (domain, name, ID).
3. Loops through each company and fetches job openings from PredictLeads.
4. Filters jobs to target roles (sales, engineering, marketing, product, data).
5. Reads the previous job count for that company from Google Sheets.
6. Calculates the percentage change between current and historical counts (see the sketch below).
7. If hiring increased more than 50%, flags it as a spike.
8. Updates the HubSpot company record with a hiring signal property.
9. Sends a Slack alert with the company name, role count, and percentage change.
10. Updates Google Sheets with the latest count regardless of spike status.

## Setup

1. Connect your HubSpot CRM (OAuth2) with company read/write access.
2. Create a Google Sheet with a "HistoricalCounts" tab containing the columns: domain, company_name, job_count, previous_count, percent_change, check_date.
3. Connect a Slack bot to the channel where you want hiring alerts.
4. Add your PredictLeads API credentials (X-Api-Key and X-Api-Token headers).

## Requirements

- HubSpot CRM account with OAuth2 credentials.
- Google Sheets OAuth2 credentials.
- Slack OAuth2 credentials (bot with chat:write permission).
- PredictLeads API account (https://docs.predictleads.com).

## Notes

- The 50% spike threshold can be adjusted in the IF node.
- Target roles are configured in the Filter Target Roles code node; add or remove roles as needed.
- The workflow updates historical data on every run, so spike detection improves over time.
- PredictLeads Job Openings API docs: https://docs.predictleads.com
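As referenced in step 6, a minimal sketch of the spike calculation. The sheet column names match the setup above; the first-sighting fallback is an assumption you may want to tune:

```javascript
// Sketch only: compare the current job count against the historical count.
const current = $json.job_count;       // from PredictLeads, filtered to target roles
const previous = $json.previous_count; // from the HistoricalCounts sheet

const percentChange = previous > 0
  ? ((current - previous) / previous) * 100
  : current > 0 ? 100 : 0; // assumed: treat a first sighting of jobs as a spike

const isSpike = percentChange > 50; // threshold checked by the IF node

return [{ json: { ...$json, percent_change: percentChange, isSpike } }];
```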
by AttenSys AI
# Virtual Try-On Image & Video Generation (VLM Run)

## Overview

This n8n workflow enables a Virtual Try-On experience where users upload a dress image and the system:

- Combines it with a fashion model image
- Generates a realistic try-on image
- Generates a fashion walking video
- Automatically shares the results via Telegram, Discord, and YouTube

## Use Cases

- Virtual fashion try-on
- AI fashion marketing
- Clothing e-commerce previews
- Social media fashion automation
- Influencer & brand demo pipelines

## Key Features

- Image-based virtual try-on (model wearing the dress)
- AI-generated fashion video
- Multi-platform publishing (Telegram, Discord, YouTube)
- Modular, extensible workflow design

## Workflow Architecture

**Input:**

- Dress Image: uploaded by the user (Form Trigger)
- Model Image: downloaded from a predefined URL
- Prompt: auto-constructed inside the workflow

**Output:**

- Try-On Image
- Fashion Walk Video
- Shared to: Telegram (image/video), Discord (image), YouTube (video upload)

## Required Credentials

You must configure the following credentials in n8n:

| Service  | Credential Type  |
| -------- | ---------------- |
| VLM Run  | VLM Run API      |
| Telegram | Telegram Bot API |
| Discord  | Discord OAuth2   |
| YouTube  | YouTube OAuth2   |

## ⚠️ Community Node Warning

> Important: This workflow uses a Community Node: @vlm-run/n8n-nodes-vlmrun

What this means:

- This node is NOT installed by default in n8n
- You must manually install it before using the workflow

### Installation

Run the following command in your n8n environment, then restart n8n:

```
npm install @vlm-run/n8n-nodes-vlmrun
```

Community Nodes documentation: https://docs.n8n.io/integrations/community-nodes/
by Dahiana
## Description

**Who's it for:** Content creators, marketers, and businesses who publish on both YouTube and blog platforms.

**What it does:** Monitors your YouTube channel for new videos and automatically creates SEO-optimized blog posts using AI, then publishes them to WordPress or Webflow.

**How it works:**

1. An RSS Feed Trigger polls for new YouTube videos (at a configurable interval; see the feed URL note below).
2. Extracts video metadata (title, description, thumbnail).
3. A YouTube node extracts the full description for extra context.
4. Uses OpenAI (you can choose any model) to generate a 600-800 word blog post.
5. Publishes to WordPress and/or Webflow with error handling.
6. Sends notifications to Telegram if publishing fails.

**Requirements:**

- YouTube channel ID (avoid tutorial channels for better results)
- OpenAI API key (or similar)
- WordPress or Webflow credentials
- Telegram bot (optional, for error notifications)

**Setup steps:**

1. Replace YOUR_CHANNEL_ID in the RSS Feed Trigger.
2. Add OpenAI credentials in the AI generation node.
3. Configure WordPress and/or Webflow credentials.
4. Add a Telegram bot for error notifications (optional). If you set up Telegram, you need to input your channel ID.
5. Test with a manual execution first.

**Customization:**

- Modify the AI prompt for different content styles.
- Adjust the polling frequency (30-60 minutes recommended).
- Add more CMS platforms.
- Add content verification (e.g., is the generated post longer than 600 characters? If not, regenerate or improve it).
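For reference, YouTube exposes a per-channel RSS feed at a stable URL, which is what the RSS Feed Trigger polls once you substitute your channel ID:

```
https://www.youtube.com/feeds/videos.xml?channel_id=YOUR_CHANNEL_ID
```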