by Richard Nijsten
**Automate daily Jenkins test reports with AI and HTTP Requests**

As a test automation engineer, staying on top of daily test runs in Jenkins is essential. This workflow automates that process: it pulls specific test details from a Google Sheet, retrieves data from your local Jenkins environment, and uses AI to generate a concise summary report sent via email.

**Who's it for**

- Test automation engineers using Jenkins.
- QA teams looking to streamline daily reporting.

**How it works**

1. **Scheduled Trigger:** The workflow runs automatically at a pre-defined time.
2. **Dynamic Data Retrieval:** It constructs an HTTP request from the data in your Google Sheet to fetch specific Jenkins results.
3. **AI Optimization:** Only relevant data is extracted, minimizing AI token usage and focusing on the most important metrics.
4. **Summarization:** The AI groups the results and formats them into a clear, professional email.
5. **Distribution:** The report is sent to every recipient listed in the MailingList column.

**How to set up**

In the Google Sheet, fill in the BaseUrl, Environment, FeatureClass and Feature columns used to build up the Jenkins URL, for example:

| BaseUrl | Environment | FeatureClass | Feature | MailingList |
|---|---|---|---|---|
| `<BaseUrl>` | `<environment>` | `<FeatureClassName>` | `<FeatureName>` | `<mail>` |

Define recipients: in the MailingList column, add the email addresses of the people who need to receive the report. If there are multiple recipients, ensure they are separated by commas.

**Requirements**

- Access to your Jenkins server.
- An AI API key (e.g., Gemini, OpenAI).
- A Google Cloud project with the Google Sheets API enabled.
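As a rough sketch of the "Dynamic Data Retrieval" step, the code below builds a Jenkins URL and recipient list from one sheet row. The column names follow the sheet layout above; the URL pattern itself is an assumption and will vary with your Jenkins job layout.

```javascript
// Hypothetical sketch: assemble the Jenkins request URL from a Google Sheet row.
// The path segments (job/<env>/lastCompletedBuild/testReport/...) are an
// illustrative assumption; adjust them to your Jenkins job structure.
function buildJenkinsUrl(row) {
  const base = row.BaseUrl.replace(/\/+$/, ''); // drop trailing slashes
  return `${base}/job/${row.Environment}/lastCompletedBuild/testReport/${row.FeatureClass}/${row.Feature}/`;
}

function parseRecipients(row) {
  // MailingList holds comma-separated addresses
  return row.MailingList.split(',').map(s => s.trim()).filter(Boolean);
}

const row = {
  BaseUrl: 'https://jenkins.local:8080/',
  Environment: 'staging',
  FeatureClass: 'LoginTests',
  Feature: 'login_feature',
  MailingList: 'qa@example.com, lead@example.com',
};
console.log(buildJenkinsUrl(row));
console.log(parseRecipients(row));
```

In n8n this logic would live in a Code node feeding an HTTP Request node.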
by Trung Tran
**Telegram RAG Chatbot with PDF Documents & Google Drive Backup**

An upgraded Retrieval-Augmented Generation (RAG) chatbot built in n8n that lets users ask questions via Telegram and receive accurate answers from uploaded PDFs. It embeds documents using OpenAI and backs them up to Google Drive.

**Who's it for**

Perfect for:
- Knowledge workers who want instant access to private documents
- Support teams needing searchable SOPs and guides
- Educators enabling course material Q&A for students
- Individuals automating personal document search and cloud backup

**How it works / What it does**

*Telegram chat handling*
1. **User sends a message.** Triggered by the Telegram bot, the workflow checks whether the message is text.
2. **Text message → OpenAI RAG Agent.** If the message is text, it is passed to a GPT-powered document agent, which retrieves relevant information from the embedded documents using semantic search and returns a context-aware answer.
3. **Send the answer back.** The bot sends the generated response back to the Telegram user.
4. **Non-text input fallback.** If the message is not text, the bot replies with a polite unsupported-input message.

*PDF upload and embedding*
1. **User uploads PDFs manually.** A manual trigger starts the embedding flow.
2. **Default Data Loader.** Reads and chunks the PDF(s) into text segments.
3. **Insert to Vector Store (Embedding).** Text chunks are embedded using OpenAI and saved for retrieval.
4. **Backup to Google Drive.** The original PDF is uploaded to Google Drive for safekeeping.

**How to set up**

1. **Telegram bot:** create one via BotFather and connect it to the Telegram Trigger node.
2. **OpenAI:** use your OpenAI API key and connect the Embeddings and Chat Model nodes (GPT-3.5/4). Ensure both embedding and querying use the same Embeddings node.
3. **Google Drive:** set up credentials in n8n for your Google account and connect the "Backup to Google Drive" node.
4. **PDF ingestion:** use the "Upload your PDF here" trigger and connect it to the loader, embedder, and backup flow.

**Requirements**

- Telegram bot token
- OpenAI API key (GPT + Embeddings)
- n8n instance (self-hosted or cloud)
- Google Drive integration
- PDF files to upload

**How to customize the workflow**

| Feature | How to customize |
|---|---|
| Auto-ingest from folders | Add Google Drive/Dropbox watchers for new PDFs |
| Add file upload via Telegram | Extend the Telegram bot to receive PDFs and run the embedding flow |
| Track user questions | Log Telegram usernames and questions to a database |
| Summarize documents | Add a summarization step on upload |
| Add Markdown or HTML support | Format replies for better Telegram rendering |

Built with Telegram + PDF + OpenAI Embeddings + Google Drive + n8n.
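The chunking the Default Data Loader performs before embedding can be sketched as a sliding window with overlap. The chunk size and overlap below are illustrative values, not the node's actual defaults.

```javascript
// Minimal sketch of PDF-text chunking before embedding: split extracted text
// into fixed-size segments with a small overlap so context isn't cut mid-idea.
// chunkSize/overlap are illustrative, not the Default Data Loader's defaults.
function chunkText(text, chunkSize = 500, overlap = 50) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last window reached the end
  }
  return chunks;
}

const doc = 'x'.repeat(1200);
const chunks = chunkText(doc);
console.log(chunks.length);    // 3
console.log(chunks[1].length); // 500
```

Each chunk would then be sent to the Embeddings node and stored in the vector store.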
by Cheng Siong Chin
**How It Works**

This workflow automates monthly tax filing by retrieving financial data, performing AI-driven tax calculations, coordinating pre-filing reviews with key stakeholders, incorporating feedback, and managing overall submission readiness. It pulls accounting records, executes GPT-5-based tax calculations with transparent reasoning, formats comprehensive pre-filing reports, and routes them to a submission coordinator via email for review. The system captures reviewer feedback through structured prompts, intelligently applies necessary corrections, archives finalized records in Google Drive, and continuously tracks filing status. It is designed for accounting firms, tax practices, and finance departments that require coordinated, multi-stakeholder tax filing with minimal manual intervention.

**Setup Steps**
1. Connect your accounting system and configure the financial data fetch parameters.
2. Set up the OpenAI GPT-4 API for tax calculations and reasoning extraction.
3. Configure Gmail, Chat Model, and Google Drive credentials.
4. Define submission coordinator contacts and configure feedback collection.

**Prerequisites** Accounting system access; OpenAI API key; Gmail account; Google Drive

**Use Cases** Tax firms managing multi-client monthly filings with partner review

**Customization** Modify tax calculation prompts for your jurisdictions; adjust feedback collection fields

**Benefits** Eliminates manual filing coordination; reduces submission errors
by Rahul Joshi
**Description**

Transform Figma design files into detailed QA test cases with AI-driven analysis and structured export to Google Sheets. This workflow helps QA and product teams streamline design validation, test coverage, and documentation, all without manual effort.

**What This Template Does**

1. Trigger manually and input your Figma file ID.
2. Fetches the full Figma design data (layers, frames, components) via the API.
3. Sends the structured design JSON to GPT-4o-mini for intelligent test case generation.
4. The AI analyzes UI components, user flows, and accessibility aspects to generate 5–10 test cases.
5. Parses and formats the results into a clean structure.
6. Exports test cases directly to Google Sheets for QA tracking and reporting.

**Key Benefits**
- Saves 2–3 hours per design by automating test case creation
- Ensures consistent, comprehensive QA documentation
- Uses AI to detect UX, accessibility, and functional coverage gaps
- Centralizes output in Google Sheets for easy collaboration

**Features**
- Figma API integration for design parsing
- GPT-4o-mini model for structured test generation
- Automated Google Sheets export
- Dynamic file ID and output schema mapping
- Built-in error handling for large design files

**Requirements**
- Figma Personal Access Token
- OpenAI API key (GPT-4o-mini)
- Google Sheets OAuth2 credentials

**Target Audience**
- QA and Test Automation Engineers
- Product & Design Teams
- Startups and Agencies validating Figma prototypes

**Setup Instructions**
1. Connect your Figma token as HTTP Header Auth (X-Figma-Token).
2. Add your OpenAI API key in n8n credentials (model: gpt-4o-mini).
3. Configure Google Sheets OAuth2 and select your sheet.
4. Input the Figma file ID from the design URL.
5. Run once manually, verify the output, then enable for regular use.
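The Figma fetch in Step 2 boils down to one authenticated GET. The endpoint and `X-Figma-Token` header are Figma's documented REST API; the file key and token below are placeholders.

```javascript
// Sketch of the HTTP Request node's Figma call: GET /v1/files/:file_key
// with the personal access token in the X-Figma-Token header.
function figmaRequest(fileKey, token) {
  return {
    url: `https://api.figma.com/v1/files/${fileKey}`,
    method: 'GET',
    headers: { 'X-Figma-Token': token }, // Header Auth credential in n8n
  };
}

// With Node 18+ you could execute it as:
//   const res = await fetch(req.url, { headers: req.headers });
//   const design = await res.json(); // layers, frames, components
const req = figmaRequest('AbC123fileKey', 'FIGMA_TOKEN');
console.log(req.url);
```

The file key is the segment after `/file/` (or `/design/`) in the Figma URL you paste in Step 4.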
by Shun Nakayama
Turn your favorite podcast episodes into engaging social media content automatically. This workflow fetches new episodes from an RSS feed, transcribes the audio using OpenAI Whisper, generates a concise summary using GPT-4o, and drafts a tweet. It then sends the draft to Slack for your review before posting it to X (Twitter).

**Who is this for**

Content creators, social media managers, and podcast enthusiasts who want to share insights without manually listening to and typing out every episode.

**Key Features**

- **Large File Support:** includes custom logic to download audio in chunks, ensuring stability even with long episodes (preventing timeouts).
- **Human-in-the-Loop:** nothing gets posted without your approval. You can review the AI-generated draft in Slack before it goes live.
- **High-Quality AI:** uses OpenAI's Whisper for accurate transcription and GPT-4o for intelligent summarization.

**How it works**

1. **Monitor:** checks the podcast RSS feed daily for new episodes.
2. **Process:** downloads the audio (handling large files via chunking) and transcribes it.
3. **Draft:** AI summarizes the transcript into bullet points and formats it for X (Twitter).
4. **Approve:** sends the draft to a Slack channel.
5. **Publish:** once approved by you, it posts the tweet to your X account.

**Requirements**

- OpenAI API Key
- Slack Account & App (Bot Token)
- X (Twitter) Developer Account (OAuth2)

**Setup instructions**

1. **RSS Feed:** the template defaults to "TED Talks Daily" for demonstration. Open the [Step 1] RSS node and replace the URL with your target podcast.
2. **Connect Credentials:** set up your credentials for OpenAI, Slack, and X (Twitter) in the respective nodes.
3. **Slack Channel:** in the [Step 12] Slack node, select the Channel ID where you want to receive the approval request.
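The chunked-download idea behind the large-file support can be sketched as splitting the episode into byte ranges, each fetched with an HTTP `Range` header. The 10 MB chunk size is illustrative; the template's own chunking logic may use a different size.

```javascript
// Sketch: compute the Range headers needed to download a large audio file
// in pieces instead of one long (timeout-prone) request.
function byteRanges(totalBytes, chunkBytes = 10 * 1024 * 1024) {
  const ranges = [];
  for (let start = 0; start < totalBytes; start += chunkBytes) {
    const end = Math.min(start + chunkBytes, totalBytes) - 1; // inclusive end
    ranges.push(`bytes=${start}-${end}`);
  }
  return ranges;
}

// Each range becomes one HTTP request: { headers: { Range: ranges[i] } },
// and the response bodies are concatenated in order before transcription.
console.log(byteRanges(25 * 1024 * 1024));
```

The total size comes from a prior HEAD request's `Content-Length`; the server must support range requests (`Accept-Ranges: bytes`).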
by Artem Boiko
A professional BIM-to-cost pipeline that extracts data from Revit models (2015–2026), classifies elements with AI, decomposes them into construction works, and generates detailed cost estimates using the open-source DDC CWICR database. Produces HTML reports and Excel exports with a full resource breakdown.

**Who's it for**

- **BIM Managers** automating quantity takeoff and cost estimation
- **Cost Engineers** integrating 5D workflows into design pipelines
- **Construction Companies** standardizing estimates from Revit models
- **General Contractors** doing rapid budget checks during design
- **MEP Engineers** pricing mechanical/electrical/plumbing systems
- **Developers** building custom BIM-to-cost integrations

**What it does**

1. Extracts BIM data from the Revit model via a converter (RvtExporter)
2. Classifies building vs non-building elements using AI
3. Detects project type (Residential/Commercial/Industrial)
4. Generates construction phases and assigns element types
5. Decomposes each BIM type into detailed work items
6. Searches the DDC CWICR vector database for matching rates
7. Calculates costs with unit mapping and resource breakdown
8. Validates work completeness and checks for gaps
9. Generates a professional HTML report + Excel file

**How it works**

```
REVIT MODEL (.rvt)   (Revit 2015–2026 supported)
        |
BLOCK 1: CONVERSION
  RvtExporter.exe -> Excel with BIM element schedules
        |
BLOCK 2: DATA LOADING & CLASSIFICATION
  - Filter 3D View elements only
  - AI analyzes headers -> aggregation rules (sum/mean/last)
  - AI classifies building vs non-building elements
  - Hard exclude: Grids, Levels, Annotations, Views, etc.
        |
BLOCK 3: PROJECT ANALYSIS (Stages 0-3)
  STAGE 0: Collect filtered BIM data
  STAGE 1: AI detects project type
  STAGE 2: AI generates construction phases
  STAGE 3: AI assigns element types to phases
        |
BLOCK 4: WORK DECOMPOSITION (Stage 4)
  Loop through each BIM type:
  - AI decomposes the type into work items
  - Example: Window -> Demolition, Installation, Sealing, Hardware
  - Prepares search queries for pricing
        |
BLOCK 5: PRICING & CALCULATION (Stages 5-7)
  STAGE 5: Vector search in Qdrant (text-embedding-3-large, 3072 dim)
  STAGE 6: Map BIM units -> rate units (m² -> 100 m²)
  STAGE 7: Calculate costs (Qty × Unit Price)
        |
BLOCK 6: VALIDATION & AGGREGATION
  STAGE 7.5: AI validates work completeness
  STAGE 8: Aggregate costs by phases
        |
BLOCK 7: REPORT GENERATION (Stage 9)
  - Professional HTML report with expandable rows
  - Excel-compatible XLS file
  - Auto-opens in browser
```

**Pipeline Stages**

| Stage | Name | Description |
|-------|------|-------------|
| 0 | Collect | Gather filtered BIM data |
| 1 | Project Type | AI detects Residential/Commercial/Industrial |
| 2 | Phases | AI generates construction phases |
| 3 | Assignment | AI assigns element types to phases |
| 4 | Decomposition | AI breaks types into work items |
| 5 | Vector Search | Query Qdrant for pricing rates |
| 6 | Unit Mapping | Convert BIM units to rate units |
| 7 | Calculation | Compute costs (Qty × Price) |
| 7.5 | Validation | AI checks completeness, finds gaps |
| 8 | Aggregation | Sum costs by phases |
| 9 | Reports | Generate HTML + XLS outputs |

**Prerequisites**

| Component | Requirement |
|-----------|-------------|
| n8n | v1.30+ with Execute Command node |
| Revit Exporter | RvtExporter.exe (provided separately) |
| OpenAI API | For embeddings + LLM tasks |
| Qdrant | Vector DB with DDC CWICR collections |
| DDC CWICR Data | GitHub |
| Windows | For Revit converter execution |

**Setup**

1. **Configure file paths.** In the Setup - Define file paths node:

```json
{
  "path_to_converter": "C:\\path\\to\\RvtExporter.exe",
  "project_file": "C:\\path\\to\\your_project.rvt",
  "group_by": "Type Name",
  "language_code": "DE"
}
```

2. **Select language & region.**

| Code | Language | City | Currency |
|------|----------|------|----------|
| AR | Arabic | Dubai | AED |
| ZH | Chinese | Shanghai | CNY |
| DE | German | Berlin | EUR |
| EN | English | Toronto | CAD |
| ES | Spanish | Barcelona | EUR |
| FR | French | Paris | EUR |
| HI | Hindi | Mumbai | INR |
| PT | Portuguese | São Paulo | BRL |
| RU | Russian | St. Petersburg | RUB |

3. **Configure the AI model.** Connect your preferred LLM in the model nodes:

| Provider | Model | Notes |
|----------|-------|-------|
| OpenAI | GPT-4o | Default, recommended |
| Anthropic | Claude Opus 4 | High quality |
| Google | Gemini 2.5 Pro | Good for large contexts |
| xAI | Grok 4 | Fast inference |
| DeepSeek | DeepSeek Chat | Cost-effective |
| OpenRouter | Various | Multi-model access |

4. **Set up Qdrant.** Ensure the DDC CWICR collections are loaded:

- DE_BERLIN_workitems_costs_resources_EMBEDDINGS_3072_DDC_CWICR
- ENG_TORONTO_workitems_costs_resources_EMBEDDINGS_3072_DDC_CWICR
- RU_STPETERSBURG_workitems_costs_resources_EMBEDDINGS_3072_DDC_CWICR
- ...

5. **Configure OpenAI credentials.** Set up an OpenAI API credential for:

- Embeddings (text-embedding-3-large, 3072 dimensions)
- LLM calls (if using OpenAI as the primary model)

**Features**

| Feature | Description |
|---------|-------------|
| Revit Integration | Direct extraction from .rvt files (2015–2026) |
| Multi-LLM Support | OpenAI, Claude, Gemini, Grok, DeepSeek |
| Smart Classification | AI separates building from non-building elements |
| Work Decomposition | Breaks BIM types into detailed work items |
| Vector Search | Semantic matching via Qdrant + OpenAI embeddings |
| Unit Mapping | Automatic conversion (m² -> 100 m², pcs -> sets) |
| AI Validation | Checks for missing works and duplications |
| Phase Aggregation | Costs grouped by construction phases |
| HTML Report | Professional report with quality indicators |
| Excel Export | XLS file with formulas and links |
| 9 Languages | Full localization + regional pricing |

**Hard Exclude Categories**

The pipeline automatically excludes non-physical elements:

- Levels, Grids, Reference Planes
- Annotations, Dimensions, Text Notes
- Tags, Views, Sheets, Schedules
- Legends, Viewports, Section Boxes
- Scope Boxes, Match Lines
- Model Groups, Detail Groups
- Entourage (RPC people, cars, plants)

**Example Output**

Input: residential building Revit model (45 element types)

Processing:
- Project type detected: Residential Multi-Family
- Phases generated: Foundations -> Structure -> Envelope -> MEP -> Finishes
- Types assigned: 45 types -> 5 phases
- Works decomposed: 45 types -> 280 work items
- Rates found: 245/280 (87.5%)

Output files:
- project_2024-12-08.html (professional HTML report)
- project_2024-12-08.xls (Excel with full breakdown)

HTML report features:
- KPI summary (total cost, items, phases)
- Expandable phase sections
- Quality indicators (green/yellow/red)
- Resource breakdown per work item
- Clickable rate codes
- Responsive design

**Output Structure**

```
Cost Estimate: Residential Building
├── Phase 1: Foundations
│   ├── Foundation walls → 125.5 m³ → €12,450
│   ├── Concrete footings → 45.2 m³ → €8,340
│   └── Waterproofing → 280 m² → €4,200
├── Phase 2: Structure
│   ├── Concrete columns → 18 pcs → €9,720
│   ├── Floor slabs → 450 m² → €67,500
│   └── Stairs → 3 flights → €8,100
├── Phase 3: Envelope
│   ├── Exterior walls → 680 m² → €95,200
│   ├── Windows → 42 pcs → €25,200
│   └── Roof system → 225 m² → €33,750
└── TOTAL: €485,240
```

**Notes & Tips**

- **First run:** conversion takes 1–3 minutes depending on model size
- **Cached conversion:** subsequent runs skip conversion if the Excel file exists
- **Testing mode:** limit to 10 types for faster debugging
- **Rate accuracy:** depends on DDC CWICR coverage for your region
- **Custom phases:** the AI adapts phases based on project type
- **Missing rates:** flagged with a red indicator in the report

**Extending the Pipeline**

- **Add custom rates:** extend the Qdrant collection with your pricing
- **Chain to PM tools:** connect to OpenProject, Monday, Asana
- **Email reports:** add an email node after report generation
- **Cloud storage:** upload to Google Drive, OneDrive, S3
- **Webhook trigger:** replace the manual trigger for API access

**Categories** AI · Data Transformation · Document Ops · Files & Storage

**Tags** bim, revit, cost-estimation, 5d-bim, 4d-bim, qdrant, vector-search, openai, construction, quantity-takeoff, html-report, multilingual

**Author** DataDrivenConstruction.io · https://DataDrivenConstruction.io · info@datadrivenconstruction.io

**Consulting & Training**

We help AEC firms implement:

- BIM-to-cost automation pipelines
- 4D/5D integration workflows
- Custom Revit data extractors
- AI-powered estimation systems
- Vector database deployment for construction data

Contact us to adapt this pipeline to your Revit templates and regional pricing.
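The Stage 6 unit mapping and Stage 7 cost calculation can be sketched as a factor lookup followed by Qty × Unit Price. The factor table here is a small illustrative subset; DDC CWICR rates may be quoted per aggregate unit (e.g. per 100 m²), hence the conversion.

```javascript
// Sketch of unit mapping (Stage 6) + cost calculation (Stage 7).
// Factors convert a BIM quantity into the rate's quoting unit.
const unitFactors = {
  'm2->100 m2': 1 / 100, // rate quoted per 100 m²
  'm3->m3': 1,
  'pcs->pcs': 1,
};

function workItemCost(qty, bimUnit, rateUnit, unitPrice) {
  const factor = unitFactors[`${bimUnit}->${rateUnit}`];
  if (factor === undefined) throw new Error(`No mapping ${bimUnit} -> ${rateUnit}`);
  return qty * factor * unitPrice; // quantity in rate units × unit price
}

// 680 m² of exterior wall at a hypothetical €14,000 per 100 m²
console.log(workItemCost(680, 'm2', '100 m2', 14000)); // 95200
```

Missing mappings are exactly the gaps the Stage 7.5 validation step is meant to flag.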
**Resources**

- **DDC CWICR Database:** GitHub
- **Qdrant Documentation:** qdrant.tech/documentation
- **OpenAI Embeddings:** platform.openai.com
- **n8n Execute Command:** docs.n8n.io

Star us on GitHub: github.com/datadrivenconstruction/DDC-CWICR
by Shelly-Ann Davy
**Automate Bug Reports: GitHub Issues → AI Analysis → Jira Tickets with Slack & Discord Alerts**

Automatically convert GitHub issues into analyzed Jira tickets with AI-powered severity detection, developer assignment, and instant team alerts.

**Overview**

This workflow captures GitHub issues in real time, analyzes them with GPT-4o for severity and categorization, creates enriched Jira tickets, assigns the right developers, and notifies your team across Slack and Discord, all automatically.

**Features**

- **AI-Powered Triage:** GPT-4o analyzes bug severity, category, and root cause, and generates reproduction steps
- **Smart Assignment:** automatically assigns developers based on mentioned files and issue context
- **Two-Way Sync:** posts Jira ticket links back to GitHub issues
- **Multi-Channel Alerts:** rich notifications in Slack and Discord with action buttons
- **Time Savings:** eliminates 15–30 minutes of manual triage per bug
- **Customizable Routing:** easy developer mapping and priority rules

**What Gets Created**

Jira ticket:
- Original GitHub issue details with reporter info
- AI severity assessment and categorization
- Reproduction steps and root cause analysis
- Estimated completion time
- Automatic labeling and priority assignment

GitHub comment:
- Jira ticket link
- AI analysis summary
- Assigned developer and estimated time

Team notifications:
- Severity badges and quick-access buttons
- Developer assignment and root cause summary
- Color-coded priority indicators

**Use Cases**

- Development teams managing 10+ bugs per week
- Open source projects handling community reports
- DevOps teams tracking infrastructure issues
- QA teams coordinating with developers
- Product teams monitoring user-reported bugs

**Setup Requirements**

Required:
- GitHub repository with admin access
- Jira Software workspace
- OpenAI API key (GPT-4o access)
- Slack workspace OR Discord server

Customization needed:
- Update developer email mappings in the "Parse GPT Response & Map Data" node
- Replace YOUR_JIRA_PROJECT_KEY with your project key
- Update the Slack channel name (default: dev-alerts)
- Replace YOUR_DISCORD_WEBHOOK_URL with your webhook
- Change your-company.atlassian.net to your Jira URL

Setup time: 15–20 minutes

**Configuration Steps**

1. Import the workflow JSON into n8n
2. Add credentials: GitHub OAuth2, Jira API, OpenAI API, Slack, Discord
3. Configure the GitHub webhook in repository settings
4. Customize developer mappings and project settings
5. Test with a sample GitHub issue
6. Activate the workflow

**Expected Results**

- 90% faster bug triage (20 min → 2 min per issue)
- 100% consistency in bug analysis
- Zero missed notifications
- Better developer allocation
- Improved bug documentation

**Tags** GitHub, Jira, AI, GPT-4, Bug Tracking, DevOps, Automation, Slack, Discord, Issue Management, Development, Project Management, OpenAI, Webhook, Team Collaboration
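The "Smart Assignment" routing can be sketched as a first-match lookup over file-path patterns. The patterns and email addresses below are placeholders for the mappings you would edit in the "Parse GPT Response & Map Data" node, not the template's actual values.

```javascript
// Illustrative sketch: route a bug to a developer based on files the issue
// mentions. First matching pattern wins; the last entry is a catch-all.
const developerMap = [
  { pattern: /^src\/api\//, email: 'backend-dev@your-company.com' },
  { pattern: /^src\/ui\//,  email: 'frontend-dev@your-company.com' },
  { pattern: /./,           email: 'triage-lead@your-company.com' }, // fallback
];

function assignDeveloper(mentionedFiles) {
  for (const { pattern, email } of developerMap) {
    if (mentionedFiles.some(f => pattern.test(f))) return email;
  }
  return developerMap[developerMap.length - 1].email;
}

console.log(assignDeveloper(['src/api/users.js'])); // backend-dev@your-company.com
```

The mentioned files would come from the GPT-4o analysis of the issue body.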
by Divyansh Chauhan
**Prompt To Video (MagicHour API) with Music & YouTube**

Automate AI video creation, background music, YouTube uploads, and result logging, all from a single text prompt.

**Overview**

This n8n template turns a text prompt into a complete AI-generated video using the MagicHour API, adds background music, generates YouTube metadata, uploads to YouTube, and logs results in Google Sheets, all in one flow. Perfect for creators, marketers, and startups producing YouTube content at scale, from daily AI Shorts to explainers or marketing clips.

**Use Cases**
- Daily AI-generated Shorts
- Product explainers
- Marketing & brand automation
- Repurposing blog posts into videos
- AI storytelling or creative projects

**How It Works**
1. Trigger when a new row is added to Google Sheets or via chat input.
2. Gemini parses and normalizes the text prompt.
3. The MagicHour API generates the AI video.
4. Poll until the render completes.
5. (Optional) Mix background audio using MediaFX.
6. Gemini generates the YouTube title, description, and tags.
7. Upload the video to YouTube with metadata.
8. Log the YouTube URL, metadata, and download link back to Google Sheets.

**Requirements**

| Service | Purpose |
|---|---|
| MagicHour API key | Text-to-video generation |
| Gemini API key | Prompt parsing & metadata |
| YouTube OAuth2 | Video uploads |
| Google Sheets OAuth2 | Trigger & logging |
| (Optional) MediaFX node | Audio mixing |

**Google Sheets Setup**

| Column | Description |
|---|---|
| Prompt | Text prompt for the video |
| Background Music URL | (Optional) royalty-free track |
| Status | Tracks flow progress |
| YouTube URL | Auto-filled after upload |
| Metadata | Title, tags, and description JSON |
| Date Generated | (Optional) auto-filled with the video creation date |

**100 Daily Prompts Automation**

You can scale this workflow to generate one video per day from a batch of 100 prompts in Google Sheets.

Setup steps:
1. Add 100 prompts to your Google Sheet, one per row.
2. Set the Status column for each to Pending.
3. Use a Cron trigger in n8n to run the workflow once daily (e.g., at 9 AM).
4. Each run picks one Pending prompt, generates a video, uploads to YouTube, then marks it as Done.
5. It continues daily until all 100 prompts are processed.

Example cron expression: `0 9 * * *` runs the automation every day at 9:00 AM.

Node sequence:
[Schedule Trigger (Daily)] → [Get Pending Prompt from Sheets] → [Gemini Prompt Parser] → [MagicHour Video Generation] → [Optional: MediaFX Audio Mix] → [Gemini Metadata Generator] → [YouTube Upload] → [Update Row in Sheets]

Optional enhancements:
- Add a notification node (Slack, Discord, or Email) after each upload.
- Add a counter check to stop after 100 videos.
- Add a "Paused" column to skip specific rows.

**Gemini Integration**

Gemini handles:
- JSON parsing for MagicHour requests
- Metadata generation (title, description, tags)
- Optional creative rewriting of prompts

**Audio Mixing (Optional)**

Install the MediaFX community node via Settings → Community Nodes → n8n-nodes-mediafx, and use it to blend background music automatically into videos.

**Error Handling**
- Avoid "Continue on Fail" in key nodes
- Use IF branches for MagicHour API errors
- Add retry/timeout logic for polling steps

**Node Naming Tips**

Rename generic nodes for clarity:
- Merge → Merge Video & Audio
- If → Check Video Completion
- HTTP Request → MagicHour API Request

**How to Use**
1. Add MagicHour, Gemini, YouTube, and Sheets credentials
2. Replace the background music with your own track
3. Use the Google Sheets trigger or a daily cron for automation
4. Videos are created, uploaded, and logged, hands-free

**Disclaimer**

This template uses community nodes (MediaFX); install and enable them manually. MagicHour API usage may incur costs based on video duration and quality.

**SEO Keywords** MagicHour API, n8n workflow, AI video generator, automated YouTube upload, Gemini metadata, AI Shorts, MediaFX, Google Sheets automation, AI marketing, content automation.
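The daily "pick one Pending prompt" step can be sketched with the sheet mocked as an array of rows. In n8n this is a Google Sheets lookup plus a row update; the column names match the sheet layout above, while the claim-as-Processing step is an assumption to guard against double-picking on re-runs.

```javascript
// Sketch: select the first row whose Status is Pending and claim it.
function pickPendingPrompt(rows) {
  const row = rows.find(r => r.Status === 'Pending');
  if (!row) return null;        // all prompts processed: nothing to do today
  row.Status = 'Processing';    // claim it so a re-run won't pick it twice
  return row;
}

const sheet = [
  { Prompt: 'A sunrise over mountains', Status: 'Done' },
  { Prompt: 'A city in the rain',       Status: 'Pending' },
  { Prompt: 'Ocean waves at night',     Status: 'Pending' },
];
console.log(pickPendingPrompt(sheet).Prompt); // "A city in the rain"
```

After a successful upload, the same row would be updated to Done along with the YouTube URL and metadata columns.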
by Sagar Budhathoki
**AI Blog & LinkedIn Content Publisher**

**How it works**

1. A daily trigger scans your Notion database for unpublished blog ideas
2. AI generates complete blog posts + engaging LinkedIn content using OpenAI (blog posting is not implemented yet)
3. Creates custom images for posts using Replicate's Flux-Schnell AI model
4. Auto-publishes to LinkedIn with the image OR emails a draft for review
5. Updates Notion with the published content and tracks status

**Set up steps**

1. Connect accounts: Notion, OpenAI, Replicate, LinkedIn, Gmail
2. Create 2 Notion databases: Ideas (input) and Articles (output)
3. Update the config node: add your database IDs and email
4. Test with one idea: run manually first to verify everything works
5. Enable daily automation: turn on the cron trigger

Perfect for content creators, developers, and marketers who want to transform rough ideas into professional blog posts and LinkedIn content automatically.
by Rahul Joshi
**Description**

Generate high-quality, SEO-optimized content briefs automatically using AI, real-time keyword research, SERP intelligence, and historical content context. This workflow standardizes user inputs, fetches search metrics, analyzes competitors, and produces structured SEO briefs with quality scoring and version control. It also stores all versions in Google Sheets and generates HTML previews for easy review and publishing.

**What This Template Does**

- Normalizes user input from the chat trigger into structured fields (intent, topic, parameters).
- Fetches real-time keyword metrics such as search volume, CPC, and difficulty from DataForSEO.
- Retrieves SERP insights through SerpAPI for top competitors, headings, and content gaps.
- Loads historical brief versions from Google Sheets for continuity and versioning.
- Uses an advanced GPT-4o-mini agent to generate a complete SEO brief with title, metadata, keywords, outline, entities, and internal links.
- Calculates detailed SEO, differentiation, and completeness quality scores.
- Validates briefs against quality thresholds (outline length, keywords, word count, overall score).
- Stores approved briefs in Google Sheets with version control and timestamping.
- Generates an HTML preview with styled formatting for team review or CMS use.
- Sends Slack alerts when a brief does not meet quality standards.

**Key Benefits**

- Fully automated SEO content brief generation
- Uses real-time keyword + SERP + competitor intelligence
- Ensures quality through automated scoring and validation
- Built-in version control for content operations teams
- Clean HTML preview ready for editors or clients
- Reduces research time from hours to minutes
- Ideal for content agencies, SEO teams, and AI-powered workflows

**Features**

- Chat-triggered brief generation
- Real-time DataForSEO keyword metrics
- SERP analysis tool integration
- GPT-4o-mini structured AI agent
- Google Sheets integration for storing & retrieving versions
- Automated quality scoring (SEO, gaps, completeness)
- HTML preview builder with rich formatting
- Slack alerting for low-quality briefs
- Semantic entities, content gaps, competitor insights

**Requirements**

- OpenAI API (GPT-4o-mini or compatible model)
- DataForSEO access credentials (Basic Auth)
- SerpAPI key for SERP extraction
- Google Sheets OAuth2 integration
- Optional: Slack webhook for quality alerts

**Target Audience**

- SEO teams generating large numbers of content briefs
- Content agencies scaling production with automation
- Marketing teams building data-driven content strategies
- SaaS teams wanting automated keyword-based briefs
- Anyone needing structured, high-quality content briefs from chat

**Step-by-Step Setup Instructions**

1. Connect your OpenAI API credential and confirm GPT-4o-mini availability.
2. Add DataForSEO HTTP Basic Auth for keyword metrics.
3. Connect SerpAPI for the SERP analysis tools.
4. Add Google Sheets OAuth2 and link your content_versions sheet.
5. Optional: add a Slack webhook URL for quality alerts.
6. Test by sending a topic via the chat trigger.
7. Review the generated SEO brief and HTML preview.
8. Enable the workflow for continued use in your content pipeline.
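The quality-gate step (validate against thresholds, then alert on failure) can be sketched as below. The field names and limits are illustrative stand-ins, not the workflow's exact scoring formula.

```javascript
// Sketch: score a generated brief against simple thresholds before storing it.
function validateBrief(brief, minScore = 70) {
  const checks = {
    outline: brief.outline.length >= 5,        // enough headings
    keywords: brief.keywords.length >= 3,      // enough target keywords
    wordCount: brief.targetWordCount >= 800,   // substantial target length
    score: brief.overallScore >= minScore,     // overall quality score
  };
  const failed = Object.keys(checks).filter(k => !checks[k]);
  return { approved: failed.length === 0, failed }; // failed list -> Slack alert
}

const brief = {
  outline: ['H1', 'H2', 'H2', 'H2', 'H3'],
  keywords: ['seo brief', 'content outline', 'serp analysis'],
  targetWordCount: 1500,
  overallScore: 82,
};
console.log(validateBrief(brief)); // { approved: true, failed: [] }
```

An IF node on `approved` would route passing briefs to Google Sheets and failing ones to the Slack alert with the `failed` list in the message.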
by Stephan Koning
VEXA: AI-Powered Meeting Intelligence I'll be honest, I built this because I was getting lazy in meetings and missing key details. I started with a simple VEXA integration for transcripts, then added AI to pull out summaries and tasks. But that just solved part of the problem. The real breakthrough came when we integrated Mem0, creating a persistent memory of every conversation. Now, you can stop taking notes and actually focus on the person you're talking to, knowing a system is tracking everything that matters. This is the playbook for how we built it. How It Works This isn't just one workflow; it's a two-part system designed to manage the entire meeting lifecycle from start to finish. Bot Management: It starts when you flick a switch in your CRM (Baserow). A command deploys or removes an AI bot from Google Meet. No fluffโit's there when you need it, gone when you don't. The workflow uses a quick "digital sticky note" in Redis to remember who the meeting is with and instantly updates the status in your Baserow table. AI Analysis & Memory: Once the meeting ends, VEXA sends the transcript over. Using the client ID (thank god for redis) , we feed the conversation to an AI model (OpenAI). It doesn't just summarize; it extracts actionable next steps and potential risks. All this structured data is then logged into a memory layer (Mem0), creating a permanent, searchable record of every client conversation. Setup Steps: Your Action Plan This is designed for rapid deployment. Here's what you do: Register Webhook: Run the manual trigger in the workflow once. This sends your n8n webhook URL to VEXA, telling it where to dump transcripts after a call. Connect Your CRM: Copy the vexa-start webhook URL from n8n. Paste it into your Baserow automation so it triggers when you set the "Send Bot" field to Start_Bot. Integrate Your Tools: Plug your VEXA, Mem0, Redis, and OpenAI API credentials into n8n. 
Use the Baserow Template: I've created a free Baserow template to act as your control panel. Grab it here: https://baserow.io/public/grid/t5kYjovKEHjNix2-6Rijk99y4SDeyQY4rmQISciC14w. It has all the fields you need to command the bot.

Requirements
- An active n8n instance or cloud account.
- Accounts for VEXA.ai, Mem0.ai, Baserow, and OpenAI.
- A Redis database.
- Your Baserow table must have these fields: Meeting Link, Bot Name, Send Bot, and Status.

Next Steps: Getting More ROI
This workflow is the foundation. The real value comes from what you build on top of it.

**Automate Follow-ups:** Use the AI-identified next steps to automatically trigger follow-up emails or create tasks in your project management tool.
**Create a Unified Client Memory:** Connect your email and other communication platforms. Use Mem0 to parse and store every engagement, building a complete, holistic view of every client relationship.
**Build a Headless CRM:** Combine these workflows to build a fully AI-powered system that handles everything from lead capture to client management without any manual data entry.

Copy the workflow and stop taking notes.
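The Redis "digital sticky note" boils down to a keyed write when the bot is deployed and a lookup when VEXA posts the transcript back. A minimal sketch, with an in-memory `Map` standing in for the Redis node calls, and with the key format and TTL as assumptions rather than the workflow's exact values:

```javascript
// Stand-in for Redis: in the real workflow these would be n8n Redis nodes
// issuing SET (with an expiry) and GET. Key naming is a hypothetical choice.
const store = new Map();

function rememberMeeting(meetingId, clientId) {
  // redis equivalent: SET vexa:meeting:<id> <clientId> EX 7200
  store.set(`vexa:meeting:${meetingId}`, clientId);
}

function lookupClient(meetingId) {
  // redis equivalent: GET vexa:meeting:<id>
  return store.get(`vexa:meeting:${meetingId}`) ?? null;
}

rememberMeeting("meet-abc", "client-42");
const clientId = lookupClient("meet-abc"); // "client-42"
```

When the transcript webhook fires, the looked-up client ID is what lets the AI summary land against the right Baserow record and Mem0 memory.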
by Incrementors
Description
Automatically analyze Upwork SEO job posts, detect hidden screening questions, and generate personalized cover letters with portfolio examples using GPT-4 Turbo, DeepSeek, and Claude AI, all saved to Google Docs instantly.

Auto-Generate Winning Upwork SEO Proposals with GPT-4, DeepSeek & Claude AI
Automate the entire Upwork proposal process: analyzing a job post and detecting hidden screening questions, generating a personalized cover letter backed by your real portfolio data, running it through a 10-point quality check, and saving the final polished version to Google Docs, all without writing a single word manually. Perfect for SEO freelancers, agencies, and Upwork consultants who want to send high-quality, personalized proposals at scale without spending 45-60 minutes on each one.

What This Workflow Does
This automation handles five key tasks:
1. Analyzes job posts: GPT-4 Turbo extracts structured job data including title, industry, client history, budget, required skills, and the client's SEO pain points from raw Upwork job text.
2. Detects hidden screening questions: Automatically identifies and highlights any hidden verification tests clients embed in job descriptions (e.g., "Start your proposal with the word Avocado"), which most freelancers miss.
3. Generates cover letters with portfolio proof: DeepSeek writes a 150-250 word personalized cover letter, then pulls relevant ranking keyword examples and industry case studies from your Pinecone vector database to add real proof.
4. Runs a 10-point quality check: Another DeepSeek agent evaluates the cover letter against a strict checklist and flags only the missing or weak elements for improvement.
5. Polishes and saves to Google Docs: Claude 3.7 Sonnet applies QC feedback with minimal changes and saves both the final cover letter and the screening Q&A answers to your Google Doc, ready to copy-paste.

How It Works
The workflow begins when you submit a job through a simple form: paste the Upwork job URL, copy-paste the raw job post text, and select the job type (SEO, Agency, or Automation). GPT-4 Turbo analyzes the job post and outputs a fully structured breakdown: job title, industry focus, primary SEO problems, client's current SEO status, required skills, client history patterns, and strategic notes. It also detects any hidden screening questions and marks them prominently with ⚠️ ATTENTION markers.

Once analysis is complete, the workflow splits into parallel branches that run simultaneously:

Branch A, Screening Q&A Writer: DeepSeek reads the detected screening questions and writes direct, concise answers (under 200 words each). It pulls up to 3 relevant examples from your Pinecone databases when helpful. The answers are formatted in clean HTML and saved immediately to your Google Doc.

Branch B, Cover Letter Generator: DeepSeek generates a personalized 150-250 word cover letter that mirrors the client's exact language, tone, and terminology. It searches your Pinecone vector databases (one holding case studies with Google Doc URLs, one holding portfolio websites with their ranking keywords) and adds 2 portfolio examples plus 1 industry-matched case study in a structured format. All URLs are validated to ensure no angle brackets or broken formatting.

Both the job analysis output and the generated cover letter then flow into the Quality Control pipeline. A Merge node combines them, an Aggregate node bundles everything into a single input, and DeepSeek's Cover Quality Checker evaluates the proposal against a 10-point checklist covering client name, job terminology, opening strength, keyword usage, industry relevance, skills match, process outline, and call to action. It outputs only the specific changes needed. Finally, the QC feedback and the original cover letter are merged again and passed to Claude 3.7 Sonnet for the final polish.
Claude applies the suggestions with minimal edits, preserving the client's vocabulary and tone, formats the output in clean HTML, and the workflow saves it to your Google Doc. A one-minute-read cover letter, complete with real portfolio proof, is waiting for you.

Setup Requirements
Accounts needed:
- n8n instance (self-hosted or cloud)
- OpenAI account with GPT-4 Turbo API access (for job analysis and embeddings)
- DeepSeek account with API access (for cover writing, Q&A, and QC)
- Anthropic API key for Claude 3.7 Sonnet (for the final polish)
- Pinecone account with two indexes: casestudiesdatabase and websitewithrankingkeywords-v2
- Google account with Google Docs access

Estimated setup time: 15-20 minutes

Setup Steps
1. Import Workflow
- Copy the workflow JSON
- Open n8n, go to Workflows, then Import from JSON
- Paste and import
- Verify all nodes are properly connected across the parallel branches

2. Configure OpenAI (GPT-4 Turbo + Embeddings)
- Add an OpenAI API credential in n8n and enter your API key
- The credential is used by three nodes: GPT-4 Turbo LLM (Job Analyzer), OpenAI Embeddings (Case Studies), and OpenAI Embeddings (Keywords)
- Test the connection before proceeding

3. Configure DeepSeek
- Add a DeepSeek API credential in n8n and enter your DeepSeek API key
- The credential is used by three nodes: DeepSeek LLM (Cover Writer), DeepSeek LLM (Q&A Writer), and DeepSeek LLM (QC Checker)
- Test the connection

4. Configure Anthropic (Claude 3.7 Sonnet)
- Add an Anthropic API credential in n8n and enter your Anthropic API key
- The model is set to claude-3-7-sonnet-20250219
- The credential is used by the Claude 3.7 Sonnet LLM (Final Cover Polish) node
- Test the connection

5. Set Up Pinecone Vector Databases
- Create two Pinecone indexes: casestudiesdatabase and websitewithrankingkeywords-v2
- Add your Pinecone API credential in n8n
- Case Studies DB: upload your industry case studies with Google Doc URLs; do NOT modify these URLs or the links will break
- Ranking Keywords DB: upload your portfolio websites with their ranking keywords (the workflow retrieves the top 20 results per query)
- Verify both indexes appear in the Case Studies DB (Pinecone) and Ranking Keywords DB (Pinecone) nodes

6. Connect Google Docs
- Create two Google Docs: one for cover letters, one for Q&A answers
- Add a Google Docs OAuth2 credential in n8n and complete the OAuth flow
- Paste your cover letter Google Doc URL in the Save Final Cover to Docs node
- Paste your Q&A Google Doc URL in the Save Q&A to Docs node
- Test by triggering the workflow and verifying content appears in both documents

7. Test and Activate
- Open the Job Input Form webhook URL in your browser
- Paste a real Upwork SEO job post text and submit
- Check execution logs for all parallel branches
- Verify your Google Doc shows both the final cover letter and the Q&A answers
- Activate the workflow once output is confirmed correct

What Gets Analyzed and Generated
From the Upwork job post:
- Job title, industry focus, and niche
- Primary SEO problems the client wants solved
- Client's current SEO status and gaps
- Required skills ranked by importance
- Client country (for regional SEO approach)
- Client hiring history and industry patterns with confidence scores
- Budget and preferred engagement model
- Hidden screening questions (with ⚠️ ATTENTION markers)
- Strategic SEO project type (technical / content / link building)

AI-generated outputs:
- Structured job analysis with industry pattern matching
- 150-250 word personalized cover letter with portfolio examples
- 2 portfolio website examples with 3 ranking keywords each
- 1 industry-matched case study with metrics and a Google Doc link
- Direct answers to all screening questions (under 200 words each)
- 10-point QC evaluation with specific improvement suggestions
- Final HTML-formatted cover letter ready to copy-paste

Use Cases
- High-volume Upwork freelancers: Send 5-10 personalized, data-backed proposals daily without manual writing, each one tailored to the client's exact industry and pain points
- SEO agencies on Upwork: Scale proposal output across multiple team members using a shared workflow; everyone gets consistent, on-brand proposals
- New Upwork SEO freelancers: Never miss a hidden screening question again, and always include relevant portfolio proof that matches the client's industry
- Freelance business automation: Eliminate the most time-consuming part of freelancing (proposal writing) and redirect that time to client work

Important Notes
- Replace all placeholder API keys and credential IDs before activating the workflow
- Ensure all five credential types are tested successfully: OpenAI, DeepSeek, Anthropic, Pinecone, and Google Docs
- Case study Google Doc URLs in Pinecone must never be modified; the workflow uses them as-is
- The Pinecone databases must be populated with your own portfolio data before the workflow produces accurate examples
- DeepSeek handles the majority of AI tasks for cost efficiency; Claude 3.7 Sonnet is used only for the final polish step
- Each job submission generates one complete proposal set (cover letter + Q&A) in your Google Doc
- Processing time is typically 60-120 seconds depending on Pinecone retrieval speed and AI response time

Form Access
Access the workflow via the built-in n8n form at: https://your-n8n-instance.com/webhook/upwork-proposal-generator
Paste any Upwork job post text and submit to start the automation instantly.

Support
For questions or assistance:
Email: info@incrementors.com
Contact: https://www.incrementors.com/contact-us/
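For intuition, the hidden-screening-question detection described above can be approximated with simple pattern matching. The workflow itself delegates this to GPT-4 Turbo, so the patterns below are only an illustrative sketch of the kind of instruction it flags:

```javascript
// Illustrative patterns for "verification test" instructions clients hide
// in job posts. The real workflow uses an LLM, not a fixed regex list.
const SCREENING_PATTERNS = [
  /start (your|the) (proposal|application|reply) with/i,
  /include the (word|phrase) ["']?\w+["']?/i,
  /to prove you read this/i,
];

function detectScreeningQuestions(jobText) {
  return jobText
    .split(/[.\n]/)                                        // rough sentence split
    .map(s => s.trim())
    .filter(s => SCREENING_PATTERNS.some(p => p.test(s))); // keep flagged sentences
}

const flagged = detectScreeningQuestions(
  "We need technical SEO help. Start your proposal with the word Avocado."
);
// flagged → ["Start your proposal with the word Avocado"]
```

An LLM catches paraphrased or unusual instructions that no fixed pattern list would, which is why the workflow prompts GPT-4 Turbo to mark them with ⚠️ ATTENTION markers instead of relying on rules like these.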