by Davide
This workflow implements an advanced AI-powered system for generating and executing Claude Skills stored on GitHub.

When creating a skill, the workflow:
- Uses an AI agent to generate a properly structured SKILL.md file
- Extracts and formats the skill content
- Saves it automatically into a GitHub repository under a structured /skills directory

When executing a skill, the workflow:
- Dynamically lists available skills from GitHub
- Navigates directories to find the correct skill files
- Retrieves their content via the API
- Executes instructions strictly based on those files using an AI agent

Key Benefits

1. ✅ Dynamic Skill Execution: The system doesn't rely on hardcoded logic. It retrieves and executes skills directly from GitHub, making it highly flexible and extensible.
2. ✅ Self-Extending Architecture: New capabilities can be added simply by creating new skills. The workflow automatically integrates them without requiring changes to the core system.
3. ✅ Separation of Logic and Execution: All instructions are stored in external skill files, keeping the workflow clean, modular, and easy to maintain.
4. ✅ Automated Skill Creation: The workflow can generate complete Claude Skills (including structured documentation) and publish them to GitHub without manual intervention.
5. ✅ Multi-Model Intelligence: By combining OpenAI and Anthropic models, the system leverages different strengths (reasoning, generation, structure).
6. ✅ Context-Aware Conversations: Memory nodes allow the system to maintain session context, improving continuity and personalization.
7. ✅ Reliable Output Handling: Structured output parsing ensures decisions (like whether to proceed or ask for more info) are deterministic and machine-readable.
8. ✅ Up-to-Date Knowledge via Context7: Before generating skills, the system fetches real documentation, reducing hallucinations and ensuring accuracy.
9. ✅ GitHub as a Skill Registry: Using GitHub as a storage layer provides version control, collaboration, transparency, and easy scaling.
10. ✅ Agent-Based Orchestration: The workflow uses multiple specialized AI agents, each with a clear responsibility (validation, generation, execution), improving robustness and clarity.

How it works

- Creating new skills interactively with the help of a dedicated agent that uses Context7 for up-to-date documentation.
- Executing existing Claude skills stored in a GitHub repository (Claude Skills + GitHub System).

Creation flow (making a new skill)

The AI Conversational Agent decides if the user is trying to create a skill and whether enough information is available. If information is missing, the user is asked for clarification via the More info node. Once ready, the Claude Skills Creator Agent takes over. This agent always consults Context7 to fetch documentation for any libraries/APIs involved. It then generates a properly formatted SKILL.md file (with YAML frontmatter, imperative style, ≤500 lines). The Extract Skill MD node parses the generated output. The SKILL.md Parser converts the markdown into binary data. The Create a Skill node uploads the file to the correct path in the GitHub repository (skills/<skill-name>/SKILL.md). Finally, the Skill created node confirms success in the chat.

Execution flow (using skills)

A chat message arrives, and the workflow checks if the user wants to create a skill (via the AI Conversational Agent). If the request is not about creating a skill, it proceeds to the Skills Agent. The Skills Agent first receives a list of all skill directories from GitHub (via the List Skills node). It then uses two GitHub tools to explore those directories: List Files, to browse the contents of a skill folder, and Get File from Skill, to fetch the actual SKILL.md or other resources. The agent follows the instructions found in the skill files (not its own general knowledge) and produces an answer.
The final answer is sent back to the chat via the Respond node. A Simple Memory node maintains conversation context across turns.

Set up steps

Prerequisites

- A GitHub account and a repository for storing skills (in the workflow: https://github.com/n3witalia/my-skills). Create a folder called skills inside the repo.
- OpenAI and Anthropic API keys (the workflow uses Anthropic Claude models, but also includes an OpenAI node for extraction).
- A Context7 API key (for retrieving library documentation during skill creation).

Step-by-step configuration

1. Import the workflow: Copy the JSON definition into a new n8n workflow.
2. Configure credentials: GitHub API (add your personal access token with repo scope), Anthropic API (add your API key), OpenAI API (add your API key, used only by the Extract Skill MD node), and Context7 (add your API key, used as HTTP Header Auth).
3. Update GitHub repository details: In the List Skills, List Files, Create a Skill, and Get File from Skill nodes, replace n3witalia/my-skills with your own GitHub organisation and repository name.
4. Adjust the chat trigger: Set the Webhook URL of the When chat message received node so your front-end can post messages to it.
5. Review agent prompts: Open the Claude Skills Creator Agent and check the system prompt (it enforces Context7 usage, skill format, and output rules). Modify if needed.
6. Set up memory: The workflow uses three Simple Memory nodes, each configured with a session key derived from Set chatbot vars.sessionId. Ensure your chat front-end sends a unique sessionId to keep conversations separate.
7. Test the workflow: Send a message like "create a skill for working with PDFs"; the agent should ask questions, call Context7, and propose a skill. Then send a normal request like "list all my skills"; the Skills Agent should list directories from your GitHub repo.
8. Activate the workflow: Toggle the workflow from inactive to active (top-right corner in n8n). The webhook will start listening for chat messages.
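The creation flow's file conventions (the skills/<skill-name>/SKILL.md path and the YAML-frontmatter, ≤500-line format) can be sketched as a small helper. This is an illustrative sketch, not code from the workflow; the function names and the slug rule are assumptions.

```javascript
// Hypothetical helpers mirroring the conventions described above.
// skillPath: builds the GitHub upload path skills/<skill-name>/SKILL.md.
function skillPath(skillName) {
  const slug = skillName.toLowerCase().trim().replace(/[^a-z0-9]+/g, "-");
  return `skills/${slug}/SKILL.md`;
}

// validateSkillMd: checks the two format rules the Creator Agent enforces:
// a YAML frontmatter block delimited by "---", and at most 500 lines.
function validateSkillMd(markdown) {
  const lines = markdown.split("\n");
  const hasFrontmatter = lines[0] === "---" && lines.indexOf("---", 1) > 0;
  const underLimit = lines.length <= 500;
  return { hasFrontmatter, underLimit };
}
```

For example, `skillPath("PDF Tools")` yields `skills/pdf-tools/SKILL.md`, matching the path the Create a Skill node writes to.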
Subscribe to my new YouTube channel. Here I'll share videos and Shorts with practical tutorials and FREE templates for n8n. Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
by WeblineIndia
n8n Workflow Intelligence (RAG): Auto Indexing & Semantic AI Search with Supabase Vector DB

This workflow automatically indexes your n8n workflows every 24 hours, converts them into vector embeddings using OpenAI, and stores them in Supabase. It exposes a webhook that lets you query your workflows in natural language. The AI agent uses Retrieval-Augmented Generation (RAG) to fetch relevant workflow data and generate contextual answers, making it easy to understand, debug, and reuse automation logic.

Quick Implementation Steps

1. Enable the n8n API and configure authentication (header-based).
2. Set up Supabase with pgvector and create the required table and function.
3. Add OpenAI credentials (for embeddings and chat model).
4. Import and activate the workflow in n8n.
5. Send a POST request to /ask-workflows: { "query": "How does my webhook workflow work?" }
6. Receive AI-powered answers based on your workflows.

What It Does

This workflow creates an intelligent knowledge layer on top of your n8n automations. It automatically fetches workflows from your n8n instance, processes each node, and converts them into structured text chunks. These chunks are transformed into vector embeddings using OpenAI and stored in Supabase for semantic search. Once indexed, users can query workflows through a webhook endpoint using natural language. The AI agent retrieves relevant workflow data using vector similarity search and generates meaningful responses. It can also guide users directly to workflows using links. In short, it transforms your workflows into a searchable, AI-powered system.

Who It's For

- Developers managing multiple n8n workflows
- Automation engineers handling complex pipelines
- Teams working on shared n8n environments
- Businesses needing faster debugging and workflow discovery
- Anyone looking to add AI-powered search to automation systems

Requirements

1. n8n API Access
- Enable the API in your n8n instance
- Example endpoint: http://YOUR_N8N_HOST:5678/api/v1/workflows
- Requires authentication via HTTP headers (API key/token)

2. Supabase Setup

Enable the extension:

```sql
create extension if not exists vector;
```

Create the table:

```sql
create table if not exists documents (
  id uuid primary key default gen_random_uuid(),
  content text,
  metadata jsonb,
  embedding vector(1536)
);
```

Create the match function:

```sql
create or replace function match_documents (
  query_embedding vector(1536),
  match_count int,
  filter jsonb default '{}'::jsonb
) returns table (
  id uuid,
  content text,
  metadata jsonb,
  similarity float
)
language plpgsql
as $$
begin
  return query
  select
    documents.id,
    documents.content,
    documents.metadata,
    1 - (documents.embedding <=> query_embedding) as similarity
  from documents
  where (filter = '{}'::jsonb or documents.metadata @> filter)
  order by documents.embedding <=> query_embedding
  limit match_count;
end;
$$;
```

3. Credentials Required
- OpenAI API key (for embeddings and chat model)
- Supabase API credentials
- n8n API authentication (header-based)

How It Works & Set Up

Step 1: Auto Sync Trigger
- Runs every 24 hours
- Keeps your vector database updated automatically

Step 2: Fetch Workflows
- Calls the n8n API to retrieve workflows
- The current limit is set to 5 (can be increased)

Step 3: Split Workflows
- Splits the API response into individual workflows
- Processes them one at a time

Step 4: Clear Existing Data
- Deletes existing vector entries for each workflow
- Ensures no duplication

Step 5: Transform into Chunks
Each workflow node is converted into structured text:
Workflow: "My Workflow". Node Name: "Webhook". Type: "n8n-nodes-base.webhook". Logic: {...}

Step 6: Generate Embeddings
- Uses the OpenAI embedding model
- Converts chunks into vector format

Step 7: Store in Supabase
- Stores content, metadata, and embeddings
- Enables semantic retrieval

Step 8: Query via Webhook
- Endpoint: /ask-workflows
- Request: { "query": "Find workflows using webhook" }

Step 9: AI Agent + RAG
- The AI agent receives the query
- Uses the vector search tool
- Retrieves relevant chunks
- Generates a contextual answer

Step 10: Return Response
- Sends a structured response back to the user
- Includes workflow links: http://YOUR_N8N_HOST:5678/workflow/[ID]

How To Customize Nodes

- **Fetch n8n Workflows API**: Increase the limit; add filters for specific workflows
- **Transform Workflow to Chunks**: Include connections, credentials, or triggers
- **Embedding Model**: Upgrade the model for better accuracy
- **AI Agent Prompt**: Modify instructions, formatting, or tone
- **Metadata**: Add fields like project name, owner, or tags

Add-ons (Enhancements)
- Real-time indexing via webhook trigger
- Workflow version history tracking
- UI dashboard for search
- Slack or Discord chatbot integration
- AI debugging assistant
- Workflow recommendation system

Use Case Examples
1. Workflow Discovery: "Do I already have a webhook + email automation?"
2. Debugging Assistance: "Which workflow is calling this API?"
3. Developer Onboarding: Explore workflows using natural language
4. Reuse Automation Logic: Find and reuse existing patterns
5. Documentation System: Automatically understand workflow structure

This workflow can support many more use cases depending on your automation needs.
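The Step 5 node-to-chunk transformation described above can be sketched as a small Code-node function. This is an illustrative sketch: the field names (`node.name`, `node.type`, `node.parameters`) are assumptions based on the shape of the n8n workflows API response.

```javascript
// Turn one n8n workflow node into the structured text chunk format shown above:
// Workflow: "...". Node Name: "...". Type: "...". Logic: {...}
function nodeToChunk(workflowName, node) {
  return [
    `Workflow: "${workflowName}".`,
    `Node Name: "${node.name}".`,
    `Type: "${node.type}".`,
    `Logic: ${JSON.stringify(node.parameters || {})}`,
  ].join(" ");
}
```

Each chunk produced this way is then embedded and stored alongside metadata (workflow ID, node name) so the delete-and-reindex step can target it later.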
Troubleshooting Guide

| Issue | Possible Cause | Solution |
|------|----------------|----------|
| No workflows fetched | Incorrect API URL or authentication | Verify endpoint and headers |
| Empty responses | No indexed data available | Ensure indexing process has completed successfully |
| Supabase error | Missing table or function setup | Run the required SQL setup scripts properly |
| Duplicate entries | Delete step failed or skipped | Check metadata filter logic in delete node |
| Poor answers | Weak or improper chunking strategy | Improve workflow-to-text transformation logic |
| Embedding errors | OpenAI API issue or invalid key | Check OpenAI credentials and usage limits |

Need Help?

If you need help setting up or extending this workflow with:
- AI-powered workflow assistants
- Custom RAG implementations
- Advanced n8n automation systems
- Enterprise-grade automation solutions

Contact our n8n workflow developers at WeblineIndia for expert support and custom development. We can help you scale this into a production-ready AI automation platform.
by Artem Boiko
A Telegram bot that converts natural-language work descriptions into detailed cost estimates using AI parsing, vector search, and the open-source DDC CWICR database with 55,000+ construction work items.

Who's it for

- **Contractors & estimators** who need quick ballpark figures from verbal/text descriptions
- **Construction managers** doing feasibility checks on-site via mobile
- **BIM/CAD professionals** integrating text-based estimation into workflows
- **Developers** building construction cost APIs or chatbots

What it does

1. Receives text messages in Telegram (work lists, specifications, notes)
2. Parses input with AI (OpenAI/Claude/Gemini) into structured work items
3. Searches the DDC CWICR vector database via Qdrant for matching rates
4. Calculates costs with a full breakdown (labor, materials, machines)
5. Exports results as an HTML report, Excel, or PDF
6. Supports 9 languages: DE · EN · RU · ES · FR · PT · ZH · AR · HI

How it works

Telegram Text Input → AI Parse (GPT/Claude) → Embeddings (OpenAI) → Qdrant Search → AI Rerank Results → Calculate Costs → Aggregate Results → Export HTML/XLS/PDF

Step-by-step:
1. User sends /start, selects a language, and enters a work description
2. AI Parse extracts work items: name, quantity, unit, room
3. Query Transform optimizes search terms for the construction domain
4. Embeddings API converts the query to a vector (OpenAI text-embedding-3-small)
5. Qdrant Search finds the top-10 matching rates from DDC CWICR
6. AI Rerank selects the best match considering context and units
7. Calculate applies quantities and sums labor/materials/machines
8. Report sends a Telegram message + optional Excel/PDF export

Prerequisites

| Component | Requirement |
|-----------|-------------|
| n8n | v1.30+ (AI nodes support) |
| Telegram Bot | Token from @BotFather |
| OpenAI API | For embeddings + LLM parsing |
| Qdrant | Vector DB with DDC CWICR collections loaded |
| DDC CWICR Data | github.com/datadrivenconstruction/DDC-CWICR |

Setup

1. Credentials (n8n Settings → Credentials)
- **OpenAI API**: required for embeddings and text parsing
- **Anthropic API**: optional, for Claude models
- **Google Gemini API**: optional, for Gemini models

2. Configuration (TOKEN node)
- bot_token = YOUR_TELEGRAM_BOT_TOKEN
- QDRANT_URL = http://localhost:6333
- QDRANT_API_KEY = (if using Qdrant Cloud)

3. Qdrant Setup
Load the DDC CWICR collections for your target languages:
- DE_construction_rates: German (STLB-Bau based)
- EN_construction_rates: English
- RU_construction_rates: Russian (GESN/FER based)
- ... (see the DDC CWICR docs for all 9 languages)

4. Link AI Model Nodes
- Open the OpenAI Model nodes
- Select your OpenAI credential
- (Optional) Enable the Claude/Gemini nodes for alternative models

5.
Telegram Webhook
- Activate the workflow
- The Telegram Trigger auto-registers the webhook
- Test with /start in your bot

Features

| Feature | Description |
|---------|-------------|
| Multi-LLM | Swap between OpenAI, Claude, Gemini |
| 9 Languages | Full UI + database localization |
| Smart Parsing | Handles lists, tables, free-form text |
| Semantic Search | Vector similarity + AI reranking |
| Cost Breakdown | Labor, materials, machines, hours |
| Inline Edit | Modify quantities, delete items |
| Export | HTML report, Excel, PDF |
| Session State | Multi-turn conversation support |

Example Input/Output

Input (Telegram message):

Living room renovation:
- Laminate flooring 25 m²
- Wall painting 60 m²
- Ceiling plasterboard 25 m²
- 3 electrical outlets

Output:

✅ Estimate Ready · 4 items found
- Laminate flooring: 25 m² × €18.50 = €462.50 (Labor: €125 · Materials: €337.50)
- Wall painting: 60 m² × €8.20 = €492.00 (Labor: €312 · Materials: €180)
- Ceiling plasterboard: 25 m² × €32.00 = €800.00 (Labor: €425 · Materials: €375)
- Electrical outlets: 3 pcs × €45.00 = €135.00 (Labor: €95 · Materials: €40)
Total: €1,889.50
[Excel] [PDF] [Restart]

Notes & Tips
- **First run:** Ensure Qdrant has the DDC CWICR data loaded before testing
- **Rate accuracy:** Results depend on query quality; AI reranking improves matching
- **Large lists:** The bot handles 50+ items; progress is shown per item
- **Customization:** Edit the Config node for UI text, currencies, and database mapping
- **Extend:** Chain with your CRM, project management, or reporting tools

Categories: AI · Data Extraction · Communication · Files & Storage
Tags: telegram-bot, construction, cost-estimation, qdrant, vector-search, openai, multilingual, bim, cad

Author
DataDrivenConstruction.io
https://DataDrivenConstruction.io
info@datadrivenconstruction.io

Consulting & Training
We help construction, engineering, and technology firms implement:
- Open data principles for construction
- CAD/BIM processing automation
- AI-powered estimation pipelines
- ETL workflows for construction databases

Contact us to test with your data or adapt to your project requirements.

Resources
- **DDC CWICR Database:** GitHub
- **Qdrant Setup Guide:** qdrant.tech/documentation
- **n8n AI Nodes:** docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain

⭐ Star us on GitHub! github.com/datadrivenconstruction/DDC-CWICR
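The Calculate step in this bot (apply quantities to matched unit rates, sum the labor/materials breakdown) can be sketched as a small function. This is an illustrative sketch only; the real DDC CWICR records carry more fields (machines, hours, units), and the field names here are assumptions.

```javascript
// Apply quantities to matched unit rates and aggregate the estimate.
// `labor` and `materials` are assumed to be per-unit components of the rate.
function calculateEstimate(items) {
  const lines = items.map((it) => ({
    name: it.name,
    total: Math.round(it.quantity * it.unitRate * 100) / 100,
    labor: it.quantity * it.labor,
    materials: it.quantity * it.materials,
  }));
  const grandTotal = Math.round(lines.reduce((s, l) => s + l.total, 0) * 100) / 100;
  return { lines, grandTotal };
}
```

Feeding in the laminate example above (25 m² at €18.50) reproduces the €462.50 line total shown in the sample output.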
by Muhammad Ali
Description

How it works

This powerful workflow helps businesses and freelancers automatically manage invoices received on WhatsApp. It detects new messages, downloads attached invoices, extracts key data using OCR (Optical Character Recognition), summarizes the details with AI, updates Google Sheets for record-keeping, saves files to Google Drive, and instantly replies with a clean summary message, all without manual effort. Perfect for small businesses, agencies, accountants, and freelancers who regularly receive invoices via WhatsApp. Say goodbye to manual data entry and hello to effortless automation.

Set up steps

Setup takes around 10-15 minutes:
1. Connect your WhatsApp Cloud API to trigger on incoming messages.
2. Add your OCR.Space API key to extract invoice text.
3. Link your Google Sheets and Google Drive accounts for data logging and storage.
4. Enter your OpenAI API key for AI-based summarization.
5. Import the template, test once, and you're ready to automate your invoice workflow.

Why use this workflow
- Save hours of manual data entry
- Keep all invoices safely stored and organized in Drive
- Get instant summaries directly in WhatsApp
- Improve efficiency for client billing and expense tracking
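After OCR returns the raw invoice text, the workflow hands extraction to the AI summarizer; a regex sketch shows the idea of pulling common fields. This is a hypothetical sketch (real invoices vary widely), and the field patterns are assumptions, not the workflow's actual logic.

```javascript
// Pull an invoice number and total from raw OCR text.
// Patterns are illustrative; production extraction is done by the AI step.
function extractInvoiceFields(text) {
  const invoiceNo = (text.match(/Invoice\s*(?:No\.?|#)?\s*[:\-]?\s*(\S+)/i) || [])[1] || null;
  const total = (text.match(/Total\s*[:\-]?\s*\$?([\d,]+\.\d{2})/i) || [])[1] || null;
  return { invoiceNo, total };
}
```

A regex pass like this can serve as a cheap fallback or sanity check on the AI-extracted fields before they are written to Google Sheets.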
by ศugui Dragoศ
This workflow automates the process of turning meeting recordings into structured notes and actionable tasks using AssemblyAI and Google Sheets. It is ideal for teams who want to save time on manual note-taking and ensure that action items from meetings are never missed.

What it does
- Receives a meeting recording (audio file) via webhook
- Transcribes the audio using AssemblyAI
- Uses AI to generate structured meeting notes and extract action items (tasks)
- Logs meeting details and action items to a Google Sheet for easy tracking

Use cases
- Automatically document meetings and share notes with your team
- Track action items and responsibilities from every meeting
- Centralize meeting outcomes and tasks in Google Sheets

Quick Setup
1. AssemblyAI API Key: Sign up at AssemblyAI and get your API key.
2. Google Sheets Credentials: Set up a Google Service Account and share your target Google Sheet with the service account email.
3. OpenAI API Key (optional, if using OpenAI for notes extraction): Get your API key from OpenAI.

Configure the following essential nodes:
- Recording Ready Webhook: Set the webhook URL in your meeting platform to trigger the workflow when a recording is ready.
- Workflow Configuration: Enter your AssemblyAI API key, default due date, and admin email.
- AssemblyAI Transcription: Add your AssemblyAI API key in the credentials.
- Generate Meeting Notes & Extract Action Items: Add your OpenAI API key if required.
- Log Meeting to Sheets: Enter your Google Sheets document ID and sheet name.

How to Use AssemblyAI in this Workflow
1. The workflow sends the meeting audio file to AssemblyAI via the AssemblyAI Transcription node.
2. AssemblyAI processes the audio and returns a full transcript.
3. The transcript is then used by AI nodes to generate meeting notes and extract action items.

Requirements
- AssemblyAI API key
- Google Service Account credentials
- (Optional) OpenAI API key for advanced note and action item extraction

Start the workflow by sending a meeting recording to the webhook URL.
The rest is fully automated!
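The request the AssemblyAI Transcription node effectively makes can be sketched as a payload builder. The endpoint and field names follow AssemblyAI's public v2 REST API (submit an audio URL, then poll the returned transcript ID until its status is "completed"); the helper function itself is illustrative, not part of the workflow.

```javascript
// Build the HTTP request that submits an audio file URL for transcription.
// AssemblyAI's v2 API expects an `authorization` header and an `audio_url` field.
function buildTranscriptRequest(audioUrl, apiKey) {
  return {
    url: "https://api.assemblyai.com/v2/transcript",
    method: "POST",
    headers: { authorization: apiKey, "content-type": "application/json" },
    body: JSON.stringify({ audio_url: audioUrl }),
  };
}
```

In n8n this maps onto an HTTP Request node: the same URL and headers, with the body built from the webhook's recording URL.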
by Joseph
Transform meeting transcripts into fully customized, AI-powered presentations automatically. This comprehensive 5-workflow automation system analyzes client conversations and generates professional slide decks complete with personalized content and AI-generated illustrations.

What This Automation Does

This end-to-end solution takes a meeting transcript (Google Docs) and client information as input, then automatically:
1. Creates a presentation from your custom template
2. Generates a strategic presentation plan tailored to the client's needs
3. Creates custom illustrations using AI image generation
4. Populates slides with personalized text content
5. Inserts generated images into the appropriate slides
6. Delivers a client-ready presentation

Perfect for sales teams, consultants, agencies, and anyone who needs to create customized presentations at scale.

How It Works

The automation is split into 5 interconnected workflows:

Workflow 1: Clone Presentation & Database Setup
- A form trigger captures the client name, transcript URL, and submission time
- Clones your presentation template via the Google Slides API
- Saves presentation details to Google Sheets for tracking

Workflow 2: AI Presentation Plan Generation
- Analyzes the meeting transcript to understand client pain points
- Generates a comprehensive presentation structure and content strategy
- Saves the plan to Google Docs for review and tracking
- Uses a company profile (customizable) to match solutions to client needs

Workflow 3: AI Illustration Generation
- An AI agent creates image prompts based on the presentation plan
- Generates illustrations using the Flux model via OpenRouter (nanobanana)
- Uploads images to Google Drive for slide insertion
- Tracks all generated assets in the database

Workflow 4: Text Content Population
- An AI agent generates the final presentation text from the plan
- Replaces template placeholders with personalized content
- Uses Object IDs to target specific text elements in slides
- Updates slides using the native n8n Google Slides node

Workflow 5: Image Insertion
- Retrieves image Object IDs from the presentation structure
- Downloads illustrations from Google Drive
- Converts images for ImgBB hosting (resolves Google Drive URL limitations)
- Updates slide images via the Google Slides API

Prerequisites

Required Accounts & API Keys:
- Google Workspace (Drive, Slides, Docs)
- OpenAI API (for AI agents)
- OpenRouter API (for Flux image generation)
- ImgBB API (free tier available)
- Gemini API (optional, for additional AI tasks)

Setup Requirements:
- Google Sheets database (template provided in the article and inside the workflow)
- Google Slides presentation template with standard Object IDs
- Meeting transcript in Google Docs format

Customization Options

This automation is designed to be flexible:
- **Template Flexibility**: Use any slide template structure
- **Company Profile**: Customize the business context for your use case
- **AI Models**: Swap the OpenAI/Gemini agents for your preferred LLM
- **Image Generation**: Replace Flux with DALL-E, Midjourney API, or other models
- **Slide Logic**: Extend to dynamically select slides based on content needs

Key Technical Insights
- **Structured Output Handling**: Uses JavaScript for reliable JSON parsing when the AI output structure is complex
- **Object ID System**: Template placeholders use unique IDs for precise element targeting
- **Image Hosting Workaround**: ImgBB resolves Google Drive direct-URL limitations in API calls
- **HTTP Request Nodes**: Used for API operations not covered by native n8n nodes (copying presentations, image updates)

Full Documentation

For a detailed breakdown of each workflow, configuration steps, and best practices, read the complete guide in this Medium article.

Use Cases
- **Sales Teams**: Auto-generate pitch decks from discovery calls
- **Consulting Firms**: Create client proposals from needs assessments
- **Marketing Agencies**: Build campaign presentations from strategy sessions
- **Product Teams**: Transform user research into stakeholder presentations
- **Training & Education**: Convert session notes into learning materials

⚠️
Important Notes
- The template must use consistent Object IDs for the automation to work
- Google Drive images require ImgBB hosting for reliable URL access
- AI agent output structure is complex; JavaScript parsing is recommended
- Rate limits apply for API services (especially image generation)

Resources & Templates

API Services (Get Your Keys Here)
- **OpenRouter**: For Flux (nanobanana) AI image generation
- **ImgBB API**: Free image hosting service
- **OpenAI API**: For AI agents and text generation
- **Google Cloud Console**: Enable the Google Slides, Drive, and Docs APIs
- **Google AI Studio**: For a Gemini API key

Templates & Examples
- **Meeting Transcript Sample**: Example transcript structure
- **Google Sheets Database Template**: Copy this to track your presentations
- **Presentation Template**: Base slide deck with Object IDs

Tip: Make copies of all templates before using them in your workflows!

Have questions or improvements? Connect with me:
- X (Twitter): @juppfy
- Email: joseph@uppfy.com

P.S.: I'd love to hear how you adapt this for your workflow!
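Workflow 4's placeholder replacement can be sketched as a builder for a Google Slides `batchUpdate` body. The `replaceAllText` request shape follows the public Slides API; the helper function and the placeholder naming are illustrative assumptions, not the workflow's exact code.

```javascript
// Turn a map of placeholder -> generated text into a Slides API batchUpdate body.
// Each entry becomes one replaceAllText request applied across the deck.
function buildTextUpdateRequests(replacements) {
  return {
    requests: Object.entries(replacements).map(([placeholder, text]) => ({
      replaceAllText: {
        containsText: { text: placeholder, matchCase: true },
        replaceText: text,
      },
    })),
  };
}
```

The resulting object would be POSTed to `presentations/{presentationId}:batchUpdate`; using unique placeholder strings (the Object ID convention above) keeps each replacement targeted at exactly one element.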
by Blumpo
Generate AI ad creatives from a website, logo, and product image with Claude + NanoBanana.

Who is this for?

This workflow is designed for marketers, founders, agencies, and content teams who want to generate static ad creatives faster from minimal brand input. It works especially well if you already have:
- a website
- a logo
- a product image, screenshot, or UI visual

and want to turn that into a structured ad concept and final creative without building everything manually.

What problem is this workflow solving? / Use case

Creating decent ad creatives usually takes more than just prompting an image model. You need to:
- understand what the product actually does
- pull useful messaging from the website
- figure out who the product is for
- write a clear value proposition
- decide what visual direction makes sense
- then generate the final ad

This workflow solves that by automating the full process: website + brand assets → insights → ad concept → generated image.

What this workflow does
1. Collects a website URL, logo, and product image through a form
2. Analyzes the uploaded product image with Claude to understand what kind of visual it is
3. Fetches the homepage and selected internal pages from the website
4. Extracts and cleans website text into one usable source
5. Builds structured brand insights such as: product summary, customer group, problems, key features, key benefits, brand voice
6. Creates a marketing brief and ad concept with Claude
7. Generates a static ad creative with NanoBanana through OpenRouter
8. Converts the output into a file and uploads it to Google Drive

Setup

Connect your accounts:
- **Anthropic API** for brand insights and ad concept generation
- **OpenRouter** for image analysis and final image generation
- **Google Drive** if you want to store the final output

Set your credentials in the respective nodes. Make sure your form accepts **.jpg**, **.png**, and **.webp** files. If you do not want file export, disable the Upload file node.
How to customize this workflow to your needs
- Brand analysis: Adjust the prompt in the brand insight step if you want different fields, such as competitor angles, tone categories, or ICP detail.
- Page selection: Change the subpage selection prompt if you want to prioritize pages like pricing, testimonials, integrations, or case studies.
- Ad concept style: Edit the concept generation prompt to control tone, structure, and creative direction.
- Visual output: Update the image generation prompt to make outputs more minimal, more editorial, more SaaS-like, or more product-focused.
- Export flow: Replace Google Drive with your own storage, CMS, or downstream creative workflow.

How it works

The workflow starts with a form submission containing a website, logo, and optional product image. The uploaded assets are processed first: the logo is prepared for generation, while the product image is analyzed to understand whether it is a UI, product shot, illustration, object, or another type of visual. Next, the workflow fetches the homepage, extracts navigation links, and uses Claude to select a few useful internal pages likely to contain stronger marketing input. Those pages are fetched and converted into text. That content is then cleaned and merged into one source. Claude uses it to build structured brand insights and turn them into a full ad concept, including headline, subheadline, CTA, visual direction, and layout direction. Finally, the concept and uploaded assets are passed to the image model to generate the final ad creative, which can then be exported automatically.

Result

With this workflow, you go from website + assets → brand insights → ad concept → generated creative in one flow, with much less manual prompting and much more structure.
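The "cleaned and merged into one source" step above can be sketched as a small function. This is a minimal illustrative version; in practice an HTML-to-text extraction node does a more careful job, and the tag-stripping rules here are assumptions.

```javascript
// Strip markup from each fetched page and merge all pages into one text source
// that Claude can use for brand-insight extraction.
function cleanAndMerge(pages) {
  return pages
    .map((html) =>
      html
        .replace(/<script[\s\S]*?<\/script>/gi, "") // drop inline scripts
        .replace(/<[^>]+>/g, " ")                   // strip remaining tags
        .replace(/\s+/g, " ")
        .trim()
    )
    .filter(Boolean)
    .join("\n\n");
}
```

Merging pages with blank-line separators keeps a rough page boundary in the combined source, which helps the model attribute messaging to the right page.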
by Gilbert Onyebuchi
Automate video creation: AI generates ideas, Vertex AI renders videos, and the workflow auto-uploads to Google Drive with complete tracking.

What You Get
- Gemini AI for creative prompts
- Vertex AI video generation
- Auto-upload to Google Drive
- Complete Google Sheets logging
- Smart retry logic
- Base64 to MP4 conversion

Setup
1. Enable Vertex AI in Google Cloud
2. Get a Gemini API key
3. Run gcloud auth print-access-token for the ACCESS TOKEN
4. Import the workflow & configure credentials
5. Add prompts & test

Flow
Schedule → Gemini AI → Vertex AI → Wait → Convert → Upload → Log

Resources
Google Sheets Template

⚠️ Note: The ACCESS TOKEN expires hourly; refresh it using gcloud auth print-access-token.

LinkedIn: linkedin.com/in/yourprofile
More n8n Products: Click here
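The Base64-to-MP4 conversion step above is a straightforward decode. A minimal sketch (assuming the Vertex AI response carries the video bytes base64-encoded, as its REST responses typically do):

```javascript
// Decode the base64 video payload into the binary MP4 bytes
// that the Drive upload step expects.
function base64ToMp4Buffer(b64) {
  return Buffer.from(b64, "base64");
}
```

In n8n this buffer would be attached as binary data (or written to a file named `<something>.mp4`) before the Google Drive upload node runs.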
by Jay Emp0
Automatically turns trending Reddit posts into punchy, first-person tweets powered by Google Gemini AI, Reddit, and the Twitter API, with Google Sheets logging.

**Overview**

This workflow repurposes Reddit content into original tweets every few hours. It's perfect for creators, marketers, or founders who want to automate content inspiration while keeping tweets sounding human, edgy, and fresh.

Core automation loop:

1. Fetch trending Reddit posts from selected subreddits.
2. Use Gemini AI to write a short, first-person tweet.
3. Check your Google Sheet to avoid reusing the same Reddit post.
4. Publish to Twitter automatically.
5. Log the tweet and Reddit reference in Google Sheets.

**How It Works**

1. Every 2 hours, the workflow triggers automatically.
2. It picks a subreddit (like r/automation, r/n8n, r/SaaS).
3. Gemini AI analyzes a rising Reddit post and writes a fresh, short tweet.
4. The system checks your Google Sheet to ensure it hasn't used that Reddit post before.
5. Once validated, the tweet is published via the Twitter API and logged.

**Logged Data (Google Sheets)**

Each tweet is automatically logged for version control and duplication checks.

| Date | Subreddit | Post ID | Tweet Text |
|------|-----------|---------|------------|
| 08/10/2025 | n8n_ai_agents | 1o16ome | Just saw a wild n8n workflow on Reddit... |

**Key Components**

| Node | Function |
|------|----------|
| Schedule Trigger | Runs every 2 hours to generate a new tweet. |
| Code (Randomly Decide Subreddit) | Picks one subreddit randomly from your preset list. |
| Gemini Chat Model | Generates tweet text in a first-person tone using custom prompt rules. |
| Reddit Tool | Fetches top or rising posts from the chosen subreddit. |
| Google Sheets (read database) | Keeps a record of already-used Reddit posts. |
| Structured Output Parser | Ensures consistent tweet formatting (tweet text, subreddit, post ID). |
| Twitter Node | Publishes the AI-generated tweet. |
| Append Row in Sheet | Logs the tweet with date, subreddit, and post ID. |

**Setup Tutorial**

1. Prerequisites

| Tool | Purpose |
|------|---------|
| n8n Cloud or Self-Host | Workflow execution |
| Google Gemini API Key | For tweet generation |
| Reddit OAuth2 API | To fetch posts |
| Twitter (X) API OAuth2 | To publish tweets |
| Google Sheets API | For logging and duplication tracking |

2. Import the Workflow

- Download Reddit Twitter Automation.json.
- In n8n, click Import Workflow → From File.
- Connect your credentials: Gemini, Reddit account, Twitter (X), and Google Sheets.

3. Configure Google Sheet

Your sheet must include these columns:

| Column | Description |
|--------|-------------|
| PAST TWEETS | The tweet text |
| Date | Auto-generated date |
| subreddit | Reddit source |
| post_id | Reddit post reference |

4. Customize Subreddits

In the Code node, update this array to choose which subreddits to monitor: const subreddits = [ "n8n", "microsaas", "SaaS", "automation", "n8n_ai_agents" ];
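The "Randomly Decide Subreddit" step can be sketched as plain JavaScript, the language n8n Code nodes run. The `pickSubreddit` helper and the output item shape are illustrative, not the template's exact code:

```javascript
// Sketch of the "Randomly Decide Subreddit" Code node (illustrative names).
const subreddits = ["n8n", "microsaas", "SaaS", "automation", "n8n_ai_agents"];

// Pick one subreddit uniformly at random.
// Math.random() is in [0, 1), so the index is always within bounds.
function pickSubreddit(list) {
  return list[Math.floor(Math.random() * list.length)];
}

// n8n Code nodes emit an array of items with a `json` payload.
const item = { json: { subreddit: pickSubreddit(subreddits) } };
// In the actual n8n Code node you would end with: return [item];
```

Downstream nodes can then reference the choice as `{{ $json.subreddit }}`.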
by Roshan Ramani
Smart Telegram Shopping Assistant with AI Product Recommendations

**Workflow Overview**

Target User Role: E-commerce business owners, affiliate marketers, customer support teams

Problem Solved: Businesses need an automated way to help customers find products on Telegram without manual intervention, while providing intelligent recommendations that increase conversion rates.

Opportunity Created: Transform any Telegram channel into a smart shopping assistant that handles both product queries and customer conversations automatically.

**What This Workflow Does**

This workflow creates an intelligent Telegram bot that:

- **Automatically detects** whether users are asking about products or just chatting
- **Scrapes Amazon** in real time to find the best matching products
- **Uses AI to analyze and rank** products based on price, ratings, and user needs
- **Delivers perfectly formatted** recommendations optimized for Telegram
- **Handles casual conversations** professionally when users aren't shopping

**Real-World Use Cases**

- **E-commerce Support**: Reduce customer service workload by 70%
- **Affiliate Marketing**: Automatically recommend products with tracking links
- **Telegram Communities**: Add shopping capabilities to existing channels
- **Product Discovery**: Help customers find products they didn't know existed

**Key Features & Benefits**

Intelligent Intent Detection
- Uses Google Gemini AI to understand user messages
- Automatically routes to product search or conversation mode
- Handles multiple languages and casual typing styles

Real-Time Product Data
- Integrates with Apify's Amazon scraper for live data
- Fetches prices, ratings, reviews, and product details
- Processes up to 10 products per search instantly

AI-Powered Recommendations
- Analyzes multiple products simultaneously
- Ranks by relevance, value, and user satisfaction
- Provides the top 5 personalized recommendations with reasoning

Telegram-Optimized Output
- Perfect formatting with emojis and markdown
- Respects character limits for mobile viewing
- Includes direct purchase links for easy buying

**Setup Requirements**

Required Credentials
- Telegram Bot Token (free from @BotFather)
- Google Gemini API Key (free tier available at AI Studio)
- Apify API Token (free tier includes 100 requests/month)

Required n8n Nodes
- @n8n/n8n-nodes-langchain (for AI functionality)
- Built-in Telegram, HTTP Request, and Code nodes

**Quick Setup Guide**

Step 1: Telegram Bot Creation
1. Message @BotFather on Telegram
2. Create a new bot with the /newbot command
3. Copy the bot token to your credentials

Step 2: AI Configuration
1. Sign up for Google AI Studio
2. Generate an API key for Gemini
3. Add credentials to all three AI model nodes

Step 3: Product Scraping Setup
1. Register for a free Apify account
2. Get an API token from the dashboard
3. Add the token to the "Amazon Product Scraper" node

Step 4: Activation
1. Import the workflow JSON
2. Add your credentials
3. Activate the Telegram Trigger
4. Test with a product query!

**Workflow Architecture**

1. **Message Entry Point**: the Telegram Trigger receives all messages
2. **Query Preprocessing**: cleans and normalizes user input for better search results
3. **AI Intent Classification**: determines whether the message is product-related or conversational
4. **Smart Routing**: directs to the appropriate workflow path based on intent
5. **Conversation Path**: handles greetings, questions, and general support
6. **Product Search Path**: scrapes Amazon → processes data → AI analysis → recommendations
7. **Optimized Delivery**: formats and sends responses back to Telegram

**Customization Opportunities**

Easy Modifications
- **Multiple Marketplaces**: Add eBay, Flipkart, or local stores
- **Product Categories**: Specialize for electronics, fashion, etc.
- **Language Support**: Translate for different markets
- **Branding**: Customize responses with your brand voice

Advanced Extensions
- **Price Monitoring**: Set up alerts for price drops
- **User Preferences**: Remember customer preferences
- **Analytics Dashboard**: Track popular products and queries
- **Affiliate Integration**: Add commission tracking links

**Success Metrics & ROI**

Performance Benchmarks
- **Response Time**: 3–5 seconds for product queries
- **Accuracy**: 90%+ relevant product matches
- **User Satisfaction**: 85%+ positive feedback in testing

Business Impact
- **Reduced Support Costs**: Automate 70% of product inquiries
- **Increased Conversions**: Personalized recommendations boost sales
- **24/7 Availability**: Never miss a customer inquiry
- **Scalability**: Handle unlimited concurrent users

**Workflow Complexity**

Intermediate level: requires API setup but includes detailed instructions. Perfect for users with basic n8n experience who want to build something powerful.
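The query-preprocessing step in the architecture above can be sketched as a small function for an n8n Code node. The function name and the specific cleanup rules are assumptions for illustration, not the template's actual logic:

```javascript
// Hypothetical sketch of the "Query Preprocessing" Code node: normalize a raw
// Telegram message before intent classification and Amazon search.
function normalizeQuery(text) {
  return text
    .replace(/[^\p{L}\p{N}\s.,$-]/gu, "") // drop emojis and stray symbols that hurt search
    .replace(/\s+/g, " ")                 // collapse repeated whitespace
    .trim()
    .toLowerCase();
}
```

The Unicode property escapes (`\p{L}`, `\p{N}` with the `u` flag) keep letters and digits from any language, which matches the template's claim of handling multiple languages and casual typing styles.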
by Cheng Siong Chin
**How It Works**

This workflow automates predictive maintenance for vehicle fleets by combining real-time telemetry analysis with historical pattern recognition to identify potential failures before they occur. Designed for fleet managers, maintenance supervisors, and transportation operations teams, it solves the critical challenge of preventing unexpected vehicle breakdowns while optimizing maintenance scheduling and resource allocation.

The system triggers on a schedule, fetches current vehicle telemetry data alongside historical maintenance records, and merges the datasets for comprehensive analysis. It then deploys specialized AI agents using Anthropic's Claude to detect anomalies and prioritize maintenance interventions. The workflow calculates urgency levels using machine learning models and business rules, formats findings into standardized maintenance records and urgent alerts, generates audit logs for compliance tracking, and routes notifications to the appropriate maintenance teams based on severity.

**Setup Steps**

1. Configure the Schedule Trigger with the desired monitoring frequency for fleet checks.
2. Set up API credentials for the Fetch Real-Time Vehicle Telemetry node with your fleet management system.
3. Configure the Fetch Historical Vehicle Data node with maintenance database API access.
4. Connect Anthropic API credentials for both the Anomaly Detection and Maintenance Prioritization agents.
5. Update the Anomaly Detection Model with your fleet's baseline performance parameters.
6. Customize the UL (Urgency Level) Calculation Tool and the Maintenance Prioritization Output Parser.

**Prerequisites**

Active Anthropic API account, fleet telemetry system with API access, historical maintenance database.

**Use Cases**

Commercial fleet preventive maintenance, vehicle health monitoring, breakdown prediction.

**Customization**

Modify anomaly detection thresholds for vehicle types; adjust prioritization algorithms for operational priorities.

**Benefits**

Reduces unexpected breakdowns by 80% and decreases maintenance costs through predictive scheduling.
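The urgency-level calculation (the UL Calculation Tool) might combine the anomaly score with simple business rules along these lines. Every field name and threshold below is a hypothetical stand-in for your fleet's actual parameters, not the template's built-in logic:

```javascript
// Illustrative urgency-level calculation: an anomaly score from the detection
// agent (0..1) is bumped by rule-based penalties, then bucketed into a label.
// All thresholds and telemetry field names are assumptions.
function urgencyLevel({ anomalyScore, engineTempC, milesSinceService }) {
  let score = anomalyScore;
  if (engineTempC > 110) score += 0.3;         // overheating raises urgency
  if (milesSinceService > 10000) score += 0.2; // overdue maintenance raises urgency
  if (score >= 0.8) return "critical";
  if (score >= 0.5) return "high";
  if (score >= 0.3) return "medium";
  return "low";
}
```

Routing notifications by severity then becomes a simple switch on the returned label.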
by Rahul Joshi
Automatically detect, classify, and document GitHub API errors using AI. This workflow connects GitHub, OpenAI (GPT-4o), Airtable, Notion, and Slack to build a real-time, searchable API error knowledge base, helping engineering and support teams respond faster, stay aligned, and maintain clean documentation.

**What This Template Does**

1. Triggers on new or updated GitHub issues (API-related).
2. Extracts key fields (title, body, repo, and link).
3. Classifies issues using OpenAI GPT-4o, identifying error type, category, root cause, and severity.
4. Validates and parses the AI output into a structured JSON format.
5. Creates or updates organized FAQ-style entries in Airtable for quick lookup.
6. Logs detailed entries into Notion, maintaining an ongoing issue knowledge base.
7. Notifies the right Slack team channel (DevOps, Backend, API, Support) with concise summaries.
8. Tracks and prevents duplicates, keeping your error catalog clean and auditable.

**Key Benefits**

- Converts unstructured GitHub issues into AI-analyzed documentation
- Centralizes API error intelligence across teams
- Reduces time-to-resolution for recurring issues
- Maintains synchronized records in Airtable & Notion
- Keeps DevOps and Support instantly informed through Slack alerts
- Fully automated, scalable, and low-cost using GPT-4o

**Features**

- Real-time GitHub trigger for API or backend issues
- GPT-4o-based AI classification (error type, cause, severity, confidence)
- Smart duplicate-prevention logic
- Bi-directional sync to Airtable + Notion
- Slack alerts with contextual AI insights
- Modular design: easy to extend with Jira, Teams, or email integrations

**Requirements**

- GitHub OAuth2 credentials
- OpenAI API key (GPT-4o recommended)
- Airtable base & table IDs (with fields like Error Code, Category, Severity, Root Cause)
- Notion integration with database access
- Slack bot token with the chat:write scope

**Target Audience**

- Engineering & DevOps teams managing APIs
- Customer support & SRE teams maintaining FAQs
- Product managers tracking recurring API issues
- SaaS orgs automating documentation & error visibility

**Step-by-Step Setup Instructions**

1. Connect your GitHub account and enable the "issues" webhook event.
2. Add OpenAI credentials (GPT-4o model for classification).
3. Create an Airtable base with the fields: Error Code, Category, Root Cause, Severity, Confidence.
4. Configure your Notion database with a matching schema and access.
5. Set up Slack credentials and choose your alert channels.
6. Test with a sample GitHub issue to validate the AI classification.
7. Enable the workflow and enjoy continuous AI-powered issue documentation!
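The validate-and-parse stage for the AI classification could be implemented in an n8n Code node along these lines. The required field names mirror the classification attributes described above, but the exact JSON keys and the fence-stripping approach are assumptions about how the model's output arrives:

```javascript
// Hedged sketch: validate and parse GPT-4o's classification output.
// Models sometimes wrap JSON in markdown fences or prose, so keep only the
// outermost {...} object instead of parsing the raw string directly.
const REQUIRED = ["errorType", "category", "rootCause", "severity"];

function parseClassification(raw) {
  const start = raw.indexOf("{");
  const end = raw.lastIndexOf("}");
  if (start === -1 || end === -1) throw new Error("No JSON object found in model output");
  const data = JSON.parse(raw.slice(start, end + 1)); // throws on malformed JSON
  for (const field of REQUIRED) {
    if (!(field in data)) throw new Error("Missing field: " + field);
  }
  return data;
}
```

Throwing on malformed or incomplete output lets n8n's error handling surface bad classifications instead of writing incomplete rows to Airtable or Notion.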