by Axiomlab.dev
## Tasks Briefing

This template posts a clean, Slack-ready morning summary of your Google Tasks due today. It fetches tasks, filters only those due “today” in your timezone, asks a local LLM (via LangChain + Ollama) to produce a short summary (no steps, just a concise brief), strips any hidden <think> blocks, and delivers the message to your chosen Slack channel.

### How it works
1. **Trigger at Morning (Cron)** – runs at 7:00 AM (you can change the hour) to kick things off daily.
2. **Get many tasks (Google Tasks node)** – pulls tasks from your selected Google Tasklist.
3. **Code (Filter Due Today)** – normalizes dates to your timezone, keeps only tasks due today, and emits a fallback flag if none exist.
4. **If** – routes: True (has tasks) → continues to the LLM summary path; False (no tasks) → sends a “No tasks due today” message to Slack.
5. **Code (Build LLM Prompt)** – builds a compact, Markdown-only prompt for the model (no tool calls).
6. **Basic LLM Chain (LangChain) + Ollama Model** – generates a short summary for Slack.
7. **Code (Cleanup)** – removes any <think>…</think> content if the model includes it.
8. **Send a message (Slack)** – posts the final brief to your Slack channel.

### Required credentials
- **Google Tasks OAuth2 API** – to read tasks from your Google Tasklist.
- **Slack API** – to post the summary into a channel.
- **Ollama** – local model endpoint (e.g., qwen3:4b); used by the LangChain LLM nodes.

### Setup Instructions
**Google Tasks credential**
- In Google Cloud Console: enable the Google Tasks API, create an OAuth Client (Web), and set the redirect URI shown by n8n.
- In n8n Credentials, add Google Tasks OAuth2 API with scope https://www.googleapis.com/auth/tasks (read/write) or https://www.googleapis.com/auth/tasks.readonly (read-only).
- In the Get many tasks node, select your credential and your Tasklist.

**Slack credential & channel**
- In n8n Credentials, add Slack API (bot/user token with chat:write).
- In the Send a message nodes, select your Slack credential and set the Channel (e.g., #new-leads).
**Ollama model (LangChain)**
- Ensure Ollama is running on your host (default http://localhost:11434).
- Pull a model (e.g., `ollama pull qwen3:4b`) or use another supported model (llama3:8b, etc.).
- In the Ollama Model node, select your Ollama credential and set the model name to match what you pulled.

**Timezone & schedule**
- The Cron node is set to 7:00 AM; adjust as needed.
- The Code (Filter Due Today) node is configured for Asia/Dhaka; change the TZ constant if you prefer a different timezone.

**(Optional) Cleanup safety**
The template includes a Code (Cleanup) node that strips <think>…</think> blocks from model output. Keep it connected before the Slack node.

**Test the flow**
Run the workflow once manually:
- If you have tasks due today, you should see a concise summary posted to your Slack channel.
- If none are due, you’ll receive a friendly “No tasks due today” message.

**Activate**
When everything looks good, toggle the workflow to Active to receive the daily summary automatically.
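The Filter Due Today logic can be sketched in plain JavaScript. The TZ constant mirrors the description above; the task field names and sample data are illustrative:

```javascript
// Sketch of the "Filter Due Today" Code node logic (sample data illustrative).
const TZ = 'Asia/Dhaka';

// Format an instant as YYYY-MM-DD in the target timezone.
const dateInTz = (d) => d.toLocaleDateString('en-CA', { timeZone: TZ });

// Keep tasks whose RFC 3339 "due" date falls on today's date in TZ;
// emit a fallback flag when none do, so the If node can branch.
function filterDueToday(tasks) {
  const today = dateInTz(new Date());
  const due = tasks.filter((t) => t.due && dateInTz(new Date(t.due)) === today);
  return due.length ? due : [{ noTasks: true }];
}

const sample = [
  { title: 'Pay invoice', due: new Date().toISOString() },
  { title: 'Old task', due: '2020-01-01T00:00:00.000Z' },
];
console.log(filterDueToday(sample)); // keeps only "Pay invoice"
```

In the actual Code node the task list comes from the Get many tasks node's items, and each result is wrapped as `{ json: ... }` before being returned.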
by Abdulrahman Alhalabi
## Arabic OCR Telegram Bot

### How it Works
1. **Receive PDF Files** – users send PDF documents via Telegram to the bot.
2. **OCR Processing** – Mistral AI's OCR service extracts Arabic text from document pages.
3. **Text Organization** – processes and formats extracted content with page numbers.
4. **Create Google Doc** – generates a formatted document with all extracted text.
5. **Deliver Results** – sends users a clickable link to their processed document.

### Set up Steps
Setup time: ~20 minutes.
1. **Create Telegram Bot** – get a bot token from @BotFather on Telegram.
2. **Configure APIs** – set up Mistral AI OCR and Google Docs API credentials.
3. **Set Folder Permissions** – create a Google Drive folder for storing results.
4. **Test Bot** – send a sample Arabic PDF to verify OCR accuracy.
5. **Deploy Webhook** – activate the Telegram webhook for real-time processing.

Detailed API configuration and Arabic text handling notes are included as sticky notes within the workflow.

### What You'll Need
- Telegram Bot Token (free from @BotFather)
- Mistral AI API key (OCR service)
- Google Docs/Drive API credentials
- Google Drive folder for document storage
- Sample Arabic PDF files for testing

### Key Features
- Real-time progress updates (5-step process notifications)
- Automatic page numbering in Arabic
- Direct Google Docs integration
- Error handling for non-PDF files
by Cuong Nguyen
## Who is this for?
This workflow is designed for Content Marketing Teams, Agencies, and Professional Editors who prefer writing in Google Docs but need a seamless way to publish to WordPress. Unlike generic "AI Writers" that generate content from scratch (which often fails AI detection), this workflow focuses on "Document Ops"—automating the tedious task of moving, cleaning, and optimizing existing human-written content.

## Why use this workflow? (The SEO Advantage)
Most automation templates leave your SEO score at 0/100 because they fail to map RankMath metadata. This workflow hits the ground running with an immediate 65-70/100 RankMath score. By using a Gemini AI Agent to analyze your content and mapping it to hidden RankMath API fields, it automatically passes these critical checks:
- ✅ **Focus Keyword in SEO Title**: AI automatically inserts the target keyword at the beginning.
- ✅ **Focus Keyword in Meta Description**: AI crafts a compelling description containing the keyword.
- ✅ **Focus Keyword in URL**: AI generates a clean, short, keyword-rich slug.
- ✅ **Focus Keyword at the Start**: The workflow intelligently injects a "hook" sentence containing the keyword at the very top of your post.
- ✅ **Content Length**: Preserves your original long-form content.

## How it works
1. **Monitors Google Drive**: Watches for new HTML/Doc files in a specific "Drafts" folder.
2. **Cleans Content**: Sanitizes raw HTML from Google Docs (removing messy styles and tags).
3. **Smart Duplicate Check**: Checks if the post already exists on WordPress (via slug) to decide whether to create a new draft or update an existing one.
4. **AI Analysis (Gemini)**: Extracts the best Focus Keyword, SEO Title, and Meta Description from your content.
5. **RankMath Integration**: Pushes these SEO values directly into RankMath's custom meta keys.
6. **Archiving**: Moves processed files to a "Published" folder to keep your Drive organized.
## Critical Prerequisites (Must Read)
To allow n8n to update RankMath SEO data and prevent 401 Unauthorized errors, you MUST add a helper snippet to your WordPress site:
1. Access your WordPress files via FTP/File Manager.
2. Navigate to wp-content/mu-plugins/ (create the mu-plugins folder if it doesn't exist).
3. Create a file named n8n-rankmath-helper.php and paste the following code:

```php
<?php
/*
Plugin Name: n8n RankMath & Auth Helper
Description: Fixes Basic Auth Header for n8n and exposes RankMath meta keys to REST API.
*/

// 1. Fix Authorization Header (solves 401 errors on Apache/LiteSpeed)
add_filter( 'wp_is_application_passwords_available', '__return_true' );

if ( ! function_exists( 'aiops_enable_basic_auth' ) ) {
	function aiops_enable_basic_auth() {
		if ( isset( $_SERVER['HTTP_AUTHORIZATION'] ) ) {
			$auth = $_SERVER['HTTP_AUTHORIZATION'];
			if ( strpos( $auth, 'Basic ' ) === 0 ) {
				list( $username, $password ) = explode( ':', base64_decode( substr( $auth, 6 ) ) );
				$_SERVER['PHP_AUTH_USER'] = $username;
				$_SERVER['PHP_AUTH_PW']   = $password;
			}
		}
	}
	add_action( 'init', 'aiops_enable_basic_auth' );
}

// 2. Expose RankMath meta keys to the REST API
add_action( 'rest_api_init', function () {
	$meta_keys = [
		'rank_math_title',
		'rank_math_description',
		'rank_math_focus_keyword',
		'rank_math_robots',
		'rank_math_canonical_url',
	];
	foreach ( $meta_keys as $meta_key ) {
		register_meta( 'post', $meta_key, [
			'show_in_rest'  => true,
			'single'        => true,
			'type'          => 'string',
			'auth_callback' => function () {
				return current_user_can( 'edit_posts' );
			},
		] );
	}
} );
```

## How to set up
1. **Configure Credentials**:
   - **Google Drive OAuth2** (Drive scopes).
   - **Google Gemini (PaLM)** API key.
   - **WordPress**: Connect using **Application Passwords** (Users > Profile > Application Passwords).
2. **Global Configuration (First Node)**:
   - Open the node named CONFIG - Edit Settings Here.
   - **wp_base_url**: Enter your site URL (e.g., https://your-site.com - no trailing slash).
   - **drive_published_folder_id**: Enter the ID of the Google Drive folder where you want to move published files.
3. **Trigger Setup**:
   - Open the Google Drive Trigger node.
   - Select your specific "Drafts" folder in the Folder to Watch field.

## Future Roadmap
We are actively improving this template. Upcoming V2 will feature:
- **AI Featured Image Generation**: Auto-create branded thumbnails.
- **Content Illustrations**: Auto-insert relevant images into the body content.

## Need Help or Want to Customize This?
Contact me for consulting and support: Email: cuongnguyen@aiops.vn
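With the mu-plugin in place, the RankMath keys become ordinary writable post meta, so the workflow's HTTP Request node can update them through the standard WordPress REST endpoint (`POST {wp_base_url}/wp-json/wp/v2/posts/{id}`, authenticated with an Application Password). A sketch of the request body; the field values here are illustrative, and in the workflow the Gemini agent supplies the real ones:

```javascript
// Build the JSON body for updating a post's RankMath fields via the
// WordPress REST API. The meta keys only accept writes because the
// mu-plugin registered them with show_in_rest => true.
function buildRankMathUpdate({ focusKeyword, seoTitle, metaDescription, slug }) {
  return {
    slug, // keyword-rich URL slug
    meta: {
      rank_math_title: seoTitle,
      rank_math_description: metaDescription,
      rank_math_focus_keyword: focusKeyword,
    },
  };
}

// Illustrative values (not the template's output):
const body = buildRankMathUpdate({
  focusKeyword: 'n8n wordpress automation',
  seoTitle: 'n8n WordPress Automation: Publish Docs in Minutes',
  metaDescription: 'Automate Google Docs to WordPress with n8n wordpress automation.',
  slug: 'n8n-wordpress-automation',
});

console.log(JSON.stringify(body, null, 2));
```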
by Summer
## Website Leads to Voice Demo and Scheduling
Creator: Summer Chang
AI Booking Agent Setup Guide

### Overview
This automation turns your website into an active booking agent. When someone fills out your form, it automatically:
1. Adds their information to Notion
2. Researches their business from their website with AI
3. Calls them immediately with a personalized pitch
4. Updates Notion with call results

Total setup time: 30-45 minutes

### What You Need
Before starting, create accounts and gather these:
- n8n account (cloud or self-hosted)
- Notion account – the free plan works; duplicate my Notion template
- OpenRouter API key – get from openrouter.ai
- Vapi account – get from vapi.ai:
  - Create an AI assistant
  - Set up a phone number
  - Copy your API key, Assistant ID, and Phone Number ID

### How It Works: The Complete Flow
1. A visitor fills out the form on your website.
2. The form submission creates a new record in Notion with Status = "New".
3. The Notion Trigger detects the new record (checks every minute).
4. The Main Workflow executes:
   - Fetches the lead's website
   - AI analyzes their business
   - Updates Notion with the analysis
   - Makes a Vapi call with a personalized intro
5. The call happens between your AI agent and the lead.
6. When the call ends, Vapi sends a webhook to n8n.
7. The Webhook Workflow executes:
   - Fetches call details from Vapi
   - AI generates a call summary
   - Updates Notion with results and recording
by Mr Shifu
## AI Network Diagram Prompt Generator

### Template Description
This workflow automates the creation of network diagram prompts using AI. It retrieves Layer-2 topology data from AWX, parses device relationships, and generates a clean, structured prompt ready for Lucidchart’s AI diagram generator.

### How It Works
The workflow triggers an AWX Job Template that runs commands such as `show cdp neighbors detail`. After the job completes, n8n fetches the stdout, extracts neighbor relationships through a JavaScript parser, and sends the structured data to an LLM (Gemini). The LLM transforms the topology into a formatted prompt you can paste directly into Lucidchart to instantly generate a visual network diagram.

### Setup Steps
1. **Configure AWX**: Ensure your Job Template runs the required network commands and produces stdout. Obtain your AWX base URL, credentials, and Job Template ID.
2. **Add Credentials in n8n**: Create AWX API credentials. Add Google Gemini credentials for the LLM node.
3. **Update Workflow Nodes**: Insert your AWX URL and Job Template ID in the “Launch Job” node. Verify endpoints in the “Job Status” and “Job Stdout” nodes.
4. **Run the workflow**: After execution, copy the generated Lucidchart prompt and paste it into Lucidchart’s AI to produce the network diagram.
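The JavaScript parser step can be sketched roughly as follows; it assumes the common `show cdp neighbors detail` output shape (blocks separated by dashed lines), and the field names in the result objects are illustrative:

```javascript
// Sketch of the neighbor-relationship parser run on the AWX job stdout.
// Assumes Cisco "show cdp neighbors detail" formatting.
function parseCdpNeighbors(localDevice, stdout) {
  const links = [];
  // Each neighbor entry is separated by a long dashed line.
  for (const block of stdout.split(/-{5,}/)) {
    const device = block.match(/Device ID:\s*(\S+)/);
    const local = block.match(/Interface:\s*([^,\s]+)/);
    const remote = block.match(/Port ID \(outgoing port\):\s*(\S+)/);
    if (device && local && remote) {
      links.push({
        from: localDevice,
        fromPort: local[1],
        to: device[1],
        toPort: remote[1],
      });
    }
  }
  return links;
}

const sample = `
-------------------------
Device ID: SW-CORE-01
Interface: GigabitEthernet0/1,  Port ID (outgoing port): GigabitEthernet1/0/24
`;
console.log(parseCdpNeighbors('ACCESS-SW-02', sample));
```

The resulting link list is what gets handed to Gemini to be turned into a Lucidchart-ready prompt.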
by Dietmar
## Build a PDF to Vector RAG System: Mistral OCR, Weaviate Database and MCP Server

A comprehensive RAG (Retrieval-Augmented Generation) workflow that transforms PDF documents into searchable vector embeddings using advanced AI technologies.

### 🚀 Features
- **PDF Document Processing**: Upload and extract text from PDF files using Mistral's OCR capabilities
- **Vector Database Storage**: Store document embeddings in the Weaviate vector database for efficient retrieval
- **AI-Powered Search**: Search through documents using semantic similarity with Cohere embeddings
- **MCP Server Integration**: Expose the knowledge base as an AI tool through MCP (Model Context Protocol)
- **Document Metadata**: Basic document metadata including filename, content, source, and upload timestamp
- **Text Chunking**: Automatic text splitting for optimal vector storage and retrieval

### 🛠️ Technologies Used
- **Mistral AI**: OCR and text extraction from PDF documents
- **Weaviate**: Vector database for storing and retrieving document embeddings
- **Cohere**: Multilingual embeddings and reranking for improved search accuracy
- **MCP (Model Context Protocol)**: AI tool integration for external AI workflows
- **n8n**: Workflow automation and orchestration

### 📋 Prerequisites
Before using this template, you'll need to set up the following credentials:
- **Mistral Cloud API**: For PDF text extraction
- **Weaviate API**: For vector database operations
- **Cohere API**: For embeddings and reranking
- **HTTP Header Auth**: For MCP server authentication

### 🔧 Setup Instructions
1. Import the template into your n8n instance.
2. Configure credentials for all required services.
3. Set up a Weaviate collection named "KnowledgeDocuments".
4. Configure webhook paths for the MCP server and form trigger.
5. Test the workflow by uploading a PDF document.

### 📊 Workflow Overview
```
PDF Upload → Text Extraction → Document Processing → Vector Storage → AI Search
     ↓             ↓                   ↓                  ↓             ↓
Form Trigger → Mistral OCR → Prepare Metadata → Weaviate DB → MCP Server
```

### 🎯 Use Cases
- **Knowledge Base Management**: Create searchable repositories of company documents
- **Research Documentation**: Process and search through research papers and reports
- **Legal Document Search**: Index and search through legal documents and contracts
- **Technical Documentation**: Make technical manuals and guides searchable
- **Academic Literature**: Process and search through academic papers and publications

### ⚠️ Important Notes
- **Model Consistency**: Use the same embedding model for both storage and retrieval
- **Collection Management**: Ensure your Weaviate collection is properly configured
- **API Limits**: Be aware of rate limits for the Mistral, Cohere, and Weaviate APIs
- **Document Size**: Consider chunking large documents for optimal processing

### 🔗 Related Resources
- n8n Documentation
- Weaviate Documentation
- Mistral AI Documentation
- Cohere Documentation
- MCP Protocol Documentation

### 📝 License
This template is provided as-is for educational and commercial use.
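The Text Chunking step can be illustrated with a minimal fixed-size splitter with overlap; the 500/50 sizes here are assumptions for illustration, not the template's actual settings:

```javascript
// Minimal sketch of overlapping fixed-size chunking, similar in spirit to
// the workflow's text-splitting step (chunk size and overlap are assumed).
function chunkText(text, chunkSize = 500, overlap = 50) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
  }
  return chunks;
}

const doc = 'x'.repeat(1200);
const chunks = chunkText(doc);
console.log(chunks.length); // → 3
```

The overlap keeps sentences that straddle a boundary retrievable from either neighboring chunk, which is why splitters generally default to a non-zero value.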
by Dahiana
This template demonstrates how to build an AI-powered name generator that creates realistic names, perfect for UX/UI designers, developers, and content creators.

**Use cases:** User persona creation, mockup development, prototype testing, customer testimonials, team member listings, app interface examples, website content, accessibility testing, and any scenario requiring realistic placeholder names.

## How it works
- **AI-Powered Generation:** Uses any LLM to generate names based on your specifications
- **Customizable Parameters:** Accepts gender preferences, name count, and optional reference names for style matching
- **UX/UI Optimized:** Names are specifically chosen to work well in design mockups and prototypes
- **Smart Formatting:** Returns clean JSON arrays ready for integration with design tools and applications
- **Reference Matching:** Can generate names similar in style to a provided reference name

## How to set up
1. Replace the "Dummy API" credentials with your preferred language model API key.
2. Update the webhook path and authentication as needed for your application.
3. Test with different parameters: gender (masculine/feminine/neutral), count (1-20), reference_name (optional).
4. Integrate the webhook URL with your design tools, Bubble apps, or other platforms.

## Requirements
- LLM API access (OpenAI, Claude, or another language model)
- n8n instance (cloud or self-hosted)
- A platform capable of making HTTP POST requests

## API Usage
POST to the webhook with a JSON body (`reference_name` is optional):

```json
{
  "gender": "masculine",
  "count": 5,
  "reference_name": "Alex Chen"
}
```

Response:

```json
{
  "success": true,
  "names": ["Marcus Johnson", "David Kim", "Sofia Rodriguez", "Chen Wei", "James Wilson"],
  "count": 5
}
```

## How to customize
- Modify the AI prompt for specific naming styles or regions.
- Add additional parameters (age, profession, cultural background).
- Connect to databases for persistent name storage.
- Integrate with design tool APIs (Figma, Sketch, Adobe XD).
- Create batch processing for large mockup projects.
by Dahiana
## YouTube Transcript Extractor
This n8n template demonstrates how to extract transcripts from YouTube videos using two different approaches: automated Google Sheets monitoring and direct webhook API calls.

**Use cases:** Content creation, research, accessibility, meeting notes, content repurposing, SEO analysis, or building transcript databases for analysis.

## How it works
- **Google Sheets Integration:** Monitor a sheet for new YouTube URLs and automatically extract transcripts
- **Direct API Access:** Send YouTube URLs via webhook and get instant transcript responses
- **Smart Parsing:** Extracts the video ID from various YouTube URL formats (youtube.com, youtu.be, embed)
- **Rich Metadata:** Returns the video title, channel, publish date, duration, and category alongside the transcript
- **Fallback Handling:** Gracefully handles videos without available transcripts

## Two Workflow Paths
1. **Automated Sheet Processing:** Add URLs to a Google Sheet → auto-extract → save results to the sheet
2. **Webhook API:** Send a POST request with a video URL → get an instant transcript response

## How to set up
1. Replace the "Dummy YouTube Transcript API" credentials with your YouTube Transcript API key.
2. Create your own Google Sheet with columns: "url" (input sheet) and "video title", "transcript" (results sheet).
3. Update the Google Sheets credentials to connect your sheets.
4. Test each workflow path separately.
5. Customize the webhook path and authentication as needed.

## Requirements
- YouTube Transcript API access (youtube-transcript.io or similar)
- Google Sheets API credentials (for the automated workflow)
- n8n instance (cloud or self-hosted)
- YouTube videos with available transcripts

## How to customize
- Modify transcript processing in the Code nodes.
- Add additional metadata extraction.
- Connect to other storage solutions (databases, CMS).
- Add text analysis or summarization steps.
- Set up notifications for new transcripts.
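The Smart Parsing step can be sketched like this, covering the three URL shapes the template mentions (watch links, youtu.be short links, and embed links):

```javascript
// Sketch of the video-ID extraction step. YouTube IDs are 11 characters
// from the set [A-Za-z0-9_-].
function extractVideoId(url) {
  const patterns = [
    /youtube\.com\/watch\?v=([A-Za-z0-9_-]{11})/,
    /youtu\.be\/([A-Za-z0-9_-]{11})/,
    /youtube\.com\/embed\/([A-Za-z0-9_-]{11})/,
  ];
  for (const re of patterns) {
    const m = url.match(re);
    if (m) return m[1];
  }
  return null; // not a recognized YouTube URL → triggers fallback handling
}

console.log(extractVideoId('https://youtu.be/dQw4w9WgXcQ')); // → dQw4w9WgXcQ
```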
by Avkash Kakdiya
## How it works
This workflow automates the generation of ad-ready product images by combining product and influencer photos with AI styling. It runs on a scheduled trigger, fetches data from Google Sheets, and retrieves product and influencer images from Google Drive. The images are processed with OpenAI and OpenRouter to generate enhanced visuals, which are then saved back to Google Drive. Finally, the result is logged in Google Sheets with a ready-to-publish status.

## Step-by-step
### 1. Trigger & data preparation
- **Schedule Trigger** – Runs the workflow automatically on a set schedule.
- **Google Sheets (Get the Raw)** – Retrieves today’s product and model URLs.
- **Google Drive (Download Product Image)** – Downloads the product image.
- **Google Drive (Download Influencer Image)** – Downloads the influencer image.
- **Extract from File (Binary → Base64)** – Converts both product and model images for AI processing.

### 2. AI analysis & image generation
- **OpenAI (Analyze Image)** – Creates an ad-focused visual description (lighting, mood, styling).
- **HTTP Request (OpenRouter Gemini)** – Generates an AI-enhanced image combining product + influencer.
- **Code Node (Cleanup)** – Cleans the base64 output to remove extra prefixes.
- **Convert to File** – Transforms the AI output into a proper image file.

### 3. Save & update
- **Google Drive (Upload Image)** – Uploads the generated ad image to the target folder.
- **Google Sheets (Append/Update Row)** – Stores the Drive link and updates the publish status.

## Why use this?
- Automates the entire ad image creation process without manual design work.
- Ensures product visuals are consistent, styled, and ad-ready.
- Saves final creatives in Google Drive for easy access and sharing.
- Keeps campaign tracking organized by updating Google Sheets automatically.
- Scales daily ad production efficiently for multiple products or campaigns.
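The Code Node (Cleanup) step can be sketched as a one-line transform, assuming the "extra prefix" is the usual data-URI prefix that image-generation APIs often prepend to base64 output:

```javascript
// Sketch of the Cleanup step: strip a data-URI prefix such as
// "data:image/png;base64," so only raw base64 reaches Convert to File.
function cleanBase64(output) {
  return output.replace(/^data:image\/[a-z]+;base64,/, '').trim();
}

console.log(cleanBase64('data:image/png;base64,iVBORw0KGgo=')); // → iVBORw0KGgo=
```

If the output is already bare base64, the function returns it unchanged, so the step is safe to run unconditionally.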
by Takuya Ojima
## Who’s it for
Remote and distributed teams that schedule across time zones and want to avoid meetings landing on public holidays—PMs, CS/AM teams, and ops leads who own cross-regional calendars.

## What it does / How it works
The workflow checks next week’s Google Calendar events, compares event dates against public holidays for selected country codes, and produces a single Slack digest with any conflicts plus suggested alternative dates.

Core steps: Workflow Configuration (Set) → Fetch Public Holidays (via a public holiday API such as Calendarific/Nager.Date) → Get Next Week Calendar Events (Google Calendar) → Detect Holiday Conflicts (compare dates) → Generate Reschedule Suggestions (find the nearest business day that isn’t a holiday/weekend) → Format Slack Digest → Post Slack Digest.

## How to set up
1. Open Workflow Configuration (Set) and edit: countryCodes, calendarId, slackChannel, nextWeekStart, nextWeekEnd.
2. Connect your own Google Calendar and Slack credentials in n8n (no hardcoded keys).
3. (Optional) Adjust the Trigger to run daily or only on Mondays.

## Requirements
- n8n (Cloud or self-hosted)
- Google Calendar read access to the target calendar
- Slack app with permission to post to the chosen channel
- A public-holiday API (no secrets needed for Nager.Date; Calendarific requires an API key)

## How to customize the workflow
- **Time window:** Change nextWeekStart/End to scan a different period.
- **Holiday sources:** Add or swap APIs; merge multiple regions.
- **Suggestion logic:** Tweak the look-ahead window or rules (e.g., skip Fridays).
- **Output:** Post per-calendar messages, DM owners, or create tentative reschedule events automatically.
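The Generate Reschedule Suggestions step boils down to stepping forward day by day until a date is neither a weekend nor a holiday. A minimal sketch, assuming holidays are a set of YYYY-MM-DD strings as returned by APIs like Nager.Date:

```javascript
// Sketch of the reschedule suggestion: find the next date after fromIso
// that is neither a weekend nor in the holiday set.
function nextBusinessDay(fromIso, holidays) {
  const d = new Date(fromIso + 'T00:00:00Z');
  do {
    d.setUTCDate(d.getUTCDate() + 1);
  } while (
    d.getUTCDay() === 0 || // Sunday
    d.getUTCDay() === 6 || // Saturday
    holidays.has(d.toISOString().slice(0, 10))
  );
  return d.toISOString().slice(0, 10);
}

// 2025-01-01 falls on a Wednesday and is a holiday, so the first
// acceptable day after 2024-12-31 is Thursday, January 2.
const holidays = new Set(['2025-01-01']);
console.log(nextBusinessDay('2024-12-31', holidays)); // → 2025-01-02
```

The "skip Fridays" customization mentioned above would just add one more condition to the while loop.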
by vanhon
## Split Test AI Prompts Using Supabase & Langchain Agent
This workflow allows you to A/B test different prompts for an AI chatbot powered by Langchain and OpenAI. It uses Supabase to persist session state and randomly assigns users to either a baseline or alternative prompt, ensuring consistent prompt usage across the conversation.

### 🧠 Use Case
Prompt optimization is crucial for maximizing the performance of AI assistants. This workflow helps you run controlled experiments on different prompt versions, giving you a reliable way to compare performance over time.

### ⚙️ How It Works
1. When a message is received, the system checks whether the session already exists in the Supabase table.
2. If not, it randomly assigns the session to either the baseline or alternative prompt.
3. The selected prompt is passed into a Langchain Agent using the OpenAI Chat Model.
4. Postgres is used as chat memory for multi-turn conversation support.

### 🧪 Features
- Randomized A/B split test per session
- Supabase database for session persistence
- Langchain Agent + OpenAI GPT-4o integration
- PostgreSQL memory for maintaining chat context
- Fully documented with sticky notes

### 🛠️ Setup Instructions
1. Create a Supabase table named split_test_sessions with the following columns:
   - session_id (text)
   - show_alternative (boolean)
2. Add credentials for:
   - Supabase
   - OpenAI
   - PostgreSQL (for chat memory)
3. Modify the "Define Path Values" node to set your baseline and alternative prompts.
4. Activate the workflow.
5. Send messages to test both prompt paths in action.

### 🔄 Next Steps
- Add tracking for conversions or feedback scores to compare outcomes.
- Modify the prompt content or model settings (e.g., temperature, model version).
- Expand to multi-variant tests beyond A/B.

### 📚 Learn More
How This Workflow Uses Supabase + OpenAI for Prompt Testing
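The sticky-assignment logic described in How It Works can be sketched like this; a plain object stands in for the split_test_sessions table, and the 50/50 coin flip mirrors the randomized split:

```javascript
// Sketch of per-session prompt assignment. In the workflow, `sessions` is
// the Supabase table split_test_sessions (session_id, show_alternative).
function getAssignment(sessions, sessionId) {
  // Existing session: reuse the stored flag so every turn of the
  // conversation sees the same prompt.
  if (sessionId in sessions) return sessions[sessionId];
  // New session: randomly assign baseline (false) or alternative (true).
  const showAlternative = Math.random() < 0.5;
  sessions[sessionId] = showAlternative;
  return showAlternative;
}

const sessions = {};
const first = getAssignment(sessions, 'abc-123');
const second = getAssignment(sessions, 'abc-123');
console.log(first === second); // → true (assignment is sticky per session)
```

Persisting the flag rather than re-flipping per message is what keeps a multi-turn conversation inside a single experiment arm.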
by SpaGreen Creative
## WooCommerce New Category Alert via WhatsApp Using Rapiwa API
This n8n automation listens for the creation of a new WooCommerce product category, fetches all WooCommerce customers, cleans and formats their phone numbers, verifies them using the Rapiwa WhatsApp validation API, sends a WhatsApp message with the new category info to verified numbers, and logs each interaction into a Google Sheet (separately for verified and unverified customers).

### Who this is for
You have a WooCommerce store and want to:
- Send a promotional message when a new product category is added
- Verify customer WhatsApp numbers in bulk
- Keep a clear log in Google Sheets of which numbers are verified or not

### What it does (high level)
1. A webhook is triggered when a new WooCommerce category is created.
2. Fetches all WooCommerce customers via the API.
3. Limits processing to the first 10 customers (for performance/testing).
4. Cleans phone numbers (removes +, spaces, and non-digits).
5. Verifies each number via the Rapiwa WhatsApp Verify API.
6. If verified: sends a WhatsApp message with the new category info and logs Verification = verified, Status = sent.
7. If not verified: logs Verification = unverified, Status = not sent.
8. Processes users in batches with delays to avoid rate limiting.

### How it works (step-by-step)
1. **Trigger**: The Webhook node is triggered by WooCommerce category creation.
2. **Format Data**: Category details (name, slug, description) are parsed.
3. **Get Customers**: Fetch all WooCommerce customers using the WooCommerce API.
4. **Limit**: Only the first 10 are processed.
5. **Loop & Clean**: Loop over each customer, clean phone numbers, and extract info.
6. **Verify Number**: Send an HTTP POST to https://app.rapiwa.com/api/verify-whatsapp.
7. **Decision Node**: Use an If node to check whether exists == true.
8. **Send Message**: If verified, send a WhatsApp message with the category details.
9. **Append to Sheet**: Log verified and unverified customers separately in Google Sheets.
10. **Wait + Batch Control**: Use Wait and SplitInBatches nodes to control flow and prevent throttling.

Example verify body (HTTP Request node):

```json
{
  "number": "{{ $json['WhatsApp No'] }}"
}
```

### Customization ideas
- Send images, videos, or template messages if supported by Rapiwa.
- Personalize messages using name or category data.
- Increase the delay or reduce the batch size to minimize the risk of rate limits.
- Add a second sheet to log full API responses for debugging and auditing.

### Best practices
- Test on small batches before scaling.
- Only send messages to users who opted in.
- Store API credentials securely using n8n’s credentials manager.
- Ensure your Google Sheet column headers match exactly what’s expected.

### Useful Links
- Dashboard: https://app.rapiwa.com
- Official Website: https://rapiwa.com
- Documentation: https://docs.rapiwa.com

### Support
- WhatsApp Support: Chat Now
- Discord: Join SpaGreen Community
- Facebook Group: SpaGreen Support
- Website: https://spagreen.net
- Developer Portfolio: Codecanyon SpaGreen
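The phone-number cleaning described in step 4 of the walkthrough above (remove +, spaces, and non-digits before verification) can be sketched in one line:

```javascript
// Sketch of the Clean step: keep digits only, so the Rapiwa verify
// endpoint receives a bare international number.
function cleanPhone(raw) {
  return String(raw || '').replace(/\D/g, '');
}

console.log(cleanPhone('+880 1712-345 678')); // → 8801712345678
```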