by Nikan Noorafkan
# 🧠 Google Ads Monthly Performance Optimization (Channable + Google Ads + Relevance AI)

## 🚀 Overview

This workflow automatically analyzes your Google Ads performance every month, identifies top-performing themes and categories, and regenerates optimized ad copy using Relevance AI — powered by insights from your Channable product feed. It then saves the improved ads to Google Sheets for review and sends a detailed performance report to your Slack workspace.

Ideal for marketing teams who want to automate ad optimization at scale with zero manual intervention.

## 🔗 Integrations Used

- **Google Ads** → Fetch campaign and ad performance metrics using GAQL.
- **Relevance AI** → Analyze performance data and regenerate ad copy using AI agents and tools.
- **Channable** → Pull updated product feeds for ad refresh cycles.
- **Google Sheets** → Save optimized ad copy for review and documentation.
- **Slack** → Send a 30-day performance report to your marketing team.

## 🧩 Workflow Summary

| Step | Node | Description |
| ---- | ---- | ----------- |
| 1 | Monthly Schedule Trigger | Runs automatically on the 1st of each month to review the last 30 days of data. |
| 2 | Get Google Ads Performance Data | Fetches ad metrics via GAQL query (impressions, clicks, CTR, etc.). |
| 3 | Calculate Performance Metrics | Groups results by ad group and theme to find top/bottom performers. |
| 4 | AI Performance Analysis (Relevance AI) | Generates human-readable insights and improvement suggestions. |
| 5 | Update Knowledge Base (Relevance AI) | Saves new insights for future ad copy training. |
| 6 | Get Updated Product Feed (Channable) | Retrieves the latest catalog items for ad regeneration. |
| 7 | Split Into Batches | Splits the feed into groups of 50 to avoid API rate limits. |
| 8 | Regenerate Ad Copy with Insights (Relevance AI) | Rewrites ad copy with the latest product and performance data. |
| 9 | Save Optimized Ads to Sheets | Writes output to your "Optimized Ads" Google Sheet. |
| 10 | Generate Performance Report | Summarizes the AI analysis, CTR trends, and key insights. |
| 11 | Email Performance Report (Slack) | Sends the report directly to your Slack channel/team. |
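As an illustration of steps 2–3, here is a minimal Code-node sketch, not the template's actual code: the GAQL follows Google Ads API field naming, but the exact query and the fetched row shape (`adGroup`, `metrics`) are assumptions.

```javascript
// Hypothetical sketch of steps 2-3 (query + metric grouping).
// The GAQL string is the kind of query the preceding HTTP Request node
// could send to the googleAds:searchStream endpoint.
const GAQL = `
  SELECT ad_group.name, ad_group_ad.ad.id,
         metrics.impressions, metrics.clicks, metrics.ctr
  FROM ad_group_ad
  WHERE segments.date DURING LAST_30_DAYS
`;

// Group fetched rows by ad group and compute aggregate CTR.
const groups = {};
for (const { json: row } of items) {
  const key = row.adGroup?.name ?? 'unknown';
  const g = (groups[key] ??= { impressions: 0, clicks: 0 });
  g.impressions += Number(row.metrics?.impressions ?? 0);
  g.clicks += Number(row.metrics?.clicks ?? 0);
}

return Object.entries(groups).map(([adGroup, g]) => ({
  json: { adGroup, ...g, ctr: g.impressions ? g.clicks / g.impressions : 0 },
}));
```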
## 🧰 Requirements

Before running the workflow, make sure you have:

- A Google Ads account with API access and OAuth2 credentials.
- A Relevance AI project (with one Agent and one Tool set up).
- A Channable account with an API key and project feed.
- A Google Sheets document for saving results.
- A Slack webhook URL for sending performance summaries.

## ⚙️ Environment Variables

Add these environment variables to your n8n instance (via .env or the UI):

| Variable | Description |
| -------- | ----------- |
| GOOGLE_ADS_API_VERSION | API version (e.g., v17). |
| GOOGLE_ADS_CUSTOMER_ID | Your Google Ads customer ID. |
| RELEVANCE_AI_API_URL | Base Relevance AI API URL (e.g., https://api.relevanceai.com/v1). |
| RELEVANCE_AGENT_PERFORMANCE_ID | ID of your Relevance AI Agent for performance analysis. |
| RELEVANCE_KNOWLEDGE_SOURCE_ID | Knowledge base or dataset ID used to store insights. |
| RELEVANCE_TOOL_AD_COPY_ID | Relevance AI tool ID for generating ad copy. |
| CHANNABLE_API_URL | Channable API endpoint (e.g., https://api.channable.com/v1). |
| CHANNABLE_COMPANY_ID | Your Channable company ID. |
| CHANNABLE_PROJECT_ID | Your Channable project ID. |
| FEED_ID | The feed ID for product data. |
| GOOGLE_SHEET_ID | ID of your Google Sheet to store optimized ads. |
| SLACK_WEBHOOK_URL | Slack Incoming Webhook URL for sending reports. |

## 🔐 Credentials Setup in n8n

| Credential | Type | Usage |
| ---------- | ---- | ----- |
| Google Ads OAuth2 API | OAuth2 | Authenticates your Ads API queries. |
| HTTP Header Auth (Relevance AI & Channable) | Header | Uses your API key as `Authorization: Bearer <key>`. |
| Google Sheets OAuth2 API | OAuth2 | Writes optimized ads to Sheets. |
| Slack Webhook | Webhook | Sends monthly reports to your team channel. |

## 🧠 Example AI Insight Output

```json
{
  "insights": [
    "Ad groups using 'vegan' and 'organic' messaging achieved +23% CTR.",
    "'Budget' keyword ads underperformed (-15% CTR).",
    "Campaigns featuring 'new' or 'bestseller' tags showed higher conversion rates."
  ],
  "recommendations": [
    "Increase ad spend for top-performing 'vegan' and 'premium' categories.",
    "Revise copy for 'budget' and 'sale' ads with low CTR."
  ]
}
```

## 📊 Output Example (Google Sheet)

| Product | Category | Old Headline | New Headline | CTR Change | Theme |
| ------- | -------- | ------------ | ------------ | ---------- | ----- |
| Organic Protein Bar | Snacks | "Healthy Energy Anytime" | "Organic Protein Bar — 100% Natural Fuel" | +12% | Organic |
| Eco Face Cream | Skincare | "Gentle Hydration" | "Vegan Face Cream — Clean, Natural Moisture" | +17% | Vegan |

## 📤 Automation Flow

1. Runs automatically on the first of every month (cron: `0 0 1 * *`).
2. Fetch Ads Data → Analyze & Learn → Generate New Ads → Save & Notify.
3. Every iteration updates the AI's knowledge base — improving your campaigns progressively.

## ⚡ Scalability

- The flow is batch-optimized (50 items per request).
- Works for large ad accounts with up to 10,000 ad records.
- AI analysis and regeneration steps are asynchronous-safe (timeouts extended).
- Perfect for agencies managing multiple ad accounts — simply duplicate the workflow and update the environment variables per client.

## 🧩 Best Use Cases

- Monthly ad creative optimization for eCommerce stores.
- Marketing automation for Google Ads campaign scaling.
- Continuous-learning ad systems powered by Relevance AI insights.
- Agencies automating ad copy refresh cycles across clients.

## 💬 Slack Report Example

```
30-Day Performance Optimization Report
Date: 2025-10-01
Analysis Period: Last 30 days
Ads Analyzed: 842

Top Performing Themes
- Vegan: 5.2% CTR (34 ads)
- Premium: 4.9% CTR (28 ads)

Underperforming Themes
- Budget: 1.8% CTR (12 ads)

AI Insights
- "Vegan" and "Premium" themes outperform baseline by +22% CTR.
- "Budget" ads underperform due to lack of value framing.

Next Optimization Cycle: 2025-11-01
```

## 🛠️ Maintenance Tips

- Update your GAQL query occasionally to include new metrics or segments.
- Refresh Relevance AI tokens every 90 days (if required).
- Review generated ads in Google Sheets before pushing them live.
- Test webhook and OAuth connections after major n8n updates.

## 🧩 Import Instructions

1. Open n8n → Workflows → Import from File / JSON.
2. Paste this workflow JSON or upload it.
3. Add all required environment variables and credentials.
4. Execute the first run manually to validate connections.
5. Once verified, enable scheduling for automatic monthly runs.

## 🧾 Credits

Developed for AI-driven marketing teams leveraging Google Ads, Channable, and Relevance AI to achieve continuous ad improvement — fully automated via n8n.
by Sridevi Edupuganti
## Try It Out!

Use n8n to extract medical test data from diagnostic reports uploaded to Google Drive, automatically detect abnormal values, and generate personalized health advice.

## How it works

1. Upload a medical report (PDF or image) to a monitored Google Drive folder.
2. Mistral AI extracts text using OCR while preserving document structure.
3. GPT-4 parses the extracted text into structured JSON (patient info, test names, results, units, reference ranges).
4. All test results are saved to the "All Values" sheet in Google Sheets.
5. JavaScript code compares each result against its reference range to detect abnormalities (see the sketch at the end of this section).
6. For out-of-range values, GPT-4 generates personalized dietary, lifestyle, and exercise advice based on patient age and gender.
7. Abnormal results with recommendations are saved to the "Out of Range Values" sheet.

## How to use

1. Set up Google Drive folder monitoring and a Google Sheet with two tabs: "All Values" and "Out of Range Values".
2. Configure API credentials for Google Drive, Mistral AI, and OpenAI (GPT-4).
3. Upload medical reports to your monitored folder.
4. Review extracted data and personalized health advice in Google Sheets.

## Requirements

- Google Drive and Sheets with OAuth2 authentication
- Mistral AI API key for OCR
- OpenAI API key (GPT-4 access required) for intelligent extraction and advice generation

## Need Help?

- See the detailed Read Me file at https://drive.google.com/file/d/1Wv7dfcBLsHZlPcy1QWPYk6XSyrS3H534/view?usp=sharing
- Join the n8n community forum for support
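A minimal sketch of the abnormality check in step 5, assuming GPT-4 returns rows shaped like `{ testName, result, unit, referenceRange: "13.5-17.5" }` (these field names are assumptions, not the template's actual schema):

```javascript
// Hypothetical n8n Code node: flag results outside their reference range.
const flagged = [];
for (const { json: row } of items) {
  // Parse numeric ranges like "13.5-17.5"; skip qualitative results.
  const match = String(row.referenceRange ?? '').match(/([\d.]+)\s*-\s*([\d.]+)/);
  if (!match) continue;
  const [, low, high] = match.map(Number);
  const value = Number(row.result);
  if (Number.isNaN(value)) continue;
  if (value < low || value > high) {
    flagged.push({ json: { ...row, status: value < low ? 'LOW' : 'HIGH' } });
  }
}
return flagged; // only out-of-range rows continue to the advice step
```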
by Rahul Joshi
## Description

Keep your CRM pipeline clean and actionable by automatically archiving inactive deals, logging results to Google Sheets, and sending Slack summary reports. This workflow ensures your sales team focuses on active opportunities while maintaining full audit visibility. 🚀📈

## What This Template Does

- Triggers daily at 9 AM to check all GoHighLevel CRM opportunities. ⏰
- Filters deals that have been inactive for 10+ days using the last activity or update date (see the sketch below). 🔍
- Automatically archives inactive deals to keep pipelines clutter-free. 📦
- Formats and logs deal details into Google Sheets for record-keeping. 📊
- Sends a Slack summary report with total archived count, value, and deal names. 💬

## Key Benefits

✅ Keeps pipelines organized by removing stale opportunities.
✅ Saves time through fully automated archiving and reporting.
✅ Maintains a transparent audit trail in Google Sheets.
✅ Improves sales visibility with automated Slack summaries.
✅ Easily adjustable inactivity threshold and scheduling.

## Features

- Daily scheduled trigger (9 AM) with an adjustable cron expression.
- GoHighLevel CRM integration for fetching and updating opportunities.
- Conditional logic to detect inactivity periods.
- Google Sheets logging with automatic updates.
- Slack integration for real-time reporting and team visibility.

## Requirements

- GoHighLevel API credentials (OAuth2) with opportunity access.
- Google Sheets OAuth2 credentials with edit permissions.
- Slack Bot token with chat:write permission.
- A connected n8n instance (cloud or self-hosted).

## Target Audience

- Sales and operations teams managing CRM hygiene.
- Business owners wanting automated inactive-deal cleanup.
- Agencies monitoring client pipelines across teams.
- CRM administrators ensuring data accuracy and accountability.

## Step-by-Step Setup Instructions

1. Connect your GoHighLevel OAuth2 credentials in n8n. 🔑
2. Link your Google Sheets document and replace the Sheet ID. 📋
3. Configure Slack credentials and specify your target channel. 💬
4. Adjust the inactivity threshold (default: 10 days) as needed. ⚙️
5. Update the cron schedule (default: 9 AM daily). ⏰
6. Test the workflow manually to verify end-to-end automation. ✅
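For reference, the inactivity test could look like the following Code-node sketch; the field names (`lastActivityDate`, `updatedAt`) are assumptions about the GoHighLevel opportunity payload, not confirmed API fields:

```javascript
// Hypothetical sketch of the 10+ day inactivity filter.
const THRESHOLD_DAYS = 10; // adjustable, per the setup instructions above
const now = Date.now();

return items.filter(({ json: deal }) => {
  const last = new Date(deal.lastActivityDate ?? deal.updatedAt).getTime();
  const idleDays = (now - last) / (1000 * 60 * 60 * 24);
  return idleDays >= THRESHOLD_DAYS; // only stale deals proceed to archiving
});
```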
by Hugo
# 🤖 n8n AI Workflow Dashboard Template

## Overview

This template is designed to collect execution data from your AI workflows and generate an interactive dashboard for easy monitoring. It's compatible with any AI Agent or RAG workflow in n8n.

## Main Objectives

### 💾 Collect Execution Data

- Track messages, tokens used (prompt/completion), session IDs, model names, and compute costs
- Designed to plug into any AI agent or RAG workflow in n8n

### 📊 Generate an Interactive Dashboard

- Visualize KPIs like total messages, unique sessions, tokens used, and costs
- Display daily charts, including stacked bars for prompt vs. completion tokens
- Monitor AI activity, analyze usage, and track costs at a glance

## ✨ Key Features

### 💬 Conversation Data Collection

Messages sent to the AI agent are recorded with:

- sessionId
- chatInput
- output
- promptTokens, completionTokens, totalTokens
- globalCost and modelName

This allows detailed tracking of AI interactions across sessions.

### 💰 Model Pricing Management

- A sub-workflow with a Set node provides token prices for LLMs
- Data is stored in the Model price table for cost calculations

### 🗄️ Data Storage via n8n Data Tables

Two tables need to be created:

**Model price**

```json
{
  "id": 20,
  "createdAt": "2025-10-11T12:16:47.338Z",
  "updatedAt": "2025-10-11T12:16:47.338Z",
  "name": "claude-4.5-sonnet",
  "promptTokensPrice": 0.000003,
  "completionTokensPrice": 0.000015
}
```

**Messages**

```json
[
  {
    "id": 20,
    "createdAt": "2025-10-11T15:28:00.358Z",
    "updatedAt": "2025-10-11T15:31:28.112Z",
    "sessionId": "c297cdd4-7026-43f8-b409-11eb943a2518",
    "action": "sendMessage",
    "output": "Hey! \nHow's it going?",
    "chatInput": "yo",
    "completionTokens": 6,
    "promptTokens": 139,
    "totalTokens": 139,
    "globalCost": null,
    "modelName": "gpt-4.1-mini",
    "executionId": 245
  }
]
```

These tables store conversation data and pricing info to feed the dashboard and calculations.

### 📈 Interactive Dashboard

- **KPIs generated**: total messages, unique sessions, total/average tokens, total/average cost 💸
- **Charts included**: daily messages, tokens used per day (prompt vs. completion, stacked bar)
- Provides a visual summary of AI workflow performance

## ⚙️ Installation & Setup

Follow these steps to set up and run the workflow in n8n:

1. **Import the Workflow**: Download or copy the JSON workflow and import it into n8n.
2. **Create the Data Tables**: the Model price table stores token prices per model; the Messages table stores messages generated by the AI agent.
3. **Configure the Webhook**: the workflow is triggered via a webhook; use the webhook URL to send conversation data.
4. **Set Up the Pricing Sub-workflow**: automatically generates price data for the models used; connect it to your main workflow to enrich cost calculations.
5. **Dashboard Visualization**: the workflow returns HTML code rendering the dashboard; view it in a browser or embed it in your interface. 🌐

Once configured, your workflow tracks AI usage and costs in real time, providing a live dashboard for quick insights.

## 🔧 Adaptability

- The template is modular and can be adapted to any AI agent or RAG workflow
- KPIs, charts, colors, and metrics can be customized in the HTML rendering
- Ideal for monitoring, cost tracking, and reporting AI workflow performance
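A minimal sketch of how a cost calculation can combine the two tables above; the function is illustrative, not the template's actual node code:

```javascript
// Hypothetical cost helper: joins a Messages row with its Model price row.
function messageCost(msg, price) {
  return (
    msg.promptTokens * price.promptTokensPrice +
    msg.completionTokens * price.completionTokensPrice
  );
}

// Using the sample token counts and prices shown above:
const cost = messageCost(
  { promptTokens: 139, completionTokens: 6 },
  { promptTokensPrice: 0.000003, completionTokensPrice: 0.000015 },
);
console.log(cost.toFixed(6)); // 139*0.000003 + 6*0.000015 ≈ 0.000507
```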
by Nguyen Thieu Toan
# 🤖 Facebook Messenger Smart Chatbot – Batch, Format & Notify with n8n Data Table

## 🌟 What Is This Workflow?

This is a smart chatbot solution built with n8n, designed to integrate seamlessly with Facebook Messenger. It batches incoming messages, formats them for clarity, tracks conversation history, and sends natural replies using AI. Perfect for businesses, customer support, or personal AI agents.

## ⚙️ Key Features

- 🔄 **Smart batching**: Groups consecutive user messages to process them in one go, avoiding fragmented replies.
- 🧠 **Context formatting**: Automatically formats messages to fit Messenger's structure and length limits.
- 📋 **Conversation history tracking**: Stores and retrieves chat logs between user and bot using an n8n Data Table.
- 👀 **Seen & Typing effects**: Adds human-like responsiveness with Messenger's sender actions.
- 🧩 **AI Agent integration**: Easily connects to GPT, Gemini, or any LLM for natural replies, scheduling, or business logic.

## 🚀 How It Works

1. Connects to your Facebook Page via webhook to receive and send messages.
2. Stores incoming messages in a Data Table called Batch_messages, including fields like user_text, bot_rep, processed, etc.
3. Collects unprocessed messages, sorts them by id, and creates a merged_message and full history (see the sketch below).
4. Sends the history to an AI Agent for contextual response generation.
5. Sends the AI reply back to Messenger with Seen/Typing effects.
6. Updates the message status to processed = true to prevent duplicate handling.

## 🛠️ Setup Guide

1. Create a Facebook App and Messenger webhook, and link it to your Page.
2. Set up the Batch_messages Data Table in n8n with the required columns.
3. Import the workflow or build the nodes manually using the tutorial.
4. Configure your API tokens, webhook URLs, and AI Agent endpoint.
5. Deploy the workflow on a public n8n server.

📘 Full tutorial available at: 👉 Smart Chatbot Workflow Guide by Nguyen Thieu Toan

## 💡 Pro Tips

- Customize the AI prompt and persona to match your business tone.
- Add scheduling, lead capture, or CRM integration using n8n's flexible nodes.
- Monitor your Data Table regularly to ensure clean message flow and batching.

## 👤 About the Creator

Nguyen Thieu Toan (Nguyễn Thiệu Toàn / Jay Nguyen) is an expert in AI automation, business optimization, and chatbot development. With a background in marketing and deep knowledge of n8n workflows, Jay helps businesses harness AI to save time, boost performance, and deliver smarter customer experiences.

Website: https://nguyenthieutoan.com
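A minimal sketch of the batching step (step 3), assuming Data Table rows shaped like the Batch_messages fields listed above; this is illustrative, not the template's actual code:

```javascript
// Hypothetical n8n Code node: merge unprocessed messages + rebuild history.
const byId = (a, b) => a.json.id - b.json.id;

const unprocessed = items
  .filter(({ json: m }) => m.processed !== true)
  .sort(byId);

// Merge consecutive user messages into one prompt for the AI Agent.
const merged_message = unprocessed.map(({ json: m }) => m.user_text).join('\n');

// Rebuild a short conversation history (user + bot turns) for context.
const history = [...items]
  .sort(byId)
  .map(({ json: m }) => `User: ${m.user_text}\nBot: ${m.bot_rep ?? ''}`)
  .join('\n');

return [{ json: { merged_message, history } }];
```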
by Laiba
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## How it works

- **User Uploads PDF**: The workflow accepts a PDF via webhook.
- **Extract Text**: n8n extracts the text content from the PDF.
- **Summarize with AI**: The extracted text is passed to an AI model on Groq (an OpenAI-compatible model) for summarization.
- **Generate Audio**: The summary text is sent to a TTS (Text-to-Speech) API (Qwen-TTS-Demo); you can use other free alternatives.
- **Serve Result**: The workflow outputs both the summary and the audio file URL (a WAV link), which you can attach to your audio player. This allows users to read or listen to the summary instantly.

## How to use / Requirements

- **Import Workflow**: Copy/paste the workflow JSON into your n8n instance.
- **Set Up Input Trigger**: If you want users to upload directly, you can use a webhook or any other trigger.
- **Configure AI Node**: Add your own API key for Groq / OpenAI.
- **Configure TTS Node**: Add credentials for your chosen TTS service.
- **Run Workflow**: Upload a PDF and get back the summary and audio file URL.

n8n smart PDF summarizer & voice generator

Please reach out to me at Laiba Zubair if you need further assistance with your n8n workflows and automations!
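A minimal sketch of the final "Serve Result" step, assuming the summary text and the TTS WAV link arrive on the incoming item under these hypothetical field names (the template's real keys may differ):

```javascript
// Hypothetical n8n Code node: assemble the webhook response payload.
const { summary = '', audio_url: audioUrl = '' } = items[0].json;

return [{
  json: {
    summary,   // text the user can read immediately
    audioUrl,  // WAV link to attach to your audio player
  },
}];
```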
by Vitorio Magalhães
## 🎯 What this workflow does

This workflow automatically monitors Reddit subreddits for new image posts and downloads them to Google Drive. It's perfect for content creators, meme collectors, or anyone who wants to automatically archive images from their favorite subreddits without manual work.

The workflow intelligently prevents duplicate downloads by checking existing files in Google Drive and sends you Telegram notifications about the download status, so you always know when new content has been saved.

## 🚀 Key Features

- **Multi-subreddit monitoring**: Configure multiple subreddits to monitor simultaneously
- **Smart duplicate detection**: Never downloads the same image twice
- **Automated scheduling**: Runs on a customizable cron schedule
- **Real-time notifications**: Get instant Telegram updates about download activity
- **Rate limit friendly**: Built-in delays to respect Reddit's API limits
- **Cloud storage integration**: Direct upload to organized Google Drive folders

## 📋 Prerequisites

Before using this workflow, you'll need:

- **Reddit Developer Account**: Create an app at reddit.com/prefs/apps
- **Google Cloud Project**: With the Drive API enabled and OAuth2 credentials
- **Telegram Bot**: Created via @BotFather, with your chat ID
- **Basic n8n knowledge**: Understanding of credentials and node configuration

## ⚙️ Setup Instructions

### 1. Configure Reddit API Access

- Visit reddit.com/prefs/apps and create a new "script" type application
- Note your Client ID and Client Secret
- Add Reddit OAuth2 credentials in n8n

### 2. Set up Google Drive Integration

- Enable the Google Drive API in Google Cloud Console
- Create OAuth2 credentials with appropriate scopes
- Configure Google Drive OAuth2 credentials in n8n
- Update the folder ID in the workflow to your desired destination

### 3. Configure Telegram Notifications

- Create a bot via @BotFather on Telegram
- Get your chat ID (message @userinfobot)
- Add Telegram API credentials in n8n

### 4. Customize Your Settings

Update the Settings node with:

- Your Telegram chat ID
- The list of subreddits to monitor (e.g., ['memes', 'funny', 'pics'])
- Optional: Adjust the wait time between requests
- Optional: Modify the cron schedule
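A minimal sketch of what the Settings node's values and the later i.redd.it filter might look like; all values are placeholders and the exact node layout is an assumption:

```javascript
// Hypothetical "Settings" Code node for this workflow.
const SETTINGS = {
  telegramChatId: '123456789',            // placeholder
  subreddits: ['memes', 'funny', 'pics'], // from the example above
  waitSeconds: 5,                         // delay between Reddit requests
};

// Pick one subreddit at random for this run (see "How it works" below).
const subreddit = SETTINGS.subreddits[
  Math.floor(Math.random() * SETTINGS.subreddits.length)
];

// Downstream, only posts hosted on Reddit's image CDN are kept.
const isRedditImage = (post) =>
  typeof post.url === 'string' && post.url.startsWith('https://i.redd.it/');

return [{ json: { ...SETTINGS, subreddit } }];
```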
## 🔄 How it works

1. **Scheduled Trigger**: The workflow starts automatically based on your cron configuration
2. **Random Selection**: Picks a random subreddit from your configured list
3. **Fetch Posts**: Retrieves the latest 30 posts from the subreddit's "new" section
4. **Image Filtering**: Keeps only posts with i.redd.it image URLs
5. **Duplicate Check**: Searches Google Drive to avoid re-downloading existing images
6. **Download & Upload**: Downloads new images and uploads them to your Drive folder
7. **Notification**: Sends a Telegram message with the download summary

## 🛠️ Customization Options

### Scheduling

- Modify the cron trigger to run hourly, daily, or at custom intervals
- Add timezone considerations for your location

### Content Filtering

- Add upvote threshold filters to get only popular content
- Filter by image dimensions or file size
- Implement NSFW content filtering

### Storage & Organization

- Create subfolders by subreddit
- Add date-based folder organization
- Implement file naming conventions

### Notifications & Monitoring

- Add Discord webhook notifications
- Create download statistics tracking
- Log failed downloads for debugging

## 📊 Use Cases

- **Content Creators**: Automatically collect memes and trending images for social media
- **Digital Marketers**: Monitor visual trends across different communities
- **Researchers**: Archive visual content from specific subreddits for analysis
- **Personal Use**: Build a curated collection of images from your favorite subreddits

## 🎯 Best Practices

- **Respect Rate Limits**: Keep the wait time between requests to avoid being blocked
- **Monitor Storage**: Regularly check Google Drive storage usage
- **Subreddit Selection**: Choose active subreddits with regular image posts
- **Credential Security**: Use n8n's credential system and never hardcode API keys

## 🚨 Important Notes

- This workflow only downloads images from i.redd.it (Reddit's image host)
- Some subreddits may have bot restrictions
- Reddit's API has rate limits (~60 requests per minute)
- Ensure your Google Drive has sufficient storage space
- Always comply with Reddit's Terms of Service and content policies
by Recrutei Automações
# Overview: Automated LinkedIn Job Posting with AI

This workflow automates the publication of new job vacancies on LinkedIn immediately after they are created in the Recrutei ATS (Applicant Tracking System). It leverages a Code node to pre-process the job data and a powerful AI model (GPT-4o-mini, configured via the OpenAI node) to generate compelling, marketing-ready content.

This template is designed for Recruitment and Marketing teams aiming to ensure consistent, timely, and high-quality job postings while saving significant operational time.

## Workflow Logic & Steps

1. **Recrutei Webhook Trigger**: The workflow is instantly triggered when a new job vacancy is published in the Recrutei ATS, sending all relevant job data via a webhook.
2. **Data Cleaning (Code Node 1)**: The first Code node standardizes boolean fields (like remote, fixed_remuneration) from 0/1 to descriptive text ('yes'/'no') — see the sketch below.
3. **Prompt Transformation (Code Node 2)**: The second, crucial Code node receives the clean job data and:
   - Maps the original data keys (e.g., title, description) to user-friendly labels (e.g., Job Title, Detailed Description).
   - Cleans and sanitizes the HTML description into readable Markdown format.
   - Generates a single, highly structured prompt containing all job details, ready for the AI model.
4. **AI Content Generation (OpenAI)**: The AI model receives the structured prompt and acts as a 'Marketing Copywriter' to create a compelling, engaging post specifically optimized for the LinkedIn platform.
5. **LinkedIn Post**: The generated text is automatically posted to the configured LinkedIn profile or Company Page.
6. **Internal Logging (Google Sheets)**: The workflow concludes by logging the event (Job Title, Confirmation Status) into a Google Sheet for internal tracking and auditing.

## Setup Instructions

To implement this workflow successfully, you must configure the following:

**Credentials:**

- Configure OpenAI (for the Content Generator).
- Configure LinkedIn (for the Post action).
- Configure Google Sheets (for the logging).

**Node Configuration:**

- Set up the Webhook URL in your Recrutei ATS settings.
- Replace YOUR_SHEET_ID_HERE in the Google Sheets Logging node with your sheet's ID.
- Select the correct LinkedIn profile/company page in the Create a post node.
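A minimal sketch of Code Node 1 (the boolean standardization described in step 2); the two field names come from the description above, while the rest of the webhook payload shape is an assumption:

```javascript
// Hypothetical sketch of Code Node 1: normalize 0/1 flags to 'yes'/'no'.
const BOOLEAN_FIELDS = ['remote', 'fixed_remuneration'];

return items.map(({ json: job }) => {
  for (const field of BOOLEAN_FIELDS) {
    if (field in job) {
      job[field] = job[field] === 1 || job[field] === '1' ? 'yes' : 'no';
    }
  }
  return { json: job };
});
```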
by Daniel Shashko
This workflow automates the creation of user-generated-content-style (UGC) product videos by combining Gemini's image generation with OpenAI's SORA 2 video generation. It accepts webhook requests with product descriptions, generates images and videos, stores them in Google Drive, and logs all outputs to Google Sheets for easy tracking.

## Main Use Cases

- Automate product video creation for e-commerce catalogs and social media.
- Generate UGC-style content at scale without manual design work.
- Create engaging video content from simple text prompts for marketing campaigns.
- Build a centralized library of product videos with automated tracking and storage.

## How it works

The workflow operates as a webhook-triggered process, organized into these stages:

1. **Webhook Trigger & Input**
   - Accepts POST requests to the /create-ugc-video endpoint (see the example request below).
   - Required payload includes: product prompt, video prompt, Gemini API key, and OpenAI API key.
2. **Image Generation (Gemini)**
   - Sends the product prompt to Google's Gemini 2.5 Flash Image model.
   - Generates a product image based on the description provided.
3. **Data Extraction**
   - A Code node extracts the base64 image data from Gemini's response.
   - Preserves all prompts and API keys for subsequent steps.
4. **Video Generation (SORA 2)**
   - Sends the video prompt to OpenAI's SORA 2 API.
   - Initiates video generation with specifications: 720x1280 resolution, 8 seconds duration.
   - Returns a video generation job ID for polling.
5. **Video Status Polling**
   - Continuously checks video generation status via the OpenAI API.
   - If status is "completed": proceeds to download.
   - If status is still processing: waits 1 minute and retries (polling loop).
6. **Video Download & Storage**
   - Downloads the completed video file from OpenAI.
   - Uploads the MP4 file to Google Drive (root folder).
   - Generates a shareable Google Drive link.
7. **Logging to Google Sheets**
   - Records all generation details in a tracking spreadsheet: product description, video URL (Google Drive link), generation status, and timestamp.

**Summary Flow:**

Webhook Request → Generate Product Image (Gemini) → Extract Image Data → Generate Video (SORA 2) → Poll Status

- If complete: Download Video → Upload to Google Drive → Log to Google Sheets → Return Response
- If not complete: Wait 1 Minute → Poll Status Again

**Benefits:**

- Fully automated video creation pipeline from text to finished product.
- Scalable solution for generating multiple product videos on demand.
- Combines cutting-edge AI models (Gemini + SORA 2) for high-quality output.
- Centralized storage in Google Drive with automatic logging in Google Sheets.
- Flexible webhook interface allows integration with any application or service.
- Retry mechanism ensures videos are captured even with longer processing times.

Created by Daniel Shashko
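A minimal sketch of a client triggering the webhook; the endpoint path comes from the description, but the payload key names and host are assumptions, not the workflow's actual schema:

```javascript
// Hypothetical client call to kick off one video generation.
const res = await fetch('https://your-n8n-host/webhook/create-ugc-video', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    productPrompt: 'Minimalist stainless-steel water bottle on a gym bench',
    videoPrompt: 'Handheld UGC-style clip of someone unboxing the bottle',
    geminiApiKey: process.env.GEMINI_API_KEY,   // keep keys out of source
    openaiApiKey: process.env.OPENAI_API_KEY,
  }),
});
console.log(await res.json()); // Drive link + status once generation completes
```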
by Khair Ahammed
Meet Troy, your intelligent personal assistant that seamlessly manages your Google Calendar and Tasks through Telegram. This workflow combines AI-powered natural language processing with MCP (Model Context Protocol) integration to provide a conversational interface for scheduling meetings, managing tasks, and organizing your digital life.

## Key Features

### 📅 Smart Calendar Management

- Create single and recurring events with conflict detection
- Support for multiple attendees (1-2 attendee variants)
- Automatic time zone handling (Bangladesh Standard Time)
- Weekly recurring event scheduling
- Event retrieval, updates, and deletion

### ✅ Task Management

- Create, update, and delete tasks in Google Tasks
- Mark tasks as completed
- Retrieve task lists with completion status
- Task repositioning and organization
- Parent-child task relationships

### 🤖 Intelligent Processing

- Natural language understanding for scheduling requests
- Automatic conflict detection before event creation
- Context-aware responses with conversation memory
- Error handling with fallback messages

### 📱 Telegram Interface

- Real-time chat interaction
- Simple commands and natural language
- Instant confirmations and updates
- Error notifications

## Workflow Components

**Core Architecture:**

- Telegram Trigger for user messages
- AI Agent with GPT-4o-mini processing
- MCP Client Tools for Google services
- Conversation memory for context
- Error handling with backup responses

**MCP Integrations:**

- Google Calendar MCP Server (6 specialized tools)
- Google Tasks MCP Server (5 task operations)
- Custom HTTP tool for advanced task positioning

## Use Cases

**Calendar Scenarios:**

- "Schedule a meeting tomorrow at 3 PM with john@example.com"
- "Set up weekly team standup every Monday at 10 AM"
- "Check my calendar for conflicts this afternoon"
- "Delete the meeting with ID xyz123"

**Task Management:**

- "Add a task to buy groceries"
- "Mark the project report task as completed"
- "Update my presentation task due date to Friday"
- "Show me all pending tasks"

## Setup Requirements

**Required Credentials:**

- Google Calendar OAuth2
- Google Tasks OAuth2
- OpenAI API key
- Telegram Bot token

**MCP Configuration:**

- Two MCP server endpoints for Google services
- Proper webhook configurations
- SSL-enabled n8n instance for MCP triggers

## Business Benefits

- **Productivity**: Voice-to-action task and calendar management
- **Efficiency**: Eliminate app switching with a chat interface
- **Intelligence**: AI prevents scheduling conflicts automatically
- **Accessibility**: Simple Telegram commands for complex operations

## Technical Specifications

**Components:**

- 1 Telegram trigger
- 1 AI Agent with memory
- 2 MCP triggers (Calendar & Tasks)
- 13 Google service tools
- Error handling flows

**Response Time**: Sub-second for most operations
**Memory**: Session-based conversation context
**Timezone**: Automatic Bangladesh Standard Time conversion

This personal assistant transforms how you interact with Google services, making scheduling and task management as simple as sending a text message to Troy on Telegram.

Tags: personal-assistant, mcp-integration, google-calendar, google-tasks, telegram-bot, ai-agent, productivity
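As an illustration of the timezone handling, here is a minimal sketch using the built-in Intl API to render a UTC event time in Bangladesh Standard Time; this shows the general technique, not the workflow's actual node configuration:

```javascript
// Hypothetical sketch: format a UTC timestamp in Asia/Dhaka (UTC+6).
const formatBST = (iso) =>
  new Intl.DateTimeFormat('en-GB', {
    timeZone: 'Asia/Dhaka',
    dateStyle: 'medium',
    timeStyle: 'short',
  }).format(new Date(iso));

console.log(formatBST('2025-10-01T09:00:00Z')); // "1 Oct 2025, 15:00"
```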
by Daniel Shashko
## How it Works

This workflow automates intelligent Reddit marketing by monitoring brand mentions, analyzing sentiment with AI, and engaging authentically with communities.

Every 24 hours, the system searches Reddit for posts containing your configured brand keywords across all subreddits, finding up to 50 of the newest mentions to analyze.

Each discovered post is sent to OpenAI's GPT-4o-mini model for comprehensive analysis. The AI evaluates sentiment (positive/neutral/negative), assigns an engagement score (0-100), determines relevance to your brand, and generates contextual, helpful responses that add genuine value to the conversation. It also classifies the response type (educational/supportive/promotional) and provides reasoning for whether engagement is appropriate.

The workflow intelligently filters posts using a multi-criteria system: only posts that are relevant to your brand, score above 60 in engagement quality, and warrant a response type other than "pass" proceed to engagement. This prevents spam and ensures every interaction is meaningful (a sketch of this filter appears below).

Selected posts are processed one at a time through a loop to respect Reddit's rate limits. For each worthy post, the AI-generated comment is posted, and complete interaction data is logged to Google Sheets, including timestamp, post details, sentiment, engagement scores, and success status. This creates a permanent audit trail and analytics database.

At the end of each run, the workflow aggregates all data into a comprehensive daily summary report with total posts analyzed, comments posted, engagement rate, sentiment breakdown, and the top 5 engagement opportunities ranked by score. This report is automatically sent to Slack with formatted metrics, giving your team instant visibility into your Reddit marketing performance.

## Who is this for?

- **Brand managers and marketing teams** needing automated social listening and engagement on Reddit
- **Community managers** responsible for authentic brand presence across multiple subreddits
- **Startup founders and growth marketers** who want to scale Reddit marketing without hiring a team
- **PR and reputation teams** monitoring brand sentiment and responding to discussions in real time
- **Product marketers** seeking organic engagement opportunities in product-related communities
- **Any business** that wants to build an authentic Reddit presence while avoiding spammy marketing tactics
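A minimal sketch of the multi-criteria filter described above; the field names mirror the AI analysis output listed in this template (relevance, engagement score, response type), though the exact keys are assumptions:

```javascript
// Hypothetical sketch of the "Filter Engagement-Worthy" criteria.
const worthEngaging = (analysis) =>
  analysis.relevant === true &&
  analysis.engagementScore > 60 &&      // adjustable quality threshold
  analysis.responseType !== 'pass';     // AI explicitly declined to engage

return items.filter(({ json }) => worthEngaging(json));
```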
## Setup Steps

**Setup time:** Approx. 30-40 minutes (credential configuration, keyword setup, Google Sheets creation, Slack integration)

**Requirements:**

- Reddit account with OAuth2 application credentials (create at reddit.com/prefs/apps)
- OpenAI API key with GPT-4o-mini access
- Google account with a new Google Sheet for tracking interactions
- Slack workspace with posting permissions to a marketing/monitoring channel
- Brand keywords and subreddit strategy prepared

1. **Create Reddit OAuth Application**: Visit reddit.com/prefs/apps, create a "script" type app, and obtain your client ID and secret.
2. **Configure Reddit Credentials in n8n**: Add Reddit OAuth2 credentials with your app credentials and authorize access.
3. **Set up OpenAI API**: Obtain an API key from platform.openai.com and configure it in n8n OpenAI credentials.
4. **Create Google Sheet**: Set up a new sheet with columns: timestamp, postId, postTitle, subreddit, postUrl, sentiment, engagementScore, responseType, commentPosted, reasoning.
5. **Configure these nodes**:
   - Brand Keywords Config: Edit the JavaScript code to include your brand name, product names, and relevant industry keywords.
   - Search Brand Mentions: Adjust the limit (default 50) and sort preference based on your needs.
   - AI Post Analysis: Customize the prompt to match your brand voice and engagement guidelines.
   - Filter Engagement-Worthy: Adjust the engagementScore threshold (default 60) based on your quality standards.
   - Loop Through Posts: Configure max iterations and batch size for rate-limit compliance.
   - Log to Google Sheets: Replace YOUR_SHEET_ID with your actual Google Sheets document ID.
   - Send Slack Report: Replace YOUR_CHANNEL_ID with your Slack channel ID.
6. **Test the workflow**: Run manually first to verify all connections work, and adjust the AI prompts.
7. **Activate for daily runs**: Once tested, activate the Schedule Trigger to run automatically every 24 hours.

## Node Descriptions

- **Daily Marketing Check** - Schedule trigger runs the workflow automatically every 24 hours
- **Brand Keywords Config** - JavaScript Code node defining brand keywords to monitor
- **Search Brand Mentions** - Reddit node searches all subreddits for brand keyword mentions
- **AI Post Analysis** - OpenAI analyzes sentiment and relevance, generates contextual helpful comment responses
- **Filter Engagement-Worthy** - Conditional node keeps only high-quality, relevant posts worth engaging
- **Loop Through Posts** - Split-in-batches node processes each post individually, respecting limits
- **Post Helpful Comment** - Reddit node posts the AI-generated comment to worthy discussions
- **Log to Google Sheets** - Appends all interaction data to the spreadsheet for permanent tracking
- **Generate Daily Summary** - JavaScript aggregates metrics and sentiment breakdown into a comprehensive daily report (see the sketch below)
- **Send Slack Report** - Posts the formatted daily summary with metrics to the team Slack channel
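A minimal sketch of the "Generate Daily Summary" aggregation, assuming each logged item carries the Google Sheets columns defined above (`sentiment`, `engagementScore`, `commentPosted`, `postTitle`); this is illustrative, not the template's actual node code:

```javascript
// Hypothetical summary Code node: aggregate the day's interactions.
const total = items.length;
const posted = items.filter((i) => i.json.commentPosted).length;

// Sentiment breakdown, e.g., { positive: 12, neutral: 30, negative: 8 }.
const bySentiment = items.reduce((acc, { json }) => {
  acc[json.sentiment] = (acc[json.sentiment] ?? 0) + 1;
  return acc;
}, {});

// Top 5 engagement opportunities ranked by score.
const topOpportunities = [...items]
  .sort((a, b) => b.json.engagementScore - a.json.engagementScore)
  .slice(0, 5)
  .map(({ json }) => ({ title: json.postTitle, score: json.engagementScore }));

return [{
  json: {
    total,
    posted,
    engagementRate: total ? posted / total : 0,
    bySentiment,
    topOpportunities,
  },
}];
```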
by vinci-king-01
# Software Vulnerability Patent Tracker

⚠️ **COMMUNITY TEMPLATE DISCLAIMER**: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template.

This workflow automatically tracks newly published patent filings that mention software-security vulnerabilities, buffer-overflow mitigation techniques, and related technology keywords. Every week it aggregates fresh patent data from USPTO and international patent databases, filters it by relevance, and delivers a concise JSON digest (and optional Intercom notification) to R&D teams and patent attorneys.

## Pre-conditions / Requirements

### Prerequisites

- n8n instance (self-hosted or n8n cloud, v1.7.0+)
- ScrapeGraphAI community node installed
- Basic understanding of patent search syntax (for customizing keyword sets)
- Optional: Intercom account for in-app alerts

### Required Credentials

| Credential | Purpose |
|------------|---------|
| ScrapeGraphAI API Key | Enables ScrapeGraphAI nodes to fetch and parse patent-office webpages |
| Intercom Access Token (optional) | Sends weekly digests directly to an Intercom workspace |

### Additional Setup Requirements

| Setting | Recommended Value | Notes |
|---------|-------------------|-------|
| Cron schedule | 0 9 * * 1 | Triggers every Monday at 09:00 server time |
| Patent keyword matrix | See example CSV below | List of comma-separated keywords per tech focus |

Example keyword matrix (upload as keywords.csv or paste into the "Matrix" node):

```csv
topic,keywords
Buffer Overflow,"buffer overflow, stack smashing, stack buffer"
Memory Safety,"memory safety, safe memory allocation, pointer sanitization"
Code Injection,"SQL injection, command injection, injection prevention"
```

## How it works

Key Steps:

1. **Schedule Trigger**: Fires weekly based on the configured cron expression.
2. **Matrix (Keyword Loader)**: Loads the CSV-based technology keyword matrix into memory.
3. **Code (Build Search Queries)**: Dynamically assembles patent-search URLs for each keyword group.
4. **ScrapeGraphAI (Fetch Results)**: Scrapes USPTO, EPO, and WIPO result pages and parses titles, abstracts, publication numbers, and dates.
5. **If (Relevance Filter)**: Removes patents older than 1 year or without vulnerability-related terms in the abstract (a sketch of this filter follows the list).
6. **Set (Normalize JSON)**: Formats the remaining records into a uniform JSON schema.
7. **Intercom (Notify Team)**: Sends a summarized digest to your chosen Intercom workspace. (Skip or disable this node if you prefer to consume the raw JSON output instead.)
8. **Sticky Notes**: Contain inline documentation and customization tips for future editors.
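A minimal sketch of the "If (Relevance Filter)" logic from step 5, assuming each scraped item carries `publicationDate`, `abstract`, and the originating `keywords` string attached by the Code node; the field names match the output format shown later, but the node's real implementation may differ:

```javascript
// Hypothetical relevance filter: keep only fresh, on-topic patents.
const ONE_YEAR_MS = 365 * 24 * 60 * 60 * 1000;
const cutoff = Date.now() - ONE_YEAR_MS;

return items.filter(({ json: patent }) => {
  const recent = new Date(patent.publicationDate).getTime() >= cutoff;
  const terms = String(patent.keywords ?? '')
    .split(',')
    .map((t) => t.trim().toLowerCase())
    .filter(Boolean);
  const abstract = String(patent.abstract ?? '').toLowerCase();
  const relevant = terms.some((t) => abstract.includes(t));
  return recent && relevant;
});
```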
## Set up steps

Setup Time: 10-15 minutes

1. **Install Community Node**: Navigate to "Settings → Community Nodes", search for ScrapeGraphAI, and click "Install".
2. **Create Credentials**: Go to "Credentials" → "New Credential" → select ScrapeGraphAI API → paste your API key. (Optional) Add an Intercom credential with a valid access token.
3. **Import the Workflow**: Click "Import" → "Workflow JSON" and paste the template JSON, or drag-and-drop the .json file.
4. **Configure Schedule**: Open the Schedule Trigger node and adjust the cron expression if a different frequency is required.
5. **Upload / Edit Keyword Matrix**: Open the Matrix node, paste your custom CSV, or modify the existing topics and keywords.
6. **Review Search Logic**: In the Code (Build Search Queries) node, review the base URLs and adjust patent databases as needed.
7. **Define Notification Channel**: If using Intercom, select your Intercom credential in the Intercom node and choose the target channel.
8. **Execute & Activate**: Click "Execute Workflow" for a trial run. Verify the output. If satisfied, switch the workflow to "Active".

## Node Descriptions

**Core Workflow Nodes:**

- **Schedule Trigger** – Initiates the workflow on a weekly cron schedule.
- **Matrix** – Holds the CSV keyword table and makes each row available as an item.
- **Code (Build Search Queries)** – Generates search URLs and attaches metadata for later nodes.
- **ScrapeGraphAI** – Scrapes patent listings and extracts structured fields (title, abstract, publication date, link).
- **If (Relevance Filter)** – Applies date and keyword relevance filters.
- **Set (Normalize JSON)** – Maps scraped fields into a clean JSON schema for downstream use.
- **Intercom** – Sends formatted patent summaries to an Intercom inbox or channel.
- **Sticky Notes** – Provide inline documentation and edit-history markers.

**Data Flow:**

Schedule Trigger → Matrix → Code → ScrapeGraphAI → If → Set → Intercom

## Customization Examples

Change the data source to Google Patents:

```javascript
// In the Code node
const base = 'https://patents.google.com/?q=';
items.forEach(item => {
  item.json.searchUrl = `${base}${encodeURIComponent(item.json.keywords)}&oq=${encodeURIComponent(item.json.keywords)}`;
});
return items;
```

Send the digest via Slack instead of Intercom:

```javascript
// Replace the Intercom node with a Slack node and build its message text
{
  "text": `🚀 New Vulnerability-related Patents (${items.length})\n` +
    items.map(i => `• <${i.json.link}|${i.json.title}>`).join('\n')
}
```

## Data Output Format

The workflow outputs structured JSON data:

```json
{
  "topic": "Memory Safety",
  "keywords": "memory safety, safe memory allocation, pointer sanitization",
  "title": "Memory protection for compiled binary code",
  "publicationNumber": "US20240123456A1",
  "publicationDate": "2024-03-21",
  "abstract": "Techniques for enforcing memory safety in compiled software...",
  "link": "https://patents.google.com/patent/US20240123456A1/en",
  "source": "USPTO"
}
```

## Troubleshooting

**Common Issues**

- **Empty Result Set** – Ensure that the keywords are specific but not overly narrow; test queries manually on USPTO.
- **ScrapeGraphAI Timeouts** – Increase the timeout parameter in the ScrapeGraphAI node or reduce concurrent requests.

**Performance Tips**

- Limit the keyword matrix to <50 rows to keep weekly runs under 2 minutes.
- Schedule the workflow during off-peak hours to reduce load on patent-office servers.

**Pro Tips:**

- Combine this workflow with a vector database (e.g., Pinecone) to create a semantic patent knowledge base.
- Add a "Merge" node to correlate new patents with existing vulnerability CVE entries.
- Use a second ScrapeGraphAI node to crawl citation trees and identify emerging technology clusters.