by Jitesh Dugar
## What This Does
Automatically finds relevant Reddit posts where your brand can add value, generates helpful AI comments, and sends the best opportunities to your Slack channel for review.

## Setup Requirements
- Reddit API credentials
- OpenAI API key
- Slack webhook URL

## Quick Setup
1. **Reddit API**: Create an app at reddit.com/prefs/apps (select "script" type), then add the client ID and secret to your n8n credentials.
2. **Configure Subreddits**: Edit the workflow to monitor subreddits relevant to your business: entrepreneur, startups, smallbusiness, [your_niche]
3. **AI Prompt Setup**: Customize the OpenAI node with your brand context:
   > You're helping in [subreddit] discussions. When relevant, mention how [your_product] solves similar problems. Be helpful first, promotional second.
4. **Slack Integration**: Add your webhook URL to get notifications with:
   - Post title and link
   - AI-generated comment
   - Engagement score (1-10)

## Key Features
- **Smart Filtering**: AI evaluates if a post is worth engaging with
- **Brand-Aware Comments**: Generated responses stay on-brand and helpful
- **Team Review**: All opportunities go to Slack before posting
- **Multiple Subreddits**: Monitor several communities simultaneously

## Customization Tips
**Adjust AI Scoring**: Modify what makes a "good" opportunity (see the scoring sketch at the end of this description):
- Post engagement level
- Relevance to your product
- Tone of the discussion

**Comment Templates**: Set different styles for different subreddits:
- Technical advice for developer communities
- Business insights for entrepreneur groups
- User experience for product discussions

## Best Practices
- Start with 2-3 subreddits to test effectiveness
- Review and approve comments in Slack before posting
- Follow Reddit's 90/10 rule (90% helpful content, 10% self-promotion)
- Adjust the AI prompt based on what works in your communities

## Why Use This
- Saves hours of manual Reddit browsing
- Maintains consistent brand voice
- Never miss relevant conversations
- Team can review before engaging publicly
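Below is a minimal scoring-filter sketch, assuming an n8n Code node sits between the OpenAI scorer and the Slack notification. The field names (ai_score, num_comments, ups) and thresholds are hypothetical; adjust them to whatever your Reddit and OpenAI nodes actually output.

```javascript
// Hypothetical Code node: keep only posts the AI scored as worth engaging.
const MIN_SCORE = 6;    // AI engagement score threshold (1-10)
const MIN_COMMENTS = 3; // skip threads with no discussion

return $input.all().filter((item) => {
  const { ai_score, num_comments, ups } = item.json;
  // A post qualifies if the AI likes it AND the thread shows some activity.
  const hasActivity = (num_comments ?? 0) >= MIN_COMMENTS || (ups ?? 0) >= 10;
  return (ai_score ?? 0) >= MIN_SCORE && hasActivity;
});
```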
by DIGITAL BIZ TECH
# AI Product Catalog Chatbot with Google Drive Ingestion & Supabase RAG

## Overview
This workflow builds a dual system that connects automated document ingestion with a live product catalog chatbot powered by Mistral AI and Supabase. It includes:
- **Ingestion Pipeline:** Automatically fetches JSON files from Google Drive, processes their content, and stores vector embeddings in Supabase.
- **Chatbot:** An AI agent that queries the Supabase vector store (RAG) to answer user questions about the product catalog.

It uses Mistral AI for chat intelligence and embeddings, and Supabase for vector storage and semantic product search.

## Chatbot Flow
- **Trigger:** When chat message received, or Webhook (from a live website)
- **Model:** Mistral Cloud Chat Model (mistral-medium-latest)
- **Memory:** Simple Memory (Buffer Window) — keeps the last 15 messages for conversational context
- **Vector Search Tool:** Supabase Vector Store
- **Embeddings:** Mistral Cloud
- **Agent:** product catalog agent
  - Responds to user queries using the products table in Supabase.
  - Searches vectors for relevant items and returns structured product details (name, specs, images, and links).
  - Maintains chat session history for natural follow-up questions.

## Document → Knowledge Base Pipeline
Triggered manually (Execute workflow) to populate or refresh the Supabase vector store.

### Steps
1. **Google Drive (List Files)** → Fetch all files from the configured Google Drive folder.
2. **Loop Over Items** → For each file:
   - **Google Drive (Get File)** → Download the JSON document.
   - **Extract from File** → Parse and read raw JSON content.
   - **Map Data into Fields (Set node)** → Clean and normalize JSON keys (e.g., page_title, comprehensive_summary, key_topics).
   - **Convert Data into Chunks (Code node)** → Merge text fields like summary and markdown, split the content into overlapping 2,000-character chunks, and add metadata such as title, URL, and chunk index (see the sketch after the sample data below).
   - **Embeddings (Mistral Cloud)** → Generate vector embeddings for each text chunk.
   - **Insert into Supabase Vectorstore** → Save chunks + embeddings into the website_mark table.
   - **Wait** → Pause for 30 seconds before the next file to respect rate limits.

## Integrations Used
| Service | Purpose | Credential |
|----------|----------|------------|
| Google Drive | File source for catalog JSON documents | Google Drive account dbt |
| Mistral AI | Chat model & embeddings | Mistral Cloud account dbt |
| Supabase | Vector storage & RAG search | Supabase DB account dbt |
| Webhook / Chat | User-facing interface for chatbot | Website or Webhook |

## Sample JSON Data Format (for Ingestion)
The ingestion pipeline expects structured JSON product files, which can include different categories such as Apparel or Tools.

### Apparel Example (T-Shirts)
```json
[
  {
    "Name": "Classic Crewneck T-Shirt",
    "Item Number": "A-TSH-NVY-M",
    "Image URL": "https://www.example.com/images/tshirt-navy.jpg",
    "Image Markdown": "",
    "Size Chart URL": "https://www.example.com/charts/tshirt-sizing",
    "Materials": "100% Pima Cotton",
    "Color": "Navy Blue",
    "Size": "M",
    "Fit": "Regular Fit",
    "Collection": "Core Essentials"
  }
]
```

### Tools Example (Drill Bits)
```json
[
  {
    "Name": "Titanium Drill Bit, 1/4\"",
    "Item Number": "T-DB-TIN-250",
    "Image URL": "https://www.example.com/images/drill-bit-1-4.jpg",
    "Image Markdown": "",
    "Spec Sheet URL": "https://www.example.com/specs/T-DB-TIN-250",
    "Materials": "HSS with Titanium Coating",
    "Type": "Twist Drill Bit",
    "Size (in)": "1/4",
    "Shank Type": "Hex",
    "Application": "Metal, Wood, Plastic"
  }
]
```
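Below is a minimal sketch of what the Convert Data into Chunks Code node might look like. The input field names (page_title, url, comprehensive_summary, markdown) follow the mapping described above, and the 200-character overlap is an assumption; tune both to your data.

```javascript
// Sketch of the chunking step: merge text fields, split into overlapping
// 2,000-character chunks, and attach metadata for each chunk.
const CHUNK_SIZE = 2000;
const OVERLAP = 200; // assumed overlap size; adjust to taste

const out = [];
for (const item of $input.all()) {
  const { page_title, url, comprehensive_summary, markdown } = item.json;
  const text = [comprehensive_summary, markdown].filter(Boolean).join('\n\n');

  for (let start = 0, i = 0; start < text.length; start += CHUNK_SIZE - OVERLAP, i++) {
    out.push({
      json: {
        content: text.slice(start, start + CHUNK_SIZE),
        metadata: { title: page_title, url, chunk_index: i },
      },
    });
  }
}
return out;
```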
## Agent System Prompt Summary
> "You are an AI product catalog assistant. Use only the Supabase vector database as your knowledge base. Provide accurate, structured responses with clear formatting — including product names, attributes, and URLs. If data is unavailable, reply politely: 'I couldn't find that product in the catalog.'"

## Key Features
- Automated JSON ingestion from Google Drive → Supabase
- Intelligent text chunking and metadata mapping
- Dual-workflow architecture (Ingestion + Chatbot)
- Live conversational product search via RAG
- Supports both embedded chat and webhook channels

## Summary
> A powerful end-to-end workflow that transforms your product data into a searchable, AI-ready knowledge base, enabling real-time product Q&A through a Mistral-powered chatbot. Perfect for eCommerce teams, distributors, or B2B companies managing large product catalogs.

## Need Help or More Workflows?
Want to customize this workflow for your business or integrate it with your tools? Our team at Digital Biz Tech can tailor it precisely to your use case — from automation pipelines to AI-powered product discovery.

💡 We can help you set it up for free — from connecting credentials to deploying it live.

- Contact: shilpa.raju@digitalbiz.tech
- Website: https://www.digitalbiz.tech
- LinkedIn: https://www.linkedin.com/company/digitalbiztech/

You can also DM us on LinkedIn for any help.
by Wolf Bishop
A reliable, no-frills web scraper that extracts content directly from websites using their sitemaps. Perfect for content audits, migrations, and research when you need straightforward HTML extraction without external dependencies.

## How It Works
This streamlined workflow takes a practical approach to web scraping by leveraging XML sitemaps and direct HTTP requests. Here's how it delivers consistent results:

- **Direct Sitemap Processing**: The workflow starts by fetching your target website's XML sitemap and parsing it to extract all available page URLs. This eliminates guesswork and ensures comprehensive coverage of the site's content structure.
- **Robust HTTP Scraping**: Each page is scraped using direct HTTP requests with realistic browser headers that mimic legitimate web traffic. The scraper includes comprehensive error handling and timeout protection to handle various website configurations gracefully.
- **Intelligent Content Extraction**: The workflow uses sophisticated JavaScript parsing to extract meaningful content from raw HTML. It automatically identifies page titles through multiple methods (title tags, Open Graph metadata, H1 headers) and converts HTML structure into readable text format (a sketch of this fallback chain follows this list).
- **Framework Detection**: Built-in detection identifies whether sites use WordPress, Divi themes, or heavy JavaScript frameworks. This helps explain content extraction quality and provides valuable insights about the site's technical architecture.
- **Rich Metadata Collection**: Each scraped page includes detailed metadata like word count, HTML size, response codes, and technical indicators. This data is formatted into comprehensive markdown files with YAML frontmatter for easy analysis and organization.
- **Respectful Rate Limiting**: The workflow includes a 3-second delay between page requests to respect server resources and avoid overwhelming target websites. The processing is sequential and controlled to maintain ethical scraping practices.
- **Detailed Success Reporting**: Every scraped page generates a report showing extraction success, potential issues (like JavaScript dependencies), and technical details about the site's structure and framework.
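As a rough illustration of the multi-method title extraction (not the workflow's exact code), a fallback chain might look like this:

```javascript
// Try the <title> tag first, then Open Graph metadata, then the first <h1>.
function extractTitle(html) {
  const title = html.match(/<title[^>]*>([\s\S]*?)<\/title>/i);
  if (title) return title[1].trim();

  const og = html.match(/<meta[^>]+property=["']og:title["'][^>]+content=["']([^"']+)["']/i);
  if (og) return og[1].trim();

  const h1 = html.match(/<h1[^>]*>([\s\S]*?)<\/h1>/i);
  if (h1) return h1[1].replace(/<[^>]+>/g, '').trim(); // strip nested tags

  return 'Untitled';
}
```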
## Setup Steps
1. **Configure Google Drive Integration**
   - Connect your Google Drive account in the "Save to Google Drive" node
   - Replace YOUR_GOOGLE_DRIVE_CREDENTIAL_ID with your actual Google Drive credential ID
   - Create a dedicated folder for your scraped content in Google Drive
   - Copy the folder ID from the Google Drive URL (the long string after /folders/)
   - Replace YOUR_GOOGLE_DRIVE_FOLDER_ID_HERE with your actual folder ID in both the folderId field and cachedResultUrl
   - Update YOUR_FOLDER_NAME_HERE with your folder's actual name
2. **Set Your Target Website**
   - In the "Set Sitemap URL" node, replace https://yourwebsitehere.com/page-sitemap.xml with your target website's sitemap URL
   - Common sitemap locations include /sitemap.xml, /page-sitemap.xml, or /sitemap_index.xml
   - Tip: Not sure where your sitemap is? Use a free online tool like https://seomator.com/sitemap-finder
   - Verify the sitemap URL loads correctly in your browser before running the workflow
3. **Update Workflow IDs (Automatic)**
   - When you import this workflow, n8n will automatically generate new IDs for YOUR_WORKFLOW_ID_HERE, YOUR_VERSION_ID_HERE, YOUR_INSTANCE_ID_HERE, and YOUR_WEBHOOK_ID_HERE
   - No manual changes needed for these placeholders
4. **Adjust Processing Limits (Optional)**
   - The "Limit URLs (Optional)" node is currently disabled for full site scraping
   - Enable this node and set a smaller number (like 5-10) for initial testing
   - For large websites, consider running in batches to manage processing time and storage
5. **Customize Rate Limiting (Optional)**
   - The "Wait Between Pages" node is set to 3 seconds by default
   - Increase the delay for more respectful scraping of busy sites
   - Decrease only if you have permission and the target site can handle faster requests
6. **Test Your Configuration**
   - Enable the "Limit URLs (Optional)" node and set it to 3-5 pages for testing
   - Click "Test workflow" to verify the setup works correctly
   - Check your Google Drive folder to confirm files are being created with proper content
   - Review the generated markdown files to assess content extraction quality
7. **Run Full Extraction**
   - Disable the "Limit URLs (Optional)" node for complete site scraping
   - Execute the workflow and monitor the execution log for any errors
   - Large websites may take considerable time to process completely (plan for several hours for sites with hundreds of pages)
8. **Review Results**
   - Each generated file includes technical metadata to help you assess extraction quality
   - Look for indicators like "Limited Content" warnings for JavaScript-heavy pages
   - Files include word counts and framework detection to help you understand the site's structure

**Framework Compatibility**: This scraper is specifically designed to work well with WordPress sites, Divi themes, and many JavaScript-heavy frameworks. The intelligent content extraction handles dynamic content effectively and provides detailed feedback about framework detection. While some single-page applications (SPAs) that render entirely through JavaScript may have limited content extraction, most modern websites, including those built with popular CMS platforms, will work well with this scraper.

**Important Notes**: Always ensure you have permission to scrape your target website and respect their robots.txt guidelines. The workflow includes respectful delays and error handling, but monitor your usage to maintain ethical scraping practices.
by Rahul Joshi
## 📘 Description
This workflow automates the entire release note creation and announcement process whenever a task status changes in ClickUp. Using Azure OpenAI GPT-4o, Notion, Slack, Gmail, and Google Sheets, it converts technical task data into clear, structured, and branded release notes — ready for documentation and team broadcast.

The flow captures task details, generates Markdown-formatted FAQs, documents them in Notion, formats professional Slack messages, and notifies the task owner via HTML email. Any failed payloads or validation errors are logged automatically to Google Sheets for full traceability. The result is a zero-touch release workflow that saves time, keeps communication consistent, and ensures every completed feature is clearly documented and shared.

## ⚙️ What This Workflow Does (Step-by-Step)

🟢 **ClickUp Task Status Trigger**
Listens for task status updates (e.g., In Review → Complete) within the specified ClickUp team. Whenever a task reaches a completion state, this node starts the release note workflow automatically.

🔍 **Validate ClickUp Payload (IF Node)**
Checks that the incoming ClickUp webhook contains a valid task_id.
- ✅ True Path: Proceeds to fetch task details.
- ❌ False Path: Logs the invalid payload to Google Sheets for review.

📋 **Fetch Task Details from ClickUp**
Retrieves full information about the task using the task_id, including title, description, status, assignee, priority, and custom fields. Provides complete task context for AI processing.

🧩 **Parse Task Details in JavaScript**
Cleans and standardizes task data into JSON format with fields like title, description, priority, owner, due date, and task URL. Also extracts optional links (e.g., GitHub references). Ensures consistent, structured input for the AI model (a sketch follows below).
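A minimal sketch of what this parsing step might look like in an n8n Code node. The ClickUp field paths shown (task.priority.priority, task.assignees, task.due_date) are assumptions based on ClickUp's API shape, so verify them against your own payloads.

```javascript
// Normalize the ClickUp task into the fields the AI prompt expects.
const task = $input.first().json;

// Pull the first GitHub link out of the description, if any.
const github = (task.description || '').match(/https:\/\/github\.com\/\S+/);

return [{
  json: {
    title: task.name,
    description: task.description || '',
    priority: task.priority?.priority ?? 'normal',
    owner: task.assignees?.[0]?.username ?? 'unassigned',
    dueDate: task.due_date ?? null,
    taskUrl: task.url,
    githubLink: github ? github[0] : null,
  },
}];
```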
🧠 **Configure GPT-4o Model (Azure OpenAI)**
Initializes GPT-4o as the core reasoning engine for FAQ and release-note generation, ensuring context-aware and concise output.

🤖 **Generate Release Notes FAQ (AI Agent)**
Transforms task details into a Markdown-formatted release note under four standardized sections:
1️⃣ What changed
2️⃣ Why
3️⃣ How to use
4️⃣ Known issues
Each section is written clearly and briefly for internal and external readers.

📘 **Save Release Notes to Notion**
Creates a new page in the Notion "Release Notes" database. Includes task URL, owner, status, priority, and the full AI-generated FAQ content. Serves as the single source of truth for changelogs and release documentation.

💬 **Configure GPT-4o Model (Slack Formatting)**
Prepares another GPT-4o model instance for formatting Slack-ready announcements in a professional and brand-consistent tone.

🎨 **Generate Slack Release Announcement (AI Agent)**
Converts the Notion release information into a polished Slack message. Adds emojis, bullet points, and a clickable task URL — optimized for quick team consumption.

📢 **Announce Release in Slack**
Posts the AI-formatted message directly to the internal Slack channel, notifying the team of the latest feature release. Keeps everyone aligned without manual drafting or posting.

📨 **Send Acknowledgment Email to Assignee (Gmail Node)**
Sends an automated HTML email to the task owner confirming that their release is live. Includes task name, status, priority, release date, quick links to Notion and ClickUp, and a preview of the AI-generated FAQ. Delivers a professional confirmation while closing the communication loop.

🚨 **Log Errors in Google Sheets**
Captures all payload validation errors, API failures, or processing exceptions into an "Error Log Sheet." Ensures complete auditability and smooth maintenance of the workflow.

## 🧩 Prerequisites
- ClickUp API credentials (for task triggers & data fetch)
- Azure OpenAI (GPT-4o) credentials
- Notion API integration (for release documentation)
- Slack API connection (for announcements)
- Gmail API access (for acknowledgment emails)
- Google Sheets API access (for error logging)

## 💡 Key Benefits
✅ Converts completed tasks into professional release notes automatically
✅ Publishes directly to Notion with consistent documentation
✅ Broadcasts updates to Slack in clean, branded format
✅ Notifies assignees instantly via personalized HTML email
✅ Maintains transparent error tracking in Google Sheets

## 👥 Perfect For
- Product & engineering teams managing frequent feature releases
- SaaS companies automating changelog and release documentation
- Project managers maintaining internal knowledge bases
- Teams using ClickUp, Notion, Slack, and Gmail for daily operations
by NodeAlchemy
## 🧾 Short Description
An AI-powered customer support workflow that automatically triages, summarizes, classifies, and routes tickets to the right Slack and CRM queues. It sends personalized auto-replies, logs results to Google Sheets, and uses a DLQ (dead-letter queue) for failed cases.

## ⚙️ How It Works
1. **Trigger**: Captures messages from email or form submissions.
2. **AI Triage**: Summarizes and classifies issues, scores urgency, and suggests next steps.
3. **Routing**: Directs to a Slack or CRM queue based on type and priority (see the routing sketch at the end of this description).
4. **Logging**: Records summaries, urgency, and responses in Google Sheets.
5. **Auto-Reply**: Sends an acknowledgment email with ticket ID and SLA timeframe.
6. **Error Handling**: Failed triage or delivery attempts are logged in the DLQ.

## 🧩 How to Use
1. Configure triggers (email or webhook) and connect credentials for OpenAI, Slack, Gmail, and Google Sheets.
2. In Workflow Configuration, set:
   - Slack Channel IDs
   - CRM Type (HubSpot, Salesforce, or custom)
   - Google Sheet URL
   - SLA thresholds (e.g., 2h, 6h, 24h)
3. Test with a sample ticket and verify routing and summaries in Slack and Sheets.

## 🔑 Requirements
- OpenAI API key (GPT-4o-mini or newer)
- Slack OAuth credentials
- Google Sheets API access
- Gmail/SMTP credentials
- CRM API (HubSpot, Salesforce, or custom endpoint)

## 💡 Customization Ideas
- Add sentiment detection for customer tone.
- Localize responses for multilingual support.
- Extend DLQ logging to Notion or Airtable.
- Add escalation alerts for SLA breaches.
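For reference, the routing decision could look roughly like this in a Code node; the channel IDs, category names, and SLA values are placeholders you would set in Workflow Configuration.

```javascript
// Hypothetical routing step: map the AI's category and urgency score to a
// Slack channel and SLA window.
const { category, urgency } = $input.first().json; // urgency: 1-10 from AI triage

const ROUTES = {
  billing:   { channel: 'C0BILLING', sla: '6h' },
  technical: { channel: 'C0TECH', sla: '2h' },
  general:   { channel: 'C0SUPPORT', sla: '24h' },
};

const route = ROUTES[category] ?? ROUTES.general;
// High urgency always gets the fastest SLA, whatever the category says.
const sla = urgency >= 8 ? '2h' : route.sla;

return [{ json: { category, urgency, channel: route.channel, sla } }];
```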
by Fabian Herhold
## Who's it for
Sales teams, BDRs, account managers, and customer success professionals who want to show up prepared for every meeting. Perfect for anyone using Calendly who wants to automate prospect research and never walk into a call blind again.

Watch the full tutorial here:

## What it does
This workflow automatically researches your meeting attendees the moment they book through Calendly. It combines multiple AI agents to gather comprehensive intelligence:
- **Company Research**: Uses Perplexity AI to validate company details, recent news, funding, leadership changes, and business signals
- **LinkedIn Analysis**: Leverages RapidAPI to analyze the person's profile, recent posts, comments, and engagement patterns from the last 60-90 days
- **Signal Detection**: Identifies hiring signals, growth indicators, and potential risks with confidence scoring
- **Meeting Prep**: Synthesizes everything into personalized talking points, conversation starters, and strategic recommendations

The final research brief gets delivered directly to your Slack, saving 30-45 minutes of manual research per meeting.

## How it works
1. Someone books a meeting via your Calendly (the booking form must include a LinkedIn URL)
2. The main AI agent extracts the company domain from the email and coordinates three specialist research agents (a domain-extraction sketch appears at the end of this description)
3. The Company Agent researches business intel via Perplexity
4. The Person Agent analyzes LinkedIn activity using 4 different RapidAPI endpoints
5. The Signal Agent identifies business opportunities and risks
6. A comprehensive meeting brief gets sent to your Slack channel

## Requirements
API credentials needed:
- Calendly API (for webhook trigger)
- OpenAI API key (GPT-4 recommended for orchestration)
- Perplexity API key (for web research)
- RapidAPI subscription (for LinkedIn data endpoints)
- Slack bot token (for output delivery)

Important: Your Calendly booking form must include a LinkedIn URL field to get optimal results.

## How to set up
1. **Configure Calendly**: Add the Calendly trigger node with your API credentials
2. **Update Slack destination**: Modify the final Slack node with your user ID or channel
3. **Add API keys**: Configure all the API credentials in their respective nodes
4. **Test the workflow**: Book a test meeting through Calendly to verify the complete flow
5. **Customize prompts**: Adjust the AI agent prompts based on your specific industry or use case

The workflow uses structured JSON output with confidence scoring and source citation for reliable, actionable intelligence.

## How to customize the workflow
- **Change output destination**: Replace Slack with email, Teams, or CRM integration
- **Modify research depth**: Adjust the AI prompts to focus on specific industries or company types
- **Add more signals**: Extend the Signal Research Agent to detect additional business indicators
- **Integrate with CRM**: Add nodes to automatically update contact records in your sales system
- **Schedule follow-ups**: Connect to calendar tools to automatically schedule research updates

The modular design makes it easy to adapt for different sales processes and research requirements.
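As a rough sketch, the domain-extraction step might look like this; the payload.email path is an assumption about the Calendly trigger output, and the free-provider list is illustrative.

```javascript
// Derive a company domain from the invitee's email, skipping free mail
// providers so the company agent is not pointed at gmail.com.
const FREE_PROVIDERS = new Set(['gmail.com', 'outlook.com', 'yahoo.com', 'icloud.com']);

const email = $input.first().json.payload?.email ?? '';
const domain = email.split('@')[1]?.toLowerCase() ?? '';

return [{
  json: {
    email,
    companyDomain: FREE_PROVIDERS.has(domain) ? null : domain,
  },
}];
```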
by Alejandro Scuncia
An extendable RAG template to build powerful, explainable AI assistants — with query understanding, semantic metadata, and support for free-tier tools like Gemini, Gemma, and Supabase.

## Description
This workflow helps you build smart, production-ready RAG agents that go far beyond basic document Q&A. It includes:
- ✅ File ingestion and chunking
- ✅ Asynchronous LLM-powered enrichment
- ✅ Filterable metadata-based search
- ✅ Gemma-based query understanding and generation
- ✅ Cohere re-ranking
- ✅ Memory persistence via Postgres

Everything is modular, low-cost, and designed to run even with free-tier LLMs and vector databases. Whether you want to build a chatbot, internal knowledge assistant, documentation search engine, or a filtered content explorer — this is your foundation.

## ⚙️ How It Works
This workflow is divided into 3 pipelines:

### 📥 Ingestion
1. Upload a PDF via form
2. Extract text and chunk it for embedding
3. Store in the Supabase vector store using Google Gemini embeddings

### 🧠 Enrichment (Async)
1. A scheduled task fetches new chunks
2. Each chunk is enriched with LLM metadata (topics, use_case, risks, audience level, summary, etc.)
3. The metadata is added to the vector DB for improved retrieval and filtering

### 🤖 Agent Chat
1. A user question triggers the RAG agent
2. The Query Builder transforms it into keywords and filters (see the sketch after this list)
3. The vector DB is queried and the results are reranked
4. The final answer is generated using only retrieved evidence, with references
5. Chat memory is managed via Postgres
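For illustration, the kind of output the Query Builder hands to the vector store tool might look like this; the filter structure is an assumption and depends on how your Supabase metadata filtering is configured.

```javascript
// Illustrative shape of the Query Builder output for the question
// "How do beginners configure billing alerts?". In the real workflow Gemma
// generates this from the user's message.
const question = $input.first().json.chatInput ?? '';

const query = {
  keywords: 'configure billing alerts',
  filters: {
    topics: ['billing'],        // from the enrichment metadata
    audience_level: 'beginner', // filter chunks by reader level
  },
};

return [{ json: { question, ...query } }];
```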
## 🌟 Key Features
- **Asynchronous enrichment** → Save tokens; batch process with free-tier LLMs like Gemma
- **Metadata-aware** → Improved filtering and reranking
- **Explainable answers** → The agent cites sources and sections
- **Chat memory** → Persistent context with Postgres
- **Modular design** → Swap LLMs, rerankers, vector DBs, and even the enrichment schema
- **Free to run** → Built with Gemini, Gemma, Cohere, Supabase (free tier-compatible)

## 🔐 Required Credentials
| Tool | Use |
|------|-----|
| Supabase w/ PostgreSQL | Vector DB + storage |
| Google Gemini/Gemma | Embeddings & LLM |
| Cohere API | Re-ranking |
| PostgreSQL | Chat memory |

## 🧰 Customization Tips
- Swap extractFromFile with Notion/Google Drive integrations
- Extend the Metadata Obtention prompt to fit your domain (e.g., financial, legal)
- Replace the LLMs with OpenAI, Mistral, or Ollama
- Replace Postgres Chat Memory with Simple Memory or any other memory node
- Use a webhook instead of a form to automate ingestion
- Connect a Telegram/Slack UI with a few extra nodes

## 💡 Use Cases
- Company knowledge base bot (internal docs, SOPs)
- Educational assistant with smart filtering (by topic or level)
- Legal or policy assistant that cites source sections
- Product documentation Q&A with multi-language support
- Training material assistant that highlights risks/examples
- Content generation

## 🧠 Who It's For
- Indie developers building smart chatbots
- AI consultants prototyping Q&A assistants
- Teams looking for an internal knowledge agent
- Anyone building affordable, explainable AI tools

## 🚀 Try It Out!
Deploy a modular RAG assistant using n8n, Supabase, and Gemini — fully customizable and almost free to run.

### 1. 📁 Prepare Your PDFs
- Use any internal documents, manuals, or reports in PDF format.
- Optional: Add a Google Drive integration to automate ingestion.

### 2. 🧩 Set Up Supabase
- Create a free Supabase project.
- Use the table creation queries included in the workflow to set up your schema.
- Add your supabaseUrl and supabaseKey in your n8n credentials.

> 💡 Pro Tip: Make sure you match the embedding dimensions to your model. This workflow uses Gemini text-embedding-004 (768-dim); if you switch to OpenAI, change your table's vector size to 1536.

### 3. 🧠 Connect Gemini & Gemma
- Use Gemini/Gemma for embeddings and optional metadata enrichment.
- Or deploy locally for lightweight async LLM processing (via Ollama/HuggingFace).

### 4. ⚙️ Import the Workflow in n8n
- Open n8n (self-hosted or cloud).
- Import the workflow file and paste your credentials.
- You're ready to ingest, enrich, and query your document base.

## 💬 Have Feedback or Ideas? I'd Love to Hear
This project is open, modular, and evolving — just like great workflows should be :). If you've tried it, built on top of it, or have suggestions for improvement, I'd genuinely love to hear from you. Let's share ideas, collaborate, or just connect as part of the n8n builder community.

📧 ascuncia.es@gmail.com
🔗 LinkedIn
by Ritesh
# Automated Incident and Request Management in ServiceNow

## Who's it for
This workflow is designed for IT teams, service desk agents, and operations managers who use ServiceNow. It reduces manual effort by automatically classifying chat messages as Incidents or Requests, creating/updating them in ServiceNow, and summarizing ticket updates.

## What it does
1. Receives incoming chat messages.
2. Classifies the message as one of:
   - Incident (something broken, unavailable, or a complaint)
   - Request (access, provisioning, product/order related)
   - Follow-ups (incident or request update checks)
   - Update action (user wants to add info to an existing ticket)
   - Everything else (knowledge search / general query)
3. Creates Incidents in ServiceNow via the ServiceNow node.
4. Creates Requests in ServiceNow using the Service Catalog API (see the request sketch at the end of this description).
5. Updates existing Incidents with new work notes when the user provides an update.
6. Pulls existing incident/request work notes for summaries.
7. Optionally uses SerpAPI for general queries (if enabled).
8. Returns a concise summary back to the user through the webhook.

## Requirements
- **ServiceNow account** with API access (Basic Auth)
- **OpenAI API key** (used by the classifier and summarizer)
- **SerpAPI key** (optional, for general web lookups)

## Credentials needed
You will need to set up the following credentials in n8n:
1. ServiceNow Basic Auth (username, password, instance URL).
2. OpenAI API (API key).
3. SerpAPI (optional, only if you want web search enabled).

## How to set up
1. Import the workflow JSON into your n8n instance.
2. Create the credentials mentioned above and assign them to the corresponding nodes:
   - Create an incident → ServiceNow Basic Auth
   - HTTP Request1 (for Service Catalog requests) → ServiceNow Basic Auth
   - OpenAI Chat Model / OpenAI Chat Model1 / OpenAI Chat Model2 / OpenAI Chat Model3 → OpenAI API
   - SerpAPI node (optional) → SerpAPI key
3. Adjust the ServiceNow instance URL in the HTTP Request node to match your environment.
4. Deploy the workflow.
5. Send a test chat message to trigger the workflow.

## How to customize
- Update the classification rules in the Text Classifier node if your organization uses different definitions for incidents vs. requests.
- Edit the summary prompt in the Summarization Chain to include or exclude specific fields.
- Add additional notification nodes (Slack, Teams, or Email) if you want updates pushed to other channels.

## Notes & Limitations
- This workflow creates general requests in ServiceNow using the catalog API. For production, update the Service Catalog item ID to match your environment.
- The "Everything else" category uses SerpAPI. If not configured, those queries will not return results.
- This workflow requires OpenAI GPT-4.1 mini (or another supported model) for classification and summarization.
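For reference, the Service Catalog call made by HTTP Request1 corresponds roughly to the standalone Node.js sketch below. The order_now endpoint is part of ServiceNow's Service Catalog API; the instance URL, catalog item sys_id, credentials, and variable names are all placeholders for your environment.

```javascript
// Standalone Node.js (18+) sketch of ordering a catalog item.
const INSTANCE = 'https://your-instance.service-now.com';
const ITEM_SYS_ID = 'REPLACE_WITH_CATALOG_ITEM_SYS_ID';
const AUTH = 'Basic ' + Buffer.from('user:password').toString('base64');

async function orderCatalogItem(shortDescription) {
  const res = await fetch(
    `${INSTANCE}/api/sn_sc/servicecatalog/items/${ITEM_SYS_ID}/order_now`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', Authorization: AUTH },
      body: JSON.stringify({
        sysparm_quantity: '1',
        // Variable names depend on your catalog item's form definition.
        variables: { short_description: shortDescription },
      }),
    },
  );
  return res.json(); // contains the request number on success
}

orderCatalogItem('New laptop request from chat').then(console.log);
```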
by IranServer.com
# Automate IP geolocation and HTTP port scanning with Google Sheets trigger

This n8n template automatically enriches IP addresses with geolocation data and performs HTTP port scanning when new IPs are added to a Google Sheets document. Perfect for network monitoring, security research, or maintaining an IP intelligence database.

## Who's it for
Network administrators, security researchers, and IT professionals who need to:
- Track IP geolocation information automatically
- Monitor HTTP service availability across multiple ports
- Maintain centralized IP intelligence in spreadsheets
- Automate repetitive network reconnaissance tasks

## How it works
The workflow triggers whenever a new row containing an IP address is added to your Google Sheet. It then:
1. Fetches geolocation data using the ip-api.com service to get country, city, coordinates, ISP, and organization information
2. Updates the spreadsheet with the geolocation details
3. Scans common HTTP ports (80, 443, 8080, 8000, 3000) to check service availability (a port-check sketch appears at the end of this description)
4. Records port status back to the same spreadsheet row, showing which services are accessible

The workflow handles both successful connections and various error conditions, providing a comprehensive view of each IP's network profile.

## Requirements
- **Google Sheets API access** for reading triggers and updating data
- **Google Sheets document** with at least an "IP" column header

## How to set up
1. Create a Google Sheet with columns: IP, Country, City, Lat, Lon, ISP, Org, Port_80, Port_443, Port_8000, Port_8080, Port_3000
2. Configure Google Sheets credentials in both the trigger and update nodes
3. Update the document ID in the Google Sheets Trigger and both Update nodes to point to your spreadsheet
4. Test the workflow by adding an IP address to your sheet and verifying the automation runs

## How to customize the workflow
- **Modify port list**: Edit the "Edit Fields" node to scan different ports by changing the ports array
- **Add more geolocation fields**: The ip-api.com response includes additional fields like timezone, zip code, and AS number
- **Change trigger frequency**: Adjust the polling interval in the Google Sheets Trigger for faster or slower monitoring
- **Add notifications**: Insert Slack, email, or webhook nodes to alert when specific conditions are detected
- **Filter results**: Add IF nodes to process only certain IP ranges or geolocation criteria
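A standalone Node.js sketch of the port-check idea (an illustration, not the workflow's exact code): an HTTP request is attempted against each common port, and any response, even an error status, means something is listening.

```javascript
// Probe the common HTTP ports and report which ones answer.
const PORTS = [80, 443, 8080, 8000, 3000];

async function checkPorts(ip) {
  const results = {};
  for (const port of PORTS) {
    const scheme = port === 443 ? 'https' : 'http';
    try {
      const res = await fetch(`${scheme}://${ip}:${port}/`, {
        signal: AbortSignal.timeout(5000), // don't hang on filtered ports
      });
      results[`Port_${port}`] = `open (HTTP ${res.status})`;
    } catch {
      results[`Port_${port}`] = 'closed/unreachable';
    }
  }
  return results;
}

checkPorts('1.1.1.1').then(console.log);
```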
by Patrick Jennings
# Sleeper NFL Team Chatbot Starter

A Telegram chatbot built to look up your fantasy football team in the Sleeper app and return your roster details, player names, positions, and team info. This starter workflow is perfect for users who want a simple, conversational way to view their Sleeper team in-season or pre-draft.

## What It Does
When a user types their Sleeper username into Telegram, this workflow:
1. Extracts the username from Telegram
2. Pulls their Sleeper User ID
3. Retrieves their Leagues and selects the first one (by default)
4. Pulls the full league Rosters
5. Finds the matching roster owned by that user
6. Uses player_ids to look up full player info from a connected database (e.g., Airtable or Google Sheets)
7. Returns a clean list of player names, positions, and teams via Telegram

A sketch of the Sleeper API calls behind steps 2-5 follows this description.

## Requirements
To get this running, you'll need:
- A Telegram bot (set up through BotFather)
- A Sleeper Fantasy Football account
- A synced player database that matches player_id to full player details (we recommend using the companion template: Sleeper NFL Players Daily Sync)

## Setup Instructions
1. Import the workflow into your n8n instance
2. Add the required credentials:
   - Telegram (API key from BotFather)
   - Airtable (or replace with another database method like Google Sheets or an HTTP request to a hosted JSON file)
3. Trigger the workflow by sending your exact Sleeper username to the bot
4. Your full team roster will return as a formatted message

> If the user is in multiple Sleeper leagues, the current logic returns the first league found.

## Example Output
You have 19 players on your roster: Cam Akers (RB - NO), Jared Goff (QB - DET), ...

## Customization Notes
- Replace the Telegram Trigger with any other input method (webhook, form input, etc.)
- Replace the Airtable node with Google Sheets, a SQL DB, or even a local file if preferred
- You can hardcode a Sleeper username if you're using this for a single user

## Related Templates
Sleeper NFL Players Daily Sync (syncs player_id to player name, position, team). Create the Player Sync first, then either integrate it into this template or create a subworkflow from it and use the most recent data set.

## Difficulty Rating & Comment (from the author)
3 out of 10 if this ain't your first rodeo, respectfully. Just a little bit more work on adding the Players Sync as your data table and knowing how to GET from Sleeper. If you use Sleeper for fantasy football, let's go win some games!
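The Sleeper lookups behind steps 2-5 map to three public, read-only API calls, sketched below in standalone Node.js (the season value is hardcoded for illustration).

```javascript
// Look up a user's roster via Sleeper's public API (no auth required).
const BASE = 'https://api.sleeper.app/v1';

async function getRoster(username, season = '2024') {
  const user = await (await fetch(`${BASE}/user/${username}`)).json();
  const leagues = await (
    await fetch(`${BASE}/user/${user.user_id}/leagues/nfl/${season}`)
  ).json();

  const league = leagues[0]; // default behavior: first league found
  const rosters = await (
    await fetch(`${BASE}/league/${league.league_id}/rosters`)
  ).json();

  // The roster whose owner_id matches the user is the one we want.
  return rosters.find((r) => r.owner_id === user.user_id);
}

getRoster('your_sleeper_username').then((r) => console.log(r.players)); // array of player_ids
```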
by Sergey Skorobogatov
# GiggleGPTBot — Witty Telegram Bot with AI & Postgres

## 📝 Overview
GiggleGPTBot is a witty Telegram bot built with n8n, OpenRouter, and Postgres. It delivers short jokes, motivational one-liners, and playful roasts, responds to mentions, and posts scheduled witty content. The workflow also tracks user activity and provides lightweight statistics and leaderboards.

## ✨ Features
- 🤖 **AI-powered humor engine** — replies with jokes, motivation, random witty lines, or sarcastic roasts.
- 💬 **Command support** — /joke, /inspire, /random, /roast, /help, /stats, /top.
- 🎯 **Mention detection** — replies when users tag @GiggleGPTBot.
- ⏰ **Scheduled posts** — morning jokes, daily motivation, and random wisdom at configured times.
- 📊 **User analytics** — counts messages, commands, reactions, and generates leaderboards.
- 🗄️ **Postgres persistence** — robust schema with tables for messages, responses, stats, and schedules.

## 🛠️ How It Works
### Triggers
- **Telegram Trigger** — receives all messages and commands from a chat.
- **Schedule Trigger** — runs hourly to check for planned posts.

### Processing
- **Switch** routes commands (/joke, /inspire, /random, /roast, /help, /stats, /top); a command-normalization sketch appears at the end of this description.
- **Chat history** fetches the latest context.
- **Mention Analysis** determines if the bot was mentioned.
- **Generating an information response** builds replies for /help, /stats, /top.
- **AI nodes** (AI response to command, AI response to mention, AI post generation) craft witty content via OpenRouter.

### Persistence
- **Init Database** ensures tables exist (user_messages, bot_responses, bot_commands, message_reactions, scheduled_posts, user_stats).
- **Logging nodes** update stats and store every bot/user interaction.

### Delivery
Replies are sent back via the Telegram Send nodes (Send AI response, Send info reply, Reply to Mention, Submit scheduled post).

## ⚙️ Setup Instructions
1. Create a Telegram bot with @BotFather and get your API token.
2. Add credentials in n8n:
   - Telegram API (your bot token)
   - OpenRouter (API key from openrouter.ai)
   - Postgres (use your DB; Supabase works well)
3. Run the Init Database node once to create all required tables.
4. (Optional) Seed the schedule with the Adding a schedule node — it inserts:
   - Morning joke at 06:00
   - Daily motivation at 09:00
   - Random wisdom at 17:00
   (Adjust chat_id to your group/channel ID.)
5. Activate the workflow and connect Telegram via webhook or polling.

## 📊 Database Schema
- **user_messages** — stores user chat messages.
- **bot_responses** — saves bot replies.
- **bot_commands** — logs command usage.
- **message_reactions** — tracks reactions.
- **scheduled_posts** — holds scheduled jokes/wisdom/motivation.
- **user_stats** — aggregates per-user message/command counts and activity.

## 🔑 Example Commands
- /joke → witty one-liner with light irony.
- /inspire → short motivational phrase.
- /random → unexpected witty remark.
- /roast → sarcastic roast (no offensive targeting).
- /stats → shows your personal stats.
- /top → displays the leaderboard.
- /help → lists available commands.
- @GiggleGPTBot + message → bot replies in context.

## 🚀 Customization Ideas
- Add new command categories (/quote, /fact, /news).
- Expand analytics with reaction counts or streaks.
- Localize prompts into multiple languages.
- Adjust the CRON schedules for posts.

## ✅ Requirements
- Telegram Bot token
- OpenRouter API key
- Postgres database

📦 Import this workflow, configure credentials, run the DB initializer — and your witty AI-powered Telegram companion is ready!
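A small sketch of the command normalization that could feed the Switch node; the field paths follow Telegram's Bot API update shape.

```javascript
// Pull the command out of the Telegram update and normalize it.
const text = $input.first().json.message?.text ?? '';

// "/joke@GiggleGPTBot extra words" -> "joke"
const match = text.match(/^\/(\w+)(?:@\w+)?/);
const command = match ? match[1].toLowerCase() : null;

const KNOWN = ['joke', 'inspire', 'random', 'roast', 'help', 'stats', 'top'];

return [{
  json: {
    command: KNOWN.includes(command) ? command : null,
    isMention: text.includes('@GiggleGPTBot'),
    chatId: $input.first().json.message?.chat?.id,
  },
}];
```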
by Budi SJ
# Smart POS System with Live Updates to Telegram & Sheets

This Smart POS (Point of Sale) System template provides a lightweight yet powerful sales management solution. It features a modern web-based interface for placing orders, with real-time integration to Google Sheets and instant Telegram notifications, enhanced by AI-generated reports. Ideal for small businesses, mobile vendors, or anyone who needs a quick and smart POS system.

## ✨ Key Features
- 🖥️ Modern web interface with product catalog and search
- 🛒 Cart system with quantity, price, and discount handling
- 🆔 Unique Sales ID generation for every transaction (see the sketch at the end of this description)
- 📊 Google Sheets integration to store product and sales data
- 🤖 AI-generated sales summary via OpenRouter
- 🚀 Instant Telegram notifications for new orders

## 🔧 Requirements
- A Google Sheet to store products and sales data 👉 Use this Google Sheets template to get started
- Telegram Bot Token and User ID (create a bot via @BotFather)
- OpenRouter API Key (sign up at openrouter.ai and use the LLM model)

## ⚙️ Setup Instructions
1. **Set Up Your Google Sheets**: Use the template and fill in product details in the products tab
2. **Configure the Telegram Bot**: Create a bot via BotFather, then obtain your Bot Token and Chat ID (message the bot once to get the ID)
3. **Set Up the AI Agent**: In the AI agent node, replace the placeholder with your actual OpenRouter API Key

## 🚀 Deploy the Workflow
1. Activate the workflow in n8n
2. Open the webhook URL to access the POS interface
3. Enter product orders and customer details
4. Submit the order
5. Receive an instant Telegram notification with an AI-generated sales summary
6. Data is automatically saved to Google Sheets for tracking and analysis
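A minimal sketch of how a unique Sales ID could be generated in a Code node; the POS- prefix and format are illustrative, not the template's exact scheme.

```javascript
// Timestamp prefix keeps IDs sortable; a short random suffix avoids
// collisions when two orders land in the same second.
function generateSalesId(date = new Date()) {
  const stamp = date
    .toISOString()
    .replace(/[-:T]/g, '')
    .slice(0, 14); // YYYYMMDDHHmmss
  const suffix = Math.random().toString(36).slice(2, 6).toUpperCase();
  return `POS-${stamp}-${suffix}`;
}

return [{ json: { salesId: generateSalesId() } }];
```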