by Marth
# Automated AI-Driven Competitor & Market Intelligence System

**Problem Solved:** Small and medium-sized IT companies often struggle to stay ahead in a rapidly evolving market. Manually tracking competitor moves, pricing changes, product updates, and emerging market trends is time-consuming, inconsistent, and often too slow for agile sales strategies. This leads to missed sales opportunities, ineffective pitches, and a reactive rather than proactive market approach.

**Solution Overview:** This n8n workflow automates the continuous collection and AI-powered analysis of competitor data and market trends. By leveraging web scraping, RSS feeds, and advanced AI models, it transforms raw data into actionable insights for your sales and marketing teams. The system generates structured reports, notifies relevant stakeholders, and stores intelligence in your database, empowering your team with real-time, strategic information.

**For Whom:** This high-value workflow is perfect for:

- **IT Solution Providers & SaaS Companies:** To maintain a competitive edge and tailor sales pitches based on competitor weaknesses and market opportunities.
- **Sales & Marketing Leaders:** To gain comprehensive, automated market intelligence without extensive manual research.
- **Product Development Teams:** To identify market gaps and validate new feature development based on competitive landscapes and customer sentiment.
- **Business Strategists:** To inform strategic planning with data-driven insights into industry trends and competitive threats.

## How It Works (Scope of the Workflow) ⚙️

This system establishes a powerful, automated pipeline for market and competitor intelligence:

1. **Scheduled Data Collection:** The workflow runs automatically at predefined intervals (e.g., weekly), initiating data retrieval from various online sources.
2. **Diverse Information Gathering:** It pulls data from competitor websites (pricing, features, blogs via web scraping services), industry news and blogs (via RSS feeds), and potentially other sources.
3. **Intelligent Data Preparation:** Collected data is aggregated, cleaned, and pre-processed using custom code to ensure it is in an optimal format for AI analysis, removing noise and extracting relevant text.
4. **AI-Powered Analysis:** An advanced AI model (such as OpenAI's GPT-4o) performs in-depth analysis on the cleaned data. It identifies competitor strengths, weaknesses, new offerings, pricing changes, customer sentiment from reviews, and emerging market trends, and suggests specific opportunities and threats for your company.
5. **Automated Report Generation:** The AI's structured insights are automatically populated into a professional Google Docs report using a predefined template, making the intelligence easily digestible for your team.
6. **Team Notification:** Stakeholders (sales leads, marketing managers) receive automated notifications via Slack (or email), alerting them to the new report and key insights.
7. **Strategic Data Storage & Utilization:** All analyzed insights are stored in a central database (e.g., PostgreSQL). This builds a historical record for long-term trend analysis and can optionally trigger sub-workflows to generate personalized sales talking points directly relevant to ongoing deals or specific prospects.
## Setup Steps 🛠️ (Building the Workflow)

To implement this workflow in your n8n instance, follow these detailed steps.

### 1. Prepare Your Digital Assets & Accounts

- **Google Sheet (optional, if used as a lightweight CRM):** Create a sheet with the columns CompetitorName, LastAnalyzedDate, Strengths, Weaknesses, Opportunities, Threats, SalesTalkingPoints.
- **API Keys & Credentials:**
  - **OpenAI API Key:** Essential for the AI analysis.
  - **Web Scraping Service API Key:** For services like Apify, Crawlbase, Bright Data, or ScraperAPI.
  - **Database Access:** Credentials for your PostgreSQL/MySQL database. Ensure you have created the necessary tables (competitor_profiles, market_trends) with appropriate columns.
  - **Google Docs Credential:** To link n8n to your Google Drive for report generation. Create a template Google Doc with placeholders (e.g., {{competitorName}}, {{strengths}}).
  - **Slack Credential:** For sending team notifications to specific channels.
  - **CRM API Key (optional):** If integrating directly with HubSpot, Salesforce, or a custom CRM via API.

### 2. Identify Data Sources for Intelligence

- Compile a list of competitor website URLs you want to monitor (e.g., pricing pages, blog sections, news).
- Identify relevant online review platforms (e.g., G2, Capterra) for competitor products.
- Gather RSS feed URLs from key industry news sources, tech blogs, and competitors' own blogs.
- Define keywords for general market trends or competitor mentions, if using tools that provide RSS feeds (such as Google Alerts).

### 3. Build the n8n Workflow (10 Key Nodes)

Start a new workflow in n8n and add the following nodes, configuring their parameters and connections carefully:

1. **Cron (Scheduled Analysis Trigger):** Set this to trigger daily or weekly at a specific time (e.g., Every Week, At Hour: 0, At Minute: 0).
2. **HTTP Request (Fetch Competitor Web Data):** Configure this to call your chosen web scraping service's API. Set Method to POST, URL to the service's API endpoint, and build the JSON/Raw Body with the startUrls (competitor websites, review sites) to scrape, including your API key in Authentication (e.g., Header Auth).
3. **RSS Feed (Fetch News & Blog RSS):** Add the URLs of competitor blogs and industry news RSS feeds.
4. **Merge (Combine Data Sources):** Connect inputs from both Fetch Competitor Web Data and Fetch News & Blog RSS. Use Merge By Position.
5. **Code (Pre-process Data for AI):** Write JavaScript code to iterate through the merged items, extract relevant text content, perform basic cleaning (e.g., HTML stripping), and limit text length for AI input. Output should be an array of objects with content, title, url, and source (see the sketch at the end of this template).
6. **OpenAI (AI Analysis & Competitor Insights):** Select your OpenAI credential. Set Resource to Chat Completion and Model to gpt-4o. In Messages, create a System message defining the AI's role and a User message containing the dynamic prompt (referencing {{ $json.map(item => ... ).join('\n\n') }} for content, title, url, source) and requesting a structured JSON output for the analysis. Set Output to Raw Data.
7. **Google Docs (Generate Market Intelligence Report):** Select your Google Docs credential. Set Operation to Create document from template. Provide your Template Document ID and map the Values from the parsed AI output (using JSON.parse($json.choices[0].message.content).PropertyName) to your template placeholders.
8. **Slack (Sales & Marketing Team Notification):** Select your Slack credential. Set Chat ID to your team's Slack channel ID. Compose the Text message, referencing the report link ({{ $json.documentUrl }}) and key AI insights (e.g., {{ JSON.parse($json.choices[0].message.content).Competitor_Name }}).
9. **PostgreSQL (Store Insights to Database):** Select your PostgreSQL credential. Set Operation to Execute Query. Write an INSERT ... ON CONFLICT DO UPDATE SQL query to store the AI insights in your competitor_profiles or market_trends table, mapping values from the parsed AI output.
10. **OpenAI (Generate Personalized Sales Talking Points - Optional Branch):** This node can be part of the main workflow or a separate, manually triggered workflow. Configure it similarly to the main AI node, but with a prompt tailored to generate sales talking points based on a specific sales context and the stored insights.

### 4. Final Testing & Activation

- **Run a test:** Before going live, manually trigger the workflow from the first node. Carefully review the data at each stage to ensure correct processing and output. Verify that reports are generated, notifications are sent, and data is stored correctly.
- **Activate the workflow:** Once testing is complete and successful, activate the workflow in n8n.

This system will empower your IT company's sales team with invaluable, data-driven intelligence, enabling them to close more deals and stay ahead in the market.
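As a reference for node 5 above (the Code node that prepares data for AI), here is a minimal, illustrative sketch of the pre-processing logic. Field names such as `content`, `title`, `url`, and `source` follow the description above, but the exact fields available on each item depend on your scraping service and RSS feeds, so treat the lookups as assumptions.

```javascript
// n8n Code node: pre-process merged scraping + RSS items for AI analysis (illustrative sketch)
const MAX_CHARS = 4000; // limit text length per item to keep the AI prompt manageable

return items.map((item) => {
  const d = item.json;
  // Pick the most likely text field, depending on the source node
  const raw = d.content || d.text || d.description || d.body || '';
  // Basic cleaning: strip HTML tags and collapse whitespace
  const cleaned = String(raw)
    .replace(/<[^>]*>/g, ' ')
    .replace(/\s+/g, ' ')
    .trim()
    .slice(0, MAX_CHARS);

  return {
    json: {
      content: cleaned,
      title: d.title || '',
      url: d.url || d.link || '',
      source: d.source || (d.link ? 'rss' : 'scraper'),
    },
  };
});
```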
by franck fambou
## Overview

This advanced automation workflow combines deep web scraping with Retrieval-Augmented Generation (RAG) to transform websites into intelligent, queryable knowledge bases. The system recursively crawls target websites, extracts content, and indexes all data in a vector database for conversational AI access.

## How the system works

Intelligent web scraping and RAG pipeline:

1. **Recursive Web Scraper** - Automatically crawls every accessible page of a target website
2. **Data Extraction** - Collects text, metadata, emails, links, and PDF documents
3. **Supabase Integration** - Stores content in PostgreSQL tables for scalability
4. **RAG Vectorization** - Generates embeddings and stores them for semantic search
5. **AI Query Layer** - Connects embeddings to an AI chat engine with citations
6. **Error Handling** - Automatically retriggers failed queries

## Setup Instructions

Estimated setup time: 30-45 minutes

### Prerequisites

- Self-hosted n8n instance (v0.200.0 or higher)
- Supabase account and project (PostgreSQL enabled)
- OpenAI/Gemini/Claude API key for embeddings and chat
- Optional: external vector database (Pinecone, Qdrant)

### Detailed configuration steps

**Step 1: Supabase configuration**
- **Project creation**: New Supabase project with PostgreSQL enabled
- **Generating credentials**: API keys (anon key and service_role key) and connection string
- **Security configuration**: RLS policies according to your access requirements

**Step 2: Connect Supabase to n8n**
- **Configure Supabase node**: Add credentials to n8n Credentials
- **Test connection**: Verify with a simple query
- **Configure PostgreSQL**: Direct connection for advanced operations

**Step 3: Preparing the database**
- **Main tables**:
  - pages: URLs, content, metadata, scraping statuses
  - documents: Extracted and processed PDF files
  - embeddings: Vectors for semantic search
  - links: Link graph for navigation
- **Management functions**: Scripts to reactivate failed URLs and manage retries

**Step 4: Configuring automation**
- **Recursive scraper**: Starting URL, crawling depth, CSS selectors
- **HTTP extraction**: User-Agent, headers, timeouts, and retry policies
- **Supabase backup**: Batch insertion, data validation, duplicate management

**Step 5: Error handling and re-executions**
- **Failure monitoring**: Automatic detection of failed URLs
- **Manual triggers**: Selective re-execution by domain or date
- **Recovery sub-workflows**: Retry logic with exponential backoff

**Step 6: RAG processing**
- **Embedding generation**: Text-embedding models with intelligent chunking (see the sketch at the end of this template)
- **Vector storage**: Supabase pgvector or an external database
- **Conversational engine**: Connection to chat models with source citations

## Data structure

### Main Supabase tables

| Table | Content | Usage |
|-------|---------|-------|
| pages | URLs, HTML content, metadata | Main storage for scraped content |
| documents | PDF files, extracted text | Downloaded and processed documents |
| embeddings | Vectors, text chunks | Semantic search and RAG |
| links | Link graph, navigation | Relationships between pages |

## Use cases

**Business and enterprise**
- Competitive intelligence with conversational querying
- Market research from complex web domains
- Compliance monitoring and regulatory watch

**Research and academia**
- Literature extraction with semantic search
- Building datasets from fragmented sources

**Legal and technical**
- Scraping legal repositories with intelligent queries
- Technical documentation transformed into a conversational assistant

## Key features

### Advanced scraping

- Recursive crawling with automatic link discovery
- Multi-format extraction (HTML, PDF, emails)
- Intelligent error handling and retry

### Intelligent RAG
- Contextual embeddings for semantic search
- Multi-document queries with citations
- Intuitive conversational interface

### Performance and scalability

- Processing of thousands of pages per execution
- Embedding cache for fast responses
- Scalable architecture with Supabase

## Technical Architecture

**Main flow:** Target URL → Recursive scraping → Content extraction → Supabase storage → Vectorization → Conversational interface

**Supported types:** HTML pages, PDF documents, metadata, links, emails

## Performance specifications

- **Capacity**: 10,000+ pages per run
- **Response time**: < 5 seconds for RAG queries
- **Accuracy**: > 90% relevance for specific domains
- **Scalability**: Distributed architecture via Supabase

## Advanced configuration

### Customization

- Crawling depth and scope controls
- Domain and content type filters
- Chunking settings to optimize RAG

### Monitoring

- Real-time monitoring in Supabase
- Cost and performance metrics
- Detailed conversation logs
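To make Step 6 (embedding generation with intelligent chunking) more concrete, here is a minimal, illustrative chunking helper you could adapt inside an n8n Code node before calling your embedding model. The chunk size, overlap, and field names are assumptions, not values prescribed by this template.

```javascript
// Illustrative n8n Code node: split scraped page text into overlapping chunks for embedding
const CHUNK_SIZE = 1000;   // characters per chunk (assumption - tune for your embedding model)
const OVERLAP = 200;       // characters shared between consecutive chunks to preserve context

const output = [];
for (const item of items) {
  const text = (item.json.content || '').replace(/\s+/g, ' ').trim();
  for (let start = 0; start < text.length; start += CHUNK_SIZE - OVERLAP) {
    output.push({
      json: {
        page_url: item.json.url,          // assumed field from the scraping step
        chunk_index: Math.floor(start / (CHUNK_SIZE - OVERLAP)),
        chunk_text: text.slice(start, start + CHUNK_SIZE),
      },
    });
  }
}
return output;
```

Each chunk row can then be passed to the embedding node and inserted into the embeddings table alongside its source page URL.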
by KPendic
This n8n flow demonstrates a basic DevOps task: DNS record management. An AI agent with a light, basic prompt acts as a getter and setter for DNS records. In this case, we manage a remote DNS server via API calls handled on the Cloudflare platform side. This flow can be used standalone, or you can chain it into your pipeline to build powerful infrastructure flows for your needs.

## How it works

- We created a basic agent and gave it a prompt that tells it about one tool: cf_tool, a sub-routine (pointing back to this same flow, although it can also be a separate dedicated one).
- The prompt defines the arguments that must be passed when calling the agent for each specific action.
- The tool itself contains a basic if/switch that, based on the requested action, calls the corresponding Cloudflare API endpoint and passes down the arguments from the tool (see the sketch below).

## Requirements

For storing and processing data in this flow you will need:

- A Cloudflare.com API key/token for retrieving your data (https://dash.cloudflare.com/?to=/:account/api-tokens)
- OpenAI credentials (or any other LLM provider) saved, for the agent chat
- (Optional) A Postgres table for saving chat history

## Official Cloudflare API documentation

For full details and specifications, please use the API documentation at: https://developers.cloudflare.com/api/

## LinkedIn post

Let me know if you found this flow useful on my LinkedIn post > here.

tags: #cloudflare, #dns, #domain
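For illustration, the action switch inside a tool like cf_tool could look roughly like the sketch below (for example in a Code node or a tool sub-workflow). The action names, input shape, and environment variable are assumptions; only the Cloudflare endpoint and Bearer-token authentication follow the official API.

```javascript
// Illustrative sketch of the cf_tool getter/setter switch against the Cloudflare v4 API
// Assumed inputs: action ('list' | 'create'), zoneId, and record ({ type, name, content, ttl })
const { action, zoneId, record } = $input.first().json;
const base = `https://api.cloudflare.com/client/v4/zones/${zoneId}/dns_records`;
const headers = {
  Authorization: `Bearer ${$env.CF_API_TOKEN}`, // assumed environment variable holding your API token
  'Content-Type': 'application/json',
};

let response;
if (action === 'list') {
  // getter: read the existing DNS records for the zone
  response = await this.helpers.httpRequest({ method: 'GET', url: base, headers, json: true });
} else if (action === 'create') {
  // setter: add a new DNS record to the zone
  response = await this.helpers.httpRequest({ method: 'POST', url: base, headers, body: record, json: true });
} else {
  throw new Error(`Unsupported action: ${action}`);
}
return [{ json: response }];
```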
by Akshay
## Overview

This project is an AI-powered hotel receptionist built with n8n, designed to handle guest queries automatically through WhatsApp. It integrates Google Gemini, Redis, MySQL, and Google Sheets via LangChain to create an intelligent conversational system that understands and answers booking-related questions in real time.

A standout feature of this workflow is its AI model-switching system: it dynamically assigns users to different Gemini models, balancing traffic, improving performance, and reducing API costs.

## How It Works

1. **WhatsApp Trigger** - The workflow starts when a hotel guest sends a message through WhatsApp. The system captures the message text, contact details, and session information for further processing.
2. **Redis-Based Model Management** - The workflow checks Redis for a saved record of the user's previously assigned AI model. If no record exists, a Model Decider node assigns a new model (e.g., Gemini 1 or Gemini 2). Redis then stores this model assignment for an hour, ensuring consistent routing and controlled traffic distribution (a sketch of this logic appears after the Key Features list below).
3. **Model Selector** - The Model Selector routes each user's request to the correct Gemini instance, enabling parallel execution across multiple AI models for faster response times and cost optimization.
4. **AI Agent Logic** - The LangChain AI Agent serves as the system's reasoning core. It:
   - Interprets guest questions such as "Who checked in today?", "Show me tomorrow's bookings.", or "What's the price for a deluxe suite for two nights?"
   - Generates safe, read-only SQL SELECT queries.
   - Fetches the requested data from the MySQL database.
   - Combines this with dynamic pricing or promotions from Google Sheets, if available.
5. **Response Delivery** - Once the AI Agent formulates an answer, it sends a natural-sounding message back to the guest via WhatsApp, completing the interaction loop.

## Setup & Requirements

### Prerequisites

Before deploying this workflow, ensure you have the following:

- **n8n instance** (local or hosted)
- **WhatsApp Cloud API** with messaging permissions
- **Google Gemini API key** (for both models)
- **Redis database** for user session and model routing
- **MySQL database** for hotel booking and guest data
- **Google Sheets account** (optional, for pricing or offer data)

### Step-by-Step Setup

1. **Configure Credentials** - Add all API credentials in n8n → Settings → Credentials (WhatsApp, Redis, MySQL, Google).
2. **Prepare Databases** - Example MySQL tables: bookings(id, guest_name, room_type, check_in, check_out) and rooms(id, type, rate, status). Ensure the MySQL user has read-only permissions.
3. **Set Up Redis** - Create Redis keys for each user: llm-user:<whatsapp_id> = { "modelIndex": 0 } with a TTL of 3600 seconds (1 hour).
4. **Connect Google Sheets (optional)** - Add your sheet under Google Sheets OAuth2 and use it to manage room rates, discounts, or seasonal offers dynamically.
5. **WhatsApp Webhook Configuration** - In Meta's Developer Console, set the webhook URL to your n8n instance and select message updates to trigger the workflow.
6. **Testing the Workflow** - Send messages like "Who booked today?" or a voice message, and confirm responses include real data from MySQL and contextual replies.

## Key Features

- **Text & voice support** for guest interactions
- **Automatic AI model-switching** using Redis
- **Session memory** for context-aware conversations
- **Read-only SQL query generation** for database safety
- **Google Sheets integration** for live pricing and availability
- **Scalable design** supporting multiple LLM instances
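As a reference for the Redis-based Model Decider described above, here is a minimal, illustrative sketch of the assignment logic in a Code node. The key format llm-user:<whatsapp_id> and the 3600-second TTL come from the setup steps; the round-robin/random choice and the field names are assumptions.

```javascript
// Illustrative Model Decider logic (n8n Code node), run after a Redis "get" on llm-user:<whatsapp_id>
const NUM_MODELS = 2; // e.g., Gemini 1 and Gemini 2
const waId = $input.first().json.whatsappId;      // assumed field from the WhatsApp Trigger
const cached = $input.first().json.redisValue;    // assumed output of the Redis Get node

let modelIndex;
if (cached) {
  // Reuse the model previously assigned to this guest
  modelIndex = JSON.parse(cached).modelIndex;
} else {
  // No record yet: assign a model (simple random spread here; could be load- or region-based)
  modelIndex = Math.floor(Math.random() * NUM_MODELS);
}

// Downstream: a Redis Set node stores { "modelIndex": modelIndex } under llm-user:<waId> with TTL 3600,
// and the Model Selector routes the request to the matching Gemini instance.
return [{ json: { whatsappId: waId, modelIndex, redisKey: `llm-user:${waId}`, ttl: 3600 } }];
```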
## Example Guest Queries

| Guest Query | AI Response Example |
|--------------|--------------------|
| "Who checked in today?" | "Two guests have checked in today: Mr. Ahmed (Room 203) and Ms. Priya (Room 410)." |
| "How much is a deluxe room for two nights?" | "A deluxe room costs $120 per night. The total for two nights is $240." |
| "Do you have any discounts this week?" | "Yes! We're offering a 10% weekend discount on all deluxe and suite rooms." |
| "Show me tomorrow's check-outs." | "Three check-outs are scheduled tomorrow: Mr. Khan (101), Ms. Lee (207), and Mr. Singh (309)." |

## Customization Options

### 🧩 Model Assignment Logic

You can modify the Model Decider node to:
- Assign models based on user load, region, or priority level.
- Increase or decrease the TTL in Redis for longer model persistence.

### 🧠 AI Agent Prompt

Adjust the system prompt to control tone and response behavior, for example:
- Add multilingual support.
- Include upselling or booking confirmation messages.

### 🗂️ Database Expansion

Extend MySQL to include:
- Staff schedules
- Maintenance records
- Restaurant reservations

Then link the new queries in the AI Agent node for richer responses.

## Tech Stack

- **n8n** - Workflow automation & orchestration
- **Google Gemini (PaLM)** - LLM for reasoning & generation
- **Redis** - Model assignment & session management
- **MySQL** - Booking & guest data storage
- **Google Sheets** - Dynamic pricing reference
- **WhatsApp Cloud API** - Messaging interface

## Outcome

This workflow demonstrates how AI automation can transform hotel operations by combining WhatsApp communication, database intelligence, and multi-model AI reasoning. It's a production-ready foundation for scalable, cost-optimized, AI-driven hospitality solutions that deliver fast, accurate, and personalized guest interactions.
by Michael A Putra
## Demo Personalized Email

This n8n workflow is built for AI and automation agencies to promote their workflows through an interactive demo that prospects can try themselves. The featured system is a deeply personalized email demo.

### 🔄 How It Works

1. **Prospect Interaction** - A prospect starts the demo via Telegram. The Telegram bot (created with BotFather) connects directly to your n8n instance.
2. **Demo Guidance** - The RAG agent and instructor guide the user step by step through the demo. Instructions and responses are dynamically generated based on user input.
3. **Workflow Execution** - When the user triggers an action (e.g., testing the email demo), n8n runs the workflow. The workflow collects website data using Crawl4AI or standard HTTP requests.
4. **Email Demo** - The system personalizes and sends a demo email through SparkPost, showing the automation's capability.
5. **Logging and Control** - Each user interaction is logged in your database using their name and ID. The workflow checks limits to prevent misuse or spam.
6. **Error Handling** - If a low-CPU scraping method fails, the workflow automatically escalates to a higher-CPU method.

### ⚙️ Requirements

Before setting up, make sure you have the following:

- **n8n** - Automation platform to run the workflow
- **Docker** - Required to run Crawl4AI
- **Crawl4AI** - For intelligent website crawling
- **Telegram account** - To create your Telegram bot via BotFather
- **SparkPost account** - To send personalized demo emails
- **A database** (e.g., PostgreSQL, MySQL, or SQLite) - To store log data such as user name and ID

### 🚀 Features

- **Telegram interface** using the BotFather API
- **Instructor and RAG agent** to guide prospects through the demo
- **Flow generation limits per user ID** to prevent abuse
- **Low-cost yet powerful web scraping**, escalating from low- to high-CPU flows if earlier ones fail

### 💡 Development Ideas

- Replace the RAG logic with your own query-answering and guidance method
- Remove the flow limit if you're confident the demo can't be misused
- Swap the personalized email demo with any other workflow you want to showcase

### 🧠 Technical Notes

- **Telegram bot** created with BotFather
- **Website crawl process** (see the sketch at the end of this template):
  - Extract sub-links via /sitemap.xml, sitemap_index.xml, or standard HTTP requests
  - Fall back to Crawl4AI if normal requests fail
  - Fetch sub-link content via HTTPS, with Crawl4AI as backup
- **SparkPost** used for sending demo emails

### ⚙️ Setup Instructions

1. **Create a Telegram Bot** - Use BotFather on Telegram to create your bot and get the API token. This token connects your n8n workflow to Telegram.
2. **Create a Log Data Table** - In your database, create a table to store user logs. The table must include at least the following columns:
   - name - the user's name or Telegram username
   - id - the user's unique identifier
3. **Install Crawl4AI with Docker** - Follow the installation guide from the official repository: 👉 https://github.com/unclecode/crawl4ai. Crawl4AI will handle website crawling and content extraction in your workflow.

### 📦 Notes

This setup is optimized for low cost, easy scalability, and real-time interaction with prospects. You can customize each component - Telegram bot behavior, RAG logic, scraping strategy, and email workflow - to fit your agency's demo needs.

👉 You can try the live demo here: @email_demo_bot
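To illustrate the website crawl process described in the Technical Notes, here is a rough sketch of sitemap-first link discovery with a fallback flag for Crawl4AI. The URLs and field names are assumptions; in the actual workflow these steps are performed with HTTP Request nodes and the Crawl4AI container.

```javascript
// Illustrative Code-node sketch: discover sub-links from a sitemap, flag fallback to Crawl4AI
const site = $input.first().json.websiteUrl; // assumed input, e.g. "https://example.com"
const candidates = [`${site}/sitemap.xml`, `${site}/sitemap_index.xml`];

let links = [];
for (const url of candidates) {
  try {
    const xml = await this.helpers.httpRequest({ method: 'GET', url });
    // Very light extraction of <loc> entries from the sitemap XML
    links = [...String(xml).matchAll(/<loc>(.*?)<\/loc>/g)].map((m) => m[1]);
    if (links.length) break;
  } catch (err) {
    // Sitemap missing or blocked - try the next candidate
  }
}

// If no links were found via cheap HTTP requests, escalate to the higher-CPU Crawl4AI flow
return [{ json: { site, links, useCrawl4ai: links.length === 0 } }];
```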
by Muhammadumar
This is the core AI agent used for isra36.com.

Don't trust complex AI-generated SQL queries without double-checking them in a safe environment. That's where isra36 comes in. It automatically creates a test environment with the necessary data, generates code for your task, runs it to double-check correctness, and handles errors if necessary. If you enable auto-fixing, isra36 will detect and fix issues on its own; if not, it will ask for your permission before making changes during debugging. In the end, you get thoroughly verified code along with full details about the environment it ran in.

## Setup

This is an embedded chat for the website, but you can pin the input data and run it on your own n8n instance.

### Input data

- **sessionId**: uuid_v4. Required to handle ongoing conversations and to create table names (used as a prefix).
- **threadId**: string | nullable. If aiProvider is openai, conversation history is managed on OpenAI's side. This is not needed in the first request - it will start a new conversation. For ongoing conversations, you must provide this value; you can get it from the OpenAIMainBrain node output after the first run. If you want to start a new conversation, just leave it as null.
- **apiKey**: string. Your API key for the selected aiProvider.
- **aiProvider**: string. Currently supported values: openai, openrouter.
- **model**: string. The AI model key (e.g., gpt-4.1, o3-mini, or any supported model key from OpenRouter).
- **autoErrorFixing**: boolean. If true, errors encountered when running code in the environment are fixed automatically. If false, the agent asks for your permission before attempting a fix.
- **chatInput**: string. The user's prompt or message.
- **currentDbSchemaWithData**: string. A JSON representation of the database schema with sample data, used to inform the AI about the current database structure during an ongoing conversation. Use the value '[]' in the first request. Example string for a filled DB structure: '{"users":[{"id":1,"name":"John Doe","email":"john.d@example.com"},{"id":2,"name":"Jane Smith","email":"jane.s@example.com"}],"products":[{"product_id":101,"product_name":"Laptop","price":999.99}]}'
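For reference, a complete pinned input payload for a first request could look like the example below. All values are illustrative placeholders, not real credentials.

```json
{
  "sessionId": "3f2b8c1e-9a47-4b6d-8f10-2c5e7d9a1b34",
  "threadId": null,
  "apiKey": "YOUR_OPENAI_OR_OPENROUTER_API_KEY",
  "aiProvider": "openai",
  "model": "gpt-4.1",
  "autoErrorFixing": true,
  "chatInput": "Create a query that lists the top 5 customers by total order value.",
  "currentDbSchemaWithData": "[]"
}
```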
Make sure to fill in your credentials:

- Your OpenAI or OpenRouter API key
- Access to a local PostgreSQL database for test execution

You can view your generated tables using your preferred PostgreSQL GUI; we recommend DBeaver. Alternatively, you can activate the "Deactivated DB Visualization" nodes below. To use them, connect each to the most recent successful Set node and manually adjust the output. However, the easiest and most efficient method is to use a GUI.

## Workflow Explanation

- We store all input values in the localVariables node. Please use this node to get the necessary data.
- OpenAI has a built-in assistant that manages chat history on their side. For OpenRouter, we handle chat history locally. That's why we use separate nodes like ifOpenAi and isOpenAi. Note that if logic can also be used inside nodes.
- The AutoErrorFixing loop will run only a limited number of times, as defined by the isMaxAutoErrorReached node. This prevents infinite loops.
- The Execute_AI_result node connects to the PostgreSQL test database used to execute queries.

## Guidance on customization

This setup is built for PostgreSQL, but it can be adapted to any programming language, and the logic can be extended to any programming framework. To customize the logic for other languages:

1. Change the instruction parameter in the localVariables node.
2. Replace the Execute_AI_result PostgreSQL node with another executable node; for example, you can use the HTTP Request node.
3. Update the GenerateErrorPrompt node's prompt parameter to generate code specific to your target language or framework.

Any workflows built on top of this must credit the original author and be released under an open-source license.
by franck fambou
## Overview

This intelligent chatbot workflow enables natural language conversations with your documents, supporting multiple file formats including PDFs, Word documents, Excel spreadsheets, and text files. Built with Retrieval-Augmented Generation (RAG), the chatbot can understand, analyze, and answer questions about your document content with contextual accuracy and intelligent responses.

## How It Works

Intelligent document processing and conversation pipeline:

- **Multi-Format Document Ingestion**: Automatically processes and indexes various document formats (PDF, DOCX, XLSX, TXT, etc.)
- **Smart Content Chunking**: Breaks documents into meaningful segments while preserving context and relationships
- **Vector Database Storage**: Creates searchable embeddings for fast and accurate information retrieval
- **Contextual Conversation Engine**: Uses AI to understand user queries and retrieve relevant document sections
- **Natural Language Responses**: Generates human-like responses with citations and source references
- **Multi-Turn Conversations**: Maintains conversation history and context across multiple interactions
- **Real-Time Processing**: Instant responses with live document updates and dynamic content refresh

## Setup Instructions

Estimated setup time: 15-20 minutes

### Prerequisites

- n8n instance (v0.200.0 or higher recommended)
- OpenAI/Gemini API key for embeddings and chat completion
- Vector database service (optional: Pinecone, Weaviate, or Qdrant)
- File storage service (optional: Google Drive, Dropbox, AWS S3)
- Web server for the chatbot interface (optional)

### Configuration Steps

1. **Configure Document Input Sources**
   - Set up a file upload webhook for direct document submission
   - Configure cloud storage watchers for automatic document processing
   - Add support for multiple file formats and size limits
   - Set up document validation and security checks
2. **Set Up the Document Processing Pipeline**
   - Configure text extraction engines for different file types
   - Set up intelligent chunking parameters (chunk size, overlap, boundaries)
   - Add metadata extraction for document categorization
   - Configure OCR for scanned documents (optional)
3. **Configure the Vector Database**
   - Set up your chosen vector database credentials
   - Configure embedding model settings (Gemini models / text-embedding-004 recommended)
   - Set up the collection/index structure for document storage
   - Configure search parameters and similarity thresholds
4. **Set Up the AI Chat Engine**
   - Add your AI service API credentials (Gemini, Claude, etc.)
   - Configure conversation prompts and system instructions
   - Set up context window management and token optimization
   - Add response formatting and citation rules
5. **Configure the Chat Interface**
   - Set up webhook endpoints for the chat API
   - Configure session management and conversation history
   - Add authentication and rate limiting (optional)
   - Set up real-time updates and streaming responses
6. **Set Up Monitoring & Analytics**
   - Configure conversation logging and analytics
   - Set up performance monitoring for response times
   - Add usage tracking and cost monitoring
   - Configure error handling and failover mechanisms

## Use Cases

### Business & Enterprise

- **Knowledge Base Queries**: Ask questions about company policies, procedures, and documentation
- **Contract Analysis**: Query legal documents, contracts, and compliance materials
- **Training Materials**: Interactive learning with training manuals and educational content
- **Financial Reports**: Analyze and discuss financial statements, budgets, and forecasts

### Research & Academia

- **Research Paper Analysis**: Discuss findings, methodologies, and citations from academic papers
- **Literature Reviews**: Compare and contrast multiple research documents
- **Thesis Support**: Get insights from reference materials and research data
- **Grant Proposals**: Analyze requirements and optimize proposal content

### Legal & Compliance

- **Legal Document Review**: Query contracts, agreements, and legal texts
- **Regulatory Compliance**: Understand compliance requirements from regulatory documents
- **Case Law Research**: Analyze legal precedents and court decisions
- **Policy Analysis**: Interpret organizational policies and procedures

### Technical Documentation

- **API Documentation**: Interactive queries about technical specifications
- **User Manuals**: Get help and guidance from product documentation
- **Code Documentation**: Understand codebases and technical implementations
- **Troubleshooting Guides**: Interactive problem-solving with technical guides

### Personal Productivity

- **Document Summarization**: Get quick summaries of long documents
- **Information Extraction**: Find specific data points across multiple documents
- **Content Research**: Research topics across your personal document library
- **Meeting Notes**: Query and analyze meeting transcripts and notes

## Key Features

### Advanced Document Processing

- **Multi-Format Support**: PDF, DOCX, XLSX, TXT, PPTX, and more
- **Intelligent Chunking**: Context-aware document segmentation
- **Metadata Extraction**: Automatic categorization and tagging
- **OCR Integration**: Process scanned documents and images with text

### Intelligent Conversation

- **Contextual Understanding**: Maintains conversation context and document relationships
- **Source Attribution**: Provides citations and references for all answers
- **Multi-Document Queries**: Compare and analyze across multiple documents
- **Follow-up Questions**: Natural conversation flow with clarifying questions

### Performance & Scalability

- **Fast Retrieval**: Vector-based semantic search for instant responses
- **Scalable Architecture**: Handles large document collections efficiently
- **Batch Processing**: Processes multiple documents simultaneously
- **Caching System**: Optimized response times with intelligent caching

### Security & Privacy

- **Document Encryption**: Secure storage and transmission of sensitive documents
- **Access Control**: User-based permissions and document access restrictions
- **Audit Logging**: Complete conversation and access audit trails
- **Data Retention**: Configurable data retention and deletion policies

## Technical Architecture

### Document Processing Flow
File Upload → Format Detection → Text Extraction → Content Chunking → Metadata Extraction → Embedding Generation → Vector Storage → Index Creation

### Conversation Flow

User Query → Intent Analysis → Vector Search → Context Retrieval → Response Generation → Source Attribution → Answer Formatting → Delivery

### Supported File Formats

- **Documents**: PDF, DOC, DOCX, RTF, TXT, MD
- **Spreadsheets**: XLS, XLSX, CSV
- **Presentations**: PPT, PPTX
- **Images**: PNG, JPG (with OCR)
- **Archives**: ZIP (auto-extracts supported formats)
- **Web**: HTML, XML

## Integration Options

### Chat Interfaces

- **Web Widget**: Embeddable chat widget for websites
- **API Endpoints**: RESTful API for custom integrations
- **Slack/Teams**: Direct integration with team collaboration tools
- **Mobile Apps**: API-first design for mobile application integration

### Data Sources

- **Cloud Storage**: Google Drive, Dropbox, OneDrive, AWS S3
- **Document Systems**: SharePoint, Confluence, Notion
- **Email**: Process attachments from email systems
- **CRM/ERP**: Integration with business systems

## Performance Specifications

- **Response Time**: < 3 seconds for typical queries
- **Document Capacity**: Supports collections of 10,000+ documents
- **Concurrent Users**: Scales to handle multiple simultaneous conversations
- **Accuracy**: > 90% relevance for domain-specific queries

## Advanced Configuration Options

### Customization

- **Custom Prompts**: Tailor AI behavior for specific use cases
- **Branding**: Customize the chat interface with your company branding
- **Language Support**: Multi-language document processing and responses
- **Domain Expertise**: Fine-tune for specific industries or domains

### Analytics & Monitoring

- **Usage Analytics**: Track popular queries and document usage
- **Performance Metrics**: Monitor response times and accuracy
- **User Feedback**: Collect ratings and improve responses
- **A/B Testing**: Test different configurations and prompts

## Troubleshooting & Support

### Common Issues

- **Slow Responses**: Check vector database performance and API limits
- **Inaccurate Answers**: Review chunking strategy and embedding quality
- **Format Errors**: Verify document formats and processing capabilities
- **Memory Issues**: Monitor token usage and context window limits

### Optimization Tips

- Use clear, specific questions for best results
- Ensure documents are well-formatted with proper headers
- Perform regular vector database maintenance for optimal performance
- Monitor API usage to optimize costs and performance
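To illustrate the conversation flow above (context retrieval feeding response generation with source attribution), here is a minimal sketch of how retrieved chunks could be assembled into a chat prompt in a Code node. The chunk field names and the citation format are assumptions; in the workflow this step is typically handled by the vector store retriever and the AI chat node.

```javascript
// Illustrative sketch: build a RAG prompt with numbered source citations from retrieved chunks
const question = $input.first().json.userQuery;   // assumed field from the chat webhook
const chunks = $input.all().map((i) => i.json);   // assumed output of the vector search step

// Number each retrieved chunk so the model can cite it as [1], [2], ...
const context = chunks
  .map((c, idx) => `[${idx + 1}] (${c.documentName || c.source || 'unknown source'})\n${c.chunkText}`)
  .join('\n\n');

const systemPrompt =
  'Answer the user question using only the numbered context passages. ' +
  'Cite the passages you used as [n] after each claim. If the context is insufficient, say so.';

return [{
  json: {
    systemPrompt,
    userPrompt: `Context:\n${context}\n\nQuestion: ${question}`,
    sources: chunks.map((c, idx) => ({ ref: idx + 1, document: c.documentName || c.source })),
  },
}];
```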
by Ronnie Craig
# AI Email Assistant - Smart Email Processing & Response 🤖

A sophisticated n8n workflow that transforms your email management with AI-powered classification, automatic responses, and intelligent organization.

## 🎯 What This Workflow Does

This advanced AI email assistant automatically:

- **Analyzes** incoming emails using intelligent classification
- **Categorizes** messages by priority, urgency, and type
- **Generates** context-aware draft responses in your voice
- **Organizes** emails with smart labeling and filing
- **Alerts** you to urgent messages instantly
- **Manages** attachments with cloud storage integration

Perfect for busy professionals, customer service teams, and anyone drowning in email!

## ✨ Key Features

### 🧠 Intelligent Email Analysis

- **Context-Aware Processing**: Understands email threads and conversation history
- **Smart Classification**: Automatically categorizes by priority, urgency, and required actions
- **Multi-Criteria Assessment**: Evaluates response needs, follow-up requirements, and team involvement
- **Dynamic Label Management**: Syncs with your Gmail labels for consistent organization

### 📝 AI-Powered Response Generation

- **Professional Draft Creation**: Generates contextually appropriate responses
- **Tone Matching**: Mirrors the formality and style of incoming emails
- **Multiple Response Options**: Provides alternatives for complex inquiries
- **Customizable Voice**: Adapts to your business communication style

### 🔔 Smart Notification System

- **Urgent Email Alerts**: Instant notifications for high-priority messages
- **Telegram/Slack Integration**: Get alerts where you work
- **Smart Filtering**: Only notifies when truly urgent
- **Quick Action Links**: Direct links to Gmail for immediate response

### 📎 Advanced Attachment Management

- **Automatic Cloud Upload**: Saves attachments to Google Drive
- **Smart File Naming**: Organized by date, sender, and content
- **Duplicate Detection**: Prevents redundant uploads
- **File Type Filtering**: Optional filtering for security

### 🏷️ Intelligent Organization

- **Auto-Labeling**: Applies relevant Gmail labels automatically
- **Progress Tracking**: Marks emails as "processed" or "digested"
- **Priority Indicators**: Visual priority levels in your inbox
- **Category-Based Sorting**: Groups similar emails together

## 🛠️ Setup Instructions

### Prerequisites

- n8n instance (cloud or self-hosted)
- Gmail account with API access
- OpenAI API key (or a compatible AI service)
- Google Drive account (for attachments)
- Telegram bot (optional, for alerts)

### Step 1: Import the Workflow

1. Download AI_Email_Assistant_Community_Template.json
2. In n8n, navigate to Templates → Import from File
3. Select the downloaded JSON file
4. The workflow will import as inactive

### Step 2: Configure Credentials

**Gmail Setup:**

1. Create Gmail OAuth2 credentials in n8n
2. Configure the following nodes:
   - Email_Trigger
   - Get Conversation Thread
   - Get Latest Message Content
   - Create Draft Response
   - Assign Classification Label
   - Mark as Processed
   - Get All Gmail Labels
3. Test connections to ensure proper authentication

**AI Model Setup:**

1. Configure the AI Language Model node; options include OpenAI (GPT-4, GPT-3.5-turbo), Anthropic Claude (recommended), and local LLMs via Ollama
2. Add your API credentials
3. Test the connection

**Google Drive Setup (Optional):**

1. Create Google Drive OAuth2 credentials
2. Configure the Upload to Google Drive and Check Existing Attachments nodes
3. Replace YOUR_GOOGLE_DRIVE_FOLDER_ID with your folder ID
4. Create a dedicated folder for email attachments

**Telegram Alerts (Optional):**

1. Create a Telegram bot via @BotFather
2. Get your chat ID
3. Configure the Send Urgent Alert node
4. Replace YOUR_TELEGRAM_CHAT_ID with your actual chat ID
### Step 3: Customize AI Instructions

**Email Classification (AI Email Classifier node):**

- Review the classification criteria in the system message
- Adjust urgency keywords for your business
- Modify priority levels based on your needs
- Customize category definitions

**Response Generation (AI Response Generator node):**

- Update the response guidelines
- Replace [YOUR NAME] with your actual name
- Adjust tone and style preferences
- Add company-specific response templates

### Step 4: Configure Gmail Labels

**Create custom labels in Gmail:**

- High Priority
- Medium Priority
- Low Priority
- Needs Response
- Urgent
- Follow Up Required
- Processed (or use existing labels)

**Update label IDs:**

1. Run the workflow once to get the label IDs
2. Replace YOUR_PROCESSED_LABEL_ID in the "Mark as Processed" node
3. Update any hardcoded label references

### Step 5: Test and Deploy

**Testing process:**

1. Send yourself a test email
2. Monitor the workflow execution
3. Verify classification accuracy
4. Check draft response quality
5. Confirm labeling works correctly
6. Test urgent alert functionality

**Fine-tuning:**

- Adjust AI prompts based on test results
- Refine classification criteria
- Update response templates
- Modify notification preferences

**Go live:**

1. Activate the workflow
2. Monitor initial performance
3. Adjust settings as needed

## 📊 Email Classification System

### Priority Levels

- **High**: Urgent matters requiring immediate attention
- **Medium**: Important but not time-critical
- **Low**: Routine or informational messages

### Classification Categories

- **toReply**: Direct questions or requests requiring a response
- **urgent**: Immediate business impact or crisis situations
- **dateRelated**: Time-sensitive events or deadlines
- **attachmentsToUpload**: Financial docs or important files
- **requiresFollowUp**: Multi-step processes or ongoing projects
- **forwardToTeam**: Cross-departmental or collaborative items

### Response Generation Guidelines

- **Professional Tone**: Business casual, warm but professional
- **Context Awareness**: Considers email thread history
- **Structured Responses**: Clear paragraphs with actionable next steps
- **Placeholder System**: Uses [PLACEHOLDER] for missing information
- **Alternative Options**: Provides multiple response choices for complex inquiries

## 🔧 Advanced Customization

### File Type Filtering

```javascript
// In the Get Specific File Types node, modify:
if (mimeType === 'application/pdf' ||
    mimeType === 'text/xml' ||
    mimeType === 'image/jpeg') {
  // Process file
}
```

### Custom Urgency Keywords

Update the AI classifier prompt with your business-specific urgent terms:

- Keywords: "URGENT", "EMERGENCY", "CRITICAL", "ASAP", "IMMEDIATE"
- Custom terms: "CLIENT ESCALATION", "SYSTEM DOWN", "LEGAL DEADLINE"

### Response Templates

Customize the response generator with your company voice:

- Greeting style: "Hi [Name]" vs "Dear [Name]"
- Closing: "Best Regards" vs "Thank you" vs "Cheers"
- Company-specific phrases and terminology

### Integration Options

- **CRM Systems**: Add nodes to create tasks in your CRM
- **Project Management**: Auto-create tickets in Jira, Asana, etc.
- **Calendar Integration**: Schedule follow-ups automatically
- **Slack/Teams**: Alternative notification channels
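For orientation, the structured output of the AI Email Classifier, based on the priority levels and categories listed above, might look something like the example below. The exact field names depend on the classifier prompt in your imported workflow, so treat this as an illustrative assumption rather than the template's fixed schema.

```json
{
  "priority": "High",
  "categories": ["toReply", "urgent"],
  "requiresFollowUp": false,
  "forwardToTeam": false,
  "summary": "Client reports the production system is down and asks for an immediate status update.",
  "suggestedLabel": "High Priority"
}
```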
## 🚨 Troubleshooting

### Common Issues

1. **Gmail Authentication Errors**
   - Verify OAuth2 credentials are active
   - Check Gmail API quotas
   - Ensure proper scopes are configured
2. **AI Classification Inconsistency**
   - Review and refine classification prompts
   - Add more specific examples
   - Adjust confidence thresholds
3. **Response Generation Problems**
   - Validate the AI model configuration
   - Check API key and quotas
   - Test with simpler email examples
4. **Attachment Upload Failures**
   - Verify Google Drive permissions
   - Check the folder ID configuration
   - Ensure sufficient storage space
5. **Missing Notifications**
   - Test the Telegram bot configuration
   - Verify the chat ID is correct
   - Check the urgency classification logic

### Performance Optimization

- **Rate Limiting**: Gmail has API quotas - monitor usage
- **Batch Processing**: The workflow processes one email at a time
- **Error Handling**: Built-in retry logic for reliability
- **Resource Management**: Monitor AI API costs and usage

## 📈 Best Practices

### 1. Email Management

- **Regular Monitoring**: Review classifications weekly
- **Label Hygiene**: Keep Gmail labels organized
- **Feedback Loop**: Manually correct misclassifications
- **Archive Strategy**: Set up auto-archiving for processed emails

### 2. AI Optimization

- **Prompt Engineering**: Continuously refine AI instructions
- **Example Training**: Add specific examples for your business
- **Context Limits**: Monitor token usage and costs
- **Model Selection**: Choose an appropriate AI model for your needs

### 3. Security Considerations

- **Credential Management**: Regularly rotate API keys
- **Data Privacy**: Review what data is sent to AI services
- **Access Control**: Limit workflow access to authorized users
- **Audit Logging**: Monitor workflow executions

### 4. Workflow Maintenance

- **Regular Updates**: Keep n8n and node versions current
- **Backup Strategy**: Export workflow configurations regularly
- **Documentation**: Keep setup notes and customizations documented
- **Testing**: Test major changes in a development environment first

## 🤝 Contributing to the Community

This workflow template demonstrates:

- **Comprehensive AI Integration**: Multiple AI touchpoints working together
- **Production-Ready Architecture**: Error handling, retry logic, and monitoring
- **Extensive Documentation**: Clear setup and customization guidance
- **Flexible Configuration**: Adaptable to different business needs
- **Best Practice Examples**: Security, performance, and maintenance considerations

## 📄 License & Support

This workflow is provided free to the n8n community under the MIT License.

**Community resources:**

- n8n Community Forum for questions
- GitHub Issues for bug reports
- Documentation updates welcome

**Professional support:** For enterprise deployments or custom modifications, consider:

- n8n Cloud for managed hosting
- Professional services for complex integrations
- Custom AI model training for specific use cases

Transform your email workflow today! 🚀 This AI Email Assistant reduces email processing time by up to 90% while ensuring no important message goes unnoticed. Perfect for busy professionals who want to stay responsive without being overwhelmed by their inbox.
by Davidson Ahuruezenma
# AI-Powered Academic Assignment Generator

This n8n workflow template automates the complete academic assignment generation process, from student query to professional document delivery. Students submit assignment requests via Telegram, and the workflow generates comprehensive, plagiarism-free academic content using Google Gemini AI, formats it into professional PDF documents, and delivers downloadable links while maintaining complete records.

## What does this workflow do?

- 📱 **Telegram Integration**: Receives structured assignment requests from students
- 🤖 **AI Content Generation**: Creates comprehensive academic answers (500+ words per question)
- 📄 **Professional Formatting**: Generates university-standard HTML/PDF documents
- ☁️ **Cloud Storage**: Automatically stores files in organized Google Drive folders
- 📊 **Record Keeping**: Maintains a complete assignment database in Google Sheets
- 🔄 **End-to-End Automation**: Complete pipeline from query to document delivery

## How it works

The workflow processes student assignment requests through 16 interconnected nodes, handling everything from input parsing to final document delivery:

Input → AI Processing → Document Generation → Storage & Delivery

## Setup Requirements

**Credentials needed:**

- **Telegram Bot Token** (for receiving/sending messages)
- **Google Gemini API Key** (for AI content generation)
- **Google Sheets API** (for record keeping)
- **Google Drive API** (for file storage)
- **PDFCrowd API** (for PDF conversion)

**Pre-setup steps:**

1. Create a Telegram bot and obtain the bot token
2. Set up the Google Drive folder structure for file organization
3. Create a Google Sheets template with proper column headers
4. Configure API rate limits and usage quotas

## Workflow Breakdown

### 🔌 Input Processing Nodes

**Student Query Intake Bot (Telegram Trigger)**

- Listens for incoming student messages with assignment details
- Monitors a specific chat ID for authorized users
- Triggers the workflow when structured assignment requests are received

**Structured Data Parser (Code Node)**

- Extracts student information using regex patterns
- Parses: Name, Faculty, Department, Level, Course, Registration Number
- Automatically sets the current date and handles missing data
- Outputs a clean JSON structure for AI processing (see the sketch after the Input Format section below)

### 🤖 AI Processing Nodes

**Student Assignment Auto-Composer (LangChain Agent)**

- Main AI orchestrator for assignment generation
- Uses structured prompts for consistent academic formatting
- Generates 500-word answers per question with APA citations
- Ensures plagiarism-free, original academic content

**Generator Model (Google Gemini Chat)**

- Primary AI model for high-quality content generation
- Handles complex academic writing and formatting requirements

**Fallback Model Generator (Google Gemini - Gemma)**

- Backup AI model ensuring workflow reliability
- Activates when the primary model encounters issues

**Structured Output Parser (LangChain)**

- Validates AI-generated content against a JSON schema
- Enforces required field compliance and format consistency
- Auto-fixes common formatting issues

### 🔧 Processing & Error Handling

**Error Handler (Code Node)**

- Handles text processing errors and data type issues
- Converts non-string values and provides error recovery
- Ensures workflow continuity even with problematic data

**Wait Node**

- Introduces a strategic 2-second delay for processing stability
- Allows AI processing to complete before the next steps

### 📊 Data Management Nodes

**Edit Fields (Set Node)**

- Maps AI output to the Google Sheets column structure
- Ensures data consistency for database storage
**Long Essay Record Sheet (Google Sheets)**

- Stores complete assignment records with metadata
- Maintains a comprehensive student assignment database
- Uses the Name field as a unique identifier for record updates

### 📄 Document Generation Nodes

**Static HTML Builder (LangChain Agent)**

- Converts structured data into professional HTML documents
- Applies academic formatting: Times New Roman, 12pt, double-spaced
- Creates a university-standard document structure

**HTTP Request (PDF Conversion)**

- Converts HTML to high-quality PDF using the PDFCrowd API
- Maintains academic formatting and professional appearance
- Uses the student name for file identification

### ☁️ Storage & Delivery Nodes

**Upload File (Google Drive)**

- Stores generated PDFs in organized Drive folders
- Creates shareable links for easy access
- Maintains systematic file organization

**Send Text Message (Telegram)**

- Delivers the Google Drive download link to the student
- Completes the automation cycle with instant access

## Input Format

Students should format their Telegram messages as follows:

Name: John Doe
Faculty: Engineering
Department: Computer Science
Level: 200L
Course: CSC 201 - Data Structures
Reg number: 2024001234
Question:
Explain the concept of Big O notation
Compare different sorting algorithms
Discuss the applications of binary trees
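To illustrate the Structured Data Parser described above, here is a rough sketch of the kind of regex-based extraction a Code node could perform on the message format just shown. The patterns and output field names are assumptions; adapt them to the actual parser in the imported workflow.

```javascript
// Illustrative Code-node sketch: parse a structured Telegram assignment request into JSON
const text = $input.first().json.message?.text || '';

// Grab "Label: value" pairs up to the end of each line
const grab = (label) => (text.match(new RegExp(`${label}\\s*:\\s*(.+)`, 'i')) || [])[1]?.trim() || '';

// Everything after "Question:" is treated as the list of questions, one per line
const questionBlock = (text.split(/Question\s*:/i)[1] || '').trim();
const questions = questionBlock.split('\n').map((q) => q.trim()).filter(Boolean);

return [{
  json: {
    name: grab('Name'),
    faculty: grab('Faculty'),
    department: grab('Department'),
    level: grab('Level'),
    course: grab('Course'),
    regNumber: grab('Reg number'),
    questions,
    date: new Date().toISOString().slice(0, 10), // current date set automatically
  },
}];
```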
## Features

### ✨ Intelligent Processing

- **Smart Input Parsing**: Handles unstructured text inputs automatically
- **Multi-Question Support**: Processes complex assignment requirements
- **Data Validation**: Ensures complete and accurate information capture

### 🎓 Academic Excellence

- **University Standards**: Professional formatting and citation styles
- **Original Content**: Plagiarism-free AI-generated assignments
- **Comprehensive Answers**: 500+ words per question with detailed explanations

### 🛡️ Reliability & Error Handling

- **Fallback Systems**: Multiple AI models for continuous operation
- **Error Recovery**: Automatic handling of processing issues
- **Data Integrity**: Schema validation and field verification

## Use Cases

This workflow template is perfect for:

- 📚 **Educational Institutions**: Automate student assignment processing and grading assistance
- 👨🎓 **Academic Support Services**: Provide structured learning assistance and content generation
- 🏫 **Online Learning Platforms**: Integrate assignment automation into educational systems
- 📝 **Content Creation Services**: Generate academic-quality content for educational purposes
- 🤖 **AI Learning Projects**: Implement complex AI workflows with multiple service integrations

## Output Examples

**Generated assignment features:**

- **Professional formatting** with Times New Roman, 12pt font, double spacing
- **Complete academic structure** including headers, student information, questions, and references
- **Comprehensive answers** averaging 500+ words per question with detailed explanations
- **Proper citations** in APA format with authentic academic references
- **PDF delivery** through shareable Google Drive links

**Database records:**

- Complete student information tracking
- Assignment question and answer storage
- Timestamp and metadata preservation
- Easy retrieval and analysis capabilities

## Performance & Reliability

- **Processing Time**: 2-3 minutes per assignment
- **Success Rate**: >95% with fallback mechanisms
- **Content Quality**: University-standard academic writing
- **Scalability**: Handles multiple concurrent requests
- **Error Recovery**: Automatic retry and alternative processing paths

## Customization Options

**Easily configurable elements:**

- **Chat IDs**: Modify for different Telegram groups or users
- **AI Models**: Switch between different Google Gemini models
- **Document Formatting**: Adjust academic standards and styling
- **Storage Locations**: Configure Google Drive folders and naming conventions
- **Database Fields**: Modify Google Sheets columns and data structure

**Advanced customizations:**

- Add support for different document formats (Word, LaTeX)
- Integrate additional AI providers (OpenAI, Claude, etc.)
- Implement grading and feedback mechanisms
- Add multi-language support
- Create batch processing capabilities

## Getting Started

1. Import the workflow into your n8n instance
2. Configure credentials for all required services
3. Set up the Telegram bot and obtain the necessary permissions
4. Create the Google Drive folders and Google Sheets template
5. Test with sample data to ensure proper functionality
6. Deploy and monitor for production use

## Tags

academic, education, ai, telegram, google-sheets, pdf-generation, automation, langchain, assignment, student-support
by vinci-king-01
## How it works

Turn Amazon into your personal competitive intelligence goldmine! This AI-powered workflow automatically monitors Amazon markets 24/7, delivering deep competitor insights and pricing intelligence that would otherwise take 10+ hours of manual research weekly.

### Key Steps

1. **Daily Market Scan** - Runs automatically at 6:00 AM UTC to capture fresh competitive data
2. **AI-Powered Analysis** - Uses ScrapeGraphAI to intelligently extract pricing, product details, and market positioning
3. **Competitive Intelligence** - Analyzes competitor strategies, pricing gaps, and market opportunities
4. **Keyword Goldmine** - Identifies high-value keyword opportunities your competitors are missing
5. **Strategic Insights** - Generates actionable recommendations for pricing and positioning
6. **Automated Reporting** - Delivers comprehensive market reports directly to Google Docs

## Set up steps

Setup time: 15-20 minutes

1. **Configure ScrapeGraphAI credentials** - Add your ScrapeGraphAI API key for intelligent web scraping
2. **Set up Google Docs integration** - Connect Google OAuth2 for automated report generation
3. **Customize the Amazon search URL** - Target your specific product category or market niche
4. **Configure IP rotation** - Set up proxy rotation if needed for large-scale monitoring
5. **Test with sample products** - Start with a small product set to validate data accuracy
6. **Set competitive alerts** - Define thresholds for price changes and market opportunities

Save 10+ hours weekly while staying ahead of your competition with real-time market intelligence!
by Paul
# 🚀 Google Search Console MCP Server

## 📋 Description

This n8n workflow serves as a Model Context Protocol (MCP) server, connecting MCP-compatible AI tools (like Claude) directly to the Google Search Console APIs. With this workflow, users can automate critical SEO tasks and manage Google Search Console data effortlessly via MCP endpoints.

**Included functionalities:**

- 📌 List verified sites
- 📌 Retrieve detailed site information
- 📌 Access Search Analytics data
- 📌 Submit and manage sitemaps
- 📌 Request URL indexing

OAuth2 is fully supported for secure and seamless API interactions.

## 🛠️ Setup Instructions

### 🔑 Prerequisites

- **n8n instance** (cloud or self-hosted)
- Google Cloud project with the following APIs enabled:
  - Google Search Console API
  - Web Search Indexing API
- OAuth2 credentials from Google Cloud

### ⚙️ Workflow Setup

**Step 1: Import Workflow**

Open n8n, select "Import from JSON", and paste this workflow JSON.

**Step 2: Configure OAuth2 Credentials**

Navigate to Settings → Credentials and add new Google OAuth2 API credentials with the Client ID and Client Secret from Google Cloud, using these scopes:

- https://www.googleapis.com/auth/webmasters.readonly
- https://www.googleapis.com/auth/webmasters
- https://www.googleapis.com/auth/indexing

**Step 3: Configure Webhooks**

Webhook URLs are auto-generated in the MCP Server Trigger node. Ensure the webhooks are publicly accessible via HTTPS.

**Step 4: Testing**

Test your endpoints with sample HTTP requests to confirm everything is working correctly; a sketch of a direct Search Analytics request appears at the end of this template.

## 🎯 Usage Examples

- **List Sites**: Fetch all verified Search Console sites.
- **Get Site Info**: Get detailed information about a particular site.
- **Search Analytics**: Pull metrics such as clicks, impressions, and rankings.
- **Submit Sitemap**: Automatically submit sitemaps.
- **Request URL Indexing**: Trigger Google's indexing for specific URLs instantly.

## 🚩 Use Cases & Applications

- SEO automation workflows
- AI-driven SEO analytics
- Real-time website performance monitoring
- Automated sitemap management
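As an example of the kind of underlying call the Search Analytics functionality wraps, a direct request to the Search Console API looks roughly like the sketch below. The dates, dimensions, example property URL, and token environment variable are placeholders; in the workflow these requests go through the OAuth2-authenticated nodes rather than a raw token.

```javascript
// Illustrative sketch: query Search Analytics for the top queries of a verified property
const siteUrl = encodeURIComponent('https://www.example.com/'); // your verified property (placeholder)

const response = await fetch(
  `https://www.googleapis.com/webmasters/v3/sites/${siteUrl}/searchAnalytics/query`,
  {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.GSC_ACCESS_TOKEN}`, // OAuth2 access token (assumed env var)
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      startDate: '2024-01-01',
      endDate: '2024-01-31',
      dimensions: ['query'],
      rowLimit: 25,
    }),
  },
);

const data = await response.json();
// Each row contains keys (the query), clicks, impressions, ctr, and position
console.log(data.rows);
```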
by Jean-Marie Rizkallah
# 🧩 Jamf Policies Export to Slack

Quickly export and review your entire Jamf policy configuration, including triggers, frequencies, and scope, directly in Slack. This enables IT and security teams to audit policy setups without logging into Jamf or generating reports manually.

## ❗The Problem

Jamf Pro lacks a straightforward way to quickly review or share a list of all configured policies, including key attributes like frequency, scope, or triggers. Security teams often need this for audit or compliance reviews, but navigating Jamf's UI or exporting via the API is time-consuming.

## 🔧 This Fixes It

This workflow fetches all policies, extracts the most relevant fields, compiles them into a CSV file, and posts that readable file to a designated Slack channel, automatically or on demand.

## ✅ Prerequisites

• A Jamf Pro API key (OAuth2) with read access to policies
• A Slack app with permission to post files into your chosen channel

## 🔍 How it works

• Manually trigger or use the webhook to initiate the flow
• Retrieve all policies from Jamf via the XML API
• Convert the XML response into JSON
• Split and loop through each policy ID
• Retrieve detailed data for each policy
• Format the relevant fields (ID, name, trigger, scope, etc.); see the sketch below
• Convert the final data set into a .csv file
• Upload the file to your Slack channel

## ⚙️ Set up steps

• Takes ~10 minutes to configure
• Set the Jamf BaseURL in the "Jamf Server" node
• Configure Jamf OAuth2 credentials in the HTTP Request nodes
• Adjust the fields for export in the "Set-fields" node
• Set your Slack credentials and target channel in the "Post to Slack" node
• Optional: Customize the exported fields or filename

## 🔄 Automation Ready

Schedule this flow daily/weekly, or tie it to change events to keep your team informed.
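To illustrate the field-formatting step, a Code node placed between the per-policy lookups and the CSV conversion could look roughly like the sketch below. The property paths depend on how the Jamf XML response is converted to JSON, so treat the paths and field list as assumptions to adjust in the Set-fields node.

```javascript
// Illustrative Code-node sketch: flatten per-policy detail into rows for the CSV export
return $input.all().map((item) => {
  const p = item.json.policy || item.json; // assumed shape after the XML -> JSON conversion
  return {
    json: {
      id: p.general?.id,
      name: p.general?.name,
      enabled: p.general?.enabled,
      trigger: p.general?.trigger,
      frequency: p.general?.frequency,
      scope_all_computers: p.scope?.all_computers,
    },
  };
});
```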