by franck fambou
## Overview

This intelligent chatbot workflow enables natural language conversations with your documents, supporting multiple file formats including PDFs, Word documents, Excel spreadsheets, and text files. Built on Retrieval-Augmented Generation (RAG), the chatbot understands, analyzes, and answers questions about your document content with contextual accuracy.

## How It Works

Intelligent document processing and conversation pipeline:

- **Multi-Format Document Ingestion**: Automatically processes and indexes various document formats (PDF, DOCX, XLSX, TXT, etc.)
- **Smart Content Chunking**: Breaks documents into meaningful segments while preserving context and relationships
- **Vector Database Storage**: Creates searchable embeddings for fast and accurate information retrieval
- **Contextual Conversation Engine**: Uses AI to understand user queries and retrieve relevant document sections
- **Natural Language Responses**: Generates human-like responses with citations and source references
- **Multi-Turn Conversations**: Maintains conversation history and context across multiple interactions
- **Real-Time Processing**: Instant responses with live document updates and dynamic content refresh

## Setup Instructions

Estimated setup time: 15-20 minutes

### Prerequisites

- n8n instance (v0.200.0 or higher recommended)
- OpenAI/Gemini API key for embeddings and chat completion
- Vector database service (optional: Pinecone, Weaviate, or Qdrant)
- File storage service (optional: Google Drive, Dropbox, AWS S3)
- Web server for chatbot interface (optional)

### Configuration Steps

1. **Configure Document Input Sources**
   - Set up a file upload webhook for direct document submission
   - Configure cloud storage watchers for automatic document processing
   - Add support for multiple file formats and size limits
   - Set up document validation and security checks
2. **Set Up the Document Processing Pipeline** (see the chunking sketch after this list)
   - Configure text extraction engines for different file types
   - Set up intelligent chunking parameters (chunk size, overlap, boundaries)
   - Add metadata extraction for document categorization
   - Configure OCR for scanned documents (optional)
3. **Configure the Vector Database**
   - Set up your chosen vector database credentials
   - Configure embedding model settings (Gemini models/text-embedding-004 recommended)
   - Set up collection/index structure for document storage
   - Configure search parameters and similarity thresholds
4. **Set Up the AI Chat Engine**
   - Add your AI service API credentials (Gemini, Claude, etc.)
   - Configure conversation prompts and system instructions
   - Set up context window management and token optimization
   - Add response formatting and citation rules
5. **Configure the Chat Interface**
   - Set up webhook endpoints for the chat API
   - Configure session management and conversation history
   - Add authentication and rate limiting (optional)
   - Set up real-time updates and streaming responses
6. **Set Up Monitoring & Analytics**
   - Configure conversation logging and analytics
   - Set up performance monitoring for response times
   - Add usage tracking and cost monitoring
   - Configure error handling and failover mechanisms
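As a rough illustration of the chunking step configured above, here is a minimal sketch of overlap-preserving text splitting that an n8n Code node could run before embedding. The chunk size and overlap values are illustrative assumptions to tune for your documents.

```typescript
// Minimal sketch of context-preserving chunking, assuming a plain-text
// input; chunkSize and overlap are illustrative values to tune.
function chunkText(text: string, chunkSize = 1000, overlap = 200): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    let end = Math.min(start + chunkSize, text.length);
    // Prefer to break at a sentence boundary near the end of the window.
    const lastPeriod = text.lastIndexOf(". ", end);
    if (lastPeriod > start + chunkSize / 2) end = lastPeriod + 1;
    chunks.push(text.slice(start, end).trim());
    if (end >= text.length) break;
    // Step forward while re-including `overlap` characters of context,
    // so relationships across chunk boundaries are preserved.
    start = end - overlap;
  }
  return chunks;
}
```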
## Use Cases

### Business & Enterprise
- **Knowledge Base Queries**: Ask questions about company policies, procedures, and documentation
- **Contract Analysis**: Query legal documents, contracts, and compliance materials
- **Training Materials**: Interactive learning with training manuals and educational content
- **Financial Reports**: Analyze and discuss financial statements, budgets, and forecasts

### Research & Academia
- **Research Paper Analysis**: Discuss findings, methodologies, and citations from academic papers
- **Literature Reviews**: Compare and contrast multiple research documents
- **Thesis Support**: Get insights from reference materials and research data
- **Grant Proposals**: Analyze requirements and optimize proposal content

### Legal & Compliance
- **Legal Document Review**: Query contracts, agreements, and legal texts
- **Regulatory Compliance**: Understand compliance requirements from regulatory documents
- **Case Law Research**: Analyze legal precedents and court decisions
- **Policy Analysis**: Interpret organizational policies and procedures

### Technical Documentation
- **API Documentation**: Interactive queries about technical specifications
- **User Manuals**: Get help and guidance from product documentation
- **Code Documentation**: Understand codebases and technical implementations
- **Troubleshooting Guides**: Interactive problem-solving with technical guides

### Personal Productivity
- **Document Summarization**: Get quick summaries of long documents
- **Information Extraction**: Find specific data points across multiple documents
- **Content Research**: Research topics across your personal document library
- **Meeting Notes**: Query and analyze meeting transcripts and notes

## Key Features

### Advanced Document Processing
- **Multi-Format Support**: PDF, DOCX, XLSX, TXT, PPTX, and more
- **Intelligent Chunking**: Context-aware document segmentation
- **Metadata Extraction**: Automatic categorization and tagging
- **OCR Integration**: Process scanned documents and images with text

### Intelligent Conversation
- **Contextual Understanding**: Maintains conversation context and document relationships
- **Source Attribution**: Provides citations and references for all answers
- **Multi-Document Queries**: Compare and analyze across multiple documents
- **Follow-up Questions**: Natural conversation flow with clarifying questions

### Performance & Scalability
- **Fast Retrieval**: Vector-based semantic search for instant responses
- **Scalable Architecture**: Handles large document collections efficiently
- **Batch Processing**: Processes multiple documents simultaneously
- **Caching System**: Optimized response times with intelligent caching

### Security & Privacy
- **Document Encryption**: Secure storage and transmission of sensitive documents
- **Access Control**: User-based permissions and document access restrictions
- **Audit Logging**: Complete conversation and access audit trails
- **Data Retention**: Configurable data retention and deletion policies

## Technical Architecture

### Document Processing Flow

File Upload → Format Detection → Text Extraction → Content Chunking → Metadata Extraction → Embedding Generation → Vector Storage → Index Creation

### Conversation Flow

User Query → Intent Analysis → Vector Search → Context Retrieval → Response Generation → Source Attribution → Answer Formatting → Delivery (a retrieval sketch of this flow appears at the end of this section)

### Supported File Formats
- **Documents**: PDF, DOC, DOCX, RTF, TXT, MD
- **Spreadsheets**: XLS, XLSX, CSV
- **Presentations**: PPT, PPTX
- **Images**: PNG, JPG (with OCR)
- **Archives**: ZIP (auto-extracts supported formats)
- **Web**: HTML, XML

## Integration Options

### Chat Interfaces
- **Web Widget**: Embeddable chat widget for websites
- **API Endpoints**: RESTful API for custom integrations
- **Slack/Teams**: Direct integration with team collaboration tools
- **Mobile Apps**: API-first design for mobile application integration

### Data Sources
- **Cloud Storage**: Google Drive, Dropbox, OneDrive, AWS S3
- **Document Systems**: SharePoint, Confluence, Notion
- **Email**: Process attachments from email systems
- **CRM/ERP**: Integration with business systems

## Performance Specifications

- **Response Time**: < 3 seconds for typical queries
- **Document Capacity**: Supports collections of 10,000+ documents
- **Concurrent Users**: Scales to handle multiple simultaneous conversations
- **Accuracy**: > 90% relevance for domain-specific queries

## Advanced Configuration Options

### Customization
- **Custom Prompts**: Tailor AI behavior for specific use cases
- **Branding**: Customize the chat interface with your company branding
- **Language Support**: Multi-language document processing and responses
- **Domain Expertise**: Fine-tune for specific industries or domains

### Analytics & Monitoring
- **Usage Analytics**: Track popular queries and document usage
- **Performance Metrics**: Monitor response times and accuracy
- **User Feedback**: Collect ratings and improve responses
- **A/B Testing**: Test different configurations and prompts

## Troubleshooting & Support

### Common Issues
- **Slow Responses**: Check vector database performance and API limits
- **Inaccurate Answers**: Review the chunking strategy and embedding quality
- **Format Errors**: Verify document formats and processing capabilities
- **Memory Issues**: Monitor token usage and context window limits

### Optimization Tips
- Use clear, specific questions for best results
- Ensure documents are well formatted with proper headers
- Perform regular vector database maintenance for optimal performance
- Monitor API usage to optimize costs and performance
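To make the conversation flow above concrete, here is a minimal sketch of the retrieval-and-answer step. The three callbacks are hypothetical stand-ins for your embedding model, vector database, and chat model, not any specific library's API.

```typescript
interface Chunk { text: string; source: string }

// Hypothetical stand-ins for your embedding model, vector database, and
// chat model; any concrete services can be plugged in.
type Embed = (text: string) => Promise<number[]>;
type Search = (vector: number[], topK: number) => Promise<Chunk[]>;
type Generate = (system: string, user: string) => Promise<string>;

// User Query → Vector Search → Context Retrieval → Response → Attribution.
async function answer(
  query: string, embed: Embed, search: Search, generate: Generate,
): Promise<string> {
  const hits = await search(await embed(query), 5);
  // Number each retrieved chunk so the model can cite it as [n].
  const context = hits
    .map((c, i) => `[${i + 1}] (${c.source}) ${c.text}`)
    .join("\n\n");
  const system =
    "Answer only from the context below and cite sources as [n].\n\n" + context;
  const reply = await generate(system, query);
  // Append source references so every answer carries its citations.
  const sources = hits.map((c, i) => `[${i + 1}] ${c.source}`).join("\n");
  return `${reply}\n\nSources:\n${sources}`;
}
```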
by Cordexa Technologies
This template monitors a Google Drive folder for new files, extracts text from PDFs, images, text files, CSVs, and Google Docs, reads images with meta/llama-3.2-11b-vision-instruct, structures the result with nvidia/llama-3.3-nemotron-super-49b-v1.5, logs everything to Google Sheets, and sends a Telegram notification when processing finishes.

## ✨ What This Template Does

- Watches a specific Google Drive folder for new files with the Google Drive Trigger. 📂
- Downloads each new file with Google Drive before processing. ⬇️
- Routes PDFs, images, text files, CSVs, and Google Docs through the correct extraction branch. 🔀
- Extracts image text with meta/llama-3.2-11b-vision-instruct. 🖼️
- Structures extracted content into JSON fields with nvidia/llama-3.3-nemotron-super-49b-v1.5 through NVIDIA NIM. 🤖
- Appends the final result to Google Sheets in Extract_Log. 📊
- Sends a Telegram notification when processing is complete. 📬

## Key Benefits

- Turns a Drive folder into a reusable intake point for mixed file types. ⏱️
- Creates a searchable audit trail in Google Sheets for every processed file. 📚
- Sends a lightweight Telegram notification without requiring Telegram as the input channel. ✅
- Keeps the extraction and structuring logic reusable for internal ops or client delivery workflows. 🔁
- Makes it easier to test multimodal document processing with free-tier NVIDIA NIM models. 💡

## Features

- Google Drive Trigger configured for new files in a specific folder. 📥
- Google Drive download step for binary file access before extraction. ⚙️
- File-type routing with Switch and normalization with Code nodes. 🧠
- Native n8n Extract from File nodes for PDF, TXT, and CSV parsing. 📄
- NVIDIA NIM HTTP Request nodes for image OCR and structured JSON generation (see the request sketch after this section). 🤖
- Google Sheets append logging with a fixed Extract_Log tab schema. 📈
- Plain-text Telegram completion notifications with a fixed destination chat ID. 📨

## Requirements

- n8n instance with access to Google Drive Trigger, Google Drive, Google Docs, Google Sheets, HTTP Request, Telegram, and Extract from File nodes. 🧰
- Google Drive OAuth2 credential with access to the watched folder. 🔐
- Google Docs OAuth2 credential with access to any Google Docs files you want to process. 📘
- Google Sheets OAuth2 credential and a sheet with an Extract_Log tab. 📊
- Telegram bot credential plus a valid destination chat ID for notifications. 🤝
- NVIDIA NIM API key stored as an HTTP Header Auth credential. 🔑
- A folder ID and Google Sheet ID added to the provided placeholders before activation. 🛠️

## Target Audience

- Operations teams monitoring a shared Drive folder for inbound files. 🗂️
- Founders and solo operators who want document extraction. 👤
- Agencies building reusable back-office workflows for receipts, notes, and uploaded files. 🏢
- Analysts who want structured text output logged into Google Sheets automatically. 📋
- Automation builders testing file-driven multimodal extraction with Drive as the source. 🧪
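As a rough sketch of what the image-OCR HTTP Request node might send, the payload below follows NIM's OpenAI-compatible chat completions format. The endpoint URL, inline data-URL image encoding, and token limit are assumptions to verify against your NIM account and the model card.

```typescript
// Illustrative payload for the image-OCR branch, assuming NIM's
// OpenAI-compatible chat completions endpoint; verify the exact image
// format expected by the model in the NVIDIA NIM docs.
async function extractImageText(imageBase64: string, apiKey: string) {
  const res = await fetch("https://integrate.api.nvidia.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "meta/llama-3.2-11b-vision-instruct",
      messages: [
        {
          role: "user",
          // Passing the image as an inline data URL is an assumption to
          // check against the model card.
          content: `Extract all readable text from this image. <img src="data:image/png;base64,${imageBase64}" />`,
        },
      ],
      max_tokens: 1024, // illustrative limit
    }),
  });
  const data = await res.json();
  return data.choices?.[0]?.message?.content ?? "";
}
```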
## Step-by-Step Setup Instructions

1. Import the workflow and read every sticky note on the canvas before editing any nodes. 📝
2. Connect your Google Drive, Google Docs, Google Sheets, Telegram, and NVIDIA NIM credentials. 🔐
3. Replace REPLACE_WITH_GOOGLE_DRIVE_FOLDER_ID, REPLACE_WITH_GOOGLE_SHEET_ID, and REPLACE_WITH_TELEGRAM_CHAT_ID in the marked nodes. 📌
4. Create the Extract_Log tab with the required headers shown in the sticky notes. 📑
5. Test one file at a time in this order: PDF, TXT, CSV, image, then Google Docs file. 🧪
6. Confirm that each test adds one clean row to Google Sheets and sends one Telegram notification. ✅
7. Activate the workflow only after every supported path works end to end. 🚀

Built by Cordexa Technologies: https://cordexa.tech | cordexatech@gmail.com
by Davidson Ahuruezenma
# AI-Powered Academic Assignment Generator

This n8n workflow template automates the complete academic assignment generation process, from student queries to professional document delivery. Students submit assignment requests via Telegram, and the workflow generates comprehensive, plagiarism-free academic content using Google Gemini AI, formats it into professional PDF documents, and delivers downloadable links while maintaining complete records.

## What does this workflow do?

- 📱 **Telegram Integration**: Receives structured assignment requests from students
- 🤖 **AI Content Generation**: Creates comprehensive academic answers (500+ words per question)
- 📄 **Professional Formatting**: Generates university-standard HTML/PDF documents
- ☁️ **Cloud Storage**: Automatically stores files in organized Google Drive folders
- 📊 **Record Keeping**: Maintains a complete assignment database in Google Sheets
- 🔄 **End-to-End Automation**: Complete pipeline from query to document delivery

## How it works

The workflow processes student assignment requests through 16 interconnected nodes, handling everything from input parsing to final document delivery:

Input → AI Processing → Document Generation → Storage & Delivery

## Setup Requirements

Credentials needed:

- **Telegram Bot Token** (for receiving/sending messages)
- **Google Gemini API Key** (for AI content generation)
- **Google Sheets API** (for record keeping)
- **Google Drive API** (for file storage)
- **PDFCrowd API** (for PDF conversion)

Pre-setup steps:

1. Create a Telegram bot and obtain the bot token
2. Set up the Google Drive folder structure for file organization
3. Create a Google Sheets template with proper column headers
4. Configure API rate limits and usage quotas

## Workflow Breakdown

### 🔌 Input Processing Nodes

**Student Query Intake Bot (Telegram Trigger)**
- Listens for incoming student messages with assignment details
- Monitors a specific chat ID for authorized users
- Triggers the workflow when structured assignment requests are received

**Structured Data Parser (Code Node)**
- Extracts student information using regex patterns (a parsing sketch follows below)
- Parses: Name, Faculty, Department, Level, Course, Registration Number
- Automatically sets the current date and handles missing data
- Outputs a clean JSON structure for AI processing
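A minimal sketch of the regex-based parsing this Code node performs, assuming the message format shown under "Input Format" below; the field names and patterns are illustrative.

```typescript
// Minimal sketch of the structured-data parsing step, assuming the
// "Name: ... Faculty: ..." message format described in this template.
interface StudentRequest {
  name: string; faculty: string; department: string; level: string;
  course: string; regNumber: string; questions: string[]; date: string;
}

function parseRequest(message: string): StudentRequest {
  // Capture the text after a labeled field, up to the end of the line.
  const field = (label: string) =>
    message.match(new RegExp(`${label}:\\s*(.+)`, "i"))?.[1]?.trim() ?? "";
  // Everything after "Question:" is treated as one question per line.
  const qBlock = message.split(/Question:/i)[1] ?? "";
  const questions = qBlock.split("\n").map(q => q.trim()).filter(Boolean);
  return {
    name: field("Name"),
    faculty: field("Faculty"),
    department: field("Department"),
    level: field("Level"),
    course: field("Course"),
    regNumber: field("Reg number"),
    questions,
    date: new Date().toISOString().slice(0, 10), // current date, auto-set
  };
}
```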
### 🤖 AI Processing Nodes

**Student Assignment Auto-Composer (LangChain Agent)**
- Main AI orchestrator for assignment generation
- Uses structured prompts for consistent academic formatting
- Generates 500-word answers per question with APA citations
- Ensures plagiarism-free, original academic content

**Generator Model (Google Gemini Chat)**
- Primary AI model for high-quality content generation
- Handles complex academic writing and formatting requirements

**Fallback Model Generator (Google Gemini - Gemma)**
- Backup AI model ensuring workflow reliability
- Activates when the primary model encounters issues

**Structured Output Parser (LangChain)**
- Validates AI-generated content against a JSON schema
- Enforces required-field compliance and format consistency
- Auto-fixes common formatting issues

### 🔧 Processing & Error Handling

**Error Handler (Code Node)**
- Handles text processing errors and data type issues
- Converts non-string values and provides error recovery
- Ensures workflow continuity even with problematic data

**Wait Node**
- Introduces a strategic 2-second delay for processing stability
- Allows AI processing to complete before the next steps

### 📊 Data Management Nodes

**Edit Fields (Set Node)**
- Maps AI output to the Google Sheets column structure
- Ensures data consistency for database storage

**Long Essay Record Sheet (Google Sheets)**
- Stores complete assignment records with metadata
- Maintains a comprehensive student assignment database
- Uses the Name field as a unique identifier for record updates

### 📄 Document Generation Nodes

**Static HTML Builder (LangChain Agent)**
- Converts structured data into professional HTML documents
- Applies academic formatting: Times New Roman, 12pt, double-spaced
- Creates a university-standard document structure (see the sketch below)

**HTTP Request (PDF Conversion)**
- Converts HTML to high-quality PDF using the PDFCrowd API
- Maintains academic formatting and professional appearance
- Uses the student name for file identification
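For illustration, here is a minimal sketch of the HTML shell such a builder might produce, with the Times New Roman, 12pt, double-spaced styling described above; the exact structure your LangChain agent emits will differ.

```typescript
// Illustrative sketch of the academic HTML shell; the CSS values mirror
// the formatting described above, and the section layout is an assumption.
function buildAssignmentHtml(student: { name: string; course: string },
                             sections: { question: string; answer: string }[]): string {
  const body = sections
    .map(s => `<h2>${s.question}</h2><p>${s.answer}</p>`)
    .join("\n");
  return `<!DOCTYPE html>
<html>
<head><style>
  body { font-family: "Times New Roman", serif; font-size: 12pt;
         line-height: 2; /* double-spaced */ margin: 1in; }
</style></head>
<body>
  <h1>${student.course}</h1>
  <p><strong>Name:</strong> ${student.name}</p>
  ${body}
</body>
</html>`;
}
```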
### ☁️ Storage & Delivery Nodes

**Upload File (Google Drive)**
- Stores generated PDFs in organized Drive folders
- Creates shareable links for easy access
- Maintains systematic file organization

**Send Text Message (Telegram)**
- Delivers the Google Drive download link to the student
- Completes the automation cycle with instant access

## Input Format

Students should format their Telegram messages as follows:

    Name: John Doe
    Faculty: Engineering
    Department: Computer Science
    Level: 200L
    Course: CSC 201 - Data Structures
    Reg number: 2024001234
    Question: Explain the concept of Big O notation
    Compare different sorting algorithms
    Discuss the applications of binary trees

## Features

### ✨ Intelligent Processing
- **Smart Input Parsing**: Handles unstructured text inputs automatically
- **Multi-Question Support**: Processes complex assignment requirements
- **Data Validation**: Ensures complete and accurate information capture

### 🎓 Academic Excellence
- **University Standards**: Professional formatting and citation styles
- **Original Content**: Plagiarism-free AI-generated assignments
- **Comprehensive Answers**: 500+ words per question with detailed explanations

### 🛡️ Reliability & Error Handling
- **Fallback Systems**: Multiple AI models for continuous operation
- **Error Recovery**: Automatic handling of processing issues
- **Data Integrity**: Schema validation and field verification

## Use Cases

This workflow template is perfect for:

- 📚 **Educational Institutions**: Automate student assignment processing and grading assistance
- 👨‍🎓 **Academic Support Services**: Provide structured learning assistance and content generation
- 🏫 **Online Learning Platforms**: Integrate assignment automation into educational systems
- 📝 **Content Creation Services**: Generate academic-quality content for educational purposes
- 🤖 **AI Learning Projects**: Implement complex AI workflows with multiple service integrations

## Output Examples

Generated assignment features:

- **Professional formatting** with Times New Roman, 12pt font, double spacing
- **Complete academic structure** including headers, student information, questions, and references
- **Comprehensive answers** averaging 500+ words per question with detailed explanations
- **Proper citations** in APA format with authentic academic references
- **PDF delivery** through shareable Google Drive links

Database records:

- Complete student information tracking
- Assignment question and answer storage
- Timestamp and metadata preservation
- Easy retrieval and analysis capabilities

## Performance & Reliability

- Processing time: 2-3 minutes per assignment
- Success rate: > 95% with fallback mechanisms
- Content quality: University-standard academic writing
- Scalability: Handles multiple concurrent requests
- Error recovery: Automatic retry and alternative processing paths

## Customization Options

Easily configurable elements:

- **Chat IDs**: Modify for different Telegram groups or users
- **AI Models**: Switch between different Google Gemini models
- **Document Formatting**: Adjust academic standards and styling
- **Storage Locations**: Configure Google Drive folders and naming conventions
- **Database Fields**: Modify Google Sheets columns and data structure

Advanced customizations:

- Add support for different document formats (Word, LaTeX)
- Integrate additional AI providers (OpenAI, Claude, etc.)
- Implement grading and feedback mechanisms
- Add multi-language support
- Create batch processing capabilities

## Getting Started

1. Import the workflow into your n8n instance
2. Configure credentials for all required services
3. Set up the Telegram bot and obtain the necessary permissions
4. Create the Google Drive folders and Google Sheets template
5. Test with sample data to ensure proper functionality
6. Deploy and monitor for production use

## Tags

academic, education, ai, telegram, google-sheets, pdf-generation, automation, langchain, assignment, student-support
by Dinakar Selvakumar
# Complete AI support system using website data (RAG pipeline)

This template provides a full end-to-end Retrieval-Augmented Generation (RAG) system using n8n. It includes two connected workflows:

1. A data ingestion pipeline that crawls a website and stores its content in a vector database.
2. A customer support chatbot that retrieves this knowledge and answers user queries in real time.

Together, these workflows let you turn any public website into an intelligent AI-powered support assistant grounded in real business data.

## Use cases

- AI customer support chatbot for your website
- Internal company knowledge assistant
- Product FAQ automation
- Helpdesk or IT support bot
- AI receptionist for services
- Semantic search over company content

## How it works

### Ingestion workflow

1. Discover all URLs from a website sitemap (see the sketch at the end of this section).
2. Filter and normalize the URLs.
3. Fetch each page and extract readable text.
4. Clean HTML into plain text.
5. Split text into overlapping chunks.
6. Generate embeddings using OpenAI.
7. Store vectors in Pinecone with metadata.

### Chatbot workflow

1. A user sends a message via the chat webhook.
2. The agent queries Pinecone for relevant knowledge.
3. Retrieved content is passed to OpenAI.
4. OpenAI generates a grounded response.
5. Short-term memory maintains conversation context.

## How to use

### Step 1 – Run ingestion

1. Set your target website URL.
2. Add Firecrawl, OpenAI, and Pinecone credentials.
3. Create a Pinecone index.
4. Execute the ingestion workflow.
5. Wait until all pages are indexed.

### Step 2 – Run chatbot

1. Deploy the chatbot workflow.
2. Set the same Pinecone index and namespace.
3. Copy the chat webhook URL.
4. Connect it to a website, chat widget, or WhatsApp bot.
5. Start chatting with your AI assistant.

## Requirements

- Firecrawl account
- OpenAI API key
- Pinecone account and index
- Public website to crawl
- Optional: frontend chat interface

## Good to know

- The chatbot never answers from memory for business data; all company knowledge comes from Pinecone.
- If Pinecone returns nothing, the bot fails safely.
- HTML cleaning is basic and can be replaced with Mozilla Readability, Jina Reader, or Unstructured.
- Chunk size and overlap affect retrieval quality.
- Pinecone can be replaced with Qdrant, Weaviate, Supabase Vector, or Chroma.

## Customising this workflow

You can extend this system by:

- Adding PDF or document loaders
- Scheduling ingestion daily or weekly
- Connecting CRM or ticketing systems
- Adding appointment booking tools
- Switching to local or open-source models
- Adding multilingual support
- Storing raw content in a database
- Adding feedback or logging

## What this n8n template demonstrates

- Real-world RAG architecture
- Web crawling pipelines
- Text chunking strategies
- Vector database integration
- AI agent orchestration
- Memory-controlled conversations
- Production-grade AI support systems
- End-to-end AI infrastructure with n8n

## Architecture overview

This template follows a modern AI system design:

Website → Ingestion → Embeddings → Pinecone → Retrieval → OpenAI → User

It separates:

- Data preparation (offline)
- Knowledge storage
- Runtime inference

This makes the system scalable, maintainable, and safe for production use.

## Need a custom setup?

If you want a similar AI system built for your business (custom data sources, CRM integration, WhatsApp bots, booking systems, dashboards, or private deployments), feel free to reach out at dinakars2003@gmail.com. I help companies design and deploy production-ready AI workflows.
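As a rough illustration of the first two ingestion steps (URL discovery and normalization), here is a minimal sketch. It assumes a flat sitemap.xml with `<loc>` entries; the template itself uses Firecrawl, and real sitemaps may nest sitemap indexes.

```typescript
// Minimal sketch of sitemap URL discovery and normalization, assuming a
// standard flat sitemap.xml reachable at /sitemap.xml.
async function discoverUrls(siteUrl: string): Promise<string[]> {
  const res = await fetch(new URL("/sitemap.xml", siteUrl).toString());
  const xml = await res.text();
  // Pull every <loc>…</loc> value out of the sitemap.
  const urls = [...xml.matchAll(/<loc>([^<]+)<\/loc>/g)].map(m => m[1].trim());
  // Normalize: drop fragments/query strings, trailing slashes, duplicates.
  const normalized = urls.map(u => u.split(/[#?]/)[0].replace(/\/$/, ""));
  return [...new Set(normalized)];
}
```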
by Cristian Baño Belchí
How it works:

1. Accesses a target website, searches for new PDFs, and downloads them automatically.
2. Extracts the content of each PDF and sends it to an AI for summarization.
3. Delivers the AI-generated summary directly to a Discord channel.
4. Marks processed URLs in Google Sheets to avoid duplicates (see the sketch below).

Set up steps:

1. Configure the website URL in the HTTP Request node.
2. Connect to the Google Cloud API (enable Drive & Sheets) and link your spreadsheet.
3. Set up an OpenRouter API key and choose your preferred AI model.
4. Create a Discord webhook for notifications.
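A minimal sketch of the duplicate check, assuming the already-processed URLs have been read from a Google Sheets column into `processedUrls` beforehand.

```typescript
// Keep only PDF links that have not been marked as processed yet;
// comparison is case-insensitive and whitespace-tolerant.
function filterNewPdfs(found: string[], processedUrls: string[]): string[] {
  const seen = new Set(processedUrls.map(u => u.trim().toLowerCase()));
  return found.filter(u => u.toLowerCase().endsWith(".pdf") &&
                           !seen.has(u.trim().toLowerCase()));
}
```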
by Roman Rozenberger
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## Who's it for

Content creators, marketers, and researchers who need to monitor multiple RSS feeds and get AI-generated summaries without manual work.

## How it works

This workflow automatically monitors RSS feeds, filters new articles from the last X days, checks for duplicates, and generates structured AI summaries. It fetches full article content, converts HTML to markdown, and uses Gemini AI to create consistent summaries with quick takeaways, key points, and practical insights. All data is saved to Google Sheets for easy access and sharing.

The system processes RSS feeds in batches and ensures no article is processed twice by checking existing URLs in your Google Sheets. Each new article gets a comprehensive AI summary that includes the main message, key takeaways, important points, and practical applications. (A sketch of the time filter appears below.)

## Requirements

- Google Sheets access
- OpenRouter API key for the Gemini AI model or another language model
- RSS feed URLs to monitor

## How to set up

Copy the template Google Sheet, add your RSS feeds in the "RSS FEEDS" tab, configure Google Sheets and OpenRouter credentials in n8n, and adjust the time filter in the Settings node. The workflow can run manually or on a schedule every hour.

## How to customize

Modify the AI prompts for different summary styles, change the time filter duration, add more data fields to Google Sheets, or switch to a different AI model in the LLM Chat Model node.
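A minimal sketch of the "last X days" filter, assuming each RSS item carries an ISO 8601 or RFC 2822 publication date string (both are handled by `Date.parse`).

```typescript
interface FeedItem { title: string; link: string; pubDate: string }

// Keep only items published within the last `days` days; items with
// unparseable dates are dropped rather than guessed at.
function recentItems(items: FeedItem[], days: number): FeedItem[] {
  const cutoff = Date.now() - days * 24 * 60 * 60 * 1000;
  return items.filter(i => {
    const t = Date.parse(i.pubDate); // NaN for unparseable dates
    return !Number.isNaN(t) && t >= cutoff;
  });
}
```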
by Ranjan Dailata
The Scrape and Analyze Amazon Product Info with Decodo + OpenAI workflow automates the process of extracting product information from an Amazon product page and transforming it into meaningful insights. The workflow then uses OpenAI to generate descriptive summaries, competitive positioning insights, and structured analytical output based on the extracted information.

## Disclaimer

Please note: this workflow is only available on n8n self-hosted, as it makes use of the community node for Decodo web scraping.

## Who this is for

This workflow is ideal for:

- E-commerce product researchers
- Marketplace sellers (Amazon, Flipkart, Shopify, etc.)
- Competitive intelligence teams
- Product comparison bloggers and reviewers
- Pricing and product analytics engineers
- Automation builders needing AI-powered product insights

## What problem is this workflow solving?

Manually extracting Amazon product details, ads, pricing, reviews, and competitive signals is:

- Time-consuming
- Spread across multiple tools
- Difficult to analyze at scale
- Not structured for reporting
- Hard to compare products objectively

This workflow automates:

- Web scraping of Amazon product pages
- Extraction of product features and ad listings
- AI-generated product summaries
- Competitive positioning analysis
- Generation of structured product insight output
- Export to Google Sheets for tracking and reporting

## What this workflow does

This workflow performs an end-to-end product intelligence pipeline:

1. **Data Collection**: Scrapes an Amazon product page using Decodo, retrieving product details and advertisement placements.
2. **Data Extraction**: Extracts product specs, key feature descriptions, ads data, and supplemental metadata.
3. **AI-Driven Analysis**: Generates a descriptive product summary, competitive positioning insights, and a structured product insight schema.
4. **Data Consolidation**: Merges descriptive, analytical, and structured outputs.
5. **Export & Persistence**: Aggregates results and writes the final dataset to Google Sheets for tracking, comparison, reporting, and product research archives.

## Setup

### Prerequisites

- If you are new to Decodo, please sign up at visit.decodo.com
- n8n instance
- Decodo API credentials
- OpenAI API credentials
- Make sure to install the Decodo community node

### Required Credentials

**Decodo API**
1. Go to Credentials
2. Add Decodo API
3. Enter the API key
4. Save as: Decodo Credentials account

**OpenAI API**
1. Go to Credentials
2. Select OpenAI
3. Enter the API key
4. Save as: OpenAi account

**Google Sheets**
1. Add Google Sheets OAuth
2. Authorize via Google
3. Save as the desired account

### Inputs to configure

Modify in the Set the Input Fields node:

- product_url = https://www.amazon.in/Sony-DualSense-Controller-Grey-PlayStation/dp/B0BQXZ11B8

## How to customize this workflow to your needs

You can easily adapt this workflow for various use cases.
### Change the product being analyzed

Modify: product_url

### Change the AI model

In the OpenAI nodes, replace gpt-4.1-mini with Gemini, Claude, Mistral, or Groq (if supported).

### Customize the insight schema

Edit the Product Insights node to include (a schema sketch follows below):

- sustainability markers
- sentiment extraction
- pricing bands
- safety compliance
- brand comparisons

### Expand data extraction

You may extract:

- product reviews
- FAQs and Q&A
- seller information
- delivery and logistics signals

### Change the output destination

Replace Google Sheets with PostgreSQL, MySQL, Notion, Slack, Airtable, webhook delivery, or CSV export.

### Turn it into a batch processor

Loop over multiple ASINs, category listings, or search results pages.

## Summary

This workflow provides a complete automated product intelligence engine, combining Decodo's scraping capabilities with OpenAI's analytical reasoning to transform Amazon product pages into structured insights, competitive analysis, and summarized evaluations, automatically stored for reporting and comparison.
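To make the structured product insight schema concrete, here is one illustrative shape; every field name below is an assumption to adjust in your own Product Insights node.

```typescript
// Illustrative shape for the structured product insight output; the
// field names are assumptions to adapt in your Product Insights node.
interface ProductInsight {
  title: string;
  price: { amount: number; currency: string };
  keyFeatures: string[];
  summary: string;            // descriptive product summary
  positioning: string;        // competitive positioning insight
  adPlacements: string[];     // advertisement listings seen on the page
  sentiment?: "positive" | "neutral" | "negative";  // optional extension
  pricingBand?: "budget" | "mid-range" | "premium"; // optional extension
}
```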
by vinci-king-01
## How it works

Turn Amazon into your personal competitive intelligence goldmine! This AI-powered workflow automatically monitors Amazon markets 24/7, delivering deep competitor insights and pricing intelligence that would otherwise take 10+ hours of manual research weekly.

## Key Steps

1. **Daily Market Scan**: Runs automatically at 6:00 AM UTC to capture fresh competitive data
2. **AI-Powered Analysis**: Uses ScrapeGraphAI to intelligently extract pricing, product details, and market positioning
3. **Competitive Intelligence**: Analyzes competitor strategies, pricing gaps, and market opportunities
4. **Keyword Goldmine**: Identifies high-value keyword opportunities your competitors are missing
5. **Strategic Insights**: Generates actionable recommendations for pricing and positioning
6. **Automated Reporting**: Delivers comprehensive market reports directly to Google Docs

## Set up steps

Setup time: 15-20 minutes

1. **Configure ScrapeGraphAI credentials**: Add your ScrapeGraphAI API key for intelligent web scraping
2. **Set up Google Docs integration**: Connect Google OAuth2 for automated report generation
3. **Customize the Amazon search URL**: Target your specific product category or market niche
4. **Configure IP rotation**: Set up proxy rotation if needed for large-scale monitoring
5. **Test with sample products**: Start with a small product set to validate data accuracy
6. **Set competitive alerts**: Define thresholds for price changes and market opportunities (see the sketch below)

Save 10+ hours weekly while staying ahead of your competition with real-time market intelligence!
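A minimal sketch of the price-change alert check described in the last setup step, assuming yesterday's and today's scans are joined by ASIN; the 5% threshold is an illustrative default.

```typescript
interface PricePoint { asin: string; title: string; price: number }

// Compare two scans and report products whose price moved by at least
// `thresholdPct` percent in either direction.
function priceAlerts(prev: PricePoint[], curr: PricePoint[],
                     thresholdPct = 5): string[] {
  const before = new Map(prev.map(p => [p.asin, p.price]));
  return curr.flatMap(p => {
    const old = before.get(p.asin);
    if (old === undefined || old === 0) return []; // new or unpriced item
    const changePct = ((p.price - old) / old) * 100;
    return Math.abs(changePct) >= thresholdPct
      ? [`${p.title}: ${changePct.toFixed(1)}% (${old} → ${p.price})`]
      : [];
  });
}
```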
by Paul
# 🚀 Google Search Console MCP Server

## 📋 Description

This n8n workflow serves as a Model Context Protocol (MCP) server, connecting MCP-compatible AI tools (like Claude) directly to the Google Search Console APIs. With this workflow, users can automate critical SEO tasks and manage Google Search Console data effortlessly via MCP endpoints.

Included functionalities:

- 📌 List verified sites
- 📌 Retrieve detailed site information
- 📌 Access Search Analytics data
- 📌 Submit and manage sitemaps
- 📌 Request URL indexing

OAuth2 is fully supported for secure and seamless API interactions.

## 🛠️ Setup Instructions

### 🔑 Prerequisites

- **n8n instance** (cloud or self-hosted)
- Google Cloud project with these APIs enabled:
  - Google Search Console API
  - Web Search Indexing API
- OAuth2 credentials from Google Cloud

### ⚙️ Workflow Setup

**Step 1: Import Workflow**
Open n8n, select "Import from JSON", and paste this workflow JSON.

**Step 2: Configure OAuth2 Credentials**
Navigate to Settings → Credentials and add new credentials (Google OAuth2 API) with the Client ID and Client Secret from Google Cloud and these scopes:

- https://www.googleapis.com/auth/webmasters.readonly
- https://www.googleapis.com/auth/webmasters
- https://www.googleapis.com/auth/indexing

**Step 3: Configure Webhooks**
Webhook URLs auto-generate in the MCP Server Trigger node. Ensure webhooks are publicly accessible via HTTPS.

**Step 4: Testing**
Test your endpoints with sample HTTP requests to confirm everything is working correctly.

## 🎯 Usage Examples

- **List Sites**: Fetch all verified Search Console sites.
- **Get Site Info**: Get detailed information about a particular site.
- **Search Analytics**: Pull metrics such as clicks, impressions, and rankings (see the query sketch below).
- **Submit Sitemap**: Automatically submit sitemaps.
- **Request URL Indexing**: Trigger Google's indexing for specific URLs instantly.

## 🚩 Use Cases & Applications

- SEO automation workflows
- AI-driven SEO analytics
- Real-time website performance monitoring
- Automated sitemap management
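For reference, a Search Analytics call like the one this server wraps hits the searchAnalytics.query endpoint. The sketch below shows the request shape; the site URL and date range are placeholder values, and the access token comes from the OAuth2 flow above.

```typescript
// Sketch of a Search Console searchAnalytics.query request; siteUrl and
// the date range are placeholders, accessToken comes from OAuth2.
async function queryAnalytics(siteUrl: string, accessToken: string) {
  const endpoint =
    "https://www.googleapis.com/webmasters/v3/sites/" +
    encodeURIComponent(siteUrl) + "/searchAnalytics/query";
  const res = await fetch(endpoint, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      startDate: "2024-01-01",   // placeholder date range
      endDate: "2024-01-31",
      dimensions: ["query"],     // group results by search query
      rowLimit: 25,
    }),
  });
  return res.json(); // rows of { keys, clicks, impressions, ctr, position }
}
```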
by Jean-Marie Rizkallah
# 🧩 Jamf Policies Export to Slack

Quickly export and review your entire Jamf policy configuration, including triggers, frequencies, and scope, directly in Slack. This enables IT and security teams to audit policy setups without logging into Jamf or generating reports manually.

## ❗ The Problem

Jamf Pro lacks a straightforward way to quickly review or share a list of all configured policies, including key attributes like frequency, scope, or triggers. Security teams often need this for audit or compliance reviews, but navigating Jamf's UI or exporting via the API is time-consuming.

## 🔧 This Fixes It

This workflow fetches all policies, extracts the most relevant fields, compiles them into a CSV file, and posts that readable file to a designated Slack channel, automatically or on demand.

## ✅ Prerequisites

- A Jamf Pro API key (OAuth2) with read access to policies
- A Slack app with permission to post files into your chosen channel

## 🔍 How it works

- Manually trigger or use the webhook to initiate the flow
- Retrieve all policies from Jamf via the XML API
- Convert the XML response into JSON
- Split and loop through each policy ID
- Retrieve detailed data for each policy
- Format the relevant fields (ID, name, trigger, scope, etc.)
- Convert the final data set into a .csv file (see the sketch below)
- Upload the file to your Slack channel

## ⚙️ Set up steps

- Takes ~10 minutes to configure
- Set the Jamf BaseURL in the "Jamf Server" node
- Configure Jamf OAuth2 credentials in the HTTP Request nodes
- Adjust the fields for export in the "Set-fields" node
- Set your Slack credentials and target channel in the "Post to Slack" node
- Optional: Customize the exported fields or filename

## 🔄 Automation Ready

Schedule this flow daily/weekly, or tie it to change events to keep your team informed.
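A minimal sketch of the CSV-building step, assuming each policy has already been mapped to these fields in the "Set-fields" node; the column set is illustrative and should match whatever fields you choose to export.

```typescript
interface PolicyRow { id: number; name: string; trigger: string;
                      frequency: string; scope: string }

// Build a CSV string from the formatted policy rows; values are quoted
// and embedded quotes escaped per RFC 4180.
function toCsv(rows: PolicyRow[]): string {
  const esc = (v: string | number) => `"${String(v).replace(/"/g, '""')}"`;
  const header = ["id", "name", "trigger", "frequency", "scope"];
  const lines = rows.map(r =>
    [r.id, r.name, r.trigger, r.frequency, r.scope].map(esc).join(","));
  return [header.join(","), ...lines].join("\n");
}
```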
by David Ashby
Complete MCP server exposing all AWS Transcribe Tool operations to AI agents. Zero configuration needed: all 4 operations pre-built.

## ⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

## 🔧 How it Works

- **MCP Trigger**: Serves as your server endpoint for AI agent requests
- **Tool Nodes**: Pre-configured for every AWS Transcribe Tool operation
- **AI Expressions**: Automatically populate parameters via $fromAI() placeholders
- **Native Integration**: Uses the official n8n AWS Transcribe Tool node with full error handling

## 📋 Available Operations (4 total)

Every possible AWS Transcribe Tool operation is included.

🔧 Transcriptionjob (4 operations):

- Create a transcription job
- Delete a transcription job
- Get a transcription job
- Get many transcription jobs

## 🤖 AI Integration

**Parameter Handling**: AI agents automatically provide values for:

- Resource IDs and identifiers
- Search queries and filters
- Content and data payloads
- Configuration options

**Response Format**: Native AWS Transcribe Tool API responses with full data structure

**Error Handling**: Built-in n8n error management and retry logic

## 💡 Usage Examples

Connect this MCP server to any AI agent or workflow:

- **Claude Desktop**: Add the MCP server URL to its configuration
- **Custom AI Apps**: Use the MCP URL as a tool endpoint
- **Other n8n Workflows**: Call MCP tools from any workflow
- **API Integration**: Direct HTTP calls to MCP endpoints

## ✨ Benefits

- **Complete Coverage**: Every AWS Transcribe Tool operation available
- **Zero Setup**: No parameter mapping or configuration needed
- **AI-Ready**: Built-in $fromAI() expressions for all parameters
- **Production Ready**: Native n8n error handling and logging
- **Extensible**: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by HoangSP
# AI-Powered Research Agent using Perplexity Sonar

## Description

This workflow acts as an AI-powered research assistant using the Perplexity Sonar model. When triggered by another workflow, it sends a user-defined prompt to the Perplexity API to retrieve up-to-date search results. The response is then parsed into a clean format for downstream processing.

## How it Works

1. **Trigger**: Activated from another workflow via Execute Workflow Trigger.
2. **Prompt Setup**: Sets a system role message and user query dynamically.
3. **API Call**: Sends a POST request to Perplexity's /chat/completions endpoint with your credentials (see the sketch below).
4. **Response Handling**: Extracts the message content from the API response.
5. **Output**: Returns the result, ready for display or further processing.

## Requirements

- A Perplexity AI API key
- Authentication set up via Header Auth with a Bearer token
- An n8n instance that allows outbound HTTP requests

## Customization Tips

- Modify the system prompt to suit your research domain
- Chain this workflow with other automation like blog creation, summaries, etc.
- Replace the output handling logic to fit Google Sheets, Notion, or Telegram
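For orientation, the API call step looks roughly like this; the model name and prompts are illustrative values to check against Perplexity's current model list.

```typescript
// Sketch of the Perplexity chat completions call made by the HTTP
// Request node; "sonar" and the prompts are illustrative values.
async function research(prompt: string, apiKey: string): Promise<string> {
  const res = await fetch("https://api.perplexity.ai/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "sonar",
      messages: [
        { role: "system", content: "You are a concise research assistant." },
        { role: "user", content: prompt },
      ],
    }),
  });
  const data = await res.json();
  // Extract the assistant message content for downstream nodes.
  return data.choices?.[0]?.message?.content ?? "";
}
```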