by Yahor Dubrouski
## Overview

Build your own AI Prompt Hub inside n8n. This template lets ChatGPT automatically search your saved prompts in Notion using semantic embeddings from HuggingFace. Each time a user sends a message, the workflow finds the most relevant prompt based on meaning, not keywords. Perfect for developers who maintain dozens of prompts and want ChatGPT to pick the right one automatically.

## Key Features

- **Semantic Prompt Search**: Finds the best prompt using HuggingFace embeddings
- **AI Agent Integration**: ChatGPT automatically calls the prompt-search workflow
- **Notion Prompt Database**: Store unlimited prompts with auto-generated embeddings
- **Automatic Embedding Sync**: Regenerates vectors when prompts change

This template is ideal for:
- AI automations
- Prompt engineering
- DevOps and backend engineers who reuse prompts
- Teams managing large prompt libraries

## How it works

1. The user sends any message to the ChatGPT interface
2. The n8n AI Agent calls a sub-workflow that performs semantic search in Notion
3. HuggingFace converts both the message and saved prompts into vector embeddings
4. The workflow returns the most similar prompt, which ChatGPT can use automatically

## Setup Instructions (15-20 minutes)

1. Import this template into your n8n instance
2. Set credentials for Notion, OpenAI, and HuggingFace
3. Create a Notion database with: Prompt (Text), Embeddings (Text), Checksum (Text)
4. Paste your Notion database ID in: "Get All Prompts", "On Page Update", "On Page Create", "Get All Prompts for Search"
5. Enable the workflow and open the URL from "When chat message received" to start chatting
6. Type any request; the system will search for a matching prompt automatically

## Documentation & Demo

Full documentation and examples: https://github.com/YahorDubrouski/ai-planner/blob/main/documentation/prompt-hub/README.md
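Semantic search of this kind typically ranks stored prompt embeddings by cosine similarity against the query embedding. A minimal sketch of that ranking step (the `prompts` array, its 3-dimensional vectors, and the function names are illustrative only; in the template the vectors come from HuggingFace and live in Notion):

```javascript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Pick the stored prompt whose embedding is closest to the query embedding.
function bestPrompt(queryEmbedding, prompts) {
  return prompts.reduce((best, p) => {
    const score = cosineSimilarity(queryEmbedding, p.embedding);
    return score > best.score ? { prompt: p.prompt, score } : best;
  }, { prompt: null, score: -Infinity });
}

// Hypothetical tiny vectors for illustration; real embeddings have hundreds of dimensions.
const prompts = [
  { prompt: "Summarize this text", embedding: [0.9, 0.1, 0.0] },
  { prompt: "Write unit tests", embedding: [0.0, 0.2, 0.9] },
];
console.log(bestPrompt([0.8, 0.2, 0.1], prompts).prompt);
```

This is why the search matches on meaning rather than keywords: two texts with no shared words can still have nearby embedding vectors.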
by ้ท่ฐทใ็ๅฎ
## Who is this for?

This template is perfect for agencies, consultancies, freelancers, and project-based teams who want to eliminate repetitive onboarding tasks. If you're tired of manually creating folders, Slack channels, and project pages every time a new client signs a contract, this automation will save you hours.

## What this workflow does

When a new contract PDF is uploaded to a designated Google Drive folder, this workflow automatically:
1. Parses the filename to extract client name, project name, and contact email
2. Creates a project folder structure in Google Drive with organized subfolders
3. Creates a dedicated Slack channel for project communication
4. Sets up a Notion project page with initial kickoff tasks
5. Logs project details to a master Google Sheet for tracking
6. Drafts a personalized welcome email using OpenAI GPT-4o-mini
7. Notifies your team on Slack with all relevant links when complete

## Setup steps

Time required: ~15 minutes

1. Configure OAuth credentials for Google Drive, Gmail, Google Sheets, Slack, and Notion
2. Add your OpenAI API key for AI-powered email drafting
3. Update the "Set Config Variables" node with your specific IDs:
   - Google Drive parent folder ID
   - Notion database ID
   - Google Sheet ID
   - Slack notification channel ID
4. Set up the trigger folder in Google Drive where contracts will be uploaded
5. Prepare your Google Sheet with columns: Client, Project Code, Notion Link, Slack Channel, Drive Folder

## Requirements

- Google Workspace account (Drive, Gmail, Sheets)
- Slack workspace with bot permissions to create channels
- Notion workspace with API integration
- OpenAI API key

## File naming convention

Upload PDF files using this format: `ClientName_ProjectName_email@example.com.pdf`

Example: `AcmeCorp_WebsiteRedesign_john@acme.com.pdf`

## How to customize

- **Add more subfolders**: Duplicate the "Create Deliverables Subfolder" node
- **Customize the email prompt**: Edit the "AI Draft Welcome Email" node
- **Add more Notion properties**: Extend the "Create Notion Project Page" node
- **Change notification format**: Modify the "Notify Team on Slack" message
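The filename-parsing step can be sketched as an n8n Code-node-style snippet. The function name, return shape, and validation are illustrative, not taken from the template; only the `ClientName_ProjectName_email.pdf` convention itself comes from the description above:

```javascript
// Parse "ClientName_ProjectName_email@example.com.pdf" into its three parts.
function parseContractFilename(filename) {
  const match = filename.match(/^([^_]+)_([^_]+)_(.+)\.pdf$/i);
  if (!match) {
    throw new Error(`Filename does not match ClientName_ProjectName_email.pdf: ${filename}`);
  }
  const [, client, project, email] = match;
  // Basic sanity check that the third segment looks like an email address.
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email)) {
    throw new Error(`Third segment is not an email address: ${email}`);
  }
  return { client, project, email };
}

console.log(parseContractFilename("AcmeCorp_WebsiteRedesign_john@acme.com.pdf"));
// { client: 'AcmeCorp', project: 'WebsiteRedesign', email: 'john@acme.com' }
```

Rejecting malformed names early, as sketched here, keeps a bad upload from creating half-configured folders and channels downstream.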
by Marth
# Automated Employee Recognition Bot with Slack + Google Sheets + Gmail

## Description

Turn employee recognition into an automated system. This workflow celebrates great work instantly: it posts recognition messages on Slack, sends thank-you emails via Gmail, and updates your tracking sheet automatically. Your team feels appreciated. Your HR team saves hours. Everyone wins.

## How It Works

1. You add a new recognition in Google Sheets.
2. The bot automatically celebrates it in Slack.
3. The employee receives a thank-you email.
4. HR gets notified and the sheet updates itself.

## Setup Steps

### 1. Prepare Your Google Sheet

Create a sheet called "Employee_Recognition_List" with these columns:

Name | Department | Reason | Date | Email | Status | EmailStatus

Then add one test row (for example, your own name) to see it work.

### 2. Connect Your Apps

Inside n8n:
- **Google Sheets**: Connect your Google account so the bot can read the sheet.
- **Slack**: Connect your Slack workspace to post messages in a channel (like #general).
- **Gmail**: Connect your Gmail account so the bot can send emails automatically.

### 3. (Optional) Add AI Personalization

If you want the messages to sound more natural, add an OpenAI node with this prompt:

> "Write a short, friendly recognition message for {{name}} from {{dept}} who was recognized for {{reason}}. Keep it under 2 sentences."

This makes your Slack and email messages feel human and genuine.

### 4. Turn It On

Once everything's connected:
1. Save your workflow
2. Set it to Active
3. Add a new row in your Google Sheet
4. The bot will instantly post on Slack and send a thank-you email
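If you skip the optional OpenAI step, a simple template message built in a Code node works as a fallback. The field names mirror the sheet columns above; the wording and function name are illustrative:

```javascript
// Build a Slack recognition message from one Google Sheets row.
function buildRecognitionMessage(row) {
  const { Name, Department, Reason } = row;
  return `:tada: Shout-out to *${Name}* from ${Department}! Recognized for: ${Reason}. Thank you!`;
}

// Example row, shaped like the Employee_Recognition_List columns.
const example = { Name: "Jane Doe", Department: "Support", Reason: "resolving a critical outage" };
console.log(buildRecognitionMessage(example));
```

The same row object can feed both the Slack post and the Gmail body, so the two messages always stay consistent.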
by Ronnie Craig
# AI Email Assistant - Smart Email Processing & Response

A sophisticated n8n workflow that transforms your email management with AI-powered classification, automatic responses, and intelligent organization.

## What This Workflow Does

This advanced AI email assistant automatically:
- **Analyzes** incoming emails using intelligent classification
- **Categorizes** messages by priority, urgency, and type
- **Generates** context-aware draft responses in your voice
- **Organizes** emails with smart labeling and filing
- **Alerts** you to urgent messages instantly
- **Manages** attachments with cloud storage integration

Perfect for busy professionals, customer service teams, and anyone drowning in email!

## Key Features

### Intelligent Email Analysis
- **Context-Aware Processing**: Understands email threads and conversation history
- **Smart Classification**: Automatically categorizes by priority, urgency, and required actions
- **Multi-Criteria Assessment**: Evaluates response needs, follow-up requirements, team involvement
- **Dynamic Label Management**: Syncs with your Gmail labels for consistent organization

### AI-Powered Response Generation
- **Professional Draft Creation**: Generates contextually appropriate responses
- **Tone Matching**: Mirrors the formality and style of incoming emails
- **Multiple Response Options**: Provides alternatives for complex inquiries
- **Customizable Voice**: Adapts to your business communication style

### Smart Notification System
- **Urgent Email Alerts**: Instant notifications for high-priority messages
- **Telegram/Slack Integration**: Get alerts where you work
- **Smart Filtering**: Only notifies when truly urgent
- **Quick Action Links**: Direct links to Gmail for immediate response

### Advanced Attachment Management
- **Automatic Cloud Upload**: Saves attachments to Google Drive
- **Smart File Naming**: Organized by date, sender, and content
- **Duplicate Detection**: Prevents redundant uploads
- **File Type Filtering**: Optional filtering for security

### Intelligent Organization
- **Auto-Labeling**: Applies relevant Gmail labels automatically
- **Progress Tracking**: Marks emails as "processed" or "digested"
- **Priority Indicators**: Visual priority levels in your inbox
- **Category-Based Sorting**: Groups similar emails together

## Setup Instructions

### Prerequisites
- n8n instance (cloud or self-hosted)
- Gmail account with API access
- OpenAI API key (or compatible AI service)
- Google Drive account (for attachments)
- Telegram bot (optional, for alerts)

### Step 1: Import the Workflow
1. Download AI_Email_Assistant_Community_Template.json
2. In n8n, navigate to Templates > Import from File
3. Select the downloaded JSON file
4. The workflow will import as inactive

### Step 2: Configure Credentials

**Gmail Setup:**
1. Create Gmail OAuth2 credentials in n8n
2. Configure the following nodes: Email_Trigger, Get Conversation Thread, Get Latest Message Content, Create Draft Response, Assign Classification Label, Mark as Processed, Get All Gmail Labels
3. Test connections to ensure proper authentication

**AI Model Setup:**
1. Configure the AI Language Model node
2. Options include: OpenAI (GPT-4, GPT-3.5-turbo), Anthropic Claude (recommended), local LLMs via Ollama
3. Add your API credentials
4. Test the connection

**Google Drive Setup (Optional):**
1. Create Google Drive OAuth2 credentials
2. Configure nodes: Upload to Google Drive, Check Existing Attachments
3. Replace YOUR_GOOGLE_DRIVE_FOLDER_ID with your folder ID
4. Create a dedicated folder for email attachments

**Telegram Alerts (Optional):**
1. Create a Telegram bot via @BotFather
2. Get your chat ID
3. Configure the Send Urgent Alert node
4. Replace YOUR_TELEGRAM_CHAT_ID with your actual chat ID

### Step 3: Customize AI Instructions

**Email Classification (AI Email Classifier node):**
- Review the classification criteria in the system message
- Adjust urgency keywords for your business
- Modify priority levels based on your needs
- Customize category definitions

**Response Generation (AI Response Generator node):**
- Update the response guidelines
- Replace [YOUR NAME] with your actual name
- Adjust tone and style preferences
- Add company-specific response templates

### Step 4: Configure Gmail Labels

**Create Custom Labels in Gmail** (or use existing labels): High Priority, Medium Priority, Low Priority, Needs Response, Urgent, Follow Up Required, Processed

**Update Label IDs:**
1. Run the workflow once to get label IDs
2. Replace YOUR_PROCESSED_LABEL_ID in the "Mark as Processed" node
3. Update any hardcoded label references

### Step 5: Test and Deploy

**Testing Process:**
1. Send yourself a test email
2. Monitor the workflow execution
3. Verify classification accuracy
4. Check draft response quality
5. Confirm labeling works correctly
6. Test urgent alert functionality

**Fine-Tuning:**
- Adjust AI prompts based on test results
- Refine classification criteria
- Update response templates
- Modify notification preferences

**Go Live:**
- Activate the workflow
- Monitor initial performance
- Adjust settings as needed

## Email Classification System

### Priority Levels
- **High**: Urgent matters requiring immediate attention
- **Medium**: Important but not time-critical
- **Low**: Routine or informational messages

### Classification Categories
- **toReply**: Direct questions or requests requiring response
- **urgent**: Immediate business impact or crisis situations
- **dateRelated**: Time-sensitive events or deadlines
- **attachmentsToUpload**: Financial docs or important files
- **requiresFollowUp**: Multi-step processes or ongoing projects
- **forwardToTeam**: Cross-departmental or collaborative items

### Response Generation Guidelines
- **Professional Tone**: Business casual, warm but professional
- **Context Awareness**: Considers email thread history
- **Structured Responses**: Clear paragraphs with actionable next steps
- **Placeholder System**: Uses [PLACEHOLDER] for missing information
- **Alternative Options**: Provides multiple response choices for complex inquiries

## Advanced Customization

### File Type Filtering

```javascript
// In Get Specific File Types node, modify:
if (
  mimeType === 'application/pdf' ||
  mimeType === 'text/xml' ||
  mimeType === 'image/jpeg'
) {
  // Process file
}
```

### Custom Urgency Keywords

Update the AI classifier prompt with your business-specific urgent terms:
- Keywords: "URGENT", "EMERGENCY", "CRITICAL", "ASAP", "IMMEDIATE"
- Custom terms: "CLIENT ESCALATION", "SYSTEM DOWN", "LEGAL DEADLINE"

### Response Templates

Customize the response generator with your company voice:
- Greeting style: "Hi [Name]" vs "Dear [Name]"
- Closing: "Best Regards" vs "Thank you" vs "Cheers"
- Company-specific phrases and terminology

### Integration Options
- **CRM Systems**: Add nodes to create tasks in your CRM
- **Project Management**: Auto-create tickets in Jira, Asana, etc.
- **Calendar Integration**: Schedule follow-ups automatically
- **Slack/Teams**: Alternative notification channels

## Troubleshooting

### Common Issues
1. **Gmail Authentication Errors**: Verify OAuth2 credentials are active; check Gmail API quotas; ensure proper scopes are configured
2. **AI Classification Inconsistency**: Review and refine classification prompts; add more specific examples; adjust confidence thresholds
3. **Response Generation Problems**: Validate AI model configuration; check API key and quotas; test with simpler email examples
4. **Attachment Upload Failures**: Verify Google Drive permissions; check folder ID configuration; ensure sufficient storage space
5. **Missing Notifications**: Test Telegram bot configuration; verify chat ID is correct; check urgency classification logic

### Performance Optimization
- **Rate Limiting**: Gmail has API quotas; monitor usage
- **Batch Processing**: Workflow processes one email at a time
- **Error Handling**: Built-in retry logic for reliability
- **Resource Management**: Monitor AI API costs and usage

## Best Practices

### 1. Email Management
- **Regular Monitoring**: Review classifications weekly
- **Label Hygiene**: Keep Gmail labels organized
- **Feedback Loop**: Manually correct misclassifications
- **Archive Strategy**: Set up auto-archiving for processed emails

### 2. AI Optimization
- **Prompt Engineering**: Continuously refine AI instructions
- **Example Training**: Add specific examples for your business
- **Context Limits**: Monitor token usage and costs
- **Model Selection**: Choose an appropriate AI model for your needs

### 3. Security Considerations
- **Credential Management**: Regularly rotate API keys
- **Data Privacy**: Review what data is sent to AI services
- **Access Control**: Limit workflow access to authorized users
- **Audit Logging**: Monitor workflow executions

### 4. Workflow Maintenance
- **Regular Updates**: Keep n8n and node versions current
- **Backup Strategy**: Export workflow configurations regularly
- **Documentation**: Keep setup notes and customizations documented
- **Testing**: Test major changes in a development environment first

## Contributing to the Community

This workflow template demonstrates:
- **Comprehensive AI Integration**: Multiple AI touchpoints working together
- **Production-Ready Architecture**: Error handling, retry logic, and monitoring
- **Extensive Documentation**: Clear setup and customization guidance
- **Flexible Configuration**: Adaptable to different business needs
- **Best Practice Examples**: Security, performance, and maintenance considerations

## License & Support

This workflow is provided free to the n8n community under the MIT License.

**Community Resources:**
- n8n Community Forum for questions
- GitHub Issues for bug reports
- Documentation updates welcome

**Professional Support:** For enterprise deployments or custom modifications, consider:
- n8n Cloud for managed hosting
- Professional services for complex integrations
- Custom AI model training for specific use cases

Transform your email workflow today! This AI Email Assistant reduces email processing time by up to 90% while ensuring no important message goes unnoticed. Perfect for busy professionals who want to stay responsive without being overwhelmed by their inbox.
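The priority levels and classification categories described above imply a structured result from the classifier. A sketch of mapping such a result to Gmail label names follows; the `result` object shape and the label strings are illustrative, not the template's actual schema:

```javascript
// Map a hypothetical classifier result to label names like those listed above.
function labelsFor(result) {
  const labels = [`${result.priority} Priority`]; // e.g. "High Priority"
  if (result.categories.includes('toReply')) labels.push('Needs Response');
  if (result.categories.includes('urgent')) labels.push('Urgent');
  if (result.categories.includes('requiresFollowUp')) labels.push('Follow Up Required');
  return labels;
}

const result = { priority: 'High', categories: ['toReply', 'urgent'] };
console.log(labelsFor(result)); // [ 'High Priority', 'Needs Response', 'Urgent' ]
```

Keeping this mapping in one place makes it easy to rename labels or add categories without touching the AI prompt itself.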
by franck fambou
## Overview

This intelligent chatbot workflow enables natural language conversations with your documents, supporting multiple file formats including PDFs, Word documents, Excel spreadsheets, and text files. Built with advanced RAG (Retrieval-Augmented Generation) technology, this chatbot can understand, analyze, and answer questions about your document content with contextual accuracy and intelligent responses.

## How It Works

Intelligent Document Processing & Conversation Pipeline:
- **Multi-Format Document Ingestion**: Automatically processes and indexes various document formats (PDF, DOCX, XLSX, TXT, etc.)
- **Smart Content Chunking**: Breaks down documents into meaningful segments while preserving context and relationships
- **Vector Database Storage**: Creates searchable embeddings for fast and accurate information retrieval
- **Contextual Conversation Engine**: Uses AI to understand user queries and retrieve relevant document sections
- **Natural Language Responses**: Generates human-like responses with citations and source references
- **Multi-Turn Conversations**: Maintains conversation history and context across multiple interactions
- **Real-Time Processing**: Instant responses with live document updates and dynamic content refresh

## Setup Instructions

Estimated Setup Time: 15-20 minutes

### Prerequisites
- n8n instance (v0.200.0 or higher recommended)
- OpenAI/Gemini API key for embeddings and chat completion
- Vector database service (optional: Pinecone, Weaviate, or Qdrant)
- File storage service (optional: Google Drive, Dropbox, AWS S3)
- Web server for chatbot interface (optional)

### Configuration Steps

1. **Configure Document Input Sources**
   - Set up file upload webhook for direct document submission
   - Configure cloud storage watchers for automatic document processing
   - Add support for multiple file formats and size limits
   - Set up document validation and security checks
2. **Set Up the Document Processing Pipeline**
   - Configure text extraction engines for different file types
   - Set up intelligent chunking parameters (chunk size, overlap, boundaries)
   - Add metadata extraction for document categorization
   - Configure OCR for scanned documents (optional)
3. **Configure the Vector Database**
   - Set up your chosen vector database credentials
   - Configure embedding model settings (Gemini models/text-embedding-004 recommended)
   - Set up collection/index structure for document storage
   - Configure search parameters and similarity thresholds
4. **Set Up the AI Chat Engine**
   - Add your AI service API credentials (Gemini, Claude, etc.)
   - Configure conversation prompts and system instructions
   - Set up context window management and token optimization
   - Add response formatting and citation rules
5. **Configure the Chat Interface**
   - Set up webhook endpoints for the chat API
   - Configure session management and conversation history
   - Add authentication and rate limiting (optional)
   - Set up real-time updates and streaming responses
6. **Set Up Monitoring & Analytics**
   - Configure conversation logging and analytics
   - Set up performance monitoring for response times
   - Add usage tracking and cost monitoring
   - Configure error handling and failover mechanisms

## Use Cases

### Business & Enterprise
- **Knowledge Base Queries**: Ask questions about company policies, procedures, and documentation
- **Contract Analysis**: Query legal documents, contracts, and compliance materials
- **Training Materials**: Interactive learning with training manuals and educational content
- **Financial Reports**: Analyze and discuss financial statements, budgets, and forecasts

### Research & Academia
- **Research Paper Analysis**: Discuss findings, methodologies, and citations from academic papers
- **Literature Reviews**: Compare and contrast multiple research documents
- **Thesis Support**: Get insights from reference materials and research data
- **Grant Proposals**: Analyze requirements and optimize proposal content

### Legal & Compliance
- **Legal Document Review**: Query contracts, agreements, and legal texts
- **Regulatory Compliance**: Understand compliance requirements from regulatory documents
- **Case Law Research**: Analyze legal precedents and court decisions
- **Policy Analysis**: Interpret organizational policies and procedures

### Technical Documentation
- **API Documentation**: Interactive queries about technical specifications
- **User Manuals**: Get help and guidance from product documentation
- **Code Documentation**: Understand codebases and technical implementations
- **Troubleshooting Guides**: Interactive problem-solving with technical guides

### Personal Productivity
- **Document Summarization**: Get quick summaries of long documents
- **Information Extraction**: Find specific data points across multiple documents
- **Content Research**: Research topics across your personal document library
- **Meeting Notes**: Query and analyze meeting transcripts and notes

## Key Features

### Advanced Document Processing
- **Multi-Format Support**: PDF, DOCX, XLSX, TXT, PPTX, and more
- **Intelligent Chunking**: Context-aware document segmentation
- **Metadata Extraction**: Automatic categorization and tagging
- **OCR Integration**: Process scanned documents and images with text

### Intelligent Conversation
- **Contextual Understanding**: Maintains conversation context and document relationships
- **Source Attribution**: Provides citations and references for all answers
- **Multi-Document Queries**: Compare and analyze across multiple documents
- **Follow-up Questions**: Natural conversation flow with clarifying questions

### Performance & Scalability
- **Fast Retrieval**: Vector-based semantic search for instant responses
- **Scalable Architecture**: Handle large document collections efficiently
- **Batch Processing**: Process multiple documents simultaneously
- **Caching System**: Optimized response times with intelligent caching

### Security & Privacy
- **Document Encryption**: Secure storage and transmission of sensitive documents
- **Access Control**: User-based permissions and document access restrictions
- **Audit Logging**: Complete conversation and access audit trails
- **Data Retention**: Configurable data retention and deletion policies

## Technical Architecture

### Document Processing Flow
File Upload → Format Detection → Text Extraction → Content Chunking → Metadata Extraction → Embedding Generation → Vector Storage → Index Creation

### Conversation Flow
User Query → Intent Analysis → Vector Search → Context Retrieval → Response Generation → Source Attribution → Answer Formatting → Delivery

### Supported File Formats
- **Documents**: PDF, DOC, DOCX, RTF, TXT, MD
- **Spreadsheets**: XLS, XLSX, CSV
- **Presentations**: PPT, PPTX
- **Images**: PNG, JPG (with OCR)
- **Archives**: ZIP (auto-extracts supported formats)
- **Web**: HTML, XML

## Integration Options

### Chat Interfaces
- **Web Widget**: Embeddable chat widget for websites
- **API Endpoints**: RESTful API for custom integrations
- **Slack/Teams**: Direct integration with team collaboration tools
- **Mobile Apps**: API-first design for mobile application integration

### Data Sources
- **Cloud Storage**: Google Drive, Dropbox, OneDrive, AWS S3
- **Document Systems**: SharePoint, Confluence, Notion
- **Email**: Process attachments from email systems
- **CRM/ERP**: Integration with business systems

## Performance Specifications
- **Response Time**: < 3 seconds for typical queries
- **Document Capacity**: Supports collections of 10,000+ documents
- **Concurrent Users**: Scales to handle multiple simultaneous conversations
- **Accuracy**: >90% relevance for domain-specific queries

## Advanced Configuration Options

### Customization
- **Custom Prompts**: Tailor AI behavior for specific use cases
- **Branding**: Customize the chat interface with your company branding
- **Language Support**: Multi-language document processing and responses
- **Domain Expertise**: Fine-tune for specific industries or domains

### Analytics & Monitoring
- **Usage Analytics**: Track popular queries and document usage
- **Performance Metrics**: Monitor response times and accuracy
- **User Feedback**: Collect ratings and improve responses
- **A/B Testing**: Test different configurations and prompts

## Troubleshooting & Support

### Common Issues
- **Slow Responses**: Check vector database performance and API limits
- **Inaccurate Answers**: Review chunking strategy and embedding quality
- **Format Errors**: Verify document formats and processing capabilities
- **Memory Issues**: Monitor token usage and context window limits

### Optimization Tips
- Use clear, specific questions for best results
- Ensure documents are well-formatted with proper headers
- Perform regular vector database maintenance for optimal performance
- Monitor API usage to optimize costs and performance
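The "intelligent chunking parameters (chunk size, overlap, boundaries)" mentioned above can be illustrated with a minimal fixed-size chunker. Real pipelines usually split on sentence or section boundaries rather than raw character counts, and the sizes here are arbitrary:

```javascript
// Split text into overlapping chunks so context survives chunk boundaries.
function chunkText(text, chunkSize, overlap) {
  if (overlap >= chunkSize) throw new Error('overlap must be smaller than chunkSize');
  const chunks = [];
  const step = chunkSize - overlap;
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
  }
  return chunks;
}

console.log(chunkText('abcdefghij', 4, 2)); // [ 'abcd', 'cdef', 'efgh', 'ghij' ]
```

The overlap means a sentence cut at one chunk's edge still appears whole in the next chunk, which is what keeps retrieval from losing answers that straddle a boundary.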
by Cordexa Technologies
This template monitors a Google Drive folder for new files, extracts text from PDFs, images, text files, CSVs, and Google Docs, reads images with meta/llama-3.2-11b-vision-instruct, structures the result with nvidia/llama-3.3-nemotron-super-49b-v1.5, logs everything to Google Sheets, and sends a Telegram notification when processing finishes.

## What This Template Does

- Watches a specific Google Drive folder for new files with the Google Drive Trigger.
- Downloads each new file with Google Drive before processing.
- Routes PDFs, images, text files, CSVs, and Google Docs through the correct extraction branch.
- Extracts image text with meta/llama-3.2-11b-vision-instruct.
- Structures extracted content into JSON fields with nvidia/llama-3.3-nemotron-super-49b-v1.5 through NVIDIA NIM.
- Appends the final result to Google Sheets in Extract_Log.
- Sends a Telegram notification when processing is complete.

## Key Benefits

- Turns a Drive folder into a reusable intake point for mixed file types.
- Creates a searchable audit trail in Google Sheets for every processed file.
- Sends a lightweight Telegram notification without requiring Telegram as the input channel.
- Keeps the extraction and structuring logic reusable for internal ops or client delivery workflows.
- Makes it easier to test multimodal document processing with free-tier NVIDIA NIM models.

## Features

- Google Drive Trigger configured for new files in a specific folder.
- Google Drive download step for binary file access before extraction.
- File-type routing with Switch and normalization with Code nodes.
- Native n8n Extract from File nodes for PDF, TXT, and CSV parsing.
- NVIDIA NIM HTTP Request nodes for image OCR and structured JSON generation.
- Google Sheets append logging with a fixed Extract_Log tab schema.
- Plain-text Telegram completion notifications with a fixed destination chat ID.

## Requirements

- n8n instance with access to Google Drive Trigger, Google Drive, Google Docs, Google Sheets, HTTP Request, Telegram, and Extract from File nodes.
- Google Drive OAuth2 credential with access to the watched folder.
- Google Docs OAuth2 credential with access to any Google Docs files you want to process.
- Google Sheets OAuth2 credential and a sheet with an Extract_Log tab.
- Telegram bot credential plus a valid destination chat ID for notifications.
- NVIDIA NIM API key stored as an HTTP Header Auth credential.
- A folder ID and Google Sheet ID added to the provided placeholders before activation.

## Target Audience

- Operations teams monitoring a shared Drive folder for inbound files.
- Founders and solo operators who want document extraction.
- Agencies building reusable back-office workflows for receipts, notes, and uploaded files.
- Analysts who want structured text output logged into Google Sheets automatically.
- Automation builders testing file-driven multimodal extraction with Drive as the source.

## Step-by-Step Setup Instructions

1. Import the workflow and read every sticky note on the canvas before editing any nodes.
2. Connect your Google Drive, Google Docs, Google Sheets, Telegram, and NVIDIA NIM credentials.
3. Replace REPLACE_WITH_GOOGLE_DRIVE_FOLDER_ID, REPLACE_WITH_GOOGLE_SHEET_ID, and REPLACE_WITH_TELEGRAM_CHAT_ID in the marked nodes.
4. Create the Extract_Log tab with the required headers shown in the sticky notes.
5. Test one file at a time in this order: PDF, TXT, CSV, image, then Google Docs file.
6. Confirm that each test adds one clean row to Google Sheets and sends one Telegram notification.
7. Activate the workflow only after every supported path works end to end.

Built by Cordexa Technologies: https://cordexa.tech | cordexatech@gmail.com
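The file-type routing step ("File-type routing with Switch and normalization with Code nodes") can be sketched as a small normalizer that maps a file's MIME type to an extraction branch. The branch names here are illustrative, not the template's actual Switch outputs:

```javascript
// Map a file's MIME type to the extraction branch it should take.
function extractionBranch(mimeType) {
  if (mimeType === 'application/pdf') return 'pdf';
  if (mimeType.startsWith('image/')) return 'image'; // routed to the vision model
  if (mimeType === 'text/csv') return 'csv';
  if (mimeType === 'text/plain') return 'text';
  if (mimeType === 'application/vnd.google-apps.document') return 'google-doc';
  return 'unsupported';
}

console.log(extractionBranch('image/png'));       // image
console.log(extractionBranch('application/pdf')); // pdf
```

Normalizing in a Code node before the Switch keeps the Switch itself simple: it only compares against a short, fixed list of branch names.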
by Davidson Ahuruezenma
AI-Powered Academic Assignment Generator This n8n workflow template automates the complete academic assignment generation process from student queries to professional document delivery. Students submit assignment requests via Telegram, and the workflow generates comprehensive, plagiarism-free academic content using Google Gemini AI, formats it into professional PDF documents, and delivers downloadable links while maintaining complete records. What does this workflow do? ๐ฑ Telegram Integration**: Receives structured assignment requests from students ๐ค AI Content Generation**: Creates comprehensive academic answers (500+ words per question) ๐ Professional Formatting**: Generates university-standard HTML/PDF documents โ๏ธ Cloud Storage**: Automatically stores files in organized Google Drive folders ๐ Record Keeping**: Maintains complete assignment database in Google Sheets ๐ End-to-End Automation**: Complete pipeline from query to document delivery How it works The workflow processes student assignment requests through 16 interconnected nodes, handling everything from input parsing to final document delivery: Input โ AI Processing โ Document Generation โ Storage & Delivery Setup Requirements Credentials needed: Telegram Bot Token** (for receiving/sending messages) Google Gemini API Key** (for AI content generation) Google Sheets API** (for record keeping) Google Drive API** (for file storage) PDFCrowd API** (for PDF conversion) Pre-setup steps: Create a Telegram bot and obtain the bot token Set up Google Drive folder structure for file organization Create Google Sheets template with proper column headers Configure API rate limits and usage quotas Workflow Breakdown ๐ Input Processing Nodes Student Query Intake Bot (Telegram Trigger) Student Query Intake Bot (Telegram Trigger) Listens for incoming student messages with assignment details Monitors specific chat ID for authorized users Triggers workflow when structured assignment requests are received Structured Data 
Parser (Code Node) Extracts student information using regex patterns Parses: Name, Faculty, Department, Level, Course, Registration Number Automatically sets current date and handles missing data Outputs clean JSON structure for AI processing ๐ค AI Processing Nodes Student Assignment Auto-Composer (LangChain Agent) Main AI orchestrator for assignment generation Uses structured prompts for consistent academic formatting Generates 500-word answers per question with APA citations Ensures plagiarism-free, original academic content Generator Model (Google Gemini Chat) Primary AI model for high-quality content generation Handles complex academic writing and formatting requirements Fallback Model Generator (Google Gemini - Gemma) Backup AI model ensuring workflow reliability Activates when primary model encounters issues Structured Output Parser (LangChain) Validates AI-generated content against JSON schema Enforces required field compliance and format consistency Auto-fixes common formatting issues ๐ง Processing & Error Handling Error Handler (Code Node) Handles text processing errors and data type issues Converts non-string values and provides error recovery Ensures workflow continuity even with problematic data Wait Node Introduces strategic 2-second delay for processing stability Allows AI processing to complete before next steps ๐ Data Management Nodes Edit Fields (Set Node) Maps AI output to Google Sheets column structure Ensures data consistency for database storage Long Essay Record Sheet (Google Sheets) Stores complete assignment records with metadata Maintains comprehensive student assignment database Uses Name field as unique identifier for record updates ๐ Document Generation Nodes Static HTML Builder (LangChain Agent) Converts structured data into professional HTML documents Applies academic formatting: Times New Roman, 12pt, double-spaced Creates university-standard document structure HTTP Request (PDF Conversion) Converts HTML to high-quality PDF using 
PDFCrowd API Maintains academic formatting and professional appearance Uses student name for file identification โ๏ธ Storage & Delivery Nodes Upload File (Google Drive) Stores generated PDFs in organized Drive folders Creates shareable links for easy access Maintains systematic file organization Send Text Message (Telegram) Delivers Google Drive download link to student Completes the automation cycle with instant access Input Format Students should format their Telegram messages as follows: Name: John Doe Faculty: Engineering Department: Computer Science Level: 200L Course: CSC 201 - Data Structures Reg number: 2024001234 Question: Explain the concept of Big O notation Compare different sorting algorithms Discuss the applications of binary trees Features โจ Intelligent Processing Smart Input Parsing**: Handles unstructured text inputs automatically Multi-Question Support**: Processes complex assignment requirements Data Validation**: Ensures complete and accurate information capture ๐ Academic Excellence University Standards**: Professional formatting and citation styles Original Content**: Plagiarism-free AI-generated assignments Comprehensive Answers**: 500+ words per question with detailed explanations ๐ก๏ธ Reliability & Error Handling Fallback Systems**: Multiple AI models for continuous operation Error Recovery**: Automatic handling of processing issues Data Integrity**: Schema validation and field verification Use Cases This workflow template is perfect for: ๐ Educational Institutions**: Automate student assignment processing and grading assistance ๐จโ๐ Academic Support Services**: Provide structured learning assistance and content generation ๐ซ Online Learning Platforms**: Integrate assignment automation into educational systems ๐ Content Creation Services**: Generate academic-quality content for educational purposes ๐ค AI Learning Projects**: Implement complex AI workflows with multiple service integrations Output Examples Generated Assignment Features: 
- **Professional formatting** with Times New Roman, 12pt font, double spacing
- **Complete academic structure** including headers, student information, questions, and references
- **Comprehensive answers** averaging 500+ words per question with detailed explanations
- **Proper citations** in APA format with authentic academic references
- **PDF delivery** through shareable Google Drive links

### Database Records
- Complete student information tracking
- Assignment question and answer storage
- Timestamp and metadata preservation
- Easy retrieval and analysis capabilities

## Performance & Reliability
- Processing time: 2-3 minutes per assignment
- Success rate: >95% with fallback mechanisms
- Content quality: university-standard academic writing
- Scalability: handles multiple concurrent requests
- Error recovery: automatic retry and alternative processing paths

## Customization Options
Easily configurable elements:
- **Chat IDs**: Modify for different Telegram groups or users
- **AI Models**: Switch between different Google Gemini models
- **Document Formatting**: Adjust academic standards and styling
- **Storage Locations**: Configure Google Drive folders and naming conventions
- **Database Fields**: Modify Google Sheets columns and data structure

Advanced customizations:
- Add support for different document formats (Word, LaTeX)
- Integrate additional AI providers (OpenAI, Claude, etc.)
- Implement grading and feedback mechanisms
- Add multi-language support
- Create batch processing capabilities

## Getting Started
1. Import the workflow into your n8n instance
2. Configure credentials for all required services
3. Set up the Telegram bot and obtain the necessary permissions
4. Create the Google Drive folders and Google Sheets template
5. Test with sample data to ensure proper functionality
6. Deploy and monitor for production use

## Tags
academic, education, ai, telegram, google-sheets, pdf-generation, automation, langchain, assignment, student-support
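The Parser node's regex extraction over the labeled input format above can be sketched as a short n8n Code-node function. This is an illustrative sketch, not the template's actual code; the helper name and the "N/A" default for missing fields are assumptions.

```javascript
// Sketch of the Parser (Code node): pull labeled fields out of a Telegram
// message with regex, default missing values, and stamp the current date.
// Labels match the input format above; parseAssignmentRequest is hypothetical.
function parseAssignmentRequest(text) {
  const grab = (label) => {
    const m = text.match(new RegExp(label + ":\\s*(.+)", "i"));
    return m ? m[1].trim() : "N/A"; // graceful handling of missing data
  };
  // Questions: everything after the "Question:" label, one per line
  const qMatch = text.match(/Question:\s*([\s\S]+)/i);
  const questions = qMatch
    ? qMatch[1].split("\n").map((q) => q.trim()).filter(Boolean)
    : [];
  return {
    name: grab("Name"),
    faculty: grab("Faculty"),
    department: grab("Department"),
    level: grab("Level"),
    course: grab("Course"),
    regNumber: grab("Reg number"),
    questions,
    date: new Date().toISOString().slice(0, 10), // current date, YYYY-MM-DD
  };
}
```

In an n8n Code node, the returned object would be emitted as the item's `json` payload for the downstream AI nodes.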
by Oneclick AI Squad
This automated n8n workflow converts any technical documentation or blog post URL into a professional, step-by-step developer tutorial video complete with AI-generated narration, code syntax highlighting, terminal command animations, and visual diagrams. The system intelligently analyzes documentation structure, extracts code examples, generates natural voiceover narration, creates synchronized visual scenes, and automatically publishes the finished video to YouTube with SEO-optimized descriptions.

## Fundamental Aspects
- **Webhook-Based Trigger**: Accepts HTTP POST requests containing a documentation URL to initiate the automated video creation pipeline on demand.
- **Intelligent Content Extraction**: Fetches HTML content, parses documentation structure, extracts code blocks with language detection, identifies headings for organization, and cleans irrelevant elements like navigation and scripts.
- **AI-Powered Tutorial Planning**: Uses Claude AI to analyze documentation content and generate a comprehensive tutorial outline including section titles, duration estimates, narration scripts, visual types (code/terminal/diagram), and learning outcomes.
- **Professional Audio Generation**: Converts narration scripts into high-quality audio using Google Cloud Text-to-Speech with natural-sounding neural voices, proper pacing, and timing synchronization.
- **Dynamic Visual Scene Creation**: Generates code editor scenes with syntax highlighting and typewriter effects, terminal animations with command execution sequences, flowchart diagrams with progressive reveals, and text overlays with key points.
- **Automated Video Rendering**: Combines audio narration with visual scenes using the Remotion API to render publication-ready videos in 1080p resolution at 30 fps with smooth transitions.
- **Multi-Platform Distribution**: Automatically uploads completed videos to YouTube with AI-generated titles and descriptions, backs up to Google Drive for archival, and returns comprehensive metadata via webhook response.
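The content-extraction step described above can be sketched as a small JavaScript function. A production workflow would use a proper HTML parser; this regex-based version is only illustrative, and the function name, the `language-`/`lang-` class convention, and the `plaintext` fallback are assumptions.

```javascript
// Minimal sketch of "Intelligent Content Extraction": pull <pre><code> blocks
// out of fetched HTML (detecting the language from a class attribute) and
// strip script/style/nav noise before handing the text to the AI planner.
function extractDocContent(html) {
  const codeBlocks = [];
  const codeRe =
    /<pre[^>]*><code(?:\s+class="(?:language|lang)-(\w+)")?[^>]*>([\s\S]*?)<\/code><\/pre>/gi;
  let m;
  while ((m = codeRe.exec(html)) !== null) {
    codeBlocks.push({ language: m[1] || "plaintext", code: m[2].trim() });
  }
  const text = html
    .replace(/<(script|style|nav)[\s\S]*?<\/\1>/gi, "") // drop irrelevant elements
    .replace(/<[^>]+>/g, " ") // strip remaining tags
    .replace(/\s+/g, " ")
    .trim();
  return { text, codeBlocks };
}
```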
## Setup Instructions
1. **Import the Workflow into n8n**: Download the workflow JSON file and import it via the n8n interface under "Workflows" → "Import from File".
2. **Configure Claude AI (Anthropic) Credentials**: Navigate to the "Analyze with Claude AI" node and click the credentials dropdown. Create new Anthropic credentials using your API key from console.anthropic.com. Ensure you have access to the Claude Sonnet 4 model (claude-sonnet-4-20250514). Save and test the connection to verify API access.
3. **Set Up Google Cloud Text-to-Speech**: Go to the Google Cloud Console and enable the Text-to-Speech API. Create a service account with the "Cloud Text-to-Speech User" role. Generate and download a JSON key file for the service account. In n8n, navigate to the "Generate Audio with Google TTS" node and add the service account credentials. Upload the JSON key file when prompted.
4. **Configure Remotion API for Video Rendering**: Sign up for a Remotion account at remotion.dev and obtain API credentials. In the "Render Video with Remotion" node, add HTTP Header Auth credentials. Set the authorization header with your Remotion API key. Ensure you have a Remotion composition named "TutorialVideo" deployed. Note: you may need to create a custom Remotion project for code highlighting and terminal animations.
5. **Add YouTube OAuth2 Credentials**: Navigate to the "Upload to YouTube" node and create YouTube OAuth2 credentials. Follow Google's OAuth flow to authorize n8n to upload videos on your behalf. Ensure your YouTube account has upload permissions and is verified for videos longer than 15 minutes. Configure the default privacy setting (public, unlisted, or private) in the node parameters.
6. **Configure Google Drive Backup**: Go to the "Backup to Google Drive" node and add Google Drive OAuth2 credentials. Authorize n8n to access your Google Drive. Optionally specify a folder ID in the node options to organize video backups.
7. **Activate Webhook Endpoint**: Activate the workflow using the toggle switch in the top-right corner. Copy the webhook URL from the "Webhook Trigger" node (it appears after activation). The URL will be in the format https://your-n8n-instance.com/webhook/create-video.
8. **Test the Workflow**: Send a test POST request to the webhook URL using curl, Postman, or HTTPie:

   ```bash
   curl -X POST https://your-n8n-instance.com/webhook/create-video \
     -H "Content-Type: application/json" \
     -d '{"documentationUrl": "https://docs.example.com/getting-started"}'
   ```

   Monitor the execution in n8n's "Executions" tab to track progress through each node. Check YouTube and Google Drive for the generated video (processing may take 5-15 minutes depending on content length).
9. **Verify Output Quality**: Review the generated video for audio quality, code-highlighting accuracy, and pacing. Check the YouTube description for proper formatting of prerequisites and learning outcomes. Ensure code snippets are readable and terminal animations are properly synchronized.

## Technical Dependencies
- **Claude AI (Anthropic)**: For intelligent content analysis, tutorial outline generation, section structuring, and narration script writing with natural language processing.
- **Google Cloud Text-to-Speech**: For converting narration scripts into professional-quality audio with neural voice models (en-US-Neural2-J recommended for technical content).
- **Remotion API**: For programmatic video rendering, scene composition, code syntax highlighting, terminal animations, and transition effects (requires custom React components).
- **YouTube Data API v3**: For automated video uploads, metadata management, thumbnail generation, and playlist organization.
- **Google Drive API**: For backup storage, file sharing, and archival of raw video files with organized folder structures.
- **n8n Platform**: For workflow orchestration, webhook handling, conditional logic, error handling, and execution monitoring.
- **JavaScript Runtime**: For custom content parsing, JSON manipulation, code language detection, timing calculations, and data transformation in Code nodes.
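The timing calculations mentioned under the JavaScript runtime dependency can be sketched roughly as follows. The 150 words-per-minute pace is an assumed default for neural TTS at normal speed, not a value from the template, and the helper name is illustrative.

```javascript
// Rough sketch of narration/scene timing: estimate audio duration from word
// count at an assumed TTS pace, then convert to frames for the 30 fps render.
const WORDS_PER_MINUTE = 150; // assumed pace at speakingRate 1.0
const FPS = 30; // matches the 30 fps output described above

function estimateSceneTiming(narration) {
  const words = narration.trim().split(/\s+/).filter(Boolean).length;
  const seconds = (words / WORDS_PER_MINUTE) * 60;
  return {
    words,
    durationSeconds: Math.round(seconds * 10) / 10,
    durationFrames: Math.ceil(seconds * FPS), // Remotion scenes are frame-based
  };
}
```

A Code node could run this over each tutorial section to pre-size its Remotion scene before the real audio is rendered.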
## Customization Possibilities
- **Voice Customization**: Change the narrator voice in the "Generate Narration Script" node by modifying the voice parameter. Google TTS offers multiple voices (male, female, different accents). Adjust speed (0.25-4.0) and pitch (-20 to +20) for different pacing styles. Use different voices for intro/outro vs. main content.
- **Video Branding**: Add custom intro/outro animations by modifying the Remotion composition. Include your logo, channel name, and subscribe animations. Customize color schemes in code editor themes (Dracula, Monokai, Solarized, One Dark). Add watermarks or corner branding throughout the video.
- **Code Editor Themes**: Change syntax-highlighting themes in the "Create Visual Scenes" node. Popular options include Dracula (default), VS Code Dark+, GitHub Light, Monokai Pro, and Nord. Adjust font sizes, line spacing, and highlighting animation speeds for readability.
- **Content Filtering**: Add pre-processing logic to filter specific documentation sections. Skip changelog entries, API reference tables, or installation instructions if not needed. Focus on tutorial-style content only. Add minimum/maximum content-length thresholds.
- **Multi-Language Support**: Extend the workflow to detect the documentation language and use appropriate TTS voices. Support Spanish (es-ES), French (fr-FR), German (de-DE), Japanese (ja-JP), and other languages. Generate localized titles and descriptions.
- **Advanced Visual Types**: Add screen-recording capabilities for live demonstrations. Include animated flowcharts using Mermaid or D3.js. Generate architecture diagrams from code structure. Add picture-in-picture video of an instructor or an animated avatar.
- **Tutorial Complexity Detection**: Use Claude AI to assess the documentation difficulty level and adjust pacing accordingly. Beginner content gets slower narration and more detailed explanations; advanced content can move faster with less repetition.
- **Interactive Elements**: Generate timestamp chapters for YouTube with clickable sections. Create an accompanying blog post or GitHub repository with code examples. Generate quiz questions based on content for learning validation.
- **Quality Assurance**: Add validation nodes to check video quality before upload. Verify audio levels are balanced, code is readable at 1080p, and total duration matches expectations. Implement retry logic for failed renders.
- **Batch Processing**: Extend the webhook to accept multiple URLs for bulk video generation. Create playlists automatically for related documentation pages. Schedule sequential uploads to avoid flooding your channel.
- **Analytics Integration**: Track video performance by connecting to the YouTube Analytics API. Monitor view counts, engagement rates, and audience retention. Use the insights to improve future video-generation parameters.
- **Cost Optimization**: Implement caching for previously processed documentation URLs to avoid redundant API calls. Use cheaper TTS voices for internal testing. Compress videos before upload while maintaining quality. Set API rate limits to control costs.
- **Custom Remotion Components**: Build specialized React components for your tech stack (e.g., database schema visualizers, API request/response animations, deployment pipeline diagrams). Create reusable templates for common tutorial patterns.
- **Notification System**: Add email or Slack notifications when videos complete processing. Include video URLs, processing time, and any errors encountered. Send daily summaries of generated videos.
- **SEO Enhancement**: Use Claude AI to generate SEO-optimized titles, descriptions, and tags. Research trending keywords in your niche. Auto-generate closed captions and subtitles for accessibility and searchability.

## Explore More
**AI Video Automation**: Contact us to design custom video automation workflows for product demos, educational content, marketing videos, or AI-powered content creation pipelines tailored to your business needs.
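The voice, speed, and pitch settings from the customization list above plug into the Google Cloud Text-to-Speech request roughly as sketched below. The request shape (`input`/`voice`/`audioConfig`) follows the public `text:synthesize` API; the builder function and its defaults are illustrative, not the template's actual node configuration.

```javascript
// Sketch of where the voice/speed/pitch knobs land in a Google TTS request.
// Clamp ranges mirror the limits quoted above (speed 0.25-4.0, pitch -20..+20).
function buildTtsRequest(text, { voice = "en-US-Neural2-J", speed = 1.0, pitch = 0 } = {}) {
  const clamp = (v, lo, hi) => Math.min(hi, Math.max(lo, v));
  return {
    input: { text },
    // languageCode is the leading locale part of the voice name, e.g. "en-US"
    voice: { languageCode: voice.split("-").slice(0, 2).join("-"), name: voice },
    audioConfig: {
      audioEncoding: "MP3",
      speakingRate: clamp(speed, 0.25, 4.0),
      pitch: clamp(pitch, -20, 20),
    },
  };
}
```

In n8n this body would be sent via an HTTP Request node (or the TTS node's equivalent fields) rather than assembled by hand.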
by Jordan
This n8n template demonstrates how to automate YouTube content repurposing using AI. Upload a video to Google Drive and automatically generate transcriptions, A/B-testable titles, AI thumbnails, short-form clips with captions, and YouTube descriptions with chapter timestamps.

Use cases include: content creators who publish 1-2 long-form videos per week and need to extract 5-10 short-form clips, YouTube agencies managing multiple channels, and automation consultants building content systems for clients.

## Good to know
- Processing time is approximately 10-15 minutes per video depending on length
- Cost per video is roughly $1.00 (transcription $0.65, AI generation $0.35)
- YouTube captions take 10-60 minutes to generate after upload; the workflow includes automatic polling to check when captions are ready
- Manual steps still required: video clipping (using the provided timestamps), social media posting, and YouTube A/B test setup

## How it works
1. When a video is uploaded to Google Drive, the workflow automatically triggers and creates an Airtable record
2. The video URL is sent to AssemblyAI (via Apify) for transcription with H:MM:SS.mmm timestamps
3. GPT-4o-mini analyzes the transcript and generates 3 title variations optimized for A/B testing
4. When you click "Generate thumbnail" in Airtable, your prompt is optimized and sent to Kie.ai's Nano Banana Pro model with 2 reference images for consistent branding
5. After uploading to YouTube, the workflow polls YouTube's API every 5 minutes to check whether auto-generated captions are ready
6. Once captions are available, click "Generate clips" and Grok 4.1 Fast analyzes the transcript to identify 3-8 elite clips (45+ seconds each) with proper start/end boundaries and action-oriented captions
7. GPT-4o-mini generates a YouTube description with chapter timestamps based on the transcript
8. All outputs are saved to Airtable: titles, thumbnail, clip timestamps with captions, and description

## How to use
1. Duplicate the provided Airtable base template and connect it to your n8n instance
2. Create a Google Drive folder for uploading edited videos
3. After activating the workflow, copy the webhook URLs and paste them into Airtable button formulas and automations
4. Upload your edited video to the designated Google Drive folder to trigger the system
5. The workflow automatically generates titles and begins transcription
6. Add your thumbnail prompt and 2 reference images to Airtable, then click "Generate thumbnail"
7. Upload the video to YouTube as unlisted, paste the video ID into Airtable, and check the box to trigger clip generation
8. Use the provided timestamps to manually clip videos in your editor
9. Copy titles, thumbnail, clips, and description from Airtable to publish across platforms

## Requirements
- Airtable account (Pro plan recommended for automations)
- Google Drive for video upload monitoring
- Apify account for video transcription via the AssemblyAI actor
- OpenAI API key for title and description generation (GPT-4o-mini)
- OpenRouter API key for clip identification (Grok 4.1 Fast)
- Kie.ai account for AI thumbnail generation (Nano Banana Pro model)
- YouTube Data API credentials for caption polling

## Customising this workflow
- Tailor the system prompts to your content niche by asking Claude to adjust them without changing the core structure
- Modify the clip-identification criteria (length, caption style, number of clips) in the Grok prompt
- Adjust the thumbnail generation style by updating the image prompt optimizer
- Add custom fields to Airtable for tracking performance metrics or additional metadata
- Integrate with additional platforms like TikTok or Instagram APIs for automated posting
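The H:MM:SS.mmm timestamps used for transcription and clip boundaries above can be converted to and from milliseconds with a pair of small helpers. This is a sketch; the function names are illustrative and not part of the template.

```javascript
// Helpers for the H:MM:SS.mmm timestamp format used by the transcript and
// clip steps, so clip start/end boundaries can be compared and validated
// (e.g. enforcing the 45-second minimum clip length mentioned above).
function msToTimestamp(ms) {
  const h = Math.floor(ms / 3600000);
  const m = Math.floor((ms % 3600000) / 60000);
  const s = Math.floor((ms % 60000) / 1000);
  const frac = ms % 1000;
  const pad = (n, w = 2) => String(n).padStart(w, "0");
  return `${h}:${pad(m)}:${pad(s)}.${pad(frac, 3)}`;
}

function timestampToMs(ts) {
  const [h, m, rest] = ts.split(":");
  const [s, frac = "0"] = rest.split(".");
  return ((+h * 60 + +m) * 60 + +s) * 1000 + +frac.padEnd(3, "0");
}
```

For example, a clip qualifies only if `timestampToMs(end) - timestampToMs(start) >= 45000`.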
by Dinakar Selvakumar
## Complete AI support system using website data (RAG pipeline)
This template provides a full end-to-end Retrieval-Augmented Generation (RAG) system using n8n. It includes two connected workflows:
1. A data ingestion pipeline that crawls a website and stores its content in a vector database.
2. A customer support chatbot that retrieves this knowledge and answers user queries in real time.

Together, these workflows allow you to turn any public website into an intelligent AI-powered support assistant grounded in real business data.

## Use cases
- AI customer support chatbot for your website
- Internal company knowledge assistant
- Product FAQ automation
- Helpdesk or IT support bot
- AI receptionist for services
- Semantic search over company content

## How it works
### Ingestion workflow
1. Discover all URLs from the website sitemap.
2. Filter and normalize the URLs.
3. Fetch each page and extract readable text.
4. Clean HTML into plain text.
5. Split text into overlapping chunks.
6. Generate embeddings using OpenAI.
7. Store vectors in Pinecone with metadata.

### Chatbot workflow
1. A user sends a message via the chat webhook.
2. The agent queries Pinecone for relevant knowledge.
3. Retrieved content is passed to OpenAI.
4. OpenAI generates a grounded response.
5. Short-term memory maintains conversation context.

## How to use
### Step 1 – Run ingestion
1. Set your target website URL.
2. Add Firecrawl, OpenAI, and Pinecone credentials.
3. Create a Pinecone index.
4. Execute the ingestion workflow.
5. Wait until all pages are indexed.

### Step 2 – Run the chatbot
1. Deploy the chatbot workflow.
2. Set the same Pinecone index and namespace.
3. Copy the chat webhook URL.
4. Connect it to a website, chat widget, or WhatsApp bot.
5. Start chatting with your AI assistant.

## Requirements
- Firecrawl account
- OpenAI API key
- Pinecone account and index
- Public website to crawl
- Optional: frontend chat interface

## Good to know
- The chatbot never answers from memory for business data; all company knowledge comes from Pinecone.
- If Pinecone returns nothing, the bot fails safely.
- HTML cleaning is basic and can be replaced with Mozilla Readability, Jina Reader, or Unstructured.
- Chunk size and overlap affect retrieval quality.
- Pinecone can be replaced with Qdrant, Weaviate, Supabase Vector, or Chroma.

## Customising this workflow
You can extend this system by:
- Adding PDF or document loaders
- Scheduling ingestion daily or weekly
- Connecting CRM or ticketing systems
- Adding appointment booking tools
- Switching to local or open-source models
- Adding multilingual support
- Storing raw content in a database
- Adding feedback or logging

## What this n8n template demonstrates
- Real-world RAG architecture
- Web crawling pipelines
- Text chunking strategies
- Vector database integration
- AI agent orchestration
- Memory-controlled conversations
- Production-grade AI support systems
- End-to-end AI infrastructure with n8n

## Architecture overview
This template follows a modern AI system design:

Website → Ingestion → Embeddings → Pinecone → Retrieval → OpenAI → User

It separates data preparation (offline), knowledge storage, and runtime inference. This makes the system scalable, maintainable, and safe for production use.

## Need a custom setup?
If you want a similar AI system built for your business (custom data sources, CRM integration, WhatsApp bots, booking systems, dashboards, or private deployments), feel free to reach out at dinakars2003@gmail.com. I help companies design and deploy production-ready AI workflows.
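The "split text into overlapping chunks" ingestion step can be sketched as a few lines of JavaScript. Chunk size and overlap are the two knobs the Good-to-know notes say drive retrieval quality; the default values below are illustrative assumptions, not the template's settings.

```javascript
// Minimal sliding-window chunker: each chunk shares `overlap` characters with
// the previous one so sentences cut at a boundary still appear in full in at
// least one chunk. Defaults (800/100 characters) are illustrative only.
function chunkText(text, chunkSize = 800, overlap = 100) {
  if (overlap >= chunkSize) throw new Error("overlap must be < chunkSize");
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```

Each chunk would then be embedded and upserted to Pinecone with its source URL as metadata.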
by Rahul Joshi
## Description
This workflow analyzes real-time stock market sentiment and intent from public social media discussions and converts those signals into operations-ready actions. It exposes a webhook endpoint where a stock-market-related query can be submitted (for example, a stock, sector, index, or market event). The workflow then scans Twitter/X and Instagram for recent public discussions that indicate buying interest, selling pressure, fear, uncertainty, or emerging opportunities. An AI agent classifies each signal by intent type, sentiment, urgency, and strength. These insights are transformed into a prioritized Asana task for market or research teams and a concise Slack alert for leadership visibility. Built-in validation and error handling ensure reliable execution and fast debugging. This automation removes the need for manual social monitoring while keeping teams informed of emerging market risks and opportunities.

## Deployment Disclaimer
This template is designed for self-hosted n8n installations only. It relies on external MCP tools and custom AI orchestration that are not supported on n8n Cloud.

## What This Workflow Does (Step-by-Step)
1. **Receive Stock Market Query (Webhook Trigger)**: Accepts an external POST request containing a stock market query.
2. **Extract Stock Market Query from Payload**: Normalizes and prepares the query for analysis.
3. **Analyze Social Media for Stock Market Intent (AI)**: Scans public Twitter/X and Instagram posts to detect actionable market-intent signals.
4. **Social Intelligence Data Fetch (MCP Tool)**: Retrieves relevant social data from external intelligence sources.
5. **Transform Market Intent Signals into Ops-Ready Actions (AI)**: Structures insights into priorities, summaries, and recommended actions.
6. **Parse Structured Ops Payload**: Validates and safely parses AI-generated JSON for downstream use.
7. **Create Asana Task for Market Signal Review**: Creates a prioritized task with key signals, context, and recommendations.
8. **Send Market Risk & Sentiment Alert to Slack**: Delivers an executive-friendly alert summarizing risks or opportunities.
9. **Error Handler – Slack Alert**: Posts detailed error information if any workflow step fails.

## Prerequisites
- Self-hosted n8n instance
- OpenAI and Azure OpenAI API credentials
- MCP (Xpoz) social intelligence credentials
- Asana OAuth credentials
- Slack API credentials

## Setup Instructions
1. Deploy the workflow on a self-hosted n8n instance
2. Configure the webhook endpoint and test with a sample query
3. Connect OpenAI, Azure OpenAI, MCP, Asana, and Slack credentials
4. Set the correct Asana workspace and project ID
5. Select the Slack channel for alerts

## Customization Tips
- Adjust intent and sentiment classification rules in the AI prompts
- Modify the task-priority logic or due-date rules
- Extend outputs to email reports or dashboards if required

## Key Benefits
- Real-time market sentiment detection from social media
- Converts unstructured signals into actionable tasks
- Provides leadership-ready Slack alerts
- Eliminates manual market monitoring
- Built-in validation and error visibility

## Perfect For
- Market research teams
- Investment and strategy teams
- Operations and risk teams
- Founders and analysts tracking market sentiment
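The "Parse Structured Ops Payload" step, which safely parses AI-generated JSON, can be sketched as a small guard function. This is an illustrative sketch; the function name and the markdown-fence handling are assumptions about how such a node is typically written.

```javascript
// Sketch of safe parsing for AI-generated JSON: models sometimes wrap output
// in markdown fences or prose, so strip a ```json fence if present and fail
// safely (returning an error object) instead of crashing the workflow.
function parseOpsPayload(raw) {
  const fenced = raw.match(/```(?:json)?\s*([\s\S]*?)```/);
  const candidate = (fenced ? fenced[1] : raw).trim();
  try {
    return { ok: true, data: JSON.parse(candidate) };
  } catch (err) {
    // An { ok: false } result would route to the error-handler Slack alert
    return { ok: false, error: err.message, raw };
  }
}
```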
by Henry
## Who is this for?
This workflow is ideal for Gmail users and teams who receive a high volume of emails and want to streamline inbox management. It suits professionals seeking to organize messages automatically, including sales teams, project managers, support staff, and anyone who benefits from automated email categorization.

## What problem is this workflow solving? / Use case
Manually labeling emails is time-consuming and can lead to inconsistent organization. This automated n8n workflow uses Gmail and OpenAI to analyze incoming messages and apply the appropriate labels, such as "Quotation", "Inquiry", "Project progress", and "Notification", based on content, improving productivity and ensuring important messages are prioritized.

## What this workflow does
The workflow retrieves new Gmail messages, analyzes their content with OpenAI, and automatically assigns pre-defined Gmail labels that match the email's intent. This ensures emails are sorted efficiently using AI-powered content analysis and Gmail's labeling system.

## Setup
1. Ensure the Gmail labels (e.g., "Quotation", "Inquiry") are created in your Gmail account.
2. Connect your Gmail and OpenAI accounts as credentials in n8n.
3. Import the workflow into your n8n instance and update the node configurations to match your Gmail label names.

## How to customize this workflow to your needs
- Edit or add Gmail labels both in your Gmail account and within the workflow logic.
- Adjust the prompt or parameters sent to OpenAI to better match your categorization style.
- Expand or refine the list of label categories to fit your team's or business's requirements.
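The step that maps the model's answer back onto real Gmail labels can be sketched as a small guard function, so the workflow never tries to apply a label that doesn't exist. The label names come from the examples above; the function name and the fallback label are assumptions.

```javascript
// Sketch of post-classification label resolution: constrain the model's free-
// text answer to the label set that actually exists in Gmail, falling back to
// a default ("Notification" here, chosen arbitrarily) on no match.
const ALLOWED_LABELS = ["Quotation", "Inquiry", "Project progress", "Notification"];

function resolveLabel(modelAnswer, fallback = "Notification") {
  const cleaned = modelAnswer.trim().replace(/^["']|["']$/g, "");
  const hit = ALLOWED_LABELS.find(
    (l) => l.toLowerCase() === cleaned.toLowerCase()
  );
  return hit || fallback;
}
```

In the workflow, the resolved name would then be matched to a Gmail label ID before the "add label" operation runs.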