by David Ashby
Complete MCP server exposing all Mandrill Tool operations to AI agents. Zero configuration needed - both operations pre-built. ⚡ Quick Setup Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community. Import this workflow into your n8n instance Activate the workflow to start your MCP server Copy the webhook URL from the MCP trigger node Connect AI agents using the MCP URL 🔧 How it Works • MCP Trigger: Serves as your server endpoint for AI agent requests • Tool Nodes: Pre-configured for every Mandrill operation • AI Expressions: Automatically populate parameters via $fromAI() placeholders • Native Integration: Uses the official n8n Mandrill Tool node with full error handling 📋 Available Operations (2 total) Every possible Mandrill operation is included: 💬 Message (2 operations) • Send a message based on a template • Send a message based on HTML 🤖 AI Integration Parameter Handling: AI agents automatically provide values for: • Resource IDs and identifiers • Search queries and filters • Content and data payloads • Configuration options Response Format: Native Mandrill API responses with full data structure Error Handling: Built-in n8n error management and retry logic 💡 Usage Examples Connect this MCP server to any AI agent or workflow: • Claude Desktop: Add the MCP server URL to your configuration • Custom AI Apps: Use the MCP URL as a tool endpoint • Other n8n Workflows: Call MCP tools from any workflow • API Integration: Make direct HTTP calls to MCP endpoints ✨ Benefits • Complete Coverage: Every Mandrill operation available • Zero Setup: No parameter mapping or configuration needed • AI-Ready: Built-in $fromAI() expressions for all parameters • Production Ready: Native n8n error handling and logging • Extensible: Easily modify or add custom logic > 🆓 Free for community use! Ready to deploy in under 2 minutes.
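For example, connecting Claude Desktop to this server can be sketched as the following configuration entry (the server name and URL are placeholders, and the mcp-remote bridge is one common way to point Claude Desktop at a URL-based MCP server; adapt it to your setup):

```json
{
  "mcpServers": {
    "n8n-mandrill": {
      "command": "npx",
      "args": ["mcp-remote", "https://your-n8n-instance.example.com/mcp/<webhook-path>"]
    }
  }
}
```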
by Angel Menendez
Analyze Emails for Security Insights Who is this for? This workflow is ideal for security teams, IT Ops professionals, and managed service providers (MSPs) responsible for monitoring and validating email traffic. It’s especially useful for organizations that need to identify potential phishing attempts, spam, or compromised accounts by analyzing email headers and IP reputation. What problem is this workflow solving? This workflow helps identify malicious or suspicious emails by verifying email authentication headers (SPF, DKIM, DMARC) and analyzing the reputation of the originating IP address. By automating these checks, it reduces manual analysis time and flags potential threats efficiently. What this workflow does **Email Monitoring:** Polls a specified Microsoft Outlook folder for new emails in real-time. **Header Analysis:** Retrieves and processes email headers to extract critical information such as authentication results and the sender’s IP address. **IP Reputation Check:** Leverages external APIs (IP Quality Score and IP-API) to analyze the originating IP for potential spam or malicious activity. **Authentication Validation:** Validates SPF, DKIM, and DMARC headers, determining if the email passes industry-standard authentication protocols. **Data Aggregation and Reporting:** Combines all analyzed data into a unified format, ready for reporting or integration into downstream systems. **Webhook Integration:** Outputs the findings via a webhook, enabling integration with alerting tools or security information and event management (SIEM) platforms. Setup Connect to Outlook: Configure the Microsoft Outlook trigger node with valid OAuth2 credentials. Specify the email folder to monitor for new messages. API Keys (Optional): Obtain an API key for IP Quality Score (https://ipqualityscore.com). Ensure the IP-API endpoint is accessible. This step is optional, as ipqualityscore.com provides a limited number of free lookups each month. See more details here. Webhook Configuration: Set up a webhook endpoint to receive the output of the workflow. Optional Adjustments: Customize polling intervals in the trigger node. Modify header filters or extend the validation logic as needed. How to customize this workflow to your needs **Add Alerts:** Use the Respond to Webhook node to trigger notifications in Slack, email, or any other communication channel. **Integrate with SIEM:** Forward the workflow output to SIEM tools like Splunk or ELK Stack for further analysis. **Modify Validation Rules:** Update SPF, DKIM, or DMARC logic in the Set nodes to align with your organization’s security policies. **Expand IP Analysis:** Add more APIs or services to enrich IP reputation data, such as VirusTotal or AbuseIPDB. This workflow provides a robust foundation for email security monitoring and can be tailored to fit your organization's unique requirements. With its modular design and integration options, it’s a versatile tool to enhance your cybersecurity operations.
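As an illustration, the aggregated result the webhook emits could look like the sketch below; the field names are hypothetical and depend on how you configure the Set and merge nodes:

```json
{
  "subject": "Invoice overdue - action required",
  "senderIp": "203.0.113.45",
  "authentication": {
    "spf": "pass",
    "dkim": "fail",
    "dmarc": "fail"
  },
  "ipReputation": {
    "fraudScore": 87,
    "isProxy": true,
    "country": "US"
  },
  "verdict": "suspicious"
}
```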
by Ferenc Erb
Overview Transform your Bitrix24 Open Line channels with an intelligent chatbot that leverages Retrieval-Augmented Generation (RAG) technology to provide accurate, document-based responses to customer inquiries in real-time. Use Case This workflow is designed for organizations that want to enhance their customer support capabilities in Bitrix24 by providing automated, knowledge-based responses to customer inquiries. It's particularly useful for: Customer service teams handling repetitive questions Support departments with extensive documentation Sales teams needing quick access to product information Organizations looking to provide 24/7 customer support What This Workflow Does Smart Document Processing Automatically processes uploaded PDF documents Splits documents into manageable chunks Generates vector embeddings for semantic understanding Indexes content for efficient retrieval AI-Powered Responses Utilizes Google Gemini AI to generate natural language responses Constructs answers based on relevant document content Maintains conversation context for coherent interactions Provides fallback responses when information is not available Vector Database Integration Stores document embeddings in Qdrant vector database Enables semantic search beyond simple keyword matching Retrieves the most relevant information for each query Maintains a persistent knowledge base that grows over time Webhook Handler Processes incoming messages from Bitrix24 Open Line channels Handles authentication and security validation Routes different types of events to appropriate handlers Manages session and conversation state Event Routing Intelligently routes different event types: ONIMBOTMESSAGEADD: Processes new user messages ONIMBOTJOINCHAT: Handles the bot joining a conversation ONAPPINSTALL: Manages application installation ONIMBOTDELETE: Handles bot deletion Document Management Organizes processed documents in designated folders Tracks document processing status Moves indexed documents to appropriate locations Maintains document metadata for reference Interactive Menu Provides menu-based options for common user requests Customizable menu items and responses Easy navigation for users seeking specific information Fallback to an operator when needed Technical Architecture Components Webhook Handler: Receives and validates incoming requests from Bitrix24 Credential Manager: Securely manages authentication tokens and API keys Event Router: Directs events to appropriate processing functions Document Processor: Handles document loading, chunking, and embedding Vector Store: Qdrant database for storing and retrieving document embeddings Retrieval System: Searches for relevant document chunks based on user queries LLM Integration: Google Gemini model for generating natural language responses Response Manager: Formats and sends responses back to Bitrix24 Integration Points **Bitrix24 API**: For bot registration, message handling, and user interaction **Ollama API**: For generating document embeddings **Qdrant API**: For vector storage and retrieval **Google Gemini API**: For AI-powered response generation Setup Instructions Prerequisites Active Bitrix24 account with Open Line channels enabled Access to an n8n instance Ollama API credentials Qdrant vector database access Google Gemini API key Configuration Steps Initial Setup Import the workflow into your n8n instance Configure credentials for all services Set up webhook endpoints Bitrix24 Configuration Create a new Bitrix24 application Configure webhook URLs Set appropriate permissions Install the application to your Bitrix24 account Document Storage Create a designated folder in Bitrix24 for knowledge base documents Configure folder paths in the workflow settings Upload initial documents to be processed Bot Configuration Customize the bot name, avatar, and description Configure welcome messages and menu options Set up fallback responses Testing Verify successful installation Test the document processing pipeline Send test queries to evaluate response quality
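To make the event routing concrete, here is a simplified view of the fields a Bitrix24 bot event delivers to the webhook handler (Bitrix24 actually posts these as form-encoded data following its event/data/auth convention; the values below are illustrative only):

```json
{
  "event": "ONIMBOTMESSAGEADD",
  "data": {
    "BOT": { "BOT_ID": "42" },
    "PARAMS": {
      "DIALOG_ID": "chat123",
      "FROM_USER_ID": "7",
      "MESSAGE": "What is your refund policy?"
    }
  },
  "auth": { "application_token": "<token>" }
}
```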
by Sean Lon
Personal Portfolio CV RAG Chatbot - with Conversation Store and Email Summary Target Audience This template is perfect for: Individuals looking to create a working, professional, and interactive personal portfolio chatbot. Developers interested in integrating RAG chatbot functionality with conversation storage. 1. Description Create a stunning personal portfolio CV with integrated RAG chatbot capabilities, including conversation storage and daily email summaries. 2. Features: Training: Set up the ingestion stage. Upload your CV to Google Drive and let the Drive trigger read your resume and convert it into your vector database (for RAG purposes). Modify any parts as needed. Chat & Track: Use any frontend/backend interface to call the chat API and chat history API. Reporting Daily Chat Conversations: Receive daily automatic summaries of chat conversations. Data is stored via NocoDB. 3. Setup Guide: Step-by-Step Instructions: Ensure all credentials are ready. Follow the notes provided. Ingestion: Upload your CV to Google Drive. The Drive trigger updates the RAG index in your vector database. You can change the folder name, files, and index name of the vector database accordingly. Chat: Use any frontend/backend interface to call the chat API (refer to the notes for details). [optional] Use any frontend/backend interface to call the update chat history API (refer to the notes for details). 4. Tracking Chat: Get daily automatic summaries of chat conversations. Format the email conversation report as you like. You are ready to go!
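For instance, a frontend could call the chat webhook with a request body along these lines (the field names are placeholders; match them to the webhook and chat trigger configured in your instance):

```json
{
  "sessionId": "visitor-8f2c",
  "message": "What experience does this candidate have with machine learning?"
}
```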
by Luka Zivkovic
Complete Telegram Trivia Bot with AI Question Generation Build a fully-featured Telegram trivia bot that automatically generates fresh questions daily using OpenAI and tracks user progress with NocoDB. Perfect for communities, education, or entertainment! ✨ Key Features 🤖 AI Question Generation: Automatically creates 40+ new trivia questions daily across 8 categories 📊 Smart User Management: Tracks scores, prevents question repeats, maintains leaderboards 🎮 Game Mechanics: Star-based difficulty scoring, answer history, progress tracking 🏆 Competitive Elements: Real-time leaderboards with emoji rankings and user positioning 🛡️ Robust Architecture: Error handling, state management, and data validation 🚀 Perfect For **Community Engagement**: Keep Telegram groups active with daily trivia challenges **Educational Content**: Create learning experiences with categorized questions **Business Applications**: Employee training, customer engagement, lead generation **Personal Projects**: Learn n8n automation while building something fun 📱 Supported Commands /start - Welcome new users with setup instructions /question - Get personalized trivia questions (never repeats correctly answered ones) /score - View current points and statistics /leaderboard - See top 10 players with rankings /stats - Detailed accuracy and performance metrics /help - Complete command reference 🔧 How It Works User Journey: User sends the /question command to the bot System checks their answer history to avoid repeats Displays a fresh question with multiple-choice options Processes the answer, updates the score based on difficulty stars Saves complete answer history for future filtering AI Content Pipeline: Daily scheduler triggers question generation OpenAI creates 5 questions per category (8 categories total) Questions are automatically saved to NocoDB with difficulty ratings Content includes explanations and proper formatting 🛠️ Set Up Steps Prerequisites: n8n instance (cloud or self-hosted) NocoDB database (free tier works) OpenAI API key (not required if you want to add questions yourself) Telegram bot token Database Setup: Create 3 NocoDB tables with the exact field specifications provided in the sticky notes. The workflow includes complete schema documentation. Configuration Time: ~15 minutes for database setup + API keys Detailed Setup Instructions: All setup steps, database schemas, and configuration details are documented in the workflow's sticky notes for easy implementation. 📈 Advanced Features **Question History Tracking**: Users never see correctly answered questions again **Difficulty-Based Scoring**: 1-5 star rating system with corresponding points **Category Management**: 8 different trivia categories for variety **State Management**: Proper game flow with idle/waiting states **Error Handling**: Graceful fallbacks for all edge cases **Scalable Architecture**: Supports unlimited concurrent users 🎯 Business Applications **Lead Generation**: Capture user data through engaging trivia **Employee Training**: Create custom questions for onboarding **Customer Engagement**: Keep users active in your Telegram community **Educational Tools**: Subject-specific learning with progress tracking **Event Activation**: Conferences, workshops, or team building 💡 Customization Options Modify question categories for your niche Adjust scoring systems and difficulty levels Add custom commands and features Integrate with other platforms or APIs Create specialized question sets 🔗 Get Started Ready to build your own AI-powered trivia bot? Start with n8n and follow the comprehensive setup guide included in this workflow template. Next Steps: Import this workflow template Follow the database setup instructions in the sticky notes Configure your API credentials Test with sample questions Launch your trivia bot! Turn your friend group into trivia champions with AI-generated questions that spark friendly competition!
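As a sketch, one row in the questions table might hold fields like these (names and types here are illustrative; use the exact schema documented in the workflow's sticky notes):

```json
{
  "question": "Which planet has the shortest day in the solar system?",
  "options": ["Mercury", "Jupiter", "Earth", "Neptune"],
  "correct_answer": "Jupiter",
  "category": "Science",
  "difficulty_stars": 3,
  "explanation": "Jupiter completes one rotation in just under 10 hours."
}
```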
by lin@davoy.tech
This workflow template, "Personal Assistant to Note Messages and Extract Namecard Information" is designed to streamline the processing of incoming messages on the LINE messaging platform. It integrates with powerful tools like Microsoft Teams , Microsoft To Do , OneDrive , and OpenRouter.ai to handle tasks such as saving notes, extracting namecard information, and organizing images. Whether you’re managing personal productivity or automating workflows for teams, this template offers a versatile and customizable solution. By leveraging this workflow, you can automate repetitive tasks, improve collaboration, and enhance efficiency in handling LINE messages. Who Is This Template For? This template is ideal for: Professionals: Who want to save important messages, extract data from namecards, or organize images automatically. Teams: Looking to integrate LINE messages into tools like Microsoft Teams and Microsoft To Do for better collaboration. Developers: Seeking to build intelligent workflows that process text, images, and other inputs from LINE. Business Owners: Who need to manage customer interactions, follow-ups, and task tracking efficiently. What Problem Does This Workflow Solve? Managing incoming messages on LINE can be time-consuming, especially when dealing with diverse input types like text, images, and namecards. This workflow solves that problem by: Automatically identifying and routing different message types (text, images, namecards) to appropriate actions. Extracting structured data from namecards and saving it for follow-up tasks. Uploading images to OneDrive and saving text messages to Microsoft Teams or Microsoft To Do for easy access. Sending real-time feedback to users via LINE to confirm that their messages have been processed. What This Workflow Does Receive Messages via LINE Webhook: The workflow is triggered whenever a user sends a message (text, image, or other types) to the LINE bot. Display Loading Animation: A loading animation is displayed to reassure the user that their request is being processed. Route Input Types: The workflow uses a Switch node to determine the type of input: Text Starting with "T": Adds the message as a task in Microsoft To Do. Plain Text: Saves the message in Microsoft Teams under a designated channel (e.g., "Notes"). Images: Identifies whether the image is a namecard, handwritten note, or other content, then processes accordingly. Unsupported formats trigger a polite response indicating the limitation. Process Namecards: *Images * If the image is identified as a namecard, the workflow extracts structured data (e.g., name, email, phone number) using OpenRouter.ai and saves it to Microsoft To Do for follow-up tasks. Save Images to OneDrive: Images are uploaded to OneDrive, renamed based on their unique message ID, and linked in Microsoft Teams for reference. Send Feedback via LINE: The workflow replies to the user with confirmation messages, such as "[ Task Created ]" or "[ Message Saved ]." Setup Guide Pre-Requisites Access to the LINE Developers Console to configure your webhook and bot. Accounts for Microsoft Teams , Microsoft To Do, and OneDrive with API access. An OpenRouter.ai account with credentials to access models like GPT-4o. Basic knowledge of APIs, webhooks, and JSON formatting. Step-by-Step Setup 1) Configure the LINE Webhook: Go to the LINE Developers Console and set up a webhook to receive incoming messages. Copy the Webhook URL from the Line Webhook node and paste it into the LINE Console. 
Remove any "test" configurations when moving to production. 2) Set Up Microsoft Integrations: Connect your Microsoft Teams, Microsoft To Do, and OneDrive accounts to the respective nodes in the workflow. 3) Set Up OpenRouter.ai: Create an account on OpenRouter.ai and obtain your API credentials. Connect your credentials to the OpenRouter nodes in the workflow. Test the Workflow: Simulate sending text, images, and namecards to the LINE bot to verify that all actions are processed correctly. How to Customize This Workflow to Your Needs Add More Actions: Extend the workflow to handle additional input types or integrate with other tools. Enhance Image Processing: Use advanced OCR tools to improve text extraction from complex images. Customize Feedback Messages: Modify the reply format to include emojis, links, or other formatting options. Expand Use Cases: Adapt the workflow for specific industries, such as sales or customer support, by tailoring the actions to relevant tasks. Why Use This Template? Versatile Automation: Handles multiple input types (text, images, namecards) with ease. Seamless Integration: Connects LINE messages to popular productivity tools like Microsoft Teams and To Do. Structured Data Extraction: Extracts and organizes data from namecards, saving time and effort. Real-Time Feedback: Keeps users informed about the status of their requests with instant notifications.
by David Ashby
Complete MCP server exposing 1 Buy Marketing API operation to AI agents. ⚡ Quick Setup Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community. Import this workflow into your n8n instance Credentials Add Buy Marketing API credentials Activate the workflow to start your MCP server Copy the webhook URL from the MCP trigger node Connect AI agents using the MCP URL 🔧 How it Works This workflow converts the Buy Marketing API into an MCP-compatible interface for AI agents. • MCP Trigger: Serves as your server endpoint for AI agent requests • HTTP Request Nodes: Handle API calls to https://api.ebay.com/buy/marketing/v1_beta • AI Expressions: Automatically populate parameters via $fromAI() placeholders • Native Integration: Returns responses directly to the AI agent 📋 Available Operations (1 total) 🔧 Merchandised_Product (1 endpoint) • GET /merchandised_product: Fetch Merchandised Products 🤖 AI Integration Parameter Handling: AI agents automatically provide values for: • Path parameters and identifiers • Query parameters and filters • Request body data • Headers and authentication Response Format: Native Buy Marketing API responses with full data structure Error Handling: Built-in n8n HTTP request error management 💡 Usage Examples Connect this MCP server to any AI agent or workflow: • Claude Desktop: Add the MCP server URL to your configuration • Cursor: Add the MCP server SSE URL to your configuration • Custom AI Apps: Use the MCP URL as a tool endpoint • API Integration: Make direct HTTP calls to MCP endpoints ✨ Benefits • Zero Setup: No parameter mapping or configuration needed • AI-Ready: Built-in $fromAI() expressions for all parameters • Production Ready: Native n8n HTTP request handling and logging • Extensible: Easily modify or add custom logic > 🆓 Free for community use! Ready to deploy in under 2 minutes.
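To illustrate the $fromAI() placeholders, the HTTP Request node's query parameters for GET /merchandised_product might be configured like this sketch (category_id and metric_name follow eBay's documented parameters for this endpoint; the surrounding node JSON is illustrative):

```json
{
  "qs": {
    "category_id": "={{ $fromAI('category_id', 'eBay category ID to search', 'string') }}",
    "metric_name": "BEST_SELLING",
    "limit": "={{ $fromAI('limit', 'Maximum number of products to return', 'number') }}"
  }
}
```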
by David Ashby
Complete MCP server exposing 1 Recommendation API operation to AI agents. ⚡ Quick Setup Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community. Import this workflow into your n8n instance Credentials Add Recommendation API credentials Activate the workflow to start your MCP server Copy the webhook URL from the MCP trigger node Connect AI agents using the MCP URL 🔧 How it Works This workflow converts the Recommendation API into an MCP-compatible interface for AI agents. • MCP Trigger: Serves as your server endpoint for AI agent requests • HTTP Request Nodes: Handle API calls to https://api.ebay.com{basePath} • AI Expressions: Automatically populate parameters via $fromAI() placeholders • Native Integration: Returns responses directly to the AI agent 📋 Available Operations (1 total) 🔧 Find (1 endpoint) • POST /find: Get Promoted Listings Recommendations 🤖 AI Integration Parameter Handling: AI agents automatically provide values for: • Path parameters and identifiers • Query parameters and filters • Request body data • Headers and authentication Response Format: Native Recommendation API responses with full data structure Error Handling: Built-in n8n HTTP request error management 💡 Usage Examples Connect this MCP server to any AI agent or workflow: • Claude Desktop: Add the MCP server URL to your configuration • Cursor: Add the MCP server SSE URL to your configuration • Custom AI Apps: Use the MCP URL as a tool endpoint • API Integration: Make direct HTTP calls to MCP endpoints ✨ Benefits • Zero Setup: No parameter mapping or configuration needed • AI-Ready: Built-in $fromAI() expressions for all parameters • Production Ready: Native n8n HTTP request handling and logging • Extensible: Easily modify or add custom logic > 🆓 Free for community use! Ready to deploy in under 2 minutes.
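As a rough illustration, the POST /find request could identify the seller's listings to get promoted-listings recommendations for; the body shape below is an assumption and should be verified against eBay's official Recommendation API reference:

```json
{
  "listingIds": ["254347544278", "254347544279"]
}
```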
by Fabrizio Terzi
AI-Driven Handbook Generator with Multi-Agent Orchestration (Pyragogy AI Village) This n8n workflow is a modular, multi-agent AI orchestration system designed for the collaborative generation of Markdown-based handbooks. Inspired by peer learning and open publishing workflows, it simulates a content pipeline where specialized AI agents act in defined roles, enabling true AI–human co-creation and iterative refinement. This project is a core component of Pyragogy, an open framework dedicated to ethical cognitive co-creation, peer AI–human learning, and human-in-the-loop automation for open knowledge systems. It implements the master orchestration architecture for the Pyragogy AI Village, managing a complex sequence of AI agents to process input, perform review, synthesis, and archiving, with a crucial human oversight step for final approval. How It Works: A Deep Dive into the Workflow's Architecture The workflow orchestrates a sophisticated content generation and review process, ideal for creating AI-driven knowledge bases or handbooks with human oversight. **Webhook Trigger & Input:** The process begins when the workflow receives a JSON input via a **Webhook** (specifically at /webhook/pyragogy/process). This input typically includes details like the handbook's title, initial text, and relevant tags. **Database Verification:** It first verifies the connection to a **PostgreSQL database** to ensure data persistence. **Meta-Orchestrator:** A powerful **Meta-Orchestrator** (powered by gpt-4o from OpenAI) analyzes the initial request. Its role is to dynamically determine and activate the optimal sequence of specialized AI agents required to fulfill the input, ensuring tasks are dynamically routed and assigned based on each agent’s responsibility. **Agent Execution & Iteration:** Each activated agent executes its step using OpenAI or custom endpoints. This involves: Content Generation: Agents like the Summarizer and the Synthesizer generate new content or refine existing text. Peer Review Board: A crucial aspect is the Peer Review Board, comprised of AI agents like the Peer Reviewer, the Sensemaking Agent, and the Prompt Engineer. This board evaluates the output for quality, coherence, and accuracy. Reprocessing & Redrafting: If the review agents flag a major_issue, they trigger redrafting loops by generating specific feedback for the Synthesizer. This mechanism ensures iterative refinement until the content meets the required standards. **Human-in-the-Loop (HITL) Review:** For final approval, particularly for the Archivist agent's output, a **human review process** is initiated. An email is sent to a human reviewer, prompting them to approve, reject, or comment via a "Wait for Webhook" node. This ensures **human oversight** and quality control. **Content Persistence & Versioning:** If the content is approved by the human reviewer: It's saved to a PostgreSQL database (specifically to the handbook_entries and agent_contributions tables). Optionally, the content can be committed to a GitHub repository for version control, provided the necessary environment variables are configured. **Notifications:** The final output and the sequence of executed agents can be sent as a notification to **Slack**, if configured. Observe the dynamic loop: orchestrate → assign → generate → review (AI/human) → store Included AI Agents This workflow leverages a suite of specialized AI agents, each with a distinct role in the content pipeline: **Meta-Orchestrator:** Determines the optimal sequence of agents to execute based on the input. **Summarizer Agent:** Summarizes text into key points (e.g., 3 key points). **Synthesizer Agent:** Synthesizes new text and effectively incorporates reprocessing feedback from review agents. **Peer Reviewer Agent:** Reviews generated text, highlighting strengths, weaknesses, and suggestions, and indicates major_issue flags. **Sensemaking Agent:** Analyzes input within existing context, identifying patterns, gaps, and areas for improvement. **Prompt Engineer Agent:** Refines or generates prompts for subsequent agents, optimizing their output. **Onboarding/Explainer Agent:** Provides explanations of the process or offers guidance to users. **Archivist Agent:** Prepares content for the handbook, manages the human review process, and handles archiving to the database and GitHub. Setup Steps & Prerequisites To get this powerful workflow up and running, follow these steps: Import the Workflow: Import the pyragogy_master_workflow.json (or generate-collaborative-handbooks-with-gpt4o-multi-agent-orchestration-human-review.json) into your n8n instance. Connect Credentials: Postgres: Set up a Postgres Pyragogy DB credential (ID: pyragogy-postgres). OpenAI: Configure an OpenAI Pyragogy credential (ID: pyragogy-openai) for all OpenAI agents. GPT-4o is highly suggested for optimal performance. Email Send: Set up a configured email credential (e.g., for sending human review requests). Define Environment Variables: Define essential environment variables (an .env.template is included in the repository). These include: API base for OpenAI. Database connection details. (Optional) GitHub: For content persistence and versioning, configure GITHUB_ACCESS_TOKEN, GITHUB_REPOSITORY_OWNER, and GITHUB_REPOSITORY_NAME. (Optional) Slack: For notifications, configure SLACK_WEBHOOK_URL. Send a sample payload to your webhook URL (/webhook/pyragogy/process):

```json
{
  "title": "History of Peer Learning",
  "text": "Peer learning is an educational approach where students learn from and with each other...",
  "tags": ["education", "pedagogy"],
  "requireHitl": true
}
```

Ideal For This workflow is perfectly suited for: Educators and researchers exploring AI-assisted publishing and co-authoring with AI. Knowledge teams looking to automate content pipelines for internal or external documentation. Anyone building collaborative Markdown-driven tools or AI-powered knowledge bases. Documentation & Contributions: An Open Source and Collaborative Project This workflow is an open-source project and community-driven. Its development is transparent and open to everyone. We warmly invite you to: **Review it:** Contribute your analysis, identify potential improvements, or report issues. **Remix it:** Adapt it to your specific needs, integrate new features, or modify it for a different use case. **Improve it:** Propose and implement changes that enhance its efficiency, robustness, or capabilities. **Share it back:** Return your contributions to the community, either through pull requests or by sharing your implementations. Every contribution is welcome and valued! All relevant information for verification, improvement, and collaboration can be found in the official repository: 🔗 GitHub – pyragogy-handbook-n8n-workflow
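As an illustration of the HITL step, the reviewer's decision is posted back to the "Wait for Webhook" node's resume URL; a hypothetical approval payload might look like this (the field names are placeholders, not a fixed contract of the workflow):

```json
{
  "decision": "approve",
  "reviewer": "editor@example.org",
  "comment": "Accurate synthesis; minor style edits applied."
}
```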
by Jimleuk
This n8n template builds a meeting assistant that compiles timely reminders of upcoming meetings, filled with email history and recent LinkedIn activity of the other people on the invite. This is then discreetly sent via WhatsApp, ensuring the user is always prepared, informed, and ready to impress! How it works A scheduled trigger fires hourly to check for upcoming personal meetings. When found, the invite is analysed by an AI agent to pull email and LinkedIn details of the other invitees. 2 subworkflows are then triggered for each invitee to (1) search for the last email correspondence with them and (2) scrape their LinkedIn profile + recent activity for social updates. Using both available sources, another AI agent is used to summarise this information and generate a short meeting prep message for the user. The notification is finally sent to the user's WhatsApp, allowing them ample time to review. How to use There are a lot of moving parts in this template, so in its current form it's best used for personal rather than team calendars. The LinkedIn scraping method used in this workflow requires you to paste in your LinkedIn cookies from your browser, which essentially lets n8n impersonate you. You can retrieve these from the dev console or ask someone technical for help! Note: It may be wise to switch to other LinkedIn scraping approaches which do not impersonate your own account for production. Requirements OpenAI for LLM Gmail for Email Google Calendar for upcoming events WhatsApp Business account for notifications Customising this workflow Try adding information sources which are relevant to you and your invitees, such as company search or other social media sites. Create an on-demand version which doesn't rely on the scheduled trigger. Sometimes you want to prepare for meetings hours or days in advance, where this could help immensely.
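For the cookie-based scraping step, the HTTP Request node's headers might be sketched as below (li_at is LinkedIn's usual session cookie, but treat the whole shape as illustrative, and never commit real cookie values to a shared workflow):

```json
{
  "headers": {
    "Cookie": "li_at=<your-li_at-cookie-value>",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
  }
}
```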
by Alexandra Spalato
YouTube Content Repurposing Automation Who's it for This workflow is for content creators, marketers, agencies, coaches, and businesses who want to maximize their YouTube content ROI by automatically generating multiple content assets from single videos. It's especially useful for professionals who want to: Repurpose YouTube videos into blogs, social posts, newsletters, and tutorials without manual effort Scale their content production across multiple channels and platforms Create consistent, high-quality content derivatives while saving time and resources Build automated content systems that generate multiple revenue streams Maintain an active presence across social media, email, and blog platforms simultaneously What problem is this workflow solving? Content creators face significant challenges when trying to maximize their video content: Time-intensive manual repurposing: Converting one YouTube video into multiple content formats traditionally requires hours of manual writing, editing, and formatting across different platforms. Inconsistent content quality: Manual repurposing often leads to varying quality levels and missed opportunities to optimize content for specific platforms. High costs for content services: Hiring ghostwriters or content agencies to repurpose videos can cost thousands of dollars monthly. Scaling bottlenecks: Manual processes prevent creators from efficiently scaling their content across multiple channels and formats. This workflow solves these problems by automatically extracting YouTube video transcripts, using AI to generate multiple high-quality content formats (tutorials, blog posts, social media content, newsletters), and organizing everything in Airtable for easy management and distribution. How it works Automated Video Processing Starts with a manual trigger and retrieves YouTube URLs from your Airtable configuration, processing only videos marked as "selected" while filtering out those marked for deletion. Intelligent Transcript Extraction Uses the Scrape Creators API to extract video transcripts, automatically cleaning and formatting the text for optimal AI processing and content generation. Multi-Format Content Generation Leverages OpenRouter models, so you can easily test different AI models and choose the one that delivers the best results for your needs: Step-by-step tutorials with code snippets and technical details YouTube scripts with hooks, titles, and conclusions Blog posts optimized for lead generation Structured summaries with key takeaways LinkedIn posts with engagement triggers Newsletter content for email marketing Twitter/X posts for social media Smart Content Filtering Processes only the content types you've selected in Airtable, ensuring efficient resource usage and faster execution times. Automated Content Organization Matches and combines all generated content pieces by URL, then updates your Airtable with complete, ready-to-use content assets organized by type and source video.
How to set up Required credentials **OpenRouter API key** **Airtable Personal Access Token** **Scrape Creators API Key** - For YouTube transcript extraction and processing Airtable base setup Create an Airtable base with one main table: Videos Table: **title** (Single line text): Video title for reference **url** (URL): YouTube video URL to process **Status** (Single select): Options: "selected", "delete", "processed" **output** (Multiple select): Content types to generate summary tutorial blog-post linkedin newsletter tweeter youtube **summary** (Long text): Generated video summary **tutorial** (Long text): Generated step-by-step tutorial **key_take_aways** (Long text): Extracted key insights **blog_post** (Long text): Generated blog post content **linkedin** (Long text): LinkedIn post content **newsletter** (Long text): Email newsletter content **tweeter** (Long text): Twitter/X post content **youtube_titles** (Long text): YouTube video title suggestions **youtube_hook** (Long text): Video opening hooks **youtube_steps** (Long text): Video step breakdowns **youtube_conclusion** (Long text): Video ending/CTAs API Configuration Scrape Creators Setup: Sign up for the Scrape Creators API Obtain your API key from the dashboard Configure the HTTP Request node with your credentials Set the endpoint to: https://api.scrapecreators.com/v1/youtube/video/transcript OpenRouter Setup: Create an OpenRouter account and generate an API key Workflow Configuration Import the workflow JSON into your n8n instance Update all credential references with your API keys Configure the Airtable nodes with your base and table IDs Test the workflow with a single video URL first Requirements **n8n instance** (self-hosted or cloud) **Active API subscriptions** for OpenRouter (or the LLM of your choice), Airtable, and Scrape Creators **YouTube video URLs** - Must be publicly accessible videos with available transcripts **Airtable account** - Free tier sufficient for most use cases How to customize the workflow Modify content generation prompts Edit the LLM Chain nodes to customize content style and format: **Tutorial node**: Adjust technical depth and formatting preferences **Blog post node**: Modify tone, length, and CTA strategies **LinkedIn node**: Customize engagement hooks and professional tone **Newsletter node**: Tailor subject lines and email marketing approach Adjust AI model selection Update the OpenRouter Chat Model to use different models Add new content formats Create additional LLM Chain nodes for new content types: Instagram captions TikTok scripts Podcast descriptions Course outlines
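For the transcript step, a sketch of the HTTP request might look like this (the x-api-key header and url query parameter are assumptions based on typical Scrape Creators usage; confirm the exact names in their dashboard docs):

```json
{
  "method": "GET",
  "url": "https://api.scrapecreators.com/v1/youtube/video/transcript",
  "headers": { "x-api-key": "<your-scrape-creators-key>" },
  "qs": { "url": "https://www.youtube.com/watch?v=<video-id>" }
}
```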
by Custom Workflows AI
Introduction The "Automatic Weekly Digital PR Stories Suggestions" workflow is a sophisticated automated system designed to identify trending news stories on Reddit, analyze public sentiment through comment analysis, extract key information from source articles, and generate strategic angles for potential digital PR campaigns. This workflow leverages the power of social media trends, natural language processing, and AI-driven analysis to deliver curated, sentiment-analyzed news opportunities for PR professionals. Operating on a weekly schedule, the workflow searches Reddit for posts related to specified topics, filters them based on engagement metrics, and performs a deep analysis of both the content and public reaction. It then generates comprehensive reports that include story opportunities, audience insights, and strategic recommendations. These reports are automatically compiled, stored in Google Drive, and shared with team members via Mattermost for immediate collaboration. This workflow solves the time-consuming process of manually monitoring social media for trending stories, analyzing public sentiment, and identifying PR opportunities. By automating these tasks, PR professionals can focus on strategy development and execution rather than spending hours on research and analysis. Who is this for? This workflow is designed for digital PR professionals, content marketers, communications teams, and media relations specialists who need to stay on top of trending stories and public sentiment to develop timely and effective PR campaigns. It's particularly valuable for: PR agencies managing multiple clients across different industries In-house PR teams needing to identify media opportunities quickly Content marketers looking for trending topics to create timely content Communications professionals monitoring public perception of industry news Users should have basic familiarity with n8n workflows and the PR strategy development process. While technical knowledge of the integrated APIs is not required to use the workflow, some understanding of Reddit, sentiment analysis, and PR campaign development would be beneficial for interpreting and acting on the generated reports. What problem is this workflow solving? Digital PR professionals face several challenges that this workflow addresses: Information Overload: Manually monitoring social media platforms for trending stories is time-consuming and often results in missed opportunities. Sentiment Analysis Complexity: Understanding public perception of news stories requires reading through hundreds of comments and identifying patterns, which is labor-intensive and subjective. Content Extraction: Visiting multiple news sources to read and analyze articles takes significant time. Strategic Angle Development: Identifying unique PR angles that leverage trending stories and public sentiment requires synthesizing large amounts of information. Team Collaboration: Sharing findings and insights with team members in a structured format can be cumbersome. By automating these processes, the workflow enables PR professionals to quickly identify trending stories with PR potential, understand public sentiment, and develop strategic angles based on comprehensive analysis, all while maintaining a structured approach to team collaboration. 
What this workflow does Overview The workflow automatically identifies trending posts on Reddit related to specified topics, analyzes both the content of linked articles and public sentiment from comments, and generates comprehensive PR strategy reports. These reports include story opportunities, audience insights, and strategic recommendations based on the analysis. The final reports are compiled, stored in Google Drive, and shared with team members via Mattermost. Process Topic Selection and Reddit Search: The workflow starts with a list of topics specified in the "Set Data" node It searches Reddit for posts related to these topics Posts are filtered based on upvotes and other criteria to focus on trending content Comment Analysis: For each post, the workflow retrieves comments It extracts the top 30 comments based on score Using Claude AI, it analyzes the comments to understand: Overall sentiment Dominant narratives Audience insights PR implications Content Analysis: The workflow extracts the content of the linked article using Jina AI It analyzes the content to identify: Core story elements Technical aspects Narrative opportunities Viral elements PR Strategy Development: Based on the combined analysis of comments and content, the workflow generates: First-mover story opportunities Trend-amplifier story ideas Priority rankings Execution roadmap Strategic recommendations Report Generation and Distribution: The workflow compiles comprehensive reports for each post Reports are converted to text files All files are compressed into a ZIP archive The archive is uploaded to Google Drive A link to the archive is shared with team members via Mattermost Setup To set up this workflow, follow these steps: Import the Workflow: Download the workflow JSON file Import it into your n8n instance Configure API Credentials: Reddit: Add a new credential "Reddit OAuth2 API" by following the guide at https://docs.n8n.io/integrations/builtin/credentials/reddit/ Anthropic: Add a new credential "Anthropic Account" by following the guide at https://docs.n8n.io/integrations/builtin/credentials/anthropic/ Google Drive: Add a new credential "Google Drive OAuth2 API" by following the guide at https://docs.n8n.io/integrations/builtin/credentials/google/oauth-single-service/ Configure the "Set Data" Node: Set your interested topics (one per line) Add your Jina API key (obtain from https://jina.ai/api-dashboard/key-manager) Configure the Mattermost Node: Update your Mattermost instance URL Set your Webhook ID and Channel Follow the guide at https://developers.mattermost.com/integrate/webhooks/incoming/ for webhook setup Adjust the Schedule (Optional): The workflow is set to run every Monday at 6am Modify the "Schedule Trigger" node if you need a different schedule Test the Workflow: Run the workflow manually to ensure all connections are working properly Check the output to verify the reports are being generated correctly How to customize this workflow to your needs This workflow can be customized in several ways to better suit your specific requirements: Topic Selection: Modify the topics in the "Set Data" node to focus on industries or subjects relevant to your PR strategy Add multiple topics to cover different client interests or market segments Filtering Criteria: Adjust the "Upvotes Requirement Filtering" node to change the minimum upvotes threshold Modify the filtering conditions to include or exclude certain types of posts Analysis Parameters: Customize the prompts in the "Comments Analysis," "News Analysis," and "Stories Report" nodes to focus on specific aspects of the content or comments Adjust the temperature settings in the Anthropic Chat Model nodes to control the creativity of the AI responses Report Format: Modify the "Set Final Report" node to change the structure or content of the final reports Add or remove sections based on your specific reporting needs Distribution Method: Replace or supplement the Mattermost notification with email notifications, Slack messages, or other communication channels Add additional storage options beyond Google Drive Schedule Frequency: Change the "Schedule Trigger" node to run the workflow more or less frequently Set up multiple triggers for different topics or clients Integration with Other Systems: Add nodes to integrate with your CRM, content management system, or project management tools Create connections to automatically populate content calendars or task management systems