by Guillaume Duvernay
This template provides a fully automated system for monitoring news on any topic you choose. It leverages Linkup's AI-powered web search to find recent, relevant articles, extracts key information like the title, date, and summary, and then neatly organizes everything in an Airtable base. Stop manually searching for updates and let this workflow deliver a curated news digest directly to your own database, complete with a Slack notification to let you know when it's done. This is the perfect solution for staying informed without the repetitive work.

**Who is this for?**
- **Marketing & PR professionals:** Keep a close eye on industry trends, competitor mentions, and brand sentiment.
- **Analysts & researchers:** Effortlessly gather source material and data points on specific research topics.
- **Business owners & entrepreneurs:** Stay updated on market shifts, new technologies, and potential opportunities without dedicating hours to reading.
- **Anyone with a passion project:** Easily follow developments in your favorite hobby, field of study, or area of interest.

**What problem does this solve?**
- **Eliminates manual searching:** Frees you from the daily or weekly grind of searching multiple news sites for relevant articles.
- **Centralizes information:** Consolidates all relevant news into a single, organized, and easily accessible Airtable database.
- **Provides structured data:** Instead of just a list of links, it extracts and formats key information (title, summary, URL, date) for each article, ready for review or analysis.
- **Keeps you proactively informed:** The automated Slack notification ensures you know exactly when new information is ready, closing the loop on your monitoring process.

**How it works**
1. Schedule: The workflow runs automatically based on a schedule you set (the default is weekly).
2. Define topics: In the Set news parameters node, you specify the topics you want to monitor and the time frame (e.g., news from the last 7 days).
3. AI web search: The Query Linkup for news node sends your topics to Linkup's API. Linkup's AI searches the web for relevant news articles and returns a structured list containing each article's title, URL, summary, and publication date (see the request sketch after this section).
4. Store in Airtable: The workflow loops through each article found and creates a new record for it in your Airtable base.
5. Notify on Slack: Once all the news has been stored, a final notification is sent to a Slack channel of your choice, letting you know the process is complete and how many articles were found.

**Setup**
1. Configure the trigger: Adjust the Schedule Trigger node to set the frequency and time you want the workflow to run.
2. Set your topics: In the Set news parameters node, replace the example topics with your own keywords and define how recent the news should be.
3. Connect your accounts:
   - Linkup: Add your Linkup API key in the Query Linkup for news node. Linkup's free plan includes €5 of credits monthly, enough for about 1,000 runs of this workflow.
   - Airtable: In the Store one news node, select your Airtable account, then choose the Base and Table where you want to save the news.
   - Slack: In the Notify in Slack node, select your Slack account and the channel where you want to receive notifications.
4. Activate the workflow: Toggle the workflow to "Active", and your automated news monitoring system is live!

**Taking it further**
- **Change your database:** Don't use Airtable? Easily swap the Airtable node for a Notion, Google Sheets, or any other database node to store your news.
- **Customize notifications:** Replace the Slack node with a Discord, Telegram, or Email node to get alerts on your preferred platform.
- **Add AI analysis:** Insert an AI node after the Linkup search to perform sentiment analysis on the news summaries, categorize articles, or generate a high-level overview before saving them.
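The sketch below shows the kind of request the Query Linkup for news node makes. It follows Linkup's public search API, but treat the exact parameter names and the structured output schema as illustrative rather than authoritative; verify them against Linkup's documentation before relying on them.

```typescript
// Illustrative sketch of the search call behind the "Query Linkup for news" node.
// The schema and field names below are assumptions for demonstration only.
const body = {
  q: "AI regulation news from the last 7 days", // your topics plus the freshness window
  depth: "standard",
  outputType: "structured",
  structuredOutputSchema: JSON.stringify({
    type: "object",
    properties: {
      articles: {
        type: "array",
        items: {
          type: "object",
          properties: {
            title: { type: "string" },
            url: { type: "string" },
            summary: { type: "string" },
            publication_date: { type: "string" },
          },
        },
      },
    },
  }),
};

const res = await fetch("https://api.linkup.so/v1/search", {
  method: "POST",
  headers: { Authorization: "Bearer <LINKUP_API_KEY>", "Content-Type": "application/json" },
  body: JSON.stringify(body),
});
const data = await res.json(); // the structured articles list is then looped over,
// one Airtable record created per article
```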
by Danger
**How it works**
This meta-workflow is designed to intelligently scan all your active workflows in n8n, identify those that contain Webhook nodes, and automatically generate a Swagger (OpenAPI) specification based on them. The output Swagger document reflects all accessible endpoints from your Webhook nodes, making it easier to:
- Visualize your API structure
- Share your endpoints
- Integrate with tools like Postman or Swagger UI

**Enhanced parameter support**
If you want the Swagger document to reflect request parameters (e.g., query or body fields), you can annotate your Webhook nodes using the Note section. When configured properly, these annotations enrich your Swagger documentation with parameter names, types, and descriptions.

**Setup steps**
1. Add the WebhookDocs workflow to n8n: import the WebhookDocs JSON file into your n8n instance.
2. Activate the WebhookDocs workflow (you can also use the test endpoint).
3. Annotate Webhook nodes (optional but recommended). To enable parameter documentation, open the Note section of each Webhook node and add annotations in the following format (a parsing sketch follows this section):
   //@body field_name string description
   //@query field_name string description
4. Open the page https://n8n.youristance.com/webhook/swagger (replacing the domain with your own n8n instance).
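For illustration, here is a hypothetical sketch (not the template's actual code) of how Note annotations in the format above could be parsed and turned into OpenAPI parameter entries. The regular expression and object shapes are assumptions chosen to match the annotation format shown.

```typescript
// Hypothetical parsing of Webhook-node Note annotations into OpenAPI parameters.
type Annotation = { kind: "query" | "body"; name: string; type: string; description: string };

function parseNote(note: string): Annotation[] {
  // Matches lines like: //@query user_id string the id of the user
  const re = /^\/\/@(query|body)\s+(\S+)\s+(\S+)\s*(.*)$/gm;
  return [...note.matchAll(re)].map(([, kind, name, type, description = ""]) => ({
    kind: kind as "query" | "body",
    name,
    type,
    description,
  }));
}

function toOpenApiQueryParams(annotations: Annotation[]) {
  // Query annotations become "parameters" entries; body annotations would instead
  // feed a requestBody schema.
  return annotations
    .filter((a) => a.kind === "query")
    .map((a) => ({ name: a.name, in: "query", description: a.description, schema: { type: a.type } }));
}
```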
by Stephan Koning
This workflow contains community nodes that are only compatible with the self-hosted version of n8n. Alternatively, you can delete the community node and use the HTTP node instead.

Most email agent templates are fundamentally broken. They're stateless—they have no long-term memory. An agent that can't remember past conversations is just a glorified auto-responder, not an intelligent system. This workflow is Part 1 of building a truly agentic system: creating the brain. Before you can have an agent that replies intelligently, you need a knowledge base for it to draw from. This system uses a sophisticated parser to automatically read, analyze, and structure every incoming email. It then logs that intelligence into a persistent, long-term memory powered by mem0.

**The problem this solves**
Your inbox is a goldmine of client data, but it's unstructured, and manually monitoring it is a full-time job. This constant, reactive work prevents you from scaling. This workflow solves that "system problem" by creating an "always-on" engine that automatically processes, analyzes, and structures every incoming email, turning raw communication into a single source of truth for growth.

**How it works**
This is an autonomous, multi-stage intelligence engine. It runs in the background, turning every new email into a valuable data asset.
1. Real-time ingest & prep: The system is kicked off by the Gmail Trigger, which constantly watches your inbox. The moment a new email arrives, the workflow fires. That email is immediately passed to the Set Target Email node, which strips it down to the essentials: the sender's address, the subject, and the core text of the message (I prefer using the plain text or HTML-as-text for reliability). While this step is optional, it's a good practice for keeping the data clean and orderly for the AI.
2. AI analysis (the brain): The prepared text is fed to the core of the system: the AI Agent. This agent, powered by the LLM of your choice (e.g., GPT-4), reads and understands the email's content. It's not just reading; it's performing analysis to:
   - Extract the core message.
   - Determine the sentiment (Positive, Negative, Neutral).
   - Identify potential red flags.
   - Pull out key topics and keywords.
   The agent uses Window Buffer Memory to recall the last 10 messages within the same conversation thread, giving it the context to provide a much smarter analysis.
3. Quality control (the parser): We don't trust the AI's first draft blindly. The analysis is sent to an Auto-fixing Output Parser. If the initial output isn't in a perfect JSON format, a second Parsing LLM (e.g., Mistral) automatically corrects it. This is our "twist" that guarantees your data is always perfectly structured and reliable (an illustrative shape of this structured output is sketched after this section).
4. Create a permanent client record: This is the most critical step. The clean, structured data is sent to mem0. The analysis is now logged against the sender's email address. This moves beyond just tracking conversations; it builds a complete, historical intelligence file on every person you communicate with, creating an invaluable, long-term asset.

Optional use: For back-filling historical data, you can disable the Gmail Trigger and temporarily connect a Gmail "Get Many" node to the Set Target Email node to process your backlog in batches.

**Setup requirements**
To deploy this system, you'll need the following:
- An active n8n instance.
- Gmail API credentials.
- An API key for your primary LLM (e.g., OpenAI).
- An API key for your parsing LLM (e.g., Mistral AI).
- An account with mem0.ai for the memory layer.
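As a rough illustration, the structured analysis enforced by the output parser and the mem0 write could look something like the sketch below. The field names are assumptions for demonstration, and the mem0 request shape should be checked against mem0's current API documentation before use.

```typescript
// Hypothetical shape of the per-email analysis the parser enforces.
interface EmailAnalysis {
  sender: string;
  subject: string;
  summary: string;
  sentiment: "positive" | "negative" | "neutral";
  redFlags: string[];
  topics: string[];
}

// Sketch of persisting that analysis in mem0, keyed by the sender's address so it
// becomes part of that contact's long-term memory. Verify endpoint and fields
// against mem0's docs; this is illustrative only.
async function storeInMem0(analysis: EmailAnalysis, apiKey: string) {
  await fetch("https://api.mem0.ai/v1/memories/", {
    method: "POST",
    headers: { Authorization: `Token ${apiKey}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      user_id: analysis.sender,
      messages: [{ role: "user", content: JSON.stringify(analysis) }],
    }),
  });
}
```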
by Jimleuk
This n8n template demonstrates how to use OpenAI's Responses API with existing LLM and AI Agent nodes. Though I would recommend just waiting for official support, if you're impatient and would like a round-about way to integrate OpenAI's Responses API into your existing AI workflows, then this template is sure to satisfy! The approach implements a simple API wrapper for the Responses API using n8n's built-in webhooks. When the base URL of a custom OpenAI credential is pointed at these webhooks, it's possible to intercept the request and remap it for compatibility.

**How it works**
- An OpenAI subnode is attached to our agent but uses a special custom credential where the base_url is changed to point at this template's webhooks.
- When executing a query, the agent's request is forwarded to our mini chat-completion workflow.
- Here, we take the default request and remap the values for an HTTP Request node which queries the Responses API (see the remapping sketch after this section).
- Once a response is received, we remap the output for Langchain compatibility, so the LLM or Agent node can parse it and respond to the user.
- There are two response formats, one for streaming and one for non-streaming responses.

**How to use**
1. You must activate this workflow to be able to use the webhooks.
2. Create the custom OpenAI credential as instructed.
3. Go to your existing AI workflows and replace the LLM node with one using the custom OpenAI credential. You do not need to copy anything else over to the existing template.

**Requirements**
- OpenAI account for the Responses API

**Customising this workflow**
- Feel free to experiment with other LLMs using this same technique!
- Keep up to date with the Responses API announcements and make modifications as required.
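A minimal sketch of the two remapping steps, assuming simplified request/response shapes. The Responses API payload and output structure are abbreviated here; check OpenAI's Responses API documentation for the full schema before adapting this.

```typescript
// 1. Chat Completions-style body from the LLM/Agent node, remapped for
//    POST https://api.openai.com/v1/responses.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

function toResponsesBody(body: { model: string; messages: ChatMessage[] }) {
  return { model: body.model, input: body.messages };
}

// 2. Responses API output remapped back into a Chat Completions-style reply so
//    Langchain (the LLM or Agent node) can parse it.
function toChatCompletion(resp: any) {
  // Collect the text parts from the Responses API `output` array (simplified).
  const text = (resp.output ?? [])
    .filter((item: any) => item.type === "message")
    .flatMap((item: any) => item.content ?? [])
    .filter((part: any) => part.type === "output_text")
    .map((part: any) => part.text)
    .join("");

  return {
    id: resp.id,
    object: "chat.completion",
    model: resp.model,
    choices: [{ index: 0, finish_reason: "stop", message: { role: "assistant", content: text } }],
  };
}
```

A separate branch would handle the streaming format, re-emitting the Responses API events as chat-completion chunks.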
by Yang
**Who is this for?**
This workflow is for content creators, social media managers, marketing teams, and virtual assistants who want to automatically repurpose YouTube videos into ready-to-post social media content. If you need to quickly turn long-form videos into short posts for platforms like Instagram, Facebook, or LinkedIn, this workflow saves you hours of manual work.

**What problem is this workflow solving?**
Manually extracting ideas from YouTube videos, writing captions, creating images, and preparing social media posts takes a lot of time and effort. This workflow automates the entire process: it reads the video, generates posts with captions and AI images, and organizes everything into Airtable. It lets you focus more on growing your audience instead of spending hours repurposing content.

**What this workflow does**
1. Watches a YouTube channel RSS feed for new videos.
2. Extracts the video transcript automatically using Dumpling AI.
3. Summarizes and transforms the transcript into 3 social media captions (Instagram, Facebook, LinkedIn) using OpenAI.
4. Generates 3 unique AI image prompts.
5. Sends the prompts to Dumpling AI to create realistic social media images.
6. Saves the captions and attaches the AI images in Airtable, ready for posting.

**Setup**
1. RSS feed: Get your YouTube channel's RSS feed URL and insert it into the RSS Trigger node. This will monitor for new YouTube uploads automatically.
2. Dumpling AI for transcript extraction: Sign up at Dumpling AI and get your Dumpling AI API key. In the first HTTP Request node after the RSS trigger, insert your API key (use HTTP Header Authentication). This sends the YouTube URL to Dumpling AI's /extract-transcript endpoint.
3. OpenAI for caption and prompt generation: Get your OpenAI API key and connect your account in the OpenAI node. The AI will generate 3 platform-specific captions and 3 creative prompts to design images related to the video.
4. Edit Fields node: This node organizes the generated captions and prompts into separate fields for easy Airtable mapping. Captions are split for Instagram, Facebook, and LinkedIn.
5. Dumpling AI for AI image generation: After the Edit Fields node, the second HTTP Request node sends the image prompt to Dumpling AI's /generate-image endpoint. This returns a realistic AI-generated image.
6. Airtable for saving posts (without images first): Create a new base in Airtable with the following fields:
   - Platform (Single select: Instagram, Facebook, LinkedIn)
   - Content (Long text field)
   - Image (Attachment field)
   Connect your Airtable Personal Access Token to the Airtable node. The Airtable node saves the generated captions into separate records, initially without images.
7. Upload generated images back to Airtable: The third HTTP Request node PATCHes the Airtable record, updating the Image field with the generated AI image from Dumpling AI (see the sketch after this section).

**Credentials required**
- Dumpling AI API key (for transcript extraction and AI image generation)
- OpenAI API key (for caption and prompt creation)
- Airtable Personal Access Token (for inserting and updating records)

**How to customize this workflow to your needs**
- Change the OpenAI prompt to generate captions in your brand tone (e.g., friendly, professional, witty).
- Modify the image prompts to better match your design style.
- Adjust the Airtable base fields if you want to add more platforms or content formats.
- Add scheduling tools like Buffer or Metricool to automatically post from Airtable.

⚡ Quick tips
- Make sure Dumpling AI credits are active to allow transcript and image generation.
- Set Airtable permissions properly so PATCH requests can update attachments.
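For reference, the attachment update in step 7 amounts to a single Airtable PATCH call like the sketch below. The base ID, table name, record ID, and image URL are placeholders; Airtable downloads the file from the URL you provide and stores it as an attachment.

```typescript
// Minimal sketch of the PATCH call the third HTTP Request node performs.
// appXXXXXXXXXXXXXX, "Posts", and recXXXXXXXXXXXXXX are placeholder IDs.
await fetch("https://api.airtable.com/v0/appXXXXXXXXXXXXXX/Posts/recXXXXXXXXXXXXXX", {
  method: "PATCH",
  headers: {
    Authorization: "Bearer <AIRTABLE_PERSONAL_ACCESS_TOKEN>",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    fields: {
      // Airtable fetches this URL and converts it into an attachment.
      Image: [{ url: "https://example.com/generated-image.png" }],
    },
  }),
});
```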
by Custom Workflows AI
**Introduction**
This workflow offers a streamlined solution for uploading multiple files to a GitHub repository simultaneously using GitHub's REST API. It addresses a significant limitation of n8n's native GitHub node, which only supports single-file uploads at a time. By leveraging GitHub's Git Data API, this workflow creates a new Git tree containing multiple files, commits this tree, and updates the target branch—all in a single automated process. The workflow is particularly valuable for automation scenarios that require batch file operations, such as deploying website updates, publishing documentation, or maintaining configuration files across repositories. It eliminates the need for multiple separate API calls when working with multiple files, making your automation more efficient and less prone to partial update issues. By abstracting the complexities of GitHub's Git Data API into a reusable workflow, it provides a practical solution for developers, content managers, and DevOps professionals who need to programmatically manage repository content at scale.

**Who is this for?**
This workflow is designed for:
- Developers and DevOps engineers who need to automate file updates in GitHub repositories
- Content managers who regularly publish multiple files to GitHub-hosted websites or documentation
- Automation specialists looking to integrate GitHub operations into larger workflows
- Teams using n8n for CI/CD processes who need to push code or configuration changes
Users should have basic familiarity with GitHub concepts (repositories, branches, commits) and should be comfortable obtaining and using GitHub Personal Access Tokens. While the workflow handles the API complexity, users should understand the fundamentals of version control to effectively utilize and customize it.

**What problem is this workflow solving?**
This workflow addresses several key challenges:
- Limited batch operations: n8n's native GitHub node only supports uploading one file at a time, making multi-file operations cumbersome and inefficient.
- API complexity: GitHub's Git Data API requires multiple sequential calls with interdependent data to create commits with multiple files, which is complex to implement manually.
- Automation bottlenecks: Without this workflow, automating multi-file updates would require either multiple separate API calls (risking partial updates) or custom scripting outside of n8n.
- Consistency issues: When files need to be updated together (e.g., code and corresponding documentation), this workflow ensures they're committed in a single atomic operation.
By solving these issues, the workflow enables reliable, atomic updates of multiple files, maintaining repository consistency and simplifying automation processes.

**What this workflow does**

Overview: This workflow uses GitHub's REST API to push multiple files to a repository in a single operation. It follows Git's internal model by:
1. Retrieving the current state of the repository
2. Creating a new tree with the files to be added or updated
3. Creating a new commit with this tree
4. Updating the branch reference to point to the new commit

Process (the same API sequence is sketched in code after this section):
1. Initialization: The workflow starts with a manual trigger and sets up GitHub credentials and repository information.
2. File content definition: Two "Set" nodes define the content for the files to be uploaded.
3. Repository state retrieval: The workflow fetches the latest commit SHA for the specified branch, then retrieves the base tree SHA from this commit.
4. Tree creation: A new Git tree is created that includes both files (file1.txt and file2.txt), specifying their paths and content.
5. Commit creation: A new commit is created with the specified commit message, referencing the new tree and the parent commit.
6. Branch update: Finally, the branch reference is updated to point to the new commit, making the changes visible in the repository.

**Setup**
To use this workflow:
1. Import the workflow: Download the workflow JSON and import it into your n8n instance.
2. Create a GitHub Personal Access Token: Go to GitHub Settings → Developer Settings → Personal Access Tokens → Fine-grained tokens, and create a new token with "Contents" permission (Read and write) for your target repository.
3. Configure the workflow: Update the "Set Github Info" node with your GitHub Personal Access Token, your GitHub username, your repository name, the target branch (default is "main"), and a commit message.
4. Define file content: Modify the "File 1" and "File 2" nodes with the content you want to upload.
5. Adjust file paths if needed: In the "Create new tree" node, update the file paths if you want to change where the files are stored in the repository.
6. Save and run the workflow: Click "Test workflow" to execute the process.

**How to customize this workflow to your needs**
This workflow can be adapted in several ways:
- Add more files: Create additional "Set" nodes for more file content, and in the "Create new tree" node, add more tree entries following the same pattern (path, mode, type, content).
- Change file locations: Modify the "path" parameters in the "Create new tree" node to place files in different directories.
- Dynamic file content: Replace the static content in the "File" nodes with data from other sources, or use previous nodes or HTTP requests to generate file content dynamically.
- Conditional file updates: Add IF nodes to determine which files should be updated based on certain conditions, and create separate branches in your workflow for different update scenarios.
- Scheduled updates: Replace the manual trigger with a Schedule node to run the workflow at specific intervals, or combine it with other triggers like Webhook or database events to push files when certain events occur.
- Error handling: Add Error Trigger nodes to handle potential API failures, and implement notification nodes to alert you of successful pushes or failures.
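The workflow's HTTP Request nodes perform the Git Data API sequence below, shown here as a standalone sketch so the interdependent calls are easy to follow. OWNER, REPO, BRANCH, and TOKEN are placeholders; the file paths and contents mirror the template's example files.

```typescript
// Sketch of the Git Data API sequence: ref -> commit -> tree -> commit -> ref update.
const OWNER = "your-user", REPO = "your-repo", BRANCH = "main", TOKEN = "ghp_xxx";

const gh = (path: string, init: RequestInit = {}) =>
  fetch(`https://api.github.com/repos/${OWNER}/${REPO}${path}`, {
    ...init,
    headers: {
      Authorization: `Bearer ${TOKEN}`,
      Accept: "application/vnd.github+json",
      "Content-Type": "application/json",
    },
  }).then((r) => r.json());

async function pushFiles() {
  // 1. Latest commit SHA of the target branch
  const ref = await gh(`/git/ref/heads/${BRANCH}`);
  const latestCommitSha = ref.object.sha;

  // 2. Base tree SHA from that commit
  const latestCommit = await gh(`/git/commits/${latestCommitSha}`);
  const baseTreeSha = latestCommit.tree.sha;

  // 3. New tree containing both files
  const tree = await gh(`/git/trees`, {
    method: "POST",
    body: JSON.stringify({
      base_tree: baseTreeSha,
      tree: [
        { path: "file1.txt", mode: "100644", type: "blob", content: "Hello from file 1" },
        { path: "file2.txt", mode: "100644", type: "blob", content: "Hello from file 2" },
      ],
    }),
  });

  // 4. New commit pointing at the tree, with the previous commit as parent
  const commit = await gh(`/git/commits`, {
    method: "POST",
    body: JSON.stringify({ message: "Add two files", tree: tree.sha, parents: [latestCommitSha] }),
  });

  // 5. Move the branch reference to the new commit (atomic from the repo's viewpoint)
  await gh(`/git/refs/heads/${BRANCH}`, {
    method: "PATCH",
    body: JSON.stringify({ sha: commit.sha }),
  });
}
```

Because both files land in one commit, the branch either contains all updates or none, which is exactly the consistency guarantee described above.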
by Vitorio Magalhães
**Auto-publish NASA APOD to LinkedIn with AI translation and hashtags**
Transform NASA's daily astronomical wonders into engaging LinkedIn content automatically. This workflow fetches NASA's Astronomy Picture of the Day, translates it to Brazilian Portuguese using AI, generates strategic hashtags, and publishes everything to your LinkedIn profile with the stunning space image attached.

**Who's it for**
Content creators, astronomy enthusiasts, science communicators, and anyone wanting to share high-quality educational content consistently on LinkedIn. Perfect for Portuguese-speaking professionals who want to engage their network with fascinating space discoveries while building their personal brand as a science advocate.

**How it works**
The workflow runs on a daily schedule and handles the complete content pipeline automatically. It fetches the latest NASA APOD through the official API, including both the image and the detailed explanation (a sketch of this API call follows this section). The English description is professionally translated to the selected language (Brazilian Portuguese by default) using Google Gemini 2.5 Flash, while maintaining scientific accuracy and terminology. Smart hashtag generation combines fixed branding tags with content-specific ones, mixing Portuguese and English for maximum reach. The final post includes the NASA image, the translated description, and strategic hashtags, and is then published to your LinkedIn profile automatically.

**How to set up**
You'll need accounts for Google AI Studio (free), LinkedIn Developer (free), and a Telegram bot for notifications. The setup takes about 15 minutes and uses only free services and APIs. First, create your Google AI Studio account and get an API key for the AI translation services. Then set up a LinkedIn OAuth2 application to enable posting permissions. Create a Telegram bot through BotFather and get your chat ID for notifications. Configure the Settings node with your Telegram chat ID and preferred language. The workflow comes with all prompts and configurations ready to use. Test each component individually before activating the daily automation.

**Requirements**
- LinkedIn account with posting permissions
- Google AI Studio API key (free tier available)
- Telegram bot token and your chat ID
- Basic understanding of OAuth2 setup for LinkedIn
- NASA API key (optional - demo key included)
All services used have generous free tiers, making this workflow completely free to operate indefinitely.

**How to customize the workflow**
The centralized Settings node makes customization simple. Change the target language from Brazilian Portuguese to any other language by updating the translate_to_language variable. Modify the posting schedule in the CRON trigger to match your preferred timing. Customize the post template in the "Create Final Post Text" node to match your personal brand voice. Adjust the hashtag strategy by editing the AI prompt in the "Generate Hashtags" node. Add additional social platforms by duplicating the LinkedIn publisher with different credentials. The AI prompts can be fine-tuned for different writing styles or specific astronomical topics. You can also extend the workflow to include additional content processing, image enhancements, or cross-posting to multiple platforms while maintaining the core NASA APOD automation.
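The APOD request itself is a simple GET against NASA's public API; the sketch below shows the fields the rest of the pipeline consumes. DEMO_KEY works for light testing, but you should substitute your own NASA API key for daily runs.

```typescript
// Sketch of the daily APOD fetch.
const res = await fetch("https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY");
const apod = await res.json();

// Typical fields used downstream:
// - apod.title and apod.explanation: the English text sent to Gemini for translation
// - apod.url / apod.hdurl: the image attached to the LinkedIn post
// - apod.media_type: lets you skip days when APOD is a video instead of an image
console.log(apod.title, apod.media_type, apod.url);
```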
by explorium
**Explorium Prospects Search Chatbot Template**
Download the following JSON file and import it into a new n8n workflow: mcp_to_prospects_to_csv.json

**Overview**
This n8n workflow creates a chatbot that understands natural language requests for finding business prospects and automatically:
1. Interprets your query using AI (Claude Sonnet 3.7)
2. Converts it to proper Explorium API filters
3. Validates the API request structure
4. Fetches prospect data from Explorium
5. Exports results as a downloadable CSV file
Perfect for sales teams, recruiters, and business development professionals who need to quickly find and export targeted prospect lists without learning complex API syntax.

**Key features**
- Natural language interface: Simply describe who you're looking for in plain English
- Smart query translation: AI converts your request to valid API parameters
- Built-in validation: Ensures API calls meet Explorium's requirements
- Error recovery: Automatically retries with corrections if validation fails
- Pagination support: Handles large result sets automatically
- CSV export: Clean, formatted output ready for CRM import
- Conversation memory: Maintains context for follow-up queries

**Example queries**
The chatbot understands queries like:
- "Find marketing directors at SaaS companies in New York with 50-200 employees"
- "Get me CTOs from fintech startups in California"
- "Show me sales managers at healthcare companies with revenue over $10M"
- "Find engineers at Microsoft with 3-5 years experience"
- "Get customer service leads from e-commerce companies in Europe"

**Prerequisites**
Before setting up this workflow, ensure you have:
- An n8n instance with the chat interface enabled
- An Anthropic API key for Claude
- Explorium API credentials (Bearer token) - get an Explorium API key
- Basic understanding of n8n chat workflows

**Supported filters**
The chatbot can search using these criteria (an illustrative filter payload is sketched after this section).

Company filters:
- Size: 1-10, 11-50, 51-200, 201-500, 501-1000, 1001-5000, 5001-10000, 10001+ employees
- Revenue: Ranges from $0-500K up to $10T+
- Age: 0-3, 3-6, 6-10, 10-20, 20+ years
- Location: Countries, regions, cities
- Industry: Google categories, NAICS codes, LinkedIn categories
- Name: Specific company names

Prospect filters:
- Job level: CXO, VP, Director, Manager, Senior, Entry, etc.
- Department: Sales, Marketing, Engineering, Finance, HR, etc.
- Experience: Total months and current role duration
- Location: Country and region codes
- Contact info: Filter by email/phone availability

**Installation & setup**

Step 1: Import the workflow
1. Copy the workflow JSON from the template
2. In n8n: Workflows → Add Workflow → Import from File
3. Paste the JSON and click Import

Step 2: Configure Anthropic credentials
1. Click on the Anthropic Chat Model1 node
2. Under Credentials, click Create New
3. Add your Anthropic API key
4. Name: "Anthropic API"
5. Save credentials

Step 3: Configure Explorium credentials
You'll need to set up Explorium credentials in two places.
For the MCP Client:
1. Click on the MCP Client node
2. Under Credentials, create a new Header Auth credential
3. Add your authentication header (usually Authorization: Bearer YOUR_TOKEN)
4. Save credentials
For API calls:
1. Click on the Prospects API Call node
2. Use the same Header Auth credentials created above
3. Verify the API endpoint is correct

Step 4: Activate the workflow
1. Save the workflow
2. Click the Active toggle to enable it
3. The chat interface will now be available

Step 5: Access the chat interface
1. Click on the When chat message received node
2. Copy the webhook URL
3. Access this URL in your browser to start chatting

**How it works**

Workflow architecture:
1. Chat Trigger: Receives natural language queries from users
2. Memory Buffer: Maintains conversation context
3. AI Agent: Interprets queries and generates API parameters
4. Validation: Checks the API structure against Explorium requirements
5. API Call: Fetches prospect data with pagination
6. Data Processing: Formats results for CSV export
7. File Conversion: Creates a downloadable CSV file

Processing flow: User query → AI interpretation → validation → API call → CSV export, with an error-correction loop that feeds validation failures back to the AI for another attempt.

Validation rules. The workflow validates that:
- Filter keys are allowed by the Explorium API
- Values match expected formats (e.g., valid country codes)
- Range filters have proper gte/lte values
- There are no duplicate values in arrays
- The required structure is maintained

**Usage guide**

Basic conversation flow:
1. Start with your query: "Find me VPs of Sales at software companies in the US"
2. The bot processes and responds: it generates API filters, validates the structure, fetches data, and returns a CSV download link.
3. Refine if needed: "Can you also include directors and filter for companies with 100+ employees?"

Query tips:
- Be specific: Include job titles, departments, company details
- Use standard terms: "CTO" instead of "Chief Technology Officer"
- Specify locations: Use country names or standard codes
- Include size/revenue: Helps narrow results effectively

Advanced queries combine multiple criteria: "Find engineering managers and senior engineers at B2B SaaS companies in New York and California with 50-500 employees and revenue over $5M who have been in their role for at least 1 year"

**Output format**
The CSV file includes:
- Prospect ID
- Name (first, last, full)
- Location (country, region, city)
- LinkedIn profile
- Experience summary
- Skills and interests
- Company details
- Job information
- Business ID

**Troubleshooting**

Common issues:
- "Validation failed" errors: Check that your query uses supported filter values, ensure location names are spelled correctly, and verify company sizes/revenues match allowed ranges.
- No results returned: Broaden your search criteria, check if the company exists in Explorium's database, and verify filter combinations aren't too restrictive.
- Chat not responding: Ensure the workflow is activated, check all credentials are properly configured, and verify the webhook URL is accessible.
- Large result sets timing out: Try adding more specific filters, limit results by location or company size, and use the size parameter (max 10,000).

Error messages: The bot provides clear feedback.
- Invalid filters: Shows which filters aren't supported
- Value errors: Lists correct options for each field
- API failures: Explains connection or authentication issues

**Performance optimization**

Best practices:
1. Start broad, then narrow: Begin with basic criteria and add filters
2. Use business IDs when targeting specific companies
3. Limit by contact info: Add has_email: true for actionable leads
4. Batch by location: Process regions separately for large searches

API limits:
- Maximum 10,000 results per search
- Pagination handles up to 100 records per page
- Rate limits apply based on your Explorium subscription

**Customization options**
- Modify AI behavior: Edit the AI Agent system message to change the response format, add custom filters, adjust interpretation logic, or include additional instructions.
- Extend functionality: Add nodes to send results via email, import directly to a CRM, schedule recurring searches, or create custom reports.
- Integration ideas: Connect to Slack for team queries, add to CRM workflows, create lead scoring systems, or build automated outreach campaigns.

**Security considerations**
- API credentials are stored securely in n8n
- Chat sessions are isolated
- No prospect data is stored permanently
- CSV files are generated on-demand

**Support resources**
- n8n platform: Check the n8n documentation
- Explorium API: Contact Explorium support
- Anthropic/Claude: Refer to the Anthropic docs
- Workflow logic: Review node configurations
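To make the filter translation concrete, here is a hypothetical example of the kind of filter object the AI Agent might produce for one of the example queries. The field names and value formats are illustrative only; the authoritative schema is defined by Explorium's API and enforced by the validation step described above.

```typescript
// Hypothetical filters for: "Find marketing directors at SaaS companies in
// New York with 50-200 employees". Field names are assumptions for illustration.
const prospectFilters = {
  company_size: { values: ["51-200"] },
  company_location: { values: ["us-ny"] },
  company_category: { values: ["SaaS"] },
  job_level: { values: ["director"] },
  job_department: { values: ["marketing"] },
  has_email: true,                 // narrows to actionable leads
  current_role_months: { gte: 12 }, // range filters use gte/lte, per the validation rules
};

const request = {
  filters: prospectFilters,
  size: 1000,      // overall cap (API maximum is 10,000)
  page_size: 100,  // pagination handled automatically by the workflow
  page: 1,
};
```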
by Oneclick AI Squad
This automated n8n workflow checks daily class schedules, syncs upcoming classes to Google Calendar, and sends reminder notifications to students via email or SMS. Perfect for educational institutions to keep students informed about their daily classes and schedule changes.

**What this workflow does**
- Automatically checks class schedules every day
- Identifies today's classes and upcoming sessions
- Syncs class information to Google Calendar
- Sends personalized reminders to enrolled students
- Tracks reminder delivery status and logs activities
- Handles both email and SMS notification preferences

**Main components**
- Daily Schedule Check: Triggers daily to check class schedules
- Read Class Schedule: Retrieves today's class schedule from the database/Excel file
- Filter Today's Classes: Identifies classes happening today
- Has Classes Today?: Checks if there are any classes scheduled
- Read Student Contacts: Gets student contact information for enrolled classes
- Sync to Google Calendar: Creates/updates events in Google Calendar
- Create Student Reminders: Generates personalized reminder messages
- Split Into Batches: Processes reminders in manageable batches
- Email or SMS?: Routes based on student communication preferences
- Prepare Email Reminders: Creates email reminder content
- Prepare SMS Reminders: Creates SMS reminder content
- Read Reminder Log: Checks previous reminder history
- Update Reminder Log: Records sent reminders
- Save Reminder Log: Saves updated log data

**Essential prerequisites**
- Class schedule database/Excel file with student enrollments
- Student contact database with email and phone numbers
- Google Calendar API access and credentials
- SMTP server for email notifications
- SMS service provider (Twilio, etc.) for text reminders
- Reminder log file for tracking sent notifications

**Required data files**
- class_schedule.xlsx: Class ID | Class Name | Date | Time | Duration | Instructor | Room | Students Enrolled | Status
- student_contacts.xlsx: Student ID | Name | Email | Phone | Preferred Contact | Program | Class IDs | Active Status
- reminder_log.xlsx: Log ID | Date | Student ID | Class ID | Contact Method | Status | Sent Time | Response

**Key features**
- ⏰ Daily automation: Runs automatically every day
- 📅 Calendar sync: Syncs classes to Google Calendar
- 📧 Smart reminders: Sends email or SMS based on preference
- 👥 Batch processing: Handles multiple students efficiently
- 📊 Activity logging: Tracks all reminder activities
- 🔄 Duplicate prevention: Avoids sending multiple reminders
- 📱 Multi-channel: Supports both email and SMS notifications

**Quick setup**
1. Import the workflow JSON into n8n
2. Configure the daily trigger schedule
3. Set up the class schedule and student contact files
4. Connect Google Calendar API credentials
5. Configure the SMTP server for emails
6. Set up the SMS service provider (Twilio)
7. Test with sample class data
8. Activate the workflow

**Parameters to configure**
- schedule_file_path: Path to the class schedule file
- contacts_file_path: Path to the student contacts file
- google_calendar_id: Google Calendar ID for syncing
- google_api_credentials: Google Calendar API credentials
- smtp_host: Email server settings
- smtp_user: Email username
- smtp_password: Email password
- sms_api_key: SMS service API key
- sms_phone_number: SMS sender phone number

**Sample reminder messages**
- Email: "Hi [Name], reminder: [Class Name] starts at [Time] in [Room]. See you there!"
- SMS: "[Name], your [Class Name] class starts at [Time] in [Room]. Don't miss it!"
An illustrative sketch of filling these templates from a schedule row appears after this section.

**Use cases**
- Daily class reminders for students
- Schedule change notifications
- Exam and assignment deadline alerts
- Teacher absence notifications
- Room change announcements
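The sketch below shows how a Create Student Reminders step could fill the message templates from one class row and one student row. It is illustrative only; the property names follow the spreadsheet columns listed above and should be adapted to your actual files.

```typescript
// Illustrative reminder builder: pick the channel from the student's preference
// and fill the email/SMS template with schedule data.
type ClassRow = { "Class Name": string; Time: string; Room: string };
type StudentRow = { Name: string; Email: string; Phone: string; "Preferred Contact": string };

function buildReminder(cls: ClassRow, student: StudentRow) {
  const email = `Hi ${student.Name}, reminder: ${cls["Class Name"]} starts at ${cls.Time} in ${cls.Room}. See you there!`;
  const sms = `${student.Name}, your ${cls["Class Name"]} class starts at ${cls.Time} in ${cls.Room}. Don't miss it!`;

  return student["Preferred Contact"].toLowerCase() === "sms"
    ? { channel: "sms", to: student.Phone, text: sms }
    : { channel: "email", to: student.Email, text: email };
}

// Example: buildReminder({ "Class Name": "Physics 101", Time: "10:00", Room: "B204" },
//   { Name: "Ana", Email: "ana@example.com", Phone: "+15550001111", "Preferred Contact": "email" })
```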
by Yaron Been
Create your own intelligent Telegram bot that summarizes articles and processes commands automatically. This powerful workflow turns Telegram into your personal AI assistant, handling /help, /summary <URL>, and /img <prompt> commands with intelligent responses - perfect for teams, content creators, and anyone wanting smart automation in their messaging.

🚀 What It Does
- Smart command processing: Automatically recognizes and routes /help, /summary, and /img commands to appropriate AI-powered responses.
- Article summarization: Fetches any URL, extracts content, and generates professional 10-12 bullet point summaries using OpenAI.
- Image generation: Processes image prompts and integrates with AI image generation services.
- Help system: Provides users with clear command instructions and usage examples.

🎯 Key Benefits
- ✅ Personal AI assistant: Get instant article summaries in Telegram
- ✅ Team productivity: Share quick content summaries with colleagues
- ✅ Content research: Rapidly digest articles and web content
- ✅ 24/7 availability: The bot works around the clock without maintenance
- ✅ Easy commands: Simple /summary <link> format anyone can use
- ✅ Scalable: Handles multiple users and requests simultaneously

🏢 Perfect For
Content teams & researchers:
- Journalists quickly summarizing news articles
- Marketing teams researching competitor content
- Students processing academic papers and articles
- Analysts digesting industry reports
Business applications:
- Team communication: Share article insights in group chats
- Research assistance: Quick content analysis for decision making
- Content curation: Summarize articles for newsletters or reports
- Knowledge sharing: Help teams stay informed efficiently

⚙️ What's Included
- Complete bot workflow: Ready-to-deploy Telegram bot with all commands
- AI integration: OpenAI-powered content summarization and processing
- Smart routing: Intelligent command recognition and response system
- Error handling: Robust system that handles invalid commands gracefully
- Extensible design: Easy to add new commands and features

🔧 Quick Setup Requirements
- n8n platform: Cloud or self-hosted instance
- Telegram bot token: Create via @BotFather (free, 5 minutes)
- OpenAI API: For content summarization (pay-per-use)
- Basic configuration: Follow the 15-minute setup guide

📱 User Experience
Simple commands (a routing sketch appears at the end of this section):
- /help - Show available commands
- /summary https://example.com - Get an article summary
- /img sunset over mountains - Generate an image (with supported APIs)

Sample summary output:
📄 Article Summary:
• Company reports 40% revenue growth in Q3 2024
• New AI features driving customer acquisition
• Expansion into European markets planned for 2025
• Investment in R&D increased by 25% this quarter
• Customer satisfaction scores improved to 94%
• Three new product launches scheduled for next year
• Remote work policy made permanent post-pandemic
• Sustainability goals on track to meet 2025 targets
• Partnership with major tech firm announced
• Stock price up 15% following earnings report

🎨 Customization Options
- Command extensions: Add custom commands for specific workflows
- Response formatting: Customize summary style and length
- Multi-language: Support different languages for international teams
- Integration APIs: Connect additional AI services and tools
- User permissions: Control who can use specific commands
- Analytics: Track usage patterns and popular content

🏷️ Tags & Categories
#telegram-bot #ai-automation #content-summarization #article-processing #team-productivity #openai-integration #smart-assistant #workflow-automation #messaging-bot #content-research #ai-agent #n8n-workflow #business-automation #telegram-integration #ai-powered-bot

💡 Use Case Examples
- News team: Quickly summarize breaking news articles for editorial meetings
- Marketing agency: Research competitor content and industry trends efficiently
- Sales team: Digest industry reports and share insights with prospects
- Remote team: Keep everyone informed with summarized company updates

📈 Expected Results
- 80% faster content research and analysis
- 50% more articles processed per day vs manual reading
- 100% team accessibility through the familiar Telegram interface
- 24/7 availability for global teams across time zones

🛠️ Setup & Support
- Quick start: Deploy your bot in 15 minutes with the included guide
- Video tutorial: Complete walkthrough available
- Template commands: Pre-built responses and formatting
- Expert support: Direct help from the workflow creator

📞 Get Help & Resources
- YouTube: https://www.youtube.com/@YaronBeen/videos
- LinkedIn: https://www.linkedin.com/in/yaronbeen/
- Email: Yaron@nofluff.online - Response within 24 hours

Ready to build your intelligent Telegram assistant? Get this AI Telegram Bot Agent and transform your messaging app into a powerful content processing tool. Perfect for teams, researchers, and anyone who wants AI-powered assistance directly in Telegram. Stop manually reading long articles. Start getting instant, intelligent summaries with simple commands.
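For readers curious what the command routing amounts to, here is an illustrative sketch of the logic applied to each incoming Telegram message. In the actual workflow this is handled by IF/Switch nodes rather than code, and the reply strings are placeholders.

```typescript
// Illustrative command router for /help, /summary <URL>, and /img <prompt>.
function routeCommand(text: string) {
  const [cmd, ...rest] = text.trim().split(/\s+/);
  const arg = rest.join(" ");

  switch (cmd) {
    case "/help":
      return { action: "help", reply: "Commands: /help, /summary <URL>, /img <prompt>" };
    case "/summary":
      // The URL is fetched, its content extracted, then summarized into
      // 10-12 bullet points by OpenAI before being sent back to the chat.
      return arg
        ? { action: "summarize", url: arg }
        : { action: "error", reply: "Usage: /summary <URL>" };
    case "/img":
      return arg
        ? { action: "image", prompt: arg }
        : { action: "error", reply: "Usage: /img <prompt>" };
    default:
      return { action: "error", reply: "Unknown command. Send /help for options." };
  }
}
```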
by Mary Newhauser
**RAG over a PDF with Weaviate**
This workflow allows you to upload a PDF file and ask questions about it using the Question and Answer Chain and the Weaviate Vector Store nodes.

**Who it's for**
This workflow is the simplest possible implementation of RAG with Weaviate in n8n. It's intended to act as an extendable template for RAG over your own documents.

**Prerequisites**
- An existing Weaviate cluster. You can view instructions for setting up a local cluster with Docker here or a Weaviate Cloud cluster here.
- API keys to generate embeddings and power chat models. We use OpenAI, but feel free to switch out the models as you like.
- A self-hosted n8n instance. See this video for how to get set up in just three minutes.

**How it works**
- Part 1: Manually upload data. In this example, we manually upload a 100+ page article from arXiv called "A Survey of Large Language Models". But you can replace this with your own more advanced data pipeline, if you wish.
- Part 2: Embed and load data into a Weaviate collection. Here, we generate embeddings for the full text of the article and store them in Weaviate.
- Part 3: Perform RAG over the PDF file with Weaviate. In this part of the workflow, you can enter your query by running the Chat node and get a RAG response grounded in context via the Question and Answer Chain node (the underlying vector search is sketched below).

**How to run the workflow**
1. Go through the prerequisites: creating a Weaviate cluster (local or cloud), downloading self-hosted n8n, and adding your API keys and other credentials.
2. Select the embedding and chat models you'd like to use.
3. Upload a PDF file you want to ask questions about.
4. Execute the rest of the workflow.
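As a conceptual aside, the retrieval step the Weaviate Vector Store node performs is roughly the vector search sketched below with the weaviate-ts-client. The "Document" class and "text" property are assumptions for illustration; your collection name and schema will differ, and a vectorizer module (e.g., text2vec-openai) must be configured for nearText to work.

```typescript
// Conceptual sketch of the similarity search behind the RAG step.
import weaviate from "weaviate-ts-client";

const client = weaviate.client({ scheme: "http", host: "localhost:8080" });

const result = await client.graphql
  .get()
  .withClassName("Document")                                  // assumed collection name
  .withNearText({ concepts: ["What are large language models?"] })
  .withLimit(3)
  .withFields("text _additional { distance }")
  .do();

// The retrieved chunks are what the Question and Answer Chain node passes to the
// chat model as grounding context for its answer.
console.log(result.data.Get.Document);
```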
by darrell_tw
**How it works**
1. Fetch all workflows from your n8n instance.
2. Filter workflows that contain nodes with a modelId setting.
3. Extract the node names, model IDs, model names, workflow names, and workflow URLs.
4. Save the extracted information into a connected Google Sheet.
A standalone sketch of this scan is shown after this section.

**Set up steps**
1. Connect your n8n API credentials.
2. Connect your Google Sheets account.
3. Replace "Your n8n domain" with your actual domain URL.
4. Use this Google Sheet template to create a new sheet for results.
Setup typically takes 5 minutes. Be cautious: if you have over 100 workflows, performance may be impacted.

**Notes**
- Sticky notes inside the workflow provide extra guidance.
- This workflow clears old sheet data before writing new results.
- Make sure your n8n instance allows API access.

**Result example**
Update: It originally didn't detect the AI model used inside tool nodes. Now it's fixed!
Update 2025-04-29: Supports 1.91.0 with opening the node directly! Optimized the URL with the node ID.
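For orientation, the scan is roughly equivalent to the standalone sketch below against n8n's public REST API. N8N_URL and API_KEY are placeholders, and the exact location of modelId inside a node's parameters varies by node type and version, so treat the lookup as illustrative.

```typescript
// Rough sketch of the workflow/model scan via n8n's public API.
const N8N_URL = "https://your-n8n-domain";
const API_KEY = "your-n8n-api-key";

async function listModelUsage() {
  const res = await fetch(`${N8N_URL}/api/v1/workflows`, {
    headers: { "X-N8N-API-KEY": API_KEY },
  });
  const { data: workflows } = await res.json();

  const rows = [];
  for (const wf of workflows) {
    for (const node of wf.nodes ?? []) {
      // modelId may be a plain value or a resource-locator object, depending on the node.
      const modelId = node.parameters?.modelId?.value ?? node.parameters?.modelId;
      if (modelId) {
        rows.push({
          workflow: wf.name,
          node: node.name,
          modelId,
          // The template also appends the node id so the link opens the node directly.
          url: `${N8N_URL}/workflow/${wf.id}`,
        });
      }
    }
  }
  return rows; // these rows are what get appended to the Google Sheet
}
```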