by Onur
Description: Create Social Media Content from Telegram with AI

This n8n workflow empowers you to effortlessly generate social media content and captivating image prompts, all powered by AI. Simply send a topic request through Telegram (as a voice or text message), and watch as the workflow conducts research, crafts engaging social media posts, and creates detailed image prompts ready for use with your preferred AI art generation tool.

**What does this workflow do?**
This workflow streamlines the content creation process by automating research, social media content generation, and image prompt creation, triggered by a simple Telegram message.

**Who is this for?**
- **Social Media Managers:** Quickly generate engaging content and image ideas for various platforms.
- **Content Creators:** Overcome writer's block and discover fresh content ideas with AI assistance.
- **Marketing Teams:** Boost productivity by automating social media content research and drafting.
- **Anyone** looking to leverage AI for efficient and creative social media content creation.

**Benefits**
- **Effortless Content and Image Prompt Generation:** Automate the creation of social media posts and detailed image prompts.
- **AI-Powered Creativity:** Leverage the power of LLMs to generate original content ideas and captivating image prompts.
- **Increased Efficiency:** Save time and resources by automating the research and content creation process.
- **Voice-to-Content:** Use voice messages to request content, making content creation even more accessible.
- **Enhanced Engagement:** Create high-quality, attention-grabbing content that resonates with your audience.

**How it Works**
1. Receive Request: The workflow listens for incoming voice or text messages on Telegram containing your content request.
2. Process Voice (if necessary): If the message is a voice message, it's transcribed into text using OpenAI's Whisper API.
3. AI Takes Over: The AI agent, powered by an OpenAI Chat Model and SerpAPI, conducts online research based on your request.
4. Content and Image Prompt Generation: The AI agent generates engaging social media content and a detailed image prompt based on the research.
5. Image Generation (Optional): You can use the generated image prompt with your preferred AI art generation tool (e.g., DALL-E, Stable Diffusion) to create a visual; a hedged request sketch follows this description.
6. Output: The workflow provides you with the social media content and the detailed image prompt, ready for you to use or refine.

**n8n Nodes Used**
- Telegram Trigger
- Switch
- Telegram (for fetching voice messages)
- OpenAI (Whisper API for voice-to-text)
- Set (for preparing variables)
- AI Agent (with OpenAI Chat Model and SerpAPI tool)
- HTTP Request (for optional image generation)
- Extract from File (for optional image processing)
- Set (for final output)

**Prerequisites**
- Active n8n instance
- Telegram account with a bot
- OpenAI API key
- SerpAPI account
- Hugging Face API key (if you want to generate images within the workflow)

**Setup**
1. Import the workflow JSON into your n8n instance.
2. Configure the Telegram Trigger node with your Telegram bot token.
3. Set up the OpenAI and SerpAPI credentials in the respective nodes.
4. If you want to generate images directly within the workflow, configure the HTTP Request node with your Hugging Face API key.
5. Test the workflow by sending a voice or text message to your Telegram bot with a topic request.

This workflow combines the convenience of Telegram with the power of AI to provide a seamless content creation experience. Start generating engaging social media content today!
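For the optional image generation step, the HTTP Request node posts the AI-generated prompt to an image model. Here is a minimal TypeScript sketch of such a call against the Hugging Face Inference API; the model name and the raw-image response handling are assumptions, so adapt them to the model you actually subscribe to.

```typescript
// Minimal sketch: send an image prompt to the Hugging Face Inference API.
// Assumptions: the model name and the raw-bytes response shape are illustrative;
// check the docs for the model you use.
const HF_TOKEN = process.env.HF_API_KEY ?? "";

async function generateImage(prompt: string): Promise<Buffer> {
  const response = await fetch(
    "https://api-inference.huggingface.co/models/stabilityai/stable-diffusion-xl-base-1.0",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${HF_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ inputs: prompt }),
    }
  );
  if (!response.ok) {
    throw new Error(`Image generation failed: ${response.status}`);
  }
  // The inference endpoint typically returns the image bytes directly.
  return Buffer.from(await response.arrayBuffer());
}

// Usage: feed the prompt produced by the AI Agent step.
generateImage("A sunrise over a mountain lake, watercolor style")
  .then((img) => console.log(`Received ${img.length} bytes of image data`));
```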
by Matt Chong
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**Who is this for?**
If your inbox is full of unread emails, this workflow is for you. Instead of reading through them one by one, let AI do the sorting. It reads your emails and flags only what needs action.

**What does it solve?**
This workflow reads your unread Gmail emails and uses AI to decide what's important and what's not. It labels emails that need your attention, identifies receipts, and trashes everything else. No more manual reading. Just an inbox that uses AI to take care of itself.

**How it works**
1. Every hour, the workflow runs automatically.
2. It searches for unread emails in your Gmail inbox.
3. For each email, it extracts the content and sends it to OpenAI. The AI returns one of four labels: Action, Receipt, Informational, or Spam.
4. Based on the label, emails are marked with the appropriate label, or moved to trash if they are spam (a classification sketch follows this description).
5. It marks the email as read once processed.

**How to set up?**
1. Connect these services in your n8n credentials: Gmail (OAuth2) and OpenAI (API key).
2. Create the Gmail labels: in your Gmail account, create these labels exactly as written: Action, Receipt, and Informational. The workflow will apply these labels based on AI classification.

**How to customize this workflow to your needs**
- Change the AI prompt to detect more types of emails, like Meeting or Newsletter.
- Add more branches to the Switch node to apply custom logic.
- Change the schedule to fit your workflow. By default, it runs every hour, but you can update this in the Schedule Trigger node.
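As a rough illustration of the classification step, the sketch below sends an email to OpenAI's chat completions endpoint and maps the returned label to an action, mirroring the Switch-node branching. The prompt wording, model name, and `actionFor` mapping are assumptions for illustration, not the exact configuration shipped with the workflow.

```typescript
// Sketch of the email triage step, assuming the OpenAI chat completions API.
type Label = "Action" | "Receipt" | "Informational" | "Spam";

async function classifyEmail(subject: string, body: string): Promise<Label> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [
        {
          role: "system",
          content:
            "Classify the email into exactly one label: Action, Receipt, Informational, or Spam. Reply with the label only.",
        },
        { role: "user", content: `Subject: ${subject}\n\n${body}` },
      ],
    }),
  });
  const data = await response.json();
  const label = data.choices[0].message.content.trim() as Label;
  return ["Action", "Receipt", "Informational", "Spam"].includes(label)
    ? label
    : "Informational"; // fall back to a safe default
}

// Mirrors the Switch node: apply a Gmail label, or trash the message for Spam.
function actionFor(label: Label): string {
  return label === "Spam" ? "moveToTrash" : `applyLabel:${label}`;
}
```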
by Javier Hita
Follow me on LinkedIn for more!

Category: Lead Generation, Data Collection, Business Intelligence
Tags: lead-generation, google-maps, rapidapi, business-data, contact-extraction, google-sheets, duplicate-prevention, automation
Difficulty Level: Intermediate
Estimated Setup Time: 15-20 minutes

**Template Description**

**Overview**
This powerful n8n workflow automates the extraction of comprehensive business information from Google Maps using keyword-based searches via RapidAPI's Local Business Data service. Perfect for lead generation, market research, and competitive analysis, this template intelligently gathers business data including contact details, social media profiles, and location information while preventing duplicates and optimizing API usage.

**Key Features**
- 🔍 **Keyword-Based Google Maps Scraping**: Search for any business type in any location using natural language queries
- 📧 **Contact Information Extraction**: Automatically extracts emails, phone numbers, and social media profiles (LinkedIn, Instagram, Facebook, etc.)
- 🚫 **Smart Duplicate Prevention**: Two-level duplicate detection saves 50-80% on API costs by skipping processed searches and preventing duplicate business entries
- 📊 **Google Sheets Integration**: Seamless data storage with automatic organization and structure
- 🌍 **Multi-Location Support**: Process multiple cities, regions, or countries in a single workflow execution
- ⚡ **Rate Limiting & Error Handling**: Built-in delays and error handling ensure reliable, uninterrupted execution
- 💰 **Cost Optimization**: Intelligent batching and duplicate prevention minimize API usage and costs
- 📱 **Comprehensive Data Collection**: Gather business names, addresses, ratings, reviews, websites, verification status, and more

**Prerequisites**

Required Services & Accounts
- RapidAPI account with a subscription to the "Local Business Data" API
- Google account for Google Sheets integration
- n8n instance (cloud or self-hosted)

Required Credentials
- **RapidAPI HTTP Header Authentication** for the Local Business Data API
- **Google Sheets OAuth2** for data storage and retrieval

**Setup Instructions**

Step 1: RapidAPI Configuration
1. Create a RapidAPI account: sign up at RapidAPI.com, navigate to the "Local Business Data" API, and subscribe to a plan (the Basic plan supports 1000 requests/month).
2. Get API credentials: copy your X-RapidAPI-Key from the API dashboard and note the host: local-business-data.p.rapidapi.com.
3. Configure the n8n credential: in n8n, go to Settings → Credentials → Create New, choose type HTTP Header Auth, name it "RapidAPI Local Business Data", and add the headers X-RapidAPI-Key: YOUR_API_KEY and X-RapidAPI-Host: local-business-data.p.rapidapi.com.

Step 2: Google Sheets Setup
1. Enable the Google Sheets API: go to the Google Cloud Console, enable the Google Sheets API for your project, and create OAuth2 credentials.
2. Configure the n8n credential: in n8n, go to Settings → Credentials → Create New, choose type Google Sheets OAuth2 API, and follow the OAuth2 setup process.
3. Create the Google Sheet structure: create a new Google Sheet with these tabs.

keyword_searches sheet:

| select | query | lat | lon | country_iso_code |
|--------|-------|-----|-----|------------------|
| X | Restaurants Madrid | 40.4168 | -3.7038 | ES |
| X | Hair Salons Brooklyn | 40.6782 | -73.9442 | US |
| X | Coffee Shops Paris | 48.8566 | 2.3522 | FR |

stores_data sheet: the workflow will automatically create columns for business data including business_id, name, phone_number, email, website, full_address, rating, review_count, linkedin, instagram, query, lat, lon, and 25+ more fields.

Step 3: Workflow Configuration
1. Import the workflow: copy the provided JSON and, in n8n, use Import from JSON.
2. Update placeholder values: replace YOUR_GOOGLE_SHEET_ID with your actual Google Sheet ID and update credential references to match your setup.
3. Configure search parameters (optional): adjust limit (1-100 results per query, default: 100), modify zoom (10-18 search radius, default: 13), and change language (EN, ES, FR, etc., default: EN).

**How It Works**

Workflow Process
1. Load Search Criteria: reads queries marked with "X" from the keyword_searches sheet.
2. Load Existing Data: retrieves previously processed data for duplicate detection.
3. Filter New Searches: a smart merge identifies only new query+location combinations.
4. Process Each Location: sequential processing prevents API overload.
5. Configure Parameters: prepares search parameters from sheet data.
6. API Request: calls RapidAPI to extract business information (a hedged request sketch follows this description).
7. Parse Data: structures and cleans all business information.
8. Save Results: stores new leads in the stores_data sheet.
9. Rate Limiting: 10-second delay between requests.
10. Loop: continues until all new searches are processed.

Duplicate Prevention Logic
- Search Level: compares new queries against existing data using the query+latitude combination, skipping already processed searches.
- Business Level: each business receives a unique business_id to prevent duplicate entries even across different searches.

**Data Extracted**

Business Information
- Business name, full address, phone number
- Website URL, Google My Business rating and review count
- Business type, price level, verification status
- Geographic coordinates (latitude/longitude)
- Detailed location breakdown (street, city, state, country, zip)

Contact Details
- Email addresses (when publicly available)
- Social media profiles: LinkedIn, Instagram, Facebook, Twitter, YouTube, TikTok, Pinterest
- Additional phone numbers
- Direct Google Maps and reviews links

Search Metadata
- Original search query and parameters
- Extraction timestamp and geographic data
- API response details for tracking

**Use Cases**

Lead Generation
- Generate targeted prospect lists for B2B sales
- Build location-specific customer databases
- Create industry-specific contact lists
- Develop territory-based sales strategies

Market Research
- Analyze competitor density in target markets
- Study business distribution
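To make the API Request step concrete, here is a minimal TypeScript sketch of a call to RapidAPI's Local Business Data service with the two authentication headers configured above. The `/search` path and the parameter names (query, lat, lng, limit, zoom, language, region) are assumptions based on the parameters this template exposes; confirm them against the API's documentation on RapidAPI.

```typescript
// Minimal sketch: query the Local Business Data API on RapidAPI.
// The endpoint path and parameter names are assumptions; check the API docs.
interface SearchRow {
  query: string;
  lat: number;
  lon: number;
  country_iso_code: string;
}

async function searchBusinesses(row: SearchRow, apiKey: string) {
  const params = new URLSearchParams({
    query: row.query,
    lat: String(row.lat),
    lng: String(row.lon),
    limit: "100",   // 1-100 results per query
    zoom: "13",     // 10-18 search radius
    language: "en",
    region: row.country_iso_code.toLowerCase(),
  });

  const response = await fetch(
    `https://local-business-data.p.rapidapi.com/search?${params}`,
    {
      headers: {
        "X-RapidAPI-Key": apiKey,
        "X-RapidAPI-Host": "local-business-data.p.rapidapi.com",
      },
    }
  );
  if (!response.ok) throw new Error(`RapidAPI request failed: ${response.status}`);
  const { data } = await response.json();
  return data; // array of business records (name, address, rating, socials, ...)
}
```

In the workflow this call happens once per row marked "X" in keyword_searches, with a 10-second wait between iterations.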
by Hybroht
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**JSON Architect - Dynamically Generate JSON Output Formats for Any AI Agent**

**Overview**
Version: 1.0
The JSON Architect workflow is designed to instruct AI agents on the required JSON structure for a given context and create the appropriate JSON output format. This workflow ensures that the generated JSON is validated and tested, providing a reliable JSON output format for use in various applications.

✨ **Features**
- **Dynamic JSON Generation**: Automatically generate the JSON format based on the input requirements.
- **Validation and Testing**: Validate the generated JSON format and test its functionality, ensuring reliability before output.
- **Iterative Improvement**: If the generated JSON is invalid or fails testing, the workflow will attempt to regenerate it until successful or until a defined maximum number of rounds is reached.
- **Structured Output**: The final output is the generated JSON output format, making it easy to integrate with other systems or workflows.

👤 **Who is this for?**
This workflow is ideal for developers, data scientists, and businesses that require dynamic JSON structures for the responses of AI agents. It is particularly useful for those involved in procedural generation, data interchange formats, configuration management, and machine learning model input/output.

💡 **What problem does this solve?**
The workflow addresses the challenge of generating optimal JSON structures by automating the process of creation, validation, and testing. This approach ensures that the JSON format is appropriate for its intended use, reducing errors and enhancing the overall quality of data interchange.

Use-case examples:
- 🔄 Data Interchange Formats
- 🛠️ Procedural Generation
- 📊 Machine Learning Model Input/Output
- ⚙️ Configuration Management

🔍 **What this workflow does**
The workflow orchestrates a process where AI agents generate, validate, and test JSON output formats based on the provided input. This approach leads to a more refined and functional JSON output parser.

🔄 **Workflow Steps**
1. Input & Setup: The initial input is provided, and the workflow is configured with the necessary parameters.
2. Round Start: Initiates the round of JSON construction, ensuring the input is as expected.
3. JSON Generation & Validation: Generates and validates the JSON output format according to the input.
4. JSON Test: Verifies whether the generated JSON output format works as intended.
5. Validation or Test Fails: If the JSON fails validation or testing, the process loops back to the Round Start for correction (a sketch of this generate-validate-retry loop follows this description).
6. Final Output: The final output is generated based on successful JSON construction, providing a cohesive response.

📌 **Expected Input**
- **input**: The input that requires a proper JSON structure.
- **max_rounds**: The maximum number of rounds before stopping the loop if it fails to produce and test a valid JSON structure. Suggested: 10.
- **rounds**: The initial number of rounds. Default: 0.

📦 **Expected Output**
- **input**: The original input used to create the JSON structure.
- **json_format_name**: A snake_case identifier for the generated JSON format. Useful if you plan to reuse it for multiple AI agents or workflows.
- **json_format_usage**: A description of how to use the JSON output format in an input. Meant to be used by AI agents receiving the JSON output format in their output parser.
- **json_format_valid_reason**: The reason provided by the AI agents explaining why this JSON format works for the input.
- **json_format_structure**: The JSON format itself, intended for application through the **Advanced JSON Output Parser** custom node.
- **json_format_input**: The input after the JSON output format (json_format_structure) has been applied in an AI agent's output parser.

📌 **Example**
An example that includes both the input and the final output is provided in a note within the workflow.

⚙️ **n8n Setup Used**
- **n8n Version**: 1.100.1
- **n8n-nodes-advanced-output-parser**: 1.0.1
- **Running n8n via**: Podman 4.3.1
- **Operating System**: Linux

⚡ **Requirements to Use/Setup**

🔐🔧 Credentials & Configuration
- Obtain the necessary LLM API key and permissions to utilize the workflow effectively.
- This workflow depends on a custom node for dynamically inputting JSON output formats called n8n-nodes-advanced-output-parser. You can find the repository here.
- Warning: As of 2025-07-09, the custom node's creator has warned that this node is not production-ready. Be cautious about using it in production environments.

⚠️ **Notes, Assumptions & Warnings**
- This workflow assumes that users have a basic understanding of n8n and JSON configuration.
- This workflow assumes that users have access to the necessary API keys and permissions to utilize the Mistral API or other LLM APIs.
- Ensure that the input provided to the AI agents is clear and concise to avoid confusion in the JSON generation process. Ambiguous inputs may lead to invalid or irrelevant JSON output formats.

ℹ️ **About Us**
This workflow was developed by the Hybroht team of AI enthusiasts and developers dedicated to enhancing the capabilities of AI through collaborative processes. Our goal is to create tools that harness the possibilities of AI technology and more.
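The retry behavior controlled by rounds and max_rounds can be pictured as a small loop: generate a candidate format, check that it parses as JSON, run a basic test, and start another round on failure. The sketch below shows that control flow in TypeScript with a placeholder generateFormat function standing in for the LLM call; it illustrates the loop, not the workflow's actual node logic.

```typescript
// Sketch of the generate → validate → test → retry loop (illustrative only).
// generateFormat() stands in for the LLM call that produces a candidate format.
interface RoundResult {
  ok: boolean;
  rounds: number;
  format?: string;
}

async function buildJsonFormat(
  input: string,
  maxRounds: number,
  generateFormat: (input: string) => Promise<string>
): Promise<RoundResult> {
  for (let rounds = 1; rounds <= maxRounds; rounds++) {
    const candidate = await generateFormat(input);

    // Validation: the candidate must itself be parseable JSON.
    let parsed: unknown;
    try {
      parsed = JSON.parse(candidate);
    } catch {
      continue; // invalid JSON -> next round
    }

    // Test: a minimal check that the format describes a non-empty object.
    const isUsable =
      typeof parsed === "object" && parsed !== null && Object.keys(parsed).length > 0;
    if (isUsable) {
      return { ok: true, rounds, format: candidate };
    }
  }
  return { ok: false, rounds: maxRounds };
}
```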
by German Velibekov
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Transform email overload into actionable insights with this automated daily digest workflow that intelligently summarizes categorized emails using AI.

**Who's it for**
This workflow is perfect for busy professionals, content creators, and newsletter subscribers who need to stay informed without spending hours reading through multiple emails. Whether you're tracking industry news, monitoring competitor updates, or managing content subscriptions, this automation helps you extract key insights efficiently.

**How it works**
The workflow runs automatically each morning at 9 AM, fetching emails from a specific Gmail label received in the last 24 hours. Each email is processed through OpenAI's language model using LangChain to create concise, readable summaries that preserve important links and formatting. All summaries are then combined into a single, well-formatted digest email and sent to your inbox, replacing dozens of individual emails with one comprehensive overview (a sketch of the combining step follows this description).

**How to set up**
1. Create a Gmail label for emails you want summarized (e.g., "Tech News", "Industry Updates").
2. Configure credentials for both Gmail OAuth2 and the OpenAI API in their respective nodes.
3. Update the Gmail label ID in the "Get mails (last 24h)" node with your specific label.
4. Set your email address in the "Send Digested mail" node.
5. Adjust the schedule in the Schedule Trigger if you prefer a different time than 9 AM.
6. Test the workflow with a few labeled emails to ensure proper formatting.

**Requirements**
- Gmail account with OAuth2 authentication configured
- OpenAI API account and valid API key
- At least one Gmail label set up for email categorization
- Basic understanding of n8n workflow execution

**How to customize the workflow**
- Change summarization style: modify the prompt in the "Summarization Mails" node to adjust the tone, length, or format of summaries. You can make summaries more technical, casual, or focus on specific aspects like action items.
- Adjust time range: change the receivedAfter parameter in the Gmail node to fetch emails from different time periods (last 2 days, last week, etc.).
- Multiple labels: duplicate the Gmail retrieval section to process multiple labels and combine them into categories within your digest.
- Add filtering: insert additional conditions to filter emails by sender, subject keywords, or other criteria before summarization.
- Custom formatting: modify the "Combine Subject and Body" code node to change the HTML structure, add styling, or include additional metadata like email timestamps or priority indicators.
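The "Combine Subject and Body" step can be imagined as a small Code-node script that folds the per-email summaries into one HTML body. The sketch below is a hedged TypeScript version of that idea; the field names (subject, summary) are assumptions about the items flowing into the node.

```typescript
// Sketch of a digest-building step, assuming each incoming item carries
// a subject and an AI-generated summary (field names are illustrative).
interface EmailSummary {
  subject: string;
  summary: string; // HTML or plain text produced by the summarization node
}

function buildDigestHtml(items: EmailSummary[], date: Date = new Date()): string {
  const sections = items
    .map(
      (item) => `
    <section style="margin-bottom:16px">
      <h3 style="margin:0 0 4px 0">${item.subject}</h3>
      <div>${item.summary}</div>
    </section>`
    )
    .join("\n");

  return `
  <html>
    <body style="font-family:Arial,sans-serif;max-width:640px;margin:auto">
      <h2>Daily Digest - ${date.toDateString()}</h2>
      ${sections}
    </body>
  </html>`;
}

// Usage: pass the node's input items and send the result with the Gmail node.
console.log(buildDigestHtml([{ subject: "AI Weekly", summary: "<p>Key points...</p>" }]));
```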
by Shun Fukuchi
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**Automated Research Reports with AI and Tavily Search**

An intelligent research automation workflow designed for Japanese users that transforms user queries into comprehensive HTML reports delivered via email. Using Google Gemini AI and Tavily search, this workflow generates optimized search queries, conducts multi-perspective research, and delivers structured analysis reports in Japanese.

**Who's it for**
Content creators, researchers, analysts, and businesses in Japan who need comprehensive research reports on various topics without manual information gathering. Particularly valuable for Japanese professionals conducting competitive analysis, market research, and technical comparisons who prefer reports in their native language.

**How it works**
The workflow follows a strategic four-step process:
1. Query Optimization: Google Gemini AI analyzes user input and generates three optimized search queries for comprehensive coverage.
2. Multi-Query Research: Tavily's advanced search executes all queries with deep search parameters and AI-generated answers (a hedged request sketch appears after this description).
3. Report Synthesis: Another Gemini AI model consolidates findings, eliminates duplicates, and structures the information into readable HTML format.
4. Email Delivery: Gmail automatically sends the final HTML report to the specified recipients.

**Requirements**
- Google Gemini API credentials (for three separate AI nodes)
- Tavily API credentials for advanced search functionality
- Gmail authentication for email delivery
- Basic n8n workflow execution permissions

**How to set up**
1. Configure API credentials in all Google Gemini and Tavily nodes.
2. Update the email settings in the "Send a message" node with your recipient address.
3. Customize your query in the "Edit Fields" node (default: "n8nとdifyの違い", i.e. "the difference between n8n and Dify").
4. Test the workflow to ensure all connections work properly.

**How to customize the workflow**
- Research depth: increase max_results in the Tavily search for more comprehensive data gathering.
- Query optimization: modify the system prompts in the Query Generator for domain-specific searches.
- Report format: adjust the Report Agent's system message to change the output structure, language, or focus areas.
- Multi-recipient delivery: duplicate the Gmail node for multiple email destinations.

The workflow processes Japanese and English queries effectively, with built-in support for Japanese language output, making it ideal for Japanese professionals who need multilingual research capabilities. Advanced search parameters ensure high-quality, relevant results for professional research applications.
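For reference, the Tavily research step boils down to a POST to Tavily's search endpoint with the optimized query and deep-search parameters. The sketch below assumes Tavily's documented /search endpoint and body fields (search_depth, include_answer, max_results); verify the exact field names against the current Tavily API documentation.

```typescript
// Sketch of one Tavily search call (field names assumed from Tavily's docs).
async function tavilySearch(query: string, apiKey: string) {
  const response = await fetch("https://api.tavily.com/search", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      api_key: apiKey,
      query,
      search_depth: "advanced", // deeper crawling for research-grade results
      include_answer: true,     // ask Tavily for an AI-generated answer as well
      max_results: 5,           // raise this for more comprehensive data gathering
    }),
  });
  if (!response.ok) throw new Error(`Tavily search failed: ${response.status}`);
  return response.json(); // { answer, results: [{ title, url, content, ... }] }
}

// The workflow would run this once per optimized query (three in total)
// and hand the combined results to the report-synthesis model.
```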
by Jimleuk
This n8n template demonstrates a simple approach to using AI to automate the generation of blog content that aligns with your organisation's brand voice and style, by using examples of previously published articles. In a way, it's quick and dirty "training" which can get your automated content generation strategy up and running for very little effort and cost whilst you evaluate your AI content pipeline.

**How it works**
- In this demonstration, the n8n.io blog is used as the source of existing published content, and 5 of the latest articles are imported via the HTTP node.
- The HTML node extracts the article bodies, which are then converted to markdown for our LLMs (see the sketch after this description).
- We use LLM nodes to (1) understand the article structure and writing style and (2) identify the brand voice characteristics used in the posts.
- These are then used as guidelines in our final LLM node when generating new articles.
- Finally, a draft is saved to Wordpress for human editors to review or use as a starting point for their own articles.

**How to use**
- Update Step 1 to fetch data from your desired blog, or change it to fetch existing content in a different way.
- Update Step 5 to provide your new article instruction. For optimal output, theme topics relevant to your brand.

**Requirements**
- A source of text-heavy content is required to accurately break down the brand voice and article style. Don't have your own? Maybe try your competitors'.
- OpenAI for the LLM - though I recommend exploring other models which may give subjectively better results.
- Wordpress for the blog, but feel free to use other preferred publishing platforms.

**Customising this workflow**
- Ideally, you'd want to "train" your agent on material which is similar to your output, i.e. your social media posts may not get the best results from your blog content due to differing formats.
- Typically, this brand voice extraction exercise should run once and then be cached somewhere for reuse later. This would save on generation time and the overall cost of the workflow.
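As an illustration of the extract-and-convert step, the sketch below pulls an article body out of a fetched page and converts it to markdown using the cheerio and turndown libraries. The "article" CSS selector is an assumption; the selector you need depends on the blog's HTML structure.

```typescript
// Sketch: fetch a blog post, extract the article body, convert to markdown.
// Assumes the body lives in an <article> element; adjust the selector per blog.
import * as cheerio from "cheerio";
import TurndownService from "turndown";

async function articleToMarkdown(url: string): Promise<string> {
  const html = await (await fetch(url)).text();
  const $ = cheerio.load(html);
  const body = $("article").first().html() ?? "";
  return new TurndownService().turndown(body);
}

// Usage: run this for each of the 5 latest posts, then feed the markdown
// to the style-analysis and brand-voice LLM prompts.
articleToMarkdown("https://blog.n8n.io/some-post/").then((md) =>
  console.log(md.slice(0, 200))
);
```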
by Yaron Been
Transform chaotic support requests into organized, actionable insights automatically. This intelligent workflow captures support tickets from forms, uses AI to categorize them and analyze sentiment, stores everything in organized databases, and delivers comprehensive analytics reports to your team, eliminating manual sorting while providing valuable business intelligence.

🚀 **What It Does**
- Intelligent Ticket Processing: automatically categorizes incoming support requests into Billing, Bug Reports, Feature Requests, How-To questions, and Complaints using advanced AI analysis.
- Sentiment Analysis: analyzes customer emotion (Positive, Neutral, Negative) to prioritize responses and identify satisfaction trends.
- Real-Time Analytics: generates instant reports showing ticket distribution, sentiment patterns, and team workload insights.
- Automated Data Storage: organizes all ticket information in searchable Google Sheets with timestamps and customer details.
- Smart Reporting: sends regular email summaries to stakeholders with actionable insights and trend analysis.

🎯 **Key Benefits**
✅ Save 10+ Hours Weekly: eliminate manual ticket sorting and categorization
✅ Improve Response Times: prioritize tickets based on category and sentiment
✅ Boost Customer Satisfaction: never miss urgent issues or complaints
✅ Track Performance: monitor support trends and team effectiveness
✅ Scale Operations: handle increasing ticket volume without additional staff
✅ Data-Driven Decisions: make informed improvements based on real patterns

🏢 **Perfect For**

Customer Support Teams
- SaaS companies managing user inquiries and bug reports
- E-commerce stores handling order and product questions
- Service businesses organizing client communications
- Startups scaling support operations efficiently

Business Applications
- **Help Desk Management**: organize and prioritize incoming support requests
- **Customer Success**: monitor satisfaction levels and identify improvement areas
- **Product Development**: track feature requests and bug report patterns
- **Team Management**: distribute workload based on ticket categories and urgency

⚙️ **What's Included**
- Complete Workflow Setup: ready-to-use n8n workflow with all nodes configured
- AI Integration: Google Gemini-powered classification and sentiment analysis
- Form Integration: works with Typeform (easily adaptable to other platforms)
- Data Management: automated Google Sheets organization and storage
- Email Reporting: professional summary reports sent to your team
- Documentation: step-by-step setup and customization guide

🔧 **Technical Requirements**
- **n8n Platform**: cloud or self-hosted instance
- **Google Gemini API**: for AI classification (free tier available)
- **Typeform Account**: for support form creation (alternatives supported)
- **Google Workspace**: for Sheets data storage and Gmail reporting
- **SMTP Email**: for automated report delivery

📊 **Sample Output**

Daily Support Summary Email:

📧 Support Ticket Summary - March 15, 2024

📊 TICKET BREAKDOWN:
• Billing: 12 tickets (30%)
• Bug Report: 8 tickets (20%)
• Feature Request: 6 tickets (15%)
• How-To: 10 tickets (25%)
• Complaint: 4 tickets (10%)

😊 SENTIMENT ANALYSIS:
• Positive: 8 tickets (20%)
• Neutral: 22 tickets (55%)
• Negative: 10 tickets (25%)

⚡ PRIORITY ACTIONS:
• 4 complaints requiring immediate attention
• 3 billing issues escalated to finance team
• 6 feature requests for product backlog review

🎨 **Customization Options**
- Categories: easily modify ticket categories for your specific business needs
- Form Platforms: adapt to Google Forms, JotForm, Wufoo, or custom webhooks
- Reporting Frequency: set daily, weekly, or real-time report delivery
- Team Notifications: configure alerts for urgent tickets or negative sentiment
- Data Visualization: create custom dashboards and charts in Google Sheets
- Integration Extensions: connect to CRM, project management, or chat platforms

🔄 **How It Works**
1. A customer submits a support request via your form.
2. AI analyzes the message content and assigns a category + sentiment.
3. Data is automatically stored in organized Google Sheets.
4. The system generates real-time analytics on all historical tickets (a summary-building sketch follows this description).
5. A professional report is emailed to your support team.
6. The team can prioritize responses based on urgency and sentiment.

💡 **Use Case Examples**
- SaaS Company: automatically route billing questions to finance, bugs to development, and feature requests to the product team
- E-commerce Store: prioritize shipping complaints, categorize product questions, and track customer satisfaction trends
- Consulting Firm: organize client requests by service type, monitor project-related issues, and ensure timely responses
- Healthcare Practice: sort appointment requests, billing inquiries, and medical questions while maintaining HIPAA compliance

📈 **Expected Results**
- **80% reduction** in manual ticket sorting time
- **50% faster** initial response times through better prioritization
- **25% improvement** in customer satisfaction scores
- **100% visibility** into support trends and team performance
- **Unlimited scalability** as your business grows

📞 **Get Help & Learn More**

🎥 Free Video Tutorials
- YouTube Channel: https://www.youtube.com/@YaronBeen/videos

💼 Professional Support
- LinkedIn: https://www.linkedin.com/in/yaronbeen/
- Connect for implementation consulting
- Share your automation success stories
- Access exclusive templates and updates

📧 Direct Support
- Email: Yaron@nofluff.online
- Technical setup assistance
- Custom workflow modifications
- Integration with existing systems
- Response within 24 hours

🏆 **Why Choose This Workflow**
- Proven Results: successfully deployed across 100+ businesses worldwide
- Expert Created: built by an automation specialist with 10+ years of experience
- Continuously Updated: regular improvements and new features added
- Money-Back Guarantee: full refund if not satisfied within 30 days
- Lifetime Support: ongoing help and updates included with purchase
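To show how the analytics step could produce the breakdown in the sample email above, here is a small TypeScript sketch that tallies categories and sentiments from stored tickets and renders the percentage lines. The Ticket shape is an assumption about the Google Sheets columns, not the workflow's exact schema.

```typescript
// Sketch: aggregate stored tickets into the category/sentiment breakdown
// shown in the sample summary email (field names are illustrative).
type Category = "Billing" | "Bug Report" | "Feature Request" | "How-To" | "Complaint";
type Sentiment = "Positive" | "Neutral" | "Negative";

interface Ticket {
  category: Category;
  sentiment: Sentiment;
}

function breakdown<T extends string>(values: T[]): string[] {
  const counts = new Map<T, number>();
  for (const v of values) counts.set(v, (counts.get(v) ?? 0) + 1);
  return [...counts.entries()].map(
    ([label, n]) =>
      `• ${label}: ${n} tickets (${Math.round((n / values.length) * 100)}%)`
  );
}

function summarize(tickets: Ticket[]): string {
  return [
    "📊 TICKET BREAKDOWN:",
    ...breakdown(tickets.map((t) => t.category)),
    "",
    "😊 SENTIMENT ANALYSIS:",
    ...breakdown(tickets.map((t) => t.sentiment)),
  ].join("\n");
}

// Usage: feed the rows read back from the Google Sheet for the reporting period.
console.log(summarize([{ category: "Billing", sentiment: "Neutral" }]));
```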
by Seven Liu
**Who's it for 👥**
This template is perfect for content creators, marketers, and researchers managing WeChat public account articles! 🚀 It's ideal for n8n newcomers or anyone wanting to save time on manual content analysis, especially if you use Google Sheets for tracking. 📊 Whether you're into AI, 欧阳良宜, or automation, this is for you! 😄

**How it works / What it does 🔧**
This workflow automates the retrieval, filtering, classification, and summarization of WeChat articles. 🌐 It reads RSS feed links from a Google Sheet, filters articles from the last 10 days ⏳ (see the date-filter sketch after this description), cleans HTML content 🧹, classifies them as relevant or not 🎯, generates insightful Chinese summaries with AI 🤖, and saves the results to Google Sheets and Notion. 📝 Outputs are Slack-formatted for team collaboration! 💬

**How to set up 🛠️**
1. Prepare Google Sheets: use your own documentId (replace the example) and set up the sheets "Save Initial Links" (gid=198451233) and "Save Processed Data" (gid=1936091950). 📋
2. Configure Credentials: add Google Sheets and OpenAI API credentials; avoid hardcoding keys! 🔐
3. Set RSS Feed: update the rss_feed_url in the "RSS Read" node with your WeChat RSS feed. 🌐
4. Customize AI: tweak the "Relevance Classification" and "Basic LLM Chain" prompts for your topics (e.g., 欧阳良宜, AI). 🎨
5. Notion (Optional): swap the databaseId (e.g., 22e79d55-2675-8055-a143-d55302c3c1b1) with your own. 📚
6. Run Workflow: trigger manually via the "When clicking 'Execute workflow'" node. 🚀

**Requirements ✅**
- n8n account with Google Sheets and OpenAI integrations.
- Access to a WeChat public account RSS feed.
- Basic JSON and node configuration knowledge.

**How to customize the workflow 🎛️**
- Topic Adjustment: update categories in "Relevance Classification" for new topics (e.g., "technology", "education"). 🌱
- Summary Length: modify the LLM prompt in "Basic LLM Chain" to adjust length or style. ✂️
- Output Destination: add Slack or Email nodes for more outputs. 📩
- Date Filter: change the "IF (Filter by Date)" condition (e.g., 7 days instead of 10). ⏰
- Scalability: use a "Schedule Trigger" node for automation. ⏳
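The 10-day date filter is just a comparison between each article's publication date and a cutoff. Below is a hedged TypeScript sketch of the "IF (Filter by Date)" logic; the pubDate field name is an assumption about the RSS Read node's output.

```typescript
// Sketch of the last-10-days filter applied to RSS items
// (pubDate is assumed to be the field provided by the RSS Read node).
interface RssItem {
  title: string;
  link: string;
  pubDate: string; // e.g. "Mon, 07 Jul 2025 08:00:00 GMT"
}

function withinLastDays(items: RssItem[], days = 10): RssItem[] {
  const cutoff = Date.now() - days * 24 * 60 * 60 * 1000;
  return items.filter((item) => new Date(item.pubDate).getTime() >= cutoff);
}

// Usage: change `days` to 7 to match the "7 days instead of 10" customization.
const recent = withinLastDays(
  [{ title: "Example article", link: "https://example.com", pubDate: new Date().toUTCString() }],
  10
);
console.log(recent.length);
```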
by Jah coozi
**AI Medical Symptom Checker & Health Assistant**

A responsible, privacy-focused health information assistant that provides general health guidance while maintaining strict safety protocols and medical disclaimers.

⚠️ **IMPORTANT DISCLAIMER**
This tool provides general health information only and is NOT a substitute for professional medical advice, diagnosis, or treatment. Always consult qualified healthcare providers for medical concerns.

🚀 **Key Features**

Safety First
- **Emergency Detection**: automatically identifies emergency situations
- **Immediate Escalation**: provides emergency numbers for critical cases
- **Clear Disclaimers**: every response includes medical disclaimers
- **No Diagnosis**: never attempts to diagnose conditions
- **Professional Referral**: always recommends consulting healthcare providers

Core Functionality
- **Symptom Information**: general information about common symptoms
- **Wellness Guidance**: health tips and preventive care
- **Medication Reminders**: general medication information
- **Multi-Language Support**: serve diverse communities
- **Privacy Protection**: no data storage, anonymous processing
- **Resource Links**: connects to trusted health resources

🎯 **Use Cases**
- General Health Information: learn about symptoms and conditions
- Pre-Appointment Preparation: organize questions for doctors
- Wellness Education: general health and prevention tips
- Emergency Detection: immediate guidance for critical situations
- Health Resource Navigation: find appropriate care providers

🛡️ **Safety Protocols**

Emergency Keywords Detection (a keyword-matching sketch follows this description)
- Chest pain, heart attack, stroke
- Breathing difficulties
- Severe bleeding, unconsciousness
- Allergic reactions, poisoning
- Mental health crises

Response Guidelines
- Never diagnoses conditions
- Never prescribes medications
- Always includes disclaimers
- Encourages professional consultation
- Provides emergency numbers when needed

🔧 **Setup Instructions**
1. Configure the OpenAI API: add your API key and set the temperature to 0.3 for consistency.
2. Review legal requirements: check local health information regulations, customize disclaimers as needed, and implement required data policies.
3. Emergency contacts: update emergency numbers for your region, add local health resources, and include mental health hotlines.
4. Test thoroughly: verify emergency detection, check disclaimer display, and test various symptom queries.

💡 **Example Interactions**

General Symptom Query:
User: "I have a headache for 3 days"
Bot: Provides general headache information, self-care tips, and when to see a doctor

Emergency Detection:
User: "Chest pain, can't breathe"
Bot: EMERGENCY response with immediate action steps and emergency numbers

Wellness Query:
User: "How can I improve my sleep?"
Bot: General sleep hygiene tips and healthy habits information

🏥 **Integration Options**
- **Healthcare Websites**: embed as a support widget
- **Telemedicine Platforms**: pre-consultation tool
- **Health Apps**: general information module
- **Insurance Portals**: member resource
- **Pharmacy Systems**: general drug information

📊 **Compliance & Privacy**
- **HIPAA Considerations**: no PHI storage
- **GDPR Compliant**: no personal data retention
- **Anonymous Processing**: session-based only
- **Audit Trails**: optional logging for compliance
- **Data Encryption**: secure transmission

🚨 **Limitations**
- Cannot diagnose medical conditions
- Cannot prescribe treatments
- Cannot replace emergency services
- Cannot provide specific medical advice
- Should not delay seeking medical care

🔒 **Best Practices**
- Always maintain clear disclaimers
- Never minimize serious symptoms
- Encourage professional consultation
- Keep information general and educational
- Update emergency contacts regularly
- Review and update health information
- Monitor for misuse
- Maintain audit trails where required

🌍 **Customization Options**
- Add local emergency numbers
- Include regional health resources
- Translate to local languages
- Integrate with local health systems
- Add specific disclaimers
- Customize for specific populations

Start providing responsible health information today!
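As a rough picture of the emergency-detection step, the sketch below scans an incoming message for a configurable keyword list and short-circuits to an emergency response before any LLM call. The keyword list and emergency numbers are assumptions to adapt per region; the workflow's real detection logic may differ.

```typescript
// Sketch of an emergency keyword check run before the AI response
// (keywords and emergency numbers are illustrative; localize them).
const EMERGENCY_KEYWORDS = [
  "chest pain", "heart attack", "stroke",
  "can't breathe", "difficulty breathing",
  "severe bleeding", "unconscious",
  "allergic reaction", "poisoning", "suicidal",
];

function isEmergency(message: string): boolean {
  const text = message.toLowerCase();
  return EMERGENCY_KEYWORDS.some((kw) => text.includes(kw));
}

function respond(message: string): string {
  if (isEmergency(message)) {
    return (
      "⚠️ This may be a medical emergency. Call your local emergency number " +
      "(e.g. 112 or 911) immediately. Do not wait for an online response."
    );
  }
  // Otherwise, hand the message to the LLM with the standard disclaimer appended.
  return "General information only - not medical advice. Please consult a healthcare provider.";
}

console.log(respond("Chest pain, can't breathe"));
```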
by Guillaume Duvernay
This n8n template provides a powerful AI-powered chatbot that acts as your personal Spotify DJ. Simply tell the chatbot what kind of music you're in the mood for, and it will intelligently create a custom playlist, give it a fitting name, and populate it with relevant tracks directly in your Spotify account. The workflow is built to be flexible, allowing you to easily change the underlying AI model to your preferred provider, making it a versatile starting point for any AI-driven project.

**Who is this for?**
- **Music lovers:** Instantly create playlists for any activity, mood, or genre without interrupting your flow.
- **Developers & AI enthusiasts:** A perfect starting point to understand how to build a functional AI Agent that uses tools to interact with external services.
- **Automation experts:** See a practical example of how to chain AI actions and sub-workflows for more complex, stateful automations.

**What problem does this solve?**
Manually creating a good playlist is time-consuming. You have to think of a name, search for individual songs, and add them one by one. This workflow solves that by:
- **Automating playlist creation:** Turns a simple natural language request (e.g., "I need a playlist for my morning run") into a fully-formed Spotify playlist.
- **Reducing manual effort:** Eliminates the tedious task of searching for and adding multiple tracks.
- **Providing player control:** Allows you to manage your Spotify player (play, pause, next) directly from the chat interface.
- **Centralizing music management:** Acts as a single point of control for both creating playlists and managing playback.

**How it works**
1. Trigger & input: The workflow starts when you send a message in the Chat Trigger interface.
2. AI agent & tool-use: An AI Agent, powered by a Large Language Model (LLM), interprets your message. It has access to a set of "tools" that allow it to interact with Spotify.
3. Playlist creation sub-workflow: If you ask for a new playlist, the Agent calls a sub-workflow using the Create new playlist tool. This sub-workflow uses another AI call to brainstorm a creative playlist name and a list of suitable songs based on your request.
4. Spotify actions: The sub-workflow then connects to Spotify to create a new, empty playlist with the generated name, search for each song from the AI's list to get its official Spotify Track ID, and add each track to the new playlist (see the sketch after this description).
5. Player control: If your request is to control the music (e.g., "pause the music"), the Agent uses the appropriate tool (Pause player, Resume player, etc.) to directly control your active Spotify player.

**Setup**
1. Accounts & API keys: You will need active accounts and credentials for:
   - Your AI provider (e.g., OpenAI, Groq, local LLMs via Ollama): to power the AI Agent and the playlist generation.
   - Spotify: to create playlists and control the player. You'll need to register an application in the Spotify Developer Dashboard to get your credentials.
2. Configure credentials:
   - Add your AI provider's API key to the Chat Model nodes. The template uses OpenAI by default, but you can easily swap this out for any compatible Langchain model node.
   - Add your Spotify OAuth2 credentials to all Spotify and Spotify Tool nodes.
3. Activate workflow: Once all credentials are set and the workflow is saved, click the "Active" toggle. You can now start interacting with your Spotify AI Agent via the chat panel!

**Taking it further**
This template is a great foundation. Here are a few ideas to expand its capabilities:
- **Become the party DJ:** Make the Chat Trigger's webhook public. You can then generate a QR code that links to the chat URL. Party guests can scan the code and request songs directly from their phones, which the agent can add to a collaborative playlist or the queue.
- **Expand the agent's skills:** The Spotify Tool node has more actions available. Add a new tool for Add to Queue so you can ask the agent to queue up a specific song without creating a whole new playlist.
- **Integrate with other platforms:** Swap the Chat Trigger for a Telegram or Discord trigger to build a Spotify bot for your community. You could also connect it to a Webhook to take requests from a custom web form.
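Under the hood, the "search for each song and add it" step maps to two Spotify Web API calls: a track search and a playlist-tracks addition. The sketch below shows that pairing in TypeScript with a user OAuth access token; it illustrates the API usage rather than the exact n8n Spotify node configuration.

```typescript
// Sketch: resolve a song title to a Spotify track URI, then add it to a playlist.
// Uses the public Spotify Web API with a user OAuth access token.
async function findTrackUri(query: string, token: string): Promise<string | null> {
  const params = new URLSearchParams({ q: query, type: "track", limit: "1" });
  const res = await fetch(`https://api.spotify.com/v1/search?${params}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  const data = await res.json();
  return data.tracks?.items?.[0]?.uri ?? null; // e.g. "spotify:track:..."
}

async function addTracksToPlaylist(playlistId: string, uris: string[], token: string) {
  await fetch(`https://api.spotify.com/v1/playlists/${playlistId}/tracks`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ uris }),
  });
}

// Usage: the AI suggests song titles, each is resolved to a URI, then all are added.
async function populate(playlistId: string, songs: string[], token: string) {
  const uris = (await Promise.all(songs.map((s) => findTrackUri(s, token))))
    .filter((u): u is string => u !== null);
  await addTracksToPlaylist(playlistId, uris, token);
}
```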
by ARRE
**Good to know:**
This workflow automatically transcribes your favorite podcasts or videos saved in a YouTube playlist and generates a comprehensive, AI-powered summary, so you can quickly understand the main topics and insights without having to watch or listen to the entire episode.

👤 **Who is this for?**
- Podcast fans who want to save time and get the key points from episodes
- Busy professionals who follow educational or industry videos and need quick takeaways
- Content creators or researchers who organize and review large amounts of video/audio material
- Anyone who wants to efficiently capture and summarize information from YouTube playlists

❓ **What problem is this workflow solving?**
This workflow solves the challenge of information overload from long-form podcasts and videos. It:
- Automatically transcribes each video or podcast episode in your chosen YouTube playlist
- Uses AI to create a clear, well-structured summary of the content
- Lets you learn and extract valuable information without watching or listening to the entire recording
- Organizes everything in a Google Sheets document for easy tracking and future reference

✅ **What this workflow does:**
- 📺 Fetches all videos from a specified YouTube playlist
- 🔗 Extracts video titles, URLs, and IDs
- 📝 Retrieves and combines transcripts for each video or podcast episode
- 📜 Processes transcript data for clarity
- 🤖 Uses AI to generate a detailed, sectioned summary that covers all main topics and insights
- 📊 Automatically logs video titles, transcripts, summaries, and row numbers to a Google Sheets spreadsheet

⚙️ **How it works:**
1. 🟢 Trigger: start the workflow manually or on a schedule
2. 📺 Fetch videos from your chosen YouTube playlist
3. 🔗 Extract and organize video details (title, URL, ID)
4. 📝 Retrieve the transcript for each video or podcast episode
5. 📜 Combine transcript segments into a single script (see the sketch after this description)
6. ✂️ Extract the first sentences for focused summarization
7. 🤖 The AI agent creates a comprehensive summary of the episode or video
8. 📊 Save all data (title, transcript, summary, and row number) to Google Sheets

🛠️ **How to use:**
1. Set up YouTube OAuth2 credentials in n8n
2. Configure Google Sheets OAuth2 credentials
3. Set up API credentials for transcript and AI processing
4. Create and link your Google Sheets document
5. Input your playlist ID and adjust any filters as needed
6. Activate the workflow

📝 **Requirements:**
- n8n instance (cloud or self-hosted)
- YouTube account with OAuth2 access
- Google Sheets account
- Access to transcript and AI APIs
- Basic n8n workflow knowledge

🟢 **Customizing this workflow:**
- Change the YouTube playlist ID to target your preferred podcasts or video series
- Adjust the transcript retrieval process for other APIs or formats
- Customize the AI prompt for different summary styles or focus areas
- Add or remove fields in the Google Sheets output
- Change the workflow trigger or polling frequency
- Switch to a different AI model if desired

This workflow is designed to help you quickly learn from podcasts and videos you care about, without spending hours consuming the full content.
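The transcript-processing steps (combining segments and keeping the opening sentences) can be sketched as two small functions. The segment shape ({ text }) and the sentence-count cutoff are assumptions for illustration; adapt them to whatever your transcript API returns.

```typescript
// Sketch: merge transcript segments into one script and keep the first sentences
// for focused summarization (segment shape and cutoff are illustrative).
interface TranscriptSegment {
  text: string;
  offsetSeconds?: number;
}

function combineSegments(segments: TranscriptSegment[]): string {
  return segments
    .map((s) => s.text.trim())
    .filter((t) => t.length > 0)
    .join(" ");
}

function firstSentences(script: string, count = 30): string {
  // Naive sentence split on ., ! and ? followed by whitespace.
  const sentences = script.split(/(?<=[.!?])\s+/);
  return sentences.slice(0, count).join(" ");
}

// Usage: the combined script is logged to Google Sheets, and the excerpt
// is passed to the AI agent as the summarization input.
const script = combineSegments([{ text: "Welcome to the show." }, { text: "Today we cover n8n." }]);
console.log(firstSentences(script, 2));
```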