by Dr. Firas
# WhatsApp AI Agent: Auto-Train Product Data & Handle Customer Support

## Who Is This For?
This workflow is ideal for eCommerce founders, product managers, customer support teams, and automation builders who rely on WhatsApp to manage product information and interact with clients. It's perfect for businesses that want to automate product data entry and support responses directly from WhatsApp messages using GPT-4 and Google Sheets.

## What Problem Does This Workflow Solve?
- **Manual Product Data Entry**: Collecting and organizing product data from links is tedious and error-prone.
- **Slow Customer Response Times**: Responding to client questions manually leads to delays and inconsistent support.
- **No Logging System for Issues**: Without automation, support issues often go undocumented, making it harder to learn and improve.

## What This Workflow Does

### Step 1 – Incoming Message Detection
- Listens for incoming messages via WhatsApp.
- If the message starts with `train:`, it routes to the product training process. Otherwise, it routes to the customer support assistant.

### Step 2 – Product Data Training
- **Extracts the URL** from the message using a regex script.
- **Fetches HTML content** from the URL.
- **Cleans the HTML** to extract a readable product description.
- **Saves raw data** (URL + description) into Google Sheets.
- **Uses GPT-4** to enhance the product data: name, price (one-time or subscription), topic, and FAQs.
- **Updates the product row** in Google Sheets with the structured information.

### Step 3 – Customer Support Flow
- Analyzes user messages with GPT-4 to understand the request or issue.
- Looks up relevant product info in Google Sheets.
- Detects potential problems (e.g., payment, login, delivery).
- Suggests an appropriate solution.
- Logs the problem, solution, and category to the Customer Issues sheet.
- Sends a response back to the client via WhatsApp.

### Step 4 – Client Response
- Sends the AI-generated response to the client via WhatsApp.
- Keeps the communication fast, clear, and professional.
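The keyword routing and URL extraction from Steps 1 and 2 can be sketched in an n8n Code node. This is a minimal illustration, not the template's actual script; the function name is invented:

```javascript
// Hypothetical sketch of the "train:" routing + URL-extraction step.
// Returns the first http(s) link from a training message, or null when the
// message should be routed to the customer support flow instead.
function extractTrainingUrl(message) {
  if (!message.toLowerCase().startsWith("train:")) return null; // support flow
  const match = message.match(/https?:\/\/[^\s]+/);
  return match ? match[0] : null;
}

// Example: a training message with a product link
const url = extractTrainingUrl("train: https://shop.example/p/42 please index");
```

A plain customer question ("where is my order?") yields `null`, which the workflow's switch node would route to the support branch.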
## Setup Guide

### Prerequisites
- **WhatsApp Business API access**
- **OpenAI API key**
- **Google account** with Google Sheets access
- A hosted n8n instance (Cloud or self-hosted)

### Setup Steps
1. Import the workflow into your n8n instance.
2. Connect your credentials for WhatsApp, OpenAI, and Google Sheets.
3. Customize the Google Sheet IDs and names as needed.
4. Test by sending a `train:` message or a regular customer message to WhatsApp.
5. Activate the workflow to make it live.

## How to Customize This Workflow
- **Edit AI prompts** to reflect your product type, language style, or tone.
- **Change the trigger keyword** (e.g., from `train:` to `add:` or anything else).
- **Add integrations** like Notion, Airtable, or CRM tools.
- **Expand the Sheets structure** with more product fields (e.g., stock status, image link).
- **Add notifications** to Slack or email after product updates or issue logging.

📄 Documentation: Notion Guide

Need help customizing? Contact me for consulting and support: LinkedIn / YouTube
by Miha
# Combine Tech News in a Personalized Weekly Newsletter

This n8n template automates the collection, storage, and summarization of technology news from top sites, turning it into a concise, personalized weekly newsletter. If you like staying informed but want to reduce daily distractions, this workflow is perfect for you. It leverages RSS feeds, vector databases, and LLMs to read and curate tech content on your behalf—so you only receive what truly matters.

## How it works
- A daily scheduled trigger fetches articles from multiple popular tech RSS feeds like Wired, TechCrunch, and The Verge.
- Fetched articles are:
  - Normalized to extract titles, summaries, and publish dates.
  - Converted to vector embeddings via OpenAI and stored in memory for fast semantic querying.
- A weekly scheduled trigger activates the AI summarization flow:
  - The AI is provided with your interests (e.g., AI, games, gadgets) and the desired number of items (e.g., 15).
  - It queries the vector store to retrieve relevant articles and summarizes the most newsworthy stories.
  - The summary is converted into a clean, email-friendly format and sent to your inbox.

## How to use
1. Connect your OpenAI and Gmail accounts to n8n.
2. Customize the list of RSS feeds in the "Set Tech News RSS Feeds" node.
3. Update your interests and the number of desired news items in the "Your Topics of Interest" node.
4. Activate the workflow and let the automation run on schedule.

## Requirements
- **OpenAI** credentials for embeddings and summarization
- **Gmail** (or another email service) for sending the newsletter

## Customizing this workflow
- Want to use different sources? Swap in your own RSS feeds, or use an API-based news aggregator.
- Replace the in-memory vector store with Pinecone, Weaviate, or another persistent vector DB for longer-term storage.
- Adjust the agent's summarization style to suit internal updates, industry-specific briefings, or even entertainment recaps.
- Prefer chat over email? Replace the email node with a Telegram bot to receive your personalized tech newsletter directly in a Telegram chat.
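The weekly retrieval step ranks stored article embeddings against your interest query by vector similarity. In the template this is handled by the in-memory vector store node, but the underlying math is just cosine similarity, sketched here with invented field names:

```javascript
// Cosine similarity between two embedding vectors of equal length.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] ** 2;
    nb += b[i] ** 2;
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k articles most similar to the interest-query embedding.
function topK(queryVec, articles, k) {
  return articles
    .map(a => ({ ...a, score: cosine(queryVec, a.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

With real OpenAI embeddings the vectors have hundreds of dimensions, but the ranking logic is identical.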
by Amjid Ali
# Proxmox AI Agent with n8n and Generative AI Integration

This template automates IT operations on a Proxmox Virtual Environment (VE) using an AI-powered conversational agent built with n8n. By integrating Proxmox APIs and generative AI models (e.g., Google Gemini), the workflow converts natural language commands into API calls, enabling seamless management of your Proxmox nodes, VMs, and clusters.

Buy My Book: Mastering n8n on Amazon | Full Courses & Tutorials: http://lms.syncbricks.com | Watch Video on YouTube

## How It Works
1. **Trigger Mechanism**: The workflow can be triggered through multiple channels, such as chat (Telegram, email, or n8n's built-in chat). Interact with the AI agent conversationally.
2. **AI-Powered Parsing**: A connected AI model (Google Gemini or other compatible models like OpenAI or Claude) processes your natural language input to determine the required Proxmox API operation.
3. **API Call Generation**: The AI parses the input and generates structured JSON output, which includes:
   - `response_type`: The HTTP method (GET, POST, PUT, DELETE).
   - `url`: The Proxmox API endpoint to execute.
   - `details`: Any required payload parameters for the API call.
4. **Proxmox API Execution**: The structured output is used to make HTTP requests to the Proxmox VE API. The workflow supports various operations, such as:
   - Retrieving cluster or node information.
   - Creating, deleting, starting, or stopping VMs.
   - Migrating VMs between nodes.
   - Updating or resizing VM configurations.
5. **Response Formatting**: The workflow formats API responses into a user-friendly summary. For example:
   - Success messages for operations (e.g., "VM started successfully").
   - Error messages with missing parameter details.
6. **Extensibility**: You can enhance the workflow by connecting additional triggers, external services, or AI models. It supports:
   - Telegram/Slack integration for real-time notifications.
   - Backup and restore workflows.
   - Cloud monitoring extensions.
## Key Features
- **Multi-Channel Input**: Use chat, email, or custom triggers to communicate with the AI agent.
- **Low-Code Automation**: Easily customize the workflow to suit your Proxmox environment.
- **Generative AI Integration**: Supports advanced AI models for precise command interpretation.
- **Proxmox API Compatibility**: Fully adheres to Proxmox API specifications for secure and reliable operations.
- **Error Handling**: Detects and informs you of missing or invalid parameters in your requests.

## Example Use Cases
- **Create a Virtual Machine**. Input: "Create a VM with 4 cores, 8GB RAM, and 50GB disk on psb1." Action: sends a POST request to Proxmox to create the VM with the specified configuration.
- **Start a VM**. Input: "Start VM 105 on node psb2." Action: executes a POST request to start the specified VM.
- **Retrieve Node Details**. Input: "Show the memory usage of psb3." Action: sends a GET request and returns the node's resource utilization.
- **Migrate a VM**. Input: "Migrate VM 202 from psb1 to psb3." Action: executes a POST request to move the VM, with optional online migration.

## Prerequisites
1. **Proxmox API Configuration**
   - Enable the Proxmox API and generate API keys in the Proxmox Data Center.
   - Use the `Authorization` header with the format: `PVEAPIToken=<user>@<realm>!<token-id>=<token-value>`
2. **n8n Setup**
   - Add Proxmox API credentials in n8n using Header Auth.
   - Connect a generative AI model (e.g., Google Gemini) via the relevant credential type.
3. **Access the Workflow**
   - Import this template into your n8n instance.
   - Replace the placeholder credentials with your Proxmox and AI service details.

## Additional Notes
- This template is designed for Proxmox 7.x and above.
- For advanced features like backup, VM snapshots, and detailed node monitoring, you can extend this workflow.
- Always test with a non-production Proxmox environment before deploying in live systems.

Start with n8n | Learn n8n with Amjid | Get n8n Book | What is Proxmox
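The step that turns the model's structured JSON into an HTTP call can be sketched as follows. This is a hedged illustration, not the template's actual node code: the field names (`response_type`, `url`, `details`) come from the description above, while the host, the default port 8006, and the token value are placeholder assumptions.

```javascript
// Validate the AI's structured output and build a Proxmox VE API request.
function buildProxmoxRequest(aiOutput, host, token) {
  const { response_type, url, details } = JSON.parse(aiOutput);
  const allowed = ["GET", "POST", "PUT", "DELETE"];
  if (!allowed.includes(response_type)) {
    throw new Error(`Unsupported method: ${response_type}`);
  }
  return {
    method: response_type,
    url: `https://${host}:8006${url}`, // 8006 is Proxmox VE's default API port
    headers: { Authorization: `PVEAPIToken=${token}` },
    body: response_type === "GET" ? undefined : details,
  };
}

// Example: the "Start VM 105 on node psb2" use case from above
const req = buildProxmoxRequest(
  '{"response_type":"POST","url":"/api2/json/nodes/psb2/qemu/105/status/start","details":{}}',
  "proxmox.local",
  "root@pam!n8n=xxxx"
);
```

Validating the method before executing the call is what lets the workflow report "missing or invalid parameter" errors instead of sending malformed requests.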
by Hybroht
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

# AI Arena - Debate of AI Agents to Optimize Answers and Simulate Diverse Scenarios

## Overview
Version: 1.0

The AI Arena workflow refines answer generation through a structured debate among multiple AI agents. It allows diverse perspectives to be considered before arriving at a final output, enhancing the quality and depth of the generated responses.

## ✨ Features
- **Multi-Agent Debate Simulation**: Engage multiple AI agents in a debate to generate nuanced responses.
- **Configurable Rounds and Agents**: Easily adjust the number of debate rounds and participating agents to fit your needs.
- **Contextualized AI Responses**: Each agent operates based on predefined roles and characteristics, ensuring relevant and focused discussions.
- **JSON Output**: The final output is structured in JSON format, making it easy to integrate with other systems or workflows.

## 👤 Who is this for?
This workflow is ideal for developers, data scientists, content creators, and businesses looking to leverage AI for decision-making, content generation, or any scenario requiring diverse viewpoints. It is particularly useful for those who need to synthesize information from multiple personalities or perspectives.

## 💡 What problem does this solve?
The workflow addresses the challenge of generating nuanced responses by simulating a debate among AI agents. This approach ensures that multiple perspectives are considered, reducing bias and enhancing the overall quality of the output.

Use-case examples:
- 🗓️ Meeting/Interview Simulation
- ✔️ Quality Assurance
- 📖 Storywriter Test Environment
- 🏛️ Forum/Conference/Symposium Simulation

## 🔍 What this workflow does
The workflow orchestrates a debate among AI agents, allowing them to discuss, critique, and suggest rewrites for a given input based on their roles and predefined characteristics.
This collaborative process leads to a more refined and comprehensive final output.

## 🔄 Workflow Steps
1. **Input & Setup**: The initial input is provided, and the AI environment is configured with the necessary parameters.
2. **Round Execution**: AI agents execute their roles, providing replies and actions based on the input and their individual characteristics.
3. **Round Results**: The results of each round are aggregated, and a summary is created to capture the key points discussed by the agents.
4. **Continue to Next Round**: If more rounds are defined, the process repeats until the specified number of rounds is completed.
5. **Final Output**: The final output is generated based on the agents' discussions and suggestions, providing a cohesive response.

## ⚡ How to Use/Setup

### 🔐 Credentials
Obtain an API key for the Mistral API or another LLM API. This key is necessary for the AI agents to function properly.

### 🔧 Configuration
Set up the workflow in n8n, ensuring all nodes are correctly configured according to the workflow requirements. This includes setting the appropriate input parameters and defining the roles of each AI agent. This workflow uses a custom node for global variables called `n8n-nodes-globals`. Alternatively, you can use the 'Edit Field (Set)' node to achieve the same functionality.

### ✏️ Customizing this workflow
To customize the workflow, adjust the AI agent parameters in the JSON configuration. This includes defining their roles, personalities, and preferences, which will influence how they interact during the debate. One of the notes includes a ready-to-use example of how to customize the agents and the environment. You can simply edit it and insert it as your credential in the Global Variables node.

## 📌 Example
An example with both input and final output is provided in a note within the workflow.

## 🛠️ Tools Used
- **n8n**: A workflow automation tool that allows users to connect various applications and services.
- **Mistral API**: A powerful language model API used for generating AI responses.
  (You can replace it with any LLM API of your choice.)
- **Podman**: A daemonless container management tool for creating, managing, and running containers. (It serves as an alternative to Docker for container orchestration.)

## ⚙️ n8n Setup Used
- **n8n version**: 1.100.1
- **n8n-nodes-globals**: 1.1.0
- **Running n8n via**: Podman 4.3.1
- **Operating system**: Linux

## ⚠️ Notes, Assumptions & Warnings
- Ensure that the AI agents are configured with clear roles to maximize the effectiveness of the debate. Each agent's characteristics should align with the overall goals of the workflow.
- The workflow can be adapted for various use cases, including meeting simulations, content generation, and brainstorming sessions.
- This workflow assumes that users have a basic understanding of n8n and JSON configuration.
- This workflow assumes that users have access to the necessary API keys and permissions to use the Mistral API or other LLM APIs.
- Ensure that the input provided to the AI agents is clear and concise; ambiguous inputs may lead to unclear or irrelevant outputs.
- Monitor the output for relevance and accuracy, as AI-generated content may require human oversight to ensure it meets standards and expectations before being used in production.

## ℹ️ About Us
This workflow was developed by the Hybroht team of AI enthusiasts and developers dedicated to enhancing the capabilities of AI through collaborative processes. Our goal is to create tools that harness the possibilities of AI technology and more.
by Keith Rumjahn
## Case Study
I'm too lazy to record every transaction for my expense tracking. Since all my expenses are digital, I just extract the transactions from bank PDF statements and screenshots into CSV to import into my budgeting software.

Read more -> How I used A.I. to track all my expenses

## What this workflow does
1. Upload your PDF or screenshots into Google Drive.
2. The workflow passes the PDF/image to Vertex Gemini for AI image recognition.
3. It then outputs the transactions as CSV and stores the file in another Google Drive folder.

## Setup
1. Set up two Google Drive folders: one for uploading and one for the output.
2. Input your Google Drive credentials.
3. Input your Vertex Gemini credentials.

## How to adjust it to your needs
- You can upload other types of documents for information extraction.
- You can extract any text data from any image or PDF.
- You can adjust the AI prompt to do different things.
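The last step, turning the model's extracted transactions into a CSV file, can be sketched like this. The field names (`date`, `description`, `amount`) are assumptions for illustration; adapt them to whatever your budgeting software expects:

```javascript
// Convert extracted transactions (as structured JSON) into CSV rows.
// Descriptions are quoted and embedded quotes doubled per CSV convention.
function toCsv(transactions) {
  const header = "date,description,amount";
  const rows = transactions.map(t =>
    [t.date, `"${t.description.replace(/"/g, '""')}"`, t.amount.toFixed(2)].join(",")
  );
  return [header, ...rows].join("\n");
}

const csv = toCsv([
  { date: "2024-05-01", description: "Coffee", amount: -4.5 },
  { date: "2024-05-02", description: "Salary", amount: 3000 },
]);
```

Prompting the model to return JSON first and converting to CSV in code keeps the quoting and number formatting deterministic.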
by InfraNodus
# Automated Gmail Labeling and Brainstorming

This template automatically labels your incoming Gmail messages with AI, and builds a knowledge graph from the emails tagged with a specific label so you can brainstorm new ideas based on them. You can also get notified about the emails with the most important labels via Telegram, and receive new ideas as the knowledge graph of incoming messages grows.

The idea generation is based on the InfraNodus knowledge graph content gap detection algorithm, which builds a network from your content, finds a blind spot, and uses AI to generate an interesting research question or idea that can bridge this gap.

## Why does it work so well?
Think of all the business emails you receive that bypass the spam filters. They are probably already personalized to you. Now imagine building a knowledge graph from them over a month. You will then have an ideation device based on your interests and marketing profile. If you identify the gaps inside it and generate interesting research questions based on them, you will come up with ideas that are relevant (because they touch on the topics that matter to you) but novel (because they bridge those topics in new ways).

## What is it useful for?
- **Automate Gmail incoming message labeling** with the new Classifier n8n node — much more advanced than the default Gmail labeling rules.
- Get notified via Telegram (or a messenger of your choice) about the most important messages and be sure not to miss anything important.
- Keep the messages with a certain label saved into a knowledge graph for brainstorming and ideation. Every time a new message of this category comes in, it's added to the graph, changing its structure, and a new idea is generated. So instead of looking at each specific offer, you use them to generate insights.

## How it works
**Step 1**: The template is triggered automatically when a new Gmail message arrives.
*Note: you need to connect your Gmail account in this node.*

**Step 2**: We use the new n8n AI Classifier node to classify your email based on its content. You might need to update to n8n version 1.94 to make it work. *Note: we like to use Gemini AI for this classifier, as it's from the same company as Gmail, so it should be safe with your data.*

**Step 3**: After classifying the message, we label it with the appropriate label. *Note: you need to create the labels in your Gmail account beforehand.*

**Step 4**: For a certain category (e.g., "Business"), we format the message and save it into your InfraNodus graph. *Note: specify your InfraNodus API key here and choose the name of the graph. The node uses the InfraNodus HTTP `graphAndEntries` endpoint to save your data to an InfraNodus graph. By default, we save a text knowledge graph using the `contextSettings` parameters (it will only build a text graph of the content), but you can take an alternative setting from this InfraNodus HTTP node's settings and create a social knowledge graph, which will also show email senders in the graph itself.*

**Step 5 (optional)**: Generate an interesting insight question with the `graphAndAdvice` endpoint of InfraNodus.

**Step 6 (optional)**: Send this insight via Telegram to a chat.

**Step 7 (optional)**: Link some important labels to the second Telegram notification node, so you receive important messages for the specified labels.

**Step 8 (optional)**: Send a Telegram notification. We use Telegram because it takes only 30 seconds to set up a bot with an API (send /newbot to @botfather), unlike Discord or Slack, which are long and cumbersome to set up. You can also attach a Gmail send node and generate an email instead.

## How to use
You need an InfraNodus GraphRAG API account and key to use this workflow.

1. Create an InfraNodus account or log in. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes. Add this authorization code in Steps 4 and 5 of the workflow.
2. Come up with the name of the graph and change it in the HTTP InfraNodus nodes in Steps 4 and 5, and also in the Telegram node in Step 6 that sends a link to the graph. For additional text processing / idea generation settings you can use in the HTTP InfraNodus nodes, see the InfraNodus access points page. For example, in Step 4 you can change the text processing settings to build a social knowledge graph (settings are available in the node's Notes section), and in Step 5 you can change the `requestMode` from `question` to `idea` to receive business ideas instead.
3. Authorize your Gmail account for the Step 2, 3, 7, and 8 Gmail nodes. The easiest way to set this up is to open a free Google Console API account and create an OAuth access point for n8n. You can then reuse it with other Google services like Google Sheets, Drive, etc., so it's a useful thing to have in general.
4. Set up the Gemini AI API key using the instructions in the Step 2 Gemini AI classification node.
5. Set up the Telegram bot for Step 8. It takes only 30 seconds: just go to @botfather and type /newbot, and you'll have an API key ready. To get the conversation ID, follow the n8n / Telegram instructions in the node itself.
6. Once everything is ready, run the default automated workflow to test that everything works.

## Requirements
- An InfraNodus account and API key
- A Google Cloud API OAuth client and key for Gmail access
- A Gemini AI API key
- A Telegram bot API key
- n8n version 1.94 or higher (for the Text Classification AI node to work)

## Customizing this workflow
Check our other n8n workflows at https://n8n.io/creators/infranodus/ for useful content gap analysis, expert panel, marketing, and research workflows that utilize GraphRAG for better AI generation.

Finally, check out https://infranodus.com to learn more about our network analysis technology used to build knowledge graphs from text.

For support, please contact https://support.noduslabs.com
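The authenticated calls the HTTP nodes make in Steps 4 and 5 have roughly this shape. The endpoint names (`graphAndEntries`, `graphAndAdvice`) and the Bearer authorization come from the description above; the base path and body fields are illustrative assumptions, so check the InfraNodus access points page for the real schema:

```javascript
// Hedged sketch of an InfraNodus HTTP-node request (not the node's actual
// configuration). The base URL path is an assumption.
function infranodusRequest(endpoint, apiKey, body) {
  return {
    method: "POST",
    url: `https://infranodus.com/api/v1/${endpoint}`,
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  };
}

// Example: the Step 5 insight call, switching requestMode as described above
const adviceReq = infranodusRequest("graphAndAdvice", "YOUR_API_KEY", {
  requestMode: "idea", // "question" for research questions, "idea" for business ideas
});
```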
by RedOne
# 🎙️ AI Audio Assistant with Voice-to-Voice Response

## Who is this for?
Businesses, customer service teams, content creators, and organizations that want to provide intelligent voice-based interactions through Telegram. Perfect for accessibility-focused services, multilingual support, or hands-free customer assistance.

## What problem does this solve?
- Enables natural voice conversations with AI
- Breaks down language and accessibility barriers
- Provides instant voice responses to customer queries
- Reduces typing requirements for users
- Offers 24/7 voice-based customer support
- Maintains conversation context across voice interactions

## What this workflow does
1. Receives voice messages via a Telegram bot
2. Transcribes audio using Deepgram's advanced speech-to-text
3. Processes the transcribed text through an AI agent with knowledge base access
4. Generates intelligent responses based on conversation context
5. Converts the AI response to natural-sounding speech using Deepgram TTS
6. Sends the audio response back to the user via Telegram
7. Maintains conversation memory for contextual interactions

## 🔧 Technical Architecture

### Core Components
- **Telegram Bot**: Receives and sends voice messages
- **Deepgram STT**: Transcribes voice to text with high accuracy
- **OpenAI GPT**: Processes queries and generates responses
- **Supabase Knowledge Base**: Stores and retrieves business information
- **Memory Management**: Maintains conversation context
- **Deepgram TTS**: Converts text responses to natural speech

### Data Flow
1. Voice Message → Telegram API → File Download
2. Audio File → Deepgram STT → Transcript
3. Transcript → AI Agent → Response Generation
4. Response → Deepgram TTS → Audio File
5. Audio Response → Telegram → User

## 🛠️ Setup Instructions

### Prerequisites
1. **Telegram Bot Token**
   - Create a bot via @BotFather
   - Get the bot token and configure the webhook
2. **Deepgram API Key**
   - Sign up at deepgram.com
   - Get an API key for STT and TTS services
   - Note: currently hardcoded in the workflow
3. **OpenAI API Key**
   - OpenAI account with API access
   - Configure in the OpenAI Chat Model node
4. **Supabase Database**: Create
a Supabase project, set up the `knowledge_base` table, and configure API credentials.

### Step-by-Step Setup
1. **Configure Telegram Bot**
   - Update `telegramToken` in the "Prepare Voice Message Data" node
   - Set the correct bot token in the Telegram nodes
   - Test bot connectivity
2. **Set Up Deepgram Integration**
   - Replace the API key in the "Transcribe with Deepgram" node
   - Update the TTS endpoint in the "HTTP Request" node
   - Test voice transcription accuracy
3. **Configure Knowledge Base**

```sql
-- Create knowledge_base table in Supabase
CREATE TABLE knowledge_base (
  id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
  question TEXT NOT NULL,
  answer TEXT NOT NULL,
  category VARCHAR(100),
  keywords TEXT[],
  created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);
```

4. **Customize AI Prompts**
   - Update the system message in the "Telegram AI Agent" node
   - Adjust temperature and max tokens in the OpenAI model
   - Configure memory session keys
5. **Test End-to-End Flow**
   - Send a test voice message to the bot
   - Verify transcription accuracy
   - Check AI response quality
   - Validate audio output clarity

## 🎛️ Configuration Options

### Voice Recognition Settings
- **Model**: nova-2 (Deepgram's latest model)
- **Language**: English (en), can be changed
- **Smart Format**: Enabled for better punctuation

### AI Response Settings
- **Temperature**: 0.3 (conservative responses)
- **Max Tokens**: 100 (adjust based on needs)
- **Memory**: Session-based conversation context

### Text-to-Speech Settings
- **Model**: aura-2-thalia-en (natural female voice)
- **Alternative voices**: Available in the Deepgram TTS API
- **Audio Format**: Optimized for Telegram

## 🔒 Security Considerations

### API Key Management

```javascript
// Current implementation has hardcoded tokens.
// Recommended: use environment variables instead.
const telegramToken = process.env.TELEGRAM_BOT_TOKEN;
const deepgramKey = process.env.DEEPGRAM_API_KEY;
```

### Data Privacy
- Voice messages are processed by external APIs
- Consider data retention policies
- Implement user consent mechanisms
- Ensure GDPR compliance if applicable

## 📊 Monitoring & Analytics

### Key Metrics to Track
- Voice message processing time
- Transcription accuracy rates
- AI response quality scores
- User
engagement metrics
- Error rates and failure points

### Recommended Logging

```javascript
// Add to workflow for monitoring
console.log({
  timestamp: new Date().toISOString(),
  user_id: userData.user_id,
  transcript_confidence: transcriptData.confidence,
  response_length: aiResponse.length,
  processing_time: processingTime
});
```

## 🚀 Customization Ideas

### Enhanced Features
1. **Multi-language Support**
   - Add language detection
   - Support multiple TTS voices
   - Translate responses
2. **Voice Commands**
   - Implement wake words
   - Add voice shortcuts
   - Create voice menus
3. **Advanced AI Features**
   - Sentiment analysis
   - Intent classification
   - Escalation triggers
4. **Integration Expansions**
   - Connect to CRM systems
   - Add calendar scheduling
   - Integrate with help desk tools

### Performance Optimizations
- Implement audio preprocessing
- Add response caching
- Optimize API call sequences
- Implement retry mechanisms

## 🐛 Troubleshooting

### Common Issues
1. **Voice Not Transcribing**
   - Check Deepgram API key validity
   - Verify audio format compatibility
   - Test with shorter voice messages
2. **Poor Audio Quality**
   - Adjust TTS model settings
   - Check network connectivity
   - Verify Telegram audio limits
3. **AI Responses Too Generic**
   - Improve knowledge base content
   - Adjust system prompts
   - Increase the context window
4. **Memory Not Working**
   - Check session key configuration
   - Verify user ID extraction
   - Test conversation continuity

## 💡 Best Practices

### Voice Interface Design
- Keep responses concise and clear
- Use natural speech patterns
- Avoid technical jargon
- Provide clear next steps

### Knowledge Base Management
- Regular content updates
- Clear categorization
- Keyword optimization
- Quality assurance testing

### User Experience
- Fast response times (<5 seconds)
- Consistent voice personality
- Graceful error handling
- Clear capability communication

## 📈 Success Metrics

### Technical KPIs
- Response time: <3 seconds average
- Transcription accuracy: >95%
- User satisfaction: >4.5/5
- Uptime: >99.5%

### Business KPIs
- Customer query resolution rate
- Support ticket reduction
- User engagement increase
- Cost per interaction decrease

## 🔄 Maintenance Schedule

### Daily
- Monitor error
logs
- Check API rate limits
- Verify service uptime

### Weekly
- Review conversation quality
- Update the knowledge base
- Analyze usage patterns

### Monthly
- Performance optimization
- Security audit
- Feature updates
- User feedback review

## 📚 Additional Resources

### Documentation Links
- Deepgram STT API
- Deepgram TTS API
- Telegram Bot API
- OpenAI API
- Supabase Documentation

### Community Support
- n8n Community Forum
- Telegram Bot Developers Group
- Deepgram Developer Discord
- OpenAI Developer Community

Note: This template requires active API subscriptions for the Deepgram and OpenAI services. Costs may apply based on usage volume.
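The transcription request from step 2 of the data flow can be sketched as below. The query parameters mirror the Voice Recognition Settings listed above (nova-2, en, smart format); the exact endpoint and body shape are assumptions to verify against Deepgram's API reference:

```javascript
// Hedged sketch: build a Deepgram speech-to-text request for a hosted audio
// file (e.g., the Telegram voice-file URL). Not the template's actual node.
function buildSttRequest(audioUrl, apiKey) {
  const params = new URLSearchParams({
    model: "nova-2",
    language: "en",
    smart_format: "true",
  });
  return {
    method: "POST",
    url: `https://api.deepgram.com/v1/listen?${params}`,
    headers: {
      Authorization: `Token ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ url: audioUrl }),
  };
}

const stt = buildSttRequest("https://api.telegram.org/file/bot<token>/voice.oga", "DG_KEY");
```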
by Budi SJ
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## 🎯 Purpose
This workflow automatically monitors stock-related news, extracts the main article content, summarizes it using an LLM (via OpenRouter), sends real-time alerts to Telegram, and stores the results in Google Sheets.

## ⚙️ How It Works
1. A Cron node triggers the workflow every 15 minutes (adjustable).
2. An RSS Feed node checks the latest articles from the Google Alerts RSS feed.
3. The workflow filters out duplicates, using Google Sheets as a log.
4. Each article URL is sent to the Jina AI Readability API to extract the main body text.
5. The content is summarized using a model from OpenRouter (e.g., Gemini, Claude, GPT-4). You can customize the prompt to suit your tone and analysis needs.
6. The result is appended to a Google Sheets file.
7. The title, summary, and recommendation are sent to a Telegram chat.

## 🧾 Google Sheets Template
Create a Google Sheet using this template: Stock Alert

## 🧰 Requirements
- Telegram bot + your chat ID
- OpenRouter account and API key
- Jina AI account for content extraction
- Google account with access to Google Sheets
- Google Alerts RSS feed

## 🛠 Setup Instructions
1. Install the required credentials:
   - Add the OpenRouter API key to n8n credentials.
   - Add the Telegram bot token and chat ID.
   - Add Google Sheets credentials.
   - Add Jina AI credentials.
2. Create or copy the Google Sheet using the link above.
3. Go to Google Alerts, create alerts, and copy the RSS feed URL.
4. Replace the placeholder API keys and URLs.
5. Adjust the Telegram chat ID.

## 🔐 Security Note
All sensitive credentials (e.g., API keys, personal chat IDs) have been removed from this template. Please replace them using the n8n credentials manager before activating the workflow.
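The duplicate filter in step 3 boils down to a set-membership check against the links already logged in the sheet. A minimal sketch (field names assumed from the RSS item shape):

```javascript
// Keep only articles whose links are not already logged in the Google Sheet.
function filterNew(articles, loggedLinks) {
  const seen = new Set(loggedLinks);
  return articles.filter(a => !seen.has(a.link));
}

const fresh = filterNew(
  [
    { title: "ACME beats earnings", link: "https://news.example/a" },
    { title: "ACME beats earnings", link: "https://news.example/b" },
  ],
  ["https://news.example/a"]
);
```

Deduplicating on the link rather than the title avoids dropping distinct articles that happen to share a headline.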
by HoangSP
# SEO Blog Generator with GPT-4o, Perplexity, and Telegram Integration

This workflow helps you automatically generate SEO-optimized blog posts using Perplexity.ai, OpenAI GPT-4o, and (optionally) Telegram for interaction.

## 🚀 Features
- 🧠 Topic research via a Perplexity sub-workflow
- ✍️ AI-written blog post generated with GPT-4o
- 📊 Structured output with metadata: title, slug, meta description
- 📩 Optional Telegram integration to trigger workflows or receive outputs

## ⚙️ Requirements
- ✅ OpenAI API key (GPT-4o or GPT-3.5)
- ✅ Perplexity API key (with access to /chat/completions)
- ✅ (Optional) Telegram bot token and webhook setup

## 🛠 Setup Instructions
1. **Credentials**:
   - Add your OpenAI credentials (`openAiApi`)
   - Add your Perplexity credentials under `httpHeaderAuth`
   - Optional: set up Telegram credentials under `telegramApi`
2. **Inputs**: Use the Form Trigger or Telegram input node to send a research query.
3. **Subworkflow**: Make sure to import and activate the subworkflow `Perplexity_Searcher` to fetch recent search results.
4. **Customization**:
   - Edit the prompt texts inside the Blog Content Generator and Metadata Generator to change the writing style or target industry.
   - Add or remove output nodes like Google Sheets, Notion, etc.

## 📦 Output Format
The final blog post includes:
- ✅ Blog content (1500-2000 words)
- ✅ Metadata: title, slug, and meta description
- ✅ Extracted summary in JSON
- ✅ Delivered to Telegram (if connected)

Need help? Reach out on the n8n community forum.
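The research call the `Perplexity_Searcher` sub-workflow makes is a POST to the `/chat/completions` endpoint named in the requirements above. The sketch below shows the general shape; the model name and body fields are illustrative assumptions, so confirm them against Perplexity's API documentation:

```javascript
// Hedged sketch of a Perplexity /chat/completions research request.
// "sonar" is a placeholder model name; pick one your plan supports.
function buildResearchRequest(query, apiKey) {
  return {
    method: "POST",
    url: "https://api.perplexity.ai/chat/completions",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "sonar",
      messages: [
        { role: "user", content: `Research recent sources on: ${query}` },
      ],
    }),
  };
}

const research = buildResearchRequest("on-page SEO trends 2025", "PPLX_KEY");
```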
by Jesse Davids
# SSL Expiry Alert System

## Who is this for?
This workflow is ideal for administrators or IT professionals responsible for monitoring the SSL certificates of multiple websites to ensure they do not expire unexpectedly.

## Problem
SSL certificates play a crucial role in securing communication over the internet. However, if not monitored closely, they can expire, leading to security risks and service disruption. This workflow proactively monitors SSL certificate expiry dates.

## Functionality
- Pulls the URLs to monitor from a Google Sheet.
- Checks SSL certificates using SSL-Checker.io.
- Updates the Google Sheet with SSL details such as expiry date and certificate status.
- Sends email alerts for SSL certificates nearing expiry (<30 days) or invalid certificates.

## Setup
1. Clone the provided Google Sheet and update the Google Sheet URL in the "URLs to Monitor" node.
2. Set up Google Sheets and Gmail credentials in n8n.
3. Configure the "Weekly Trigger" node for weekly monitoring.
4. Customize the email/Telegram/ntfy alert settings as needed.

## Customization
- Modify the monitoring frequency by adjusting the trigger interval in the "Weekly Trigger" node.
- Customize the email content and recipients in the "Send Alert Email" node.
- Extend functionality by adding additional checks or actions based on certificate status.

## Note
Ensure proper authentication and authorization for accessing the Google Sheets, SSL-Checker.io, and Gmail accounts within the workflow.
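The alert condition (certificate expiring in under 30 days) is a simple date difference. A minimal sketch, assuming the expiry date comes back from SSL-Checker.io in an ISO-parsable form:

```javascript
// Flag certificates that expire in fewer than 30 days (or have already
// expired). `now` is injectable so the check is deterministic in tests.
function needsAlert(expiryIso, now = new Date()) {
  const msPerDay = 24 * 60 * 60 * 1000;
  const daysLeft = Math.floor((new Date(expiryIso) - now) / msPerDay);
  return daysLeft < 30;
}
```

An IF node comparing the computed `daysLeft` against 30 accomplishes the same thing without code.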
by Khairul Muhtadin
The Page Speed Insight workflow automates website performance analysis by integrating the Google PageSpeed Insights API with Discord messaging and Google Gemini. This n8n workflow provides expert-level performance audits and comparisons, delivering actionable insights for website owners, SEO professionals, and developers.

Disclaimer: this workflow uses a community node, the Google PageSpeed Insights Community Node.

💡 Why Use Page Speed Insight?

- **Save Time:** Instantly analyze and compare website speeds without manual tool usage
- **Eliminate Guesswork:** Receive expert audit reports that translate technical data into clear, actionable insights
- **Improve Website Outcomes:** Identify critical bottlenecks and enhancements prioritized by AI-driven analysis
- **Seamless Integration:** Pull URLs from and deliver reports to Discord for team collaboration and immediate response

⚡ Who Is This For?

- Webmasters and website owners seeking fast, automated performance checks
- SEO analysts who need consistent, data-backed website comparisons
- Developers requiring clear, prioritized action points from performance audits
- Digital agencies managing multiple client sites with ongoing monitoring needs

🔧 What This Workflow Does

- ⏱ **Trigger:** A Discord message containing URLs, or a scheduled execution
- 📎 **Parse:** Extracts URLs and determines the analysis type (single/comparison)
- 🔍 **Analyze:** Calls the Google PageSpeed API for performance data
- 🤖 **Process:** AI generates user-friendly reports from the raw Lighthouse JSON
- 💌 **Deliver:** Sends chunked reports to Discord channels
- 🗂 **Log:** Stores execution data for review and improvement

🔐 Setup Instructions

1. Import the provided JSON workflow into your n8n instance.
2. Set up credentials for:
   - Google PageSpeed API (ensure you have a valid API key — get yours here)
   - Discord Bot API with permissions to read and send messages in your chosen guild/channel
3. Customize the workflow by adjusting:
   - The Discord guild and channel IDs where messages are monitored and results posted
   - The schedule trigger interval, if needed
   - Any prompt text or AI model parameters to tailor the report tone and detail level
4. Test thoroughly with real URLs and Discord interaction to confirm smooth data flow and output quality.

🧩 Pre-Requirements

- Active n8n instance (Cloud or self-hosted)
- n8n Google PageSpeed community node
- Google PageSpeed Insights API key
- Discord Bot credentials with channel access
- Google Gemini AI credentials (recommended)

🛠️ Customize It Further

- Extend the analysis to desktop performance or other device types by modifying the PageSpeed API call
- Integrate with Slack, email, or other team tools alongside Discord for broader notifications
- Deepen the reports with more AI-driven insights, such as competitor site recommendations or historical trend tracking

🧠 Nodes Used

- Google PageSpeed Insights Community Node
- Discord (getAllMessages, sendMessage)
- Code (URL parsing, message chunking)
- AI Language Model (Google Gemini)
- Schedule Trigger
- Switch (message type handling)
- Sticky Notes (workflow guidance)

📞 Support

Made by: khaisa Studio
Tags: automation, performance, SEO, google-pagespeed, discord
Category: Monitoring & Reporting

Need a custom solution? Contact Me
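The Code node's two jobs — URL parsing and message chunking — could look roughly like this. This is a sketch, not the exact node code: the regex and function names are illustrative, though the 2,000-character limit is Discord's real per-message cap.

```javascript
// Sketch of the workflow's Code-node logic: pull URLs out of a Discord
// message, classify the request for the Switch node, and split long AI
// reports to fit Discord's 2000-character message limit.
function parseMessage(text) {
  const urls = text.match(/https?:\/\/[^\s>]+/g) || [];
  return {
    urls,
    // Two or more URLs means a side-by-side comparison; one means a single audit.
    type: urls.length >= 2 ? "comparison" : urls.length === 1 ? "single" : "none",
  };
}

function chunkReport(report, limit = 2000) {
  const chunks = [];
  for (let i = 0; i < report.length; i += limit) {
    chunks.push(report.slice(i, i + limit));
  }
  return chunks;
}
```

Each element of `chunkReport`'s output would then be sent as a separate `sendMessage` call to the Discord channel.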
by Ranjan Dailata
Who is this for?

The LinkedIn Company Story Generator is an automated workflow that extracts company profile data from LinkedIn using Bright Data's web scraping infrastructure, then transforms that data into a professionally written narrative or story using a language model (e.g., OpenAI, Gemini). The final output is sent via webhook notification, making it easy to publish, review, or further automate.

This workflow is tailored for:

- **Marketing Professionals**: Seeking to generate compelling company narratives for campaigns.
- **Sales Teams**: Aiming to understand potential clients through summarized company insights.
- **Content Creators**: Looking to craft stories or articles based on company data.
- **Recruiters**: Interested in concise company overviews for talent acquisition strategies.

What problem is this workflow solving?

Manually gathering and summarizing company information from LinkedIn is time-consuming and inconsistent. This workflow automates the process, ensuring:

- **Efficiency**: Quick extraction and summarization of company data.
- **Consistency**: Standardized summaries for uniformity across use cases.
- **Scalability**: Ability to process multiple companies without additional manual effort.

What this workflow does

The workflow performs the following steps:

1. **Input Acquisition**: Receives a company's name or LinkedIn URL as input.
2. **Data Extraction**: Uses Bright Data to scrape the company's LinkedIn profile.
3. **Information Parsing**: Processes the extracted HTML content to retrieve relevant company details.
4. **Summarization**: Uses Google Gemini to generate a concise company story.
5. **Output Delivery**: Sends the summarized content to a specified webhook or email address.

Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure a Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to Bearer XXXXXXXXXXXXXX, replacing XXXXXXXXXXXXXX with your Web Unlocker token.
4. In n8n, configure the Google Gemini (PaLM) API account with your Google Gemini API key (or access it through Vertex AI or a proxy).
5. Update the LinkedIn URL in the Set LinkedIn URL node.
6. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

How to customize this workflow to your needs

- **Input Variations**: Modify the Set LinkedIn URL node to accept a different company LinkedIn URL.
- **Data Points**: Adjust the HTML Data Extractor node to retrieve additional details such as employee count, industry, or headquarters location.
- **Summarization Style**: Customize the AI prompt to generate summaries in different tones or formats (e.g., formal, casual, bullet points).
- **Output Destinations**: Configure the output node to send summaries to other platforms, such as Slack, CRM systems, or databases.
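The setup steps above can be sketched as the request the HTTP Request node would build. The endpoint and body shape follow Bright Data's Web Unlocker API documentation as I understand it — verify against the current docs; the zone name and token are placeholders for your own credentials.

```javascript
// Sketch of the Bright Data Web Unlocker request assembled in n8n.
// Zone and token are placeholders; the Bearer header mirrors the
// Header Auth credential configured in the setup steps.
function buildUnlockerRequest(linkedinUrl, zone, token) {
  return {
    method: "POST",
    url: "https://api.brightdata.com/request",
    headers: {
      Authorization: `Bearer ${token}`,  // the n8n Header Auth value
      "Content-Type": "application/json",
    },
    body: {
      zone,              // your Web Unlocker zone name
      url: linkedinUrl,  // the LinkedIn company page to fetch
      format: "raw",     // return the raw HTML for the parsing step
    },
  };
}
```

The raw HTML returned by this call is what the HTML Data Extractor node then parses before the Gemini summarization step.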