by Li CHEN
# AWS News Analysis and LinkedIn Automation Pipeline

Transform AWS industry news into engaging LinkedIn content with AI-powered analysis and automated approval workflows.

## Who's it for

This template is perfect for:

- **Cloud architects and DevOps engineers** who want to stay current with AWS developments
- **Content creators** looking to automate their AWS news coverage
- **Marketing teams** needing consistent, professional AWS content
- **Technical leaders** who want to share industry insights on LinkedIn
- **AWS consultants** building thought leadership through automated content

## How it works

This workflow creates a comprehensive AWS news analysis and content generation pipeline with two main flows:

### Flow 1: News Collection and Analysis

1. **Scheduled RSS Monitoring**: Automatically fetches the latest AWS news from the official AWS RSS feed daily at 8 PM
2. **AI-Powered Analysis**: Uses AWS Bedrock (Claude 3 Sonnet) to analyze each news item, extracting:
   - A professional summary
   - Key themes and keywords
   - An importance rating (Low/Medium/High)
   - A business impact assessment
3. **Structured Data Storage**: Saves the analyzed news to a Feishu Bitable with approval status tracking

### Flow 2: LinkedIn Content Generation

1. **Manual Approval Trigger**: A Feishu automation sends approved news items to the webhook
2. **AI Content Creation**: AWS Bedrock generates professional LinkedIn posts with:
   - Attention-grabbing headlines
   - Technical insights from a Solutions Architect perspective
   - Business impact analysis
   - A call to action for engagement
3. **Automated Publishing**: Posts directly to LinkedIn with relevant hashtags

## How to set up

### Prerequisites

- **AWS Bedrock access** with the Claude 3 Sonnet model enabled
- **Feishu account** with Bitable access
- **LinkedIn company account** with posting permissions
- **n8n instance** (self-hosted or cloud)

### Detailed Configuration Steps

#### 1. AWS Bedrock Setup

**Step 1: Enable the Claude 3 Sonnet Model**
1. Log into your AWS Console
2. Navigate to AWS Bedrock
3. Go to **Model access** in the left sidebar
4. Find **Anthropic Claude 3 Sonnet** and click **Request model access**
5. Fill out the access request form (usually approved within minutes)
6. Once approved, verify the model appears in your **Model access** list

**Step 2: Create an IAM User and Credentials**
1. Go to the IAM Console
2. Click **Users** → **Create user**
3. Name it `n8n-bedrock-user`
4. Attach the `AmazonBedrockFullAccess` policy (or create a custom policy with minimal permissions)
5. Open the **Security credentials** tab → **Create access key**
6. Choose **Application running outside AWS**
7. Download the credentials CSV file

**Step 3: Configure in n8n**
1. In n8n, go to **Credentials** → **Add credential**
2. Select the **AWS** credential type
3. Enter your Access Key ID and Secret Access Key
4. Set the region to your preferred AWS region (e.g., `us-east-1`)
5. Test the connection

Useful links:
- AWS Bedrock Documentation
- Claude 3 Sonnet Model Access
- AWS Bedrock Pricing

#### 2. Feishu Bitable Configuration

**Step 1: Create a Feishu Account and App**
1. Sign up at Feishu International
2. Create a new Bitable (multi-dimensional table)
3. Go to the Developer Console → **Create App**
4. Enable Bitable permissions in your app
5. Generate an App Token and App Secret

**Step 2: Create the Bitable Structure**

Create a new Bitable with these columns:
- `title` (Text)
- `pubDate` (Date)
- `summary` (Long Text)
- `keywords` (Multi-select)
- `rating` (Single Select: Low, Medium, High)
- `link` (URL)
- `approval_status` (Single Select: Pending, Approved, Rejected)

Then get your App Token and Table ID:
- App Token: found in the app settings
- Table ID: found in the Bitable URL (`tbl...`)
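For reference, a record in this table, which is also the payload shape that the approval automation in Step 3 below forwards to the webhook, might look like the sketch below; all values are illustrative:

```json
{
  "title": "Amazon S3 announces a new archive storage class",
  "pubDate": "2025-01-15",
  "summary": "AWS introduces a lower-cost S3 storage class aimed at long-term archival workloads.",
  "keywords": ["S3", "Storage", "Cost Optimization"],
  "rating": "High",
  "link": "https://aws.amazon.com/blogs/aws/example-post",
  "approval_status": "Pending"
}
```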
**Step 3: Set Up Automation**
1. In your Bitable, go to **Automation** → **Create automation**
2. Trigger: **When field value changes** → select the `approval_status` field
3. Condition: `approval_status` equals "Approved"
4. Action: **Send HTTP request**
   - Method: `POST`
   - URL: your n8n webhook URL (from Flow 2)
   - Headers: `Content-Type: application/json`
   - Body: `{{record}}`

**Step 4: Configure Feishu Credentials in n8n**
1. Install the Feishu Lite community node (self-hosted only)
2. Add a Feishu credential with your App Token and App Secret
3. Test the connection

Useful links:
- Feishu Developer Documentation
- Bitable API Reference
- Feishu Automation Guide

#### 3. LinkedIn Company Account Setup

**Step 1: Create a LinkedIn App**
1. Go to the LinkedIn Developer Portal
2. Click **Create App**
3. Fill in the app details:
   - App name: AWS News Automation
   - LinkedIn Page: select your company page
   - App logo: upload your logo
   - Legal agreement: accept the terms

**Step 2: Configure OAuth2 Settings**
1. In your app, go to the **Auth** tab
2. Add the redirect URL: `https://your-n8n-instance.com/rest/oauth2-credential/callback`
3. Request these scopes:
   - `w_member_social` (post on behalf of members)
   - `r_liteprofile` (read basic profile)
   - `r_emailaddress` (read email address)

**Step 3: Get Company Page Access**
1. Go to your LinkedIn company page
2. Navigate to **Admin tools** → **Manage admins**
3. Ensure you have the **Content admin** or **Super admin** role
4. Note your Company Page ID (found in the page URL)

**Step 4: Configure LinkedIn Credentials in n8n**
1. Add a LinkedIn OAuth2 credential
2. Enter your Client ID and Client Secret
3. Complete the OAuth2 flow by clicking **Connect my account**
4. Select your company page for posting

Useful links:
- LinkedIn Developer Portal
- LinkedIn API Documentation
- LinkedIn OAuth2 Guide

#### 4. Workflow Activation

Final setup steps:
1. Import the workflow JSON into n8n
2. Configure all credential connections: AWS Bedrock, Feishu, and LinkedIn OAuth2
3. Update the webhook URL in the Feishu automation to match your n8n instance
4. Activate the scheduled trigger (daily at 8 PM)
5. Test with the manual webhook trigger using sample data
6. Verify that the Feishu Bitable receives data
7. Test the approval workflow and LinkedIn posting

## Requirements

### Service Requirements

- **AWS Bedrock** with Claude 3 Sonnet model access
  - AWS account with the Bedrock service enabled
  - IAM user with Bedrock permissions
  - Model access approval for Claude 3 Sonnet
- **Feishu Bitable** for news storage and the approval workflow
  - Feishu account (International or Lark)
  - Developer app with Bitable permissions
  - Automation capabilities for webhook triggers
- **LinkedIn company account** for automated posting
  - LinkedIn company page with admin access
  - LinkedIn Developer app with posting permissions
  - OAuth2 authentication setup
- **n8n community nodes**: Feishu Lite node (self-hosted only)

### Technical Requirements

- **n8n instance** (self-hosted recommended for community nodes)
- **Webhook endpoint** accessible from the Feishu automation
- **Internet connectivity** for API calls and RSS feeds
- **Storage space** for workflow execution logs

### Cost Considerations

- **AWS Bedrock**: ~$0.01-0.05 per news analysis
- **Feishu**: free tier available; paid plans for advanced features
- **LinkedIn**: free API access with rate limits
- **n8n**: self-hosted (free) or cloud subscription

## How to customize the workflow

### Content Customization
- **Modify AI prompts** in the AI Agent nodes to change the tone, focus, or target audience
- **Adjust hashtags** in the LinkedIn posting node for different industries
- **Change the scheduling frequency** by modifying the Schedule Trigger settings

### Integration Options
- **Replace LinkedIn** with Twitter/X, Facebook, or other social platforms
- **Add Slack notifications** for approved content before posting
- **Integrate with CRM systems** to track content performance
- **Add content calendar integration** for better planning

### Advanced Features
- **Multi-language support** by modifying AI prompts for different regions
- **Content categorization** by adding tags for different AWS services
- **Performance tracking** by integrating analytics platforms
- **Team collaboration** by adding approval workflows with multiple reviewers

### Technical Modifications
- **Change RSS sources** to monitor other AWS blogs or competitor news
- **Adjust AI models** to use different Bedrock models or external APIs
- **Add data validation nodes** for better error handling
- **Implement retry logic** for failed API calls

## Important Notes

### Service Limitations
- This template uses a community node (Feishu Lite) and requires self-hosted n8n
- **Geo-restrictions** may apply to AWS Bedrock models in certain regions
- **Rate limits** may affect high-frequency posting; adjust scheduling accordingly
- **Content moderation** is recommended before automated posting
- **Cost considerations**: each AI analysis costs approximately $0.01-0.05 USD per news item

### Troubleshooting Common Issues

AWS Bedrock issues:
- **Model not found**: ensure Claude 3 Sonnet access is approved in your region
- **Access denied**: verify the IAM permissions include Bedrock service access
- **Rate limiting**: implement retry logic or reduce the analysis frequency

Feishu integration issues:
- **Authentication failed**: check that the App Token and App Secret are correct
- **Table not found**: verify the Table ID matches your Bitable URL
- **Automation not triggering**: ensure the webhook URL is accessible and returns a 200 status

LinkedIn posting issues:
- **OAuth2 errors**: re-authenticate the LinkedIn credentials
- **Posting failed**: verify company page admin permissions
- **Rate limits**: LinkedIn has daily posting limits for company pages

### Security Best Practices
- **Never hardcode credentials** in workflow nodes
- **Use environment variables** for sensitive configuration
- **Regularly rotate API keys** and access tokens
- **Monitor API usage** to prevent unexpected charges
- **Implement error handling** for failed API calls
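The retry logic suggested under Technical Modifications and Security Best Practices can live in a Code node. A minimal sketch, assuming n8n's built-in `this.helpers.httpRequest` helper and an illustrative endpoint; tune the attempt count and backoff to your rate limits:

```javascript
// Retry an API call with exponential backoff inside an n8n Code node.
async function withRetry(fn, attempts = 3, baseDelayMs = 1000) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      if (i === attempts - 1) throw error; // out of retries: surface the error
      // Exponential backoff: 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
}

// Illustrative usage; replace the URL with the API you are calling.
const data = await withRetry(() =>
  this.helpers.httpRequest({ url: 'https://example.com/api', json: true })
);
return [{ json: data }];
```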
by Samir Saci
Tags: EU News, RSS, AI Classifier, Data Table, Email Digest, Automation, n8n

## Context

Hi! I’m Samir, a Supply Chain Engineer and Data Scientist based in Paris, and founder of the startup LogiGreen. This workflow helps me closely follow EU sustainability news that impacts my business.

> Use this assistant to automatically curate and summarize EU news tailored to the topics that matter most to you.

By default, the workflow filters sustainability-related news, but you can easily adapt the topic description (e.g. AI, trade, digital, energy) using the edit node Topic Config.

📬 For business inquiries, you can find me on LinkedIn.

## Who is this template for?

This template is designed for:
- **Policy analysts and researchers** who want to track EU updates on a specific topic
- **Consultants and sustainability teams** who need a daily view of relevant announcements
- **Business owners or startup founders**, like myself, who need to adapt their business strategies to recent news

## What does this workflow do?

This workflow acts as an AI-powered EU news filter and digest generator. It:
1. Fetches the latest press releases from the Council of the EU RSS feed every morning at 09:00
2. Filters out all the news already recorded, to avoid duplicates
3. Uses an AI classifier (OpenAI) to decide whether each article is relevant to your topic
4. Stores only the relevant items in an n8n Data Table
5. Generates a formatted HTML newsletter grouping the day’s relevant articles
6. Sends the digest by email using the Gmail node
7. Generates an audio summary with ElevenLabs that is sent via Telegram

An example of the generated email is shown on the template page.

## 🎥 Tutorial

A complete tutorial (with explanations of every node) is available on YouTube.

## Next Steps

🗒️ Inside the workflow:
- Replace the Data Table reference with your own
- Set up your Gmail, OpenAI, and ElevenLabs credentials
- Update the recipient email address in the Gmail node
- Customize the HTML digest (colors, logo, style) in the Code node
- Adjust the schedule time if necessary

Submitted: 18 November 2025
Template designed with n8n version 1.116.2
by Oussama
Production-ready solution for controlling AI agent usage and preventing abuse while managing costs.

## 🎯 Problem Solved
- Unlimited AI interactions → excessive API costs
- Service abuse → uncontrolled resource consumption
- No built-in limits → need for usage quotas

## ✅ Solution Overview
A two-part system:
- **Main Flow**: user interaction tracking + AI responses
- **Reset Flow**: automated counter resets

## 🔄 How It Works
User Message → Track Counter → Check Limit → Allow/Block → AI Response

## 🛠️ Core Components

### Main Workflow
- 📱 **Telegram Trigger** - receives user messages
- 📊 **Google Sheets Counter** - tracks messages per user
- 🔀 **Switch Logic** - checks limits (default: 3 messages)
- 🤖 **AI Agent** - processes allowed interactions
- 💬 **Smart Responses** - delivers AI answers or limit warnings

### Auto-Reset System
- ⏰ **Schedule Trigger** - runs at a configurable interval
- 🔄 **Bulk Counter Reset** - resets all users to 0

## ⚙️ Configuration

### Message Limits
Modify the Switch node conditions:
- `> 3` messages → block silently
- `= 3` messages → send limit warning
- `< 3` messages → allow AI response

### Reset Schedules
- Testing: every 1 minute
- Hourly: `0 * * * *`
- Daily: `0 0 * * *`
- Weekly: `0 0 * * 0`

## 📋 Setup Requirements

Credentials needed:
- 🤖 Telegram Bot Token
- 📊 Google Sheets API
- 🧠 AI Model

Google Sheets structure:
- Column A: User ID (Telegram `chat.id`)
- Column B: Message Counter

## 🎯 Perfect For
- 💰 **Cost Control** - prevent runaway API costs
- 🛡️ **Demo/Trial Bots** - limited interactions
- 🏢 **Customer Service** - usage quotas
- 🎓 **Educational Bots** - daily limits
- 🚫 **Anti-Abuse** - fair usage policies

## 🚀 Key Benefits
- ✅ **Cost Management** - control AI API expenses
- ✅ **Fair Access** - equal usage for all users
- ✅ **Production Ready** - robust error handling
- ✅ **Flexible Limits** - easy adjustment
- ✅ **Auto-Reset** - no manual intervention
- ✅ **User-Friendly** - clear limit messages

## 📝 Quick Customization
- **Adjust limits**: change the Switch node values
- **Reset timing**: modify the Schedule Trigger
- **Custom messages**: edit the Telegram response nodes
- **User tiers**: add columns to Google Sheets
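If you prefer a single Code node over the Switch, the same three-way check can be sketched as below; the column name matches the Google Sheets structure above, and the limit of 3 is the default:

```javascript
// Three-way limit check mirroring the Switch node conditions.
const LIMIT = 3;
const count = $json['Message Counter']; // Column B from the Google Sheet

let action;
if (count > LIMIT) {
  action = 'block'; // over the limit: ignore silently
} else if (count === LIMIT) {
  action = 'warn'; // at the limit: send the limit warning
} else {
  action = 'allow'; // under the limit: pass the message to the AI agent
}

return [{ json: { ...$json, action } }];
```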
by Santhej Kallada
## Who is this for?
- Creators, designers, and developers exploring AI-powered image generation
- Automation enthusiasts who want to integrate image creation into n8n workflows
- Telegram bot builders looking to add visual AI capabilities
- Marketers or freelancers automating creative content workflows

## What problem is this workflow solving?
Creating AI images usually requires multiple tools and manual setup. This workflow removes the complexity by:
- Connecting Nano Banana (an AI image model) directly to n8n
- Allowing image generation via a Telegram chatbot
- Providing a no-code setup that is fully automated and scalable

## What this workflow does
This workflow demonstrates how to generate AI images using Nano Banana and n8n, with an integrated Telegram chatbot interface. The process includes:
1. Connecting Gemini Nano Banana to n8n
2. Automating image generation requests triggered from Telegram
3. Returning AI-generated images back to the user
4. Allowing customization of prompts and styles dynamically

By the end, you’ll have a fully functional automation to generate and send AI-created images through Telegram, no coding required.

## Setup
1. **Create accounts**: Sign up on n8n.io and ensure you have Telegram Bot API access. Connect your Nano Banana or Gemini API endpoint.
2. **Set up your Telegram bot**: Use BotFather to create a new bot and get the token. Add the “Telegram Trigger” node in n8n.
3. **Configure the Nano Banana connection**: Add an HTTP Request node for the Nano Banana API. Insert your API key and prompt parameters (see the sketch below).
4. **Handle responses**: Parse the AI-generated image output. Send the image file back to the Telegram user.
5. **Test and deploy**: Run a sample image prompt. Verify that Telegram returns the correct generated image.

## How to customize this workflow to your needs
- Modify prompts or styles to fit different artistic use cases.
- Add conditional logic for image size, aspect ratio, or filters.
- Integrate with Google Drive or Notion for image storage.
- Schedule automatic image generation for campaigns or content creation.
- Expand with OpenAI or Stability AI for hybrid workflows.

## Notes
- The Nano Banana API may have rate limits depending on usage.
- Ensure your Telegram bot has permission to send files and images.
- You can host this workflow on n8n Cloud or self-hosted setups.

Want a video tutorial on how to set up this automation? https://youtu.be/0s6ZdU1fjc4
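For step 3 of the setup, the HTTP Request node body might look like the sketch below. The endpoint and model name are assumptions (Google ships Nano Banana as a Gemini image model), so verify them against the current Gemini API docs before use:

```json
{
  "contents": [
    {
      "parts": [
        { "text": "A watercolor illustration of a lighthouse at sunset" }
      ]
    }
  ]
}
```

Assuming the Gemini API, this would be sent as a POST to `https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-image:generateContent` with your API key in the `x-goog-api-key` header, and the generated image comes back base64-encoded in the response parts.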
by Kshitij Matta
## How it works
1. The user answers questions prompted by the Telegram bot.
2. The Data Tables are updated with the relevant step of the process and a chat ID.
3. Upon approval, the title, description, and slug are created, and the product is created on WooCommerce via an API request (see the sketch below).
4. The Data Tables are reset and the user is prompted to create another product.

## Setup Steps (25 minutes)
1. Create a Telegram bot via @botfather on Telegram.
2. Set up two Data Tables named WooCommerce Product Manager and User_Images.
3. Add your preferred LLM credentials and set the credentials in the Telegram node.
4. In the TelegramGroupMedia node and the EditFields 1 node, add your bot token to replace {{your bot token}}.

Voila! Your workflow is now configured.
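For reference, the WooCommerce API request in step 3 of "How it works" targets the standard `POST /wp-json/wc/v3/products` REST endpoint. A sketch of a body the workflow might send; the field values are illustrative:

```json
{
  "name": "Handmade Ceramic Mug",
  "slug": "handmade-ceramic-mug",
  "description": "A hand-thrown ceramic mug with a matte glaze finish.",
  "type": "simple",
  "regular_price": "19.99"
}
```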
by Roshan Ramani
## Who's it for
This template is perfect for content creators, researchers, marketers, and Reddit enthusiasts who want to stay updated on specific topics without manually browsing Reddit. If you need curated, AI-summarized Reddit insights delivered directly to your Telegram, this workflow automates the entire process.

## What it does
This workflow transforms your Telegram into a powerful Reddit search engine with AI-powered curation. Simply send any keyword to your Telegram bot, and it will:
- Search Reddit with four parallel queries across different sorting methods (top, hot, and relevance, with top queried twice) to capture diverse perspectives
- Automatically remove duplicate posts from the multiple search results
- Filter posts based on quality metrics (minimum 50 upvotes, recent content within 15 days, non-empty text)
- Extract key information: title, upvotes, subreddit, publication date, URL, and content
- Generate a clean, Telegram-formatted summary using Google Gemini AI
- Deliver structured results with direct links back to you instantly

The AI summary includes post titles, upvote counts, timestamps, brief insights, and direct Reddit links, all formatted for easy mobile reading.

## How it works

**Step 1: Telegram Trigger**
The user sends a search keyword via Telegram (e.g., "voice AI agents").

**Step 2: Parallel Reddit Searches**
Four simultaneous Reddit API calls search with different sorting algorithms:
- Top posts (all-time popularity)
- Hot posts (trending now)
- Relevance (best keyword matches)
- Top posts (duplicated for broader coverage)

**Step 3: Merge & Deduplicate**
All search results combine into one stream, then a JavaScript code node removes duplicate posts by comparing post IDs (a sketch follows the customization section below).

**Step 4: Field Extraction**
The Edit Fields node extracts and formats:
- Post title
- Upvote count
- Subreddit name and subscriber count
- Publication date (converted from the Unix timestamp)
- Reddit URL
- Post content (selftext)

**Step 5: Quality Filtering**
The Filter node applies three conditions:
- Minimum 50 upvotes (ensures quality)
- Non-empty content (excludes link-only posts)
- Posted within the last 15 days (ensures freshness)

**Step 6: Data Aggregation**
All filtered posts aggregate into a single dataset for AI processing.

**Step 7: AI Summarization**
Google Gemini AI analyzes the aggregated posts and generates a concise, Telegram-formatted summary with:
- Emoji indicators for better readability
- A point-wise breakdown of the top 5-7 posts
- Upvote counts and relative timestamps
- Brief 1-2 sentence summaries
- Direct Reddit links

**Step 8: Delivery**
The formatted summary is sent back to the user's Telegram chat.

## Requirements

Credentials needed:
- **Reddit OAuth2 API** - for searching Reddit posts (get Reddit API credentials)
- **Google Gemini API** - for AI-powered summarization (get a Gemini API key)
- **Telegram Bot Token** - for receiving queries and sending results (create a Telegram bot)

n8n version: self-hosted or Cloud (latest version recommended)

## Setup Instructions

### 1. Create a Telegram Bot
1. Message @BotFather on Telegram
2. Send /newbot and follow the prompts
3. Copy the bot token for the n8n credentials
4. Start a chat with your new bot

### 2. Configure the Reddit API
1. Go to https://www.reddit.com/prefs/apps
2. Click "Create App" → select "script"
3. Note your Client ID and Secret
4. Add the credentials to n8n's Reddit OAuth2

### 3. Get a Gemini API Key
1. Visit https://ai.google.dev/
2. Create a new API key
3. Add it to n8n's Google Gemini credentials

### 4. Import & Configure the Workflow
1. Import this template into n8n
2. Add your three credentials to the respective nodes
3. Remove the pinData from the "Telegram Trigger" node (test data)
4. Activate the workflow
### 5. Test It
1. Send any keyword to your Telegram bot (e.g., "machine learning")
2. Wait 10-20 seconds for results
3. Receive AI-summarized Reddit insights

## How to customize

**Adjust quality filters**: edit the Filter node conditions:
- Change the minimum upvotes (currently 50)
- Modify the time range (currently 15 days)
- Add a subreddit subscriber minimum

**Limit results**: add a Limit node after the Filter to cap results at 10-15 posts for faster processing.

**Change search strategies**: modify the Reddit nodes' "sort" parameter:
- `new` - latest posts first
- `comments` - most commented
- `controversial` - controversial content

**Customize AI output**: edit the AI Agent's system message to:
- Change the summary style (more/less detail)
- Adjust formatting (bullets, numbered lists)
- Modify language/tone
- Add emoji preferences

**Add user feedback**: insert a Telegram Send Message node after the trigger:
"🔍 Searching Reddit for '{{ $json.message.text }}'... Please wait."

**Enable error handling**: create an Error Workflow:
- Add an Error Trigger node
- Send a fallback message: "❌ Search failed. Please try again."

**Sort by popularity**: add a Sort node after the Filter:
- Field: upvotes
- Order: descending
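The deduplication in Step 3 of "How it works" fits in a few lines of the Code node. A minimal sketch, assuming each merged item exposes the Reddit post ID at `json.id`; adjust the path if your Reddit node nests it differently (e.g., `json.data.id`):

```javascript
// Remove duplicate Reddit posts across the merged search results.
const seen = new Set();
const unique = [];

for (const item of $input.all()) {
  const id = item.json.id; // post ID field: adjust to your node's output
  if (!seen.has(id)) {
    seen.add(id);
    unique.push(item);
  }
}

return unique;
```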
by Rully Saputra
Decodo-powered review aggregation to Google Sheets with Gemini analysis and Telegram alerts.

## Who’s it for
This template is designed for e-commerce owners, marketplace sellers, product teams, and CX/reputation managers who need an automated way to monitor product reviews. It’s ideal for anyone tracking Amazon listings or other URLs who wants AI-powered sentiment, summaries, and alerts without manual scraping.

## What it does
This workflow automatically retrieves product URLs from Google Sheets, scrapes reviews using Decodo (a community node), formats the extracted data, and analyzes it using Gemini AI. It produces both a sentiment classification and a concise review summary. Results are saved to a Google Sheets log, and the workflow sends a Telegram alert whenever new reviews are processed. The entire pipeline runs on a schedule, ensuring continuous and fully automated monitoring.

## How it works
1. A scheduled trigger starts the workflow.
2. Google Sheets provides the list of product URLs.
3. Each URL is processed through Decodo to extract user reviews.
4. A Code node formats the raw review data (see the sketch below).
5. Gemini performs sentiment analysis and summarization.
6. Results are appended to a Google Sheets review log.
7. A Telegram message delivers a real-time summary and sentiment snapshot.

Sign up for Decodo to get better pricing here.

## Requirements
- Decodo API credentials (self-hosted community node)
- Google Sheets API key
- Gemini AI credentials
- Telegram Bot + Chat ID
- n8n self-hosted (required for the Decodo community node)

## How to set up
1. Add your Decodo credentials to the Decodo node.
2. Update both Google Sheets nodes with your document ID and sheet names.
3. Insert your Gemini API key.
4. Provide your Telegram Bot token and Chat ID.
5. Adjust the schedule interval to your preference.
6. Run the workflow once to validate mappings and output fields.

## How to customize
- Modify the Code node to change how reviews are formatted.
- Extend the Gemini prompts for deeper analysis (keywords, categories, toxicity).
- Add filters to trigger alerts only on negative sentiment.
- Append additional metadata (timestamps, product IDs) to the Sheets log.
- Add email, Slack, or other communication channels.

## Disclaimer (Community Node)
This workflow uses a community node (Decodo) and therefore works only on self-hosted n8n instances. Be sure to install and trust the package before using it.
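The Code node in step 4 flattens Decodo's raw output into one item per review. A minimal sketch; the input field names (`reviews`, `content`, `rating`) are assumptions, since Decodo's output shape depends on the scraped site, so map them to your actual data:

```javascript
// Flatten raw scraped reviews into one n8n item per review.
const out = [];

for (const item of $input.all()) {
  const reviews = item.json.reviews || []; // field name is an assumption
  for (const review of reviews) {
    out.push({
      json: {
        productUrl: item.json.url,
        rating: review.rating,
        text: (review.content || '').trim(),
        scrapedAt: new Date().toISOString(),
      },
    });
  }
}

return out;
```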
by octik5
🤖 This n8n workflow automatically parses news articles from a webpage, enhances them with AI, and publishes them to a Telegram channel with a watermarked image. Unlike an RSS-based setup, this workflow directly fetches and processes content from any specified webpage.

## Use Cases
- Automatically post new website articles to your Telegram channel.
- Use AI to rewrite or summarize text for better readability.
- Add branded watermarks to images and keep your channel visually consistent.

## How It Works
1. **Schedule Trigger**: Runs the workflow on a custom schedule.
2. **Fetch Web Page**: Retrieves the HTML content of your chosen website.
3. **Extract Links**: Parses article links from the HTML source (see the sketch below).
4. **Check & Update Google Sheet**: Skips already processed links and records new ones.
5. **Fetch & Clean Article**: Retrieves, extracts, and formats the article text.
6. **AI Text Customization**: Uses an AI agent to enhance the text.
7. **Image Watermarking**: Fetches the article image and applies a watermark.
8. **Telegram Publishing**: Posts the final image and AI-enhanced text to your channel.

## Setup Steps
- **Google Sheet**: Create and share a sheet to track processed links.
- **Web URL**: Enter your target webpage in the HTTP Request node.
- **AI Agent**: Choose a model and prompt for text customization (e.g., OpenRouter or Gemini).
- **Telegram Bot**: Add your bot token and chat ID.
- **Run & Test**: Execute once manually, then let it run on schedule.

## Tips
- AI usage may incur costs depending on the model provider.
- Some AI models can be geo-restricted; check availability if you get a "model not found" error.
- Customize the watermark style (font, color, size) to match your branding.
- Use Telegram Markdown for rich message formatting.

✅ **Key advantage**: No RSS required. The workflow directly parses websites, enhances content with AI, and automates publishing to Telegram.
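The "Extract Links" step can be a small Code node. A regex-based sketch, assuming the fetched HTML arrives in `json.data` and that article URLs share a `/news/` path prefix; both are assumptions to adapt to your target site:

```javascript
// Pull article links out of the fetched HTML page.
const html = $input.first().json.data || ''; // field name is an assumption
const matches = html.matchAll(/href="(https?:\/\/[^"]*\/news\/[^"]+)"/g);

// Deduplicate while preserving order.
const links = [...new Set([...matches].map((m) => m[1]))];

return links.map((link) => ({ json: { link } }));
```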
by Fahmi Fahreza
# AI Research Assistant Using Gemini AI and Decodo

Sign up for Decodo HERE for a discount.

This workflow transforms your Telegram bot into a smart academic research assistant powered by Gemini AI and Decodo. It analyzes queries, interprets URLs, scrapes scholarly data, and returns concise summaries of research papers directly in chat.

## Who’s it for?
Researchers, students, and AI enthusiasts who want to search and summarize academic content via Telegram using Google Scholar and arXiv.

## How it works
1. The Telegram bot captures text, voice, or image messages.
2. Gemini models interpret academic URLs and user intent.
3. Decodo extracts paper details like titles, abstracts, and publication info.
4. The AI agent summarizes the results and delivers them as text, or as a file if the summary is too long (see the sketch below).

## How to set up
1. Add your Telegram bot credentials in the Start Telegram Bot node.
2. Connect your Google Gemini and Decodo API credentials.
3. Replace the {{INPUT_SEARCH_URL_INSIGHTS}} placeholder in the Research Summary Agent's system message with your search URL insights (or use the pinned example).
4. Test by sending a text, image, or voice message to your bot.
5. Activate the workflow to run in real time.
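The text-or-file decision in step 4 of "How it works" can be a one-condition Code node. A sketch, assuming the summary sits in `json.summary`; Telegram caps messages at 4096 characters, so the threshold below leaves some headroom:

```javascript
// Decide whether the summary fits in a Telegram message
// or should be sent as a document instead.
const summary = $json.summary || ''; // field name is an assumption
const sendAsFile = summary.length > 3500; // headroom under Telegram's 4096-char cap

return [{ json: { ...$json, sendAsFile } }];
```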
by Nguyen Thieu Toan
# ForumPulse for n8n – Daily Pulse & On-demand Deep Dives

Author: Nguyen Thieu Toan
Category: Community & Knowledge Automation
Tags: Telegram, Reddit, n8n Forum, AI Summarization, Gemini, Groq

## How it works

ForumPulse is an AI-powered assistant that keeps you connected to the latest discussions around n8n. The workflow integrates Reddit (r/n8n) and the n8n Community Forum, fetches trending and recent posts, and uses Gemini/Groq AI models to generate clean, structured summaries.

It works in two complementary modes:
- **Daily Pulse (Automated Digest)**: Runs on a schedule (default: 8:00 AM) to gather highlights and deliver a concise summary directly to your Telegram.
- **On-demand Deep Dive (Interactive)**: Listens to Telegram queries in real time, detects intent (search, deep dive, open link, or chat), and provides summaries, comments, and insights for any chosen post.

When AI intent-detection confidence drops below 0.7, the bot automatically asks for clarification before proceeding, ensuring accuracy and transparency.

## Step-by-step

### 1. Setup & Prerequisites
- **n8n instance** (cloud or self-hosted)
- **Telegram Bot** (created via BotFather)
- **MongoDB** (optional, for persistent memory)
- **API keys** for Gemini and Groq
- **Your Telegram user ID** (to receive replies)

⚠️ Replace all test credentials and tokens with your own. Never commit real secrets into exported templates.

### 2. Daily Pulse Automation
- **Schedule Trigger** runs the workflow every morning at the configured time.
- **Reddit + Forum Search** collects hot/new topics.
- **Merge Results** combines both sources into a unified dataset.
- **AI Summarizer Overview** condenses the results into a short, engaging digest.
- **Telegram Output** delivers the digest, automatically split into safe chunks under 2000 characters (see the sketch at the end of this template).

### 3. On-demand Interaction
- **Telegram Trigger** listens for incoming messages.
- **Intent Analysis (AI Agent)** classifies the query as Search | Open Link | Deep Dive | Chitchat.
- **Confidence Gate**: if confidence < 0.7, sends a clarification prompt to the user.
- **Branch by Intent**:
  - Search: query Reddit/Forum with filters.
  - Open Link: fetch details of a specific post.
  - Deep Dive: retrieve comments and metadata.
  - Chitchat: respond conversationally.
- **AI Summarizer** structures the output, highlighting trends, issues, and takeaways.
- **Telegram Delivery** formats and sends the reply, respecting HTML tags and message length.

### 4. Deep Dive Details
- **Post Extraction** fetches titles, authors, timestamps, and stats.
- **Comment Parsing** organizes replies into structured data.
- **Merge Post + Comments** builds a complete context package.
- **Summarizer** produces detailed, actionable insights.

### 5. Error Handling & Safety
- **Confidence Check** prevents wrong answers by requiring clarification.
- **Error Paths** handle API downtime or unexpected formats gracefully.
- **Auto Chunking** keeps each message within a safe 2000-character budget, well under Telegram's length cap.
- **Safe Defaults** ensure fallback queries when inputs are missing or unclear.

## Customization Options
- **Sources**: Add or replace platforms by editing the HTTP Request nodes.
- **Schedule**: Change the cron time in the Schedule Trigger (e.g., 7:30 AM).
- **Filters**: Adjust the default sort order, time ranges, and result limits.
- **AI Persona**: Reword the systemMessage in the AI Agent nodes to change the tone (professional, casual, emoji-rich).
- **Languages**: The workflow auto-detects the user's language, but you can force English or Vietnamese by editing the prompt settings.
- **Memory**: Enable the MongoDB nodes for persistent user context across sessions.
- **Integrations**: Extend beyond Telegram by sending digests to Slack, Discord, or email.
- **Models**: Swap Gemini/Groq for other supported LLMs for experimentation.

✨ Crafted by Nguyen Thieu Toan with a focus on clarity, reliability, and community-driven insights. This workflow is not just functional; it reflects a design philosophy: automation should feel natural, transparent, and genuinely useful.
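As a reference for the auto-chunking described under "Error Handling & Safety", a minimal Code node sketch that splits the digest on line breaks so each chunk stays under the 2000-character budget; the `digest` field name is an assumption, and it presumes no single line exceeds the budget:

```javascript
// Split a long digest into Telegram-safe chunks, breaking on newlines
// so HTML tags and sentences are not cut mid-line.
const MAX = 2000;
const text = $input.first().json.digest || ''; // field name is an assumption
const chunks = [];
let current = '';

for (const line of text.split('\n')) {
  const candidate = current ? current + '\n' + line : line;
  if (candidate.length > MAX && current) {
    chunks.push(current);
    current = line;
  } else {
    current = candidate;
  }
}
if (current) chunks.push(current);

return chunks.map((chunk) => ({ json: { chunk } }));
```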
by Michael Yang
## Who is this template for?
This workflow is perfect for competitive-intel analysts, product managers, content marketers, and anyone who tracks multiple company blogs or news sources. If you need a weekly snapshot of fresh, on-topic articles, without wading through dozens of tabs, this template is for you.

## What does it do?
The workflow reads a curated list of candidate URLs from Google Sheets, filters out duplicates and off-topic pages with an AI agent, scrapes the surviving links, generates three-sentence summaries, logs the results back to Sheets, and delivers a polished HTML digest to your inbox every week.

## Why is it useful?
Instead of manually opening competitor links, checking for relevance, copying highlights, and pasting them into reports, this automation does the grunt work for you. It turns scattered URLs into a searchable knowledge base and a ready-to-share email, freeing you to focus on insights and strategy, not housekeeping.

## How does it work?
A Sunday-morning cron trigger kicks things off. The workflow pulls links from the Input Links tab, compares them to the existing Summary tab, and passes fresh candidates to an AI "bouncer" that keeps only blog posts, tutorials, news, and product updates. Firecrawl then scrapes each page (see the request sketch below); Gemini 2.5 Flash and OpenAI condense the content into a title, author, date, and summary. The structured data is appended to your Summary sheet and formatted into a company-grouped HTML digest, which lands in your email before the workweek starts.

## Set up steps
1. **Clone the workflow**: Import the JSON into your n8n Cloud workspace.
2. **Create the Google Sheet**:
   - Make a new spreadsheet with two tabs: Input Links and Summary (names must match).
   - In Input Links, add the columns Company, Page Type, and Link (or rename them to match the node mapping).
   - Leave Summary blank; the workflow will populate it.
   - Copy the Sheet URL; you'll paste it into two Google Sheets nodes.
3. **Add credentials** (n8n ▸ Credentials):
   - Google Sheets OAuth2: authorise with the Google account that owns the spreadsheet.
   - Gmail OAuth2: authorise the Gmail account that should send the digest.
   - Firecrawl HTTP Header Auth: set `Authorization: Bearer <YOUR_FIRECRAWL_API_KEY>`.
4. **Point nodes to your Sheet**:
   - Open each Google Sheets node (Input Links, Read_Url_Summary_Tool, Append row in sheet, Get row(s) in sheet).
   - Paste the Document ID (found in the Sheet URL) and select the correct tab (Input Links or Summary).
5. **Update email recipients**: In the Send a message (Gmail) node, replace the sample addresses with your own distribution list.
6. **Adjust scheduling (optional)**: Double-click the Schedule Trigger node and change the cron expression if you prefer a different day/time.
7. **Tune AI models (optional)**: The OpenAI o4-mini and Gemini 2.5 Flash nodes default to cost-efficient settings. Feel free to switch models or tweak the temperature to suit your tone.
8. **Test with a single URL**: Add one row in Input Links, then execute the workflow manually (▶ Run). Verify that a new row appears in Summary and an email lands in your inbox.
9. **Go live**: Activate the workflow (toggle in the top bar). Confirm the green status badge and wait for the next scheduled run.

Tip: The Firecrawl free tier limits you to ~10 requests/min. If you scale beyond that, raise the batching interval in both Firecrawl nodes or upgrade your Firecrawl plan.
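For reference, the request each Firecrawl node sends might look like the sketch below; this mirrors Firecrawl's v1 `/scrape` endpoint at the time of writing, so check the current API docs before relying on it:

```json
{
  "url": "https://example.com/blog/some-post",
  "formats": ["markdown"],
  "onlyMainContent": true
}
```

The `markdown` output is what the Gemini and OpenAI nodes then condense into the title, author, date, and summary fields.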
by n8n Team
This n8n workflow automates the monitoring and notification of Palo Alto Networks security advisories. It is triggered manually from within the n8n UI or scheduled to run daily at midnight using the Schedule Trigger.

The workflow begins by fetching the latest security advisories from Palo Alto Networks' RSS feed. Each advisory is then processed, and relevant information is extracted and categorized, including the advisory type, subject, and severity. The workflow checks the publication date of each advisory to ensure that it was posted within the last 24 hours, filtering out older advisories (see the sketch below).

The workflow then splits into two paths based on the advisory type: GlobalProtect and Traps. In the GlobalProtect path, advisories related to GlobalProtect are identified and used to create Jira issues. Each Jira issue includes a summary with the advisory title and a description that provides details about the advisory, its severity, link, and publication date. In the Traps path, advisories related to Traps are recognized, and dummy data (which should be replaced with logic to retrieve valid user emails) is generated for sample purposes. These email addresses are then used to send email notifications using the Gmail node. Each email's subject includes the type of advisory, while the body contains the advisory title and a link for more information.

Potential issues when setting up this workflow for the first time might involve configuring the Schedule Trigger to match the desired time zone. Additionally, ensuring that the Jira and Gmail nodes are configured correctly with the required credentials and email addresses is crucial. The placeholder that generates dummy data for email recipients should be replaced with logic to retrieve valid user emails. Proper error handling and testing with real and sample advisories can help identify and resolve any potential issues during setup.
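The 24-hour freshness check described above can be expressed in a small Code node. A minimal sketch, assuming each advisory item carries the RSS `pubDate` field:

```javascript
// Keep only advisories published within the last 24 hours.
const DAY_MS = 24 * 60 * 60 * 1000;
const cutoff = Date.now() - DAY_MS;

return $input.all().filter((item) => {
  const published = new Date(item.json.pubDate).getTime();
  return !Number.isNaN(published) && published >= cutoff;
});
```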