by Vigh Sandor
## Workflow Overview

This advanced n8n workflow provides intelligent email automation with AI-generated responses. It combines four core functions:

- Monitors incoming emails via IMAP (e.g., SOGo)
- Sends instant Telegram notifications for all new emails
- Uses AI (Ollama LLM) to generate contextual, personalized auto-replies
- Sends confirmation notifications when auto-replies are sent

Unlike traditional auto-responders, this workflow analyzes email content and creates unique, relevant responses for each message.

## Setup Instructions

### Prerequisites

Before setting up this workflow, ensure you have:

- An n8n instance (self-hosted or cloud) with AI/LangChain nodes enabled
- IMAP email account credentials (e.g., SOGo, Gmail, Outlook)
- SMTP server access for sending emails
- Telegram Bot API credentials
- A Telegram Chat ID where notifications will be sent
- Ollama installed locally or accessible via network (for the AI model)
- The llama3.1 model downloaded in Ollama

### Step 1: Install and Configure Ollama

**Local Installation**

1. Install Ollama on your system: visit https://ollama.ai, download the installer for your OS, and follow the installation instructions for your platform.
2. Download the llama3.1 model: `ollama pull llama3.1`
3. Verify the model is available: `ollama list`
4. Start the Ollama service (if not already running): `ollama serve`
5. Test the model: `ollama run llama3.1 "Hello, world!"`

**Remote Ollama Instance**

If using a remote Ollama server:

- Note the server URL (e.g., http://192.168.1.100:11434)
- Ensure network connectivity between n8n and the Ollama server
- Verify the firewall allows connections on port 11434

### Step 2: Configure IMAP Credentials

1. Navigate to the n8n Credentials section.
2. Create a new IMAP credential with the following information:
   - Host: your IMAP server address
   - Port: usually 993 for SSL/TLS
   - Username: your email address
   - Password: your email password or app-specific password
   - Enable SSL/TLS: yes (recommended)
   - Security: use STARTTLS or SSL/TLS

### Step 3: Configure SMTP Credentials

1. Create a new SMTP credential in n8n.
2. Enter the following details:
   - Host: your SMTP server address (e.g., a Postfix server)
   - Port: usually 587 (STARTTLS) or 465 (SSL)
   - Username: your email address
   - Password: your email password or app-specific password
   - Secure connection: enable based on your server configuration
   - Allow unauthorized certificates: enable if using self-signed certificates

### Step 4: Configure Telegram Bot

1. Create a Telegram bot via BotFather:
   - Open Telegram and search for @BotFather
   - Send the /newbot command
   - Follow the instructions to create your bot
   - Save the API token provided by BotFather
2. Obtain your Chat ID:
   - Method 1: send a message to your bot, then visit https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getUpdates
   - Method 2: use a Telegram Chat ID bot like @userinfobot
   - Method 3: for group chats, add the bot to the group and check the updates
   - Note: group chat IDs are negative numbers (e.g., -1234567890123)
3. Add a Telegram API credential in n8n:
   - Credential type: Telegram API
   - Access Token: your bot token from BotFather

### Step 5: Configure Ollama API Credential

1. In the n8n Credentials section, create a new Ollama API credential.
2. Configure based on your setup:
   - For local Ollama: the base URL is usually http://localhost:11434
   - For remote Ollama: enter the server URL (e.g., http://192.168.1.100:11434)
3. Test the connection to ensure n8n can reach Ollama.

### Step 6: Import and Configure Workflow

Import the workflow JSON into your n8n instance, then update the following nodes with your specific information.

**Check Incoming Emails Node**

- Verify IMAP credentials are connected
- Configure the polling interval (optional): the default behavior checks on the workflow trigger schedule; it can be set to check every N minutes
- Set the mailbox folder if needed (default is INBOX)

**Send Notification from Incoming Email Node**

- Update the chatId parameter with your Telegram Chat ID (replace -1234567890123 with your actual chat ID)
- Customize the notification message template if desired; the current format includes sender, subject, and date-time

**Dedicate Filtering As No-Response Node**

- Review the spam filter conditions:
  - Blocks emails from addresses containing "noreply" or "no-reply"
  - Blocks emails with "newsletter" in the subject line (case-insensitive)
- Add additional filtering rules as needed: block specific domains, filter by keywords, or whitelist/blacklist specific senders

**Ollama Model Node**

- Verify the Ollama API credential is connected
- Confirm the model name: llama3.1:bf230501 (or adjust to your installed version)
- The context window is set to 4096 tokens (sufficient for most emails) and can be adjusted based on your needs and hardware capabilities

**Basic LLM Chain Node**

- Review the AI prompt engineering (pre-configured but customizable)
- The current prompt instructs the AI to: read the email content, identify the main topic in 2-4 words, generate a professional acknowledgment response, and keep responses consistent and concise
- Modify the prompt if you want different response styles

**Send Auto-Response in SMTP Node**

- Verify SMTP credentials are connected
- Check that fromEmail uses the correct email address: it is currently set to `{{ $('Check Incoming Emails - IMAP (example: SOGo)').item.json.to }}`, which automatically uses the recipient address (your mailbox)
- The subject automatically includes a "Re: " prefix with the original subject
- The message text comes from the AI-generated content

**Send Notification from Response Node**

- Update the chatId parameter (same as the first notification node)
- This sends a confirmation that the auto-reply was sent, including the original email details and the AI-generated response text

### Step 7: Test the Workflow

1. Perform an initial configuration test:
   - Test Ollama connectivity: `curl http://localhost:11434/api/tags`
   - Verify all credentials are properly configured
   - Check that n8n has access to the required network endpoints
2. Execute a test run:
   - Click the "Execute Workflow" button in n8n
   - Send a test email to your monitored inbox, using a clear subject and body for a better AI response
3. Verify the workflow execution:
   - First Telegram notification received (incoming email alert)
   - AI processes the email content
   - Auto-reply is sent to the original sender
   - Second Telegram notification received (confirmation with the AI response)
   - Check the n8n execution log for any errors
4. Verify email delivery:
   - Check whether the auto-reply arrived in the sender's inbox
   - Verify it is not marked as spam
   - Review the AI-generated content for appropriateness

### Step 8: Fine-Tune AI Responses

1. Send various types of test emails: different topics (inquiry, complaint, information request), various lengths (short, medium, long), and different languages if applicable.
2. Review the AI-generated responses: check whether topic identification is accurate, verify response appropriateness, and ensure the tone is professional.
3. Adjust the prompt if needed: modify the topic word count (currently 2-4 words), change the response template, add language-specific instructions, or include custom sign-offs or branding.

### Step 9: Activate the Workflow

Once testing is successful and the AI responses are satisfactory:

1. Toggle the workflow to the "Active" state. The workflow will now run automatically on the configured schedule.
2. Monitor the initial production runs: review the first few auto-replies carefully, check Telegram notifications for any issues, and verify SMTP delivery rates.
3. Set up monitoring: enable n8n workflow error notifications, monitor Ollama resource usage, and check email server logs periodically.

## How to Use

### Normal Operation

Once activated, the workflow operates fully automatically:

1. **Email Monitoring**: the workflow continuously checks your IMAP inbox for new messages based on the configured polling interval or trigger schedule.
2. **Immediate Incoming Notification**: when a new email arrives, you receive an instant Telegram notification containing the sender's email address, the subject line, the date and time received, and a note indicating it is from the IMAP mailbox.
3. **Intelligent Filtering**: the workflow evaluates each email against the spam filter criteria. Emails from "noreply" or "no-reply" addresses are filtered out, as are emails with "newsletter" in the subject line. Filtered emails receive a notification but no auto-reply; legitimate emails proceed to AI response generation.
4. **AI Response Generation**: for emails that pass the filter, the AI reads the full email content, analyzes the main topic or purpose, and generates a personalized, professional acknowledgment that thanks the sender, references the specific topic, promises a personal follow-up, and maintains a professional tone.
5. **Automatic Reply Delivery**: the AI-generated response is sent via SMTP to the original sender with the subject line "Re: [Original Subject]", your monitored mailbox as the from address, and the AI-generated contextual message as the body.
6. **Response Confirmation**: after the auto-reply is sent, you receive a second Telegram notification showing the original email details (sender, subject, date), the complete AI-generated response text, and confirmation of successful delivery.

### Understanding AI Response Generation

The AI analyzes emails intelligently:

**Example 1: Business Inquiry**
- Incoming email: "I'm interested in your consulting services for our Q4 project..."
- AI topic identification: "consulting services"
- Generated response: "Dear Correspondent! Thank you for your message regarding consulting services. I will respond with a personal message as soon as possible. Have a nice day!"

**Example 2: Technical Support**
- Incoming email: "We're experiencing issues with the API integration..."
- AI topic identification: "API integration issues"
- Generated response: "Dear Correspondent! Thank you for your message regarding API integration issues. I will respond with a personal message as soon as possible. Have a nice day!"

**Example 3: General Question**
- Incoming email: "Could you provide more information about pricing?"
- AI topic identification: "pricing information"
- Generated response: "Dear Correspondent! Thank you for your message regarding pricing information. I will respond with a personal message as soon as possible. Have a nice day!"
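To preview how this topic-plus-template generation behaves before wiring it into n8n, you can call Ollama's REST API directly. The snippet below is a minimal Node.js sketch: the `/api/generate` endpoint is Ollama's standard API, but the prompt text is an assumption reconstructed from the examples above, not the workflow's exact pre-configured prompt.

```javascript
// Minimal sketch: ask a local Ollama instance to generate the acknowledgment
// reply for one email. Assumes Ollama runs on localhost:11434 with llama3.1
// pulled; the prompt is a reconstruction, not the workflow's exact prompt.
const emailBody = "I'm interested in your consulting services for our Q4 project...";

const prompt = `Read the email below and identify its main topic in 2-4 words.
Return only this response, filling in the [TOPIC]:
Dear Correspondent! Thank you for your message regarding [TOPIC].
I will respond with a personal message as soon as possible. Have a nice day!

Email:
${emailBody}`;

const response = await fetch("http://localhost:11434/api/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ model: "llama3.1", prompt, stream: false }),
});

const data = await response.json();
console.log(data.response); // the generated acknowledgment text
```

Running this with the Example 1 email should produce an acknowledgment close to the "consulting services" response shown above, which is a quick way to sanity-check the model before testing the full workflow.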
## Customizing Filter Rules

To modify which emails receive AI-generated auto-replies, open the "Dedicate Filtering As No-Response" node and modify the existing conditions or add new ones (a Code-node sketch of this filter logic appears at the end of this guide):

- **Block specific domains**: `{{ $json.from.value[0].address }}`, operation: does not contain, value: @spam-domain.com
- **Whitelist VIP senders** (only respond to specific people): `{{ $json.from.value[0].address }}`, operation: contains, value: @important-client.com
- **Filter by subject keywords**: `{{ $json.subject.toLowerCase() }}`, operation: does not contain, value: unsubscribe
- **Combine multiple conditions**: use AND logic (all must be true) for stricter filtering, or OR logic (any can be true) for more permissive filtering

## Customizing the AI Prompt

To change how the AI generates responses, open the "Basic LLM Chain" node and modify the prompt text in the "text" parameter. The current structure is: context setting (read the email, identify the topic), output format specification, and rules for AI behavior.

Example modifications:

**Add company branding:**
Return only this response, filling in the [TOPIC]:
Dear Correspondent! Thank you for reaching out to [Your Company Name] regarding [TOPIC]. I will respond with a personal message as soon as possible.
Best regards, [Your Name], [Your Company Name]

**Make it more casual:**
Return only this response, filling in the [TOPIC]:
Hi there! Thanks for your email about [TOPIC]. I'll get back to you personally soon. Cheers!

**Add urgency classification:**
Read the email and classify urgency (Low/Medium/High). Identify the main topic. Return:
Dear Correspondent! Thank you for your message regarding [TOPIC]. Priority: [URGENCY]
I will respond with a personal message as soon as possible.

## Customizing Telegram Notifications

**Incoming email notification**: open the "Send Notification from Incoming Email" node and modify the "text" parameter. Available variables:

- `{{ $json.from }}` - full sender info
- `{{ $json.from.value[0].address }}` - sender email only
- `{{ $json.from.value[0].name }}` - sender name (if available)
- `{{ $json.subject }}` - email subject
- `{{ $json.date }}` - date received
- `{{ $json.textPlain }}` - email body (use cautiously for privacy)
- `{{ $json.to }}` - recipient address

**Response confirmation notification**: open the "Send Notification from Response" node and modify it to include additional information. Reference the AI response with `{{ $('Basic LLM Chain').item.json.text }}`.

## Monitoring and Maintenance

**Daily monitoring**
- **Check Telegram notifications**: review incoming email alerts and response confirmations
- **Verify AI quality**: spot-check AI-generated responses for appropriateness
- **Email delivery**: confirm auto-replies are being delivered (not caught in spam)

**Weekly maintenance**
- **Review execution logs**: check the n8n execution history for errors or warnings
- **Ollama performance**: monitor resource usage (CPU, RAM, disk space)
- **Filter effectiveness**: assess whether the spam filters are working correctly
- **Response quality**: review multiple AI responses for consistency

**Monthly maintenance**
- **Update the Ollama model**: check for new llama3.1 versions or alternative models
- **Prompt optimization**: refine the AI prompt based on response quality observations
- **Credential rotation**: update passwords and API tokens for security
- **Backup configuration**: export the workflow and credentials (securely)

## Advanced Usage

**Multi-language support**: if you receive emails in multiple languages, modify the AI prompt to detect the language:

Detect the email language. Generate the response in the SAME language as the email.
If English: [English template]
If Hungarian: [Hungarian template]
If German: [German template]

Or use language-specific conditions in the filtering node.

**Priority-based responses**: generate different responses based on sender importance. Add an IF node after filtering to check the sender domain, route VIP emails to a different LLM chain with priority messaging, and let standard emails use the normal AI chain.

**Response logging**: to maintain a record of all AI interactions, add a database node (PostgreSQL, MySQL, etc.) after the auto-reply node and store the timestamp, sender, subject, AI response, and delivery status. Use it for compliance, analytics, or training data.

**A/B testing AI prompts**: test different prompt variations by creating multiple LLM Chain nodes with different prompts, using a randomizer or round-robin approach, comparing response quality and user feedback, and optimizing based on the results.

## Troubleshooting

**Notifications not received**

Problem: Telegram notifications are not appearing. Solutions:
- Verify the Chat ID is correct (positive for personal chats, negative for groups)
- Check that the bot has permission to send messages
- Ensure the bot wasn't blocked or removed from the group
- Test the Telegram API credential independently
- Review the n8n execution logs for Telegram API errors

**AI responses not generated**

Problem: auto-replies are sent but the content is empty or contains error messages. Solutions:
- Check the Ollama service is running: `ollama list`
- Verify the llama3.1 model is downloaded: `ollama list`
- Test Ollama directly: `ollama run llama3.1 "Test message"`
- Review the Ollama API credential URL in n8n
- Check network connectivity between n8n and Ollama
- Increase the context window if emails are very long
- Monitor the Ollama logs for errors

**Poor-quality AI responses**

Problem: the AI generates irrelevant or inappropriate responses. Solutions:
- Review and refine the prompt engineering
- Add more specific rules and constraints
- Provide examples in the prompt of good vs. bad responses
- Adjust the topic word count (increase from 2-4 to 3-6 words)
- Test with different Ollama models (e.g., llama3.1:70b for better quality)
- Ensure the email content is being passed correctly to the AI

**Auto-replies not sent**

Problem: the workflow executes but emails are not delivered. Solutions:
- Verify SMTP credentials and server connectivity
- Check the fromEmail address is correct
- Review the SMTP server logs for errors
- Test SMTP sending independently
- Ensure "Allow unauthorized certificates" is enabled if needed
- Check whether the emails are being caught by spam filters
- Verify the SPF/DKIM records for your domain

**High resource usage**

Problem: Ollama is consuming excessive CPU/RAM. Solutions:
- Reduce the context window size (from 4096 to 2048)
- Use a smaller model variant (llama3.1:8b instead of the default)
- Limit concurrent workflow executions in n8n
- Add delay/throttling between email processing
- Consider using a remote Ollama instance with better hardware
- Monitor email volume and processing time

**IMAP connection failures**

Problem: the workflow can't connect to the email server. Solutions:
- Verify the IMAP credentials are correct
- Check that IMAP is enabled on the email account
- Ensure the SSL/TLS settings match the server requirements
- For Gmail: enable "Less secure app access" or use App Passwords
- Check the firewall allows outbound connections on the IMAP port (993)
- Test the IMAP connection using an email client (Thunderbird, Outlook)

**Workflow not triggering**

Problem: the workflow doesn't execute automatically. Solutions:
- Verify the workflow is in the "Active" state
- Check the trigger node configuration and schedule
- Review the n8n system logs for scheduler issues
- Ensure the n8n instance has sufficient resources
- Test manual execution to isolate trigger issues
- Check whether the n8n workflow execution queue is backed up

## Workflow Architecture

### Node Descriptions

- **Check Incoming Emails - IMAP**: polls the email server at regular intervals to retrieve new messages from the configured mailbox.
- **Send Notification from Incoming Email**: immediately sends a formatted notification to Telegram for every new email detected, regardless of spam status.
- **Dedicate Filtering As No-Response**: evaluates emails against the spam filter criteria to determine whether AI processing should occur.
- **No Operation**: placeholder node for filtered emails that should not receive an auto-reply (spam, newsletters, automated messages).
- **Ollama Model**: provides the AI language model (llama3.1) used for natural language processing and response generation.
- **Basic LLM Chain**: executes the AI prompt against the email content to generate the contextual auto-reply text.
- **Send Auto-Response in SMTP**: sends the AI-generated acknowledgment email back to the original sender via the SMTP server.
- **Send Notification from Response**: sends a confirmation to Telegram showing that the auto-reply was successfully sent, including the AI-generated content.

### AI Processing Pipeline

1. **Email content extraction**: the email body text is extracted from the IMAP data
2. **Context loading**: the email content is passed to the LLM with the prompt instructions
3. **Topic analysis**: the AI identifies the main subject or purpose in 2-4 words
4. **Template population**: the AI fills the response template with the identified topic
5. **Output formatting**: the response is formatted and cleaned for email delivery
6. **Quality assurance**: n8n validates the response before sending
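As promised above, here is a sketch of the "Dedicate Filtering As No-Response" conditions expressed as an n8n Code node instead of IF-node conditions. The field paths (`$json.from.value[0].address`, `$json.subject`) follow the IMAP output referenced throughout this guide, but verify them against your own node's data.

```javascript
// Sketch of the no-response filter as an n8n Code node ("Run Once for All
// Items" mode). Field names follow the IMAP output used in this guide;
// verify them against your own execution data.
const items = $input.all();

return items.map((item) => {
  const address = (item.json.from?.value?.[0]?.address ?? "").toLowerCase();
  const subject = (item.json.subject ?? "").toLowerCase();

  const isNoReply = address.includes("noreply") || address.includes("no-reply");
  const isNewsletter = subject.includes("newsletter");

  // A downstream IF node can route on this flag:
  // true -> No Operation, false -> Basic LLM Chain.
  item.json.skipAutoReply = isNoReply || isNewsletter;
  return item;
});
```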
by WeblineIndia
# IPA Size Tracker with Trend Alerts – Automated iOS App Size Monitoring

This workflow runs on a daily schedule and monitors IPA file sizes from configured URLs. It stores historical size data in Google Sheets, compares current vs. previous builds, and sends email alerts only when significant size changes occur (default: ±10%). A DRY_RUN toggle allows safe testing before real notifications go out.

## Who's it for

- iOS developers tracking app binary size growth over time.
- DevOps teams monitoring build artifacts and deployment sizes.
- Product managers ensuring app size budgets remain acceptable.
- QA teams detecting unexpected size changes in release builds.
- Mobile app teams optimizing user experience by keeping apps lightweight.

## How it works

1. Schedule Trigger (daily at 09:00 UTC) kicks off the workflow.
2. Configuration: define monitored apps with {name, version, build, ipa_url}.
3. HTTP Request downloads the IPA file from its URL.
4. Size Calculation: compute file sizes in bytes, KB, and MB, and attach timestamp metadata (see the sketch at the end of this description).
5. Google Sheets: append size data to the IPA Size History sheet.
6. Trend Analysis: compare current vs. previous build sizes.
7. Alert Logic: evaluate thresholds (>10% increase or >10% decrease).
8. Email Notification: send formatted alerts with comparisons and trend indicators.
9. Rate Limit: space out notifications to avoid spamming recipients.

## How to set up

### 1. Spreadsheet

Create a Google Sheet with a tab named IPA Size History containing: Date, Timestamp, App_Name, Version, Build_Number, Size_Bytes, Size_KB, Size_MB, IPA_URL

### 2. Credentials

- **Google Sheets (OAuth)** → for reading/writing size history.
- **Gmail** → for sending alert emails (use an App Password if 2FA is enabled).

### 3. Open the "Set: Configuration" node

Define your workflow variables:

- APP_CONFIGS = array of monitored apps ({name, version, build, ipa_url})
- SPREADSHEET_ID = Google Sheet ID
- SHEET_NAME = IPA Size History
- SMTP_FROM = sender email (e.g., devops@company.com)
- ALERT_RECIPIENTS = comma-separated emails
- SIZE_INCREASE_THRESHOLD = 0.10 (10%)
- SIZE_DECREASE_THRESHOLD = 0.10 (10%)
- LARGE_APP_WARNING = 300 (MB)
- SCHEDULE_TIME = 09:00
- TIMEZONE = UTC
- DRY_RUN = false (set true to test without sending emails)

### 4. File Hosting

Host IPA files on Google Drive, Dropbox, or a web server. Ensure direct download URLs are used (not preview links).

### 5. Activate the workflow

Once configured, it will run automatically at the scheduled time.

## Requirements

- Google Sheet with the IPA Size History tab.
- Accessible IPA file URLs.
- SMTP / Gmail account (Gmail recommended).
- n8n (cloud or self-hosted) with Google Sheets + Email nodes.
- Sufficient local storage for IPA file downloads.

## How to customize the workflow

- **Multiple apps**: add more configs to APP_CONFIGS.
- **Thresholds**: adjust SIZE_INCREASE_THRESHOLD / SIZE_DECREASE_THRESHOLD.
- **Notification templates**: customize the subject/body with variables: {{app_name}}, {{current_size}}, {{previous_size}}, {{change_percent}}, {{trend_status}}.
- **Schedule**: change the Cron from daily to hourly, weekly, etc.
- **Large app warnings**: adjust LARGE_APP_WARNING.
- **Trend analysis**: extend beyond one build (7-day, 30-day averages).
- **Storage backend**: swap Google Sheets for CSV, a database, or S3.

## Add-ons to level up

- **Slack notifications**: add Slack webhook alerts with emojis and formatting.
- **Size history charts**: generate trend graphs with Chart.js or the Google Charts API.
- **Environment separation**: monitor dev/staging/prod builds separately.
- **Regression detection**: statistical anomaly checks.
- **Build metadata**: log bundle ID, SDK versions, architectures.
- **Archive management**: auto-clean old records to save space.
- **Dashboards**: connect to Grafana, DataDog, or custom BI.
- **CI/CD triggers**: integrate with pipelines via a webhook trigger.

## Common Troubleshooting

- **No size data** → check the URLs return a binary IPA (not an HTML error).
- **Download failures** → confirm hosting permissions and direct links.
- **Missing alerts** → ensure thresholds are set and prior history exists.
- **Google Sheets errors** → check sheet/tab names and OAuth credentials.
- **Email issues** → validate SMTP credentials, the spam folder, and sender reputation.
- **Large file timeouts** → raise the HTTP timeout for >100MB files.
- **Trend errors** → make sure at least 2 builds exist.
- **No runs** → confirm the workflow is active and the timezone is correct.

## Need Help?

If you'd like us to customize this workflow to suit your app development process, simply reach out to us here and we'll help you tailor the template to your exact use case.
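For reference, the Size Calculation and Alert Logic steps described above could be implemented in an n8n Code node roughly as follows. This is a sketch under assumed field names (Size_Bytes from the current download, Previous_Bytes read back from the sheet); it is not the template's exact code.

```javascript
// Sketch: size metrics + threshold check for one app. Field names
// (Size_Bytes, Previous_Bytes) are assumptions; map them to your own nodes.
const INCREASE_THRESHOLD = 0.10; // mirrors SIZE_INCREASE_THRESHOLD
const DECREASE_THRESHOLD = 0.10; // mirrors SIZE_DECREASE_THRESHOLD

const item = $input.first().json;
const currentBytes = item.Size_Bytes;
const previousBytes = item.Previous_Bytes;

let changePercent = null;
let shouldAlert = false;

if (previousBytes > 0) {
  changePercent = (currentBytes - previousBytes) / previousBytes;
  shouldAlert =
    changePercent >= INCREASE_THRESHOLD || changePercent <= -DECREASE_THRESHOLD;
}

return [{
  json: {
    size_kb: Number((currentBytes / 1024).toFixed(2)),
    size_mb: Number((currentBytes / (1024 * 1024)).toFixed(2)),
    change_percent: changePercent === null ? null : Number((changePercent * 100).toFixed(2)),
    trend_status: changePercent === null ? "baseline"
      : changePercent > 0 ? "increase" : "decrease",
    should_alert: shouldAlert,
    timestamp: new Date().toISOString(),
  },
}];
```

A 100 MB build growing to 112 MB yields a +12% change, which crosses the 10% threshold and sets should_alert, while an 8% change would pass silently into the history sheet.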
by Growth AI
## Who's it for

Marketing teams, business intelligence professionals, competitive analysts, and executives who need consistent industry monitoring with AI-powered analysis and automated team distribution via Discord.

## What it does

This intelligent workflow automatically monitors multiple industry topics, scrapes and analyzes relevant news articles using Claude AI, and delivers professionally formatted intelligence reports to your Discord channel. The system provides weekly automated monitoring cycles with personalized bot communication and comprehensive content analysis.

## How it works

The workflow follows a sophisticated 7-phase automation process:

1. **Scheduled Activation**: triggers weekly monitoring cycles (default: Mondays at 9 AM)
2. **Query Management**: retrieves monitoring topics from a centralized Google Sheets configuration
3. **News Discovery**: executes comprehensive Google News searches using SerpAPI for each configured topic
4. **Content Extraction**: scrapes full article content from the top 3 sources per topic using Firecrawl
5. **AI Analysis**: processes the scraped content using Claude 4 Sonnet for intelligent synthesis and formatting
6. **Discord Optimization**: automatically segments content to comply with Discord's 2000-character message limit (see the sketch at the end of this description)
7. **Automated Delivery**: posts formatted intelligence reports to the Discord channel with the branded "Claptrap" bot personality

## Requirements

- Google Sheets account for query management
- SerpAPI account for Google News access
- Firecrawl account for article content extraction
- Anthropic API access for Claude 4 Sonnet
- Discord bot with proper channel permissions
- Scheduled execution capability (cron-based trigger)

## How to set up

### Step 1: Configure Google Sheets query management

- Create the monitoring sheet: set up a Google Sheets document with a "Query" sheet
- Add search topics: include industry keywords, competitor names, and relevant search terms
- Sheet structure: a simple column format with a "Query" header containing the search terms
- Access permissions: ensure n8n has read access to the Google Sheets document

### Step 2: Configure API credentials

Set up the following credentials in n8n:

- Google Sheets OAuth2: for accessing the query configuration sheet
- SerpAPI: for Google News search functionality with proper rate limits
- Firecrawl API: for reliable article content extraction across various websites
- Anthropic API: for Claude 4 Sonnet access with sufficient token limits
- Discord Bot API: with message posting permissions in the target channel

### Step 3: Customize scheduling settings

- Cron expression: default set to "0 9 * * 1" (Mondays at 9 AM)
- Frequency options: adjust for daily, weekly, or custom monitoring cycles
- Timezone considerations: configure according to the team's working hours
- Execution timing: ensure adequate processing time for multiple topics

### Step 4: Configure Discord integration

Set up the Discord delivery settings:

- Guild ID: target Discord server (currently: 919951151888236595)
- Channel ID: specific monitoring channel (currently: 1334455789284364309)
- Bot permissions: message posting, embed suppression capabilities
- Brand personality: customize the "Claptrap" bot messaging style and tone

### Step 5: Customize content analysis

Configure the AI analysis parameters:

- Analysis depth: currently processes the top 3 articles per topic
- Content format: structured markdown with consistent styling
- Language settings: currently configured for French output (easily customizable)
- Quality controls: error handling for inaccessible articles and content

## How to customize the workflow

### Query management expansion

- Topic categories: organize queries by industry, competitor, or strategic focus areas
- Keyword optimization: refine search terms based on result quality and relevance
- Dynamic queries: implement time-based or event-triggered query modifications
- Multi-language support: add international keyword variations for global monitoring

### Advanced content processing

- Article quantity: modify from 3 to more articles per topic based on analysis needs
- Content filtering: add quality scoring and relevance filtering for article selection
- Source preferences: implement preferred publisher lists or source quality weighting
- Content enrichment: add sentiment analysis, trend identification, or competitive positioning

### Discord delivery enhancements

- Rich formatting: implement Discord embeds, reactions, or interactive elements
- Multi-channel distribution: route different topics to specialized Discord channels
- Alert levels: add priority-based messaging for urgent industry developments
- Archive functionality: create searchable message threads or database storage

### Integration expansions

- Slack compatibility: replace or supplement Discord with Slack notifications
- Email reports: add formatted email distribution for executive summaries
- Database storage: implement persistent storage for historical analysis and trending
- API endpoints: create webhook endpoints for third-party system integration

### AI analysis customization

- Analysis templates: create topic-specific analysis frameworks and formatting
- Competitive focus: enhance competitor mention detection and analysis depth
- Trend identification: implement cross-topic trend analysis and strategic insights
- Summary levels: create executive summaries alongside detailed technical analysis

## Advanced monitoring features

### Intelligent content curation

The system provides sophisticated content management:

- Relevance scoring: automatic ranking of articles by topic relevance and publication authority
- Duplicate detection: prevents redundant coverage of the same story across different sources
- Content quality assessment: filters low-quality or promotional content automatically
- Source diversity: ensures coverage from multiple perspectives and publication types

### Error handling and reliability

- Graceful degradation: continues processing even if individual articles fail to scrape
- Retry mechanisms: automatic retry logic for temporary API failures or network issues
- Content fallbacks: uses article snippets when full content extraction fails
- Notification continuity: ensures Discord delivery even with partial content processing

## Results interpretation

### Intelligence report structure

Each monitoring cycle delivers:

- Topic-specific summaries: individual analysis for each configured search query
- Source attribution: complete citations with publication date, source, and URL
- Structured formatting: consistent presentation optimized for quick scanning
- Professional analysis: AI-generated insights maintaining factual accuracy and business context

### Performance analytics

Monitor system effectiveness through:

- Processing metrics: track successful article extraction and analysis rates
- Content quality: assess the relevance and usefulness of delivered intelligence
- Team engagement: monitor Discord channel activity and report utilization
- System reliability: track execution success rates and error patterns

## Use cases

### Competitive intelligence

- Market monitoring: track competitor announcements, product launches, and strategic moves
- Industry trends: identify emerging technologies, regulatory changes, and market shifts
- Partnership tracking: monitor alliance formations, acquisitions, and strategic partnerships
- Leadership changes: track executive movements and organizational restructuring

### Strategic planning support

- Market research: continuous intelligence gathering for strategic decision-making
- Risk assessment: early warning system for industry disruptions and regulatory changes
- Opportunity identification: spot emerging markets, technologies, and business opportunities
- Brand monitoring: track industry perception and competitive positioning

### Team collaboration enhancement

- Knowledge sharing: centralized distribution of relevant industry intelligence
- Discussion facilitation: provide a common information baseline for strategic discussions
- Decision support: deliver timely intelligence for business planning and strategy sessions
- Competitive awareness: keep teams informed about competitive landscape changes

## Workflow limitations

- Language dependency: currently optimized for French analysis output (easily customizable)
- Processing capacity: limited to 3 articles per query (configurable based on API limits)
- Platform specificity: configured for Discord delivery (adaptable to other platforms)
- Scheduling constraints: fixed weekly schedule (customizable via cron expressions)
- Content access: dependent on article accessibility and website compatibility with Firecrawl
- API dependencies: requires active subscriptions and proper rate-limit management for all integrated services
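As referenced in the "Discord Optimization" phase above, long reports must be split to respect Discord's 2000-character message limit. A minimal Code-node sketch of that segmentation, splitting on paragraph boundaries where possible (the `report` field name is an assumption, not the template's exact field):

```javascript
// Sketch: segment a long report into Discord-safe chunks (<= 2000 chars).
// Splits on paragraph boundaries when possible; falls back to hard cuts.
const LIMIT = 2000;
const report = $input.first().json.report ?? ""; // assumed field name

const chunks = [];
let current = "";

for (const paragraph of report.split("\n\n")) {
  const candidate = current ? `${current}\n\n${paragraph}` : paragraph;
  if (candidate.length <= LIMIT) {
    current = candidate;
  } else {
    if (current) chunks.push(current);
    // A single paragraph may itself exceed the limit: hard-split it.
    let rest = paragraph;
    while (rest.length > LIMIT) {
      chunks.push(rest.slice(0, LIMIT));
      rest = rest.slice(LIMIT);
    }
    current = rest;
  }
}
if (current) chunks.push(current);

// One n8n item per Discord message to send.
return chunks.map((content) => ({ json: { content } }));
```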
by Jitesh Dugar
## Overview

Advanced AI-powered stock analysis workflow that combines multi-timeframe technical analysis with real-time news sentiment to generate actionable BUY/SELL/HOLD recommendations. It uses sophisticated algorithms to process price data, news sentiment, and market context for informed trading decisions.

## Core Features

### Multi-Timeframe Technical Analysis

- **4-Hour Charts** - intraday trend analysis and entry timing
- **Daily Charts** - primary trend identification and key levels
- **Weekly Charts** - long-term context and major trend direction
- **Moving Average Analysis** - 5, 10, and 20-period trend indicators
- **Support/Resistance Levels** - dynamic price level identification
- **Volume Analysis** - trading activity and momentum confirmation

### AI-Powered News Sentiment Analysis

- **Real-Time News Processing** - latest market-moving headlines
- **Sentiment Scoring** - numerical sentiment rating (-1 to +1 scale)
- **Impact Assessment** - news relevance to stock performance
- **Multi-Source Analysis** - comprehensive news coverage evaluation
- **Context-Aware Processing** - financial market-specific sentiment analysis

### Intelligent Recommendation Engine

- **Professional Trading Logic** - multi-timeframe alignment analysis
- **Risk/Reward Calculations** - minimum 1:2 ratio requirement (see the sketch at the end of this description)
- **Entry/Exit Price Targets** - specific, actionable price levels
- **Stop-Loss Recommendations** - risk management guidelines
- **Confidence Scoring** - recommendation strength assessment

## Technical Capabilities

### Data Sources & APIs

- **TwelveData API** - professional-grade price and volume data
- **NewsAPI Integration** - comprehensive news coverage
- **Perplexity AI** - additional sentiment context and analysis
- **Chart-Img API** - visual chart generation for analysis
- **Real-Time Processing** - live market data integration

### AI Models & Analysis

- **GPT-4 Integration** - advanced natural language processing
- **Custom Sentiment Engine** - financial market-tuned sentiment analysis
- **Multi-Model Approach** - cross-validation of recommendations
- **Algorithmic Trading Logic** - professional-grade decision frameworks

### Visual Analysis Tools

- **Interactive Charts** - TradingView-style chart generation
- **Technical Indicators** - visual representation of the analysis
- **Dark Theme Support** - professional trading interface
- **Multiple Timeframes** - comprehensive visual analysis

## Use Cases & Applications

### Individual Traders

- **Day Trading Signals** - short-term entry/exit recommendations
- **Swing Trading Analysis** - multi-day position guidance
- **Risk Management** - stop-loss and position sizing advice
- **Market Timing** - optimal entry point identification

### Investment Research

- **Due Diligence** - comprehensive stock analysis
- **Sentiment Monitoring** - news impact assessment
- **Technical Screening** - multi-criteria stock evaluation
- **Portfolio Optimization** - individual stock recommendations

### Automated Trading Systems

- **Signal Generation** - systematic buy/sell/hold alerts
- **Risk Controls** - automated stop-loss calculations
- **Multi-Asset Analysis** - scalable across a stock universe
- **Backtesting Support** - historical recommendation validation

### Financial Advisors & Analysts

- **Client Reporting** - professional analysis documentation
- **Research Automation** - streamlined analysis workflow
- **Decision Support** - data-driven recommendation framework
- **Market Commentary** - AI-generated insights and rationale

## Key Benefits

### Professional-Grade Analysis

- **Institutional Quality** - bank-level analytical frameworks
- **Multi-Dimensional** - technical + fundamental + sentiment analysis
- **Real-Time Processing** - live market data integration
- **Objective Decision Making** - removes emotional bias from analysis

### Time Efficiency

- **Instant Analysis** - seconds instead of hours of manual research
- **Automated Processing** - continuous market monitoring
- **Scalable Operations** - analyze multiple stocks simultaneously
- **24/7 Availability** - round-the-clock market analysis

### Risk Management

- **Built-in Stop Losses** - automatic risk level calculation
- **Position Sizing** - risk-appropriate recommendation sizing
- **Multi-Timeframe Validation** - reduces false signals
- **Conservative Approach** - defaults to HOLD when uncertain

## Setup Requirements

### API Keys Needed

1. TwelveData API - free tier available at twelvedata.com
2. NewsAPI Key - free tier available at newsapi.org
3. OpenAI API - for GPT-4 analysis capabilities
4. Perplexity API - additional sentiment analysis
5. Chart-Img API - optional chart visualization (chart-img.com)

### Configuration Steps

1. API Integration - add your API keys to the respective nodes
2. Symbol Format - supports company names or stock symbols
3. Risk Parameters - customize stop-loss and target calculations
4. Notification Setup - configure alert delivery methods
5. Testing & Validation - verify API connections and data flow

## Advanced Features

### Natural Language Processing

- **Company Name Recognition** - automatic symbol conversion
- **Context Understanding** - market-aware news interpretation
- **Multi-Language Support** - global news source analysis
- **Entity Extraction** - key information identification

### Error Handling & Reliability

- **API Failure Recovery** - graceful degradation strategies
- **Data Validation** - input/output quality checks
- **Rate Limit Management** - automatic throttling controls
- **Backup Data Sources** - redundant information feeds

### Customization Options

- **Timeframe Selection** - adjustable analysis periods
- **Risk Tolerance** - configurable risk/reward ratios
- **Sentiment Weighting** - balance technical vs. fundamental analysis
- **Alert Thresholds** - custom trigger conditions

## Important Disclaimers

This tool provides educational and informational analysis only. All trading decisions should:

- Consider your personal risk tolerance and financial situation
- Be validated with additional research and professional advice
- Account for market volatility and potential losses
- Follow proper risk management principles

## Performance Optimization

### Speed Enhancements

- **Parallel Processing** - simultaneous data retrieval
- **Caching Strategies** - reduced API call frequency
- **Efficient Algorithms** - optimized calculation methods
- **Memory Management** - scalable resource usage

### Accuracy Improvements

- **Multi-Source Validation** - cross-reference data points
- **Historical Backtesting** - performance validation
- **Continuous Learning** - algorithm refinement
- **Market Adaptation** - evolving analysis criteria

Transform your investment research with AI-powered analysis that combines the speed of automation with the depth of professional-grade financial analysis.
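The "minimum 1:2 risk/reward" rule referenced above reduces to simple arithmetic. Here is a hedged sketch, not the template's exact logic, of how a node might validate a candidate BUY signal; entry, stop-loss, and target are assumed to come from the analysis step.

```javascript
// Sketch: enforce the minimum 1:2 risk/reward rule for a long (BUY) setup.
// entry, stopLoss, and target are assumed outputs of the analysis step.
function evaluateLongSetup(entry, stopLoss, target) {
  const risk = entry - stopLoss;   // potential loss per share
  const reward = target - entry;   // potential gain per share
  if (risk <= 0 || reward <= 0) {
    return { recommendation: "HOLD", reason: "invalid price levels" };
  }
  const ratio = reward / risk;
  return ratio >= 2
    ? { recommendation: "BUY", riskRewardRatio: Number(ratio.toFixed(2)) }
    : { recommendation: "HOLD", reason: `risk/reward ${ratio.toFixed(2)} below 1:2` };
}

// Example: entry 100, stop 95, target 112 -> reward 12 / risk 5 = 2.4 -> BUY
console.log(evaluateLongSetup(100, 95, 112));
```

This mirrors the workflow's conservative bias: any setup that fails the ratio check falls back to HOLD rather than forcing a trade signal.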
by Airtop
# Extract Facebook Group Posts with Airtop

## Use Case

Extracting content from Facebook Groups allows community managers, marketers, and researchers to gather insights, monitor discussions, and collect engagement metrics efficiently. This automation streamlines the process of retrieving non-sponsored post data from group feeds.

## What This Automation Does

This automation extracts key post details from a Facebook Group feed using the following input parameters:

- **Facebook Group URL**: the URL of the Facebook Group feed you want to scrape.
- **Airtop Profile**: the name of your Airtop Profile authenticated to Facebook.

It returns up to 5 non-sponsored posts with the following attributes for each:

- Post text
- Post URL
- Page/profile URL
- Timestamp
- Number of likes
- Number of shares
- Number of comments
- Page or profile details
- Post thumbnail

## How It Works

1. Form Trigger: collects the Facebook Group URL and Airtop Profile via a form.
2. Browser Automation: initiates a new browser session using Airtop, navigates to the provided Facebook Group feed, and uses an AI prompt to extract post data, including interaction metrics and profile information.
3. Structured Output: the results are returned in a defined JSON schema, ready for downstream use.

## Setup Requirements

- Airtop API Key — free to generate.
- An Airtop Profile logged into Facebook.

## Next Steps

- **Integrate With Analytics Tools**: feed the output into dashboards or analytics platforms to monitor community engagement.
- **Automate Alerts**: trigger notifications for posts matching certain criteria (e.g., high engagement, keywords).
- **Combine With Comment Automation**: extend this to reply to posts or engage with users using other Airtop automations.

Read more about how to extract posts from Facebook groups
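For downstream nodes, each returned post could look roughly like the object below. The field names are illustrative assumptions mirroring the attribute list above, not Airtop's exact output schema; check one real execution before mapping fields.

```javascript
// Illustrative example of one extracted post. Field names are assumptions
// based on the attribute list above, not Airtop's exact schema.
const examplePost = {
  postText: "We're hiring two community moderators...",
  postUrl: "https://www.facebook.com/groups/123456789/posts/987654321/",
  profileUrl: "https://www.facebook.com/some.profile",
  timestamp: "2024-05-12T09:30:00Z",
  likes: 42,
  shares: 7,
  comments: 13,
  profileDetails: "Group admin, local business owner",
  thumbnailUrl: "https://scontent.example/thumb.jpg",
};

// e.g., a filter node could route posts with likes + comments above a threshold
console.log(examplePost.likes + examplePost.comments > 25); // true
```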
by Mark Shcherbakov
## Video Guide

I prepared a detailed guide that shows the whole process of building an AI tool to analyze Instagram Reels using n8n. Youtube Link

## Who is this for?

This workflow is ideal for social media analysts, digital marketers, and content creators who want to leverage data-driven insights from their Instagram Reels. It's particularly useful for those looking to automate the analysis of video performance to inform strategy and content creation.

## What problem does this workflow solve?

Analyzing video performance on Instagram can be tedious and time-consuming, requiring multiple steps and data extraction. This workflow automates the process of fetching, analyzing, and recording insights from Instagram Reels, making it simpler for users to track engagement metrics without manual intervention.

## What this workflow does

This workflow integrates several services to analyze Instagram Reels, allowing users to:

- Automatically fetch recent Reels from specified creators.
- Analyze the most-watched videos for insights.
- Store and manage data in Airtable for easy access and reporting.

Its main stages:

1. **Initial Trigger**: the process begins with a manual trigger that can later be modified for scheduled automation.
2. **Data Retrieval**: it connects to Airtable to fetch a list of creators and their respective Instagram Reels.
3. **Video Analysis**: it handles fetching, downloading, and uploading videos for analysis using an external service, simplifying performance tracking through a structured query process.
4. **Record Management**: it saves relevant metrics and insights into Airtable, ensuring that users can access and organize their video analytics effectively.

## Setup

1. Create accounts: set up Airtable, Edify, n8n, and Gemini accounts.
2. Prepare triggers and modules: replace the credentials in each node accordingly.
3. Configure the data flow: ensure the modules are set to fetch and analyze the correct data fields as outlined in the guide.
4. Test the workflow: run the scenario manually to confirm that data is fetched and analyzed correctly.
by David
## Who might benefit from this workflow?

Do you have to record your working hours yourself? Then this n8n workflow, in combination with an iOS Shortcut, will definitely help you. Once set up, you can use a shortcut (which can be stored as an app icon on your home screen) to record your start time, your end time, and the duration of your break.

## How it works

Once set up, you can tap the iOS Shortcut on your iPhone. You will see a menu containing three options: "Track Start", "Track Break" and "Track End". After the time is tracked, iOS will display a notification about the successful operation.

## How to set it up

1. Copy the Notion database to your Notion workspace (top right corner).
2. Copy the n8n workflow to your n8n workspace.
3. In the Notion nodes of the n8n workflow, add your Notion credentials and select the copied Notion database.
4. Download the iOS Shortcut from our documentation page.
5. Edit the shortcut and paste the URL of your n8n Webhook trigger node into the first "Text" node of the iOS Shortcut flow.

It is a best practice to use authentication. You can do so by adding "Header" auth to the webhook node and to the shortcut.

Need help implementing this or any other n8n workflow? Feel free to contact me via LinkedIn or my business website. Want to start using n8n? Use this link to register for n8n (this is an affiliate link).
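A minimal sketch of how a node behind the webhook could route the three menu actions. It assumes the shortcut POSTs a JSON body like {"action": "Track Start"}; the exact payload depends on how you build the shortcut, so treat the field name as an assumption.

```javascript
// Sketch for a Code node behind the Webhook trigger. Assumes the iOS
// Shortcut POSTs JSON such as {"action": "Track Start"}; adjust to your
// shortcut's actual payload shape.
const payload = $input.first().json;
const action = payload.body?.action ?? payload.action;
const now = new Date().toISOString();

switch (action) {
  case "Track Start":
    return [{ json: { event: "start", timestamp: now } }];
  case "Track Break":
    return [{ json: { event: "break", timestamp: now } }];
  case "Track End":
    return [{ json: { event: "end", timestamp: now } }];
  default:
    throw new Error(`Unknown action: ${action}`);
}
```

Downstream, each event branch can write to the corresponding Notion property for that day's row.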
by Vadym Nahornyi
## How it works

Automatically sends Telegram notifications when any n8n workflow fails. Each alert includes the workflow name, the error message, and the execution ID.

## Setup

Complete setup instructions are included in the workflow's sticky note in 5 languages: 🇬🇧 English, 🇪🇸 Español, 🇩🇪 Deutsch, 🇫🇷 Français, 🇷🇺 Русский.

## Features

- Monitors all workflows 24/7
- Instant Telegram notifications
- Zero configuration needed
- Just add your bot token and chat ID

## Important

⚠️ Keep this workflow active 24/7 to capture all errors.
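For reference, the alert text could be assembled from the Error Trigger's output roughly like this. The field paths (workflow.name, execution.id, execution.error.message) follow the n8n Error Trigger's typical output, but verify them against your own execution data.

```javascript
// Sketch: build the Telegram alert text from an Error Trigger item.
// Field paths follow the Error Trigger's usual output; verify in your data.
const data = $input.first().json;

const workflowName = data.workflow?.name ?? "unknown workflow";
const executionId = data.execution?.id ?? "n/a";
const errorMessage = data.execution?.error?.message ?? "no error message";

const alertText = [
  `❌ Workflow failed: ${workflowName}`,
  `Error: ${errorMessage}`,
  `Execution ID: ${executionId}`,
].join("\n");

return [{ json: { text: alertText } }];
```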
by Jay Emp0
## Overview

Fetches multiple Google Analytics GA4 metrics daily, posts them to Discord, and updates the previous days' entries as GA data finalizes over seven days.

## Benefits

- Automates daily traffic reporting
- Maintains a single message per day, avoiding channel clutter
- Provides near-real-time updates by editing prior messages

## Use Case

Teams tracking website performance via Discord (or any chat tool) without manual copy-paste: marketing managers, community moderators, growth hackers. If your manager asks you for a daily marketing report every morning, you can now automate it.

## Notes

- The Google Analytics node in n8n does not provide real-time data; GA updates previous values over the next 7 days.
- The Discord node in n8n cannot update an existing message by message ID, so the Discord API is called directly for that (see the sketch below).
- Most businesses use multiple Google Analytics properties across their digital platforms.

## Core Logic

1. A Schedule Trigger fires once a day.
2. The Google Analytics node retrieves metrics for the date range (past 7 days).
3. An Aggregate node collates all records.
4. The Discord node fetches the last 10 messages in the broadcast channel.
5. A Code node maps existing Discord messages to the Google Analytics data using the date fields.
6. For each GA record: if no message exists → send a new POST to the Discord channel; if a message exists and the metrics changed → send a PATCH to update the existing Discord message.
7. Batch loops and wait nodes prevent rate limiting.

## Setup Instructions

1. Import the workflow JSON into n8n.
2. Follow the n8n guide to create a Google Analytics OAuth2 credential with access to all required GA accounts.
3. Follow the n8n guide to create a Discord OAuth2 credential for the "Get Messages" operations.
4. Follow the Discord guide to create an HTTP Header Auth credential named "Discord-Bot" with header key: Authorization, value: Bot <your-bot-token>.
5. In the two Set nodes at the beginning of the flow, assign discord_channel_id and google_analytics_id.
   - Get your Discord channel ID by sending a message in your Discord channel and copying the message link. The link has the form https://discord.com/channels/server_id/channel_id/message_id; the channel_id is the number in the middle.
   - Find your Google Analytics property ID in the Google Analytics dashboard: open the properties list in the top right and copy that number into the flow.
6. Adjust the Schedule Trigger times to your preferred report hour.
7. Activate the workflow.

## Customization

Replace the Discord HTTP Request nodes with Slack, ClickUp, WhatsApp, or Telegram integrations by swapping the POST/PATCH endpoints and authentication.
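The update step uses Discord's standard message-edit endpoint, PATCH /channels/{channel_id}/messages/{message_id}. A minimal sketch of the call the HTTP Request node performs, shown as fetch code; the bot token, IDs, and message content are placeholders.

```javascript
// Sketch: edit an existing Discord message via the Discord REST API.
// PATCH /channels/{channel_id}/messages/{message_id} is Discord's documented
// edit endpoint; token and IDs below are placeholders.
const BOT_TOKEN = "<your-bot-token>";
const channelId = "<discord_channel_id>";
const messageId = "<existing_message_id>";

const res = await fetch(
  `https://discord.com/api/v10/channels/${channelId}/messages/${messageId}`,
  {
    method: "PATCH",
    headers: {
      Authorization: `Bot ${BOT_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      content: "GA4 report for 2024-05-12 (updated): 1,234 sessions",
    }),
  },
);

if (!res.ok) throw new Error(`Discord API error: ${res.status}`);
```

Sending a new day's report is the same call with POST to `/channels/{channelId}/messages` and no message ID, which is what the workflow does when no matching message is found.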
by Anna Bui
Automatically monitor LinkedIn posts from your community members and create AI-powered content digests for efficient social media curation.

This template is perfect for community managers, content creators, and social media teams who need to track LinkedIn activity from their network without spending hours manually checking profiles. It fetches recent posts, extracts key information, and creates digestible summaries using AI.

## Good to know

- **API costs apply** - LinkedIn API calls ($0.01-0.05 per profile check) and OpenAI processing ($0.001-0.01 per post)
- **Rate limiting included** - built-in random delays prevent API throttling issues
- **Flexible scheduling** - easy to switch from a daily schedule to webhook triggers for real-time processing
- **Requires API setup** - needs RapidAPI access for LinkedIn data and OpenAI for content processing

## How it works

- **Daily profile scanning** - automatically checks each LinkedIn profile in your Airtable for posts from yesterday
- **Smart data extraction** - pulls post content, engagement metrics, author information, and timestamps
- **AI-powered summarization** - creates 30-character previews of posts for quick content scanning
- **Duplicate prevention** - checks existing records to avoid storing the same post multiple times (see the sketch at the end of this description)
- **Structured storage** - saves all processed data to Airtable with clean formatting and metadata
- **Batch processing** - handles multiple profiles efficiently with proper error handling and delays

## How to use

1. **Set up the Airtable base** - create tables for LinkedIn profiles and processed posts using the provided structure
2. **Configure API credentials** - add your RapidAPI LinkedIn access and OpenAI API key to n8n credentials
3. **Import LinkedIn profiles** - add community members' LinkedIn URLs and URNs to your profiles table
4. **Test the workflow** - run manually with a few profiles to ensure everything works correctly
5. **Activate the schedule** - enable daily automation or switch to webhook triggers for real-time processing

## Requirements

- **Airtable account** - for storing profile lists and managing processed posts with the proper field structure
- **RapidAPI Professional Network Data API** - access to LinkedIn post data (requires a subscription)
- **OpenAI API account** - for intelligent content summarization and preview generation
- **LinkedIn profile URNs** - properly formatted LinkedIn profile identifiers for API calls

## Customising this workflow

- **Change monitoring frequency** - switch from daily to hourly checks or use webhook triggers for real-time updates
- **Expand data extraction** - add company information, hashtag analysis, or engagement trending
- **Integrate notification systems** - add Slack, email, or Discord alerts for high-engagement posts
- **Connect content tools** - link to Buffer, Hootsuite, or other social media management platforms for direct publishing
- **Add filtering logic** - set up conditions to only process posts with minimum engagement thresholds
- **Scale with multiple communities** - duplicate the workflow for different LinkedIn communities or industry segments
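The duplicate-prevention step referenced above can be pictured as a small Code node that compares fetched posts against records already loaded from Airtable. This is a sketch under assumed names: a node called "Get Existing Posts" and a postUrl field as the unique key; adjust both to your base.

```javascript
// Sketch: drop posts already stored in Airtable. Assumes fetched posts carry
// a postUrl field and that existing records were loaded by a node named
// "Get Existing Posts" with the same field; adjust names to your base.
const existing = new Set(
  $("Get Existing Posts").all().map((item) => item.json.postUrl),
);

const fresh = $input.all().filter((item) => !existing.has(item.json.postUrl));
return fresh;
```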
by Alex Kim
# Printify Automation - Update Title and Description Workflow

This n8n workflow automates the process of retrieving products from Printify, generating optimized product titles and descriptions, and updating them back to the platform. It leverages OpenAI for content generation and integrates with Google Sheets for tracking and managing updates.

## Features

- **Integration with Printify**: fetch shops and products through Printify's API (see the sketch at the end of this description).
- **AI-Powered Optimization**: generate engaging product titles and descriptions using OpenAI's GPT model.
- **Google Sheets Tracking**: log and manage updates in Google Sheets.
- **Custom Brand Guidelines**: ensure a consistent tone by incorporating brand-specific instructions.
- **Loop Processing**: iteratively process each product in batches.

## Workflow Structure

### Nodes Overview

1. **Manual Trigger**: manually start the workflow for testing purposes.
2. **Printify - Get Shops**: retrieves the list of shops from Printify.
3. **Printify - Get Products**: fetches product details for each shop.
4. **Split Out**: breaks the product list into individual items for processing.
5. **Loop Over Items**: iteratively processes products in manageable batches.
6. **Generate Title and Desc**: uses OpenAI GPT to create optimized product titles and descriptions.
7. **Google Sheets Integration**:
   - Trigger: monitors Google Sheets for changes.
   - Log Updates: records product updates, including old and new titles/descriptions.
8. **Conditional Logic**: If nodes ensure products are ready for updates and stop processing once completed.
9. **Printify - Update Product**: sends updated titles and descriptions back to Printify.
10. **Brand Guidelines + Custom Instructions**: sets the brand tone and seasonal instructions.

## Setup Instructions

### Prerequisites

1. **n8n Instance**: ensure n8n is installed and configured.
2. **Printify API Key**: obtain an API key from your Printify account and add it to n8n under HTTP Header Auth.
3. **OpenAI API Key**: obtain an API key from OpenAI and add it to n8n under OpenAI API.
4. **Google Sheets Integration**: share your Google Sheets with the Google API service account and configure Google Sheets credentials in n8n.

### Workflow Configuration

1. **Set Brand Guidelines**: update the Brand Guidelines + Custom Instructions node with your brand name, tone, and seasonal instructions.
2. **Batch Size**: configure the Loop Over Items node for optimal batch sizes.
3. **Google Sheets Configuration**: set the correct Google Sheets document and sheet names in the integration nodes.
4. **Run the Workflow**: start manually or configure the workflow to trigger automatically.

## Key Notes

- **Customization**: modify the API calls to support other platforms like Printful or Vistaprint.
- **Scalability**: use batch processing for efficient handling of large product catalogs.
- **Error Handling**: configure retries or logging for any failed nodes.

## Output Examples

**Optimized content example**
- Input title: "Classic White T-Shirt"
- Generated title: "Stylish Classic White Tee for Everyday Wear"
- Input description: "Plain white T-shirt made of cotton."
- Generated description: "Discover comfort and style with our classic white tee, crafted from premium cotton for all-day wear. Perfect for casual outings or layering."

## Next Steps

- **Monitor Updates**: use Google Sheets to review the logs of updated products.
- **Expand Integration**: add support for more Printify shops or integrate with other platforms.
- **Enhance AI Prompts**: customize prompts for different product categories or seasonal needs.

Feel free to reach out for additional guidance or troubleshooting!
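For orientation, the two Printify calls at the start of the workflow hit Printify's REST API with a personal API token. A minimal fetch sketch follows; the endpoints reflect Printify's public v1 API as commonly documented, but verify paths and response shapes against Printify's current docs, and the token is a placeholder.

```javascript
// Sketch: list shops, then products for the first shop, via Printify's v1
// REST API. Token is a placeholder; verify endpoints against current docs.
const API_TOKEN = "<your-printify-api-token>";
const headers = { Authorization: `Bearer ${API_TOKEN}` };

const shops = await (
  await fetch("https://api.printify.com/v1/shops.json", { headers })
).json();

const shopId = shops[0].id;
const products = await (
  await fetch(`https://api.printify.com/v1/shops/${shopId}/products.json`, { headers })
).json();

// The products response is paginated; data holds the current page's items.
console.log(`Shop ${shopId}: ${products.data.length} products on this page`);
```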
by Dustin
Short and simple: this workflow will sync (add and delete) your Liked Songs to a custom playlist that can be shared.

## Setup

1. Create an app on the Spotify Developer Dashboard.
2. Create Spotify credentials: just click on one of the Spotify nodes in the workflow, click "create new credentials", and follow the guide.
3. Create the Spotify playlist that you want to sync to.
4. Copy the exact name of your playlist, go into the "Edit set Vars" node, and replace the value "CHANGE MEEEE" with your playlist name.
5. Set your Spotify credentials on every Spotify node (these should be marked with yellow and red notes).
6. Do you use Gotify?
   - No: delete the Gotify nodes (all the way to the right end of the workflow).
   - Yes: customize the Gotify nodes to your needs.
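The core of the sync, deciding which tracks to add to and which to remove from the playlist, boils down to a set difference. A sketch of that logic as a Code node; the node names ("Get Liked Songs", "Get Playlist Tracks") and the json.uri field are illustrative assumptions, not the workflow's exact names.

```javascript
// Sketch: compute the tracks to add/remove so the playlist mirrors Liked
// Songs. Node names and the json.uri field are illustrative assumptions.
const liked = new Set($("Get Liked Songs").all().map((i) => i.json.uri));
const inPlaylist = new Set($("Get Playlist Tracks").all().map((i) => i.json.uri));

const toAdd = [...liked].filter((uri) => !inPlaylist.has(uri));
const toRemove = [...inPlaylist].filter((uri) => !liked.has(uri));

// Downstream Spotify nodes can consume these two lists.
return [{ json: { toAdd, toRemove } }];
```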