by Growth AI
Who's it for Marketing teams, business intelligence professionals, competitive analysts, and executives who need consistent industry monitoring with AI-powered analysis and automated team distribution via Discord. What it does This intelligent workflow automatically monitors multiple industry topics, scrapes and analyzes relevant news articles using Claude AI, and delivers professionally formatted intelligence reports to your Discord channel. The system provides weekly automated monitoring cycles with personalized bot communication and comprehensive content analysis. How it works The workflow follows a sophisticated 7-phase automation process: Scheduled Activation: Triggers weekly monitoring cycles (default: Mondays at 9 AM) Query Management: Retrieves monitoring topics from centralized Google Sheets configuration News Discovery: Executes comprehensive Google News searches using SerpAPI for each configured topic Content Extraction: Scrapes full article content from top 3 sources per topic using Firecrawl AI Analysis: Processes scraped content using Claude 4 Sonnet for intelligent synthesis and formatting Discord Optimization: Automatically segments content to comply with Discord's 2000-character message limits Automated Delivery: Posts formatted intelligence reports to Discord channel with branded "Claptrap" bot personality Requirements Google Sheets account for query management SerpAPI account for Google News access Firecrawl account for article content extraction Anthropic API access for Claude 4 Sonnet Discord bot with proper channel permissions Scheduled execution capability (cron-based trigger) How to set up Step 1: Configure Google Sheets query management Create monitoring sheet: Set up Google Sheets document with "Query" sheet Add search topics: Include industry keywords, competitor names, and relevant search terms Sheet structure: Simple column format with "Query" header containing search terms Access permissions: Ensure n8n has read access to the Google 
Sheets document Step 2: Configure API credentials Set up the following credentials in n8n: Google Sheets OAuth2: For accessing query configuration sheet SerpAPI: For Google News search functionality with proper rate limits Firecrawl API: For reliable article content extraction across various websites Anthropic API: For Claude 4 Sonnet access with sufficient token limits Discord Bot API: With message posting permissions in target channel Step 3: Customize scheduling settings Cron expression: Default set to "0 9 * * 1" (Mondays at 9 AM) Frequency options: Adjust for daily, weekly, or custom monitoring cycles Timezone considerations: Configure according to team's working hours Execution timing: Ensure adequate processing time for multiple topics Step 4: Configure Discord integration Set up Discord delivery settings: Guild ID: Target Discord server (currently: 919951151888236595) Channel ID: Specific monitoring channel (currently: 1334455789284364309) Bot permissions: Message posting, embed suppression capabilities Brand personality: Customize "Claptrap" bot messaging style and tone Step 5: Customize content analysis Configure AI analysis parameters: Analysis depth: Currently processes top 3 articles per topic Content format: Structured markdown format with consistent styling Language settings: Currently configured for French output (easily customizable) Quality controls: Error handling for inaccessible articles and content How to customize the workflow Query management expansion Topic categories: Organize queries by industry, competitor, or strategic focus areas Keyword optimization: Refine search terms based on result quality and relevance Dynamic queries: Implement time-based or event-triggered query modifications Multi-language support: Add international keyword variations for global monitoring Advanced content processing Article quantity: Modify from 3 to more articles per topic based on analysis needs Content filtering: Add quality scoring and relevance filtering 
for article selection Source preferences: Implement preferred publisher lists or source quality weighting Content enrichment: Add sentiment analysis, trend identification, or competitive positioning Discord delivery enhancements Rich formatting: Implement Discord embeds, reactions, or interactive elements Multi-channel distribution: Route different topics to specialized Discord channels Alert levels: Add priority-based messaging for urgent industry developments Archive functionality: Create searchable message threads or database storage Integration expansions Slack compatibility: Replace or supplement Discord with Slack notifications Email reports: Add formatted email distribution for executive summaries Database storage: Implement persistent storage for historical analysis and trending API endpoints: Create webhook endpoints for third-party system integration AI analysis customization Analysis templates: Create topic-specific analysis frameworks and formatting Competitive focus: Enhance competitor mention detection and analysis depth Trend identification: Implement cross-topic trend analysis and strategic insights Summary levels: Create executive summaries alongside detailed technical analysis Advanced monitoring features Intelligent content curation The system provides sophisticated content management: Relevance scoring: Automatic ranking of articles by topic relevance and publication authority Duplicate detection: Prevents redundant coverage of the same story across different sources Content quality assessment: Filters low-quality or promotional content automatically Source diversity: Ensures coverage from multiple perspectives and publication types Error handling and reliability Graceful degradation: Continues processing even if individual articles fail to scrape Retry mechanisms: Automatic retry logic for temporary API failures or network issues Content fallbacks: Uses article snippets when full content extraction fails Notification continuity: Ensures Discord 
delivery even with partial content processing Results interpretation Intelligence report structure Each monitoring cycle delivers: Topic-specific summaries: Individual analysis for each configured search query Source attribution: Complete citation with publication date, source, and URL Structured formatting: Consistent presentation optimized for quick scanning Professional analysis: AI-generated insights maintaining factual accuracy and business context Performance analytics Monitor system effectiveness through: Processing metrics: Track successful article extraction and analysis rates Content quality: Assess relevance and usefulness of delivered intelligence Team engagement: Monitor Discord channel activity and report utilization System reliability: Track execution success rates and error patterns Use cases Competitive intelligence Market monitoring: Track competitor announcements, product launches, and strategic moves Industry trends: Identify emerging technologies, regulatory changes, and market shifts Partnership tracking: Monitor alliance formations, acquisitions, and strategic partnerships Leadership changes: Track executive movements and organizational restructuring Strategic planning support Market research: Continuous intelligence gathering for strategic decision-making Risk assessment: Early warning system for industry disruptions and regulatory changes Opportunity identification: Spot emerging markets, technologies, and business opportunities Brand monitoring: Track industry perception and competitive positioning Team collaboration enhancement Knowledge sharing: Centralized distribution of relevant industry intelligence Discussion facilitation: Provide common information baseline for strategic discussions Decision support: Deliver timely intelligence for business planning and strategy sessions Competitive awareness: Keep teams informed about competitive landscape changes Workflow limitations Language dependency: Currently optimized for French analysis 
output (easily customizable) Processing capacity: Limited to 3 articles per query (configurable based on API limits) Platform specificity: Configured for Discord delivery (adaptable to other platforms) Scheduling constraints: Fixed weekly schedule (customizable via cron expressions) Content access: Dependent on article accessibility and website compatibility with Firecrawl API dependencies: Requires active subscriptions and proper rate limit management for all integrated services
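The Discord-optimization phase described above (segmenting reports to fit the 2000-character message limit) can be sketched as a simple splitter. This is a minimal, hypothetical example — the workflow does this inside an n8n Code node, and the paragraph-based splitting strategy is an assumption:

```python
def split_for_discord(text: str, limit: int = 2000) -> list[str]:
    """Split a report into chunks that respect Discord's message limit,
    preferring to break on paragraph boundaries."""
    chunks, current = [], ""
    for paragraph in text.split("\n\n"):
        candidate = f"{current}\n\n{paragraph}" if current else paragraph
        if len(candidate) <= limit:
            current = candidate
        else:
            if current:
                chunks.append(current)
            # A single oversized paragraph is hard-wrapped as a fallback.
            while len(paragraph) > limit:
                chunks.append(paragraph[:limit])
                paragraph = paragraph[limit:]
            current = paragraph
    if current:
        chunks.append(current)
    return chunks
```

Each returned chunk is then posted as a separate Discord message, so long intelligence reports arrive intact.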
by Trung Tran
Cloudflare Incident Monitoring & Escalation Workflow 🚀 Try Decodo — Web Scraping & Data API (Coupon: TRUNG) Decodo is a powerful public data access platform offering managed web scraping APIs and proxy infrastructure to collect structured web data at scale. It handles proxies, anti-bot protection, JavaScript rendering, retries, and global IP rotation—so you can focus on data, not scraping complexity. Why Decodo Managed Web Scraping API with anti-bot bypass & high success rates Works with JS-heavy sites; outputs JSON/HTML/CSV Easy integration (Python, Node.js, cURL) for eCommerce, SERP, social & general web data 🎟️ Special Discount Use coupon TRUNG to get the Advanced Scraping API plan — 23,000 requests for $5. Who this workflow is for For DevOps, SRE, IT Ops, and Platform teams running production traffic behind Cloudflare who need reliable incident awareness without alert fatigue. Use it if you want: Continuous Cloudflare incident monitoring Clear severity-based routing Automatic escalation into JIRA Clean Slack & Telegram notifications Deduplicated, noise-controlled alerts What this workflow does This workflow polls the Cloudflare Status API, detects unresolved incidents, scores their impact, and routes them to the right channels. High-impact incidents are escalated to JIRA. Lower-impact updates are notified (or skipped) to reduce noise. How it works (high level) Runs on a fixed schedule (e.g. 
every 5 minutes) Fetches current Cloudflare incidents Stops early if no active issues exist Normalizes and scores incidents (severity, impact, affected service) Deduplicates previously-alerted incidents Builds human-readable notification payloads Routes by impact: High → create JIRA incident + notify Low → notify or suppress Sends alerts to Slack and Telegram Requirements Decodo Scraper API credential n8n (self-hosted or Cloud) Cloudflare Status API (public) Slack bot (chat:write) Telegram bot + chat ID JIRA project with issue-create permission Optional LLM credentials (summarization/classification) Notes All secrets are stored in n8n Credentials Workflow is idempotent and safe to rerun No assumptions about root cause or remediation Built for production-grade incident visibility with n8n.
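The scoring, deduplication, and routing steps can be sketched as a pure function. The `id`, `name`, and `impact` fields follow the Statuspage v2 incident format that Cloudflare's status API exposes; the escalation threshold (major or worse goes to JIRA) is an assumption:

```python
IMPACT_SCORES = {"critical": 3, "major": 2, "minor": 1, "none": 0}

def route_incidents(incidents, already_alerted):
    """Score unresolved incidents and decide routing.
    High-impact incidents escalate to JIRA; the rest only notify."""
    actions = []
    for inc in incidents:
        if inc["id"] in already_alerted:
            continue  # deduplicate: skip incidents we already announced
        score = IMPACT_SCORES.get(inc.get("impact", "none"), 0)
        actions.append({
            "id": inc["id"],
            "name": inc["name"],
            # assumed threshold: "major" and "critical" create a JIRA issue
            "escalate_to_jira": score >= 2,
        })
    return actions
```

Because previously alerted incident IDs are skipped, re-running the workflow is idempotent, as the notes above state.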
by Vadym Nahornyi
> ⚠️ Multi-language WhatsApp Error Notifier Get instant WhatsApp alerts when any workflow fails — perfect for mobile-first monitoring and fast incident response. ✅ No coding required ✅ Works with any workflow via Error Workflow ✅ Step-by-step setup instructions included in: 🇬🇧 English 🇪🇸 Español 🇩🇪 Deutsch 🇫🇷 Français 🇷🇺 Русский 📦 What This Template Does This template sends real-time WhatsApp notifications when a workflow fails. It uses the WhatsApp Business Cloud API to deliver a preformatted error message directly to your phone. The message includes: Workflow name Error message Last executed node Example message: Error on WorkFlow: {{ $json.workflow.name }} Message: {{ $json.execution.error.message }} lastNodeExecuted: {{ $json.execution.lastNodeExecuted }} ⚙️ Prerequisites Before using this template, make sure you have: A verified Facebook Business account Access to WhatsApp Business Cloud API A sender phone number (registered in Meta) An access token (used as credentials in n8n) A pre-approved message template (or be within the 24h session window) More info from Meta Docs → 🚀 How to Use Open the template and insert your WhatsApp credentials Enter your target phone number (e.g. your own) in international format Customize the message body if needed Save the workflow but do not activate it In any other workflow → open Settings → set this as your Error Workflow 🌐 Multi-language Setup Guide Included This template includes full setup instructions with screenshots and message formatting help in: 🇬🇧 English 🇪🇸 Español 🇩🇪 Deutsch 🇫🇷 Français 🇷🇺 Русский Choose your language inside the embedded sticky note in the workflow.
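The notification is delivered as a WhatsApp Cloud API text message. A sketch of the payload the workflow assembles — the helper name is illustrative, and note that outside the 24-hour session window a pre-approved template message is required instead of plain text:

```python
def build_error_message(workflow_name, error_message, last_node, to_number):
    """Build a WhatsApp Cloud API text-message payload mirroring the
    template's preformatted error notification."""
    body = (
        f"Error on WorkFlow: {workflow_name}\n"
        f"Message: {error_message}\n"
        f"lastNodeExecuted: {last_node}"
    )
    return {
        "messaging_product": "whatsapp",  # required by the Cloud API
        "to": to_number,                  # international format, e.g. "+4915..."
        "type": "text",
        "text": {"body": body},
    }
```

In the actual workflow the three fields are filled from n8n's error-trigger expressions (`$json.workflow.name`, `$json.execution.error.message`, `$json.execution.lastNodeExecuted`).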
by Mark Shcherbakov
Video Guide I prepared a detailed guide that showed the whole process of integrating the Binance API and storing data in Airtable to manage funding statements associated with tokens in a wallet. Youtube Link Who is this for? This workflow is ideal for developers, financial analysts, and cryptocurrency enthusiasts who want to automate the process of managing funding statements and token prices. It’s particularly useful for those who need a systematic approach to track and report funding fees associated with tokens in their wallets. What problem does this workflow solve? Managing funding statements and token prices across multiple platforms can be cumbersome and error-prone. This workflow automates the process, allowing users to seamlessly fetch funding fees from Binance and record them alongside token prices in Airtable, minimizing manual data entry and potential discrepancies. What this workflow does This workflow integrates the Binance API with an Airtable database, facilitating the storage and management of funding statements linked to tokens in a wallet. The agent can: Fetch funding fees and current positions from Binance. Aggregate data to create structured funding statements. Insert records into Airtable, ensuring proper linkage between funding data and tokens. API Authentication: The workflow establishes authentication with the Binance API using a Crypto Node to handle API keys and signatures, ensuring secure and verified requests. Data Collection: It retrieves necessary data, including funding fees and current positions with properly formatted API requests to ensure seamless communication with Binance. Airtable Integration: The workflow inserts aggregated funding statements and token data into the corresponding Airtable records, managing token existence checks to avoid duplicate entries. Setup Set Up Airtable Database: Create an Airtable base with tables for Funding Statements and Tokens. 
Generate Binance API Key: Log in and create an API key with appropriate permissions. Set Up Authentication in N8N: Utilize a Crypto Node for Binance API authentication. Configure API Request to Binance: Set request method and headers for communication with the Binance API. Fetch Funding Fees and Current Positions: Retrieve funding data and current positions efficiently. Aggregate and Create Statements: Aggregate data to create detailed funding statements. Insert Data into Airtable: Input the structured data into Airtable and manage token records. Using Get Price Node: Implement a Get Price Node to maintain current token price tracking without additional setup.
by Cheng Siong Chin
How It Works This workflow automates end-to-end contract and invoice management using AI intelligence. It processes proposals through intelligent contract generation, approval workflows, and automated invoicing. OpenAI analyzes proposal content while the system routes approvals intelligently. Upon approval, contracts are generated, invoices created, and notifications sent. The workflow also monitors payment status, generates financial forecasts, and manages follow-up tasks, eliminating manual contract generation delays and approval bottlenecks while ensuring accurate financial record-keeping. Setup Steps Configure OpenAI API credentials in n8n credentials manager. Connect Google Sheets account for invoice and forecast storage. Set up Gmail for approval notifications and client communications. Input Stripe/payment processor credentials for payment tracking. Map proposal form inputs to workflow start node. Prerequisites OpenAI API key, Google Sheets account, Gmail account, Stripe/payment processor access Use Cases Multi-stage approval workflows, SaaS contract management, professional services invoicing Customization Modify approval logic in conditional nodes, replace OpenAI with Anthropic API Benefits Reduces contract processing time by 80%, eliminates approval delays, prevents invoicing errors
by David
Who might benefit from this workflow? Do you have to record your working hours yourself? Then this n8n workflow in combination with an iOS shortcut will definitely help you. Once set up, you can use a shortcut, which can be stored as an app icon on your home screen, to record the start, end and duration of your break. How it works Once set up, you can tap the iOS shortcut on your iPhone. You will see a menu containing three options: "Track Start", "Track Break" and "Track End". After time is tracked, iOS will display a notification about the successful operation. How to set it up Copy the Notion database to your Notion workspace (top right corner). Copy the n8n workflow to your n8n workspace. In the Notion nodes in the n8n workflow, add your Notion credentials and select the copied Notion database. Download the iOS shortcut from our documentation page. Edit the shortcut and paste the URL of your n8n Webhook trigger node into the first "Text" node of the iOS shortcut flow. It is a best practice to use authentication. You can do so by adding "Header" auth to the webhook node and to the shortcut. Need help implementing this or any other n8n workflow? Feel free to contact me via LinkedIn or my business website. You want to start using n8n? Use this link to register for n8n (This is an affiliate link)
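The shortcut's webhook call with header authentication can be sketched as follows. The header name `X-N8N-AUTH` and the `action` field are placeholders — use whatever header you configured in the Webhook node's Header Auth credential and whatever payload your shortcut sends:

```python
import json
from urllib import request

def build_tracking_request(webhook_url: str, action: str, auth_token: str) -> request.Request:
    """Build the authenticated POST the iOS shortcut sends to the
    n8n Webhook trigger node."""
    # action is one of "Track Start", "Track Break", "Track End"
    payload = json.dumps({"action": action}).encode()
    req = request.Request(webhook_url, data=payload, method="POST")
    req.add_header("Content-Type", "application/json")
    req.add_header("X-N8N-AUTH", auth_token)  # must match the Header Auth credential
    return req
```

Sending the request is then a single `urllib.request.urlopen(req)` call; the shortcut does the equivalent with its built-in "Get Contents of URL" action.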
by Dustin
Short and simple: this workflow will sync (add and delete) your Liked Songs to a custom playlist that can be shared. Setup: Create an app on the Spotify Developer Dashboard. Create Spotify credentials - just click on one of the Spotify nodes in the workflow and click on "create new credentials", then follow the guide. Create the Spotify playlist that you want to sync to. Copy the exact name of your playlist, go into the node "Edit set Vars" and replace the value "CHANGE MEEEE" with your playlist name. Set your Spotify credentials on every Spotify node. (Should be marked with yellow and red notes) Do you use Gotify? - No: Delete the Gotify nodes (all the way to the right end of the workflow) - Yes: Customize the Gotify nodes to your needs.
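The sync logic boils down to a two-way diff between your Liked Songs and the target playlist. A minimal sketch of that comparison (function name illustrative; in the workflow this happens across the Spotify nodes):

```python
def diff_tracks(liked_ids: list[str], playlist_ids: list[str]):
    """Work out which tracks to add to the shared playlist and which
    to remove so it mirrors Liked Songs."""
    liked, existing = set(liked_ids), set(playlist_ids)
    to_add = [t for t in liked_ids if t not in existing]      # newly liked
    to_remove = [t for t in playlist_ids if t not in liked]   # un-liked
    return to_add, to_remove
```

Tracks in `to_add` are appended to the playlist and tracks in `to_remove` are deleted from it, which keeps the shared playlist an exact mirror.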
by Cheng Siong Chin
How It Works This workflow automates environmental, social, and governance (ESG) data collection, compliance validation, and sustainability reporting for corporations managing complex regulatory requirements and stakeholder transparency expectations. Designed for sustainability officers, compliance teams, and investor relations departments, it solves the challenge of aggregating ESG metrics across global operations, validating data accuracy, and generating standardized reports for multiple frameworks. The system schedules regular monitoring, fetches consolidated ESG data from operational systems, generates S&D (sustainability and disclosure) submissions, validates compliance through dual AI agents (Compliance Analyzer ensures regulatory adherence, Decision Coordination orchestrates specialized sub-agents for aggregate analysis, traceability monitoring, summary generation, and governance reporting), checks star ratings for data quality, routes findings by compliance status (critical/routine), and produces standardized reports with traceability records. Organizations achieve 90% reduction in reporting cycle time, ensure multi-framework compliance, eliminate manual data aggregation errors, and maintain complete audit trails for regulatory scrutiny. 
Setup Steps Connect Schedule Trigger for monitoring frequency Configure ESG data sources with API credentials Add AI model API keys to Compliance Analyzer and Decision Coordination Agent nodes Define reporting frameworks and compliance requirements in agent prompts Set quality rating thresholds for data completeness and materiality scoring parameters Configure alert mechanisms for critical compliance gaps requiring immediate remediation Prerequisites ESG data management system access, AI service accounts Use Cases Carbon emissions tracking and reporting, supply chain sustainability monitoring Customization Modify agent prompts for industry-specific materiality topics Benefits Reduces reporting cycle time by 90%, ensures multi-framework compliance simultaneously
by Sona Labs
Generate Sora videos, stitch clips, and post to Twitter Generate creative ASMR cutting video concepts with GPT-5.1, create high-quality video clips using Sora v2, stitch them together with Cloudinary, and automatically post to Twitter/X—transforming ideas into viral content without manual video editing. How it works Step 1: Generate Video Concepts Schedule Trigger activates the workflow automatically GPT-5.1 AI agent generates 3 unique ASMR cutting scene prompts with unusual objects Creates structured video prompts optimized for Sora v2 (frontal camera angle, cutting actions) Generates Twitter-ready captions with relevant hashtags Saves all concepts and scripts to Google Sheets for tracking Step 2: Create Video Clips with Sora v2 Generates 3 separate Sora v2 video clips in parallel (8-12 seconds each) Each clip uses unique prompts from GPT-5.1 output Videos render at 720x1280 resolution (vertical format for social media) System waits 30 seconds for rendering to complete Step 3: Monitor & Download Videos Loops through all 3 video generation requests Checks Sora API status every 30 seconds until rendering completes Automatically skips failed renders (continues workflow with successful videos) Downloads completed videos from Sora API Uploads each clip to Cloudinary for storage and processing Step 4: Stitch Videos Together Collects all uploaded Cloudinary video IDs Builds Cloudinary transformation URL to stitch 3 clips into one seamless video Applies Twitter-compatible encoding (H.264 baseline, AAC audio, MP4 format) Downloads the final stitched video Step 5: Upload to Twitter/X Prepares video file data and calculates total file size Uses Twitter's chunked upload API (INIT → APPEND → FINALIZE) Waits for Twitter's video processing to complete Checks processing status until video is ready Posts tweet with AI-generated caption and attached video Updates Google Sheets status to "Posted" What you'll get **AI-Generated Concepts**: Creative ASMR cutting ideas with unusual objects (glass avocados, lava rocks, rainbow soap) **Professional Video Clips**: Three 8-12 second Sora v2 videos per concept with 720x1280 resolution **Seamless Stitching**: Single combined video optimized for Twitter/X specifications **Engaging Captions**: GPT-5.1 generated tweets with hashtags designed for virality **Automated Posting**: Direct upload to Twitter/X without manual intervention **Cloud Backup**: All videos stored in Cloudinary with metadata **Progress Tracking**: Google Sheets integration shows workflow status (In Progress → Posted) **Error Handling**: Failed Sora renders are automatically skipped Why use this **Save 4+ hours per video**: Eliminate scripting, shooting, editing, and posting time **Consistent posting schedule**: Set it and forget it with the Schedule Trigger **Scale content creation**: Generate multiple video variations in 20-30 minutes **Professional quality**: Leverage Sora v2's AI video generation for realistic cutting scenes **Optimize for virality**: GPT-5.1 creates concepts and captions designed for engagement **Reduce creative burnout**: AI handles ideation, execution, and distribution **No video editing skills needed**: Complete automation from concept to post **Test multiple concepts**: Generate 3 variations per run to see what resonates Setup instructions Required accounts and credentials: OpenAI API Key (GPT-5.1 and Sora v2 access required) Sign up at https://platform.openai.com Ensure your account has Sora v2 API access enabled Generate API key from API Keys section Note: Sora v2 is currently in limited beta Google Sheets OAuth (for tracking video ideas and status) Free Google account required Create a spreadsheet with columns: Category, Scene 1, Scene 2, Scene 3, Status n8n will request OAuth permissions during setup Cloudinary Account (for video storage and stitching) Sign up at https://cloudinary.com (free tier available) Note your cloud name from the dashboard Create an upload preset named n8n_integration Enable unsigned uploads for the preset 
Twitter OAuth 1.0a Credentials (for automated posting) Apply for Twitter Developer access at https://developer.twitter.com Create a new app in the Developer Portal Generate: API Key, API Secret, Access Token, Access Token Secret Enable "Read and Write" permissions (not just Read) OAuth 1.0a is required for media uploads (OAuth 2.0 won't work) Configuration steps: Update OpenAI API Key: Add your OpenAI API key to these nodes: "OpenAI Chat Model" credentials "Create Sora Video Scene - 1" (Authorization header) "Create Sora Video Scene - 2" (Authorization header) "Create Sora Video Scene - 3" (Authorization header) "Check Video Status" (Authorization header) "Download Completed Video" (Authorization header) Replace Bearer API KEY with Bearer YOUR_ACTUAL_API_KEY Configure Google Sheets: Open "Save Category and Clip Scripts" and "Update Status" nodes Authenticate with your Google account (OAuth 2.0) Select your spreadsheet and sheet name Ensure columns match: Category, Scene 1, Scene 2, Scene 3, Status The workflow will update Status from "In Progress" to "Posted" Update Cloudinary Settings: In "Upload to Cloudinary" node: Replace {Cloud name here} in the URL with your Cloudinary cloud name Verify upload preset is set to n8n_integration In "Build Stitch URL" node: Open the Code node Replace dph9n4uei on line 1 with your cloud name This builds the video stitching transformation URL Add Twitter OAuth 1.0a Credentials: Configure OAuth 1.0a in these nodes: "Twitter Upload - INIT" "Twitter Upload - APPEND" "Finalize Upload" "Check Twitter Processing Status" "Post a Tweet" Use the same OAuth 1.0a credential for all nodes Ensure your Twitter app has "Read and Write" permissions Adjust Schedule Trigger (optional): Default: Runs on every interval Modify in "Schedule Trigger" node to set specific times Recommended: Once per day or every few hours to avoid rate limits Test the workflow: Click "Execute Workflow" to test manually first Verify GPT-5.1 generates 3 video concepts Check 
that Sora v2 creates all 3 videos Confirm Cloudinary stitches videos correctly Ensure Twitter post appears with video and caption Important notes: **Sora API Rate Limits**: Sora v2 may have rendering quotas. Monitor your usage **Video Rendering Time**: Each Sora clip takes 2-5 minutes. Total workflow: 15-25 minutes **Failed Videos**: The workflow automatically skips failed renders and continues **Twitter Video Limits**: Maximum 512MB per video, MP4 format required **Cloudinary Free Tier**: 25 credits/month includes video transformations **Cost Estimate**: ~$1-3 per run (Sora API pricing varies) Troubleshooting: **"Sora API access required"**: Contact OpenAI to enable Sora v2 API on your account **Twitter upload fails**: Verify OAuth 1.0a credentials have "Read and Write" permissions **Cloudinary upload fails**: Check cloud name and ensure upload preset exists **Videos don't stitch**: Verify all 3 videos uploaded successfully to Cloudinary **Google Sheets not updating**: Confirm OAuth permissions and sheet column names match Next steps: Enable the Schedule Trigger to automate daily/weekly posts Monitor Google Sheets to track posted content Adjust GPT-5.1 prompts in "ASMR Cutting Ideas" for different content themes Experiment with different video durations (8 vs 12 seconds) Add error notifications using Email or Slack nodes
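The "Build Stitch URL" Code node assembles a Cloudinary transformation URL that splices the later clips onto the first one. A hypothetical sketch of the equivalent logic — the exact component order the workflow uses may differ, and the splice syntax shown follows Cloudinary's video concatenation (`fl_splice` layers) as an assumption:

```python
def build_stitch_url(cloud_name: str, video_ids: list[str]) -> str:
    """Build a Cloudinary URL that splices clips 2..n onto clip 1 and
    transcodes to a Twitter-friendly H.264 baseline / AAC MP4."""
    base, *rest = video_ids
    splices = "/".join(
        f"l_video:{vid},fl_splice/fl_layer_apply" for vid in rest
    )
    encoding = "vc_h264:baseline,ac_aac"  # Twitter-compatible codecs
    parts = [p for p in (splices, encoding) if p]
    return (
        f"https://res.cloudinary.com/{cloud_name}/video/upload/"
        + "/".join(parts)
        + f"/{base}.mp4"
    )
```

Downloading this URL returns the single stitched MP4, which is then fed into the Twitter chunked-upload steps.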
by Jay Emp0
Overview Fetch multiple Google Analytics GA4 metrics daily, post to Discord, update previous day’s entry as GA data finalizes over seven days. Benefits Automates daily traffic reporting Maintains single message per day, avoids channel clutter Provides near–real-time updates by editing prior messages Use Case Teams tracking website performance via Discord (or any chat tool) without manual copy–paste. Marketing managers, community moderators, growth hackers. If your manager asks you for a daily marketing report every morning, you can now automate it. Notes The Google Analytics node in n8n does not provide real-time data; the node updates previous values for the next 7 days. The Discord node in n8n cannot update an existing message by message ID, so we have used the Discord API for this. Most businesses use multiple Google Analytics properties across their digital platforms. Core Logic Schedule trigger fires once a day. Google Analytics node retrieves metrics for date ranges (past 7 days) Aggregate node collates all records. Discord node fetches the last 10 messages in the broadcast channel Code node maps existing Discord messages to the Google Analytics data using the date fields For each GA record: If no message exists → send a new POST to the Discord channel If a message exists and metrics changed → send an update PATCH to the existing Discord message Batch loops + wait nodes prevent rate-limiting. Setup Instructions Import workflow JSON into n8n. Follow the n8n guide to create a Google Analytics OAuth2 credential with access to all required GA accounts. Follow the n8n guide to create a Discord OAuth2 credential for “Get Messages” operations. Follow the Discord guide to create an HTTP Header Auth credential named “Discord-Bot” with header Key: Authorization Value: Bot <your-bot-token> In the two Set nodes at the beginning of the flow, assign discord_channel_id and google_analytics_id. 
Get your Discord channel ID by sending a message in your Discord channel and copying its message link. The link has the form https://discord.com/channels/server_id/channel_id/message_id, and you want the channel_id, the number in the middle. Find your Google Analytics ID in the Google Analytics dashboard: open the property selector in the top right and copy that number into the flow. Adjust the schedule trigger times to your preferred report hour. Activate the workflow. Customization Replace the Discord HTTP Request nodes with Slack, ClickUp, WhatsApp, or Telegram integrations by swapping the POST/PATCH endpoints and authentication.
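The Code node's POST-or-PATCH decision can be sketched as a pure function. The `date`/`report` field names are illustrative; the real Discord endpoints are `POST /channels/{id}/messages` for new messages and `PATCH /channels/{id}/messages/{message_id}` for edits:

```python
def plan_discord_actions(ga_rows, existing_by_date):
    """Decide, per GA record, whether to create a new Discord message
    or edit an existing one. `existing_by_date` maps a report date to
    the previously posted message (id and content)."""
    actions = []
    for row in ga_rows:
        match = existing_by_date.get(row["date"])
        if match is None:
            actions.append(("POST", None, row))            # no message yet
        elif match["content"] != row["report"]:
            actions.append(("PATCH", match["message_id"], row))  # metrics changed
        # identical content: skip, nothing changed since the last run
    return actions
```

Skipping unchanged days is what keeps the channel to a single message per date while GA finalizes its numbers over the seven-day window.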
by Airtop
Extract Facebook Group Posts with Airtop Use Case Extracting content from Facebook Groups allows community managers, marketers, and researchers to gather insights, monitor discussions, and collect engagement metrics efficiently. This automation streamlines the process of retrieving non-sponsored post data from group feeds. What This Automation Does This automation extracts key post details from a Facebook Group feed using the following input parameters: **Facebook Group URL**: The URL of the Facebook Group feed you want to scrape. **Airtop Profile**: The name of your Airtop Profile authenticated to Facebook. It returns up to 5 non-sponsored posts with the following attributes for each: Post text Post URL Page/profile URL Timestamp Number of likes Number of shares Number of comments Page or profile details Post thumbnail How It Works Form Trigger: Collects the Facebook Group URL and Airtop Profile via a form. Browser Automation: Initiates a new browser session using Airtop. Navigates to the provided Facebook Group feed. Uses an AI prompt to extract post data, including interaction metrics and profile information. Structured Output: The results are returned in a defined JSON schema, ready for downstream use. Setup Requirements Airtop API Key — Free to generate. An Airtop Profile logged into Facebook. Next Steps **Integrate With Analytics Tools**: Feed the output into dashboards or analytics platforms to monitor community engagement. **Automate Alerts**: Trigger notifications for posts matching certain criteria (e.g., high engagement, keywords). **Combine With Comment Automation**: Extend this to reply to posts or engage with users using other Airtop automations. Read more about how to extract posts from Facebook groups
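The defined JSON schema for the structured output might look like the sketch below. The property names are assumptions inferred from the attribute list above; the workflow's actual schema may name fields differently:

```python
# Hypothetical JSON Schema for the extraction output (field names assumed).
post_schema = {
    "type": "object",
    "properties": {
        "posts": {
            "type": "array",
            "maxItems": 5,  # up to 5 non-sponsored posts
            "items": {
                "type": "object",
                "properties": {
                    "post_text": {"type": "string"},
                    "post_url": {"type": "string"},
                    "profile_url": {"type": "string"},
                    "timestamp": {"type": "string"},
                    "likes": {"type": "integer"},
                    "shares": {"type": "integer"},
                    "comments": {"type": "integer"},
                    "profile_details": {"type": "string"},
                    "thumbnail": {"type": "string"},
                },
            },
        }
    },
}
```

Downstream nodes can rely on this shape, which is what makes the output "ready for downstream use" in dashboards or alerting.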
by Anna Bui
Automatically monitor LinkedIn posts from your community members and create AI-powered content digests for efficient social media curation. This template is perfect for community managers, content creators, and social media teams who need to track LinkedIn activity from their network without spending hours manually checking profiles. It fetches recent posts, extracts key information, and creates digestible summaries using AI. Good to know **API costs apply** - LinkedIn API calls ($0.01-0.05 per profile check) and OpenAI processing ($0.001-0.01 per post) **Rate limiting included** - Built-in random delays prevent API throttling issues **Flexible scheduling** - Easy to switch from daily schedule to webhook triggers for real-time processing **Requires API setup** - Need RapidAPI access for LinkedIn data and OpenAI for content processing How it works **Daily profile scanning** - Automatically checks each LinkedIn profile in your Airtable for posts from yesterday **Smart data extraction** - Pulls post content, engagement metrics, author information, and timestamps **AI-powered summarization** - Creates 30-character previews of posts for quick content scanning **Duplicate prevention** - Checks existing records to avoid storing the same post multiple times **Structured storage** - Saves all processed data to Airtable with clean formatting and metadata **Batch processing** - Handles multiple profiles efficiently with proper error handling and delays How to use **Set up Airtable base** - Create tables for LinkedIn profiles and processed posts using the provided structure **Configure API credentials** - Add your RapidAPI LinkedIn access and OpenAI API key to n8n credentials **Import LinkedIn profiles** - Add community members' LinkedIn URLs and URNs to your profiles table **Test the workflow** - Run manually with a few profiles to ensure everything works correctly **Activate schedule** - Enable daily automation or switch to webhook triggers for real-time processing Requirements **Airtable account** - For storing profile lists and managing processed posts with proper field structure **RapidAPI Professional Network Data API** - Access to LinkedIn post data (requires subscription) **OpenAI API account** - For intelligent content summarization and preview generation **LinkedIn profile URNs** - Properly formatted LinkedIn profile identifiers for API calls Customising this workflow **Change monitoring frequency** - Switch from daily to hourly checks or use webhook triggers for real-time updates **Expand data extraction** - Add company information, hashtag analysis, or engagement trending **Integrate notification systems** - Add Slack, email, or Discord alerts for high-engagement posts **Connect content tools** - Link to Buffer, Hootsuite, or other social media management platforms for direct publishing **Add filtering logic** - Set up conditions to only process posts with minimum engagement thresholds **Scale with multiple communities** - Duplicate workflow for different LinkedIn communities or industry segments
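The 30-character preview and duplicate check are simple to sketch. Function names are illustrative (in the workflow the summarization is done by OpenAI and the dedup check against Airtable records), but the logic is the same:

```python
def make_preview(post_text: str, limit: int = 30) -> str:
    """Trim a post down to a short preview for quick content scanning."""
    text = " ".join(post_text.split())  # collapse whitespace and newlines
    return text if len(text) <= limit else text[: limit - 1] + "…"

def is_duplicate(post_id: str, stored_ids: set) -> bool:
    """Skip posts whose IDs were already written to Airtable."""
    return post_id in stored_ids
```

New posts that pass `is_duplicate` are stored with their preview; everything else is dropped, which prevents the same post appearing twice in the digest.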