by Angel Menendez
Enhance Security Operations with the Venafi Slack CertBot! Venafi Presentation - Watch Video

Our Venafi Slack CertBot is designed to facilitate immediate security operations directly from Slack. It lets end users request Certificate Signing Requests (CSRs) that are automatically approved, or routed to the SecOps team for manual approval, depending on the VirusTotal analysis of the requested domain. This centralizes requests and helps an organization maintain its security certifications by logging and analyzing requests in real time through automated processes.

**Workflow Highlights**
- **Interactive Modals**: Uses Slack modals to gather user inputs for scan configurations and report generation, providing a user-friendly interface for complex operations.
- **Dynamic Workflow Execution**: Integrates with Venafi to execute CSR generation; if any issues are found, AI generates a custom report that is posted to a Slack channel, where it can be manually approved with the press of a single button.

**Operational Flow**
- **Parse Webhook Data**: Captures and parses incoming data from Slack to interpret user commands accurately.
- **Execute Actions**: Depending on the user's selection, the workflow triggers further actions, such as an automatic VirusTotal scan.
- **Respond to Slack**: Acknowledges every interaction, maintaining a smooth user experience by managing modal popups and sending appropriate responses.

**Setup Instructions**
- Verify that the Slack and VirusTotal API integrations are correctly configured for seamless interaction.
- Customize the modal interfaces to align with your organization's operational protocols and security policies.
- Test the workflow to ensure that it responds accurately to Slack commands and that the Venafi integration is functioning as expected.

**Need Assistance?**
Explore Venafi's Documentation or get help from the n8n Community for more detailed guidance on setup and customization.

Deploy this bot within your Slack environment to significantly enhance the efficiency and responsiveness of your security operations, enabling proactive management of CSRs.
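As a companion to the Parse Webhook Data step above, here is a minimal sketch of how a Slack interactivity payload can be unpacked in an n8n Code node. Slack posts interactive events as form data with a JSON string in a `payload` field; the block and action IDs shown are hypothetical and depend on how your modal is defined:

```javascript
// Hypothetical n8n Code node: parse a Slack interactivity webhook body.
// Slack sends interactive events as form data with a JSON string in `payload`.
const raw = $json.body?.payload;
if (!raw) {
  throw new Error('No Slack payload found on the incoming request');
}

const payload = JSON.parse(raw);

// Pull out the fields later nodes need to route the request.
return [{
  json: {
    type: payload.type,            // e.g. 'view_submission' or 'block_actions'
    userId: payload.user?.id,
    triggerId: payload.trigger_id, // needed to open a follow-up modal
    // Assumed block/action IDs -- replace with the ones your modal defines.
    requestedDomain:
      payload.view?.state?.values?.domain_block?.domain_input?.value ?? null,
  },
}];
```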
by Gregor
Awork currently does not support checking for open subtasks or open dependencies when setting a task status to done. This workflow offers a simple workaround that adds this functionality to Awork and notifies users when triggered. Multiple configuration options are available.

**How it works**
- Triggered via an Awork webhook call on task status changes
- If a task is marked as done, its subtasks and/or dependent tasks are checked for their status
- If unfinished tasks are found, the task is rolled back to its previous status and the user is notified

**Set up steps**
- Add the webhook call to Awork
- Configure Awork API credentials
- Set up the workflow configuration via the setup node, e.g. the user notification text, restricting checks to subtasks/dependencies, etc.
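The rollback decision boils down to a single check over the fetched related tasks. A minimal sketch of that logic as an n8n Code node, assuming the subtasks have already been retrieved from the Awork API (the `taskStatus.type` field name is an assumption; verify it against your Awork API response):

```javascript
// Hypothetical n8n Code node: decide whether a "done" status change must be rolled back.
const subtasks = $input.all().map(item => item.json);

// Assumed shape: each related task carries a taskStatus object from the Awork API.
const unfinished = subtasks.filter(t => t.taskStatus?.type !== 'done');

return [{
  json: {
    rollbackNeeded: unfinished.length > 0,
    unfinishedCount: unfinished.length,
    unfinishedNames: unfinished.map(t => t.name),
  },
}];
```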
by Adnan Tariq
CyberScan - AI-Powered Vulnerability Scanner with Nessus, OpenAI, and Google Sheets

**Who's it for**
Security teams, DevOps engineers, vulnerability analysts, and automation builders who want to eliminate repetitive Nessus scan parsing, AI-based risk triage, and manual reporting. Designed for orgs following NIST CSF or CISA KEV compliance guidelines.

**How it works / What it does**
- Runs scheduled or manual scans via the Nessus API.
- Processes scan results and extracts asset + vulnerability data.
- Uses a custom AI-based risk metric (LEV) to triage findings into: expert review, self-healing, or monitoring.
- Automatically sends email alerts for critical CVEs.
- Exports daily summaries to Google Sheets (or your own BI system).
- Maps to NIST CSF (Identify, Protect, Detect, Respond, Recover).

**How to set up**
- Nessus: Add your Nessus API credentials and instance URL.
- Google Sheets: Authenticate your Google account.
- OpenAI / LLM: Use your API key if adding LLM triage or rewrite prompts.
- Email: Update SMTP credentials and the alert recipient address.
- Set your targets: Adjust asset ranges or scan UUIDs as needed.

All setup steps are explained in sticky notes inside the workflow.

**Requirements**
- Nessus Essentials (free) or Nessus Pro with API access.
- SMTP service (e.g. Gmail, Mailgun, SendGrid).
- Google Sheets OAuth2 credentials.
- Optional: OpenAI or another LLM provider for LEV scoring and CVE insights.

**How to customize the workflow**
- Swap Google Sheets with Airtable, Supabase, or PostgreSQL.
- Change the scan logic or asset list to fit your internal network scope.
- Adjust the AI scoring logic to match internal CVSS thresholds or KEV tags.
- Expand the alerting logic to include Slack, Discord, or webhook triggers.

No sensitive data included. All credentials and sheet links are placeholders.
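The triage step above could look something like the following in an n8n Code node. This is a sketch only: the actual LEV formula lives inside the workflow, and the thresholds and field names here are assumptions:

```javascript
// Hypothetical n8n Code node: bucket findings by a LEV-style risk score.
// Thresholds and field names are illustrative, not the template's actual values.
const findings = $input.all().map(item => item.json);

const triaged = findings.map(f => {
  // Blend severity with exploit likelihood into a 0..1 risk score.
  const lev = ((f.cvssScore ?? 0) / 10) * (f.exploitLikelihood ?? 0.5);

  let bucket;
  if (lev >= 0.7)      bucket = 'expert-review'; // page a human
  else if (lev >= 0.4) bucket = 'self-healing';  // auto-remediate
  else                 bucket = 'monitoring';    // watch and log

  return { ...f, lev: Number(lev.toFixed(2)), bucket };
});

return triaged.map(json => ({ json }));
```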
by Angel Menendez
Analyze Emails for Security Insights

**Who is this for?**
This workflow is ideal for security teams, IT Ops professionals, and managed service providers (MSPs) responsible for monitoring and validating email traffic. It's especially useful for organizations that need to identify potential phishing attempts, spam, or compromised accounts by analyzing email headers and IP reputation.

**What problem is this workflow solving?**
This workflow helps identify malicious or suspicious emails by verifying email authentication headers (SPF, DKIM, DMARC) and analyzing the reputation of the originating IP address. By automating these checks, it reduces manual analysis time and flags potential threats efficiently.

**What this workflow does**
- **Email Monitoring:** Polls a specified Microsoft Outlook folder for new emails in real time.
- **Header Analysis:** Retrieves and processes email headers to extract critical information such as authentication results and the sender's IP address.
- **IP Reputation Check:** Leverages external APIs (IP Quality Score and IP-API) to analyze the originating IP for potential spam or malicious activity.
- **Authentication Validation:** Validates SPF, DKIM, and DMARC headers, determining whether the email passes industry-standard authentication protocols.
- **Data Aggregation and Reporting:** Combines all analyzed data into a unified format, ready for reporting or integration into downstream systems.
- **Webhook Integration:** Outputs the findings via a webhook, enabling integration with alerting tools or security information and event management (SIEM) platforms.

**Setup**
1. Connect to Outlook: Configure the Microsoft Outlook trigger node with valid OAuth2 credentials and specify the email folder to monitor for new messages.
2. API keys (optional): Obtain an API key for IP Quality Score (https://ipqualityscore.com) and ensure the IP-API endpoint is accessible. This step is optional, as ipqualityscore.com provides a limited number of free lookups each month; see their site for details.
3. Webhook configuration: Set up a webhook endpoint to receive the output of the workflow.
4. Optional adjustments: Customize polling intervals in the trigger node, and modify header filters or extend the validation logic as needed.

**How to customize this workflow to your needs**
- **Add Alerts:** Use the Respond to Webhook node to trigger notifications in Slack, email, or any other communication channel.
- **Integrate with SIEM:** Forward the workflow output to SIEM tools like Splunk or the ELK Stack for further analysis.
- **Modify Validation Rules:** Update the SPF, DKIM, or DMARC logic in the Set nodes to align with your organization's security policies.
- **Expand IP Analysis:** Add more APIs or services, such as VirusTotal or AbuseIPDB, to enrich IP reputation data.

This workflow provides a robust foundation for email security monitoring and can be tailored to fit your organization's unique requirements. With its modular design and integration options, it's a versatile tool to enhance your cybersecurity operations.
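For reference, most mail providers summarize the SPF, DKIM, and DMARC verdicts in a single Authentication-Results header (RFC 8601). A simplified sketch of how that header can be parsed in an n8n Code node; the lowercase header key is an assumption and may differ depending on how the Outlook node exposes headers:

```javascript
// Sketch of an n8n Code node: extract SPF/DKIM/DMARC verdicts from headers.
const headers = $json.headers ?? {};
// Header key casing may vary by source; adjust if your node emits different keys.
const authResults = String(headers['authentication-results'] ?? '');

// Pull "mechanism=result" pairs, e.g. "spf=pass", "dkim=fail", "dmarc=pass".
const verdict = {};
for (const mech of ['spf', 'dkim', 'dmarc']) {
  const match = authResults.match(new RegExp(`${mech}=(\\w+)`, 'i'));
  verdict[mech] = match ? match[1].toLowerCase() : 'none';
}

return [{
  json: {
    ...verdict,
    allPassed: ['spf', 'dkim', 'dmarc'].every(m => verdict[m] === 'pass'),
  },
}];
```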
by Halfbit
AI-Powered Invoice Processing: from Email to Database & Chat Notifications

Automatically process PDF invoices directly from your email inbox. This workflow uses AI to extract key data, saves it to a PostgreSQL database, and instantly notifies you about the new document in your preferred chat application.

The workflow listens for new emails, fetches PDF attachments, and passes their content to a Large Language Model (LLM) for intelligent recognition and data extraction. Finally, the information is securely archived in the database, and a summary of the invoice is sent as a notification.

> This workflow is highly customizable.
> It uses PostgreSQL, OpenAI (GPT), and Discord by default, but you can easily swap these components. Feel free to use a different database like MySQL or Airtable, another AI model provider, or send notifications to Slack, MS Teams, or any other chat platform.
> Note: If the workflow fails to extract data correctly from invoices issued by certain companies, you may need to adjust the prompt used in the Basic LLM Chain node to improve parsing accuracy.

**Use Case**
- Automating accounts payable for small businesses and freelancers
- Centralizing financial documents without manual data entry
- Creating a searchable database of all incoming invoices
- Receiving real-time notifications for new financial commitments

**Features**
- **Email Trigger (IMAP):** Monitors a dedicated email inbox for new messages with attachments
- **PDF Filtering:** Automatically identifies and processes only PDF attachments
- **AI-Powered Data Extraction:** Uses an LLM (e.g., GPT-4o-mini) to extract the invoice number, buyer/seller details, amounts, currency, and due dates
- **Structured Data Output:** Converts AI output to standardized JSON
- **Database Write Logic:** Prevents duplicates by checking the invoice/company combination
- **PostgreSQL Integration:** Stores extracted data in the company and invoice tables
- **Chat Notifications:** Sends an invoice summary as a message to a designated channel

**Setup Instructions**

API Access & Costs: To use the AI extraction feature, you need an API key from a provider like OpenAI. Most providers charge for access to language models, so you'll likely need a billing account.

1. PostgreSQL Database Configuration

Ensure your database has the following tables:

```sql
-- Table for companies (invoice issuers)
CREATE TABLE company (
  id SERIAL PRIMARY KEY,
  tax_number VARCHAR(255) UNIQUE NOT NULL,
  name VARCHAR(255),
  address TEXT,
  created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);

-- Table for invoices
CREATE TABLE invoice (
  id SERIAL PRIMARY KEY,
  company_id INTEGER REFERENCES company(id),
  invoice_number VARCHAR(255) NOT NULL,
  -- Add other fields: total_to_pay, currency, due_date
  created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
  UNIQUE(company_id, invoice_number)
);
```

Then, in n8n, create a credential for your PostgreSQL DB.

2. Email (IMAP) Configuration

In n8n, add credentials for the email account that receives invoices: IMAP host, IMAP port, username, and password.

3. AI Provider Configuration
- Log in to OpenAI (or a similar provider)
- Generate an API key
- In n8n, create credentials and paste the key

4. Chat Notification (Discord)
- Go to Discord > Server Settings > Integrations > Webhooks > New Webhook
- Select a channel and copy the Webhook URL
- In n8n, paste the URL into the Discord node

**Placeholders and Fields to Fill**

| Placeholder | Description | Example |
|---|---|---|
| YOUR_EMAIL_CREDENTIALS | Your IMAP email account in n8n | My Invoice Mailbox |
| YOUR_OPENAI_CREDENTIALS | API credentials for the AI model | My OpenAI Key |
| YOUR_POSTGRES_CREDENTIALS | Your PostgreSQL DB credentials in n8n | My Production DB |
| YOUR_DISCORD_WEBHOOK | Webhook URL for your chat system | https://discord.com/api/webhooks/... |

**Testing the Workflow**
1. Send a test invoice to the inbox as a PDF attachment
2. Run the workflow manually in n8n and check that the IMAP node fetches the message
3. Verify the AI extraction: inspect the LLM output (e.g., the GPT node) and confirm the structured JSON
4. Check the DB: ensure new rows appear in company and invoice
5. Check the chat: verify the invoice summary appears in the chosen channel

**Customization Tips**
- **Change the DB:** Use MySQL, Airtable, or Google Sheets instead of PostgreSQL
- **Other notifications:** Swap Discord for Slack, MS Teams, Telegram, etc.
- **Expand AI logic:** Extract line items, prices, etc. by customizing the prompt
- **Add payment logic:** Allow marking invoices as paid via emoji or a separate webhook
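Since LLM output occasionally drifts from the requested schema, a small validation step between the LLM chain and the database writes can prevent bad rows. A sketch for an n8n Code node, using the field names from the schema above (the amount normalization is deliberately naive):

```javascript
// Hypothetical n8n Code node: validate the LLM's extracted invoice JSON
// before it reaches the PostgreSQL nodes. Field names mirror the schema above.
const data = $json;

const required = ['tax_number', 'invoice_number', 'total_to_pay', 'currency', 'due_date'];
const missing = required.filter(k => data[k] === undefined || data[k] === null || data[k] === '');

if (missing.length > 0) {
  // Fail loudly so the bad invoice surfaces in n8n's execution log.
  throw new Error(`LLM output missing fields: ${missing.join(', ')}`);
}

return [{
  json: {
    ...data,
    // Naive amount normalization; adjust for locale-specific formats as needed.
    total_to_pay: parseFloat(String(data.total_to_pay).replace(/[^0-9.]/g, '')),
  },
}];
```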
by Sam Nesler
Syncs assignments and completion states in both directions between Canvas LMS and a Notion database. By default it triggers automatically every 2 hours during the school day (seven runs a day), and it also supports manual refreshing via webhooks.

**Setup**
You'll need a few things to get started:
- A Canvas API key. You can generate one by going to your Canvas account settings and clicking the "New Access Token" button. The URL looks like https://canvas.wisc.edu/profile/settings. You'll also need to replace the URLs in the Canvas nodes with your institution's domain, unless you're a student at UW-Madison. The Canvas nodes are all the HTTP Request nodes except the one labelled "OpenAI Categorization", which is an OpenAI node and will require a key in a later step.
- A Notion integration token. You can find this by going to your Notion integrations page and clicking "Create new integration". You can make it an "Internal Integration".
- A Notion database to sync to. I made a template for use with the workflow, but you can use any database that has the following fields:
  - Status (status): Status with at least the options "Not Started" and "Completed". Assignments start out "Not Started" and are marked "Completed" when they are submitted on Canvas.
  - Estimate (select): Select with at least the options "XS", "S", "M", "L", "XL". This is where the estimated time to complete the assignment is stored; even if you don't use AI, assignments start out as "M".
  - Priority (select): Select with at least the options "Could Do", "Should Do", "Must Do". Assignments start out "Should Do".
  - ID (text): This is where the ID of the assignment is stored. It is used to sync without keeping a database on the server.
  - Due Date (date): This is where the due date of the assignment is stored.
  - Class (text): This is where the name of the class is stored.
  - Link (URL): This is where the link to the assignment is stored.
- The ID of the Notion database you want to sync to. You can find this by clicking "Share" in the top right of your database and copying the link. The ID is the part of the link that comes after https://www.notion.so/ and before ?v=. So for https://www.notion.so/tsuniiverse/1976e99d91128076b034e7379464560f?v=1976e99d911281e7bd4b000c2cbec692&pvs=4, the ID would be 1976e99d91128076b034e7379464560f.
- An OpenAI key for assignment length estimation, or disable the node.

**Manual Refreshing**
Embed the production URL from the Webhook Trigger inside a "toggle list" or "toggle heading" inside Notion, then expand the heading to refresh.
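The database-ID extraction described above is just string slicing. If you ever want to automate it, a one-liner like this (plain JavaScript, not part of the template) does the same thing:

```javascript
// Extract a Notion database ID from a shared database URL.
function notionDatabaseId(url) {
  // The ID is the 32-char hex segment between the last "/" and the "?v=" part.
  const match = new URL(url).pathname.match(/([0-9a-f]{32})/i);
  if (!match) throw new Error('No database ID found in URL');
  return match[1];
}

// Example from the description above:
console.log(notionDatabaseId(
  'https://www.notion.so/tsuniiverse/1976e99d91128076b034e7379464560f?v=1976e99d911281e7bd4b000c2cbec692&pvs=4'
)); // -> 1976e99d91128076b034e7379464560f
```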
by PUQcloud
Setting up the n8n workflow

**Overview**
The Docker Immich WHMCS module uses a specially designed workflow for n8n to automate deployment processes. The workflow provides an API interface for the module, receives specific commands, and connects via SSH to a server with Docker installed to perform predefined actions.

**Prerequisites**
You must have your own n8n server. Alternatively, you can use the official n8n cloud installations available at: n8n Official Site

**Installation Steps**
Install the required workflow on n8n. You have two options:
- Option 1: Use the latest version from the n8n marketplace. The latest workflow templates for our modules are available on the official n8n marketplace. Visit our profile to access all available templates: PUQcloud on n8n
- Option 2: Manual installation. Each module version comes with a workflow template file. You need to manually import this template into your n8n server.

**n8n Workflow API Backend Setup for WHMCS/WISECP**

Configure API Webhook and SSH Access
- Create a Basic Auth credential for the Webhook API block in n8n.
- Create an SSH credential for accessing a server with Docker installed.

Modify Template Parameters
In the Parameters block of the template, update the following settings:
- server_domain - must match the domain of the WHMCS/WISECP Docker server.
- clients_dir - directory where user data related to Docker and disks will be stored.
- mount_dir - default mount point for the container disk (recommended not to change).

Do not modify the following technical parameters: screen_left, screen_right.

Deploy-docker-compose
In the Deploy-docker-compose element, you can modify the Docker Compose configuration, which is generated in the following scenarios:
- When the service is created
- When the service is unlocked
- When the service is updated

nginx
In the nginx element, you can modify the configuration parameters of the web interface proxy server. The main section allows you to add custom parameters to the server block in the proxy server configuration file. The main_location section contains settings that will be added to the location / block of the proxy server configuration. Here you can define custom headers and other parameters specific to the root location.

Bash Scripts
Management of Docker containers and all related procedures on the server is carried out by executing Bash scripts generated in n8n. These scripts return either a JSON response or a string. All scripts are located in elements directly connected to the SSH element. You have full control over every script and can modify or execute it as needed.
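For orientation, the values in the Parameters block amount to something like the following (illustrative paths only; adjust them to your server layout):

```javascript
// Illustrative Parameters block values (n8n Set node) -- paths are examples only.
const parameters = {
  server_domain: 'docker.example.com', // must match the WHMCS/WISECP Docker server domain
  clients_dir: '/opt/puq/clients',     // where user data for Docker and disks is stored
  mount_dir: '/mnt/container-disk',    // default container-disk mount point (keep as-is)
  // screen_left and screen_right are technical parameters -- do not modify them.
};
```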
by Custom Workflows AI
**Introduction**
The "Automatic Weekly Digital PR Stories Suggestions" workflow is a sophisticated automated system designed to identify trending news stories on Reddit, analyze public sentiment through comment analysis, extract key information from source articles, and generate strategic angles for potential digital PR campaigns. This workflow leverages the power of social media trends, natural language processing, and AI-driven analysis to deliver curated, sentiment-analyzed news opportunities for PR professionals.

Operating on a weekly schedule, the workflow searches Reddit for posts related to specified topics, filters them based on engagement metrics, and performs a deep analysis of both the content and public reaction. It then generates comprehensive reports that include story opportunities, audience insights, and strategic recommendations. These reports are automatically compiled, stored in Google Drive, and shared with team members via Mattermost for immediate collaboration.

This workflow solves the time-consuming process of manually monitoring social media for trending stories, analyzing public sentiment, and identifying PR opportunities. By automating these tasks, PR professionals can focus on strategy development and execution rather than spending hours on research and analysis.

**Who is this for?**
This workflow is designed for digital PR professionals, content marketers, communications teams, and media relations specialists who need to stay on top of trending stories and public sentiment to develop timely and effective PR campaigns. It's particularly valuable for:
- PR agencies managing multiple clients across different industries
- In-house PR teams needing to identify media opportunities quickly
- Content marketers looking for trending topics to create timely content
- Communications professionals monitoring public perception of industry news

Users should have basic familiarity with n8n workflows and the PR strategy development process. While technical knowledge of the integrated APIs is not required to use the workflow, some understanding of Reddit, sentiment analysis, and PR campaign development is helpful for interpreting and acting on the generated reports.

**What problem is this workflow solving?**
Digital PR professionals face several challenges that this workflow addresses:
- Information overload: Manually monitoring social media platforms for trending stories is time-consuming and often results in missed opportunities.
- Sentiment analysis complexity: Understanding public perception of news stories requires reading through hundreds of comments and identifying patterns, which is labor-intensive and subjective.
- Content extraction: Visiting multiple news sources to read and analyze articles takes significant time.
- Strategic angle development: Identifying unique PR angles that leverage trending stories and public sentiment requires synthesizing large amounts of information.
- Team collaboration: Sharing findings and insights with team members in a structured format can be cumbersome.

By automating these processes, the workflow enables PR professionals to quickly identify trending stories with PR potential, understand public sentiment, and develop strategic angles based on comprehensive analysis, all while maintaining a structured approach to team collaboration.
**What this workflow does**

Overview
The workflow automatically identifies trending posts on Reddit related to specified topics, analyzes both the content of linked articles and public sentiment from comments, and generates comprehensive PR strategy reports. These reports include story opportunities, audience insights, and strategic recommendations based on the analysis. The final reports are compiled, stored in Google Drive, and shared with team members via Mattermost.

Process
1. Topic Selection and Reddit Search: The workflow starts with a list of topics specified in the "Set Data" node. It searches Reddit for posts related to these topics, and posts are filtered based on upvotes and other criteria to focus on trending content (see the sketch after this entry).
2. Comment Analysis: For each post, the workflow retrieves comments and extracts the top 30 by score. Using Claude AI, it analyzes the comments to understand the overall sentiment, dominant narratives, audience insights, and PR implications.
3. Content Analysis: The workflow extracts the content of the linked article using Jina AI and analyzes it to identify core story elements, technical aspects, narrative opportunities, and viral elements.
4. PR Strategy Development: Based on the combined analysis of comments and content, the workflow generates first-mover story opportunities, trend-amplifier story ideas, priority rankings, an execution roadmap, and strategic recommendations.
5. Report Generation and Distribution: The workflow compiles comprehensive reports for each post, converts them to text files, and compresses all files into a ZIP archive. The archive is uploaded to Google Drive, and a link to it is shared with team members via Mattermost.

**Setup**
To set up this workflow, follow these steps:
1. Import the workflow: Download the workflow JSON file and import it into your n8n instance.
2. Configure API credentials:
   - Reddit: Add a new "Reddit OAuth2 API" credential by following the guide at https://docs.n8n.io/integrations/builtin/credentials/reddit/
   - Anthropic: Add a new "Anthropic Account" credential by following the guide at https://docs.n8n.io/integrations/builtin/credentials/anthropic/
   - Google Drive: Add a new "Google Drive OAuth2 API" credential by following the guide at https://docs.n8n.io/integrations/builtin/credentials/google/oauth-single-service/
3. Configure the "Set Data" node: Set your topics of interest (one per line) and add your Jina API key (obtain one from https://jina.ai/api-dashboard/key-manager).
4. Configure the Mattermost node: Update your Mattermost instance URL and set your Webhook ID and Channel. Follow the guide at https://developers.mattermost.com/integrate/webhooks/incoming/ for webhook setup.
5. Adjust the schedule (optional): The workflow is set to run every Monday at 6am. Modify the "Schedule Trigger" node if you need a different schedule.
6. Test the workflow: Run it manually to ensure all connections are working properly, and check the output to verify the reports are being generated correctly.

**How to customize this workflow to your needs**
This workflow can be customized in several ways to better suit your specific requirements:
- Topic selection: Modify the topics in the "Set Data" node to focus on industries or subjects relevant to your PR strategy, or add multiple topics to cover different client interests or market segments.
- Filtering criteria: Adjust the "Upvotes Requirement Filtering" node to change the minimum upvotes threshold, or modify the filtering conditions to include or exclude certain types of posts.
- Analysis parameters: Customize the prompts in the "Comments Analysis", "News Analysis", and "Stories Report" nodes to focus on specific aspects of the content or comments, and adjust the temperature settings in the Anthropic Chat Model nodes to control the creativity of the AI responses.
- Report format: Modify the "Set Final Report" node to change the structure or content of the final reports, adding or removing sections based on your specific reporting needs.
- Distribution method: Replace or supplement the Mattermost notification with email notifications, Slack messages, or other communication channels, and add additional storage options beyond Google Drive.
- Schedule frequency: Change the "Schedule Trigger" node to run the workflow more or less frequently, or set up multiple triggers for different topics or clients.
- Integration with other systems: Add nodes to integrate with your CRM, content management system, or project management tools, or create connections to automatically populate content calendars or task management systems.
by PUQcloud
Setting up the n8n workflow

**Overview**
The Docker n8n WHMCS module uses a specially designed workflow for n8n to automate deployment processes. The workflow provides an API interface for the module, receives specific commands, and connects via SSH to a server with Docker installed to perform predefined actions.

**Prerequisites**
You must have your own n8n server. Alternatively, you can use the official n8n cloud installations available at: n8n Official Site

**Installation Steps**
Install the required workflow on n8n. You have two options:
- Option 1: Use the latest version from the n8n marketplace. The latest workflow templates for our modules are available on the official n8n marketplace. Visit our profile to access all available templates: PUQcloud on n8n
- Option 2: Manual installation. Each module version comes with a workflow template file. You need to manually import this template into your n8n server.

**n8n Workflow API Backend Setup for WHMCS/WISECP**

Configure API Webhook and SSH Access
- Create a Basic Auth credential for the Webhook API block in n8n.
- Create an SSH credential for accessing a server with Docker installed.

Modify Template Parameters
In the Parameters block of the template, update the following settings:
- server_domain - must match the domain of the WHMCS/WISECP Docker server.
- clients_dir - directory where user data related to Docker and disks will be stored.
- mount_dir - default mount point for the container disk (recommended not to change).

Do not modify the following technical parameters: screen_left, screen_right.

Deploy-docker-compose
In the Deploy-docker-compose element, you can modify the Docker Compose configuration, which is generated in the following scenarios:
- When the service is created
- When the service is unlocked
- When the service is updated

nginx
In the nginx element, you can modify the configuration parameters of the web interface proxy server. The main section allows you to add custom parameters to the server block in the proxy server configuration file. The main_location section contains settings that will be added to the location / block of the proxy server configuration. Here you can define custom headers and other parameters specific to the root location.

Bash Scripts
Management of Docker containers and all related procedures on the server is carried out by executing Bash scripts generated in n8n. These scripts return either a JSON response or a string. All scripts are located in elements directly connected to the SSH element. You have full control over every script and can modify or execute it as needed.
by Muhammad Asadullah
Document Chat Bot with Automated RAG System

This workflow creates a conversational assistant that can answer questions based on your Google Drive documents. It automatically processes various file types and uses Retrieval-Augmented Generation (RAG) to provide accurate answers based on your document content.

**How It Works**
1. Monitors Google Drive for new documents: Automatically detects when files are created or updated in designated folders
2. Processes multiple file types: Handles PDFs, Excel spreadsheets, and Google Docs
3. Builds a knowledge base: Converts documents into searchable vector embeddings stored in Supabase
4. Provides a chat interface: Users can ask questions about their documents through a web interface
5. Retrieves relevant information: Uses RAG techniques to find and present the most relevant information

**Setup Steps** (estimated time: 25-30 minutes)
1. API credentials: Connect your OpenAI API key for text processing and embeddings
2. Google Drive integration: Set up Google Drive triggers to monitor specific folders
3. Supabase configuration: Configure the Supabase vector database for document storage
4. Chat interface setup: Deploy the web-based chat interface using the provided webhook

The workflow automatically chunks documents into manageable segments, generates embeddings, and stores them in a vector database for efficient retrieval. When users ask questions, the system finds the most relevant document sections and uses them to generate accurate, contextual responses.
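Chunking is the step that most affects retrieval quality. A minimal fixed-size chunker with overlap, as a plain-JavaScript sketch of what the workflow's splitter does; the 1000/200 sizes are common defaults, not the template's confirmed settings:

```javascript
// Sketch: split document text into overlapping chunks for embedding.
function chunkText(text, chunkSize = 1000, overlap = 200) {
  const chunks = [];
  // Overlap keeps sentences that straddle a boundary retrievable from both sides.
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached
  }
  return chunks;
}

// Each chunk is then embedded and stored as one row in the Supabase vector table.
const chunks = chunkText('...long document text...');
console.log(chunks.length);
```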
by Oneclick AI Squad
This enterprise-grade n8n workflow automates competitor monitoring on Instagram, from post fetching to AI-driven strategy alerts, using Claude AI, the Instagram API, and multi-channel notifications. It tracks trends, analyzes performance, and delivers actionable insights via WhatsApp and email, keeping your team ahead with zero manual effort.

**Key Features**
- **Daily competitor scanning** from Google Sheets
- **Post performance metrics** (engagement rate, trends) calculated automatically
- **AI-powered insights** using **Claude 3.5 Sonnet** for content and engagement strategies
- **Dual-channel alerts:** WhatsApp (Twilio) and email for instant delivery
- **Audit logs** in Google Sheets for historical trends
- **Scalable triggers:** daily schedule or webhook for ad-hoc checks

**Workflow Process**

| Step | Node | Description |
| ---- | ---- | ----------- |
| 1 | Schedule Trigger | Runs daily at 10 AM or via webhook (/competitor-alert) |
| 2 | Get Competitor List | Loads competitors from the Competitors sheet |
| 3 | Loop Over Competitors | Processes each competitor to avoid API limits |
| 4 | Get Competitor Posts | Fetches the last 10 posts via the Instagram Graph API |
| 5 | Calculate Performance Metrics | Computes average engagement and trend using a Code node |
| 6 | Generate AI Insights (Claude AI) | Analyzes data for 3 strategic bullet-point insights |
| 7 | Send Email Alert | Emails a detailed report to the team |
| 8 | Send WhatsApp Alert (Twilio) | Sends a concise alert via WhatsApp |
| 9 | Log Alert | Records metrics and insights in the AlertsLog sheet |
| 10 | End Workflow | Terminates execution |

**Setup Instructions**

1. Import the workflow
   - Open n8n > Workflows > Import from Clipboard
   - Paste the JSON workflow

2. Configure credentials

| Integration | Details |
| ----------- | ------- |
| Google Sheets | Service account with spreadsheet access |
| Instagram API | Business access token for media fetching |
| Claude AI | Anthropic API key for claude-3-5-sonnet-20241022 |
| Twilio | Credentials for WhatsApp messaging |
| SMTP/Email | SMTP or Gmail for email alerts |

3. Update spreadsheet IDs
   Ensure your Google Sheets include: Competitors and AlertsLog

4. Set triggers
   - **Webhook:** /webhook/competitor-alert (for on-demand runs)
   - **Schedule:** daily at 10:00 AM

5. Run a test
   Use manual execution to confirm:
   - Post fetching and metrics calculation
   - AI insights generation
   - WhatsApp/email delivery and sheet logging

**Google Sheets Structure**

Competitors

| competitorName | competitorUserId | industryFocus |
|----------------|------------------|---------------|
| BrandX | 1234567890 | Fashion |

AlertsLog

| competitor | avgEngagement | trend | insights | timestamp |
|------------|---------------|-------|----------|-----------|
| BrandX | 75.5 | Rising | - Bullet 1... | 2023-10-01T12:00:00Z |

**System Requirements**

| Requirement | Version/Access |
| ----------- | -------------- |
| n8n | v1.50+ (AI and messaging integrations supported) |
| Claude AI API | claude-3-5-sonnet-20241022 |
| Instagram Graph API | Business account access token |
| Twilio API | WhatsApp-enabled phone number |
| Google Sheets API | https://www.googleapis.com/auth/spreadsheets |
| SMTP | For email (e.g., Gmail OAuth) |

**Optional Enhancements**
- Add visual charts (e.g., engagement trends via Google Charts)
- Integrate Slack for team-wide alerts
- Use advanced metrics like reach/impressions via the Instagram Insights API
- Connect a CRM (HubSpot) to tag competitors
- Enable multi-platform monitoring (e.g., TikTok)
- Add threshold-based alerts (e.g., only when engagement increases by more than 20%)
- Export insights to Notion or Airtable for strategy docs

Result: a single automated system that monitors competitors, uncovers trends, and arms your team with AI strategies, delivered via WhatsApp and email with zero manual work. Get in touch with us for custom n8n automation!
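Step 5's Code node computes the engagement figures the rest of the pipeline depends on. A sketch under assumed conditions: `like_count` and `comments_count` are real Instagram Graph API media fields, but the trend rule below is illustrative, not the template's exact formula:

```javascript
// Hypothetical n8n Code node: average engagement and trend over the last 10 posts.
const posts = $input.all().map(item => item.json);

const engagement = posts.map(p => (p.like_count ?? 0) + (p.comments_count ?? 0));
const avg = engagement.reduce((a, b) => a + b, 0) / (engagement.length || 1);

// Illustrative trend rule: compare the 5 newest posts against the 5 before them.
const recent = engagement.slice(0, 5).reduce((a, b) => a + b, 0) / 5;
const older  = engagement.slice(5, 10).reduce((a, b) => a + b, 0) / 5;
const trend = recent > older * 1.1 ? 'Rising'
            : recent < older * 0.9 ? 'Falling'
            : 'Stable';

return [{ json: { avgEngagement: Number(avg.toFixed(1)), trend } }];
```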
by Abideen Bello
Generate daily audio newsletters from news headlines with AI

**Who's it for**
Perfect for content creators, podcasters, news enthusiasts, and busy professionals who want to create automated audio news content or stay informed through personalized audio briefings. Ideal for social media managers, newsletter creators, and anyone building audio-first content experiences.

**How it works**
This workflow creates a fully automated news-to-audio pipeline:
1. A schedule trigger fetches the latest news headlines from NewsAPI daily
2. AI processing rewrites each article into newsletter-style content using Claude
3. Content aggregation combines all processed articles into a cohesive newsletter
4. Script generation transforms the newsletter into a 2-minute audio-ready script
5. Text-to-speech converts the script into high-quality audio using OpenAI's voice models
6. Email delivery sends the audio newsletter as an attachment to subscribers

The workflow runs automatically on your chosen schedule, delivering fresh audio content without any manual intervention.

**How to set up**

Requirements
- **NewsAPI account** with an API key (free tier available)
- **OpenRouter API access** for the Claude model
- **OpenAI API account** for text-to-speech functionality
- **Gmail account** with OAuth2 access for email delivery
- **Basic understanding** of audio file handling (optional)

Step-by-step setup

1. Set your Schedule Trigger
- Configure the Schedule Trigger for your preferred timing (daily at 7 AM recommended)
- Consider your audience's timezone and optimal delivery times
- Set up monitoring to ensure consistent execution

2. Configure the news source
- Sign up for NewsAPI at newsapi.org (the free tier includes 100 requests/day)
- Replace YOUR_NEWSAPI_KEY with your actual API key in the HTTP Request node
- Customize the news query parameters (country, category, sources) to match your audience's interests
- Test the API endpoint to ensure it returns the expected data

3. Extract individual articles (Split Out)
- Takes the articles array from the NewsAPI response
- Creates separate items for each news article
- Enables individual processing of each story
- Prepares data for AI content generation

4. Set up AI model credentials
- Create an OpenRouter account for Claude access
- Add your OpenRouter API credentials in n8n
- Alternatively, replace with OpenAI GPT-4 if you prefer (update the model node accordingly)
- Configure rate limits and usage monitoring

5. Combine newsletter content (Aggregate)
- Collects output from all processed articles
- Renames the field to news for easy reference
- Prepares combined content for script generation
- Ensures no articles are lost in processing

6. Audio script generation
GPT-4 creates a 2-minute audio-ready script from the newsletter content, voiced by a "Max" presenter persona. Script features:
- 2-minute target duration
- Audio-friendly text (no special characters)
- Natural speaking flow and transitions
- Engaging introduction and conclusion

7. Configure OpenAI text-to-speech
- Add your OpenAI API credentials in n8n
- Choose your preferred voice model (options: alloy, echo, fable, onyx, nova, shimmer)
- Set audio quality preferences (standard vs. HD)
- Test the voice output with sample text

8. Customize email delivery
- Add your Gmail OAuth2 credentials
- Replace YOUR_EMAIL@example.com with your actual recipient email
- Update the sender name and business information in the email template
- Configure attachment settings for audio files
9. Test the complete pipeline
- Run a manual execution to test all components
- Verify news data is properly fetched and processed
- Check audio quality and duration
- Confirm email delivery with the audio attachment

**How to customize the workflow**

Advanced news filtering
- **Custom sources**: Replace NewsAPI with RSS feeds from specific publications
- **Topic filtering**: Add keyword filtering to focus on specific industries or topics
- **Multi-country support**: Fetch news from multiple regions and merge content
- **Sentiment analysis**: Filter out negative news or categorize by sentiment
- **Trending topics**: Integrate with social media APIs to include trending discussions

AI content enhancement
- **Voice persona**: Customize the AI prompt to create different presenter personalities (professional, casual, expert)
- **Length control**: Adjust script length for different formats (1-minute updates, 5-minute deep dives)
- **Multi-language support**: Generate newsletters in different languages based on subscriber preferences
- **Fact-checking**: Add verification steps to ensure the accuracy of AI-generated content
- **Source attribution**: Include proper citations and links to original articles

Audio production features
- **Voice variety**: Rotate between different OpenAI voices for engaging content
- **Background music**: Add intro/outro music using audio editing APIs
- **Speed control**: Adjust playback speed based on content type
- **Chapter markers**: Add timestamps for different news segments
- **Quality optimization**: Implement audio normalization and enhancement

Distribution enhancements
- **Multi-channel delivery**: Send to Slack, Discord, or team communication platforms
- **Podcast publishing**: Automatically upload to podcast platforms via RSS
- **Social media**: Post audio clips to Twitter, LinkedIn, or Instagram
- **Website integration**: Embed an audio player on your website automatically
- **Mobile app push**: Send notifications to mobile apps with audio links

Subscriber management
- **Mailchimp integration**: Build and manage subscriber lists automatically
- **Preference tracking**: Allow subscribers to choose news categories or frequency
- **Analytics tracking**: Monitor open rates, listening duration, and engagement
- **A/B testing**: Test different voice styles, content lengths, or delivery times
- **Segmentation**: Send different newsletters to different subscriber segments

Content workflow customization
- **Editorial review**: Add approval steps before content distribution
- **Content calendar**: Integrate with planning tools for scheduled content themes
- **Collaborative editing**: Include team review processes for content quality
- **Version control**: Maintain archives of previous newsletters for reference
- **Performance metrics**: Track which content types perform best

**Webhook Integration Examples**

Website integration

```javascript
// Add this to your website for manual newsletter requests
fetch('/webhook/trigger-newsletter', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    subscriber_email: 'user@example.com',
    topics: ['technology', 'business'],
    urgency: 'normal'
  })
});
```

Slack command integration
- Create slash commands to trigger newsletter generation on demand
- Allow team members to request newsletters on specific topics
- Integrate with Slack workflows for automated team briefings

Mobile app integration
- Use webhooks to trigger newsletters from mobile app interactions
- Create push notification systems for breaking news alerts
- Build in-app audio players for a seamless listening experience

**Troubleshooting**

Common issues and solutions

NewsAPI quota exceeded:
- Monitor your daily API usage in the NewsAPI dashboard
- Implement caching to reduce redundant requests
- Consider upgrading to a paid plan for higher limits
- Add fallback RSS feeds for when API limits are reached

AI model rate limiting:
- Implement exponential backoff for API requests (see the sketch at the end of this entry)
- Monitor token usage across Claude and OpenAI services
- Add queue systems for high-volume processing
- Consider switching to different models during peak times

Audio generation failures:
- Check OpenAI text-to-speech quotas and billing
- Validate text input for special characters that might cause issues
- Implement retry logic for failed audio generation
- Add a fallback to text-only newsletters when audio fails

Email delivery problems:
- Verify Gmail API quotas and sending limits
- Check audio file size limits (Gmail has a 25 MB attachment limit)
- Implement compression for large audio files
- Consider cloud storage links instead of direct attachments

Content quality issues:
- Fine-tune AI prompts for more consistent output
- Add content validation steps to check for accuracy
- Implement editorial guidelines in the AI instructions
- Create feedback loops to improve content over time

**Performance Optimization**

Workflow efficiency
- Process news articles in parallel where possible
- Implement smart caching for repeated content
- Optimize API calls to reduce latency
- Monitor execution times and identify bottlenecks

Cost management
- Track API costs across all services (NewsAPI, OpenRouter, OpenAI)
- Implement budget alerts and automatic shutoffs
- Optimize content length to reduce text-to-speech costs
- Consider batch processing during off-peak hours

Scalability preparation
- Design for multiple-subscriber support
- Plan for increased news volume during major events
- Prepare backup systems for service outages
- Document processes for team handoffs

**Security and Compliance**

API key protection
- Never expose API keys in workflow exports
- Use n8n's credential management exclusively
- Implement key rotation policies
- Monitor for unauthorized API usage

Content compliance
- Review AI-generated content for accuracy and bias
- Implement content filtering for inappropriate material
- Ensure proper attribution to original news sources
- Maintain editorial standards and fact-checking processes
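As referenced under "AI model rate limiting" above, here is a generic exponential-backoff wrapper in plain JavaScript, usable inside an n8n Code node. It is not part of the template itself, just a pattern for wrapping any rate-limited call:

```javascript
// Sketch: retry an async API call with exponential backoff and jitter.
async function withBackoff(fn, { retries = 5, baseMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries) throw err; // out of retries -- surface the error
      const delay = baseMs * 2 ** attempt + Math.random() * 100; // jitter avoids thundering herd
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}

// Usage: wrap any rate-limited call, e.g. a text-to-speech request.
// const audio = await withBackoff(() => callOpenAiTts(script)); // callOpenAiTts is hypothetical
```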