by 荒城直也
**Title:** Create daily AI news digest and send to Telegram

**Description:** Stay ahead of the rapidly evolving artificial intelligence landscape without the information overload. This workflow acts as your personal AI news editor, automatically curating, summarizing, and visualizing the top stories of the day, delivered directly to your Telegram. It goes beyond simple RSS aggregation by using an AI Agent to rewrite headlines and summaries into a digestible format, and includes a "Chat Mode" where you can ask follow-up questions about the news directly within the n8n interface.

## Who it's for

- **AI Enthusiasts & Researchers:** Keep up with the latest papers and releases without manually checking multiple sites.
- **Tech Professionals:** Get a morning briefing on industry trends to start your day informed.
- **Content Creators:** Find trending topics for newsletters or social media posts effortlessly.

## How it works

1. **News Aggregation:** Every morning at 8:00 AM, the workflow fetches RSS feeds from top tech sources (Google News AI, The Verge, and TechCrunch).
2. **Smart Filtering:** A Code node aggregates the articles, removes duplicates, and ranks them by recency to select the top 5 stories.
3. **AI Summarization:** An AI Agent (powered by OpenAI) analyzes the selected stories and writes a concise, engaging summary for each.
4. **Visual Generation:** DALL-E generates a unique, futuristic header image based on the day's news context.
5. **Delivery:** The digest is formatted with Markdown and emojis, then sent to your specified Telegram chat.
6. **Interactive Chat:** A separate branch allows you to chat with an AI Agent via the n8n Chat interface to discuss the news or ask general AI questions.

## How to set up

1. **Configure Credentials:** Set up your OpenAI API credential and your Telegram API credential.
2. **Get Telegram Chat ID:** Create a bot with @BotFather on Telegram, send a message to your bot, then use @userinfobot to find your numeric Chat ID.
3. **Update Workflow Settings:** Open the Workflow Configuration node and paste your Chat ID into the `telegramChatId` value field.
4. **Activate:** Toggle the workflow to "Active" to enable the daily schedule.

## Requirements

- **n8n Version:** Must support LangChain nodes.
- **OpenAI Account:** API key with access to GPT-4o-mini (or your preferred model) and DALL-E 3.
- **Telegram Account:** To create a bot and receive messages.

## How to customize

- **Change News Sources:** Edit the RSS URLs in the Workflow Configuration node to track different topics (e.g., Crypto, Finance, Sports).
- **Adjust Personality:** Modify the system prompt in the AI News Summarizer Agent node to change the tone of the summaries (e.g., "explain it like I'm 5" or "highly technical").
- **Change Schedule:** Update the Daily 8 AM Trigger node to your preferred time zone and frequency.
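The "Smart Filtering" Code node described above (deduplicate, rank by recency, keep the top 5) can be sketched roughly as follows. This is a minimal illustration, assuming each merged RSS item exposes `title`, `link`, and `pubDate` fields; actual field names vary by feed.

```javascript
// Deduplicate merged RSS items by normalized title, sort newest-first,
// and keep the top 5. Field names (title, link, pubDate) are assumptions
// about the upstream RSS nodes.
function selectTopStories(articles, limit = 5) {
  const seen = new Set();
  const unique = [];
  for (const a of articles) {
    // Normalize the title so near-identical headlines collapse together.
    const key = a.title.toLowerCase().replace(/[^a-z0-9]+/g, ' ').trim();
    if (!seen.has(key)) {
      seen.add(key);
      unique.push(a);
    }
  }
  unique.sort((x, y) => new Date(y.pubDate) - new Date(x.pubDate));
  return unique.slice(0, limit);
}

// Inside an n8n Code node you would apply this to the incoming items, e.g.:
// return selectTopStories(items.map(i => i.json)).map(json => ({ json }));
```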
by Cheng Siong Chin
## How It Works

This workflow automates enterprise ticket management by combining AI-powered classification with knowledge base retrieval. It receives support tickets via webhook and routes them through multiple AI models (OpenAI ChatGPT, NVIDIA's text classification APIs, and embeddings-based search) to determine the optimal resolution strategy. The system generates contextual diagnostic logs, formats responses, updates ticket systems, notifies engineers when escalation is needed, and integrates with knowledge bases for continuous learning. It solves the critical problem of manual ticket sorting and delayed responses by automating intelligent triage, reducing resolution time, and ensuring consistent quality across support operations.

Target audience includes support operations teams, technical support managers, and enterprises managing high-volume ticket queues that want to improve efficiency and SLA compliance.

## Setup Steps

1. Configure the OpenAI API key in credentials.
2. Add NVIDIA API credentials for embedding and classification models.
3. Set up Google Sheets for knowledge base storage and retrieval.
4. Connect your ticketing system (Jira, Zendesk, or webhook) for incoming tickets.
5. Link a notification service (Gmail or Slack) for engineer alerts.
6. Map custom fields to your ticket system schema.

## Prerequisites

- OpenAI API account with GPT access.
- NVIDIA API credentials (Embeddings & Classification).
- Google Sheets for KB management.
- Ticketing system with webhook capability.

## Use Cases

- SaaS support teams triaging 100+ daily tickets, reducing manual sorting by 80%.
- Technical support escalating complex issues intelligently while documenting solutions.

## Customization

- Swap OpenAI models for Anthropic's Claude APIs.
- Replace Google Sheets with database systems (PostgreSQL, Airtable).

## Benefits

- Reduces manual ticket sorting by 70-80%, freeing support staff for complex issues.
- Decreases average resolution time through intelligent routing.
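The embeddings-based knowledge base search mentioned above boils down to ranking stored entries by vector similarity to the incoming ticket. A minimal sketch, assuming embeddings have already been produced by the NVIDIA embedding model (the actual API call is omitted and the vectors are treated as precomputed):

```javascript
// Rank knowledge-base entries by cosine similarity to a query embedding.
// The vectors are assumed to come from an upstream embedding model call.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function topMatches(queryVec, kbEntries, k = 3) {
  return kbEntries
    .map(e => ({ ...e, score: cosine(queryVec, e.vector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

In the workflow, the top matches would be fed into the response-formatting step as context for the AI model.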
by Abdullah
# Daily Cyber News Digest to Telegram

**Workflow Created By:** Abdullah Dilshad 📧 iamabdullahdishad@gmail.com

Stay informed with automated daily summaries. This workflow aggregates cyber news from multiple trusted sources, uses AI to intelligently select and summarize the top 5 most relevant articles, and delivers a clean, concise digest directly to your Telegram chat every morning at 10:00 AM.

## What This Workflow Does

- **Collects Data:** Fetches cybersecurity news from multiple global APIs.
- **Filters Noise:** Uses AI to discard irrelevant updates.
- **Summarizes:** Generates short, professional summaries (1–2 sentences).
- **Delivers:** Automatically sends a formatted digest to Telegram within message length limits.

## How It Works

1. **Schedule Trigger:** Runs automatically every day at 10:00 AM (customizable).
2. **News Collection:** Fetches articles using the keyword "cyber" from GNews and NewsAPI.
3. **Data Processing:** Merges articles from both sources into a single, unified dataset.
4. **AI-Powered Selection:** OpenAI analyzes all fetched articles and intelligently selects the top 5 most relevant cybersecurity stories.
5. **Smart Summarization:** Each article is condensed into 1–2 clear sentences, including the publication date, source name, and article link.
6. **Telegram Delivery:** Sends a clean, formatted digest to your specified Telegram chat, keeping the total message length under Telegram's 4096-character limit.

## Setup Instructions

1. **Get API Keys:** Sign up for free API keys from GNews.io and NewsAPI.org.
2. **Connect Accounts:** Add your Telegram and OpenAI credentials in n8n.
3. **Configure Telegram:** Enter your Telegram Chat ID in the "Send a Text Message" node.
4. **Customize the Schedule:** Change the trigger time if you prefer delivery at a different hour.

## Customization & Use Cases

This workflow is fully reusable and scalable. You can replace the keyword "cyber" to track any topic relevant to your needs.

Example topics:

- 🤖 Artificial Intelligence (AI)
- 💰 Cryptocurrency & Blockchain
- 🚀 Startups & Venture Capital
- 📱 Consumer Technology
- 🏭 Industry-specific updates

Note: This workflow is designed to be adapted for individual tracking, team updates, or competitor analysis.
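The "within message length limits" step above has to respect Telegram's 4096-character ceiling for a single message. One way to enforce it is to drop whole article summaries from the end of the digest until it fits; a minimal sketch, where the `"\n\n"` separator between summaries is an assumption about how the digest is assembled:

```javascript
// Telegram rejects messages longer than 4096 characters, so drop whole
// summaries from the end of the digest until the message fits.
const TELEGRAM_LIMIT = 4096;

function fitToTelegram(summaries, separator = '\n\n') {
  const kept = [];
  let length = 0;
  for (const s of summaries) {
    const extra = (kept.length ? separator.length : 0) + s.length;
    if (length + extra > TELEGRAM_LIMIT) break; // next summary would overflow
    kept.push(s);
    length += extra;
  }
  return kept.join(separator);
}
```

An alternative is splitting the digest across multiple messages, which preserves every story at the cost of extra sends.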
by Cj Elijah Garay
# Discord AI Content Moderator with Learning System

This n8n template demonstrates how to automatically moderate Discord messages using AI-powered content analysis that learns from your community standards. It continuously monitors your server, intelligently flags problematic content while allowing context-appropriate language, and provides a complete audit trail for all moderation actions.

Use cases are many: try moderating a forex trading community where enthusiasm runs high, protecting a gaming server from toxic behavior while keeping banter alive, or maintaining professional standards in a business Discord without being overly strict!

## Good to know

- This workflow uses OpenAI's GPT-5 Mini model, which incurs API costs per message analyzed (approximately $0.001-0.003 per moderation check depending on message volume).
- The workflow runs every minute by default; adjust the Schedule Trigger interval based on your server activity and budget.
- Discord API rate limits apply; the batch processor includes 1.5-second delays between deletions to prevent rate limiting.
- You'll need a Google Sheet to store training examples; a template link is provided in the workflow notes.
- The AI analyzes context and intent, not just keywords: "I f\*cking love this community" won't be deleted, but "you guys are sh\*t" will be.
- Deleted messages cannot be recovered from Discord; the admin notification channel preserves the content for review.

## How it works

1. The Schedule Trigger activates every minute to check for new messages requiring moderation.
2. The workflow fetches training data from Google Sheets containing labeled examples of messages to delete (with reasons) and messages to keep.
3. It retrieves the last 10 messages from your specified Discord channel using the Discord API.
4. A preparation node formats both the training examples and recent messages into a structured prompt with a unique index for each message.
5. The AI Agent (powered by GPT-5 Mini) analyzes each message against your community standards, considering intent and context rather than just keywords.
6. The AI returns a JSON array of message indices that violate guidelines (e.g., [0, 2, 5]).
7. A parsing node extracts these indices, validates them, removes duplicates, and maps them to actual Discord message objects.
8. The batch processor loops through each flagged message one at a time to prevent API rate limiting and ensure proper error handling.
9. Each message is deleted from Discord using the exact message ID.
10. A 1.5-second wait prevents hitting Discord's rate limits between operations.
11. Finally, an admin notification is posted to your designated admin channel with the deleted message's author, ID, and original content for audit purposes.

## How to use

1. Replace the Discord Server ID, Moderated Channel ID, and Admin Channel ID in the "Edit Fields" node with your server's specific IDs.
2. Create a copy of the provided Google Sheets template with columns: message_content, should_delete (YES/NO), and reason.
3. Connect your Discord OAuth2 credentials (requires bot permissions for reading messages, deleting messages, and posting to channels).
4. Add your OpenAI API key to access GPT-5 Mini.
5. Customize the AI Agent's system message to reflect your specific community standards and tone.
6. Adjust the message fetch limit (default: 10) based on your server activity; higher limits cost more per run but catch more violations.
7. Consider changing the Schedule Trigger from every minute to every 3-5 minutes if you have a smaller community.

## Requirements

- Discord OAuth2 credentials for bot authentication with message read, delete, and send permissions
- Google Sheets API connection for accessing the training data knowledge base
- OpenAI API key for GPT-5 Mini model access
- A Google Sheet formatted with message examples, deletion labels, and reasoning
- Discord Server ID and Channel IDs (moderated + admin), which you can get by enabling Developer Mode in Discord

## Customising this workflow

- Try building an emoji-based feedback system where admins can react to notifications with ✅ (correct deletion) or ❌ (wrong deletion) to automatically update your training data.
- Add a severity scoring system that issues warnings for minor violations before deleting messages.
- Implement a user strike system that tracks repeat offenders and automatically applies temporary mutes or bans.
- Expand the AI prompt to categorize violations (spam, harassment, profanity, etc.) and route different types to different admin channels.
- Create a weekly digest that summarizes moderation statistics and trending violation types.
- Add support for monitoring multiple channels by duplicating the Discord message fetch nodes with different channel IDs.
- Integrate with a database instead of Google Sheets for faster lookups and more sophisticated training data management.

## If you have questions

Feel free to contact me here: elijahmamuri@gmail.com / elijahfxtrading@gmail.com
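The parsing node described in the "How it works" steps (extract the AI's index array, validate, deduplicate) can be sketched as below. The helper name and the assumption that the model's reply contains a bare JSON array, possibly surrounded by prose, are illustrative, not part of the template:

```javascript
// Parse the AI's reply into a safe list of message indices: pull out the
// first JSON array, keep only in-range integers, and drop duplicates.
// Anything unparseable flags nothing, so a malformed reply never deletes.
function parseFlaggedIndices(aiReply, messageCount) {
  const match = aiReply.match(/\[[^\]]*\]/);
  if (!match) return [];
  let parsed;
  try {
    parsed = JSON.parse(match[0]);
  } catch {
    return [];
  }
  const valid = parsed.filter(
    i => Number.isInteger(i) && i >= 0 && i < messageCount
  );
  return [...new Set(valid)];
}
```

Failing closed (returning an empty list on any parse error) matters here because deletions are irreversible.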
by Lee Lin
## How It Works

### Top Branch Workflow

1. **The Data Scientist:**
   - **Ingest:** Pulls historical sales data from Google Sheets.
   - **Math Engine:** Runs 7 statistical algorithms (e.g., Seasonal Naive, Linear Trend, Regression), backtests them against your history, and scientifically selects the winner with the lowest error rate.
2. **The Data Analyst:**
   - **Interpret:** The AI Agent takes the mathematical output and translates it into business insights, assigning confidence scores based on error margins.
   - **Report:** Generates a visual trend chart (PNG) and sends a complete briefing to your phone.

### Bottom Branch Workflow

3. **The Consultant:** AI Agent 2 handles follow-up questions. It pulls the latest analysis context and checks historical rate data to give an informed answer.
   - **Recall:** When you ask a question via WhatsApp, the bot retrieves the saved forecast state.
   - **Answer:** It acts as an on-demand analyst, comparing current forecasts against historical actuals to give you instant answers.

## Setup Steps

1. **Google Sheet:** Prepare columns: Year, Month, Sales. Map the Sheet ID in the "Workflow Configuration" node.
2. **Forecast Engine:** No config needed. It automatically detects seasonality vs. linear trends.
3. **Database:** Create a table `latest_forecast` to store the JSON output.
4. **Credentials:** Connect Google Sheets, OpenAI, and WhatsApp.

## Use Cases & Benefits

- **For Business Owners:** Gain enterprise-grade forecasting on autopilot. Always have a sophisticated financial outlook running in the background 24/7.
- **For Sales Leaders:** Get immediate visibility into future revenue trends. Bypass the wait for end-of-month manual reports and get a strategic "pulse check" delivered instantly to your phone.
- 🤖 **Virtual Data Team:** Instantly add the capabilities of a Data Scientist and Data Analyst to your business or division. It works alongside your existing team to handle the heavy lifting, or stands in as your dedicated automated department.
- 🧠 **Precision & Trust:** Combines the best of both worlds: rigorous, deterministic code for the math (no hallucinations) and advanced AI for the strategic explanation. You get numbers you can trust with context you can use.
- ⚡ **Decision-Ready Insights:** Stop digging through dashboards. High-level intelligence is pushed directly to you on WhatsApp, allowing you to make faster, data-driven decisions from anywhere.

📬 **Want to customize this?** leelin.business@gmail.com
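The backtest-and-select step in the Math Engine can be sketched as below: hold out the last few periods, score each candidate model by mean absolute percentage error (MAPE), and keep the winner. The two candidates shown (naive and linear trend) are simplified stand-ins for the workflow's seven algorithms, and MAPE is an assumed error metric:

```javascript
// Score a forecast against held-out actuals with MAPE.
function mape(actual, predicted) {
  const errs = actual.map((a, i) => Math.abs((a - predicted[i]) / a));
  return errs.reduce((s, e) => s + e, 0) / actual.length;
}

const models = {
  // Repeat the last training value for every future period.
  naive: train => h => Array(h).fill(train[train.length - 1]),
  // Extend the average step between the first and last training values.
  linearTrend: train => {
    const step = (train[train.length - 1] - train[0]) / (train.length - 1);
    return h => Array.from({ length: h },
      (_, i) => train[train.length - 1] + step * (i + 1));
  },
};

// Backtest every candidate on the last `holdout` periods and pick the
// one with the lowest error.
function selectBestModel(series, holdout = 3) {
  const train = series.slice(0, -holdout);
  const test = series.slice(-holdout);
  let best = null;
  for (const [name, fit] of Object.entries(models)) {
    const error = mape(test, fit(train)(holdout));
    if (!best || error < best.error) best = { name, error };
  }
  return best;
}
```

Because the selection is deterministic code rather than an LLM call, the "no hallucinations" claim above holds for the numbers themselves; the AI layer only explains them.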
by Punit
# WordPress AI Content Creator

## Overview

Transform a few keywords into professionally written, SEO-optimized WordPress blog posts with custom featured images. This workflow leverages AI to research topics, structure content, write engaging articles, and publish them directly to your WordPress site as drafts ready for review.

## What This Workflow Does

### Core Features

- **Keyword-to-Article Generation:** Converts simple keywords into comprehensive, well-structured articles
- **Intelligent Content Planning:** Uses AI to create logical chapter structures and content flow
- **Wikipedia Integration:** Researches factual information to ensure content accuracy and depth
- **Multi-Chapter Writing:** Generates coherent, contextually aware content across multiple sections
- **Custom Image Creation:** Generates relevant featured images using DALL-E based on article content
- **SEO Optimization:** Creates titles, subtitles, and content optimized for search engines
- **WordPress Integration:** Automatically publishes articles as drafts with proper formatting and featured images

### Business Value

- **Content Scale:** Produce high-quality blog posts in minutes instead of hours
- **Research Efficiency:** Automatically incorporates factual information from reliable sources
- **Consistency:** Maintains professional tone and structure across all generated content
- **SEO Benefits:** Creates search-engine-friendly content with proper HTML formatting
- **Cost Savings:** Reduces the need for external content creation services

## Prerequisites

### Required Accounts & Credentials

- WordPress site with REST API enabled
- OpenAI API access (GPT-4 and DALL-E models)
- WordPress Application Password or JWT authentication
- Public-facing n8n instance for form access (or n8n Cloud)

### Technical Requirements

- WordPress REST API v2 enabled (standard on most WordPress sites)
- WordPress user account with publishing permissions
- n8n instance with the LangChain nodes package installed

## Setup Instructions

### Step 1: WordPress Configuration

1. **Enable REST API** (usually enabled by default): Check that `yoursite.com/wp-json/wp/v2/` returns JSON data. If not, contact your hosting provider or install a REST API plugin.
2. **Create Application Password:** In WordPress Admin, go to Users > Profile, scroll to "Application Passwords", add a new password named "n8n Integration", and copy the generated password (save it securely).
3. **Get WordPress Site URL:** Note your full WordPress site URL (e.g., https://yourdomain.com).

### Step 2: OpenAI Configuration

1. **Obtain OpenAI API Key:** Visit OpenAI Platform and create an API key with access to GPT-4 models (for content generation) and DALL-E (for image creation).
2. **Add OpenAI Credentials in n8n:** Navigate to Settings > Credentials, add an "OpenAI API" credential, and enter your API key.

### Step 3: WordPress Credentials in n8n

Add WordPress API credentials: in n8n, go to Settings > Credentials > "WordPress API" and enter your WordPress site URL, your WordPress username, and the application password from Step 1.

### Step 4: Update Workflow Settings

1. **Configure Settings Node:** Open the "Settings" node and replace the `wordpress_url` value with your actual WordPress URL. Keep other settings as default or customize as needed.
2. **Update Credential References:** Ensure all WordPress nodes reference your WordPress credentials, and verify OpenAI nodes use your OpenAI credentials.

### Step 5: Deploy Form (Production Use)

1. **Activate Workflow:** Toggle the workflow to "Active" status and note the webhook URL from the Form Trigger node.
2. **Test Form Access:** Copy the form URL, test a form submission with sample data, and verify the workflow execution completes successfully.

## Configuration Details

### Form Customization

The form accepts three key inputs:

- **Keywords:** Comma-separated topics for article generation
- **Number of Chapters:** 1-10 chapters for content structure
- **Max Word Count:** Total article length control

You can modify form fields by editing the "Form" trigger node: add additional input fields (category, author, publish date), change field types (dropdown, checkboxes, file upload), or modify validation rules and requirements.

### AI Content Parameters

**Article Structure Generation.** The "Create post title and structure" node uses these parameters:

- **Model:** GPT-4-1106-preview for enhanced reasoning
- **Max Tokens:** 2048 for comprehensive structure planning
- **JSON Output:** Structured data for subsequent processing

**Chapter Writing.** The "Create chapters text" node configuration:

- **Model:** GPT-4-0125-preview for consistent writing quality
- **Context Awareness:** Each chapter knows about the preceding/following content
- **Word Count Distribution:** Automatically calculates per-chapter length
- **Coherence Checking:** Ensures smooth transitions between sections

### Image Generation Settings

DALL-E parameters in "Generate featured image":

- **Size:** 1792x1024 (optimized for WordPress featured images)
- **Style:** Natural (photographic look)
- **Quality:** HD (higher quality output)
- **Prompt Enhancement:** Adds photography keywords for better results

## Usage Instructions

### Basic Workflow

1. **Access the Form:** Navigate to the form URL provided by the Form Trigger. Enter your desired keywords (e.g., "artificial intelligence, machine learning, automation"), select the number of chapters (3-5 recommended for most topics), and set the word count (1000-2000 words is typical).
2. **Submit and Wait:** Click submit to trigger the workflow. Processing takes 2-5 minutes depending on article length; monitor the n8n execution log for progress.
3. **Review Generated Content:** Check WordPress admin for the new draft post, review the article structure and content quality, verify the featured image is properly attached, and edit as needed before publishing.

### Advanced Usage

**Custom Prompts.** Modify AI prompts to change:

- **Writing Style:** Formal, casual, technical, conversational
- **Target Audience:** Beginners, experts, general public
- **Content Focus:** How-to guides, opinion pieces, news analysis
- **SEO Strategy:** Keyword density, meta descriptions, heading structure

**Bulk Content Creation.** For multiple articles: create separate form submissions for each topic, schedule workflow executions with different keywords, use CSV upload to process multiple keyword sets, or implement a queue system for high-volume processing.

## Expected Outputs

### Article Structure

Generated articles include:

- **SEO-Optimized Title:** Compelling, keyword-rich headline
- **Descriptive Subtitle:** Supporting context for the main title
- **Introduction:** ~60 words introducing the topic
- **Chapter Sections:** Logical flow with HTML formatting
- **Conclusions:** ~60 words summarizing key points
- **Featured Image:** Custom DALL-E-generated visual

### Content Quality Features

- **Factual Accuracy:** Wikipedia integration ensures reliable information
- **Proper HTML Formatting:** Bold, italic, and list elements for readability
- **Logical Flow:** Chapters build upon each other coherently
- **SEO Elements:** Optimized for search engine visibility
- **Professional Tone:** Consistent, engaging writing style

### WordPress Integration

- **Draft Status:** Articles saved as drafts for review
- **Featured Image:** Automatically uploaded and assigned
- **Proper Formatting:** HTML preserved in the WordPress editor
- **Metadata:** Title and content properly structured

## Troubleshooting

### Common Issues

**"No Article Structure Generated."** Cause: the AI couldn't create a valid structure from the keywords. Solutions: use more specific, descriptive keywords; reduce the number of chapters requested; check OpenAI API quotas and usage; verify keywords are in English (the default language).

**"Chapter Content Missing."** Cause: individual chapter generation failed. Solutions: increase max tokens in the chapter generation node; simplify chapter prompts; check for API rate limiting; verify internet connectivity for the Wikipedia tool.

**"WordPress Publication Failed."** Cause: authentication or permission issues. Solutions: verify WordPress credentials are correct; check that the WordPress user has publishing permissions; ensure the WordPress REST API is accessible; test WordPress URL accessibility.

**"Featured Image Not Attached."** Cause: image generation or upload failure. Solutions: check DALL-E API access and quotas; verify image upload permissions in WordPress; review image file size and format compatibility; test a manual image upload to WordPress.

### Performance Optimization

**Large Articles (2000+ words):**

- Increase timeout values in HTTP request nodes
- Consider splitting very long articles into multiple posts
- Implement progress tracking for user feedback
- Add retry mechanisms for failed API calls

**High-Volume Usage:**

- Implement a queue system for multiple simultaneous requests
- Add rate limiting to respect OpenAI API limits
- Consider batch processing for efficiency
- Monitor and optimize token usage

## Customization Examples

### Different Content Types

- **Product Reviews:** Modify prompts to include pros and cons sections, feature comparisons, rating systems, and purchase recommendations.
- **Technical Tutorials:** Adjust the structure for step-by-step instructions, code examples, prerequisites sections, and troubleshooting guides.
- **News Articles:** Configure for a who/what/when/where/why structure, quote integration, fact-checking emphasis, and timeline organization.

### Alternative Platforms

Replace WordPress with another CMS:

- **Ghost:** Use the Ghost API for publishing
- **Webflow:** Integrate with Webflow CMS
- **Strapi:** Connect to a headless CMS
- **Medium:** Publish to the Medium platform

### Different AI Models

- **Claude:** Replace OpenAI with Anthropic's Claude
- **Gemini:** Use Google's Gemini for content generation
- **Local Models:** Integrate with self-hosted AI models
- **Multiple Models:** Use different models for different tasks

### Enhanced Features

**SEO Optimization.** Add nodes for:

- **Meta Description Generation:** AI-created descriptions
- **Tag Suggestions:** Relevant WordPress tags
- **Internal Linking:** Suggest related content links
- **Schema Markup:** Add structured data

**Content Enhancement.** Include additional processing:

- **Plagiarism Checking:** Verify content originality
- **Readability Analysis:** Assess content accessibility
- **Fact Verification:** Multiple-source confirmation
- **Image Optimization:** Compress and optimize images

## Security Considerations

### API Security

- Store all credentials securely in the n8n credential system
- Use environment variables for sensitive configuration
- Regularly rotate API keys and passwords
- Monitor API usage for unusual activity

### Content Moderation

- Review generated content before publishing
- Implement content filtering for inappropriate material
- Consider the legal implications of auto-generated content
- Maintain editorial oversight and fact-checking

### WordPress Security

- Use application passwords instead of your main account password
- Limit WordPress user permissions to the minimum required
- Keep WordPress and plugins updated
- Monitor for unauthorized access attempts

## Legal and Ethical Considerations

### Content Ownership

- Understand OpenAI's terms regarding generated content
- Consider copyright implications for Wikipedia-sourced information
- Implement proper attribution where required
- Review content licensing requirements

### Disclosure Requirements

- Consider disclosing AI-generated content to readers
- Follow platform-specific guidelines for automated content
- Ensure compliance with advertising and content standards
- Respect intellectual property rights

## Support and Maintenance

### Regular Maintenance

- Monitor OpenAI API usage and costs
- Update AI prompts based on output quality
- Review and update Wikipedia search strategies
- Optimize workflow performance based on usage patterns

### Quality Assurance

- Regularly review generated content quality
- Implement feedback loops for improvement
- Test the workflow with diverse keyword sets
- Monitor WordPress site performance impact

### Updates and Improvements

- Stay updated with OpenAI model improvements
- Monitor n8n platform updates for new features
- Engage with the community for workflow enhancements
- Document custom modifications for future reference

## Cost Optimization

### OpenAI Usage

- Monitor token consumption patterns
- Optimize prompts for efficiency
- Consider using different models for different tasks
- Implement usage limits and budgets

### Alternative Approaches

- Use local AI models for cost reduction
- Implement caching for repeated topics
- Batch similar requests for efficiency
- Consider hybrid human-AI content creation

## License and Attribution

This workflow template is provided under the MIT license. Attribution to the original creator is appreciated when sharing or modifying. Generated content is subject to OpenAI's usage policies and terms of service.
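The "Word Count Distribution" behavior mentioned under Chapter Writing (intro of ~60 words, conclusion of ~60 words, the rest split across chapters) can be sketched as follows. The even split across chapters is an assumption about how the node divides the remaining budget:

```javascript
// Distribute a total word budget across an intro (~60 words), N chapters,
// and a conclusion (~60 words). The even per-chapter split is an assumed
// simplification of the node's behavior.
function planWordCounts(totalWords, numChapters) {
  const INTRO = 60, CONCLUSION = 60;
  // Guard against budgets too small to cover intro + conclusion.
  const body = Math.max(totalWords - INTRO - CONCLUSION, numChapters);
  const perChapter = Math.floor(body / numChapters);
  return {
    intro: INTRO,
    chapters: Array(numChapters).fill(perChapter),
    conclusion: CONCLUSION,
  };
}
```

For the recommended 1500-word, 4-chapter article, this budgets roughly 345 words per chapter after the fixed intro and conclusion.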
by Incrementors
# Wikipedia to LinkedIn AI Content Poster with Image via Bright Data

## 📋 Overview

Automatically scrapes Wikipedia articles, generates AI-powered LinkedIn summaries with custom images, and posts professional content to LinkedIn using Bright Data extraction and intelligent content optimization.

## 🚀 How It Works

The workflow follows these simple steps:

1. **Article Input:** The user submits a Wikipedia article name through a simple form interface
2. **Data Extraction:** Bright Data scrapes the Wikipedia article content, including the title and full text
3. **AI Summarization:** Advanced AI models (OpenAI GPT-4 or Claude) create professional LinkedIn-optimized summaries under 2000 characters
4. **Image Generation:** Ideogram AI creates relevant visual content based on the article summary
5. **LinkedIn Publishing:** Automatically posts the summary with the generated image to your LinkedIn profile
6. **URL Generation:** Provides a shareable LinkedIn post URL for easy access and sharing

## ⚡ Setup Requirements

Estimated setup time: 10-15 minutes

### Prerequisites

- n8n instance (self-hosted or cloud)
- Bright Data account with Wikipedia dataset access
- OpenAI API account (for GPT-4 access)
- Anthropic API account (for Claude access; optional)
- Ideogram AI account (for image generation)
- LinkedIn account with API access

## 🔧 Configuration Steps

### Step 1: Import Workflow

1. Copy the provided JSON workflow file
2. In n8n: Navigate to Workflows → + Add workflow → Import from JSON
3. Paste the JSON content and click Import
4. Save the workflow with a descriptive name

### Step 2: Configure API Credentials

**🌐 Bright Data Setup**

1. Go to Credentials → + Add credential → Bright Data API
2. Enter your Bright Data API token
3. Replace `BRIGHT_DATA_API_KEY` in all HTTP request nodes
4. Test the connection to ensure access

**🤖 OpenAI Setup**

1. Configure OpenAI credentials in n8n
2. Ensure GPT-4 model access
3. Link credentials to the "OpenAI Chat Model" node
4. Test API connectivity

**🎨 Ideogram AI Setup**

1. Obtain an Ideogram AI API key
2. Replace `IDEOGRAM_API_KEY` in the "Image Generate" node
3. Configure image generation parameters
4. Test image generation functionality

**💼 LinkedIn Setup**

1. Set up LinkedIn OAuth2 credentials in n8n
2. Replace `LINKEDIN_PROFILE_ID` with your profile ID
3. Configure posting permissions
4. Test posting functionality

### Step 3: Configure Workflow Parameters

Update node settings:

- **Form Trigger:** Customize the form title and field labels as needed
- **AI Agent:** Adjust the system message for different content styles
- **Image Generate:** Modify image resolution and rendering speed settings
- **LinkedIn Post:** Configure additional fields like hashtags or mentions

### Step 4: Test the Workflow

Testing recommendations:

1. Start with a simple Wikipedia article (e.g., "Artificial Intelligence")
2. Monitor each node execution for errors
3. Verify the generated summary quality
4. Check image generation and LinkedIn posting
5. Confirm the final LinkedIn URL generation

## 🎯 Usage Instructions

### Running the Workflow

1. **Access the Form:** Use the generated webhook URL to access the submission form
2. **Enter Article Name:** Type the exact Wikipedia article title you want to process
3. **Submit Request:** Click submit to start the automated process
4. **Monitor Progress:** Check the n8n execution log for real-time progress
5. **View Results:** The workflow will return a LinkedIn post URL upon completion

### Expected Output

**📝 Content Summary**

- Professional LinkedIn-optimized text
- Under 2000 characters
- Engaging and informative tone
- Bullet points for readability

**🖼️ Generated Image**

- High-quality AI-generated visual
- 1280x704 resolution
- Relevant to article content
- Professional appearance

**🔗 LinkedIn Post**

- Published to your LinkedIn profile
- Includes both text and image
- Shareable public URL
- Professional formatting

## 🛠️ Customization Options

### Content Personalization

- **AI Prompts:** Modify the system message in the AI Agent node to change the writing style
- **Character Limits:** Adjust summary length requirements
- **Tone Settings:** Change from professional to casual or technical
- **Hashtag Integration:** Add relevant hashtags to LinkedIn posts

### Visual Customization

- **Image Style:** Modify Ideogram prompts for different visual styles
- **Resolution:** Change image dimensions based on LinkedIn requirements
- **Rendering Speed:** Balance between speed and quality
- **Brand Elements:** Include company logos or brand colors

## 🔍 Troubleshooting

### Common Issues & Solutions

**⚠️ Bright Data Connection Issues**

- Verify the API key is correctly configured
- Check dataset access permissions
- Ensure sufficient API credits
- Validate that the Wikipedia article exists

**🤖 AI Processing Errors**

- Check OpenAI API quotas and limits
- Verify model access permissions
- Review input text length and format
- Test with simpler article content

**🖼️ Image Generation Failures**

- Validate the Ideogram API key
- Check image prompt content
- Verify API usage limits
- Test with shorter prompts

**💼 LinkedIn Posting Issues**

- Re-authenticate LinkedIn OAuth
- Check posting permissions
- Verify profile ID configuration
- Test with shorter content

## ⚡ Performance & Limitations

### Expected Processing Times

- **Wikipedia Scraping:** 30-60 seconds
- **AI Summarization:** 15-30 seconds
- **Image Generation:** 45-90 seconds
- **LinkedIn Posting:** 10-15 seconds
- **Total Workflow:** 2-4 minutes per article

### Usage Recommendations

Best practices:

- Use well-known Wikipedia articles for better results
- Monitor API usage across all services
- Test content quality before bulk processing
- Respect LinkedIn posting frequency limits
- Keep a backup of successful configurations

## 📊 Use Cases

- **📚 Educational Content:** Create engaging educational posts from Wikipedia articles on science, history, or technology topics.
- **🏢 Thought Leadership:** Transform complex topics into accessible LinkedIn content to establish industry expertise.
- **📰 Content Marketing:** Generate regular, informative posts to maintain an active LinkedIn presence with minimal effort.
- **🔬 Research Sharing:** Quickly summarize and share research findings or scientific discoveries with your network.
🎉 Conclusion This workflow provides a powerful, automated solution for creating professional LinkedIn content from Wikipedia articles. By combining web scraping, AI summarization, image generation, and social media posting, you can maintain an active and engaging LinkedIn presence with minimal manual effort. The workflow is designed to be flexible and customizable, allowing you to adapt the content style, visual elements, and posting frequency to match your professional brand and audience preferences. For any questions or support, please contact: info@incrementors.com or fill out this form: https://www.incrementors.com/contact-us/
by Kev
Generate ready-to-publish short-form videos from text prompts using AI An example output is available in Google Drive. Transform simple text concepts into professional short-form videos complete with AI-generated visuals, narrator voice, background music, and dynamic text overlays - all automatically generated and ready for Instagram, TikTok, or YouTube Shorts. This workflow demonstrates a cost-effective approach to video automation by combining AI-generated images with audio composition instead of expensive AI video generation. Processing takes 1-2 minutes and outputs professional 9:16 vertical videos optimized for social platforms. The template serves as both a showcase and building block for larger automation systems, with sticky notes providing clear guidance for customization and extension. Who's it for Content creators, social media managers, and marketers who need consistent, high-quality video content without manual production work. Perfect for motivational content, storytelling videos, educational snippets, and brand campaigns. How it works The workflow uses a form trigger to collect video theme, setting, and style preferences. ChatGPT generates cohesive scripts and image prompts, while Google Gemini creates themed background images and OpenAI TTS produces narrator audio. Background music is sourced from Openverse for CC-licensed tracks. All assets are uploaded to JsonCut API which composes the final video with synchronized overlays, transitions, and professional audio mixing. Results are stored in NocoDB for management. How to set up JsonCut API: Sign up at jsoncut.com and create an API key at app.jsoncut.com. 
Configure HTTP Header Auth credential in n8n with header name x-api-key OpenAI API: Set up credentials for script generation and text-to-speech Google Gemini API: Configure access for Imagen 4.0 image generation NocoDB (Optional): Set up instance for video storage and configure database credentials Requirements JsonCut free account with API key OpenAI API access for GPT and TTS Google Gemini API for image generation NocoDB (optional) for result storage How to customize the workflow This template is designed as a foundation for larger automation systems. The modular structure allows easy modification of AI prompts for different content niches (business, wellness, education), replacement of the form trigger with RSS feeds or database triggers for automated content generation, integration with social media APIs for direct publishing, and customization of visual branding through JsonCut configuration. The workflow can be extended for bulk processing, A/B testing multiple variations, or integration with existing content management systems. Sticky notes throughout the workflow provide detailed guidance for common customizations and scaling options.
by Robin Geuens
Overview Get a weekly report on website traffic driven by large language models (LLMs) such as ChatGPT, Perplexity, and Gemini. This workflow helps you track how these tools bring visitors to your site. A weekly snapshot can guide better content and marketing decisions. How it works The trigger runs every Monday. Pull the number of sessions on your website by source/medium from Google Analytics. The Code node uses the following regex to filter referral traffic from AI providers like ChatGPT, Perplexity, and Gemini: /^.*openai.*|.*copilot.*|.*chatgpt.*|.*gemini.*|.*gpt.*|.*neeva.*|.*writesonic.*|.*nimble.*|.*outrider.*|.*perplexity.*|.*google.bard.*|.*bard.google.*|.*bard.*|.*edgeservices.*|.*astastic.*|.*copy.ai.*|.*bnngpt.*|.*gemini.google.*$/i; Combine the filtered sessions into one list so they can be processed by an LLM. Generate a short report using the filtered data. Email the report to yourself. Setup Get or connect your OpenAI API key and set up your OpenAI credentials in n8n. Enable Google Analytics and Gmail API access in the Google Cloud Console. (Read more here). Set up your Google Analytics and Gmail credentials in n8n. If you're using the cloud version of n8n, you can log in with your Google account to connect them easily. In the Google Analytics node, add your credentials and select the property for the website you’re working with. Alternatively, you can use your property ID, which can be found in the Google Analytics admin panel under Property > Property Details. The property ID is shown in the top-right corner. Add this to the property field. Under Metrics, select the metric you want to measure. This workflow is configured to use sessions, but you can choose others. Leave the dimension as-is, since we need the source/medium dimension to filter LLMs. (Optional) To expand the list of LLMs being filtered, adjust the regex in the code node. You can do this by copying and pasting one of the existing patterns and modifying it. Example: |.*example.*| The LLM node creates a basic report. 
If you’d like a more detailed version, adjust the system prompt to specify the details or formatting you want. Add your email address to the Gmail node so the report is delivered to your inbox. Requirements OpenAI API key for report generation Google Analytics API enabled in Google Cloud Console Gmail API enabled in Google Cloud Console Customizing this workflow The regex used to filter LLM referral traffic can be expanded to include specific websites. The system prompt in the AI node can be customized to create a more detailed or styled report.
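The filtering step described above can be sketched as follows. This is a minimal illustration, not the template's actual Code node: the row shape and `sourceMedium` field name are assumptions, and the regex is a shortened stand-in for the full pattern in the workflow:

```javascript
// Minimal sketch of filtering Google Analytics rows for LLM referral traffic.
// Field name `sourceMedium` and the row shape are assumptions; the real
// workflow uses a longer regex covering more providers.
const llmSources = /openai|chatgpt|copilot|gemini|bard|perplexity|writesonic|edgeservices/i;

const rows = [
  { sourceMedium: "chatgpt.com / referral", sessions: 42 },
  { sourceMedium: "google / organic", sessions: 900 },
  { sourceMedium: "perplexity.ai / referral", sessions: 17 },
];

const llmTraffic = rows.filter(r => llmSources.test(r.sourceMedium));
console.log(llmTraffic.length); // 2 (the chatgpt and perplexity rows)
```

To track an additional provider, append another alternative to the regex, exactly as the setup instructions describe.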
by Jan Zaiser
Your inbox is overflowing with daily newsletters: Public Affairs, ESG, Legal, Finance, you name it. You want to stay informed, but reading 10 emails every morning? Impossible. What if you could get one single digest summarizing everything that matters, automatically? ❌ No more copy-pasting text into ChatGPT ❌ No more scrolling through endless email threads ✅ Just one smart, structured daily briefing in your inbox Who Is This For Public Affairs Teams: Stay ahead of political and regulatory updates—without drowning in emails. Executives & Analysts: Get daily summaries of key insights from multiple newsletters. Marketing, Legal, or ESG Departments: Repurpose this workflow for your own content sources. How It Works Gmail collects all newsletters from the day (based on sender or label). HTML noise and formatting are stripped automatically. Long texts are split into chunks and logged in Google Sheets. An AI Agent (Gemini or OpenAI) summarizes all content into one clean daily digest. The workflow structures the summary into an HTML email and sends it to your chosen recipients. Setup Guide • You’ll need Gmail and Google Sheets credentials. • Add your own AI Model (e.g., Gemini or OpenAI) with an API key. • Adjust the prompt inside the “Public Affairs Consultant” node to fit your topic (e.g., Legal, Finance, ESG, Marketing). • Customize the email subject and design inside the “Structure HTML-Mail” node. • Optional: Use Memory3 to let the AI learn your preferred tone and style over time. Cost & Runtime Runs once per day. Typical cost: ~$0.10–0.30 per run (depending on model and input length). Average runtime: <2 minutes.
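The "long texts are split into chunks" step above can be sketched as a simple fixed-size splitter. The chunk size and function name are assumptions for illustration, not the template's actual values:

```javascript
// Minimal sketch of the chunking step before logging to Google Sheets.
// The 4000-character chunk size is an assumption, not the template's setting.
function splitIntoChunks(text, size = 4000) {
  const chunks = [];
  for (let i = 0; i < text.length; i += size) {
    chunks.push(text.slice(i, i + size));
  }
  return chunks;
}

console.log(splitIntoChunks("a".repeat(9000)).length); // 3
```

Chunking like this keeps each Sheets row and each AI call within size limits while preserving the full newsletter text across rows.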
by Atta
This workflow automates brand monitoring on X by analyzing both the text and the images in posts. It uses multi-modal AI to score brand relevance, filters out noise, logs important mentions in Airtable, and sends real-time alerts to a Telegram group for high-priority posts. What it does Traditional brand monitoring tools often miss the most authentic user content because they only track text. They can't "see" your logo in a photo or your product featured in a video without a direct keyword mention. This workflow acts as an AI agent that overcomes this blind spot. It finds mentions of your brand on X and then uses Google Gemini's multi-modal capabilities to perform a comprehensive analysis of both the text and any attached images. This allows it to understand the full context of a mention, score its relevance to your brand, and take the appropriate action, creating a powerful "visual intelligence" system. How it works The workflow runs on a schedule to find, analyze, and triage brand mentions. Get New Tweets: The workflow begins by using an Apify actor to scrape X for recent posts based on a defined set of search terms (e.g., Tesla OR $TSLA). It then filters these results to find unique mentions not already processed. Check for Duplicates: It cross-references each found tweet with an Airtable base to ensure it hasn't been analyzed before, preventing duplicate work. Analyze Post Content: For each new, unique post, the workflow performs two parallel analyses using Google Gemini: Analyze the Photos: The AI examines the images in the post to describe the scene, identify logos or products, and determine the visual mood. Analyze the Text: A separate AI call analyzes the text of the post to understand its context and sentiment. Final Relevance Check: A "Head Strategist" AI node receives the outputs from both the visual and text analyses. It synthesizes this information to assign a final brand relevance score from 1 to 10. 
Triage and Action: Based on this score, the workflow automatically triages the post: High Relevance (Score > 7): The post is logged in the Airtable base, and an instant, detailed alert is sent to a Telegram monitoring group. Medium Relevance (Score 4-7): The post is quietly logged in Airtable for later strategic review. Low Relevance (Score < 4): The post is ignored, effectively filtering out noise. Setup Instructions To get this workflow running, you will need to configure your Airtable base and provide credentials for Apify, Google, and Telegram. Required Credentials Apify: You will need an Apify API Token to run the X scraper. Airtable: You will need Airtable API credentials to connect to your base. Google AI: You will need credentials for the Google AI APIs to use the Gemini models. Telegram: You will need a Bot Token and the Chat ID for the channel where you want to receive high-relevance alerts. Step-by-Step Configuration Set up Your Airtable Base: Before configuring the workflow, create a new table in your Airtable base. For the workflow to function correctly, this table must contain fields to store the analysis results. Create fields with the following names: postId, postURL, postText, postDateCreated, authorUsername, authorName, sentiment, relevanceScore, relevanceReasoning, mediaPhotosAnalysis, and status. Once the table is created, have your Base ID and Table ID ready to use in the Config node. Edit the Config Node: The majority of the setup is handled in the first Config node. Click on it and edit the following parameters in the "Expressions" tab: searchTerms: Replace the example with the keywords, hashtags, and accounts you want to monitor. 
The field supports advanced search operators for complex queries. For a full list of available parameters, see the Twitter Advanced Search documentation. airtableBaseId: Paste your Airtable Base ID here. airtableTableId: Paste your Airtable Table ID here. lang: Set the two-letter language code for the posts you want to find (e.g., "en" for English). min_faves: Set the minimum number of "favorites" a post should have to be considered. tweetsToScrape: Define the maximum number of posts the scraper should find in each run. actorId: This is the specific Apify actor for scraping X. You can leave this as is unless you intend to use a different one. Configure the Telegram Node: In the final node, "Send High Relevance Posts to Monitoring Group", you need to manually set the destination for the alerts. Enter the Chat ID for your Telegram group or channel. How to Adapt the Template This workflow is a powerful framework that can be adapted for various monitoring needs. Change the Source:* Replace the *Apify** node with a different trigger or data source. You could monitor Reddit, specific RSS feeds, or a news API for mentions. Customize the AI Logic:* The core of this workflow is in the AI prompts. You can edit the prompts in the *Google Gemini** nodes to change the analysis criteria. For example, you could instruct the AI to check for specific competitor logos, analyze the sentiment of comments, or identify if the post is from an influential account. Modify the Scoring:** Adjust the logic in the "Switch" node to change the thresholds for what constitutes a high, medium, or low-relevance post to better fit your brand's needs. Change the Actions:* Replace the *Telegram** node with a different action. Instead of sending an alert, you could: Create a ticket in a customer support system like Zendesk or Jira. Send a summary email to your marketing team. Add the post to a content curation tool or a social media management platform.
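The triage thresholds described above (and adjustable in the Switch node) can be sketched as a small function. The branch names are assumptions for illustration; the score bands come from the template:

```javascript
// Sketch of the Switch node's triage logic, using the score bands from
// the template. The returned branch names are illustrative assumptions.
function triage(relevanceScore) {
  if (relevanceScore > 7) return "log-and-alert"; // high: Airtable + Telegram alert
  if (relevanceScore >= 4) return "log-only";     // medium: Airtable review queue
  return "ignore";                                // low: filtered out as noise
}

console.log(triage(9)); // "log-and-alert"
```

Raising the high-relevance cutoff (e.g., from 7 to 8) is the quickest way to reduce Telegram alert volume without losing the Airtable audit trail.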
by Incrementors
Description This workflow automates AI Search Engine Optimization (ASEO) tracking for digital marketing agencies. It tests your client's website visibility across four major AI platforms—ChatGPT, Claude, DeepSeek, and Perplexity—using brand-neutral prompts, analyzes ranking position and presence strength on each platform, identifies top competitors, and returns a structured 27-field scorecard with actionable recommendations. Designed as a sub-workflow, it integrates directly into your existing client audit or reporting pipeline. Key Features Brand-neutral prompt generation (no client name used—tests true organic AI discoverability) Simultaneous visibility testing across ChatGPT, Claude, DeepSeek, and Perplexity Presence strength scoring (0–100%) per platform Competitor identification across all four AI platforms Strongest and weakest platform detection AI-generated actionable recommendations for improvement Structured 27-field output ready for Google Sheets or database insertion Error handling on all agent nodes (partial results if any platform fails) Sub-workflow design—integrates cleanly into larger audit pipelines What This Workflow Does Input This workflow is triggered by a parent workflow and receives two parameters: Website**: The client's website URL (e.g., https://example.com) Website Summary**: A plain-text description of what the business does and its core services Processing Stage 1 — Brand-Neutral Prompt Generation GPT-4.1-mini generates a realistic search prompt that potential customers would type into an AI chatbot to find a company like the client. Critically, the prompt does not include the client's brand name—it focuses on their services and industry instead. For example, for a Los Angeles product photography studio, the prompt would be something like "best product photography studio for Amazon listings in Los Angeles" rather than the studio's name. This tests true organic discoverability, not brand recall. 
Stage 2 — Four-Platform Sequential Testing The same generated prompt is submitted sequentially to four AI platforms: ChatGPT via GPT-4o-mini Claude via Claude Sonnet 3.7 DeepSeek Perplexity Each platform agent runs independently with error handling enabled. If one platform API is down or throws an error, the workflow continues and returns partial results—it does not fail entirely. Stage 3 — Cross-Platform Analysis DeepSeek analyzes all four platform outputs together and produces a structured JSON report covering each platform's ranking (Yes/No), position (1–10 or null), presence strength percentage, key mentions, and top competitors. It also generates an overall summary comparing all platforms. Stage 4 — Data Flattening The nested JSON is flattened into 27 individual fields that can be directly inserted into a Google Sheet row, database, or passed back to the parent workflow for reporting. Output The workflow returns 27 structured data fields: Search prompt used (1 field) Per-platform metrics for ChatGPT, Claude, DeepSeek, Perplexity: Ranking (Yes/No), Position, Presence Strength %, Key Mentions, Top Competitors (5 fields × 4 platforms = 20 fields) Overall summary: Total platforms ranking, Average presence strength, Strongest platform, Weakest platform, Main competitors across all platforms, Recommendations (6 fields) Setup Instructions Prerequisites Active n8n instance (self-hosted or n8n Cloud) Parent workflow with Execute Workflow node (this workflow does not run standalone) OpenAI API key (used for prompt generation and ChatGPT testing) Anthropic API key (used for Claude testing) DeepSeek API key (used for DeepSeek testing and final analysis) Perplexity API key (used for Perplexity testing) Estimated setup time: 20–25 minutes Step 1: Understand how this workflow is triggered This is a sub-workflow. It does not have its own schedule trigger. It runs when a parent workflow calls it using n8n's Execute Workflow node. 
Setting up the parent workflow: Open or create your parent workflow (e.g., a client audit scheduler, a Google Sheets loop, or a manual trigger) Add an Execute Workflow node to your parent workflow Inside the Execute Workflow node: Source: Select "Database" Workflow: Search for and select this AI Search Ranking Analyzer workflow Mode: Choose "Run once for all items" or "Run once for each item" depending on your setup Under Fields, add two parameters to pass: Name: Website | Value: your client's website URL expression (e.g., ={{ $json['Website URL'] }}) Name: Website Summary | Value: your client's business description (e.g., ={{ $json['Business Description'] }}) Example parent workflow structure: Schedule Trigger (Weekly / Monthly) → Read Client List from Google Sheets → Loop Over Each Client → Execute Workflow (this AI Search Ranking Analyzer) Pass: Website = {{ $json['Website URL'] }} Pass: Website Summary = {{ $json['Summary'] }} → Append 27 Fields to Reporting Sheet → Send Report Email or Slack Notification Testing the trigger connection: Open this workflow and click on the Receive Website and Summary from Parent node You will see "Waiting for input from parent workflow..." 
Go to your parent workflow and click Execute node on the Execute Workflow node The data will flow into this workflow for testing Both workflows must be set to Active for production use Step 2: Connect OpenAI credentials This workflow uses two OpenAI models: GPT-4.1-mini** — used by Generate Brand-Neutral Search Prompts, Parse Prompt as JSON, and GPT Model for Parser Support GPT-4o-mini** — used by Test Visibility on ChatGPT To connect: In n8n go to Credentials → Add credential → OpenAI API Paste your API key from https://platform.openai.com/api-keys Name it clearly (e.g., "OpenAI Main") Open each of these nodes and select your credential: GPT Model for Prompt Generation → select your OpenAI credential, set model to gpt-4.1-mini GPT Model for Parser Support → select your OpenAI credential, set model to gpt-4.1-mini GPT-4o-mini for ChatGPT Test → select your OpenAI credential, set model to gpt-4o-mini Step 3: Connect Anthropic credentials Used by the Test Visibility on Claude agent via Claude Sonnet 3.7 Model. To connect: Go to Credentials → Add credential → Anthropic API Get API key from https://console.anthropic.com/ Open the Claude Sonnet 3.7 Model node and select your credential Verify the model is set to claude-3-7-sonnet-20250219 Step 4: Connect DeepSeek credentials Used by two nodes: DeepSeek Model for Testing (platform test) and DeepSeek Model for Analysis (final summarizer). To connect: Go to Credentials → Add credential → DeepSeek API Get API key from https://platform.deepseek.com/ Open DeepSeek Model for Testing node → select your credential Open DeepSeek Model for Analysis node → select your credential Step 5: Connect Perplexity credentials Used by the Test Visibility on Perplexity node (Perplexity native node, not an AI agent). 
To connect: Go to Credentials → Add credential → Perplexity API Get API key from https://www.perplexity.ai/settings/api Open the Test Visibility on Perplexity node and select your credential Step 6: Test the complete workflow Temporarily add a Manual Trigger node at the start and connect it to Generate Brand-Neutral Search Prompts (bypass the executeWorkflowTrigger for isolated testing) Set the Manual Trigger to pass test data: { "Website": "https://your-test-site.com", "Website Summary": "A company that provides [your service] in [your city]" } Execute and verify: Generate Brand-Neutral Search Prompts produces a sensible search query Each platform node returns output (or gracefully continues on error) Analyze All Platform Results produces structured JSON Flatten JSON to 27 Data Fields produces all 27 fields correctly Remove the test Manual Trigger once testing is complete Activate this workflow and your parent workflow Workflow Node Breakdown Receive Website and Summary from Parent The entry point of this sub-workflow. Listens for execution from a parent workflow via n8n's Execute Workflow node. Receives two inputs: Website (client URL) and Website Summary (business description text). These values are referenced by subsequent nodes throughout the workflow. Generate Brand-Neutral Search Prompts An AI agent powered by GPT-4.1-mini that creates a realistic search query a potential customer might type into an AI chatbot to find a business like the client—without using the client's brand name. This tests organic discoverability based on services and industry positioning rather than brand recognition. The output is a single focused search prompt. Parse Prompt as JSON A Structured Output Parser that enforces JSON schema {"Prompts": "..."} on the generated prompt. Uses GPT Model for Parser Support as its language model and has autoFix enabled, so malformed outputs are automatically retried and corrected. 
Test Visibility on ChatGPT An AI agent that submits the generated search prompt to ChatGPT (GPT-4o-mini) and records the response. This captures what ChatGPT currently recommends when users search for services like the client's. Test Visibility on Claude An AI agent powered by Claude Sonnet 3.7 (Anthropic) that receives the same prompt and records Claude's recommendations. Has onError: continueRegularOutput so the workflow continues if Claude's API is unavailable. Test Visibility on DeepSeek An AI agent powered by DeepSeek that tests the same prompt on DeepSeek's platform. Also has onError: continueRegularOutput for resilience. Test Visibility on Perplexity Uses n8n's native Perplexity node (not an AI agent) to submit the prompt to Perplexity's search-augmented AI. Perplexity is particularly important because it uses real-time web search, making its recommendations highly relevant for current visibility. Has onError: continueRegularOutput. Analyze All Platform Results A DeepSeek-powered AI agent that receives all four platform outputs simultaneously along with the client website URL and the original search prompt. It analyzes each platform independently—determining whether the client appears (Yes/No), at what position (1–10), how strongly (0–100%), how they are mentioned, and which competitors appear. It also generates an overall summary comparing all platforms and provides specific improvement recommendations. Uses Parse Analysis as Structured JSON as its output parser. Flatten JSON to 27 Data Fields A Set node that extracts values from the nested JSON output of the analyzer into 27 flat fields. This makes the data ready for direct insertion into a Google Sheets row, Airtable record, or database table—or for return to the parent workflow. Output Data Complete A No Operation node marking the successful completion of the workflow. The parent workflow receives all 27 fields as the execution output. 
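The flattening step described above can be sketched as follows. This is a hypothetical illustration of how the 27 fields (1 prompt + 5 fields × 4 platforms + 6 summary fields) fit together; the key names are assumptions, not the template's actual field names:

```javascript
// Hypothetical sketch of the Flatten JSON to 27 Data Fields step.
// Key names are assumptions; the real Set node defines its own field names.
function flattenReport(report) {
  const out = { searchPrompt: report.prompt };        // 1 field
  for (const p of ["chatgpt", "claude", "deepseek", "perplexity"]) {
    const d = report.platforms[p] ?? {};              // 5 fields per platform
    out[`${p}Ranking`] = d.ranking ?? null;           // null if platform errored
    out[`${p}Position`] = d.position ?? null;
    out[`${p}PresenceStrength`] = d.presenceStrength ?? null;
    out[`${p}KeyMentions`] = d.keyMentions ?? null;
    out[`${p}TopCompetitors`] = d.topCompetitors ?? null;
  }
  Object.assign(out, report.overall);                 // 6 summary fields
  return out;                                         // 1 + 20 + 6 = 27 fields
}

const sample = {
  prompt: "best product photography studio in Los Angeles",
  platforms: { chatgpt: { ranking: "Yes", position: 2, presenceStrength: 70 } },
  overall: {
    totalPlatformsRanking: 1, averagePresenceStrength: 17.5,
    strongestPlatform: "ChatGPT", weakestPlatform: "Claude",
    mainCompetitors: "A, B, C", recommendations: "...",
  },
};
console.log(Object.keys(flattenReport(sample)).length); // 27
```

Note how a platform that errored out simply yields null fields rather than breaking the row, which matches the workflow's partial-results behavior.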
Usage Guide Adding clients for analysis In your parent workflow, maintain a Google Sheet with columns:

| Client Name | Website URL | Business Description | Last Checked |
|---|---|---|---|
| Example Corp | https://example.com | A SaaS company that provides... | 2025-01-15 |

Your parent workflow reads each row, passes the Website URL and Business Description to this sub-workflow, and writes the 27 returned fields back into the sheet for tracking. Understanding the output After execution, check the Flatten JSON to 27 Data Fields node output. For each platform you get: Ranking:** Yes (client appears) or No (client not mentioned) Position:** Numeric position in the AI's recommendations (1 being top) Presence Strength:** 0–100% measuring how positively and prominently the client is featured Key Mentions:** How the AI described or mentioned the client Ranking Competitors:** Which competitors the AI recommended instead The Overall Summary tells you: How many of 4 platforms are currently ranking your client The average presence strength across all platforms Which platform is your strongest opportunity Which platform needs the most improvement The 3 main competitors appearing consistently Specific recommendations for improving AI discoverability Tracking over time Run this workflow monthly per client. Append results to a Google Sheet with a date column. Track whether presence strength is improving, whether the client appears on more platforms over time, and whether competitors are losing or gaining ground. Customization Options Change the number of platforms: Remove any platform agent node and update the Analyze All Platform Results prompt to exclude that platform's output reference. Add more platforms: Add new AI agent nodes (e.g., Grok, Gemini) between Test Visibility on Perplexity and Analyze All Platform Results. Update the analyzer prompt to include the new platform's output. 
Generate multiple prompts: Modify Generate Brand-Neutral Search Prompts to produce 3–5 different prompts. Loop through each and aggregate results for more comprehensive testing. Write results directly to Google Sheets: After Flatten JSON to 27 Data Fields, add a Google Sheets Append node in your parent workflow to log each audit automatically. Add email or Slack notifications: After the workflow completes in the parent, add a Send Email or Slack node that formats the key metrics (Overall Ranking, Average Presence Strength, Recommendations) into a readable client report. Adjust presence strength scoring: Modify the Analyze All Platform Results prompt to change how the AI scores presence strength—for example, weighting first-position mentions more heavily. Troubleshooting Parent workflow not triggering this workflow Verify both workflows are toggled to Active In the Execute Workflow node, confirm the correct workflow is selected Check that the Mode is set (not left blank) Test by clicking Execute node directly on the Execute Workflow node in the parent Website and Website Summary parameters not passing In the Execute Workflow node, confirm the field names are exactly Website and Website Summary (case-sensitive, space in second parameter) Check the parent workflow is actually passing values, not empty expressions Use the Receive Website and Summary from Parent node's input panel to verify received data One platform returns empty output The workflow continues even if one platform fails (onError: continueRegularOutput is set) Check the specific platform node for the error message Verify API credentials are valid and have available credits Perplexity free tier has strict rate limits—upgrade plan if hitting limits Structured output parser fails Parse Prompt as JSON has autoFix enabled—it will retry malformed outputs automatically If Parse Analysis as Structured JSON fails, simplify the prompt in Analyze All Platform Results or increase max tokens Check that DeepSeek 
credentials are active (DeepSeek handles the analysis output parsing) Generated prompt includes client brand name The Generate Brand-Neutral Search Prompts agent prompt instructs GPT to avoid brand names If brand names slip through, add to the system prompt: "Never mention any specific company name, brand, or trademark in the generated prompt" All 27 fields not appearing in output Run the workflow with test data and inspect Analyze All Platform Results node output If a platform returned empty output due to an error, its fields will be null Check that Flatten JSON to 27 Data Fields expressions reference the correct node names Use Cases Digital marketing agencies offering ASEO services: Run monthly AI visibility audits for 20–50 clients from one parent workflow. Generate client reports showing AI platform rankings, presence strength trends, and competitor comparisons. Position ASEO as a premium new service. SEO teams expanding beyond Google: Use this alongside traditional Google ranking reports. Show clients their full search visibility picture—covering both Google and the AI chatbots that are increasingly influencing purchase decisions. Competitive intelligence: Run this workflow for your own site and 3–5 competitors simultaneously. Identify which competitors dominate AI recommendations and reverse-engineer their content strategy. Brand monitoring: Track how AI chatbots describe your brand over time. Detect if competitors are gaining ground or if negative associations are appearing in AI responses. New market entry research: Before entering a new market or launching a new service line, test whether your website would appear in AI searches for that service category. Use results to guide content strategy before launch. 
Expected Results Time savings: 45–60 minutes of manual AI testing per client, eliminated per audit cycle Coverage: 4 major AI platforms tested in a single automated run Output quality: Structured, consistent 27-field data format—ready for Google Sheets, dashboards, or PDF reports Scalability: Process 50+ clients per parent workflow run with no additional manual effort Competitive advantage: One of the first systematic approaches to measuring AI Search Engine Optimization (ASEO)—a space with no established tooling yet For any questions, custom development, or workflow integration support: 📧 Email: info@incrementors.com 🌐 Website: https://www.incrementors.com/