by InfraNodus
This template can be used to generate research ideas from PDF scientific papers based on the content gaps found in the text using the InfraNodus GraphRAG knowledge graph representation. Simply upload several PDF files (research papers, corporate or market reports, etc.) and the template will generate a research question, which will then be sent as an AI prompt to the InfraNodus GraphRAG system that will extract the answer from the documents. As a result, you find the gap in a collection of research papers and bridge it in a few seconds. The template is useful for: advancing scientific research generating AI prompts that drive research further finding the right questions to ask to bridge blind spots in a research field avoiding the generic bias of LLM models and focusing on what's important in your particular context Using Content Gaps for Generating Research Questions Knowledge graphs represent any text as a network: the main concepts are the nodes, their co-occurrences are the connections between them. Based on this representation, we build a graph and apply network science metrics to rank the most important nodes (concepts) that serve as the crossroads of meaning and also the main topical clusters that they connect. Naturally, some of the clusters will be disconnected and will have gaps between them. These are the topics (groups of concepts) that exist in this context (the documents you uploaded) but that are not very well connected. Addressing those gaps can help you see which groups of concepts you could connect with your own ideas. This is exactly what InfraNodus does: it builds the structure, finds the gaps, then uses the built-in AI to generate research questions that bridge those gaps. How it works 1) Step 1: First, you upload your PDF files using an online web form, which you can run from n8n or even make publicly available. 2) Steps 2-4: The documents are processed using the Code and PDF to Text nodes to extract plain text from them.
3) Step 5: This text is then sent to the InfraNodus GraphRAG node that creates a knowledge graph, identifies structural gaps in this graph, and then uses built-in AI to generate research questions, which are then used as AI prompts. 4) Step 6: The research question is sent to the InfraNodus GraphRAG system that represents the PDF documents you submitted as a knowledge graph and then uses the generated research question to come up with an answer based on the content you uploaded. 5) Step 7: The ideas are then shown to the user in the same web form. Optionally, you can derive the answers from a different set of papers, so the question is generated from one batch, but the answer is generated from another. If you'd like to sync this workflow to PDF files in a Google Drive folder, you can copy our Google Drive PDF processing workflow for n8n. How to use You need an InfraNodus GraphRAG API account and key to use this workflow. Create an InfraNodus account Get the API key at https://infranodus.com/api-access and create a Bearer authorization key. Add this key into the InfraNodus GraphRAG HTTP node(s) you use in this workflow. You do not need any OpenAI keys for this to work. Optionally, you can change the settings in Step 4 of this workflow and force it to always use the biggest gap it identifies. Requirements An InfraNodus account and API key Note: an OpenAI key is not required. You will have direct access to the InfraNodus AI with the API key. Customizing this workflow You can use this same workflow with a Telegram bot or Slack (to be notified of the summaries and ideas). You can also hook up automated social media content creation workflows at the end of this template, so you can generate posts that are relevant (covering the important topics in your niche) but also novel (because they connect them in a new way).
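The graph representation described above can be sketched in a few lines. This is only an illustration of the concepts-as-nodes, co-occurrences-as-edges idea, not InfraNodus's actual ranking or gap-detection algorithm:

```javascript
// Minimal sketch of the knowledge-graph idea: concepts become nodes,
// co-occurrence within a sliding window becomes weighted edges.
// Illustration only — not InfraNodus's actual implementation.
function buildCooccurrenceGraph(text, windowSize = 4) {
  const words = text.toLowerCase().match(/[a-z]+/g) ?? [];
  const edges = new Map(); // "conceptA|conceptB" -> co-occurrence count
  for (let i = 0; i < words.length; i++) {
    for (let j = i + 1; j < Math.min(i + windowSize, words.length); j++) {
      if (words[i] === words[j]) continue;
      const key = [words[i], words[j]].sort().join("|");
      edges.set(key, (edges.get(key) ?? 0) + 1);
    }
  }
  return edges;
}
```

Network metrics applied to a graph like this are what reveal densely connected topical clusters and the sparse regions (gaps) between them.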
Check out our n8n templates for ideas at https://n8n.io/creators/infranodus/ Also check the full tutorial with a conceptual explanation at https://support.noduslabs.com/hc/en-us/articles/20454382597916-Beat-Your-Competition-Target-Their-Content-Gaps-with-this-n8n-Automation-Workflow Also check out the video introduction to InfraNodus to better understand how knowledge graphs and content gaps work. For support and help with this workflow, please contact us at https://support.noduslabs.com
by Femi Ad
Generate & Schedule Social Media Posts with GPT-4 and Telegram Approval Workflow This comprehensive content automation system features 23 nodes that seamlessly orchestrate AI-powered content creation, validation, and multi-platform publishing through Telegram interaction. It supports posting to major platforms like Twitter, LinkedIn, Facebook, Instagram, and more via the Upload-Post API. Core Components Telegram Integration: Bidirectional messaging with approval workflows and real-time notifications. AI Content Engine: Configurable language models (GPT-4, Claude, etc.) via OpenRouter with structured output parsing. Content Validation: Character count enforcement (240-265), format checking, and quality threshold monitoring. Multi-Platform Publishing: Post on any social media platform with Upload-Post API - better and easier to use than Blotato, with a dedicated n8n community node. Approval System: Preview and approve/reject functionality before content goes live. Web Research: Optional Tavily integration for real-time information gathering. Target Users Content creators seeking consistent social media presence. Digital marketers managing multiple brand accounts. Entrepreneurs wanting automated thought leadership. Agencies needing scalable content solutions. Small businesses without dedicated social media teams. Setup Requirements To get started, you'll need: Telegram Bot: Create via @BotFather and configure webhook. Required APIs: OpenRouter (for AI model access). Upload-Post API (superior alternative to Blotato with community node support). Tavily API (optional for research). n8n Prerequisites: Version 1.7+ with Langchain nodes. Webhook configuration enabled. Proper credential storage setup. Disclaimer: This template uses community-supported nodes, such as the Upload-Post API node. These may require additional setup and could change with n8n updates. Always verify compatibility and test in a safe environment. 
Step-by-Step Setup Guide Install n8n: Ensure you're running n8n version 1.7 or higher. Enable webhook configurations in your settings. Set Up Credentials: In n8n, add credentials for OpenRouter, Upload-Post API, and optionally Tavily. Store them securely. Create Telegram Bot: Go to Telegram, search for @BotFather, and create a new bot. Note the token and set up a webhook pointing to your n8n instance. Import the Workflow: Copy the workflow JSON (available in the template submission) and import it into your n8n dashboard. Configure Nodes: Set your AI model preferences in the OpenRouter node. Link your social media accounts via the Upload-Post API node. Adjust validation settings (e.g., character limits, retry attempts) as needed. Test the Workflow: Trigger a test run via Telegram by sending a content request. Approve or reject the preview, and monitor the output. Schedule or Automate: Use n8n's scheduling features for automated triggers, or run manually for on-demand posts. Usage Instructions Initiate via Telegram: Send a message to your bot with a topic or prompt (e.g., "Create a post about AI automation for entrepreneurs"). AI Generation: The system generates content using your chosen model, with optional web research. Validation Check: Content is automatically validated for length, quality (70% pass threshold), and format. Approval Workflow: Receive a preview in Telegram. Reply with "approve" to post, or "reject" to retry (up to 3 attempts). Publishing: Approved content posts to your selected platforms. Get notifications on success or errors. Customization: Adapt for single posts, 3-6 post threads, or different tones (business, creative, educational, personal, technical). Use scheduling for consistent posting. Workflow Features Universal Platform Support: Post to any social media platform via Upload-Post API. Scheduling Flexibility: Automated triggers or manual execution. Content Types: Single posts or multi-post threads. 
Quality Control: 30% error tolerance with detailed validation reporting. Character Optimization: Enforced 240-265 character range for maximum engagement. Topic Versatility: Adapts tone and style based on content type. Error Handling: Comprehensive validation with helpful user feedback. Performance Specifications: AI retry attempts: 3 for reliability. Validation threshold: 70% pass rate. Format support: Single posts and 3-6 post threads. Platform coverage: Any social media platform through Upload-Post API. Research capability: Optional web search for trending topics. Why Upload-Post API? Community-supported n8n node for easier integration. More reliable and feature-rich than Blotato. Supports all major social platforms. Active development and support. Need help customizing this workflow for your specific use case? As a fellow entrepreneur passionate about automation and business development, I'd be happy to consult. Connect with me on LinkedIn: https://www.linkedin.com/in/femi-adedayo-h44/ or email for support. Let's make your AI automation agency even more efficient!
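As a rough sketch, the validation step described above (the 240-265 character window with up to 3 retries) could look like this in an n8n Code node; the return-field names are illustrative assumptions, not the template's actual ones:

```javascript
// Hedged sketch of the content validation described above: enforce the
// 240-265 character range and allow up to 3 generation attempts.
// Field names (ok, retry, reason) are illustrative assumptions.
function validatePost(text, attempt) {
  const length = text.trim().length;
  const ok = length >= 240 && length <= 265;
  return {
    ok,
    retry: !ok && attempt < 3, // give the AI another try, max 3 attempts
    reason: ok ? "pass" : `length ${length} outside 240-265`,
  };
}
```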
by Ventsislav Minev
UptimeRobot Alerts to Telegram with Visual Verification Automatically sends Telegram notifications with optional screenshots when monitors change status (✅ UP / 🔴 DOWN / ⏸️ PAUSED) Example Message in Telegram: Who Is This For? Teams or individuals needing to: Get alerts when websites/services go down Verify outages with visual screenshots Monitor infrastructure from Telegram What Does This Workflow Solve? 🚨 Missed Alerts: Get immediate notifications in Telegram 🖼️ Visual Verification: Optional screenshot confirmation of outages 📊 Status Tracking: Clear records of when issues began/resolved 🔗 One-Click Access: Direct links to affected monitors ⏱️ Time Savings: No need to check dashboards manually Setup Guide 1. Pre-Requisites UptimeRobot Account**: With at least one monitor configured Gmail Account**: To receive alert notifications Telegram Account**: To receive alerts (mobile/desktop app recommended) (Optional) ScreenshotMachine free/paid account 2. Credentials Setup Make sure your n8n instance is connected with: Gmail Account** (via OAuth2) UptimeRobot API** (via API key) Telegram Bot** (via bot token) (Optional) ScreenshotMachine (via customer key) 3. Configure Your n8n Workflow Nodes 1. Alert Trigger Gmail Trigger**: Configure to watch for emails from alert@uptimerobot.com Set appropriate polling interval (e.g., every 5 minutes) 2. Monitor Configuration Conf Node**: Set your preferences: { "take_screenshot": true, "screenshotmachine_secret": "your-secret-here", "screenshotmachine_device": "desktop", "screenshotmachine_dimension": "1366xfull", "screenshotmachine_format": "png" } 3. Notification Settings Telegram Nodes**: Set your Chat ID (find with @getidsbot) Customize message formatting if needed 4.
Service-Specific Setup UptimeRobot: Go to Dashboard → My Settings → API Settings Create API key with "Monitor Read" permissions Enable email alerts in monitor settings Telegram Bot: Message @BotFather to create new bot Get your Chat ID using @getidsbot Add bot token to n8n credentials ScreenshotMachine (Optional): Sign up at screenshotmachine.com Get Customer Key from account dashboard Set your secret phrase if using hash verification Final Steps Test your workflow by manually triggering a monitor status change Verify Telegram notifications arrive as expected Check screenshot quality if enabled Monitor for a few days to fine-tune alert preferences Happy Monitoring!
by Gerald Denor
Overview This comprehensive n8n workflow automatically transforms trending Google search queries into engaging LinkedIn posts using AI. The system runs autonomously, discovering viral topics, researching content, and publishing professionally formatted posts to grow your social media presence. Workflow Description Automate your entire social media content pipeline - from trend discovery to publication. This workflow monitors Google Trends, selects high-potential topics, creates human-like content using advanced AI, and publishes across multiple social platforms with built-in tracking. Key Features Automated Trend Discovery**: Pulls trending topics from Google Trends API with customizable filters Intelligent Topic Selection**: AI chooses the most relevant trending topic for your niche Multi-AI Content Generation**: Combines Perplexity for research and OpenAI for content curation Human-Like Writing**: Advanced prompts eliminate AI detection markers LinkedIn Optimization**: Proper formatting with Unicode characters, emojis, and engagement hooks Multi-Platform Support**: Ready for LinkedIn, Twitter/X, and Facebook posting Automated Scheduling**: Configurable posting times (default: 6 AM & 6 PM daily) Performance Tracking**: Automatic logging to Google Sheets with timestamps and metrics Error Handling**: Built-in delays and retry mechanisms for API stability Technical Implementation Workflow Architecture Schedule Trigger: Automated execution at specified intervals Google Trends API: Fetches trending search queries with geographical filtering Data Processing: JavaScript code node filters high-volume keywords (30+ search volume) Topic Selection: OpenAI GPT-3.5 evaluates and selects optimal trending topic Content Research: Perplexity AI researches selected topic for current information Content Generation: Advanced prompt engineering creates LinkedIn-optimized posts Content Distribution: Multi-platform posting with platform-specific formatting Analytics Tracking: Google 
Sheets integration for performance monitoring Node Breakdown Schedule Trigger**: Configurable timing for automated execution HTTP Request (Google Trends)**: SerpAPI integration for trend data Set Node**: Structures trending data for processing Code Node**: JavaScript filtering for high-volume keywords OpenAI Node**: Intelligent topic selection based on relevance and trend strength HTTP Request (Perplexity)**: Advanced AI research with anti-detection prompts Wait Node**: Rate limiting and API respect Split Out**: Prepares content for multi-platform distribution LinkedIn Node**: Authenticated posting with community management Google Sheets Node**: Automated tracking and analytics Social Media Nodes**: Twitter/X, LinkedIn and Facebook ready for activation Use Cases Content Creators**: Maintain consistent posting schedules with trending content Marketing Agencies**: Scale content creation across multiple client accounts Business Development**: Build thought leadership with timely industry insights Personal Branding**: Establish authority by commenting on trending topics SEO Professionals**: Create content around high-search-volume keywords Configuration Requirements API Integrations SerpAPI**: Google Trends data access Perplexity AI**: Advanced content research capabilities OpenAI**: Content curation and topic selection LinkedIn Community Management API**: Professional posting access Google Sheets API**: Analytics and tracking Authentication Setup LinkedIn OAuth2 community management credentials Google Sheets OAuth2 integration HTTP header authentication for AI services Customization Options Industry Targeting**: Modify prompts for specific business verticals Posting Schedule**: Adjust timing based on audience activity Content Tone**: Customize voice and style through prompt engineering Platform Selection**: Enable/disable specific social media channels Trend Filtering**: Adjust search volume thresholds and geographic targeting Content Length**: Modify character limits 
for different platforms Advanced Features Anti-AI Detection**: Sophisticated prompts create human-like content Rate Limit Management**: Built-in delays prevent API throttling Error Recovery**: Robust error handling with retry mechanisms Content Deduplication**: Prevents posting duplicate content Engagement Optimization**: LinkedIn-specific formatting for maximum reach Performance Metrics Time Savings**: Eliminates 10+ hours of weekly content creation Consistency**: Maintains regular posting schedule without manual intervention Relevance**: Content always based on current trending topics Engagement**: Optimized formatting increases social media interaction Scalability**: Single workflow manages multiple platform posting Installation Notes Import JSON workflow file into n8n instance Configure all required API credentials Set up Google Sheets tracking document Test workflow execution with manual trigger Enable schedule trigger for automated operation Best Practices Monitor API usage to stay within rate limits Regularly update prompts based on content performance Review and adjust trending topic filters for your niche Maintain backup of workflow configuration Test content output before enabling automation Support & Updates Comprehensive setup documentation included Configuration troubleshooting guide provided Regular workflow updates for API changes Community support through n8n forums Tags social-media content-automation linkedin ai-generation google-trends perplexity openai marketing trend-analysis content-creation Compatibility n8n Version: 1.0+ Node Requirements: Standard n8n installation External Dependencies: API access to listed services Hosting: Compatible with cloud and self-hosted n8n instances
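The keyword-filtering step described above (the Code node that keeps queries with 30+ search volume) can be sketched like this; the item shape (query, search_volume) is an assumption about the SerpAPI payload, not the template's exact field names:

```javascript
// Sketch of the Code-node filter: keep trending queries with a search
// volume of 30 or more, highest first. Field names are assumptions.
function filterTrends(items, minVolume = 30) {
  return items
    .filter((t) => (t.search_volume ?? 0) >= minVolume)
    .sort((a, b) => b.search_volume - a.search_volume)
    .map((t) => t.query);
}
```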
by Airtop
Use Case Turn any web page into a compelling LinkedIn post, complete with an AI-generated image. This automation is ideal for sharing content like blog posts, case studies, or product updates in a polished and engaging format. What This Automation Does Given a page URL and optional user instructions, this automation: Scrapes the content of the webpage Uses AI to write a clear, educational, and LinkedIn-optimized post Generates a brand-aligned visual that matches the content Sends both to Slack for review and approval Handles feedback and revisions via Slack interactions Input: Page URL** β The link to the webpage (required) Instructions** β Optional notes on tone, emphasis, or format Output: LinkedIn post text AI-generated visual prompt and image Slack message with review/approval options How It Works Form Submission: User inputs a web page and optional instructions. Web Scraping: Uses Airtop to extract page content. Post Generation: AI agent writes a post based on the page and instructions. Visual Generation: Another AI model creates an image prompt; this is sent to a sub-workflow for image rendering. Slack Review Flow: Post and image sent to Slack for feedback User can approve, request revisions, or decline Revisions trigger reprocessing steps automatically Final Post Delivery: Approved post and image are sent back to Slack, ready to publish. Setup Requirements Airtop API key OpenAI credentials for post and image prompt generation Slack OAuth integration with a review channel A sub-workflow for branded image generation Next Steps Post Directly**: Add LinkedIn publishing to automate the full content workflow. Template Variations**: Offer post style presets (e.g., technical, story-driven, short-form). CRM Sync**: Save approved posts and stats in Airtable or Notion for team use. Read more about content generation with Airtop
by Zach @BrightWayAI
Who's it for Content creators, researchers, educators, and digital marketers who need to discover high-quality YouTube training videos on specific topics. Perfect for building curated learning resource lists, competitive research, or content inspiration. What it does This workflow automatically searches YouTube using multiple search queries, filters for quality content, scores videos by relevance, and exports the top results to Google Sheets. It processes hundreds of videos and delivers only the most valuable educational content ranked by custom relevance criteria. The workflow searches for videos using 10 different AI automation-related queries (easily customizable), filters out low-quality content like shorts and clickbait, then ranks results based on title keywords, view counts, and engagement metrics. How it works Multi-query search: Searches YouTube with an array of related queries to get comprehensive coverage Content filtering: Removes shorts, spam, and low-quality videos using regex patterns Quality assessment: Filters videos based on view count, likes, and publication date Relevance scoring: Assigns scores based on title keywords and engagement metrics Result ranking: Sorts videos by relevance score and limits to top 50 results Export to Sheets: Delivers clean, organized data to Google Sheets with all metadata Requirements YouTube Data API v3 credentials from Google Cloud Console Google Sheets credentials for n8n workspace A Google Sheets document to receive the results How to set up Enable YouTube Data API v3 in your Google Cloud Console Add YouTube OAuth2 credentials to your n8n workspace Add Google Sheets credentials to your n8n workspace Create a Google Sheet and update the Google Sheets node with your document ID Customize search queries in the "Set Query" node for your topic Adjust filtering criteria in the Filter nodes based on your quality requirements How to customize the workflow Search topics: Modify the query array in the "Set Query" node to 
research any topic: [ "Python tutorial", "JavaScript course", "React beginner guide", // Add your queries here ] Quality thresholds: Adjust minimum views, likes, and date ranges in the "Filter for Quality" node Relevance scoring: Customize keyword weightings in the "Relevance Score" node to match your priorities Result limits: Change the number of final results in the "Limit" node (default: 50) Output format: Modify the "Set Fields" node to include additional YouTube metadata like duration, thumbnails, or category information The workflow is designed to be easily adaptable for any research topic while maintaining high content quality standards.
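The relevance-scoring idea above can be sketched as follows; the weights and field names are illustrative assumptions meant to be tuned in the "Relevance Score" node, not the template's exact code:

```javascript
// Illustrative relevance scorer: reward keyword matches in the title,
// add log-scaled view/like counts, and penalize shorts. Weights and
// field names are assumptions to tune for your own topic.
function scoreVideo(video, keywords) {
  const title = video.title.toLowerCase();
  let score = 0;
  for (const kw of keywords) if (title.includes(kw)) score += 10;
  score += Math.log10((video.viewCount ?? 0) + 1);     // reach
  score += Math.log10((video.likeCount ?? 0) + 1) * 2; // engagement
  if (/#?shorts/i.test(title)) score -= 100;           // drop shorts
  return score;
}
```

Sorting by this score and keeping the top 50 reproduces the ranking-and-limit behavior described in the steps above.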
by InfyOm Technologies
❓ What problem does this workflow solve? If you're using a self-hosted n8n instance, there's no built-in version history or undo for your workflows. If a workflow is accidentally modified or deleted, there's no way to roll back. This backup workflow solves that problem by automatically syncing your workflows to Google Drive, giving you version control and peace of mind. ⚙️ What does this workflow do? ⏱ Runs on a set schedule (e.g., daily or every 12 hours). 📥 Fetches all workflows from your self-hosted n8n instance. 🧠 Detects changes to avoid duplicate backups. 📁 Creates a dedicated folder for each workflow in Google Drive. 💾 Uploads new or updated workflow files in JSON format. 🗂️ Keeps backup history organized by date. 🔁 Allows for easy restore by importing backed-up JSON into n8n. 🔧 Setup Instructions 1. Google Drive Setup Connect your Google Drive account using the Google Drive node in n8n. Choose or create a root folder (e.g., n8n-workflow-backups) where backups will be stored. 2. n8n API Credentials Generate a Personal Access Token from your self-hosted n8n instance: Go to Settings → API in your n8n dashboard. Copy the token and use it in the HTTP Request node headers as: Authorization: Bearer <your_token> 3. Schedule the Workflow Use the Cron node to schedule this workflow to run at your desired frequency (e.g., once a day or every 12 hours). 🔧 How it Works Step-by-Step Flow: Scheduled Trigger The workflow begins on a timed schedule using the Cron node. Fetch All Workflows Uses the n8n API (/workflows) to retrieve a list of all existing workflows. Loop Through Workflows For each workflow: A folder is created in Google Drive using the workflow name. The workflow's last updated timestamp is checked against Google Drive backups. Smart Change Detection If the workflow has changed since the last backup: A new .json file is uploaded to the corresponding folder.
The file is named with the last updated date of the workflow (YYYY-MM-DD-HH-mm-ss.json) to maintain a versioned history. If no change is detected, the workflow is skipped. 📂 Google Drive Folder Organization Backups are neatly organized by workflow and version:
/n8n-workflow-backups/
├── google-drive-backup-KqhdMBHIyAaE7p7v/
│   ├── 2025-07-15-13-03-32.json
│   └── 2025-07-14-03-08-12.json
└── resume-video-avatar-KqhdMBHIyAaE8p8vr/
    └── 2025-07-15-23-05-52.json
Each folder is named after the workflow's name+id and contains timestamped versions. 🔧 Customization Options 🔁 Change Backup Frequency Adjust the Cron node to run backups daily, weekly, or even hourly based on your needs. 📤 Use a Different Storage Provider You can swap out Google Drive for Dropbox, S3, or another cloud provider with minimal changes. 🧪 Add Workflow Filtering Only back up workflows that are active or match specific tags by filtering results from the n8n API. ♻️ How to Restore a Workflow from Backup Go to the Google Drive backup folder for the workflow you want to restore. Download the desired .json file (based on the date). Open your self-hosted n8n instance. Click Import Workflow from the sidebar menu. Upload the JSON file to restore the workflow. > You can choose to overwrite an existing workflow or import it as a new one. 🤔 Who can use this? This template is ideal for: 🧑‍💻 Developers running self-hosted n8n 🏢 Teams managing large workflow libraries 🔄 Anyone needing workflow versioning, rollback, or disaster recovery 💾 Productivity enthusiasts looking for automated backups 📣 Tip Consider enabling version history in Google Drive so you get even more fine-grained backup recovery options on top of what this workflow provides! 🚀 Ready to use? Just plug in your n8n token, connect Google Drive, and schedule your backups. Your workflows are now protected!
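The versioned-filename and change-detection logic described above can be sketched like this; the timestamp format matches the YYYY-MM-DD-HH-mm-ss.json convention, though the template's actual nodes may implement it differently:

```javascript
// Sketch of the versioning logic: name each backup after the workflow's
// updatedAt timestamp (UTC) and skip the upload when that version
// already exists in the Drive folder.
function backupFileName(updatedAt) {
  const d = new Date(updatedAt);
  const pad = (n) => String(n).padStart(2, "0");
  return (
    `${d.getUTCFullYear()}-${pad(d.getUTCMonth() + 1)}-${pad(d.getUTCDate())}` +
    `-${pad(d.getUTCHours())}-${pad(d.getUTCMinutes())}-${pad(d.getUTCSeconds())}.json`
  );
}

function needsBackup(updatedAt, existingFileNames) {
  return !existingFileNames.includes(backupFileName(updatedAt));
}
```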
by Lucas Peyrin
How it works This template is a complete, hands-on tutorial that lets you build and interact with your very first AI Agent. Think of an AI Agent as a standard AI chatbot with superpowers. The agent doesn't just talk; it can use tools to perform actions and find information in real-time. This workflow is designed to show you exactly how that works. The Chat Interface (Chat Trigger): This is your window to the agent. It's a fully styled, public-facing chat window where you can have a conversation. The Brain (AI Agent Node): This is the core of the operation. It takes your message, understands your intent, and intelligently decides which "superpower" (or tool) it needs to use to answer your request. The agent's personality and instructions are defined in its extensive system prompt. The Tools (Tool Nodes): These are the agent's superpowers. We've included a variety of useful and fun tools to showcase its capabilities: Get a random joke. Search Wikipedia for a summary of any topic. Calculate a future date. Generate a secure password. Calculate a monthly loan payment. Fetch the latest articles from the n8n blog. The Memory (Memory Node): This gives the agent a short-term memory, allowing it to remember the last few messages in your conversation for better context. When you send a message, the agent's brain analyzes it, picks the right tool for the job, executes it, and then formulates a helpful response based on the tool's output. Set up steps Setup time: ~3 minutes This template is nearly ready to go out of the box. You just need to provide the AI's "brain." Configure Credentials: This workflow requires an API key for an AI model. Make sure you have credentials set up in your n8n instance for either Google AI (Gemini) or OpenAI. Choose Your AI Brain (LLM): By default, the workflow uses the Google Gemini node. If you have Google AI credentials, you're all set! If you prefer to use OpenAI, simply disable the Gemini node and enable the OpenAI node. 
You only need one active LLM node. Make sure it is connected to the Agent parent node. Explore the Tools: Take a moment to look at the different tool nodes connected to the Your First AI Agent node. This is where the agent gets its abilities! You can add, remove, or modify these to create your own custom agent. Activate and Test! Activate the workflow. Open the public URL for the Example Chat Window node (you can copy it from the node's panel). Start chatting! Try asking it things like: "Tell me a joke." "What is n8n?" "Generate a 16-character password for me." "What are the latest posts on the n8n blog?" "What is the monthly payment for a $300,000 loan at 5% interest over 30 years?"
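The loan-payment tool mentioned in the example questions uses standard amortization math; here is a sketch of that formula (not necessarily the tool node's exact implementation):

```javascript
// Standard amortization formula: M = P * r / (1 - (1 + r)^-n),
// where r is the monthly rate and n the number of monthly payments.
// A sketch of the math behind the loan tool, not its exact code.
function monthlyPayment(principal, annualRate, years) {
  const r = annualRate / 12;         // monthly interest rate
  const n = years * 12;              // total number of payments
  if (r === 0) return principal / n; // zero-interest edge case
  return (principal * r) / (1 - Math.pow(1 + r, -n));
}
// monthlyPayment(300000, 0.05, 30) ≈ 1610.46
```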
by Onur
Description: Create Social Media Content from Telegram with AI This n8n workflow empowers you to effortlessly generate social media content and captivating image prompts, all powered by AI. Simply send a topic request through Telegram (as a voice or text message), and watch as the workflow conducts research, crafts engaging social media posts, and creates detailed image prompts ready for use with your preferred AI art generation tool. What does this workflow do? This workflow streamlines the content creation process by automating research, social media content generation, and image prompt creation, triggered by a simple Telegram message. Who is this for? Social Media Managers:** Quickly generate engaging content and image ideas for various platforms. Content Creators:** Overcome writer's block and discover fresh content ideas with AI assistance. Marketing Teams:** Boost productivity by automating social media content research and drafting. Anyone** looking to leverage AI for efficient and creative social media content creation. Benefits Effortless Content and Image Prompt Generation:** Automate the creation of social media posts and detailed image prompts. AI-Powered Creativity:** Leverage the power of LLMs to generate original content ideas and captivating image prompts. Increased Efficiency:** Save time and resources by automating the research and content creation process. Voice-to-Content:** Use voice messages to request content, making content creation even more accessible. Enhanced Engagement:** Create high-quality, attention-grabbing content that resonates with your audience. How it Works Receive Request: The workflow listens for incoming voice or text messages on Telegram containing your content request. Process Voice (if necessary): If the message is a voice message, it's transcribed into text using OpenAI's Whisper API. AI Takes Over: The AI agent, powered by an OpenAI Chat Model and SerpAPI, conducts online research based on your request. 
Content and Image Prompt Generation: The AI agent generates engaging social media content and a detailed image prompt based on the research. Image Generation (Optional): You can use the generated image prompt with your preferred AI art generation tool (e.g., DALL-E, Stable Diffusion) to create a visual. Output: The workflow provides you with the social media content and the detailed image prompt, ready for you to use or refine. n8n Nodes Used Telegram Trigger Switch Telegram (for fetching voice messages) OpenAI (Whisper API for voice-to-text) Set (for preparing variables) AI Agent (with OpenAI Chat Model and SerpAPI tool) HTTP Request (for optional image generation) Extract from File (for optional image processing) Set (for final output) Prerequisites Active n8n instance Telegram account with a bot OpenAI API key SerpAPI account Hugging Face API key (if you want to generate images within the workflow) Setup Import the workflow JSON into your n8n instance. Configure the Telegram Trigger node with your Telegram bot token. Set up the OpenAI and SerpAPI credentials in the respective nodes. If you want to generate images directly within the workflow, configure the HTTP Request node with your Hugging Face API key. Test the workflow by sending a voice or text message to your Telegram bot with a topic request. This workflow combines the convenience of Telegram with the power of AI to provide a seamless content creation experience. Start generating engaging social media content today!
by Matt Chong
This workflow contains community nodes that are only compatible with the self-hosted version of n8n. Who is this for? If your inbox is full of unread emails, this workflow is for you. Instead of reading through them one by one, let AI do the sorting. It reads your emails and flags only what needs action. What does it solve? This workflow reads your unread Gmail emails and uses AI to decide what's important and what's not. It labels emails that need your attention, identifies receipts, and trashes everything else. No more manual reading. Just an inbox that uses AI to take care of itself. How it works Every hour, the workflow runs automatically. It searches for unread emails in your Gmail inbox. For each email: It extracts the content and sends it to OpenAI. The AI returns one of four labels: Action, Receipt, Informational, or Spam. Based on the label: Emails are marked with the appropriate label, or moved to trash if they are spam. It marks the email as read once processed. How to set up? Connect these services in your n8n credentials: Gmail (OAuth2) OpenAI (API key) Create the Gmail labels: In your Gmail account, create these labels exactly as written: Action, Receipt, and Informational The workflow will apply these labels based on AI classification. How to customize this workflow to your needs Change the AI prompt to detect more types of emails like Meeting or Newsletter. Add more branches to the Switch node to apply custom logic. Change the schedule to fit your workflow. By default, it runs every hour, but you can update this in the Schedule Trigger node.
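The Switch branching described above can be sketched as a simple mapping from the AI's classification label to the Gmail action to take (the function name and return shape are illustrative; label names match the Gmail labels the setup step asks you to create):

```javascript
// Map the AI classifier's label to the Gmail operation the workflow
// performs: apply the matching label, or trash the email if it is spam.
function actionForLabel(label) {
  switch (label) {
    case 'Action':
    case 'Receipt':
    case 'Informational':
      return { operation: 'addLabel', label }; // apply the matching Gmail label
    case 'Spam':
      return { operation: 'trash' };           // delete instead of labelling
    default:
      return { operation: 'skip' };            // unexpected label: leave untouched
  }
}
```

Adding a new branch for a label like Meeting or Newsletter is then just another `case`, plus the matching label in Gmail.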
by Javier Hita
Follow me on LinkedIn for more! Category: Lead Generation, Data Collection, Business Intelligence Tags: lead-generation, google-maps, rapidapi, business-data, contact-extraction, google-sheets, duplicate-prevention, automation Difficulty Level: Intermediate Estimated Setup Time: 15-20 minutes Template Description Overview This powerful n8n workflow automates the extraction of comprehensive business information from Google Maps using keyword-based searches via RapidAPI's Local Business Data service. Perfect for lead generation, market research, and competitive analysis, this template intelligently gathers business data including contact details, social media profiles, and location information while preventing duplicates and optimizing API usage. Key Features **Keyword-Based Google Maps Scraping:** Search for any business type in any location using natural language queries **Contact Information Extraction:** Automatically extracts emails, phone numbers, and social media profiles (LinkedIn, Instagram, Facebook, etc.)
**Smart Duplicate Prevention:** Two-level duplicate detection saves 50-80% on API costs by skipping processed searches and preventing duplicate business entries **Google Sheets Integration:** Seamless data storage with automatic organization and structure **Multi-Location Support:** Process multiple cities, regions, or countries in a single workflow execution **Rate Limiting & Error Handling:** Built-in delays and error handling ensure reliable, uninterrupted execution **Cost Optimization:** Intelligent batching and duplicate prevention minimize API usage and costs **Comprehensive Data Collection:** Gather business names, addresses, ratings, reviews, websites, verification status, and more Prerequisites Required Services & Accounts RapidAPI Account with subscription to "Local Business Data" API Google Account for Google Sheets integration n8n Instance (cloud or self-hosted) Required Credentials **RapidAPI HTTP Header Authentication** for Local Business Data API **Google Sheets OAuth2** for data storage and retrieval Setup Instructions Step 1: RapidAPI Configuration Create RapidAPI Account Sign up at RapidAPI.com Navigate to "Local Business Data" API Subscribe to a plan (Basic plan supports 1000 requests/month) Get API Credentials Copy your X-RapidAPI-Key from the API dashboard Note the host: local-business-data.p.rapidapi.com Configure n8n Credential In n8n: Settings → Credentials → Create New Type: HTTP Header Auth Name: RapidAPI Local Business Data Add headers: X-RapidAPI-Key: YOUR_API_KEY X-RapidAPI-Host: local-business-data.p.rapidapi.com Step 2: Google Sheets Setup Enable Google Sheets API Go to Google Cloud Console Enable Google Sheets API for your project Create OAuth2 credentials Configure n8n Credential In n8n: Settings → Credentials → Create New Type: Google Sheets OAuth2 API Follow OAuth2 setup process Create Google Sheet Structure Create a new Google Sheet with these tabs: keyword_searches sheet: | select | query | lat | lon | country_iso_code |
|--------|-------|-----|-----|------------------|
| X | Restaurants Madrid | 40.4168 | -3.7038 | ES |
| X | Hair Salons Brooklyn | 40.6782 | -73.9442 | US |
| X | Coffee Shops Paris | 48.8566 | 2.3522 | FR |

stores_data sheet: The workflow will automatically create columns for business data including: business_id, name, phone_number, email, website, full_address, rating, review_count, linkedin, instagram, query, lat, lon, and 25+ more fields Step 3: Workflow Configuration Import the Workflow Copy the provided JSON In n8n: Import from JSON Update Placeholder Values Replace YOUR_GOOGLE_SHEET_ID with your actual Google Sheet ID Update credential references to match your setup Configure Search Parameters (Optional) Adjust limit: 1-100 results per query (default: 100) Modify zoom: 10-18 search radius (default: 13) Change language: EN, ES, FR, etc. (default: EN) How It Works Workflow Process Load Search Criteria: Reads queries marked with "X" from keyword_searches sheet Load Existing Data: Retrieves previously processed data for duplicate detection Filter New Searches: Smart merge identifies only new query+location combinations Process Each Location: Sequential processing prevents API overload Configure Parameters: Prepares search parameters from sheet data API Request: Calls RapidAPI to extract business information Parse Data: Structures and cleans all business information Save Results: Stores new leads in stores_data sheet Rate Limiting: 10-second delay between requests Loop: Continues until all new searches are processed Duplicate Prevention Logic Search Level: Compares new queries against existing data using query+latitude combination, skipping already processed searches. Business Level: Each business receives a unique business_id to prevent duplicate entries even across different searches.
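The two-level duplicate prevention described above can be sketched in a few lines of Code-node JavaScript (function and field names follow the sheet columns; the exact node implementation in the template may differ):

```javascript
// Search level: skip any pending search whose query+latitude combination
// already appears in previously saved rows.
function filterNewSearches(pendingSearches, existingRows) {
  const seen = new Set(existingRows.map(r => `${r.query}|${r.lat}`));
  return pendingSearches.filter(s => !seen.has(`${s.query}|${s.lat}`));
}

// Business level: skip any API result whose business_id was already
// stored, even if it came from a different search.
function filterNewBusinesses(apiResults, existingRows) {
  const known = new Set(existingRows.map(r => r.business_id));
  return apiResults.filter(b => !known.has(b.business_id));
}
```

Because both checks run before any HTTP Request is made or any row is written, repeated executions against the same sheet cost no extra API calls.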
Data Extracted Business Information Business name, full address, phone number Website URL, Google My Business rating and review count Business type, price level, verification status Geographic coordinates (latitude/longitude) Detailed location breakdown (street, city, state, country, zip) Contact Details Email addresses (when publicly available) Social media profiles: LinkedIn, Instagram, Facebook, Twitter, YouTube, TikTok, Pinterest Additional phone numbers Direct Google Maps and reviews links Search Metadata Original search query and parameters Extraction timestamp and geographic data API response details for tracking Use Cases Lead Generation Generate targeted prospect lists for B2B sales Build location-specific customer databases Create industry-specific contact lists Develop territory-based sales strategies Market Research Analyze competitor density in target markets Study business distribution
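The API Request step described under How It Works can be sketched as a request builder using the headers from Step 1. Note the `/search` path and the exact query-parameter names (`lng` vs `lon`, etc.) are assumptions here; confirm them against the Local Business Data API's playground on RapidAPI before use:

```javascript
// Build the HTTP request for one keyword search. Host and header names
// come from the credential setup above; the endpoint path and parameter
// names are assumed for illustration.
function buildSearchRequest(apiKey, { query, lat, lon, limit = 100, zoom = 13, language = 'en' }) {
  const params = new URLSearchParams({
    query,
    lat: String(lat),
    lng: String(lon),
    limit: String(limit),   // 1-100 results per query
    zoom: String(zoom),     // 10-18 search radius
    language,
  });
  return {
    url: `https://local-business-data.p.rapidapi.com/search?${params}`,
    headers: {
      'X-RapidAPI-Key': apiKey,
      'X-RapidAPI-Host': 'local-business-data.p.rapidapi.com',
    },
  };
}
```

In n8n this corresponds to an HTTP Request node using the HTTP Header Auth credential, with the query parameters filled from the keyword_searches row.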
by Hybroht
This workflow contains community nodes that are only compatible with the self-hosted version of n8n. JSON Architect - Dynamically Generate JSON Output Formats for Any AI Agent Overview Version: 1.0 The JSON Architect Workflow is designed to instruct AI agents on the required JSON structure for a given context and create the appropriate JSON output format. This workflow ensures that the generated JSON is validated and tested, providing a reliable JSON output format for use in various applications. Features **Dynamic JSON Generation:** Automatically generate the JSON format based on the input requirements. **Validation and Testing:** Validate the generated JSON format and test its functionality, ensuring reliability before output. **Iterative Improvement:** If the generated JSON is invalid or fails testing, the workflow will attempt to regenerate it until successful or until a defined maximum number of rounds is reached. **Structured Output:** The final output is the generated JSON output format, making it easy to integrate with other systems or workflows. Who is this for? This workflow is ideal for developers, data scientists, and businesses that require dynamic JSON structures for the responses of AI agents. It is particularly useful for those involved in procedural generation, data interchange formats, configuration management, and machine learning model input/output. What problem does this solve? The workflow addresses the challenge of generating optimal JSON structures by automating the process of creation, validation, and testing. This approach ensures that the JSON format is appropriate for its intended use, reducing errors and enhancing the overall quality of data interchange.
Use-Case examples: Data Interchange Formats, Procedural Generation, Machine Learning Model Input/Output, Configuration Management What this workflow does The workflow orchestrates a process where AI agents generate, validate, and test JSON output formats based on the provided input. This approach leads to a more refined and functional JSON output parser. Workflow Steps Input & Setup: The initial input is provided, and the workflow is configured with necessary parameters. Round Start: Initiates the round of JSON construction, ensuring the input is as expected. JSON Generation & Validation: Generates and validates the JSON output format according to the input. JSON Test: Verifies whether the generated JSON output format works as intended. Validation or Test Fails: If the JSON fails validation or testing, the process loops back to the Round Start for correction. Final Output: The final output is generated based on successful JSON construction, providing a cohesive response. Expected Input **input:** The input that requires a proper JSON structure. **max_rounds:** The maximum number of rounds before stopping the loop if it fails to produce and test a valid JSON structure. Suggested: 10. **rounds:** The initial number of rounds. Default: 0. Expected Output **input:** The original input used to create the JSON structure. **json_format_name:** A snake_case identifier for the generated JSON format. Useful if you plan to reuse it for multiple AI agents or Workflows. **json_format_usage:** A description of how to use the JSON output format in an input. Meant to be used by AI agents receiving the JSON output format in their output parser. **json_format_valid_reason:** The reason provided by the AI agents explaining why this JSON format works for the input. **json_format_structure:** The JSON format itself, intended for application through the **Advanced JSON Output Parser** custom node.
**json_format_input:** The input after the JSON output format (json_format_structure) has been applied in an AI agent's output parser. Example An example that includes both the input and the final output is provided in a note within the workflow. n8n Setup Used **n8n Version:** 1.100.1 **n8n-nodes-advanced-output-parser:** 1.0.1 **Running n8n via:** Podman 4.3.1 **Operating System:** Linux Requirements to Use/Setup Credentials & Configuration Obtain the necessary LLM API key and permissions to utilize the workflow effectively. This workflow depends on a custom node for dynamically inputting JSON output formats called n8n-nodes-advanced-output-parser. You can find the repository here. Warning: As of 2025-07-09, the custom node's creator has warned that this node is not production-ready. Exercise caution before relying on it in production environments. Notes, Assumptions & Warnings This workflow assumes that users have a basic understanding of n8n and JSON configuration. It also assumes that users have access to the necessary API keys and permissions to utilize the Mistral API or other LLM APIs. Ensure that the input provided to the AI agents is clear and concise to avoid confusion in the JSON generation process. Ambiguous inputs may lead to invalid or irrelevant JSON output formats. About Us This workflow was developed by the Hybroht team of AI enthusiasts and developers dedicated to enhancing the capabilities of AI through collaborative processes. Our goal is to create tools that harness the possibilities of AI technology and more.
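The generate-validate-test round loop from the Workflow Steps can be sketched as follows. Here `generateFormat` and `testFormat` stand in for the AI-agent calls and are hypothetical; only the control flow (retry until valid and tested, bounded by max_rounds) mirrors the workflow:

```javascript
// Round loop: generate a candidate JSON format, validate that it parses,
// test that it behaves as intended, and loop back to Round Start on
// failure until max_rounds is exhausted.
function buildJsonFormat(input, maxRounds, generateFormat, testFormat) {
  for (let rounds = 0; rounds < maxRounds; rounds++) {
    const candidate = generateFormat(input, rounds);
    let parsed;
    try {
      parsed = JSON.parse(candidate); // validation step: must be well-formed JSON
    } catch (e) {
      continue; // invalid JSON: back to Round Start
    }
    if (testFormat(parsed)) {
      return { json_format_structure: parsed, rounds: rounds + 1 };
    }
  }
  return null; // no valid, tested format within max_rounds
}
```

The real workflow returns the richer output described above (json_format_name, json_format_usage, etc.); this sketch only shows why a bounded retry count keeps an uncooperative generation from looping forever.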