by Romuald Członkowski
Social Media Intelligence Workflow with Bright Data and OpenAI

Get a 360° social media presence report for a person.

**Who's it for**
Business development professionals, recruiters, sales teams, and market researchers who need comprehensive social media intelligence on individuals for lead qualification, due diligence, partnership evaluation, or candidate assessment.

**How it works**
1. Enter the target person's details through the web form (name, company, location).
2. The AI Discovery Agent searches across the selected platforms using name variations.
3. A profile validator verifies discovered profiles with confidence scoring.
4. Platform-specific agents analyze each profile using Bright Data MCP tools.
5. GPT-4 synthesizes all data into a comprehensive intelligence report.
6. The report is automatically generated as a formatted Google Doc with a direct link.

**Requirements**
- Bright Data MCP account with PRO access (get your Bright Data API key here)
- OpenAI API key (or alternative LLM provider)
- Google Drive OAuth connection for report delivery
- n8n self-hosted instance or cloud account

**How to set up**
1. Update Bright Data credentials: find the "Bright Data MCP" node (look for the red warning note), replace YOUR_BRIGHT_DATA_TOKEN_HERE with your actual token, and update UNLOCKER_CODE_HERE with your unlocker code.
2. Update Google Drive settings: find the "Create Empty Google Doc" node and select the target folder there.
3. Configure your LLM credentials (OpenAI or alternative).
4. Test with your own name using the "Basic" search depth.
5. Watch the YouTube tutorial.

**How to customize the workflow**
- **Add platforms**: Extend the Switch node with new cases and create corresponding prompt builders.
- **Modify analysis depth**: Edit the platform-specific prompt builders to focus on different metrics.
- **Change report format**: Update the final LLM Chain prompt to adjust the report structure.
- **Add notifications**: Insert Slack or email nodes after report generation.
- **Adjust confidence thresholds**: Modify the validators to change profile verification requirements.
- **Alternative outputs**: Replace Google Docs with PDF, Excel, or a webhook to your CRM.
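For illustration, here is a minimal sketch of the confidence-scoring idea behind the profile validator, written outside n8n. The field names, weights, and the 0.6 acceptance threshold are assumptions for the example, not the template's actual values.

```python
# Minimal sketch of profile validation: score a discovered profile against the
# target's name, company, and location. All names and weights are illustrative.
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Fuzzy string similarity in the range 0.0-1.0."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()


def score_profile(target: dict, profile: dict) -> float:
    """Weighted confidence that `profile` belongs to `target` (weights assumed)."""
    name_score = similarity(target["name"], profile.get("display_name", ""))
    company_score = similarity(target.get("company", ""), profile.get("company", ""))
    location_score = similarity(target.get("location", ""), profile.get("location", ""))
    return round(0.5 * name_score + 0.3 * company_score + 0.2 * location_score, 2)


target = {"name": "Jane Doe", "company": "Acme Corp", "location": "Berlin"}
candidate = {"display_name": "Jane Doe", "company": "ACME Corporation", "location": "Berlin, Germany"}

confidence = score_profile(target, candidate)
print(f"confidence={confidence}, accepted={confidence >= 0.6}")  # 0.6 threshold is an assumption
```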
by Janak Patel
**Who's it for**
This template is ideal for YouTube video creators who spend a lot of time manually generating SEO assets like descriptions, tags, titles, keywords, and thumbnails. If you're looking to automate your YouTube SEO workflow, this is the perfect solution for you.

**How it works / What it does**
1. Connect a Google Sheet to n8n and pull in the Hindi script (or any language).
2. Use OpenAI to generate SEO content: video description, tags, keywords, titles, thumbnail titles, etc.
3. Use the generated description as input to create a thumbnail image using an image generation API.
4. Store all outputs in the same Google Sheet in separate columns.
5. Optionally, use tools like VidIQ or TubeBuddy to test the SEO strength of the generated titles, tags, and keywords.

💡 Note: This example uses Runway's image generation API, but you can plug in any other image-generation service of your choice.

**Requirements**
- A Google Sheet with clearly named columns
- Hindi, English, or other-language scripts in the sheet
- OpenAI API key
- Runway API key (or any other image generation API)

**How to set up**
- You can set up this workflow in 15 minutes by following the pre-defined steps.
- Replace the manual Google Sheet trigger with a scheduled trigger for daily or timed automation.
- You may also swap Google Sheets for any database or data source of your choice.
- No Google Sheets API required.
- Requires minimal JavaScript or Python knowledge for advanced customizations.
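As a reference for customizing the prompt, here is a minimal sketch of the SEO-generation step done directly against the OpenAI API with the official `openai` Python package. The model name and the JSON keys are illustrative; the template's own prompt may use different field names.

```python
# Sketch of generating SEO assets from a script; keys below mirror the sheet
# columns described above but are assumptions for this example.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

script = "आज हम सीखेंगे कि n8n से YouTube SEO कैसे ऑटोमेट करें ..."  # Hindi script from the sheet

response = client.chat.completions.create(
    model="gpt-4o-mini",
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "You generate YouTube SEO assets. Return JSON with keys: "
         "title, description, tags, keywords, thumbnail_title."},
        {"role": "user", "content": f"Video script:\n{script}"},
    ],
)

seo = json.loads(response.choices[0].message.content)
print(json.dumps(seo, ensure_ascii=False, indent=2))  # each key maps to one sheet column
```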
by Robert Breen
Give business users a chat box; get back valid BigQuery SQL and live query results.

The workflow:
1. Captures a plain-language question from a chat widget or internal portal.
2. Fetches the current table + column schema from your BigQuery dataset (via INFORMATION_SCHEMA).
3. Feeds both the schema and the question to GPT-4o so it can craft a syntactically correct SQL query using only fields that truly exist.
4. Executes the AI-generated SQL in BigQuery and returns the results (a condensed sketch of this schema → SQL → results loop follows this listing).
5. Stores a short-term memory by session, enabling natural follow-up questions.

Perfect for analysts, customer-success teams, or any stakeholder who needs data without writing SQL.

⚙️ Setup Instructions

1. Import the workflow: n8n → Workflows → Import from File (or Paste JSON) → Save.
2. Add credentials:

| Service | Where to create credentials | Node(s) to update |
|---------|----------------------------|-------------------|
| OpenAI | <https://platform.openai.com> → Create API key | OpenAI Chat Model |
| Google BigQuery | Google Cloud Console → IAM & Admin → Service Account JSON key | Google BigQuery (schema + query) |

3. Point the schema fetcher to your dataset. In Google BigQuery1 you'll see:
   SELECT table_name, column_name, data_type FROM n8nautomation-453001.email_leads_schema.INFORMATION_SCHEMA.COLUMNS
   Replace n8nautomation-453001.email_leads_schema with YOUR_PROJECT.YOUR_DATASET. Keep the rest of the query the same—BigQuery's INFORMATION_SCHEMA always surfaces table_name, column_name, and data_type.
4. Update the execution node. Open Google BigQuery (the second BigQuery node) and select your project in Project ID. The SQL Query field is already {{ $json.output.query }}, so it will run whatever the AI returns.
5. (Optional) Embed the chat interface.
6. Test end-to-end: open the embedded chat widget and ask, "How many distinct email leads were created last week?" After a few seconds the workflow will return a table of results—or an error if the schema lacks the requested fields. Ask specific questions about your data.
7. Activate: toggle Active so the chat assistant is available 24/7.

🧩 Customization Ideas
- **Row-limit safeguard**: automatically append LIMIT 1000 to every query.
- **Chart rendering**: send query results to Google Sheets + Looker Studio for instant visuals.
- **Slack bot**: forward both the question and the SQL result to a Slack channel for team visibility.
- **Schema caching**: store the INFORMATION_SCHEMA result for 24 hours to cut BigQuery costs.

Contact
- **Email:** rbreen@ynteractive.com
- **Website:** https://ynteractive.com
- **YouTube:** https://www.youtube.com/@ynteractivetraining
- **LinkedIn:** https://www.linkedin.com/in/robertbreen
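For readers who want to see the whole loop in one place, here is a condensed sketch of the schema → SQL → results flow described above, assuming the `google-cloud-bigquery` and `openai` Python packages and placeholder project/dataset names. The n8n template performs the same steps with dedicated nodes rather than code.

```python
# Condensed sketch: fetch schema, let the LLM write SQL, run it in BigQuery.
from google.cloud import bigquery
from openai import OpenAI

PROJECT, DATASET = "YOUR_PROJECT", "YOUR_DATASET"
bq = bigquery.Client(project=PROJECT)
llm = OpenAI()

# 1. Fetch the live schema so the model only references real columns.
schema_rows = bq.query(
    f"SELECT table_name, column_name, data_type "
    f"FROM `{PROJECT}.{DATASET}.INFORMATION_SCHEMA.COLUMNS`"
).result()
schema_text = "\n".join(f"{r.table_name}.{r.column_name} ({r.data_type})" for r in schema_rows)

# 2. Ask the model for a single SELECT statement.
question = "How many distinct email leads were created last week?"
sql = llm.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Return only a valid BigQuery SQL SELECT statement, "
         "with no markdown fences and no explanation, using only these columns:\n" + schema_text},
        {"role": "user", "content": question},
    ],
).choices[0].message.content.strip()

# 3. Execute the generated SQL and print the rows.
for row in bq.query(sql).result():
    print(dict(row))
```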
by Ranjan Kumar
**Who's it for**
This template is ideal for creators, bloggers, and automation enthusiasts who want to auto-generate blog posts from AI-generated content — without lifting a finger. Whether you're running a tech blog, an AI newsletter, or just want to keep your WordPress site fresh, this workflow does the heavy lifting.

**How it works**
This n8n workflow automatically publishes WordPress posts using trending content from Reddit RSS feeds (like /r/artificial and /r/MachineLearning), enhanced with AI writing and royalty-free images.
1. RSS Feed Trigger: fetches new Reddit posts every minute from multiple AI-related subreddits.
2. AI Blog Writer: uses an LLM (Groq / GPT-4o) to convert Reddit titles + content into a full blog article (title, content, category, tags, image keyword).
3. Image Generator: queries the Pexels API using the keyword provided by the AI to fetch a relevant blog image.
4. Category & Tag Manager: automatically creates or reuses categories and tags in WordPress.
5. WordPress Publisher: posts the article in draft or published form — complete with featured image and metadata.
Everything is dynamically generated — no hardcoded text or API keys!

**How to set up**
Estimated time: 15–20 minutes. You'll need:
- 🧠 Groq or OpenAI API key (for AI article generation)
- 🖼️ Pexels API key (for fetching featured images)
- 📰 WordPress API credentials (with media + post permissions)

Customization via sticky notes:
- Choose your own RSS feeds (or subreddit URLs)
- Modify the AI prompt to match your writing style
- Set post status (draft or publish)
- Add your WordPress API URL and credentials

**Requirements**
- Free n8n account (or self-hosted instance)
- API credentials (Groq/OpenAI, Pexels, WordPress)
- Working WordPress site with REST API access
- Sticky notes explaining setup instructions, the AI prompt format, and the required credential names
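To make the image and publishing steps concrete, here is a rough sketch using the Pexels search API and the WordPress REST API with an application password. The domain, credentials, and the way the image is attached are placeholders; setting a true featured image additionally requires uploading the file through the WordPress media endpoint, which the template's media permissions cover.

```python
# Rough sketch: fetch a royalty-free image for the AI-chosen keyword, then
# create a WordPress draft. All URLs and credentials below are placeholders.
import requests

PEXELS_KEY = "YOUR_PEXELS_API_KEY"
WP_BASE = "https://example.com/wp-json/wp/v2"
WP_AUTH = ("wp_user", "application-password")

article = {
    "title": "Why AI Agents Are Trending This Week",
    "content": "<p>Generated blog body ...</p>",
    "image_keyword": "artificial intelligence",
}

# 1. Search Pexels for a matching photo.
photo = requests.get(
    "https://api.pexels.com/v1/search",
    headers={"Authorization": PEXELS_KEY},
    params={"query": article["image_keyword"], "per_page": 1},
    timeout=30,
).json()["photos"][0]

# 2. Create the post as a draft (the template can also publish directly).
post = requests.post(
    f"{WP_BASE}/posts",
    auth=WP_AUTH,
    json={
        "title": article["title"],
        "content": f'<img src="{photo["src"]["large"]}" alt="">' + article["content"],
        "status": "draft",
    },
    timeout=30,
).json()
print("Created draft:", post.get("id"), post.get("link"))
```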
by Don Jayamaha Jr
Get real-time MEXC Spot Market data instantly in Telegram! This workflow connects the MEXC REST v3 API with Telegram and optional GPT-4.1-mini formatting, providing users with the latest prices, 24h stats, order book depth, trades, and candlesticks in structured, Telegram-ready messages.

🔎 How It Works
1. A Telegram Trigger node listens for commands.
2. User Authentication ensures only authorized Telegram IDs can access the bot.
3. A Session ID is generated from chat.id for lightweight memory.
4. The MEXC AI Agent coordinates multiple API calls via HTTP nodes (a short endpoint sketch follows this listing):
   - Ticker (Latest Price) → /api/v3/ticker/price?symbol=BTCUSDT
   - 24h Stats → /api/v3/ticker/24hr?symbol=BTCUSDT
   - Order Book Depth → /api/v3/depth?symbol=BTCUSDT&limit=50
   - Best Bid/Ask Snapshot → /api/v3/ticker/bookTicker?symbol=BTCUSDT
   - Candlesticks (Klines) → /api/v3/klines?symbol=BTCUSDT&interval=15m&limit=200
   - Recent Trades → /api/v3/trades?symbol=BTCUSDT&limit=100
5. Utility nodes refine the data:
   - Calculator → spreads, averages, mid-prices.
   - Think → formats raw JSON into human-readable summaries.
   - Simple Memory → saves symbol, sessionId, and context across turns.
6. A Message Splitter prevents Telegram messages from exceeding 4000 characters.
7. Results are sent back to Telegram in structured, readable reports.

✅ What You Can Do with This Agent
- Get latest prices & 24h stats for any spot pair.
- Retrieve order book depth (customizable levels).
- Monitor best bid/ask quotes for spreads.
- View candlestick OHLCV data for multiple timeframes.
- Check recent trades (up to 100).
- Receive clean Telegram reports — no raw JSON.

🛠️ Setup Steps
1. Create a Telegram bot: use @BotFather to create a bot and copy its API token.
2. Configure in n8n: import MEXC AI Agent v1.02.json, update the User Authentication node with your Telegram ID, add Telegram API credentials (bot token), add your OpenAI API key, and (optionally) add a MEXC API key.
3. Deploy & test: activate the workflow in n8n, send a query like BTCUSDT to your bot, and instantly receive structured MEXC Spot Market data in Telegram.

📤 Output Rules
- Output grouped into Price, 24h Stats, Order Book, Candlesticks, Trades.
- No raw JSON — formatted summaries only.
- Complies with Telegram's 4000-character message limit (auto-split).

📺 Setup Video Tutorial
Watch the full setup guide on YouTube.

⚡ Unlock real-time MEXC Spot Market insights in Telegram — clean, fast, and API-key free.

🧾 Licensing & Attribution
© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted.
🔗 For support: Don Jayamaha – LinkedIn
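Here is a minimal sketch of two of the endpoints listed above, assuming the public MEXC Spot base URL https://api.mexc.com. These market-data endpoints are public, which is consistent with the "API-key free" note for read-only queries.

```python
# Sketch: fetch the latest price and a shallow order book, then compute the
# spread and mid-price the Calculator tool would derive.
import requests

BASE = "https://api.mexc.com"  # assumed public base URL
symbol = "BTCUSDT"

price = requests.get(f"{BASE}/api/v3/ticker/price", params={"symbol": symbol}, timeout=10).json()
book = requests.get(f"{BASE}/api/v3/depth", params={"symbol": symbol, "limit": 5}, timeout=10).json()

best_bid = float(book["bids"][0][0])
best_ask = float(book["asks"][0][0])

print(f"{symbol} last price : {price['price']}")
print(f"best bid / ask      : {best_bid} / {best_ask}")
print(f"spread              : {best_ask - best_bid:.2f}")
print(f"mid-price           : {(best_bid + best_ask) / 2:.2f}")
```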
by Don Jayamaha Jr
Instantly access Upbit Spot Market data in Telegram with AI automation. This workflow integrates the Upbit REST API with GPT-4o-mini and Telegram, giving you real-time price data, order books, trades, and candles directly in chat. Perfect for crypto traders, market analysts, and investors who want structured Upbit data at their fingertips—no manual API calls required.

⚙️ How It Works
1. A Telegram bot listens for user queries like upbit KRW-BTC 15m.
2. The Upbit AI Agent parses the request and fetches live data from the official Upbit REST API (a short endpoint sketch follows this listing):
   - Price & 24h stats (/v1/ticker)
   - Order book depth & best bid/ask (/v1/orderbook)
   - Recent trades (/v1/trades/ticks)
   - Dynamic OHLCV candles across all timeframes (/v1/candles/{seconds|minutes|days|weeks|months|years})
3. A built-in Calculator tool computes spreads, % change, and midpoints.
4. A Think module reshapes raw JSON into simplified, clean fields.
5. The agent formats results into concise, structured text and sends them back via Telegram.

📊 What You Can Do with This Agent
✅ Get real-time prices and 24h change for any Upbit trading pair.
✅ View order book depth and best bid/ask snapshots.
✅ Fetch multi-timeframe OHLCV candles (from 1s to 1y).
✅ Track recent trades with price, volume, side, and timestamp.
✅ Calculate midpoints, spreads, and percentage changes.
✅ Receive clean, human-readable reports in Telegram—no JSON parsing needed.

🛠 Set Up Steps
1. Create a Telegram bot: use @BotFather and save your bot token.
2. Configure Telegram and OpenAI credentials in n8n: add your bot token under Telegram credentials and replace your Telegram ID in the authentication node to restrict access.
3. Import the workflow: load Upbit AI Agent v1.02.json into n8n and ensure the connections to the tools (Ticker, Orderbook, Trades, Klines).
4. Deploy and test:
   - Example query: upbit KRW-BTC 15m → returns price, order book, candles, and trades.
   - Example query: upbit USDT-ETH trades 50 → returns the 50 latest trades.

📺 Setup Video Tutorial
Watch the full setup guide on YouTube.

⚡ Unlock clean, structured Upbit Spot Market data instantly—directly in Telegram!

🧾 Licensing & Attribution
© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted.
🔗 For support: Don Jayamaha – LinkedIn
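Here is a minimal sketch of the ticker and order-book calls, assuming the public Upbit base URL https://api.upbit.com and its documented quotation response fields (trade_price, signed_change_rate, orderbook_units); these quotation endpoints do not require an API key.

```python
# Sketch: fetch the ticker and order book for one market and compute the
# mid-price the Calculator tool would report.
import requests

BASE = "https://api.upbit.com"  # assumed public base URL
market = "KRW-BTC"

ticker = requests.get(f"{BASE}/v1/ticker", params={"markets": market}, timeout=10).json()[0]
orderbook = requests.get(f"{BASE}/v1/orderbook", params={"markets": market}, timeout=10).json()[0]

best = orderbook["orderbook_units"][0]
mid = (best["bid_price"] + best["ask_price"]) / 2

print(f"{market} last price : {ticker['trade_price']:,}")
print(f"24h change          : {ticker['signed_change_rate'] * 100:.2f}%")
print(f"best bid / ask      : {best['bid_price']:,} / {best['ask_price']:,}")
print(f"mid-price           : {mid:,.0f}")
```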
by Dayong Huang
**How it works**
This template creates a fully automated Twitter content system that discovers trending topics, analyzes why they're trending using AI, and posts intelligent commentary about them. The workflow uses MCP (Model Context Protocol) with the twitter154 MCP server from MCPHub to connect with Twitter APIs, and leverages OpenAI GPT models to generate brand-safe, engaging content about current trends.

Key Features:
- 🔍 **Smart Trend Discovery**: Automatically finds US trending topics with engagement scoring
- 🤖 **AI-Powered Analysis**: Uses GPT to explain "why it's trending" in 30-60 words
- 📊 **Duplicate Prevention**: MySQL database tracks posted trends with 3-day cooldowns
- 🛡️ **Brand Safety**: Filters out NSFW content and low-quality hashtags
- ⚡ **Rate Limiting**: Built-in delays to respect API limits
- 🐦 **Powered by twitter154**: Uses the robust "Old Bird" MCP server for comprehensive Twitter data access

**Set up steps**
Setup time: ~10 minutes

Prerequisites:
- OpenAI API key for GPT models
- Twitter API access for posting
- MySQL database for trend tracking
- **MCP server access**: twitter154 from aigeon-ai via MCPHub

Configuration:
1. Set up the MCP integration with the twitter154 server endpoint: https://api.mcphub.com/mcp/aigeon-ai-twitter154
2. Configure credentials for the OpenAI, Twitter, and MySQL connections.
3. Set up authentication for the twitter154 MCP server (Header Auth required).
4. Create the MySQL table for the keyword registry (schema provided in the workflow).
5. Test the workflow with a manual execution before enabling automation.
6. Set a schedule for automatic trend discovery (recommended: every 2-4 hours).

MCP server features used:
- **Search Tweets**: Core functionality for trend analysis
- **Get Trends Near Location**: Discovers trending topics by geographic region
- **AI Tools**: Leverages sentiment analysis and topic classification capabilities

Customization options:
- Modify trend scoring criteria in the AI agent prompts
- Adjust cooldown periods in database queries
- Change the target locale from US to other regions (WOEID configuration)
- Customize tweet formatting and content style
- Configure different MCP server endpoints if needed

Perfect for: social media managers, content creators, and businesses wanting to stay current with trending topics while maintaining consistent, intelligent posting schedules.

Powered by: the twitter154 MCP server ("The Old Bird"), which provides robust access to Twitter data including tweets, user information, trends, and AI-powered text analysis tools.
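For illustration, here is a hypothetical sketch of the duplicate-prevention step: checking whether a trend was posted within the last 3 days before letting the agent tweet about it. The table and column names are placeholders; use the keyword-registry schema provided in the workflow itself.

```python
# Hypothetical cooldown check against the MySQL keyword registry.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="n8n", password="secret", database="trends"
)
cur = conn.cursor()

trend = "#AIAgents"

# Has this keyword been posted in the last 3 days?
cur.execute(
    "SELECT COUNT(*) FROM keyword_registry "
    "WHERE keyword = %s AND posted_at > NOW() - INTERVAL 3 DAY",
    (trend,),
)
(recently_posted,) = cur.fetchone()

if recently_posted:
    print(f"Skipping {trend}: still inside the 3-day cooldown")
else:
    # ... generate and post the tweet, then record the keyword ...
    cur.execute(
        "INSERT INTO keyword_registry (keyword, posted_at) VALUES (%s, NOW())",
        (trend,),
    )
    conn.commit()
    print(f"Posted and registered {trend}")
```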
by Shinji Watanabe
**Who's it for**
Teams that care about space-weather impact—SRE/infra, satellite ops, aviation, power utilities, researchers—or anyone who wants timely, readable alerts when NASA publishes significant solar events.

**How it works / What it does**
Every 30 minutes a Cron trigger runs, the NASA DONKI node fetches the past 24 hours of space-weather notifications, and a code step de-duplicates, labels event types, and assigns a severity (CRITICAL / HIGH / OTHER). A Switch routes items:
- **CRITICAL/HIGH** → an LLM ("AI Agent") produces a concise Japanese alert → Slack posts it with local time and a source link.
- **OTHER** → an LLM creates a short summary for record-keeping → a small merge step prepares fields → Google Sheets appends a new row.
Sticky notes in the canvas explain the schedule, data source, and overall flow.

**How to set up**
1. Add credentials for Slack, Google Sheets, and OpenAI (or a compatible LLM).
2. Open the Slack nodes and select your workspace + target channel.
3. Select your Google Sheet and worksheet for logging.
4. (Optional) Adjust the Cron interval and the NASA lookback window.
5. Test with a manual execution, then activate.

**Requirements**
- Slack bot with permission to post to the chosen channel
- Google account with access to the target Sheet
- OpenAI (or API-compatible) credentials for the LLM nodes
- Internet access to NASA DONKI (no API key required)

**How to customize the workflow**
- Tweak the severity rules inside the Analyze & Prioritize code node.
- Edit prompt tone/length in each AI Agent node.
- Change Slack formatting or mention style (@channel vs none).
- Add filters (e.g., alert only on CME/FLR) or extend logging fields in the merge step.
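As a rough illustration of what the code step does, here is a sketch that de-duplicates DONKI notifications and maps event types to a severity bucket. The type-to-severity mapping is an assumption for the example; the template's actual rules live in the Analyze & Prioritize node and can be tuned there.

```python
# Illustrative prioritization: drop duplicate notifications and bucket them
# into CRITICAL / HIGH / OTHER based on the DONKI messageType field.
SEVERITY_BY_TYPE = {
    "FLR": "CRITICAL",   # solar flares (mapping assumed for illustration)
    "CME": "CRITICAL",   # coronal mass ejections
    "GST": "HIGH",       # geomagnetic storms
    "SEP": "HIGH",       # solar energetic particles
}


def prioritize(notifications: list[dict]) -> list[dict]:
    seen_ids = set()
    results = []
    for n in notifications:
        if n["messageID"] in seen_ids:        # de-duplicate across the lookback window
            continue
        seen_ids.add(n["messageID"])
        n["severity"] = SEVERITY_BY_TYPE.get(n.get("messageType", ""), "OTHER")
        results.append(n)
    return results


sample = [
    {"messageID": "20250101-AL-001", "messageType": "FLR"},
    {"messageID": "20250101-AL-002", "messageType": "Report"},
]
for item in prioritize(sample):
    print(item["messageID"], "->", item["severity"])
```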
by Toshiki Hirao
Managing contracts manually is time-consuming and prone to human error, especially when documents need to be shared, tracked, and stored across different tools. This workflow automates the entire process by capturing contract PDFs and Word documents uploaded to Slack, extracting key information with GPT, and organizing the data into a structured format inside Google Sheets. Essential fields such as client, service provider, contract value, and important dates are automatically parsed and logged, eliminating repetitive manual entry. Once the data is saved, a confirmation message is posted back to Slack so your team can quickly verify that everything has been recorded accurately.

**Who's it for**
This workflow is ideal for operations teams, legal departments, or growing businesses that manage multiple contracts and want to maintain accuracy without spending hours on administration. By integrating Slack, GPT, and Google Sheets, you gain a simple but powerful contract management system that reduces risk, improves visibility, and keeps everyone aligned. Instead of scattered files and manual spreadsheets, you have a single automated pipeline that ensures your contract data is always up to date and accessible.

**How it works**
1. The workflow is triggered when a contract in PDF or Word format is shared in the designated Slack channel.
2. The uploaded file is automatically retrieved for processing.
3. Its content is extracted and converted into plain text. If the file is not in PDF or Word format, an error message is sent.
4. GPT interprets the extracted text and structures the essential fields (e.g., Client, Service Provider, Effective Date, Expiration Date, Signature Date, Contract Value).
5. The structured contract information is appended as a new row in the contract tracker spreadsheet on Google Sheets.
6. A summary of the saved data is posted back to Slack for quick validation.

**How to set up**
1. Import this workflow into your n8n instance.
2. Authenticate your Slack account and select the target channel for contract submissions.
3. Link your Google account and specify the spreadsheet where the contract data will be stored. In this template, the required columns are Client, Service Provider, Effective Date, Expiration Date, Signature Date, and Contract Value.
4. Adjust the GPT parsing prompt to match the specific fields that your organization requires.
5. Upload a sample contract in PDF or Word format to Slack and verify that the extracted data is correctly recorded in Google Sheets.

**Requirements**
- An active n8n instance in the cloud
- A Slack account with permission to upload files and send messages
- A Google Sheets account with edit access to the target spreadsheet
- A GPT integration (e.g., OpenAI) to enable AI-powered text parsing

**How to customize the workflow**
You can modify this workflow to fit your organization's unique contract needs. For example, you may update the GPT parsing prompt to capture additional fields, change the target Google Sheets structure, or integrate notifications into other tools. You have full flexibility to expand or simplify the steps so the workflow matches your team's processes and compliance requirements.
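For reference, here is a minimal sketch of the GPT parsing step using the `openai` Python package. The prompt mirrors the sheet columns listed above, and `contract_text` stands in for the plain text produced by the extraction step; the exact prompt wording in the template may differ.

```python
# Sketch of structured contract extraction; the resulting dict maps one-to-one
# onto the Google Sheets columns named above.
import json
from openai import OpenAI

client = OpenAI()

contract_text = "...plain text extracted from the uploaded PDF or Word file..."

FIELDS = ["Client", "Service Provider", "Effective Date",
          "Expiration Date", "Signature Date", "Contract Value"]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "Extract contract data. Return JSON with exactly these keys: "
         + ", ".join(FIELDS) + ". Use null for anything missing."},
        {"role": "user", "content": contract_text},
    ],
)

row = json.loads(response.choices[0].message.content)
print(row)
```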
by yu-ya
Automate GitHub pull request reviews and labeling using OpenAI

This workflow automates the first line of code review for your development team. By leveraging OpenAI, it analyzes pull request diffs, assigns descriptive labels based on change size and category, posts summary comments back to GitHub, and keeps your team informed via Slack.

**Who's it for?**
- **DevOps engineers** looking to standardize PR triage.
- **Team leads** who want to provide instant feedback to developers.
- **Open source maintainers** managing high volumes of contributions.
- **Development teams** aiming to reduce manual overhead in code reviews.

**How it works / What it does**
1. Trigger: the workflow starts via a GitHub PR webhook when a pull request is opened or synchronized.
2. Data gathering: it extracts PR metadata and uses the GitHub node and HTTP Request node to fetch the list of changed files and the raw code diff.
3. Analysis: a Code node categorizes the changes (e.g., size labels like size/S or size/L).
4. AI review: the AI Agent (powered by OpenAI) analyzes the code diff to generate a quality score, summary, and specific strengths/concerns.
5. Action: the GitHub node updates the PR with relevant labels, an automated review comment is posted to the PR discussion, and a summary is sent to a Slack channel.
6. Reporting: all review data is logged into Google Sheets for long-term tracking and analytics.

**Requirements**
- **GitHub account:** OAuth credentials with repository access.
- **OpenAI API key:** for the Chat Model (GPT-4o-mini or higher is recommended).
- **Slack workspace:** a bot token to post to the #code-reviews channel.
- **Google Sheets:** a spreadsheet with headers matching the PR metadata.

**How to set up**
1. GitHub webhook: configure your GitHub repository to send "Pull request" events to the Webhook URL provided by this workflow.
2. Credentials: authenticate your GitHub, OpenAI, Slack, and Google Sheets accounts in their respective nodes.
3. Google Sheets: select your target spreadsheet and sheet name in the "Log to Sheets" node.
4. Slack: ensure the Slack bot is invited to the channel specified in the "Notify Slack" node.

**How to customize**
- **AI prompt:** modify the "System Message" in the AI Code Reviewer node to reflect your team's specific coding standards or preferred review tone.
- **Labeling logic:** edit the "Analyze File Changes" node to add custom labels based on file paths (e.g., frontend, documentation).
- **Review logic:** add an If node after the AI analysis to only auto-approve PRs with a quality score higher than 90.
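As an illustration of the size-labelling logic, here is a sketch based on the per-file additions and deletions that GitHub's list-files endpoint returns. The thresholds are assumptions, not the template's exact values.

```python
# Illustrative size labelling from the changed-files list of a pull request.
def size_label(changed_files: list[dict]) -> str:
    total = sum(f.get("additions", 0) + f.get("deletions", 0) for f in changed_files)
    if total <= 10:
        return "size/XS"
    if total <= 50:
        return "size/S"
    if total <= 250:
        return "size/M"
    if total <= 1000:
        return "size/L"
    return "size/XL"


files = [
    {"filename": "src/app.ts", "additions": 42, "deletions": 7},
    {"filename": "README.md", "additions": 3, "deletions": 1},
]
print(size_label(files))  # 53 changed lines -> size/M
```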
by yusan25c
**How It Works**
This template is an n8n workflow that integrates with Jira to provide automated replies. When a ticket is assigned to a user, the workflow analyzes the ticket content, retrieves relevant knowledge from a vector database, and generates a response. By continuously enriching the knowledge base, the system improves response quality in Jira.

**Prerequisites**
- A Jira account with API access
- A Pinecone account and credentials (API key and environment settings)
- An AI provider credential (e.g., OpenAI API key)

**Setup Instructions**
1. Jira credentials: create Jira credentials in n8n (API token and email), then select the registered Jira account ID in the Jira node.
2. Vector database setup (Pinecone): register your Pinecone credentials (API key and environment variables) in n8n and ensure that your knowledge base is indexed in Pinecone.
3. AI assistant node: configure the OpenAI (or other LLM) node with your API key and provide a system prompt that explains how to respond to Jira tickets using retrieved knowledge.
4. Workflow execution: the workflow runs only via the Scheduled Trigger node at defined intervals. When Jira tickets are assigned, their summary, description, and latest comments are retrieved and passed to the AI assistant, which queries Pinecone and generates a response. The generated response is then posted as a Jira comment.

**Step by Step**
1. Scheduled Trigger: the workflow is executed at regular intervals using the Scheduled Trigger node.
2. Jira Trigger (Issue Assigned): retrieves the summary, description, and latest comments of assigned tickets.
3. AI Assistant: sends ticket details to the AI assistant, which searches and summarizes relevant knowledge from Pinecone.
4. Response Generation / Ticket Update: the AI generates a response and automatically posts it as a Jira comment. (Optionally, the workflow can update the ticket status or mention the assignee.)

**Notes**
- Keep your Pinecone knowledge base updated to improve accuracy.
- You can customize the AI assistant's behavior by adjusting the system prompt.
- Configure the Scheduled Trigger frequency carefully to avoid API rate limits.

**Further Reference**
For a detailed walkthrough (in Japanese), see this article:
👉 Automating Jira responses with n8n, AI, and Pinecone (Qiita)
You can find the template file on GitHub here:
👉 Template File on GitHub
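For orientation, here is a compact sketch of the retrieve-and-reply loop, assuming the `pinecone` and `openai` Python packages and Jira's REST API v2 comment endpoint. The index name, ticket key, domain, and embedding model are placeholders; the n8n workflow wires the same steps together with its Pinecone, OpenAI, and Jira nodes.

```python
# Compact sketch: embed the ticket text, retrieve knowledge from Pinecone,
# draft a grounded reply, and post it back to Jira as a comment.
import requests
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()
index = Pinecone(api_key="PINECONE_API_KEY").Index("jira-knowledge-base")  # index name assumed

ticket_key = "SUP-123"
ticket_text = "Summary + description + latest comments of the assigned ticket"

# 1. Embed the ticket text and pull the closest knowledge-base chunks.
vector = openai_client.embeddings.create(
    model="text-embedding-3-small", input=ticket_text
).data[0].embedding
matches = index.query(vector=vector, top_k=3, include_metadata=True).matches
context = "\n\n".join(m.metadata.get("text", "") for m in matches)

# 2. Draft a reply grounded only in the retrieved context.
reply = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Answer the Jira ticket using only this context:\n" + context},
        {"role": "user", "content": ticket_text},
    ],
).choices[0].message.content

# 3. Post the reply back to the ticket as a comment.
requests.post(
    f"https://your-domain.atlassian.net/rest/api/2/issue/{ticket_key}/comment",
    auth=("you@example.com", "JIRA_API_TOKEN"),
    json={"body": reply},
    timeout=30,
).raise_for_status()
```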
by Peter Zendzian
This n8n template demonstrates how to build an intelligent entity research system that automatically discovers, researches, and creates comprehensive profiles for business entities, concepts, and terms.

Use cases are many: try automating glossary creation for technical documentation, building standardized definition databases for compliance teams, researching industry terminology for content creation, or developing training materials with consistent entity explanations!

**Good to know**
- Each entity research typically costs $0.08-$0.34, depending on the complexity and sources required. The workflow includes smart duplicate detection to minimize unnecessary API calls.
- The workflow requires multiple AI services and a vector database, so setup time may be longer than for simpler templates.
- Entity definitions are stored locally in your Qdrant database and can be reused across multiple projects.

**How it works**
- The workflow checks your existing knowledge base first to avoid duplicate research on entities you've already processed.
- If the entity is new, an AI research agent intelligently combines your vector database, Wikipedia, and live web research to gather comprehensive information.
- The system creates structured entity profiles with definitions, categories, examples, common misconceptions, and related entities - perfect for business documentation.
- AI-powered validation ensures all entity profiles are complete, accurate, and suitable for business use before storage.
- Each researched entity gets stored in your Qdrant vector database, creating a growing knowledge base that improves research efficiency over time.
- The workflow includes multiple stages of duplicate prevention to avoid unnecessary processing and API costs.

**How to use**
- The manual trigger node is used as an example, but feel free to replace it with other triggers such as form submissions, content management systems, or automated content pipelines.
- You can research multiple related entities in sequence, and the system will automatically identify connections and relationships between them.
- Provide topic and audience context to get tailored explanations suitable for your specific business needs.

**Requirements**
- OpenAI API account for o4-mini (entity research and validation)
- Qdrant vector database instance (local or cloud)
- Ollama with the nomic-embed-text model for embeddings
- The "Automate Web Research with GPT-4, Claude & Apify for Content Analysis and Insights" workflow (for live web research capabilities)
- Anthropic API account for Claude Sonnet 4 (used by the web research workflow)
- Apify account for web scraping (used by the web research workflow)

**Customizing this workflow**
Entity research automation can be adapted for many specialized domains. Try focusing on specific industries like legal terminology (targeting official legal sources), medical concepts (emphasizing clinical accuracy), or financial terms (prioritizing regulatory definitions). You can also customize the validation criteria to match your organization's specific quality standards.
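For illustration, here is a minimal sketch of the duplicate check that runs before any research, assuming a local Qdrant instance, the `qdrant-client` package, and Ollama's /api/embeddings endpoint with the nomic-embed-text model. The collection name and the 0.92 similarity threshold are assumptions for the example.

```python
# Sketch: look up an entity in Qdrant before spending API calls on research.
import requests
from qdrant_client import QdrantClient

qdrant = QdrantClient(url="http://localhost:6333")
entity = "zero-trust architecture"

# 1. Embed the entity name with the same model used at indexing time.
embedding = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "nomic-embed-text", "prompt": entity},
    timeout=60,
).json()["embedding"]

# 2. Look for an existing profile close enough to count as a duplicate.
hits = qdrant.search(collection_name="entity_profiles", query_vector=embedding, limit=1)

if hits and hits[0].score >= 0.92:          # threshold is an assumption
    print("Already researched:", hits[0].payload.get("name"))
else:
    print("New entity - hand off to the research agent")
```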