by Don Jayamaha Jr
Get real-time MEXC Spot Market data instantly in Telegram! This workflow connects the MEXC REST v3 API with Telegram and optional GPT-4.1-mini formatting, providing users with the latest prices, 24h stats, order book depth, trades, and candlesticks in structured, Telegram-ready messages.

**How It Works**
- A Telegram Trigger node listens for commands.
- User Authentication ensures only authorized Telegram IDs can access the bot.
- A Session ID is generated from chat.id for lightweight memory.
- The MEXC AI Agent coordinates multiple API calls via HTTP nodes (see the sketch after this entry):
  - Ticker (Latest Price) → /api/v3/ticker/price?symbol=BTCUSDT
  - 24h Stats → /api/v3/ticker/24hr?symbol=BTCUSDT
  - Order Book Depth → /api/v3/depth?symbol=BTCUSDT&limit=50
  - Best Bid/Ask Snapshot → /api/v3/ticker/bookTicker?symbol=BTCUSDT
  - Candlesticks (Klines) → /api/v3/klines?symbol=BTCUSDT&interval=15m&limit=200
  - Recent Trades → /api/v3/trades?symbol=BTCUSDT&limit=100
- Utility nodes refine the data:
  - Calculator – spreads, averages, mid-prices.
  - Think – formats raw JSON into human-readable summaries.
  - Simple Memory – saves symbol, sessionId, and context across turns.
- A Message Splitter prevents Telegram messages from exceeding 4000 characters.
- Results are sent back to Telegram as structured, readable reports.

**What You Can Do with This Agent**
- Get latest prices and 24h stats for any spot pair.
- Retrieve order book depth (customizable levels).
- Monitor best bid/ask quotes for spreads.
- View candlestick OHLCV data for multiple timeframes.
- Check recent trades (up to 100).
- Receive clean Telegram reports with no raw JSON.

**Setup Steps**
1. Create a Telegram bot: use @BotFather to create a bot and copy its API token.
2. Configure in n8n: import MEXC AI Agent v1.02.json, update the User Authentication node with your Telegram ID, and add your Telegram API credentials (bot token).
3. Add your OpenAI API key.
4. (Optional) Add a MEXC API key.
5. Deploy and test: activate the workflow in n8n, send a query like BTCUSDT to your bot, and instantly receive structured MEXC Spot Market data in Telegram.

**Output Rules**
- Output is grouped into Price, 24h Stats, Order Book, Candlesticks, and Trades.
- No raw JSON – formatted summaries only.
- Complies with Telegram's 4000-character message limit (auto-split).

**Setup Video Tutorial**
Watch the full setup guide on YouTube.

Unlock real-time MEXC Spot Market insights in Telegram – clean, fast, and API-key free.

**Licensing & Attribution**
© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted.
For support: Don Jayamaha – LinkedIn
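For reference, here is a minimal TypeScript sketch of the kind of calls the workflow's HTTP nodes make against the public MEXC Spot v3 endpoints listed above. It assumes the standard https://api.mexc.com base URL and Node.js 18+ (built-in fetch); no API key is needed for public market data.

```typescript
// Minimal sketch of the HTTP calls the agent's HTTP Request nodes perform.
// Endpoint paths are the public MEXC Spot v3 routes listed above.
const BASE = "https://api.mexc.com";

async function getJson<T>(path: string): Promise<T> {
  const res = await fetch(`${BASE}${path}`);
  if (!res.ok) throw new Error(`MEXC ${path} -> HTTP ${res.status}`);
  return res.json() as Promise<T>;
}

async function snapshot(symbol: string) {
  // Latest price and best bid/ask, as the Ticker and Book Ticker nodes fetch them.
  const price = await getJson<{ symbol: string; price: string }>(
    `/api/v3/ticker/price?symbol=${symbol}`
  );
  const book = await getJson<{ bidPrice: string; askPrice: string }>(
    `/api/v3/ticker/bookTicker?symbol=${symbol}`
  );
  // Spread and mid-price, the kind of figures the Calculator utility node derives.
  const bid = Number(book.bidPrice), ask = Number(book.askPrice);
  return { last: Number(price.price), spread: ask - bid, mid: (ask + bid) / 2 };
}

snapshot("BTCUSDT").then(console.log);
```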
by Don Jayamaha Jr
Instantly access Upbit Spot Market data in Telegram with AI automation. This workflow integrates the Upbit REST API with GPT-4o-mini and Telegram, giving you real-time price data, order books, trades, and candles directly in chat. Perfect for crypto traders, market analysts, and investors who want structured Upbit data at their fingertips, with no manual API calls required.

**How It Works**
- A Telegram bot listens for user queries like upbit KRW-BTC 15m.
- The Upbit AI Agent parses the request and fetches live data from the official Upbit REST API (see the sketch after this entry):
  - Price & 24h stats (/v1/ticker)
  - Order book depth & best bid/ask (/v1/orderbook)
  - Recent trades (/v1/trades/ticks)
  - Dynamic OHLCV candles across all timeframes (/v1/candles/{seconds|minutes|days|weeks|months|years})
- A built-in Calculator tool computes spreads, % change, and midpoints.
- A Think module reshapes raw JSON into simplified, clean fields.
- The agent formats results into concise, structured text and sends them back via Telegram.

**What You Can Do with This Agent**
- Get real-time prices and 24h change for any Upbit trading pair.
- View order book depth and best bid/ask snapshots.
- Fetch multi-timeframe OHLCV candles (from 1s to 1y).
- Track recent trades with price, volume, side, and timestamp.
- Calculate midpoints, spreads, and percentage changes.
- Receive clean, human-readable reports in Telegram, with no JSON parsing needed.

**Set Up Steps**
1. Create a Telegram bot: use @BotFather and save your bot token.
2. Configure the Telegram API and OpenAI in n8n: add your bot token under Telegram credentials, and replace your Telegram ID in the authentication node to restrict access.
3. Import the workflow: load Upbit AI Agent v1.02.json into n8n and ensure the connections to the tools (Ticker, Orderbook, Trades, Klines).
4. Deploy and test. Example query: upbit KRW-BTC 15m returns price, order book, candles, and trades. Example query: upbit USDT-ETH trades 50 returns the 50 latest trades.

**Setup Video Tutorial**
Watch the full setup guide on YouTube.

Unlock clean, structured Upbit Spot Market data instantly, directly in Telegram!

**Licensing & Attribution**
© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted.
For support: Don Jayamaha – LinkedIn
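As a reference for the endpoints above, here is a minimal TypeScript sketch of two of the public Upbit calls (Node.js 18+). The response field names follow Upbit's documented ticker schema; treat them as assumptions to verify against the live API.

```typescript
// Minimal sketch of the public Upbit REST calls behind the agent's tools.
const UPBIT = "https://api.upbit.com";

interface UpbitTicker {
  market: string;
  trade_price: number;        // last traded price
  signed_change_rate: number; // 24h change as a fraction, e.g. 0.012 = +1.2%
}

async function upbitTicker(market: string): Promise<UpbitTicker> {
  const res = await fetch(`${UPBIT}/v1/ticker?markets=${market}`);
  if (!res.ok) throw new Error(`Upbit ticker -> HTTP ${res.status}`);
  const [ticker] = (await res.json()) as UpbitTicker[];
  return ticker;
}

async function upbitCandles(market: string, minutes: number, count: number) {
  // Matches the dynamic /v1/candles/{unit} route the agent selects per timeframe.
  const res = await fetch(
    `${UPBIT}/v1/candles/minutes/${minutes}?market=${market}&count=${count}`
  );
  if (!res.ok) throw new Error(`Upbit candles -> HTTP ${res.status}`);
  return res.json(); // array of OHLCV objects, newest first
}

upbitTicker("KRW-BTC").then((t) =>
  console.log(`${t.market}: ${t.trade_price} (${(t.signed_change_rate * 100).toFixed(2)}% 24h)`)
);
```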
by Yahor Dubrouski
**How it works**
This workflow turns voice or text messages from Telegram into structured tasks in Notion, using AI-powered intent detection and task generation. It supports:
- Task creation
- Task updates (like changing priority, title, or deadline)
- Task analysis (e.g., workload, goal alignment, emotional fatigue)

The assistant uses OpenAI to:
- Detect intent (create, update, or analyze); a sketch of this step follows the setup list below
- Extract or update task fields (title, priority, due date, etc.)
- Auto-format list-style descriptions with bullet points
- Detect relevant tags like health, money, sport, etc.

**Setup steps**
1. Clone the GitHub repo or import the .json into n8n manually.
2. Configure: OpenAI credentials, Telegram bot token, Notion credentials.
3. Use Telegram to send messages like: "Create a task to call mom tomorrow", "Update the grocery task to add milk", "Am I overbooked today?"
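The template's exact prompt isn't shown here, so the following is a hedged TypeScript sketch of how the intent-detection step could be implemented with the OpenAI SDK. The intent labels and field names are assumptions modeled on the description above, not the workflow's actual schema.

```typescript
// A minimal sketch of intent detection, assuming the OpenAI chat.completions
// API with JSON output. The intent/field schema is illustrative.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

type Intent = "create" | "update" | "analyze";

async function detectIntent(
  message: string
): Promise<{ intent: Intent; fields: Record<string, string> }> {
  const res = await client.chat.completions.create({
    model: "gpt-4o-mini",
    response_format: { type: "json_object" },
    messages: [
      {
        role: "system",
        content:
          'Classify the user message as one of: "create", "update", "analyze". ' +
          "Extract task fields (title, priority, due_date, tags) when present. " +
          'Reply as JSON: {"intent": ..., "fields": {...}}',
      },
      { role: "user", content: message },
    ],
  });
  return JSON.parse(res.choices[0].message.content ?? "{}");
}

detectIntent("Create a task to call mom tomorrow").then(console.log);
```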
by MANISH KUMAR
This AI Blog Generator is an advanced n8n-powered automation workflow that leverages Google Gemini and Google Sheets to generate SEO-friendly blog articles for Shopify products. It automates the entire process, from fetching product data to creating structured HTML content, with zero manual effort.

**Key Advantages**
Our AI Blog Generator offers five core advantages that make it the perfect solution for automated content creation:
- **Shopify Product Sync** – Automatically pulls product data (titles, descriptions, images, etc.) via the Shopify API.
- **SEO Blog Generation** – Gemini generates blog titles, meta descriptions, and complete articles using product information.
- **Structured Content Output** – Creates well-formatted HTML with headers and bullet points for seamless Shopify blog integration.
- **Google Sheets Integration** – Tracks blog creation and prevents duplicate publishing using a centralized Google Sheet.
- **Shopify Blog API Integration** – Publishes the generated blog to Shopify with a single API call.

**How It Works**
The workflow follows a systematic 8-step process that ensures quality and efficiency:
1. Manual Trigger – Start the workflow via a test trigger or scheduler.
2. Fetch Products from Shopify – Retrieves all product details, including images and descriptions.
3. Fix Input Format – Organizes and updates the input table using Code and Google Sheets nodes.
4. Filter Duplicates – Ensures no previously used rows are processed again.
5. Limit Control – Processes one row at a time and loops until all blogs are posted.
6. Gemini AI Generation – Creates SEO-friendly blog content in HTML format from product data.
7. HTML Structure Fix – Adjusts content for JSON compatibility by cleaning unsupported HTML tags.
8. Article API Posting – Sends the finalized blog content to Shopify for publishing or drafting (sketched after this entry).

**Setup Steps**
To implement this workflow, you'll need to configure the following n8n nodes:
- **Trigger Node** – Starts the workflow instantly.
- **Shopify Node** – Fetches product details.
- **Google Sheets Node** – Stores input/output data and tracks blog creation status.
- **Code Node** – Formats data as required.
- **Filter Node** – Removes used rows to avoid duplication.
- **Limit Node** – Processes one blog at a time.
- **Agent Node** – Sends the prompt to Gemini and returns parsed, SEO-ready content.
- **HTTP Node** – Posts content to Shopify via the API.

**Credentials Required**
Before running the workflow, ensure you have the following credentials configured:
- **Shopify Access Token** – For fetching products and posting blogs
- **Gemini API Key** – For AI-powered blog generation
- **Google Sheets OAuth** – For logging and tracking workflow data

**Ideal For**
This automation workflow is specifically designed for:
- **Ecommerce teams** automating blogs for hundreds of products
- **Shopify store owners** boosting organic traffic effortlessly
- **Marketing teams** building scalable, AI-driven content workflows

**Bonus Tip**
The workflow is fully modular and highly customizable. You can easily extend it for:
- **Internal linking** between related products
- **Multi-language translation** for global markets
- **Social media sharing** automation
- **Email marketing** integration

All extensions can be implemented within the same n8n flow, making it a comprehensive content automation solution.
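As a reference for step 8, here is a minimal TypeScript sketch of posting an article through Shopify's Admin REST API. The shop domain, blog ID, and API version are placeholders to replace with your own values; check the endpoint against your store's API version.

```typescript
// A minimal sketch of the "Article API Posting" step, assuming the Shopify
// Admin REST endpoint for blog articles.
const SHOP = "your-store.myshopify.com";         // placeholder
const BLOG_ID = "123456789";                     // placeholder
const TOKEN = process.env.SHOPIFY_ACCESS_TOKEN!; // Admin API access token

async function publishArticle(title: string, bodyHtml: string, tags: string[]) {
  const res = await fetch(
    `https://${SHOP}/admin/api/2024-01/blogs/${BLOG_ID}/articles.json`,
    {
      method: "POST",
      headers: {
        "X-Shopify-Access-Token": TOKEN,
        "Content-Type": "application/json",
      },
      // body_html carries the Gemini-generated, cleaned HTML from step 7;
      // published: false keeps the article as a draft.
      body: JSON.stringify({
        article: { title, body_html: bodyHtml, tags: tags.join(", "), published: false },
      }),
    }
  );
  if (!res.ok) throw new Error(`Shopify article post failed: HTTP ${res.status}`);
  return res.json();
}
```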
by Jose Bossa
**Who's it for**
This workflow is perfect for businesses or individuals who want to automate WhatsApp conversations with an intelligent AI chatbot that can handle text, voice notes, and images. No advanced coding required!

**What it does**
It automatically receives WhatsApp messages through WasenderAPI, intelligently buffers consecutive messages to avoid fragmented responses, processes multimedia content (transcribing audio and analyzing images with AI), and responds naturally using GPT-4o mini with conversation memory, all while protecting your WhatsApp account from being banned.

**How it works**
1. Webhook Trigger – Receives new messages from WasenderAPI
2. Redis Buffer System – Groups consecutive messages intelligently (7-second window; see the sketch after this entry)
3. Content Classifier – Routes messages by type (text, audio, or image)
4. Audio Processing – Decrypts and transcribes voice notes using OpenAI Whisper
5. Image Analysis – Decrypts and analyzes images with GPT-4o Vision
6. AI Agent (GPT-4o mini) – Generates intelligent responses with 10-message memory
7. Anti-Ban Wait – 6-second delay to simulate human typing
8. Message Sender – Delivers the response back to the WhatsApp user

**Requirements**
- WasenderAPI account with a connected WhatsApp number: https://wasenderapi.com/
- Redis database (a free tier works fine)
- OpenAI API key with access to GPT-4o mini and Whisper
- n8n's AI Agent, LangChain, and Redis nodes

**How to set up**
1. Create your WasenderAPI account and connect a WhatsApp number
2. Set up a free Redis database and get the connection credentials
3. Configure your OpenAI API key in n8n credentials
4. Replace the WasenderAPI Bearer token in the "Get the audio", "Get the photo", and "Send Message to User" nodes
5. Change the Manual Trigger to a Webhook and configure it in WasenderAPI
6. Customize the AI Agent prompt to match your business needs
7. Adjust wait times if needed (default: 6 seconds for responses, 7 seconds for the buffer)
8. Save and activate the workflow

**How to customize**
- Modify the AI Agent prompt to change the bot's personality and instructions
- Adjust the buffer wait time (7 seconds) for faster or slower message grouping
- Change the response delay (6 seconds) to suit your use case; 30 seconds is recommended
- Add more content types (documents, videos) by extending the Switch Type node
- Configure the conversation memory window (default: 10 messages)
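Conceptually, the Redis buffer works like the following TypeScript sketch, assuming the node-redis v4 client; this is a reconstruction of the technique described above, not the template's actual Code node. Only the invocation that still sees itself as the newest message after the 7-second window flushes the buffer and responds.

```typescript
// A minimal sketch of the 7-second Redis message buffer. Each incoming
// message is appended to a per-chat list; late arrivals cancel earlier flushes.
import { createClient } from "redis";

const redis = createClient({ url: process.env.REDIS_URL });
await redis.connect();

const WINDOW_MS = 7_000;

async function bufferMessage(chatId: string, text: string): Promise<string | null> {
  const key = `buffer:${chatId}`;
  const myLength = await redis.rPush(key, text); // my position in the buffer
  await new Promise((r) => setTimeout(r, WINDOW_MS));

  const current = await redis.lLen(key);
  if (current !== myLength) return null; // a newer message arrived; let it flush instead

  const parts = await redis.lRange(key, 0, -1);
  await redis.del(key);
  return parts.join("\n"); // grouped text handed to the AI agent
}
```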
by Artem Boiko
Estimate material prices and total cost for grouped BIM/CAD elements using LLM-driven analysis. The workflow accepts an existing XLSX (from your model) or, if one is missing, can trigger a local RvtExporter.exe to generate it. It enriches each element group with quantities, pricing, and confidence, and produces a multi-sheet Excel report plus a polished HTML executive report.

**What it does**
- Reads grouped element data (from XLSX or extracted via RvtExporter.exe).
- Builds enhanced prompts with clear rules (volumes/areas are already aggregated per group); a sketch of this prompt construction follows this entry.
- Calls your selected LLM (OpenAI/Anthropic/etc.) to identify materials, pick the pricing unit, and estimate price per unit and total cost.
- Parses the AI output, adds per-group KPIs (cost %, rank), and aggregates project totals (by material, by category).
- Exports a multi-sheet XLSX and an HTML executive report (charts, KPIs, top groups).

**Prerequisites**
- LLM credentials for your chosen provider (e.g., OpenAI, Anthropic). Enable exactly one chat node and connect credentials.
- A Windows host, only if you want to auto-extract from .rvt/.ifc via RvtExporter.exe. If you already have an XLSX, Windows is not required.
- Optional: Internet access on the LLM side for price lookups (model/tooling dependent).

**How to use**
1. Import this JSON into n8n.
2. Open the Setup node(s) and set:
   - project_file – path to your .rvt/.ifc or to an existing grouped *_rvt.xlsx
   - path_to_converter – C:\\DDC_Converter_Revit\\datadrivenlibs\\RvtExporter.exe (optional)
   - country – used to guide price sources/standards (e.g., Germany)
3. In the canvas, enable one LLM node (e.g., OpenAI or Anthropic) and connect credentials; keep the others disabled.
4. Execute the workflow (Manual Trigger). It will detect or build the XLSX, run the analysis, then write the Excel file and open the HTML report.

**Outputs**
- Excel: Price_Estimation_Report_YYYY-MM-DD.xlsx with sheets: Summary, Detailed Elements, Material Summary, Top 10 Groups
- HTML: executive report with charts (project totals, top materials, top groups)
- Per-group fields include: Material (EU/DE/US), Quantity & Unit, Price per Unit (EUR), Total Cost (EUR), Assumptions, Confidence

**Notes & tips**
- Quantities in the input are already aggregated per group; do not multiply by element count.
- If you prefer XLSX-only extraction, run your converter with a -no-collada flag upstream.
- Keep ASCII-safe paths and ensure write permissions to the output folder.

Categories: Data Extraction · Files & Storage · ETL · CAD/BIM · Cost Estimation
Tags: cad-bim, price-estimation, cost, revit, ifc, xlsx, html-report, llm, materials, qto
Author: DataDrivenConstruction.io (info@datadrivenconstruction.io)

**Consulting and Training**
We work with leading construction, engineering, and consulting agencies and technology firms around the world to help them implement open-data principles, automate CAD/BIM processing, and build robust ETL pipelines. If you would like to test this solution with your own data, or are interested in adapting the workflow to real project tasks, feel free to contact us.
Docs & Issues: Full Readme on GitHub
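To make the prompt rules concrete, here is a hedged TypeScript sketch of building one per-group prompt. The group field names are assumptions based on the grouped-XLSX description above, not the converter's actual column headers.

```typescript
// A minimal sketch of "enhanced prompt" construction for one element group.
interface ElementGroup {
  category: string;       // e.g. "Walls" (assumed field name)
  typeName: string;       // e.g. "Basic Wall: Concrete 200mm"
  elementCount: number;
  totalVolumeM3?: number; // already aggregated per group -- do NOT multiply
  totalAreaM2?: number;
}

function buildPrompt(group: ElementGroup, country: string): string {
  const qty = group.totalVolumeM3 !== undefined
    ? `${group.totalVolumeM3} m3 (total, already aggregated)`
    : `${group.totalAreaM2} m2 (total, already aggregated)`;
  return [
    `You are a construction cost estimator for ${country}.`,
    `Group: ${group.category} / ${group.typeName} (${group.elementCount} elements).`,
    `Aggregated quantity: ${qty}.`,
    `Rules: quantities are totals per group; never multiply by element count.`,
    `Return JSON: {"material": ..., "unit": ..., "price_per_unit_eur": ...,`,
    ` "total_cost_eur": ..., "assumptions": ..., "confidence": 0-1}.`,
  ].join("\n");
}
```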
by Dayong Huang
**How it works**
This template creates a fully automated Twitter content system that discovers trending topics, analyzes why they're trending using AI, and posts intelligent commentary about them. The workflow uses MCP (Model Context Protocol) with the twitter154 MCP server from MCPHub to connect with Twitter APIs, and leverages OpenAI GPT models to generate brand-safe, engaging content about current trends.

**Key Features**
- Smart Trend Discovery: Automatically finds US trending topics with engagement scoring
- AI-Powered Analysis: Uses GPT to explain "why it's trending" in 30-60 words
- Duplicate Prevention: A MySQL database tracks posted trends with 3-day cooldowns (see the sketch after this entry)
- Brand Safety: Filters out NSFW content and low-quality hashtags
- Rate Limiting: Built-in delays to respect API limits
- Powered by twitter154: Uses the robust "Old Bird" MCP server for comprehensive Twitter data access

**Set up steps**
Setup time: ~10 minutes

Prerequisites:
- OpenAI API key for GPT models
- Twitter API access for posting
- MySQL database for trend tracking
- MCP server access: twitter154 from aigeon-ai via MCPHub

Configuration:
1. Set up the MCP integration with the twitter154 server endpoint: https://api.mcphub.com/mcp/aigeon-ai-twitter154
2. Configure credentials for the OpenAI, Twitter, and MySQL connections
3. Set up authentication for the twitter154 MCP server (Header Auth required)
4. Create the MySQL table for the keyword registry (schema provided in the workflow)
5. Test the workflow with manual execution before enabling automation
6. Set a schedule for automatic trend discovery (recommended: every 2-4 hours)

**MCP Server Features Used**
- Search Tweets: Core functionality for trend analysis
- Get Trends Near Location: Discovers trending topics by geographic region
- AI Tools: Leverages sentiment analysis and topic classification capabilities

**Customization Options**
- Modify the trend scoring criteria in the AI agent prompts
- Adjust cooldown periods in the database queries
- Change the target locale from US to other regions (WOEID configuration)
- Customize tweet formatting and content style
- Configure different MCP server endpoints if needed

Perfect for: Social media managers, content creators, and businesses wanting to stay current with trending topics while maintaining consistent, intelligent posting schedules.

Powered by: The twitter154 MCP server ("The Old Bird") provides robust access to Twitter data, including tweets, user information, trends, and AI-powered text analysis tools.
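The cooldown check against the keyword registry might look like this TypeScript sketch using the mysql2 client. The table and column names are assumptions standing in for the schema shipped with the workflow.

```typescript
// A minimal sketch of the 3-day cooldown check, assuming mysql2 and a
// simple registry table (assumed schema shown in the comment below).
import mysql from "mysql2/promise";

// Assumed schema:
//   CREATE TABLE trend_registry (
//     keyword VARCHAR(255) PRIMARY KEY,
//     last_posted_at DATETIME NOT NULL
//   );
async function isOnCooldown(keyword: string): Promise<boolean> {
  const conn = await mysql.createConnection(process.env.MYSQL_URL!);
  try {
    const [rows] = await conn.execute(
      "SELECT 1 FROM trend_registry WHERE keyword = ? AND last_posted_at > NOW() - INTERVAL 3 DAY",
      [keyword]
    );
    return (rows as unknown[]).length > 0; // true -> skip this trend
  } finally {
    await conn.end();
  }
}
```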
by Shinji Watanabe
**Who's it for**
Teams that care about space-weather impact (SRE/infra, satellite ops, aviation, power utilities, researchers), or anyone who wants timely, readable alerts when NASA publishes significant solar events.

**How it works / What it does**
Every 30 minutes a Cron trigger runs, the NASA DONKI node fetches the past 24 hours of space-weather notifications, and a code step de-duplicates, labels event types, and assigns a severity (CRITICAL / HIGH / OTHER); a sketch of that step follows this entry. A Switch then routes items:
- CRITICAL/HIGH: an LLM ("AI Agent") produces a concise Japanese alert, and Slack posts it with the local time and a source link.
- OTHER: an LLM creates a short summary for record-keeping, a small merge step prepares the fields, and Google Sheets appends a new row.

Sticky notes on the canvas explain the schedule, data source, and overall flow.

**How to set up**
1. Add credentials for Slack, Google Sheets, and OpenAI (or a compatible LLM).
2. Open the Slack nodes and select your workspace and target channel.
3. Select your Google Sheet and worksheet for logging.
4. (Optional) Adjust the Cron interval and the NASA lookback window.
5. Test with a manual execution, then activate.

**Requirements**
- Slack bot with permission to post to the chosen channel
- Google account with access to the target Sheet
- OpenAI (or API-compatible) credentials for the LLM nodes
- Internet access to NASA DONKI (no API key required)

**How to customize the workflow**
- Tweak the severity rules inside the Analyze & Prioritize code node.
- Edit the prompt tone/length in each AI Agent node.
- Change the Slack formatting or mention style (@channel vs none).
- Add filters (e.g., alert only on CME/FLR) or extend the logging fields in the merge step.
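The Analyze & Prioritize step might look like the following TypeScript sketch. The field names match NASA DONKI's notification schema, while the type-to-severity mapping is an illustrative assumption you would tune to your own rules.

```typescript
// A minimal sketch of the de-duplication and severity labeling performed by
// the code step, mirroring the CRITICAL / HIGH / OTHER routing.
type Severity = "CRITICAL" | "HIGH" | "OTHER";

interface DonkiNotification {
  messageID: string;
  messageType: string; // e.g. "FLR" (flare), "CME", "GST" (geomagnetic storm)
  messageBody: string;
}

function classify(n: DonkiNotification): Severity {
  // Assumed rule of thumb: geomagnetic storms and X-class flares are critical.
  if (n.messageType === "GST" || /X-class/i.test(n.messageBody)) return "CRITICAL";
  if (n.messageType === "CME" || n.messageType === "FLR") return "HIGH";
  return "OTHER";
}

function dedupe(items: DonkiNotification[]): DonkiNotification[] {
  const seen = new Set<string>();
  return items.filter((n) => {
    if (seen.has(n.messageID)) return false;
    seen.add(n.messageID);
    return true;
  });
}
```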
by Toshiki Hirao
Managing contracts manually is time-consuming and prone to human error, especially when documents need to be shared, tracked, and stored across different tools. This workflow automates the entire process by capturing contract PDF and Word files uploaded to Slack, extracting key information with GPT, and organizing the data into a structured format inside Google Sheets. Essential fields such as client, service provider, contract value, and important dates are automatically parsed and logged, eliminating repetitive manual entry. Once the data is saved, a confirmation message is posted back to Slack so your team can quickly verify that everything has been recorded accurately.

**Who's it for**
This workflow is ideal for operations teams, legal departments, or growing businesses that manage multiple contracts and want to maintain accuracy without spending hours on administration. By integrating Slack, GPT, and Google Sheets, you gain a simple but powerful contract management system that reduces risk, improves visibility, and keeps everyone aligned. Instead of scattered files and manual spreadsheets, you have a single automated pipeline that ensures your contract data is always up to date and accessible.

**How it works**
1. The workflow is triggered when a contract in PDF or Word format is shared in the designated Slack channel.
2. The uploaded file is automatically retrieved for processing.
3. Its content is extracted and converted into plain text. If the file is not in PDF or Word format, an error message is sent.
4. GPT interprets the extracted text and structures the essential fields (e.g., Client, Service Provider, Effective Date, Expiration Date, Signature Date, Contract Value); a sketch of this step follows this entry.
5. The structured contract information is appended as a new row in the contract tracker spreadsheet on Google Sheets.
6. A summary of the saved data is posted back to Slack for quick validation.

**How to set up**
1. Import this workflow into your n8n instance.
2. Authenticate your Slack account and select the target channel for contract submissions.
3. Link your Google account and specify the spreadsheet where the contract data will be stored. In this template, the required columns are Client, Service Provider, Effective Date, Expiration Date, Signature Date, and Contract Value.
4. Adjust the GPT parsing prompt to match the specific fields that your organization requires.
5. Upload a sample contract in PDF or Word format to Slack and verify that the extracted data is correctly recorded in Google Sheets.

**Requirements**
- An active n8n instance in the cloud.
- A Slack account with permission to upload files and send messages.
- A Google Sheets account with edit access to the target spreadsheet.
- A GPT integration (e.g., OpenAI) to enable AI-powered text parsing.

**How to customize the workflow**
You can modify this workflow to fit your organization's unique contract needs. For example, you may update the GPT parsing prompt to capture additional fields, change the target Google Sheets structure, or integrate notifications into other tools. You have full flexibility to expand or simplify the steps so the workflow matches your team's processes and compliance requirements.
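Step 4 could be implemented along these lines: a hedged TypeScript sketch using the OpenAI SDK with JSON output. The six fields mirror the template's required spreadsheet columns; the prompt wording is an assumption, not the template's actual prompt.

```typescript
// A minimal sketch of the GPT parsing step that structures contract text
// into the six tracked fields.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const FIELDS =
  "Client, Service Provider, Effective Date, Expiration Date, Signature Date, Contract Value";

async function parseContract(text: string): Promise<Record<string, string | null>> {
  const res = await client.chat.completions.create({
    model: "gpt-4o-mini", // swap for whichever chat model you configured
    response_format: { type: "json_object" },
    messages: [
      {
        role: "system",
        content: `Extract these fields from the contract text as JSON (use null if absent): ${FIELDS}. Format dates as YYYY-MM-DD.`,
      },
      { role: "user", content: text.slice(0, 100_000) }, // guard against oversized contracts
    ],
  });
  return JSON.parse(res.choices[0].message.content ?? "{}");
}
```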
by rana tamure
This n8n workflow automates the creation of high-quality, SEO-optimized blog posts using AI. It pulls keyword data from Google Sheets, conducts research via Perplexity AI, generates structured content (title, introduction, key takeaways, body, conclusion, and FAQs) with OpenAI and Anthropic models, assembles the post, performs final edits, converts it to HTML, and publishes directly to WordPress. Ideal for content marketers, bloggers, or agencies looking to scale content production while maintaining relevance and engagement.

**Key Features**
- Keyword-Driven Generation: Fetches primary keywords, search intent, and related terms from a Google Sheets spreadsheet to inform content strategy.
- AI Research & Structuring: Uses Perplexity for in-depth topic research and OpenAI/Anthropic for semantic analysis, outlines, and full content drafting.
- Modular Content Creation: Generates sections like introductions, key takeaways, outlines, body, conclusions, and FAQs with tailored prompts for tone, style, and SEO.
- Assembly & Editing: Combines sections into a cohesive Markdown post, adds internal/external links, and applies final refinements for readability and flow.
- Publishing Automation: Converts Markdown to styled HTML and posts drafts to WordPress.
- Customization Points: Easily adjust AI prompts, research depth, or output formats via Code and Set nodes.

**Requirements**
- Credentials: OpenAI API (for GPT models), Perplexity API (for research), Google Sheets OAuth2 (for keyword input), WordPress API (for publishing).
- Setup: Configure your Google Sheet with columns like "keyword", "search intent", "related keyword", etc. Ensure the sheet is shared with your Google account.
- Dependencies: No additional packages needed; the workflow relies on n8n's built-in nodes for AI, HTTP, and data processing.

**How It Works**
1. Trigger & Input: Start manually or on a schedule; pulls keyword data from Google Sheets.
2. Research Phase: Uses Perplexity to gather topic insights and citations from reputable sources.
3. Content Generation: AI nodes create the title, structure, intro, takeaways, outline, body, conclusion, and FAQs based on the research and SEO guidelines.
4. Assembly & Refinement: Merges sections, embeds links, edits for polish, and converts to HTML (sketched after this entry).
5. Output: Publishes as a WordPress draft or outputs the final HTML for manual use.

**Benefits**
- Time Savings: Automates 80-90% of content creation, reducing manual writing from hours to minutes.
- SEO Optimization: Incorporates primary/related keywords naturally, aligns with search intent, and includes semantic structures for better rankings.
- Scalability: Processes multiple keywords in batches; perfect for content calendars or high-volume blogging.
- Quality Assurance: Built-in editing ensures engaging, error-free content with real-world examples and data-backed insights.
- Versatility: Adaptable to any niche (e.g., marketing, tech, finance) by tweaking prompts or sheets.

**Potential Customizations**
- Add more AI models (e.g., via custom nodes) for varied tones.
- Integrate image generation or social sharing for full content pipelines.
- Filter sheets for specific topics or add notifications on completion.
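The assembly and HTML-conversion steps could look like this TypeScript sketch, assuming the marked package for Markdown-to-HTML. The section names are taken from the modular pieces described above; the template itself does the same work inside n8n nodes.

```typescript
// A minimal sketch of merging generated sections into one Markdown post and
// converting it to HTML for the WordPress draft.
import { marked } from "marked";

interface PostSections {
  title: string;
  introduction: string;
  keyTakeaways: string; // Markdown bullet list
  body: string;
  conclusion: string;
  faqs: string;
}

function assemble(s: PostSections): string {
  return [
    `# ${s.title}`,
    s.introduction,
    `## Key Takeaways\n${s.keyTakeaways}`,
    s.body,
    `## Conclusion\n${s.conclusion}`,
    `## FAQs\n${s.faqs}`,
  ].join("\n\n");
}

function toHtml(s: PostSections): string {
  // marked.parse is synchronous by default; the cast covers its union type.
  return marked.parse(assemble(s)) as string;
}
```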
by yusan25c
**How It Works**
This template is an n8n workflow that integrates with Jira to provide automated replies. When a ticket is assigned to a user, the workflow analyzes the ticket content, retrieves relevant knowledge from a vector database, and generates a response. By continuously enriching the knowledge base, the system improves response quality in Jira.

**Prerequisites**
- A Jira account with API access
- A Pinecone account and credentials (API key and environment settings)
- An AI provider credential (e.g., OpenAI API key)

**Setup Instructions**
1. Jira credentials: Create Jira credentials in n8n (API token and email). In the Jira node, select the registered Jira account ID.
2. Vector database setup (Pinecone): Register your Pinecone credentials (API key and environment variables) in n8n. Ensure that your knowledge base is indexed in Pinecone.
3. AI assistant node: Configure the OpenAI (or other LLM) node with your API key. Provide a system prompt that explains how to respond to Jira tickets using the retrieved knowledge.
4. Workflow execution: The workflow runs only via the Scheduled Trigger node at defined intervals. When Jira tickets are assigned, their summary, description, and latest comments are retrieved. These details are passed to the AI assistant, which queries Pinecone and generates a response. The generated response is then posted as a Jira comment.

**Step by Step**
1. Scheduled Trigger: The workflow is executed at regular intervals using the Scheduled Trigger node.
2. Jira Trigger (Issue Assigned): Retrieves the summary, description, and latest comments of assigned tickets.
3. AI Assistant: Sends ticket details to the AI assistant, which searches and summarizes relevant knowledge from Pinecone.
4. Response Generation / Ticket Update: The AI generates a response and automatically posts it as a Jira comment. (Optionally, the workflow can update the ticket status or mention the assignee.) A sketch of this retrieval-and-reply core follows this entry.

**Notes**
- Keep your Pinecone knowledge base updated to improve accuracy.
- You can customize the AI assistant's behavior by adjusting the system prompt.
- Configure the Scheduled Trigger frequency carefully to avoid API rate limits.

**Further Reference**
For a detailed walkthrough (in Japanese), see this article: Automating Jira responses with n8n, AI, and Pinecone (Qiita)
You can find the template file on GitHub here: Template File on GitHub
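The retrieval-and-reply core might look like the following TypeScript sketch, using the official Pinecone client and Jira's REST API v2 comment endpoint (v2 accepts plain-text bodies, while v3 expects Atlassian Document Format). The index name and site URL are placeholders.

```typescript
// A minimal sketch of querying Pinecone for relevant knowledge and posting
// the generated answer back as a Jira comment.
import { Pinecone } from "@pinecone-database/pinecone";

const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
const index = pc.index("jira-knowledge"); // placeholder index name

async function retrieveKnowledge(embedding: number[]): Promise<string[]> {
  const res = await index.query({ vector: embedding, topK: 5, includeMetadata: true });
  return res.matches.map((m) => String(m.metadata?.text ?? ""));
}

async function postJiraComment(issueKey: string, body: string): Promise<void> {
  const auth = Buffer.from(
    `${process.env.JIRA_EMAIL}:${process.env.JIRA_TOKEN}`
  ).toString("base64");
  const res = await fetch(
    `https://your-site.atlassian.net/rest/api/2/issue/${issueKey}/comment`, // placeholder site
    {
      method: "POST",
      headers: { Authorization: `Basic ${auth}`, "Content-Type": "application/json" },
      body: JSON.stringify({ body }),
    }
  );
  if (!res.ok) throw new Error(`Jira comment failed: HTTP ${res.status}`);
}
```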
by Sagar Budhathoki
AI Blog & LinkedIn Content Publisher

**How it works**
1. A daily trigger scans your Notion database for unpublished blog ideas.
2. AI generates complete blog posts plus engaging LinkedIn content using OpenAI (blog posting is not implemented yet).
3. Creates custom images for posts using Replicate's Flux-Schnell AI model (sketched after this entry).
4. Auto-publishes to LinkedIn with the image, or emails a draft for review.
5. Updates Notion with the published content and tracks status.

**Set up steps**
1. Connect accounts: Notion, OpenAI, Replicate, LinkedIn, Gmail.
2. Create 2 Notion databases: Ideas (input) and Articles (output).
3. Update the config node: add your database IDs and email.
4. Test with one idea: run manually first to verify everything works.
5. Enable daily automation: turn on the cron trigger.

Perfect for: Content creators, developers, and marketers who want to transform rough ideas into professional blog posts and LinkedIn content automatically.
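The image step could be called like this TypeScript sketch, assuming Replicate's model-scoped predictions endpoint with the Prefer: wait header for a synchronous result. Verify the endpoint and output shape against Replicate's current docs before relying on it.

```typescript
// A minimal sketch of generating a post image with flux-schnell on Replicate.
async function generateImage(prompt: string): Promise<string | undefined> {
  const res = await fetch(
    "https://api.replicate.com/v1/models/black-forest-labs/flux-schnell/predictions",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.REPLICATE_API_TOKEN}`,
        "Content-Type": "application/json",
        Prefer: "wait", // ask Replicate to block until the prediction finishes
      },
      body: JSON.stringify({ input: { prompt } }),
    }
  );
  if (!res.ok) throw new Error(`Replicate: HTTP ${res.status}`);
  const prediction = await res.json();
  return prediction.output?.[0]; // URL of the generated image (assumed shape)
}
```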