by Luka Zivkovic
**Who's it for**
This workflow is designed for developers, entrepreneurs, and startup enthusiasts who want personalized, AI-driven startup idea generation and analysis. It is a good fit for solo developers seeking side-project inspiration, startup accelerators evaluating concepts, or anyone looking to validate business ideas with professional-grade analysis.

**How it works**
The workflow uses a three-stage Claude AI agent pipeline to create comprehensive startup analyses:
1. The first agent generates innovative startup ideas based on your technical skills and preferences.
2. The second agent acts as a venture capitalist, critically analyzing market viability, competition, and execution challenges.
3. The third agent performs sentiment analysis and synthesizes a final recommendation with actionable next steps.

**How to set up**
1. Configure Anthropic API credentials for all three Claude AI model nodes.
2. Set up Gmail OAuth2 for email delivery.
3. Fill out the "My Information" node with your developer profile.
4. Update the recipient email address in the Gmail node.
5. Test with the manual trigger before enabling daily automation.

**Requirements**
- n8n account
- Anthropic API account for Claude AI access
- Gmail account with OAuth2 configured
- Basic understanding of developer skills and market preferences

**How to customize the workflow**
- Modify the AI agent prompts to focus on specific industries or business models.
- Adjust temperature settings for different creativity levels.
- Add database storage to track idea history.
- Configure the form trigger for team-wide idea generation, or integrate with Slack for automated sharing.

Got a good idea? Visit my site https://techpoweredgrowth.com to get help getting to the next level, or reach out to luka.zivkovic@techpoweredgrowth.com
by Arunava
This workflow finds fresh Reddit posts that match your keywords, decides whether they're actually relevant to your brand, writes a short human-style reply using AI, posts it, and logs everything to Baserow.

**💡 Perfect for**
- **Lead gen without spam:** drop helpful replies where your audience hangs out.
- **AI-surface visibility:** get discovered by AI Overviews / SGE via high-quality brand mentions.
- **Customer support in the wild:** answer troubleshooting threads fast.
- **Community presence:** steady, non-salesy contributions in niche subreddits.

**🧠 What it does**
- Searches Reddit for your keyword query on a schedule (e.g., every 30 min)
- Checks Baserow first so you don't reply twice to the same post
- Uses an AI prompt tuned for short, no-fluff, subreddit-friendly comments
- Softly mentions your brand only when it's clearly relevant
- Posts the comment via Reddit's API
- Saves post_id, comment_id, reply, permalink, and status to Baserow
- Processes posts one by one, with an optional short wait to be nice to Reddit

**⚡ Requirements**
- Reddit developer API access
- Baserow account, table, and API token
- AI provider API key (OpenAI / Anthropic / Gemini)

**⚙️ Setup Instructions**
1. **Create the Baserow table.** Fields (user-field names exactly): post_id (unique), permalink, subreddit, title, created_utc, reply (long text), replied (boolean), created_on (datetime).
2. **Add credentials in n8n.**
   - **Reddit OAuth2** (scopes: read, submit, identity) and set a proper **User-Agent** string (Reddit requires it).
   - **LLM:** Google Gemini and/or Anthropic (both can be added; one can serve as a fallback in the AI Agent).
   - **Baserow:** API token.
3. **Set the Schedule Trigger (cron).** Start hourly (or every 2–3 h). Pacing is mainly enforced by the Wait nodes.
4. **Update "Check duplicate row" (HTTP Request).**
   - **URL:** https://api.baserow.io/api/database/rows/table/{TABLE_ID}/?user_field_names=true&filter__post_id__equal={{$json.post_id}}
   - **Header:** Authorization: Token YOUR_BASEROW_TOKEN (use your own Baserow domain if self-hosted)
5. **Configure "Filter Replied Posts".** Ensure it skips items where your Baserow record shows replied === true, so you don't comment twice.
6. **Configure "Fetch Posts from Reddit".** Set your keyword/search query (and time/sort). Keep the User-Agent header present.
7. **Configure "Write Reddit Comment (AI)".**
   - Update your **brand name** (and optional link).
   - Edit the **prompt/tone** to your voice; ensure it outputs a short reply field (≤80 words, helpful, non-salesy).
8. **Configure "Post Reddit Comment" (HTTP Request).**
   - Endpoint: POST https://oauth.reddit.com/api/comment
   - Body: thing_id: "t3_{{$json.post_id}}", text: "{{$json.reply}}"
   - Uses your Reddit OAuth credential and User-Agent header.
   - Update the user_agent header value with your username: n8n:reddit-autoreply:1.0 (by /u/{reddit-username})
9. **Store comment data in Baserow (HTTP Request).**
   - POST https://api.baserow.io/api/database/rows/table/{TABLE_ID}/?user_field_names=true
   - Header: Authorization: Token YOUR_BASEROW_TOKEN
   - Map: post_id, permalink, subreddit, title, created_utc, reply, replied, created_on={{$now}}
10. **Keep the default pacing.** Leave Wait 5m (cool-off) and Wait 6h (global pace) in place → ~4 comments/day. Reduce the waits gradually as account health allows.
11. **Test & enable.** Run once manually, verify a Baserow row and one test comment, then enable the schedule.

**🤝 Need a hand?**
I'm happy to help you get this running smoothly, or tailor it to your brand. Reach out to me via email: imarunavadas@gmail.com
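The duplicate check and the comment call are plain HTTP requests, so their URL and body are easy to reason about. A minimal sketch of how they are assembled (table ID and post ID below are hypothetical placeholders; only the endpoints and parameter names come from the setup steps above):

```python
def baserow_dupe_check_url(table_id: str, post_id: str) -> str:
    # Same filter the "Check duplicate row" node uses: match rows by post_id.
    return (f"https://api.baserow.io/api/database/rows/table/{table_id}/"
            f"?user_field_names=true&filter__post_id__equal={post_id}")

def reddit_comment_payload(post_id: str, reply: str) -> dict:
    # Form body for POST https://oauth.reddit.com/api/comment; Reddit expects
    # the submission's fullname, i.e. the post ID prefixed with "t3_".
    return {"thing_id": f"t3_{post_id}", "text": reply}
```

If the Baserow response reports one or more matching rows, the post has already been answered and should be skipped.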
by Jitesh Dugar
Automatically qualify inbound demo requests, scrape prospect websites, and send AI-personalized outreach emails—all on autopilot.

**What This Workflow Does**
This end-to-end lead automation workflow helps SaaS companies qualify and nurture inbound leads with zero manual work until human approval.

**Key Features**
✅ **Smart Email Filtering** - Automatically flags personal emails (Gmail, Yahoo, etc.) and routes them to a polite regret message
✅ **Website Intelligence** - Scrapes prospect websites and extracts business context
✅ **AI Analysis** - Uses OpenAI to score ICP fit, identify pain points, and find personalization opportunities
✅ **Personalized Outreach** - AI drafts custom emails referencing specific details from their website
✅ **Human-in-the-Loop** - Approval gate before sending to ensure quality control
✅ **Professional Branding** - Even rejected leads get a thoughtful response

**Perfect For**
- B2B SaaS companies with inbound lead forms
- Sales teams drowning in demo requests
- Businesses wanting to personalize at scale
- Anyone needing intelligent lead qualification

**What You'll Need**
- Jotform account (or any form tool with webhooks); create your form for free on Jotform using this link
- OpenAI API key
- Gmail account (or any email service)
- n8n instance (cloud or self-hosted)

**Workflow Sections**
- 📧 **Lead Intake & Qualification** - Capture form submissions and filter personal emails
- 🕷️ **Website Scraping** - Extract company information from their domain
- ❌ **Regret Flow** - Send a polite rejection to unqualified leads
- 🤖 **AI Analysis** - Analyze prospects and draft personalized emails
- 📨 **Approved Outreach** - Human review + send welcome email

**Customization Tips**
- Update the AI prompt with your company's ICP and value proposition
- Modify the personal email provider list based on your market
- Adjust the regret email template to match your brand voice
- Add Slack notifications for high-value leads
- Connect your CRM to log all activities

**Time Saved:** ~15–20 minutes per lead
**Lead Response:** under 5 minutes (vs. hours or days manually)
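The "Smart Email Filtering" step boils down to a domain check against a list of personal providers. A minimal sketch (the domain list here is illustrative; extend it for your market, as the customization tips suggest):

```python
# Common personal/free email providers (illustrative, not exhaustive).
PERSONAL_DOMAINS = {
    "gmail.com", "yahoo.com", "hotmail.com",
    "outlook.com", "aol.com", "icloud.com",
}

def is_business_email(email: str) -> bool:
    """True if the address does not use a known personal provider."""
    domain = email.rsplit("@", 1)[-1].strip().lower()
    return domain not in PERSONAL_DOMAINS
```

Leads failing this check would be routed into the regret flow; the rest continue to website scraping and AI analysis.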
by DevCode Journey
**Who is this for?**
This workflow is designed for business founders, CMOs, marketing teams, and landing page designers who want to automatically analyze their landing pages and get personalized, unconventional, high-impact conversion rate optimization (CRO) recommendations. It works by scraping the landing page content, then leveraging multiple AI models to roast the page and generate creative CRO ideas tailored specifically for that page.

**What this Workflow Does / Key Features**
- Captures a landing page URL through a user-friendly form trigger.
- Scrapes the landing page HTML content using an HTTP Request node.
- Sends the scraped content to a LangChain AI Agent, which orchestrates various AI models (OpenAI, Google Gemini, Mistral, etc.) for deep analysis.
- The AI Agent produces a friendly, fun, and unconventional "roast" of the landing page, explaining what's wrong in a human tone.
- Generates 10 detailed, personalized, easy-to-implement, and 2024-relevant CRO recommendations with a "wow" factor.
- Delivers the analysis and recommendations via Telegram message, Gmail email, and WhatsApp (via Rapiwa).
- Utilizes multiple AI tools and search APIs to enhance the quality and creativity of the output.

**Requirements**
- OpenAI API credentials configured in n8n.
- Google Gemini (PaLM) API credentials for LangChain integration.
- Mistral Cloud API credentials for text extraction.
- Telegram bot credentials for sending messages.
- Gmail OAuth2 credentials for email delivery.
- Rapiwa API credentials for WhatsApp notifications.
- Running n8n instance with these nodes: Form Trigger, HTTP Request, LangChain AI Agent, Telegram, Gmail, and the custom Rapiwa node.

**How to Use — step-by-step Setup**
1. **Credentials**
   - Add your OpenAI API key under n8n credentials (OpenAi account 2).
   - Add a Google Gemini API key (Google Gemini (PaLM) Api account).
   - Add a Mistral Cloud API key (Mistral Cloud account).
   - Set up Telegram Bot credentials (Telegram account).
   - Set up Gmail OAuth2 credentials (Gmail account).
   - Add a Rapiwa API key for WhatsApp messages (Rapiwa).
2. **Configure the Form Trigger:** Customize the form title, description, and landing page URL input placeholder if desired.
3. **Customize Delivery Nodes:** Modify the Telegram, Gmail, and Rapiwa nodes with your desired recipient info and messaging preferences.
4. **Run the Workflow:** Open the form URL webhook and submit the landing page URL to get a detailed AI-powered CRO roast and recommendations sent directly to your communication channels.

**Important Notes**
- The AI Agent prompt is designed to create a fun and unconventional roast that engages users emotionally and avoids generic advice.
- All CRO recommendations are personalized and contextual, based on the scraped content of the provided landing page.
- Keep all API credentials secure and avoid hard-coding them; use n8n credentials management.
- Adjust the delivery nodes to match your preferred communication channels and recipients.
- The workflow supports expansion with additional AI models or messaging platforms as needed.

**🙋 For Help & Community**
- 👾 Discord: n8n channel
- 🌐 Website: devcodejourney.com
- 🔗 LinkedIn: Connect with Shakil
- 📱 WhatsApp Channel: Join Now
- 💬 Direct Chat: Message Now
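Before the scraped page reaches the AI Agent, the raw HTML is typically reduced to plain text so the model sees content rather than markup. A rough sketch of that reduction (an assumption about the preprocessing, not the workflow's exact node logic):

```python
import re

def html_to_text(html: str) -> str:
    """Strip scripts, styles, and tags; collapse whitespace into single spaces."""
    html = re.sub(r"(?is)<(script|style)[^>]*>.*?</\1>", " ", html)  # drop non-content blocks
    text = re.sub(r"(?s)<[^>]+>", " ", html)                         # drop remaining tags
    return re.sub(r"\s+", " ", text).strip()
```

In n8n this would live in a Code node between the HTTP Request node and the AI Agent.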
by Oneclick AI Squad
Monitor Indian (NSE/BSE) and US stock markets with intelligent price alerts, cooldown periods, and multi-channel notifications (Email + Telegram). The workflow automatically tracks price movements and sends alerts when stocks cross predefined upper/lower limits. Perfect for day traders, investors, and portfolio managers who need instant notifications for price breakouts and breakdowns.

**How It Works**
1. **Market Hours Trigger** - Runs every 2 minutes during market hours
2. **Read Stock Watchlist** - Fetches your stock list from Google Sheets
3. **Parse Watchlist Data** - Processes stock symbols and alert parameters
4. **Fetch Live Stock Price** - Gets real-time prices from the Twelve Data API
5. **Smart Alert Logic** - Intelligent price checking with cooldown periods
6. **Check Alert Conditions** - Validates whether alerts should be triggered
7. **Send Email Alert** - Sends detailed email notifications
8. **Send Telegram Alert** - Instant mobile notifications
9. **Update Alert History** - Records alert timestamps in Google Sheets
10. **Alert Status Check** - Monitors workflow success/failure
11. **Success/Error Notifications** - Admin notifications for monitoring

**Key Features**
- **Smart Cooldown:** prevents alert spam
- **Multi-Market:** supports Indian & US stocks
- **Dual Alerts:** Email + Telegram notifications
- **Auto-Update:** tracks last alert times
- **Error Handling:** built-in failure notifications

**Setup Requirements**

1. **Google Sheets Setup** - Create a Google Sheet with these columns (in exact order):
   - **A:** symbol (e.g., TCS, AAPL, RELIANCE.BSE)
   - **B:** upper_limit (e.g., 4000)
   - **C:** lower_limit (e.g., 3600)
   - **D:** direction (both/above/below)
   - **E:** cooldown_minutes (e.g., 15)
   - **F:** last_alert_price (auto-updated)
   - **G:** last_alert_time (auto-updated)

2. **API Keys & IDs to Replace**
   - YOUR_GOOGLE_SHEET_ID_HERE: replace with your Google Sheet ID
   - YOUR_TWELVE_DATA_API_KEY: get a free API key from twelvedata.com
   - YOUR_TELEGRAM_CHAT_ID: your Telegram chat ID (optional)
   - your-email@gmail.com: your sender email
   - alert-recipient@gmail.com: alert recipient email

3. **Stock Symbol Format**
   - **US stocks:** use simple symbols like AAPL, TSLA, MSFT
   - **Indian stocks:** use a .BSE or .NSE suffix, like TCS.NSE or RELIANCE.BSE

4. **Credentials Setup in n8n**
   - **Google Sheets:** Service Account credentials
   - **Email:** SMTP credentials
   - **Telegram:** Bot token (optional)

**Example Google Sheet Data**

| symbol | upper_limit | lower_limit | direction | cooldown_minutes |
|---|---|---|---|---|
| TCS.NSE | 4000 | 3600 | both | 15 |
| AAPL | 180 | 160 | both | 10 |
| RELIANCE.BSE | 2800 | 2600 | above | 20 |

**Output Example**
Alert: TCS crossed the upper limit. Current Price: ₹4100, Upper Limit: ₹4000.
by masaya kawabe
**Who's it for**
Marketers, creators, and social managers who want hands-off reposting of a specific X (Twitter) user's videos — with on-brand AI captions and clean, deduplicated logs.

**What it does / How it works**
On a schedule, the workflow resolves a target user, fetches recent tweets with media, filters to video posts, and writes them to Google Sheets for tracking and dedupe. It then builds a shareable video URL, generates a short caption via an AI model (OpenRouter), posts to your X account, and updates the sheet with completion status. Sticky notes inside the workflow explain each step, setup tasks, and best practices.

**How to set up**
1. Add credentials: Twitter (X) OAuth2, Google Sheets OAuth2, OpenRouter.
2. Replace the demo Google Sheet with your own (document ID & sheet name).
3. Set the target X username (or parameterize it).
4. Adjust the schedule (interval/cron) and run a test execution.
5. Verify logs and posting format, then enable.

**Requirements**
- Twitter (X) OAuth2 credential
- Google Sheets OAuth2 credential
- OpenRouter credential (choose an affordable model)

**How to customize**
- Edit the caption prompt (tone, hashtag count, CTAs, compliance lines).
- Add filters (language, min/max tweet age, exclude replies/retweets, since_id).
- Extend logging (timestamps, posted text, account, errors).
- Introduce a dry-run boolean to skip posting while testing.
- Swap the caption model or add retry rules for robustness.

**Security:** Don't hardcode tokens in HTTP nodes. Use n8n Credentials only, and remove personal IDs before publishing.
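Filtering "recent tweets with media" down to video posts is essentially a media-type check. A hedged sketch, assuming each tweet item carries a media list with a type field (the exact field names depend on the X API version and the node output you use):

```python
def filter_video_tweets(tweets: list[dict]) -> list[dict]:
    """Keep only tweets whose attached media includes a video."""
    return [
        t for t in tweets
        if any(m.get("type") == "video" for m in t.get("media", []))
    ]
```

The surviving items are what the workflow writes to Google Sheets for tracking and dedupe.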
by George Zargaryan
**Multichannel AI Assistant Demo for Chatwoot**

This simple n8n template demonstrates a Chatwoot integration that can:
- Receive new messages via a webhook.
- Retrieve conversation history.
- Process the message history into a format suitable for an LLM.
- Demonstrate an AI Assistant processing a user's query.
- Send the AI Assistant's response back to Chatwoot.

**Use Case**
If you have multiple communication channels with your clients (e.g., Telegram, Instagram, WhatsApp, Facebook) integrated with Chatwoot, you can use this template as a starting point to build more sophisticated and tailored AI solutions that cover all channels at once.

**How it works**
1. A webhook receives the message created event from Chatwoot.
2. The webhook data is filtered to keep only the necessary information for a cleaner workflow.
3. The workflow checks if the message is "incoming." This is crucial to prevent the assistant from replying to its own messages and creating endless loops.
4. The conversation history is retrieved from Chatwoot via an API call using the HTTP Request node. This keeps the assistant's interaction natural and continuous without needing to store conversation history locally.
5. A simple AI Assistant processes the conversation history and generates a response to the user based on its built-in knowledge base (see the prompt in the assistant node).
6. The final HTTP Request node sends the AI-generated response back to the appropriate Chatwoot conversation.

**How to Use**
1. In Chatwoot, go to Settings → Integrations → Webhooks and add your n8n webhook URL. Be sure to select the message created event.
2. In the HTTP Request nodes, replace the placeholder values: https://yourchatwooturl.com and api_access_token. You can find these values on your Chatwoot super admin page.
3. The LLM node is configured to use OpenRouter. Add your OpenRouter credentials, or replace the node with your preferred LLM provider.

**Requirements**
- An API key for OpenRouter, or credentials for your preferred LLM provider.
- A Chatwoot account with at least one integrated channel and super admin access to obtain the api_access_token.

**Need Help Building Something More?**
Contact me on:
- **Telegram:** @ninesfork
- **LinkedIn:** George Zargaryan

Happy Hacking! 🚀
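The loop-prevention check in step 3 is the one piece worth getting exactly right: only messages Chatwoot marks as incoming should be processed. A minimal sketch of that guard (field names follow Chatwoot's webhook payload; verify them against your instance):

```python
def should_process(event: dict) -> bool:
    """Handle only user messages; ignore the assistant's own outgoing replies."""
    return (
        event.get("event") == "message_created"
        and event.get("message_type") == "incoming"
    )
```

Without this guard, the assistant's own reply fires the webhook again and the workflow loops indefinitely.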
by Muhammad Farooq Iqbal
**🔄 How It Works - LinkedIn Post with Image Automation**

**Overview**
This n8n automation creates and publishes LinkedIn posts with AI-generated images automatically. It's a complete end-to-end solution that transforms simple post titles into engaging social media content.

**Step-by-Step Process**

1. **Content Trigger & Management**
   - A **Google Sheets Trigger** monitors a spreadsheet for new post titles
   - Only processes posts with "pending" status
   - Limits to one post at a time for controlled execution

2. **AI Content Generation**
   - An **AI Agent** uses Google Gemini to create engaging LinkedIn posts
   - Takes the post title and generates: compelling opening hooks, 3–4 informative paragraphs, engagement questions, relevant hashtags (4–6), and appropriate emojis
   - Output is structured and formatted for LinkedIn

3. **AI Image Creation**
   - **Google Gemini image generation** creates custom visuals
   - Uses the AI-generated post content as context
   - Generates professional images featuring a modern workspace with coding elements, Flutter development themes, and professional, LinkedIn-appropriate aesthetics
   - 16:9 aspect ratio, high resolution, with **no text or captions** in the generated image

4. **Image Processing & Storage**
   - Generated images are uploaded to Google Drive
   - Files are shared with public access permissions
   - Image URLs are stored back in the spreadsheet for tracking

5. **LinkedIn Publishing**
   - **LinkedIn API integration** handles the posting process: registers the image upload, uploads the image to LinkedIn's servers, creates the post with text + image, and publishes it to your LinkedIn profile
   - Updates the spreadsheet status to "posted"

**Technical Architecture**

Google Sheets (trigger) → AI content (Gemini LLM) → AI image (Gemini image gen) → Google Drive (file upload) → LinkedIn API (posting) → Status update (tracking)

**Key Features**
✅ **Fully Automated** - Runs continuously without manual intervention
✅ **AI-Powered** - Both content and images generated by AI
✅ **Professional Quality** - LinkedIn-optimized formatting and visuals
✅ **Real-time Tracking** - Monitor status and performance
✅ **Scalable** - Handles multiple posts and campaigns

**How to Use**

*Setup Requirements*
- Google Gemini API for content and image generation
- LinkedIn API credentials for posting
- Google Sheets for content management
- Google Drive for image storage
- n8n instance for workflow execution

*Content Management*
1. Add new post titles to your Google Sheet
2. Set status to "pending"
3. The automation automatically processes and publishes
4. Status updates to "posted" upon completion

*Customization Options*
- Modify AI prompts for different content styles
- Adjust image generation parameters
- Change posting frequency and timing
- Add multiple LinkedIn accounts
- Integrate with other content sources

**Use Cases** - Perfect for:
- **Startups** wanting a consistent LinkedIn presence
- **Marketing teams** overwhelmed with content creation
- **HR departments** building employer branding
- **Agencies** managing multiple client accounts
- **Solo entrepreneurs** needing a professional social media presence

**Benefits**
⏰ **Time Savings:** 20+ hours per week for content teams
📈 **Consistency:** daily, professional posts without gaps
🎨 **Quality:** AI-optimized content and visuals
📊 **Scalability:** handle unlimited content volume
💰 **Cost Effective:** reduce manual content creation costs

🔄 The automation runs continuously, ensuring your LinkedIn presence stays active and engaging 24/7!
For inquiries: mfarooqiqbal143@gmail.com
by Wan Dinie
**Generate a Social Hub (link-in-bio) page with FireCrawl AI and Apify**

This n8n template demonstrates how to create a link-in-bio style landing page (similar to Linktree or Beacons.ai) that automatically aggregates all social media links from any website.

*Author's note:* I built this because I was tired of manually inserting my social media links and copying business descriptions from my website one by one into the link-in-bio platform. What used to take me 10–15 minutes now happens automatically in under a minute. Hope it saves you time too.

Use cases are many: try creating instant social hubs for sales leads, or generating quick bio pages for directory listings.

**Good to know**
- At the time of writing, FireCrawl offers a free tier with 500 credits per month. See FireCrawl pricing for updated info.
- Apify offers a free tier with enough credits for testing and small projects. Actor usage costs vary based on compute time.
- The generated HTML is fully responsive and can be embedded directly or saved as a standalone page.
- Average processing time is 30–45 seconds per website, depending on its size and complexity.

**How it works**
1. A website URL is collected via a form trigger (accessible through a webhook).
2. Apify's Contact Details Scraper Actor extracts emails, phone numbers, and social media links from the submitted website.
3. FireCrawl AI analyzes the website content and generates a short, compelling business description (1–2 sentences, under 150 characters).
4. Both results are merged and processed to identify specific social platforms (Facebook, Instagram, Twitter/X, LinkedIn, YouTube, TikTok, WhatsApp).
5. The system generates an HTML page with a link-in-bio style layout, featuring brand colors, social icons, and the business description.
6. The result is displayed as a formatted HTML response directly in the form - ready to share, embed, or save.

**How to use**
- The form trigger is used as the entry point, but you can replace it with other triggers such as a webhook, schedule, or manual trigger for batch processing.
- You can process multiple websites by looping through a list, though processing will take longer (approximately 30–45 seconds per site).
- The debug node at the bottom lets you preview and edit the HTML styling before deployment.

**Requirements**
- Apify API key (get one at https://apify.com)
- Enable the Contact Details Scraper Actor at https://console.apify.com/actors/WYyiMAvNXhfc2Rthx/input
- FireCrawl API key (get free access at https://www.firecrawl.dev)
- Valid website URLs to analyze (must be publicly accessible)

**Customizing this workflow**
- **Adjust wait time:** The default wait time is 30 seconds in the "Wait for the Apify Scraper Process" node. Increase this if your scraper needs more time for larger websites.
- **Modify description extraction:** Edit the extraction prompt in the "Scrape website description" node to change the description length or style. FireCrawl's /extract endpoint supports natural language prompts for structured data extraction.
- **Change HTML styling:** Edit the CSS in the "Create html format" node to customize colors, fonts, layout, or add animations. The current design uses a purple gradient background with white cards.
- **Debug HTML output:** Use the "View HTML for redesign or debug" node at the bottom to preview the generated HTML without submitting through the webhook.
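The platform identification in step 4 is a matter of matching scraped URLs against known domains. A sketch (the domain patterns are illustrative and may need extending, e.g. for country-specific hosts; naive substring matching can also misfire on lookalike domains):

```python
# Illustrative domain patterns for the platforms the template supports.
PLATFORM_DOMAINS = {
    "facebook": ("facebook.com",),
    "instagram": ("instagram.com",),
    "twitter": ("twitter.com", "x.com"),
    "linkedin": ("linkedin.com",),
    "youtube": ("youtube.com", "youtu.be"),
    "tiktok": ("tiktok.com",),
    "whatsapp": ("wa.me", "whatsapp.com"),
}

def classify_social_links(urls: list[str]) -> dict[str, str]:
    """Map each detected platform to the first matching URL."""
    found: dict[str, str] = {}
    for url in urls:
        for platform, domains in PLATFORM_DOMAINS.items():
            if platform not in found and any(d in url for d in domains):
                found[platform] = url
    return found
```

The resulting mapping is what the HTML generation step would render as icon buttons on the link-in-bio page.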
by Davide
This workflow automates the creation of long AI-generated videos from prompts, merges the generated clips into a single video, and automatically distributes the final content across multiple platforms.

The process starts with a Google Sheet that acts as the control panel for the workflow. Each row in the sheet contains a prompt, the duration of the clip, and a starting frame. The workflow reads this data and generates video clips sequentially.

Using the RunPod WAN 2.5 video generation API, the workflow creates individual video segments based on the prompt and input image. Each segment is then stored and tracked in the spreadsheet. Once all clips are generated, the workflow uses the Fal.ai FFmpeg API to merge them into a single long video. After merging, the final video is retrieved automatically. The workflow also extracts the last frame of each generated clip to use as the starting frame for the next clip, ensuring smooth visual continuity between scenes.

Finally, the completed video is automatically:
- Uploaded to Google Drive for storage
- Published to YouTube
- Uploaded to Postiz, which distributes it to social platforms such as TikTok, Instagram, Facebook, X, and YouTube

This creates a fully automated pipeline that transforms prompts in a spreadsheet into a finished long-form video distributed across multiple platforms.

**Key Advantages**
1. ✅ **Fully Automated Video Production** - The workflow automates the entire process of generating, assembling, and publishing videos, eliminating manual editing and upload steps.
2. ✅ **Spreadsheet-Based Control** - Using Google Sheets as the input system makes the workflow easy to manage and scale. Users can create or modify video scenes simply by editing rows in the sheet.
3. ✅ **Scalable AI Video Generation** - The workflow can generate multiple clips and combine them into longer videos, enabling the creation of long-form content from short AI-generated segments.
4. ✅ **Seamless Scene Continuity** - By extracting the last frame of each clip and using it as the starting frame for the next scene, the workflow maintains visual continuity between segments.
5. ✅ **Automatic Video Merging** - The Fal.ai FFmpeg API merges all generated clips into a single final video without requiring external editing tools.
6. ✅ **Multi-Platform Distribution** - Once the video is completed, it is automatically uploaded and published to multiple platforms, significantly reducing the time needed for content distribution.
7. ✅ **Centralized Storage** - The final video is saved to Google Drive, providing organized and secure storage for the generated content.
8. ✅ **Error Handling and Status Monitoring** - The workflow continuously checks the status of generation and processing tasks, waiting and retrying until each job is completed.

**How it works**
This workflow automates the creation of long videos by generating multiple clips from a Google Sheet and merging them together:
1. **Trigger & Data Loading:** When manually executed, the workflow reads a Google Sheet containing video generation parameters (prompts, durations, and starting images).
2. **Video Generation Loop:** For each row marked for processing, it sends the prompt and parameters to RunPod's WAN 2.5 video generation API, waits for completion (checking status every 60 seconds), then retrieves the generated video URL and updates the Google Sheet.
3. **Frame Extraction:** After each video is generated, it extracts the last frame using Fal.ai's FFmpeg API and updates the next row's starting image (creating visual continuity).
4. **Video Merging:** Once all individual clips are generated (marked with "x" in the MERGE column), the workflow collects all video URLs, sends them to Fal.ai's FFmpeg merge API, polls for completion every 60 seconds, and retrieves the final merged video.
5. **Distribution:** The final long video is uploaded to Google Drive, posted to YouTube via the Upload-Post API, and posted to multiple social platforms (TikTok, Instagram, Facebook, X) via Postiz.

**Setup steps**
1. **Google Sheet Setup**
   - Clone this template sheet
   - Update the sheet ID in all Google Sheets nodes
   - Fill in the columns: START (initial image URL), PROMPT, DURATION (4, 6, or 8 seconds)
   - Mark rows to merge with "x" in the MERGE column
2. **API Credentials Required**
   - Google Sheets OAuth2: for reading/writing spreadsheet data
   - Google Drive OAuth2: for uploading final videos
   - Fal.ai API key: for frame extraction and video merging
   - RunPod API key: for WAN 2.5 video generation
   - Upload-Post API key: for YouTube uploads
   - Postiz API key: for social media posting
3. **Configure Nodes**
   - Update YOUR_USERNAME in the "Upload to Youtube" node
   - Set channel IDs and titles in the "Upload to Social" node (integrationId, content)
   - Verify folder IDs in the Google Drive nodes
4. **Test:** Run the workflow manually to generate your first long video sequence.

👉 Subscribe to my new YouTube channel, where I share videos and Shorts with practical tutorials and FREE templates for n8n.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
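The wait-and-retry pattern used for both RunPod generation and the Fal.ai merge (status check every 60 seconds) is a generic polling loop. A sketch of the idea, not the nodes' exact implementation:

```python
import time

def poll_until_done(check_status, interval_seconds=60, max_attempts=60):
    """Call check_status() until it returns a non-None result, sleeping between tries.

    check_status should return the finished job's payload, or None while pending.
    """
    for _ in range(max_attempts):
        result = check_status()
        if result is not None:
            return result
        time.sleep(interval_seconds)
    raise TimeoutError("job did not finish within the polling budget")
```

In n8n this maps onto a Wait node plus an IF node looping back to the status-check HTTP Request until the job reports completion.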
by David P
**Curate & post AI news to X, Bluesky, Threads & more via GPT-5 mini & Cue**

This n8n template automatically curates AI news from RSS feeds and generates platform-tailored social media posts using GPT-5 mini. Posts are saved as drafts in Cue for review before publishing to X, Bluesky, Threads, Mastodon, and Facebook.

Use cases include:
- Daily automated AI/tech news curation
- Multi-platform social media content creation
- Building thought leadership with consistent posting
- Staying on top of industry news without manual effort

**Who is this for?**
This workflow is ideal for:
- Tech content creators who want to share AI news across multiple platforms
- Social media managers handling multiple accounts
- Anyone building an audience around AI/tech topics
- Teams who want consistent daily content without manual curation

**What problem does this workflow solve?**
Manually curating news, writing platform-specific posts, and publishing across five different social networks is time-consuming. This workflow automates the entire process:
- **Curation** - Pulls from four trusted AI/tech RSS feeds daily
- **Deduplication** - Tracks posted articles in Google Sheets so you never share the same story twice
- **Content creation** - GPT-5 mini writes posts tailored to each platform's style and character limits
- **Review workflow** - Creates drafts in Cue so you can review before publishing

**How it works**
1. **Schedule Trigger** - Runs daily at 9am (configurable)
2. **RSS Feeds** - Fetches articles from TechCrunch AI, Ars Technica AI, The Verge AI, and MIT Tech Review
3. **Filter & Merge** - Combines all feeds and filters to articles from the last 7 days
4. **Deduplication** - Compares against Google Sheets to find unposted articles
5. **Random Selection** - Picks one random article from the available stories
6. **AI Generation** - GPT-5 mini generates 5 platform-specific posts with appropriate tone and length
7. **Save to Cue** - Creates a draft post with all 5 platform variations
8. **Log to Sheet** - Records the article URL to prevent future duplicates

**Setup**

*Requirements*
- Cue account with connected social accounts
- OpenAI API key
- Google account for Sheets

*Step 1: Install the Cue community node*
1. Go to Settings → Community Nodes
2. Click Install
3. Enter @cuehq/n8n-nodes-cue

*Step 2: Create the tracking spreadsheet*
1. Create a new Google Sheet named "AI News Tracker"
2. Add these column headers in row 1: article_url, title, source, processed_at

*Step 3: Configure credentials*
- **Google Sheets** - Add OAuth2 credentials and connect them to the "Get Recent Posts" node
- **OpenAI** - Add your API key and connect it to the "GPT-5 mini" node
- **Cue** - Add your API key from Cue Settings

*Step 4: Configure the Cue node*
1. Open the Create Draft in Cue node
2. Select your Profile
3. For each platform slot, select your social account: Slot 1 → X/Twitter, Slot 2 → Bluesky, Slot 3 → Threads, Slot 4 → Mastodon, Slot 5 → Facebook
4. Don't have all 5 platforms? Simply delete the unused slots.

*Step 5: Publish*
Save and click Publish to activate the workflow.

**Customizing this workflow**
- **Change the schedule** - Edit the Daily 9am Trigger node to run at a different time or frequency.
- **Use different RSS feeds** - Replace the feed URLs with sources relevant to your niche. The workflow handles any standard RSS feed. Keep 3–6 feeds for best results.
- **Auto-publish instead of drafts** - To publish immediately instead of creating drafts, enable Publish Immediately in the Cue node settings.
- **Adjust the AI tone** - Modify the system prompt in the Write Social Posts node to match your brand voice or adjust platform-specific guidelines.

**Good to know**
- **Cost** - Each run uses one OpenAI API call. With GPT-5 mini, this costs approximately $0.01–0.02 per execution.
- **Draft review** - Posts are created as drafts in Cue, giving you a chance to review and edit before publishing.
- **Deduplication** - The Google Sheet tracks all posted URLs, so the same article is never shared twice.

**About Cue**
Cue is a social media scheduling platform that lets you manage and publish content across X, Bluesky, Threads, Mastodon, Facebook, LinkedIn, TikTok, and Instagram from a single dashboard.

Key features:
- **Multi-platform publishing** - Schedule once, publish everywhere
- **Platform-specific content** - Tailor each post for different audiences
- **Draft workflow** - Review and edit before publishing
- **API & integrations** - Connect with n8n, Zapier, Make, and custom apps

Get started free · Documentation · n8n Community Node
by Jitesh Dugar
**Schedule social media posts from local files using UploadToURL, OpenAI, and Buffer**

Marketing teams often have design files sitting locally — campaign images, product videos, event graphics — that need to be published on social media. The usual process means downloading files, switching apps, uploading to each platform separately, and writing captions by hand.

This workflow removes those steps. Send a file link or binary upload to the webhook. UploadToURL hosts it instantly and returns a clean public URL. OpenAI GPT-4.1 mini reads the filename and context to generate a platform-specific caption, hashtags, alt text, and a scroll-stopping hook. A Switch node routes to the correct Buffer profile — Twitter/X, Instagram, or LinkedIn — and the post is scheduled at the AI-suggested best time.

**What this workflow does**
- Receives a file URL or binary upload via webhook, along with platform, tone, and brand preferences
- Validates the payload: checks the platform, detects the content type from the file extension, and cleans the filename into readable words for the AI prompt
- Uploads the file to UploadToURL and retrieves a permanent public link
- Sends the link and context to OpenAI, which returns a structured JSON caption with hashtags, alt text, a hook line, and a UTC posting time
- Routes to the correct Buffer profile based on the platform field
- Schedules the post and returns a confirmation with the schedule ID, caption, hashtags, and estimated engagement

**Who this is for**
- **Marketing agencies** managing multiple brand accounts who need to go from a finished design file to a scheduled post without switching tools
- **Solo creators** who want to publish immediately after finishing a piece of content, without manually uploading to each platform
- **E-commerce teams** who want to trigger social posts whenever new product photos are ready

**Setup**
1. Install the UploadToURL community node: n8n-nodes-uploadtourl
2. Add credentials for UploadToURL API, OpenAI API, and Buffer (as HTTP Header Auth with your Buffer access token)
3. Set three workflow variables: BUFFER_PROFILE_TWITTER, BUFFER_PROFILE_INSTAGRAM, BUFFER_PROFILE_LINKEDIN. Find these IDs in your Buffer account under each profile's settings.
4. Activate the workflow and copy the webhook URL

**Webhook payload**

```json
{
  "fileUrl": "https://cdn.example.com/summer-campaign.jpg",
  "filename": "summer-campaign.jpg",
  "platform": "instagram",
  "tone": "casual",
  "brand": "Acme Studio",
  "hashtags": true
}
```

To upload a binary file instead, send it as multipart/form-data with field name file and omit fileUrl. Pass scheduleTime as an ISO 8601 string to override the AI scheduling suggestion.

**Notes**
- The OpenAI node uses gpt-4.1-mini with response_format: json_object to guarantee structured output, so no post-processing of free text is required
- Caption length is validated against per-platform limits before scheduling (Twitter: 280, Instagram: 2200, LinkedIn: 3000)
- To add Facebook or TikTok, add a new output on the Switch node and duplicate one of the Buffer HTTP request nodes
- The error handler returns a structured JSON 400 response, so calling apps receive actionable feedback without needing to check n8n logs
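The per-platform caption-length check mentioned in the notes can be sketched as a simple lookup (the limits below are the ones the workflow description lists; the function itself is an illustrative helper, not the node's code):

```python
# Per-platform caption limits as listed in the workflow notes.
PLATFORM_LIMITS = {"twitter": 280, "instagram": 2200, "linkedin": 3000}

def caption_fits(platform: str, caption: str) -> bool:
    """True if the caption is within the platform's character limit."""
    limit = PLATFORM_LIMITS.get(platform.lower())
    if limit is None:
        raise ValueError(f"unsupported platform: {platform}")
    return len(caption) <= limit
```

Running this before the Buffer request is what lets the error handler reject an over-long caption with a structured 400 instead of a failed API call.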