by Sabrina Ramonov 🍄
**Description**
Fully automated pipeline: you send yourself an email with a rough idea (subject contains “thread”), n8n’s Gmail trigger picks it up, OpenAI ChatGPT rewrites it to fit a viral-thread template, and Blotato posts the long-form thread to X/Twitter, Bluesky, and Meta Threads, with optional scheduling and image/video attachments. The template is easily extensible to other social platforms.

**Who Is This For?**
Digital creators, content marketers, social media managers, agencies, entrepreneurs, and influencers who want fast, automated long-form thread posting.

**📄 Documentation**
Full Step-by-Step Tutorial

**How It Works**
1. **Trigger: Gmail.** Connect your Gmail account. n8n monitors emails sent from you and filters for subjects containing the word “thread”.
2. **AI Thread Writer: OpenAI ChatGPT.** Connect your OpenAI account. ChatGPT is prompted to clean up your draft and format it as a long-form viral thread.
3. **Publish to Social Media via Blotato.** Connect your Blotato account and choose social accounts (X/Twitter, Threads, Bluesky). Schedule or post immediately. Optional image/video URLs are supported via a mediaUrls array (publicly accessible URLs); see the payload sketch at the end of this entry.

Example email to trigger the workflow:
- Email Subject: thread
- Email Body: I'm obsessed with voice AI apps. Super Whisper is my current favorite because it runs locally and keeps my voice data private. I talk to it instead of typing. Way faster.

**Setup & Required Accounts**
- Gmail account (used as trigger). n8n Gmail OAuth doc: https://docs.n8n.io/integrations/builtin/credentials/google/oauth-single-service
- OpenAI Platform account (access to ChatGPT).
- Blotato account: https://blotato.com. Sign in and generate a Blotato API key under Settings > API > Generate API Key (paid feature only); the key is required for posting.
- n8n: enable "Verified Community Nodes" in your n8n Admin Panel, install the "Blotato" community node, and create Blotato credentials.

**Optional: Media & Style Tweaks**
- Attach images/videos: insert publicly accessible URLs into the mediaUrls array (advanced).
- To emulate a specific tone/structure, give ChatGPT examples of your favorite viral threads, or replace the example viral-thread prompt with your preferred example.
- Voice-to-text tip: record ideas (e.g., with Superwhisper) and send the transcript by email; ChatGPT will clean it up.

**Tips & Tricks**
- During testing, use “Scheduled Time” in Blotato instead of immediate posting so you can preview before going live.
- Start with a single social platform while testing.
- If your script is long or includes media, processing may take longer.
- Many users prefer speaking their ideas (voice notes) and letting AI edit; it's faster than typing.

**Troubleshooting**
- Check your Blotato API Dashboard to inspect each request, response, and error.
- Confirm your API key is valid, your n8n node credentials are set, and the email subject contains “thread”.

**Need Help?**
In the Blotato web app, click the orange support button in the bottom right to access Blotato support.
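For reference, a minimal sketch of the kind of post payload the Blotato step ends up sending. Only the mediaUrls field is named in this template; the surrounding shape is an illustrative assumption, so treat the Blotato node's own fields as authoritative:

```json
{
  "text": "1/ I'm obsessed with voice AI apps...",
  "mediaUrls": [
    "https://example.com/screenshot.png",
    "https://example.com/demo-clip.mp4"
  ]
}
```

Both URLs must be publicly accessible, since Blotato needs to fetch the media when publishing.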
by Weiser22
Shopify Multilingual Product Copy with n8n & Gemini 2.5 Flash-Lite
Categories: E-commerce, Product Content, Translation, Computer Vision

**Description**
Generate language-specific Shopify product copy (ES, DE, EN, FR, IT, PT) from each product’s main image and metadata. The workflow performs a vision analysis to extract objective, verifiable details, then produces product names, descriptions, and handles per language, and stores the results in Google Sheets for review or publishing.

**Good to know**
- **Model:** models/gemini-2.5-flash-lite (supports image input). Confirm pricing/limits in your account before scaling.
- **Image requirement:** products should have images[0].src; add a fallback if some products lack a primary image.
- **Sheets mapping:** the sheet node uses Auto-map; ensure your matching column aligns with the field you emit (id vs product_id).
- **Strict output:** the Agent enforces a multilingual JSON contract (es, de, en, fr, it, pt), each with shopify_product_name, shopify_description, handle.

**How it works**
- **Manual Trigger:** start a test run on demand.
- **Get many products (Shopify):** fetch products and their images.
- **Analyze image (Gemini Vision):** send images[0].src with an objective, 3–5 sentence prompt.
- **AI Agent (Gemini Chat):** merge Shopify fields + vision text under anti-hallucination rules and a strict JSON schema.
- **Structured Output Parser:** validates the exact JSON shape.
- **Expand Languages & Sanitize (Code):** split into 6 items and normalize handles/HTML content as needed (see the sketch below).
- **Append row in sheet (Google Sheets):** add one row per language to your spreadsheet.

**Requirements**
- Shopify Access Token with product read permissions.
- Google AI Studio (Gemini) API key for the Vision + Chat Model nodes.
- Google Sheets credentials (OAuth or Service Account) with access to the target spreadsheet.

**How to use**
1. Connect credentials: Shopify, Gemini (same key for Vision and Chat), and Google Sheets.
2. Configure nodes:
   - Get many products: adjust limit/filters.
   - Analyze image: verify ={{ $json.images[0].src }} resolves to a public image URL.
   - AI Agent & Parser: keep the strict JSON contract as provided.
   - Code (Expand & Sanitize): emits product_id, lang, handle, shopify_product_name, shopify_description, base_handle_es.
   - Google Sheets (Append): set documentId and tab name; confirm the matching column.
3. Run a test: execute the workflow and confirm six rows per product (one per language) appear in the sheet.

**Data contract (Agent output)**

```json
{
  "es": {"shopify_product_name": "", "shopify_description": "", "handle": ""},
  "de": {"shopify_product_name": "", "shopify_description": "", "handle": ""},
  "en": {"shopify_product_name": "", "shopify_description": "", "handle": ""},
  "fr": {"shopify_product_name": "", "shopify_description": "", "handle": ""},
  "it": {"shopify_product_name": "", "shopify_description": "", "handle": ""},
  "pt": {"shopify_product_name": "", "shopify_description": "", "handle": ""}
}
```

**Customising this workflow**
- **Publish to Shopify:** after review in Sheets, add a product.update step to write finalized copy/handles.
- **Handle policy:** tweak slug rules (diacritics, separators, max length) in the Code node to match store conventions.
- **No-image fallback:** add an IF/Switch to skip vision when images[0].src is missing and generate copy from title + body only.
- **Tone/length:** adjust temperature and token limits on the Chat Model for brand-fit.
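A minimal sketch of what the Expand Languages & Sanitize Code node does. Where the Agent's parsed JSON lives (here, output) and the exact slug rules are assumptions; adapt them to the actual node:

```javascript
// n8n Code node ("Run Once for All Items"), illustrative sketch only.
const LANGS = ['es', 'de', 'en', 'fr', 'it', 'pt'];

// Slugify a handle: strip diacritics, lowercase, hyphen-separate (rules are assumptions).
const slugify = (s) => s.normalize('NFD').replace(/[\u0300-\u036f]/g, '')
  .toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/^-+|-+$/g, '');

const out = [];
for (const item of items) {
  const copy = item.json.output;          // assumed: parsed Agent JSON lives here
  const productId = item.json.product_id; // assumed field name
  for (const lang of LANGS) {
    const c = copy[lang];
    out.push({ json: {
      product_id: productId,
      lang,
      handle: slugify(c.handle || c.shopify_product_name),
      shopify_product_name: c.shopify_product_name,
      shopify_description: c.shopify_description,
      base_handle_es: slugify(copy.es.handle || copy.es.shopify_product_name),
    }});
  }
}
return out; // six items per product, one per language
```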
**Troubleshooting**
- **No rows in Sheets:** confirm spreadsheet ID, tab name, Auto-map status, and that the matching column matches your emitted field.
- **Vision errors:** ensure images[0].src is reachable.
- **Parser failures:** the Agent must return **bare JSON** with the six root keys and three fields per language; no extra text.
by Rajeet Nair
📖 Description

**🔹 How it works**
This workflow uses AI (Mistral LLM + Pollinations.ai) to generate high-quality visual content for social media campaigns. It automates the process from brand/campaign input to final image upload, ensuring consistency and relevance.

1. **Input Brand & Campaign Data:** retrieves the brand profile and campaign goals from Google Drive, then cleans and merges the data into a structured JSON format.
2. **Campaign Goal Generation:** AI summarizes campaign goals, audience, success metrics, and keywords, producing a clear campaign goal summary for content planning.
3. **Image Prompt Generation:** AI creates 5 detailed image prompts reflecting the campaign story, including 1 caption and 4–6 relevant hashtags.
4. **Image Creation:** Pollinations.ai generates images from the AI prompts; each image is renamed systematically (photo1 → photo5).
5. **Post-Processing & Upload:** all images are merged into a single item, and the workflow uploads the final output to Google Drive for campaign use.

**⚙️ Set up steps**
1. **Connect credentials:** add Google Drive and Mistral API credentials in n8n.
2. **Configure Google Drive input nodes:** set the fileId for the brand profile and campaign goals.
3. **Customize AI prompts:** sticky notes explain the AI nodes for goal summary and image prompt generation. Optionally modify tone, keywords, or target audience for brand-specific campaigns.
4. **Check image output nodes:** ensure the Pollinations.ai HTTP request nodes are active, and verify the renaming code nodes produce the proper photo sequence.
5. **Activate the workflow:** test it manually to confirm images are generated and uploaded correctly.

**🔹 Data Handling & Output**
This workflow pulls brand profile and campaign goal data from Google Drive. Data is processed into structured JSON, including:
- **Brand Profile:** name, mission, vision, values, services, tone, keywords, contact info.
- **Campaign Goal:** primary goal, focus, success metrics, target audience, core message.

It supports multiple campaigns or brands dynamically, and the JSON output can be used downstream for image prompt generation, reporting, or analytics. All processing is automated, with clear nodes for extraction, parsing, and merging.

Pollinations.ai is a free, open-source text and image generation API. No signups or API keys are required, and it prioritizes privacy with zero data storage and completely anonymous usage.

**⚡ Result:** A fully automated AI-to-image workflow that transforms campaign goals into ready-to-use social media visuals, saving time and maintaining brand consistency.
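For orientation, a Pollinations.ai image request is a single HTTP GET on a prompt URL. Here is a sketch of what each HTTP Request node effectively does; the prompt and size parameters are illustrative, and Pollinations' docs are authoritative for the option set:

```javascript
// Standalone Node.js sketch (not an n8n node): what each Pollinations request amounts to.
const prompt = 'A minimalist product flat-lay, soft studio lighting'; // hypothetical prompt
const url = `https://image.pollinations.ai/prompt/${encodeURIComponent(prompt)}` +
            '?width=1024&height=1024'; // optional parameters

const res = await fetch(url);                        // the response body is the image itself
const image = Buffer.from(await res.arrayBuffer());
// In the workflow, the HTTP Request node stores this as binary data (photo1 … photo5).
```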
by n8n Team
This workflow provides a simple example of how to use itemMatching(itemIndex: Number) in the Code node to retrieve linked items from earlier in the workflow.
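For context, the pattern looks like this in a Code node set to "Run Once for All Items". The node name "Customer Datastore" and the field names are hypothetical:

```javascript
// n8n Code node ("Run Once for All Items"), illustrative sketch only.
for (let i = 0; i < items.length; i++) {
  // itemMatching(i) returns the item from the earlier "Customer Datastore" node
  // that n8n's item linking pairs with item i of the current input.
  const linked = $('Customer Datastore').itemMatching(i);
  items[i].json.customerName = linked.json.name; // assumed field on the linked item
}
return items;
```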
by Lucas Peyrin
How it works This workflow is an interactive, hands-on tutorial designed to teach you the absolute basics of JSON (JavaScript Object Notation) and, more importantly, how to use it within n8n. It's perfect for beginners who are new to automation and data structures. The tutorial is structured as a series of simple steps. Each node introduces a new, fundamental concept of JSON: Key/Value Pairs: The basic building block of all JSON. Data Types: It then walks you through the most common data types one by one: String (text) Number (integers and decimals) Boolean (true or false) Null (representing "nothing") Array (an ordered list of items) Object (a collection of key/value pairs) Using JSON with Expressions: The most important step! It shows you how to dynamically pull data from a previous node into a new one using n8n's expressions ({{ }}). Final Exam: A final node puts everything together, building a complete JSON object by referencing data from all the previous steps. Each node has a detailed sticky note explaining the concept in simple terms. Set up steps Setup time: 0 minutes! This is a tutorial workflow, so there is no setup required. Simply click the "Execute Workflow" button to run it. Follow the instructions in the main sticky note: click on each node in order, from top to bottom. For each node, observe the output in the right-hand panel and read the sticky note next to it to understand what you're seeing. By the end, you'll have a solid understanding of what JSON is and how to work with it in your own n8n workflows.
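As a quick reference, here is one small object that contains every concept the tutorial walks through (all values are made up):

```json
{
  "name": "Ada",
  "age": 36.5,
  "isActive": true,
  "nickname": null,
  "skills": ["automation", "JSON"],
  "address": { "city": "London" }
}
```

Here "name" holds a string, "age" a number, "isActive" a boolean, "nickname" a null, "skills" an array, and "address" a nested object. In the node that follows, an expression like {{ $json.address.city }} would pull out "London".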
by Mohamed Salama
Let AI agents communicate with your Bubble app automatically. It connects directly with your Bubble Data API. This workflow is designed for teams building AI tools or copilots that need seamless access to Bubble backend data via natural language queries.

How it works
- Triggered via a webhook from an AI agent using the MCP (Model Context Protocol).
- The agent selects the appropriate data tool (e.g., projects, users, bookings) based on user intent.
- The workflow queries your Bubble database and returns the result.

Ideal for integrating with ChatGPT, n8n AI Agents, assistants, or autonomous workflows that need real-time access to app data.

Set up steps
1. Enable access to your Bubble data or backend APIs (as needed).
2. Create a Bubble admin token.
3. Add your Bubble node/s to your n8n workflow.
4. Add your Bubble admin token.
5. Configure your Bubble node/s.
6. Copy the generated webhook URL from the MCP Server Trigger node and register it with your AI tool (e.g., a LangChain tool loader).
7. (Optional) Adjust filters in the “Get an Object Details” node to match your dataset needs.

Once connected, your AI agents can automatically retrieve context-aware data from your Bubble app, no manual lookups required.
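For reference, a minimal sketch of the kind of Bubble Data API query the Bubble nodes perform. The app name and data type are hypothetical, and the token is the admin token from the setup steps:

```javascript
// Standalone Node.js sketch for testing a Bubble admin token (not an n8n node).
const APP = 'your-app';   // hypothetical Bubble app name
const TYPE = 'bookings';  // hypothetical data type
const url = `https://${APP}.bubbleapps.io/api/1.1/obj/${TYPE}`;

const res = await fetch(url, {
  headers: { Authorization: `Bearer ${process.env.BUBBLE_ADMIN_TOKEN}` },
});
const { response } = await res.json();
console.log(response.results); // list of matching Bubble things
```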
by moosa
Daily Tech & Startup Digest: Notion-Powered News Curation

**Description**
This n8n workflow automates the curation of a daily tech and startup news digest from articles stored in a Notion database. It filters articles from the past 24 hours, refines them using keyword matching and LLM classification, aggregates them into a single Markdown digest with categorized summaries, and publishes the result as a Notion page. Designed for manual testing or daily scheduled runs, it includes sticky notes (as required by the n8n creator page) to document each step clearly. This original workflow is for educational purposes, showcasing Notion integration, AI classification, and Markdown-to-Notion conversion.

Data in Notion

**Workflow Overview**

Triggers
- **Manual Trigger:** tests the workflow (When clicking ‘Execute workflow’).
- **Schedule Trigger:** runs daily at 8 PM (Schedule Trigger, disabled by default).

Article Filtering
- **Fetch Articles:** queries the Notion database (Get many database pages) for articles from the last 24 hours using a date filter.
- **Keyword Filtering:** JavaScript code (Code in JavaScript) filters articles containing tech/startup keywords (e.g., "tech," "AI," "startup") in title, summary, or full text (see the sketch below).
- **LLM Classification:** uses OpenAI’s gpt-4.1-mini (OpenAI Chat Model) with a text classifier (Text Classifier) to categorize articles as "Tech/Startup" or "Other," keeping only relevant ones.

Digest Creation
- **Aggregate Articles:** combines filtered articles into a single object (Code in JavaScript1) for processing.
- **Generate Digest:** an AI agent (AI Agent) with OpenAI’s gpt-4.1-mini (OpenAI Chat Model1) creates a Markdown digest with an intro paragraph, categorized article summaries (e.g., AI & Developer Tools, Startups & Funding), clickable links, and a closing note.

Notion Publishing
- **Format for Notion:** JavaScript code (Code in JavaScript2) converts the Markdown digest into a Notion-compatible JSON payload, supporting headings, bulleted lists, and links, with a title like “Tech & Startup Daily Digest – YYYY-MM-DD”.
- **Create Notion Page:** sends the payload via HTTP request (HTTP Request) to the Notion API to create a new page.

**Credentials**
Uses Notion API and OpenAI API credentials.

**Notes**
- This workflow is for educational purposes, demonstrating Notion database querying, AI classification, and Markdown-to-Notion publishing.
- Enable and adjust the schedule trigger (e.g., 8 PM daily) for production use to create daily digests.
- Set up Notion and OpenAI API credentials in n8n before running.
- The date filter can be modified (e.g., hours instead of days) to adjust the article selection window.
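A minimal sketch of what the keyword-filter Code node does. The field names (title, summary, fullText) and the keyword list are assumptions based on the description above:

```javascript
// n8n Code node ("Run Once for All Items"), illustrative sketch only.
const KEYWORDS = ['tech', 'ai', 'startup', 'funding', 'developer'];

return items.filter((item) => {
  const haystack = [item.json.title, item.json.summary, item.json.fullText]
    .filter(Boolean)
    .join(' ');
  // Whole-word match so short keywords like "ai" don't fire inside other words.
  return KEYWORDS.some((kw) => new RegExp(`\\b${kw}\\b`, 'i').test(haystack));
});
```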
by Interlock GTM
**Summary**
Turns a plain name + email into a fully-enriched HubSpot contact by matching the person in Apollo, pulling their latest LinkedIn activity, summarising the findings with GPT-4o, and upserting the clean data into HubSpot.

**Key use-cases**
- SDRs enriching inbound demo requests before routing
- RevOps teams keeping executive records fresh
- Marketers building highly-segmented email audiences

**Inputs**

| Field | Type | Example |
|-|-|-|
| name | string | “Jane Doe” |
| email | string | “jane@acme.com” |

**Required credentials**

| Service | Node | Notes |
|-|-|-|
| Apollo.io API key | HTTP Request – “Enrich with Apollo” | Set in header x-api-key |
| RapidAPI key (Fresh-LinkedIn-Profile-Data) | “Get recent posts” | Header x-rapidapi-key |
| OpenAI | 3 LangChain nodes | Supply an API key; default model gpt-4o-mini |
| HubSpot OAuth2 | “Enrich in HubSpot” | Add/create any custom contact properties referenced |

**High-level flow**
1. Trigger – runs when another workflow passes name & email.
2. Clean – a JS Code node normalises & deduplicates emails (sketch below).
3. Apollo match – queries /people/match; skips if no person is found.
4. LinkedIn fetch – grabs up to 3 original posts from the last 30 days.
5. AI summary chain – OpenAI → Structured/Auto-fixing parsers; produces a strict JSON block with job title, location, summaries, etc.
6. HubSpot upsert – maps every key (plus five custom properties) into the contact record.

Sticky notes annotate the canvas; error-prone steps have retry logic.
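A minimal sketch of the Clean step. The email field name is an assumption, and the actual Code node may normalise more than this:

```javascript
// n8n Code node ("Run Once for All Items"), illustrative sketch only.
// Assumes each incoming item carries an `email` field from the calling workflow.
const seen = new Set();
return items.filter((item) => {
  const email = String(item.json.email ?? '').trim().toLowerCase();
  item.json.email = email; // normalise in place
  if (!email || seen.has(email)) return false; // drop empties and duplicates
  seen.add(email);
  return true;
});
```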
by Trung Tran
AI-Powered YouTube Auto-Tagging Workflow (SEO Automation)

Watch the demo video below:
> Supercharge your YouTube SEO with this AI-powered workflow that automatically generates and applies smart, SEO-friendly tags to your new videos every week. No more manual tagging, just better discoverability, improved reach, and consistent optimization. Plus, get instant Slack notifications so your team stays updated on every video’s SEO boost.

**Who’s it for**
- YouTube creators, channel admins, and marketing teams who publish regularly and want consistent, SEO-friendly tags without manual effort.
- Agencies managing multiple channels who need an auditable, automated tagging process with Slack notifications.

**How it works / What it does**
1. **Weekly Schedule Trigger:** runs the workflow once per week.
2. **Get all videos uploaded last week:** queries YouTube for videos uploaded by the channel in the past 7 days.
3. **Get video detail:** retrieves each video’s title, description, and ID.
4. **YouTube Video Auto Tagging Agent (LLM):** inputs are video.title, video.description, channelName. Uses an SEO-specialist system prompt to generate 15–20 relevant, comma-separated tags.
5. **Update video with AI-generated tags:** writes the tags back to the video via the YouTube Data API.
6. **Inform via Slack message:** posts a confirmation message (video title + ID + tags) to a chosen Slack channel for visibility.

**How to set up**
1. YouTube connection
   - Create a Google Cloud project and enable YouTube Data API v3.
   - Configure an OAuth client (Web app / Desktop as required).
   - Authorize with the Google account that manages the channel.
   - In your automation platform, add the YouTube credential and grant scopes (see Requirements).
2. Slack connection
   - Create or use an existing Slack app/bot.
   - Install it to your workspace and capture the Bot Token.
   - Add the Slack credential in your automation platform.
3. LLM / Chat Model
   - Select your model (e.g., OpenAI GPT).
   - Paste the System Prompt (SEO expert) and the User Prompt template. Inputs: {{video_title}}, {{video_description}}, {{channel_name}}. Output: comma-separated list of 15–20 tags (no #, no duplicates).
4. Node configuration
   - Weekly Schedule Trigger: choose day/time (e.g., Mondays 09:00 local).
   - Get all videos uploaded last week: date filter = now() - 7 days.
   - Get video detail: map each video ID from the previous node.
   - Agent node: map fields to the prompt variables.
   - Update video: map the agent’s tag string to the YouTube tags field.
   - Slack message: The video "{{video_title}} - {{video_id}}" has been auto-tagged successfully. Tags: {{tags}}
5. Test run
   - Manually run the workflow with one recent video.
   - Verify the tags appear in YouTube Studio and the Slack message posts.

**Requirements**
- **YouTube Data API v3 scopes**
  - youtube.readonly (to list videos / details)
  - youtube or youtube.force-ssl (to update video metadata incl. tags)
- **Slack Bot Token scopes**
  - chat:write (post messages)
  - channels:read or groups:read if selecting channels dynamically (optional)
- **Platform**
  - Access to a chat/LLM provider (e.g., OpenAI).
  - Outbound HTTPS allowed.
- **Rate limits & quotas**
  - YouTube updates consume quota, and tag updates are write operations; avoid re-writing unchanged tags.
  - Add basic throttling (e.g., 1–2 updates/sec) if you process many videos.

**How to customize the workflow**
- **Schedule:** switch to daily, or run on publish events instead of weekly.
- **Filtering:** process only videos matching rules (e.g., title contains “tutorial”, or missing tags).
- **Prompt tuning:**
  - Add brand keywords to always include (e.g., “WiseStack AI”).
  - Constrain to language (e.g., “Vietnamese tags only”).
  - Enforce a max of 500 chars total for tags if you want a stricter cap.
- **Safety guardrails:**
  - Validate model output: split by comma, trim whitespace, dedupe, drop empty/over-long tags (see the sketch below).
  - If the agent fails, fall back to a heuristic generator (title/keywords extraction).
- **Change log:** write a row per update to a sheet/DB (videoId, oldTags, newTags, timestamp, runId).
- **Human-in-the-loop:** send tags to Slack as buttons (“Apply / Edit / Skip”) before updating YouTube.
- **Multi-channel support:** loop through a list of channel credentials and repeat the pipeline.
- **Notifications:** add error Slack messages for failed API calls; summarize weekly results.

Tip: Keep a small allow/deny list (e.g., banned terms, mandatory brand terms) and run a quick sanitizer right after the agent node to maintain consistency across your channel.
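A minimal sketch of such a sanitizer, written for an n8n Code node placed right after the agent node. The per-tag cap and the tags field name are assumptions:

```javascript
// n8n Code node ("Run Once for Each Item"), illustrative sketch only.
const raw = $json.tags ?? '';  // assumed: agent output mapped to a `tags` string
const MAX_TAG_LEN = 30;        // per-tag cap (assumption)
const MAX_TOTAL = 500;         // YouTube's overall tag budget is roughly 500 characters
const seen = new Set();
const tags = [];
let total = 0;
for (let t of raw.split(',')) {
  t = t.trim().replace(/^#/, '');                       // strip stray hashes and whitespace
  const key = t.toLowerCase();
  if (!t || t.length > MAX_TAG_LEN || seen.has(key)) continue; // drop empty/long/duplicate
  if (total + t.length + 1 > MAX_TOTAL) break;          // stay under the total budget
  seen.add(key);
  tags.push(t);
  total += t.length + 1;                                // +1 approximates the separator
}
return { json: { ...$json, tags: tags.join(', ') } };
```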
by Automate With Marc
## Podcast on Autopilot — Generate Podcast Ideas, Scripts & Audio Automatically with ElevenLabs, GPT-5 and Claude Sonnet 4.0

Bring your solo podcast to life — on full autopilot. This workflow uses GPT-5 and Claude Sonnet 4.0 to turn a single topic input into a complete podcast episode intro and a ready-to-send audio file.

How it works
1. Start a chat trigger: enter a seed idea or topic (e.g., “habits,” “failure,” “technology and purpose”).
2. Podcast Idea Agent (GPT-5) instantly crafts a thought-provoking, Rogan- or Bartlett-style episode concept with a clear angle and takeaway.
3. Podcast Script Agent (Claude Sonnet 4.0) expands that idea into a natural, engaging 60-second opening monologue ready for recording.
4. Text-to-Speech via ElevenLabs automatically converts the script into a high-quality voice track.
5. Email automation sends the finished MP3 directly to your inbox.

Perfect for
• Solo creators who want to ideate, script and voice short podcasts effortlessly
• Content teams prototyping daily or weekly audio snippets
• Anyone testing AI-driven storytelling pipelines

Customization tips
• Swap ElevenLabs with your preferred TTS service by editing the HTTP Request node (see the sketch below).
• Adjust prompt styles for tone or audience in the Idea and Script Agents.
• Modify the Gmail (or other mail service) node to send audio to any destination (Drive, Slack, Notion, etc.).
• For reuse at scale, add variables for episode number, guest name, or theme category — just clone and update the trigger node.

Watch the step-by-step tutorial (how to build it yourself): https://www.youtube.com/watch?v=Dan3_W1JoqU

Requirements & disclaimer
• Requires API keys for OpenAI + Anthropic + ElevenLabs (or your chosen TTS).
• You’re responsible for managing costs incurred through AI or TTS usage.
• Avoid sharing sensitive or private data as input into prompt flows.
• Designed with modularity so you can turn off, swap, or re-link any stage (idea → script → voice → email) without breaking the chain.
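For reference, a sketch of the ElevenLabs text-to-speech call the HTTP Request node makes. The voice ID and model ID are placeholders, and ElevenLabs' current API docs are authoritative:

```javascript
// Standalone Node.js sketch of the TTS request (not an n8n node).
const VOICE_ID = 'your-voice-id'; // placeholder: pick a voice in your ElevenLabs account
const scriptText = 'Your 60-second opening monologue...'; // in n8n, map from the Script Agent

const res = await fetch(`https://api.elevenlabs.io/v1/text-to-speech/${VOICE_ID}`, {
  method: 'POST',
  headers: {
    'xi-api-key': process.env.ELEVENLABS_API_KEY,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    text: scriptText,
    model_id: 'eleven_multilingual_v2', // placeholder model
  }),
});
const mp3 = Buffer.from(await res.arrayBuffer()); // binary audio for the email step
```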
by Paul Abraham
This n8n template demonstrates how to turn a Telegram bot into a personal AI-powered assistant that understands both voice notes and text messages. The assistant can transcribe speech, interpret user intent with AI, and perform smart actions such as managing calendars, sending emails, or creating notes.

Use cases
- Hands-free scheduling with Google Calendar
- Quickly capturing ideas as Notion notes via voice
- Sending Gmail messages directly from Telegram
- A personal productivity assistant available on the go

Good to know
- Voice notes are automatically transcribed into text before being processed.
- This template uses Google Gemini for AI reasoning. The AI agent supports memory, enabling more natural and contextual conversations.

How it works
1. Telegram Trigger – starts when you send a text or voice note to your Telegram bot.
2. Account Check – ensures only authorized users can interact with the bot.
3. Audio Handling – if it’s a voice message, the workflow retrieves and transcribes the recording (see the sketch below for how the voice/text branch is typically detected).
4. AI Agent – both transcribed voice and text are sent to the AI Agent powered by Google Gemini + Simple Memory.
5. Smart Actions – based on the query, the AI can read or create events in Google Calendar, create notes in Notion, and send messages in Gmail.
6. Reply in Telegram – the bot sends a response confirming the action or providing the requested information.

How to use
1. Clone this workflow into your n8n instance.
2. Replace the Telegram Trigger with your bot credentials.
3. Connect Google Calendar, Notion, and Gmail accounts where required.
4. Start chatting with your Telegram bot to add events, notes, or send emails using just your voice or text.

Requirements
- Telegram bot & API key
- Google Gemini account for AI
- Google Calendar, Notion, and Gmail integrations (optional, depending on use case)

Customising this workflow
- Add more integrations (Slack, Trello, Airtable, etc.) for extended productivity.
- Modify the AI prompt in the agent node to fine-tune personality or task focus.
- Swap in another transcription service if preferred.
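For orientation, the voice/text branch usually keys off whether the incoming Telegram update carries a voice payload. A sketch of the check; the field paths follow Telegram's Bot API message object:

```javascript
// n8n Code node ("Run Once for Each Item"), illustrative sketch only.
const msg = $json.message ?? {};
const isVoice = Boolean(msg.voice);           // Telegram voice notes arrive as message.voice
return { json: {
  isVoice,
  text: isVoice ? null : msg.text,            // plain messages arrive as message.text
  fileId: isVoice ? msg.voice.file_id : null, // file_id feeds the download/transcribe step
} };
```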
by Raphael De Carvalho Florencio
**What this workflow is (About)**
This workflow turns a Telegram bot into an AI-powered lyrics assistant. Users send a command plus a lyrics URL, and the flow downloads, cleans, and analyzes the text, then replies on Telegram with translated lyrics, summaries, vocabulary, poetic devices, or an interpretation, all generated by AI (OpenAI).

**What problems it solves**
- Centralizes lyrics retrieval + cleanup + AI analysis in one automated flow
- Produces study-ready outputs (translation, vocabulary, figures of speech)
- Saves time for teachers, learners, and music enthusiasts with instant results in chat

**Key features**
- **AI analysis** using OpenAI (no secrets hardcoded; uses n8n Credentials)
- **Line-by-line translation**, **concise summaries**, **vocabulary lists**
- **Poetic/literary device detection** and **emotional/symbolic interpretation**
- Robust ETL (extract, download, sanitize) and error handling
- Clear Sticky Notes documenting routing, ETL, AI prompts, and messaging

**Who it’s for**
- Language learners & teachers
- Musicians, lyricists, and music bloggers
- Anyone studying lyrics for meaning, style, or vocabulary

**Input & output**
- **Input:** Telegram command with a public **lyrics URL**
- **Output:** Telegram messages (Markdown/MarkdownV2), split into chunks if long

**How it works**
1. **Telegram → Webhook** receives a user message (e.g., /get_lyrics <URL>).
2. **Routing (If/Switch)** detects which command was sent.
3. **Extract URL + Download (HTTP Request)** fetches the lyrics page.
4. **Cleanup (Code)** strips HTML/scripts/styles and normalizes whitespace.
5. **OpenAI (Chat)** formats the result per command (translation, summary, vocabulary, analysis).
6. **Telegram (Send Message)** returns the final text; long outputs are split into chunks (see the sketch below).
7. **Error handling** replies with friendly guidance for unsupported/incomplete commands.

**Set up steps**
1. Create a Telegram bot with @BotFather and copy the bot token.
2. In n8n, create Credentials → Telegram API and paste your token (no hardcoded keys in nodes).
3. Create Credentials → OpenAI and paste your API key.
4. Import the workflow and set a short webhook path (e.g., /lyrics-bot).
5. Publish the webhook and set it on Telegram:
   https://api.telegram.org/bot<YOUR_BOT_TOKEN>/setWebhook?url=https://[YOUR_DOMAIN]/webhook/lyrics-bot
6. (Optional) Restrict update types:

```bash
curl -X POST https://api.telegram.org/bot<YOUR_BOT_TOKEN>/setWebhook \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://[YOUR_DOMAIN]/webhook/lyrics-bot",
    "allowed_updates": ["message"]
  }'
```

7. Test by sending /start and then /get_lyrics <PUBLIC_URL> to your bot.
8. If messages are long, ensure MarkdownV2 is used and special characters are escaped.
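For context, the chunk splitting in step 6 typically looks like this in a Code node. Telegram's 4096-character message limit is real, but the field name holding the AI output is an assumption:

```javascript
// n8n Code node ("Run Once for Each Item"), illustrative sketch only.
// Telegram rejects messages over 4096 characters, so long analyses are split.
const MAX = 4096;
const text = $json.analysis ?? ''; // assumed field holding the OpenAI output
const chunks = [];
for (let i = 0; i < text.length; i += MAX) {
  chunks.push({ json: { text: text.slice(i, i + MAX) } });
}
return chunks; // one item per Telegram message
```

Note that if you use MarkdownV2, naive slicing can cut a message mid-entity; splitting on the nearest newline below the limit is a safer variant.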