by Yang
Who’s it for
This template is perfect for content creators, social media strategists, and marketing teams who want to uncover trending questions directly from real TikTok audiences. If you spend hours scrolling through videos to find content ideas or audience pain points, this workflow automates the entire research process and delivers clean, ready-to-use insights in minutes.

What it does
The workflow takes a keyword, searches TikTok for matching creator profiles, retrieves their latest videos, extracts viewer comments, and uses GPT-4 to identify the most frequently asked questions. These questions can be used to inspire new content, shape engagement strategies, or create FAQ-style videos that directly address what your audience is curious about.

Here’s what happens step by step:
- Accepts a keyword from a form trigger
- Uses Dumpling AI to search TikTok for relevant profiles
- Fetches the most recent videos from each profile
- Extracts and cleans comments using a Python script (see the sketch below)
- Sends cleaned comments to GPT-4 to find recurring audience questions
- Saves the top questions and video links into a Data Table for easy review

How it works
- Form Trigger: Collects the keyword input from the user
- Dumpling AI: Searches TikTok to find relevant creators based on the keyword
- Video Retrieval: Gets recent videos from the discovered profiles
- Comments Extraction: Gathers and cleans all video comments using Python
- GPT-4: Analyzes the cleaned text to extract top audience questions
- Data Table: Stores the results for easy access and content planning

Requirements
✅ Dumpling AI API key stored as credentials
✅ OpenAI GPT-4 credentials
✅ Python node enabled in n8n
✅ A Data Table in n8n to store questions and related video details

How to customize
- Adjust the GPT prompt to refine the tone or format of the extracted questions
- Add filters to target specific types of TikTok profiles or content niches
- Integrate the output with your content calendar or idea tracking tool
- Set up scheduled runs to build a constantly updating library of audience questions
- Extend the workflow to analyze TikTok hashtags or trends alongside comments

> This workflow turns TikTok keyword searches into structured audience insights, helping you quickly discover real questions your audience is asking—perfect for fueling content strategies, campaigns, and engagement.
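The description doesn’t include the cleaning script itself; here is a minimal sketch of what the Python node might do, assuming each comment arrives as an object with a text field (the field name is an assumption about Dumpling AI’s output):

```python
import re

def clean_comments(comments):
    """Normalize raw TikTok comments before sending them to GPT-4.

    Assumes each item is a dict with a 'text' key; adjust to match
    the actual field names returned by the comments extraction step.
    """
    cleaned = []
    for comment in comments:
        text = comment.get("text", "")
        text = re.sub(r"http\S+", "", text)        # strip URLs
        text = re.sub(r"@\w+", "", text)           # strip @mentions
        text = re.sub(r"[^\w\s?!.,']", "", text)   # drop emoji and stray symbols
        text = re.sub(r"\s+", " ", text).strip()   # collapse whitespace
        if len(text) > 2:                          # skip empty or trivial items
            cleaned.append(text)
    return cleaned

# Example: clean_comments([{"text": "How do you edit this?? 🔥 @creator"}])
# -> ["How do you edit this??"]
```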
by Konstantin
How it works
This workflow powers an intelligent, conversational AI bot for VK that can understand and respond to both text and voice messages. The bot uses an AI agent with built-in memory, allowing it to remember the conversation history for each unique user (or in each chat) and answer follow-up questions. It's a complete solution for creating an engaging, automated assistant within your VK group.

Step-by-step
1. VK Webhook (Trigger): The workflow starts when the Webhook node receives a new message from your VK group.
2. Duplicate Filtering: The data immediately passes through the Filter Dubles node, which checks for the x-retry-counter header. This is a crucial step to prevent processing duplicate retry requests sent by the VK API.
3. Voice or Text Routing: A Voice/Text (Switch) node checks whether the message contains text (message.text) or a voice attachment (audio_message.link_mp3).
4. Voice Transcription: If it's a voice note, the Get URL (HTTP Request) node downloads the audio file. The file is then passed to the Transcribe (OpenAI) node, which uses the Whisper model to convert the audio to text.
5. Input Unification: Both the original text (from the 'Text' path) and the newly transcribed text (from the 'Voice' path) are routed to the Set Prompt node, which standardizes the input into a single prompt variable.
6. AI Agent Processing: The prompt variable is passed to the AI Agent. The agent is powered by an OpenAI Chat Model and uses Simple Memory to retain conversation history, with the VK peer_id as the sessionKey. This allows it to maintain a separate history for both private messages and group chats.
7. Response Generation: The successful AI response is passed to the Send to VK (HTTP Request) node, which sends the message back to the user.
8. Error Handling: The AI Agent node has error handling enabled (onError). If it fails, the flow is redirected to the Error (HTTP Request) node, which sends a fallback message to the user.

Set up steps
Estimated set up time: 10 minutes
1. Add your OpenAI credentials to the OpenAI Chat Model and Transcribe nodes.
2. Add your VK group's API Bearer Token credentials to the two HTTP Request nodes named Send to VK and Error.
3. Webhook Setup (Important!): This is a two-stage process, confirmation then operation; the sketch below shows the protocol these settings emulate. Copy the Production Webhook URL from the Webhook node.
   Stage A: Confirm Address (One-time)
   - In the Webhook node settings, set Response Mode to On Received.
   - In Options → Response Data, temporarily paste the confirmation string that VK provides.
   - Activate the workflow (toggle "Active" in the top-right).
   - Paste the URL into your VK group's Callback API settings (Management → API → Callback API) and click "Confirm".
   Stage B: Operational Mode (Permanent)
   - Return to the Webhook node. Set Response Mode to Immediate.
   - In Options → Response Data, type the word ok (lowercase).
   - Save and reactivate the workflow. The bot is now live.
4. (Optional) Customize the system prompt in the AI Agent node to define your bot's name and personality.
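For context, this is the handshake the two webhook stages reproduce: VK's Callback API first sends a confirmation event that must be answered with a per-group confirmation string, and every later event must be answered with the literal string ok, or VK keeps retrying the delivery. A minimal sketch of that protocol using Flask (the confirmation code is a placeholder):

```python
from flask import Flask, request

app = Flask(__name__)
CONFIRMATION_CODE = "abc123"  # placeholder: VK shows the real string in Callback API settings

@app.route("/vk-callback", methods=["POST"])
def vk_callback():
    event = request.get_json()
    # Stage A: VK sends type == "confirmation" once; reply with the code.
    if event.get("type") == "confirmation":
        return CONFIRMATION_CODE
    # Stage B: every other event must be answered with the literal "ok",
    # otherwise VK retries (hence the x-retry-counter header above).
    if event.get("type") == "message_new":
        pass  # hand the message off to your bot logic here
    return "ok"
```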
by Robert Breen
🧑‍💻 Description
This workflow automatically compares the version of your n8n instance with the latest release available. Keeping your n8n instance up-to-date is essential for security patches, bug fixes, performance improvements, and access to new automation features. By running this workflow, you’ll know right away if your instance is behind and whether it’s time to upgrade.

After the comparison, the workflow clearly shows whether your instance is up-to-date or outdated, along with the version numbers for both. This makes it easy to plan updates and keep your automation environment secure and reliable.

⚙️ Setup Instructions
1️⃣ Set Up n8n API Credentials
- In your n8n instance → go to Admin Panel → API
- Copy your API Key
- In n8n → Credentials → New → n8n API
- Paste the API Key and save it
- Attach this credential to the n8n node (Set up your n8n credentials)

✅ How It Works
- **Get Most Recent n8n Version** → Fetches the latest release info from docs.n8n.io.
- **Extract Version + Clean Value** → Parses the version string for accuracy.
- **Get your n8n version** → Connects to your own n8n instance via API and retrieves the current version.
- **Compare** → Evaluates the difference and tells you if your instance is *current* or needs an *update* (see the sketch below).

🎛️ Customization Guidance
- **Notifications**: Add an Email or Slack node to automatically notify your team when a new n8n update is available.
- **Scheduling**: Use a **Schedule Trigger** to run this workflow daily or weekly for ongoing monitoring.
- **Conditional Actions**: Extend the workflow to log version mismatches into Google Sheets, or even trigger upgrade playbooks.
- **Multi-Instance Tracking**: Duplicate the version-check step for multiple n8n environments (e.g., dev, staging, production).

💬 Example Output
- “Your instance (v1.25.0) is up-to-date with the latest release (v1.25.0).”
- “Your instance (v1.21.0) is behind the latest release (v1.25.0). Please update to get the latest bug fixes and features.”

📬 Contact
Need help setting up API credentials or automating version checks across environments?
📧 robert@ynteractive.com
🔗 Robert Breen
🌐 ynteractive.com
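One detail the Compare step has to get right: a plain string comparison would rank v1.9.0 above v1.25.0, so versions must be compared as numeric parts. A minimal sketch of that logic:

```python
def parse_version(version):
    """Turn 'v1.25.0' into a comparable tuple (1, 25, 0)."""
    return tuple(int(part) for part in version.lstrip("v").split("."))

def compare_versions(instance, latest):
    """Reproduce the workflow's up-to-date / behind messages."""
    if parse_version(instance) >= parse_version(latest):
        return f"Your instance ({instance}) is up-to-date with the latest release ({latest})."
    return (f"Your instance ({instance}) is behind the latest release ({latest}). "
            "Please update to get the latest bug fixes and features.")

print(compare_versions("v1.21.0", "v1.25.0"))
```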
by Dean Gallop
Trigger & Topic Extraction
The workflow starts manually or from a chat/Telegram/webhook input. A “topic extractor” node scans the incoming text and cleans it (handles /topic … commands). If no topic is detected, it defaults to a sample news headline.

Style & Structure Setup
A style guide node defines the blog’s tone: practical, medium–low formality, clear sections, clean HTML only. It also enforces do’s (citations, links, actionable steps) and don’ts (no clickbait, no low-quality sources).

Research & Drafting
A GPT node generates a 1,700–1,800 word article following the style guide. Sections include: What happened, Why it matters, Opportunities/risks, Action plan, FAQ. The draft is then polished for clarity and flow.

Quality Control
A word count guard checks that the article is at least 1,600 words (see the sketch below). If it is too short, a GPT “expand draft” node deepens the Why it matters, Risks, and Action plan sections.

Image Creation
The final article’s title and content are used to generate an editorial-style image prompt. Leonardo AI creates a cinematic, text-free featured image suitable for Google News/Discover. The image is uploaded to WordPress, with proper ALT text generated by GPT.

Publishing to WordPress
The final post (title, content, featured image) is automatically published. Sources are extracted from the article and compiled into a “Sources” section with clickable links. Posts are categorized and published immediately.
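The word count guard is straightforward to reproduce; here is a sketch of the check a Code node could perform, assuming the draft arrives as HTML per the style guide (the threshold is taken from the description):

```python
import re

MIN_WORDS = 1600  # threshold from the workflow description

def check_draft(article_html):
    """Report whether the draft clears the minimum length for publication."""
    # Strip HTML tags first so markup doesn't inflate the count.
    text = re.sub(r"<[^>]+>", " ", article_html)
    word_count = len(text.split())
    return {
        "word_count": word_count,
        # True routes the draft to the "expand draft" GPT node.
        "needs_expansion": word_count < MIN_WORDS,
    }

print(check_draft("<p>" + "word " * 1500 + "</p>"))
# -> {'word_count': 1500, 'needs_expansion': True}
```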
by Shohani
Auto backup n8n workflows to GitLab with AI-generated documentation

This n8n template automatically backs up your workflows to a GitLab repository whenever they're updated or activated, and generates README documentation using AI. This workflow can also be added as a sub-workflow to any existing workflow to enable backup functionality.

Who's it for
This template is perfect for n8n users who want to:
- Maintain version control of their workflows
- Create automatic backups in Git repositories
- Generate documentation for their workflows using AI
- Keep their workflow library organized and documented

How it works
The workflow monitors n8n for workflow updates and activations, then automatically saves the workflow JSON to GitLab and generates a README file using OpenAI:
1. Trigger Detection: Uses n8n Trigger to detect when workflows are updated or activated
2. Workflow Retrieval: Fetches the complete workflow data using the n8n API
3. Repository Check: Lists existing files in GitLab to determine if the workflow already exists
4. Smart File Management: Either creates a new file or updates an existing one based on the repository state (see the sketch below)
5. AI Documentation: Generates a README.md file using OpenAI's GPT model to document the workflow
6. GitLab Storage: Saves both the workflow JSON and README to organized folders in your GitLab repository

Requirements
- **GitLab account** with API access and a repository named "all_projects"
- **n8n API credentials** for accessing workflow data
- **OpenAI API key** for generating documentation
- **GitLab personal access token** with repository write permissions

How to set up
1. Configure GitLab credentials: Add your GitLab API credentials in the GitLab nodes
2. Set up n8n API: Configure your n8n API credentials for the workflow retrieval node
3. Add OpenAI credentials: Set up your OpenAI API key in the "Message a model" node
4. Update repository details: Modify the owner and repository name in GitLab nodes to match your setup
5. Test the workflow: Save and activate the workflow to test the backup functionality

How to customize the workflow
- **Change repository structure**: Modify the file path expressions to organize workflows differently
- **Customize commit messages**: Update the commit message templates in GitLab nodes
- **Enhance AI documentation**: Modify the OpenAI prompt to generate different styles of documentation
- **Add file filtering**: Include conditions to backup only specific workflows
- **Extend triggers**: Add webhook or schedule triggers for different backup scenarios
- **Multiple repositories**: Duplicate GitLab nodes to backup to multiple repositories simultaneously
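The create-or-update decision maps onto GitLab's Repository Files API: POST creates a file and PUT updates one, with GitLab answering a create of an existing file with HTTP 400. A minimal sketch of the same logic outside n8n (project ID, branch, and token are placeholders):

```python
import requests

GITLAB = "https://gitlab.com/api/v4"
PROJECT_ID = "12345"                     # placeholder: your "all_projects" project ID
HEADERS = {"PRIVATE-TOKEN": "glpat-..."}  # placeholder personal access token

def backup_workflow(path, content, message):
    """Create the file if it's new, otherwise update it in place."""
    url = f"{GITLAB}/projects/{PROJECT_ID}/repository/files/{requests.utils.quote(path, safe='')}"
    payload = {"branch": "main", "content": content, "commit_message": message}
    # Try to create first; GitLab returns 400 when the file already exists.
    resp = requests.post(url, headers=HEADERS, json=payload)
    if resp.status_code == 400:
        resp = requests.put(url, headers=HEADERS, json=payload)  # fall back to update
    resp.raise_for_status()
    return resp.json()

# backup_workflow("workflows/my-flow/my-flow.json", "{...}", "Backup: my-flow updated")
```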
by Avkash Kakdiya
How it works
This workflow starts when a user triggers a custom slash command in Slack. The workflow checks whether a valid message (email address or HubSpot contact ID) was provided. Based on the input, it searches HubSpot for the contact either by email or by ID. Once the contact is found, the workflow formats the details into a clean, Slack-friendly message card and posts it back into the Slack channel.

Step-by-step
1. Start with Slack Slash Command: The workflow is triggered whenever someone uses a custom slash command in Slack. It checks whether the user actually entered something (email or ID). If nothing is entered, the workflow stops with an error.
2. Parse Search Input: The workflow cleans up the user’s input and determines whether it’s an email address or a HubSpot contact ID (see the sketch below). This ensures the correct HubSpot search method is used.
3. Search in HubSpot: If the input is an email, the workflow searches HubSpot by email. If the input is an ID, the workflow retrieves the contact directly using the HubSpot contact ID.
4. Format Contact Info: The retrieved HubSpot contact details (name, email, phone, company, deal stage, etc.) are formatted into a Slack-friendly message card.
5. Send Contact Info to Slack: Finally, the formatted contact information is posted back into the Slack channel, making it instantly visible to the user and team.

Why use this?
- Quickly look up HubSpot contacts directly from Slack without switching tools.
- Works with both email addresses and HubSpot IDs.
- Provides a clean, structured contact card in Slack with key details.
- Saves time for sales and support teams by keeping workflows inside Slack.
- Runs automatically once set up, with no extra clicks or manual searches.
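The email-versus-ID routing is a small classification step; here is a sketch of what Parse Search Input might do (the exact validation rules are illustrative):

```python
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def classify_input(raw):
    """Decide whether a slash-command argument is an email or a HubSpot contact ID."""
    value = raw.strip().lower()
    if not value:
        raise ValueError("No search term provided; usage: /contact <email or ID>")
    if EMAIL_RE.match(value):
        return {"search_by": "email", "value": value}
    if value.isdigit():  # HubSpot contact IDs are numeric
        return {"search_by": "id", "value": value}
    raise ValueError(f"'{raw}' is neither an email address nor a numeric contact ID")

print(classify_input("jane@acme.com"))  # -> {'search_by': 'email', 'value': 'jane@acme.com'}
print(classify_input("30512"))          # -> {'search_by': 'id', 'value': '30512'}
```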
by Mattis
Stay informed about the latest n8n updates automatically! This workflow monitors the n8n GitHub repository for new pull requests, filters updates from today, generates an AI-powered summary, and sends notifications to your Telegram channel.

Who's it for
- n8n users who want to stay up-to-date with platform changes
- Development teams tracking n8n updates
- Anyone managing n8n workflows who needs to know about breaking changes or new features

How it works
1. Daily scheduled check at 10 AM for new pull requests
2. Fetches the latest pull requests from the n8n GitHub repository (see the sketch below)
3. Filters to only process today's updates
4. Extracts the pull request summary
5. AI generates a clear, technical summary in English
6. Sends the notification to your Telegram channel
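The fetch-and-filter steps correspond to a single call against the public GitHub REST API; here is a sketch of the equivalent logic (the field selection is illustrative):

```python
from datetime import datetime, timezone
import requests

def todays_n8n_prs():
    """Fetch the most recently created PRs on n8n-io/n8n and keep only today's."""
    resp = requests.get(
        "https://api.github.com/repos/n8n-io/n8n/pulls",
        params={"state": "all", "sort": "created", "direction": "desc", "per_page": 20},
        headers={"Accept": "application/vnd.github+json"},
    )
    resp.raise_for_status()
    today = datetime.now(timezone.utc).date()
    return [
        {"title": pr["title"], "body": pr["body"], "url": pr["html_url"]}
        for pr in resp.json()
        # created_at looks like "2025-01-31T09:12:45Z"
        if datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00")).date() == today
    ]

print(todays_n8n_prs())
```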
by Robert Breen
Pull recent Instagram post media for any username, fetch the image binaries, and run automated visual analysis with OpenAI, all orchestrated inside n8n. This workflow uses a Google Sheet to supply target usernames, calls Apify’s Instagram Profile Scraper to fetch recent posts, downloads the images, and passes them to an OpenAI vision-capable model for structured analysis. Results can then be logged, stored, or routed onward depending on your use case.

🧑‍💻 Who’s it for
- Social media managers analyzing competitor or brand posts
- Marketing teams tracking visual trends and campaign content
- Researchers collecting structured insights from Instagram images

⚙️ How it works
1. Google Sheets – Supplies Instagram usernames (one per row).
2. Apify Scraper – Fetches latest posts (images and metadata).
3. HTTP Request – Downloads each image binary.
4. OpenAI Vision Model – Analyzes visuals and outputs structured summaries (see the sketch below).
5. Filter & Split Nodes – Ensure only the right rows and posts are processed.

🔑 Setup Instructions
1) Connect Google Sheets (OAuth2)
- Go to n8n → Credentials → New → Google Sheets (OAuth2)
- Sign in with your Google account and grant access
- In the Get Google Sheet node, select your spreadsheet + worksheet (must contain a User column with Instagram usernames)

2) Connect Apify (HTTP Query Auth)
- Get your Apify API token at Apify Console → Integrations/API
- In n8n → Credentials → New → HTTP Query Auth, add a query param token=<YOUR_APIFY_TOKEN>
- In the Scrape Details node, select that credential and use the provided URL

3) Connect OpenAI (API Key)
- Create an API key at OpenAI Platform
- In n8n → Credentials → New → OpenAI API, paste your key
- In the OpenAI Chat Model node, select your credential and choose a vision-capable model (gpt-4o-mini, gpt-4o, or gpt-5 if available)

🛠️ How to customize
- Change the Google Sheet schema (e.g., add campaign tags or notes).
- Adjust the OpenAI system prompt to refine what details are extracted (e.g., brand logos, colors, objects).
- Route results to Slack, Notion, or Airtable instead of storing only in Sheets.
- Apply filters (hashtags, captions, or timeframe) directly in the Apify scraper config.

📋 Requirements
- n8n (Cloud or self-hosted)
- Google Sheets account
- Apify account + API token
- OpenAI API key with a funded account

📬 Contact
Need help customizing this (e.g., filtering by campaign, sending reports by email, or formatting your PDF)?
📧 rbreen@ynteractive.com
🔗 Robert Breen
🌐 ynteractive.com
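For reference, a vision request against the OpenAI Chat Completions API looks like this; a minimal sketch in which the model, prompt, and image URL are placeholders (in the workflow the URL comes from the Apify results):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def analyze_post_image(image_url):
    """Ask a vision-capable model for a structured summary of one Instagram image."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this post: main subject, colors, visible brands, overall mood."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return response.choices[0].message.content

# print(analyze_post_image("https://example.com/post.jpg"))  # placeholder URL
```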
by Snehasish Konger
Target audience
Solo creators, PMs, and content teams who queue LinkedIn ideas in Google Sheets and want them posted on a fixed schedule with AI-generated copy.

How it works
The workflow runs on a schedule (Mon/Wed/Fri at 09:30). It pulls the first Google Sheet row with Status = Pending, generates a LinkedIn-ready post from Post title using an OpenAI prompt, publishes to your LinkedIn profile, then updates the same row to Done and writes the final post back to the sheet.

Prerequisites (use your own credentials)
- **Google Sheets (OAuth2)** with access to the target sheet
- **LinkedIn OAuth2** tied to the account that should post; set the **person** field to your profile’s URN in the LinkedIn node
- **OpenAI API key** for the Chat Model node

Store secrets in n8n Credentials. Never hard-code keys in nodes.

Google Sheet structure (exact columns)
Minimum required columns:
- id — unique integer/string used to update the same row later
- Status — allowed values: Pending or Done
- Post title — short prompt/topic for the AI model

Recommended columns:
- Output post — where the workflow writes the final text (use this header, or keep your existing Column 5)
- Hashtags (optional) — comma-separated list (the prompt can append these)
- Image URL (optional) — public URL; add an extra LinkedIn “Create Post” input if you post with media later
- Notes (optional) — extra hints for tone, audience, or CTA

Example header row:
id | Status | Post title | Hashtags | Image URL | Output post | Notes

Example rows (inputs → outputs):
1 | Pending | Why I moved from Zapier to n8n | #automation,#nocode | | | Focus on cost + flexibility
2 | Done | 5 lessons from building a rules engine | #product,#backend | | This is the final posted text... |

Resulting Output post (for row 1 after publish):
I switched from Zapier to n8n for three reasons: control, flexibility, and cost. Here’s what changed in my stack and what I’d repeat if I had to do it again. #automation #nocode

> If your sheet already has a column named Column 5, either rename it to Output post and update the mapping in the final Google Sheets Update node, or keep Column 5 as is and leave the node mapping untouched.

Step-by-step
1. Schedule Trigger: Runs on Mon/Wed/Fri at 09:30.
2. Fetch pending rows (Google Sheets → Get Rows): Reads the sheet and filters rows where Status = Pending.
3. Limit: Keeps only the first pending row so one post goes out per run.
4. Writing the post (Agent + OpenAI Chat Model + Structured Output Parser): Uses Post title (and optional Notes/Hashtags) as input. The agent returns JSON with a post field. The model is set to gpt-4o-mini by default.
5. Create a post (LinkedIn): Publishes {{$json.output.post}} to the configured person (your profile URN).
6. Update the sheet (Google Sheets → Update): Matches by id, sets Status = Done, and writes the generated text into Output post (or your existing output column).

Customization
- **Schedule**: change days/time in the Schedule node. Consider your n8n server timezone.
- **Posts per run**: remove or raise the **Limit** to post more than one item.
- **Style and tone**: edit the Agent’s system prompt. Add rules for line breaks, hashtags, or a closing CTA.
- **Hashtags handling**: parse the Hashtags column in the prompt so the model appends them cleanly.
- **Media posts**: add a branch that attaches Image URL (requires LinkedIn media upload endpoints).
- **Company Page**: switch the **person** field to an **organization** URN tied to your LinkedIn app scope.

Troubleshooting
- **No post created**: Check the If/Limit path: is there any row with Status = Pending? Confirm the sheet ID and tab name in the Google Sheets nodes.
- **Sheet not updating**: The Update node must receive the original id. If you changed field names, remap them. Make sure id values are unique.
- **LinkedIn errors (403/401/404)**: Refresh LinkedIn OAuth2 in Credentials. The person/organization URN may be wrong or missing; copy the exact URN from the LinkedIn node helper. The app may lack the required permissions for posting.
- **Rate limit (429) or model errors**: Add a short Wait before retries. Switch to a lighter model or simplify the prompt.
- **Post too long or broken formatting**: LinkedIn’s hard limit is ~3,000 characters. Add a truncation step in Code (see the sketch below) or instruct the prompt to cap length. Replace double line breaks in the LinkedIn node if you see odd spacing.
- **Timezone mismatch**: The Schedule node uses the n8n instance timezone. Adjust it or move to a Cron with an explicit TZ if needed.

Need to post at a different cadence, or push two posts per day? Tweak the Schedule and Limit nodes and you’re set.
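The truncation step mentioned under troubleshooting is only a few lines in a Code node; here is a sketch in Python (the word-boundary handling and ellipsis are stylistic choices):

```python
LINKEDIN_LIMIT = 3000  # LinkedIn's hard character limit for posts

def truncate_post(text, limit=LINKEDIN_LIMIT):
    """Cap a post at the LinkedIn character limit, cutting at a word boundary."""
    if len(text) <= limit:
        return text
    cut = text[: limit - 1]
    # Back up to the last space so we don't end mid-word, then add an ellipsis.
    cut = cut.rsplit(" ", 1)[0]
    return cut + "…"

print(len(truncate_post("word " * 1000)))  # -> at most 3000
```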
by raas
🎵 AI Spotify Playlist Generator (Telegram → Spotify)

Instantly create a curated Spotify playlist based on a single song recommendation sent via Telegram. This workflow uses an AI Agent to generate similar tracks, creates a new playlist, sends you the link immediately, and then populates the playlist in the background.

✨ Features
- Instant Feedback: Creates the Spotify playlist and sends the URL back to your Telegram chat immediately, before the AI finishes processing.
- AI Curation: Uses an AI Agent (via OpenRouter) to act as a "Greatest DJ," generating 25 songs similar to your input.
- Smart Searching: Automatically searches Spotify for the generated tracks.
- Error Handling: Includes logic to skip tracks that cannot be found on Spotify.
- Rate Limiting: Includes a wait loop to ensure Spotify API rate limits are respected during population.

🛠️ Prerequisites
To use this workflow, you need:
- n8n: An active instance of n8n.
- Spotify Developer Account: You need a Client ID and Client Secret to authenticate the Spotify node.
- Telegram Bot: A bot token created via @BotFather.
- OpenRouter Account: An API key for OpenRouter to access the LLM (Language Model). Note: You can easily swap the OpenRouter node for an OpenAI or Anthropic node if preferred.

🔄 How it Works
1. Trigger: You send a message to your Telegram Bot (e.g., "Daft Punk - One More Time").
2. Create Playlist: The workflow immediately creates a new empty playlist on your Spotify account named with your username and the prompt.
3. Reply: The bot replies to you with the link to the new playlist.
4. AI Generation: The prompt is sent to an AI Agent, which generates a JSON list of ~25 similar tracks and artists.
5. Processing: The workflow splits the list into individual items and loops through every item. It searches Spotify for the specific Track + Artist, adds the track to the playlist if found, and waits 1 second between adds to prevent API errors (see the sketch below).

⚙️ Setup Instructions
1. Credentials:
   - Set up your Telegram API credentials in the Trigger and Send Message nodes.
   - Set up your Spotify OAuth2 credentials in the Create, Search, and Add Item nodes.
   - Set up your OpenRouter API credentials in the Chat Model node.
2. Model Selection: The template is configured to use openai/gpt-5-nano via OpenRouter. If this model is unavailable to you, simply open the OpenRouter Chat Model node and change the model to openai/gpt-4o or meta-llama/llama-3-70b-instruct.
3. Activate: Save the workflow and click Activate. Open your Telegram bot and send it a song name!

📦 Dependencies
- n8n-nodes-base.telegramTrigger
- @n8n/n8n-nodes-langchain.agent
- n8n-nodes-base.spotify
- n8n-nodes-base.splitOut
- n8n-nodes-base.splitInBatches
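The search, add, and wait loop has a direct equivalent in the spotipy client library, which can be handy for testing the logic outside n8n; a minimal sketch (the playlist ID and track list are placeholders):

```python
import time
import spotipy
from spotipy.oauth2 import SpotifyOAuth

sp = spotipy.Spotify(auth_manager=SpotifyOAuth(scope="playlist-modify-public"))

def populate_playlist(playlist_id, tracks):
    """Search each AI-suggested track and add it, skipping misses and pacing requests."""
    for item in tracks:  # e.g. [{"track": "One More Time", "artist": "Daft Punk"}, ...]
        query = f"track:{item['track']} artist:{item['artist']}"
        results = sp.search(q=query, type="track", limit=1)
        found = results["tracks"]["items"]
        if not found:            # error handling: skip tracks Spotify can't find
            continue
        sp.playlist_add_items(playlist_id, [found[0]["uri"]])
        time.sleep(1)            # rate limiting: one add per second

# populate_playlist("37i9dQ...", [{"track": "One More Time", "artist": "Daft Punk"}])
```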
by Robert Breen
Web-Form Auto-Responder: Instant Email + SMS Follow-Up

📝 Description
Embed a simple web form on your site and let this workflow:
- Collect a visitor’s name, email, phone, and question
- Generate a professional email and a friendly SMS using GPT-4o-mini
- Delay briefly (1 min by default) to simulate human writing time
- Send the AI-crafted email via Microsoft Outlook
- Send the AI-crafted text via Twilio (see the sketch below)

Perfect for solo consultants or small teams who want rapid, personalized responses without manual typing.

⚙️ Setup Instructions
1. Import the workflow: n8n → Workflows → Import from File (or Paste JSON) → Save
2. Add credentials:

| Service | Where to get credentials | Node(s) to update |
|---------|-------------------------|-------------------|
| OpenAI | <https://platform.openai.com> → create API key | OpenAI Chat Model |
| Microsoft Outlook | Azure/M365 account with email-send permissions | Send email to the submitter |
| Twilio | <https://console.twilio.com> → Account SID, Auth Token | Send text to the submitter |

3. Embed the form on your website: Open Form to be embedded on website, click “Embed” → copy the iframe code → paste into your contact page
4. Set your Twilio “From” number: In Send text to the submitter, change phone to your verified Twilio number
5. Adjust wait times (optional): Wait some time to write the email response (default 1 min); Wait some time to write the text response (default 1 min)
6. Customize the AI prompt (optional): Edit the AI Agent system message to tweak tone, questions asked, or signature
7. Test the flow: Open the form URL (generated by the Form node), submit a test entry; after ~1 min you should receive both an email and an SMS
8. Activate: Toggle Active so the form handles real submissions 24/7

🧩 Customization Ideas
- Pipe form data into Pipedrive, HubSpot, or Airtable for lead tracking
- Trigger a Slack/Teams alert to notify your team of hot questions
- Add a calendar link in the email so visitors can book a call instantly
- Use a language-detection node to reply in the visitor’s native language

Contact
- **Email:** rbreen@ynteractive.com
- **Website:** https://ynteractive.com
- **YouTube:** https://www.youtube.com/@ynteractivetraining
- **LinkedIn:** https://www.linkedin.com/in/robertbreen
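For reference, the Twilio step corresponds to a single call in Twilio's Python helper library; a minimal sketch (both phone numbers are placeholders):

```python
import os
from twilio.rest import Client

# Credentials come from the Twilio console; keep them in environment variables.
client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

def send_followup_sms(to_number, ai_message):
    """Send the AI-crafted text to the form submitter."""
    message = client.messages.create(
        body=ai_message,
        from_="+15550001111",  # placeholder: your verified Twilio number
        to=to_number,
    )
    return message.sid

# send_followup_sms("+15552223333", "Thanks for reaching out! Here's a quick answer...")
```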
by M Ayoub
Who is this for?
DevOps engineers, sysadmins, and website owners who manage multiple domains and need proactive SSL certificate expiration monitoring without manual checks.

What it does
Automatically monitors SSL certificates across multiple domains, tracks expiration status in a Google Sheet dashboard, and sends beautifully formatted HTML email alerts before certificates expire.

✅ No API rate limits: the workflow uses direct OpenSSL commands (see the sketch below), so you can scan unlimited domains with zero API costs or restrictions.

How it works
1. Triggers on schedule (every 3 days at 10 AM)
2. Reads the domain list from your Google Sheet
3. Checks each domain's SSL certificate using OpenSSL commands
4. Parses expiration dates and issuer info, and calculates days remaining
5. Updates the Google Sheet with current status for all domains
6. Sends styled email alerts only when certificates are expiring soon

Set up steps
1. Connect your Google Sheets OAuth2 credentials
2. Create a Google Sheet with these columns: Domain, Expiry Date, Days Left, Status, Issuer, Last Checked (the workflow matches on the Domain column to update results)
3. Add the domains to scan in the Domain column
4. Update the Sheet ID in the Read Domain List from Google Sheets and Update Google Sheet with Results nodes
5. Connect SMTP credentials in the Send Alert Email via SMTP node
6. Optionally adjust ALERT_THRESHOLD_DAYS in two nodes, Prepare Domain List and Set Threshold and Parse SSL Results and Identify Expiring Certs (default: 20 days)

Setup time: ~10 minutes
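Each domain check boils down to one OpenSSL pipeline plus date math; here is a sketch wrapping it in Python (the threshold mirrors the workflow's 20-day default, and error handling is omitted):

```python
import subprocess
from datetime import datetime, timezone

ALERT_THRESHOLD_DAYS = 20  # mirrors the workflow default

def check_ssl(domain):
    """Read a domain's certificate expiry and issuer via OpenSSL."""
    cmd = (
        f"echo | openssl s_client -servername {domain} -connect {domain}:443 2>/dev/null"
        " | openssl x509 -noout -enddate -issuer"
    )
    out = subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout
    # Output lines look like "notAfter=Mar 14 12:00:00 2026 GMT" and "issuer=C = US, ...".
    fields = dict(line.split("=", 1) for line in out.strip().splitlines())
    expiry = datetime.strptime(fields["notAfter"], "%b %d %H:%M:%S %Y %Z")
    days_left = (expiry - datetime.now(timezone.utc).replace(tzinfo=None)).days
    return {
        "domain": domain,
        "expiry": expiry.date().isoformat(),
        "days_left": days_left,
        "issuer": fields["issuer"].strip(),
        "expiring_soon": days_left < ALERT_THRESHOLD_DAYS,
    }

print(check_ssl("example.com"))
```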