by Mirajul Mohin
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

What this workflow does
- Monitors Google Drive for new video file uploads
- Downloads and processes videos using VLM Run AI transcription
- Generates accurate transcripts with timestamps, audio content, and video descriptions
- Saves formatted reports to Google Docs for instant access and sharing

Setup
Prerequisites: Google Drive account, VLM Run API credentials, Google Docs access, and a self-hosted n8n instance. You also need to install the VLM Run community node.

Quick setup:
1. Configure Google Drive OAuth2 and create a video upload folder
2. Add VLM Run API credentials
3. Set up Google Docs integration for transcript storage
4. Update the folder/document IDs in the workflow nodes
5. Test with sample video files and activate

Perfect for
- Meeting transcription and documentation
- Content creation and video accessibility
- Educational content processing and analysis
- Interview transcription and note-taking
- Podcast and webinar documentation
- Legal deposition and testimony recording
- Customer support call analysis

Key benefits
- **Asynchronous processing** handles large video files without timeouts
- **Multi-format support** for MP4, AVI, MOV, WebM, and MKV
- **Dual content extraction** captures both audio transcripts and video descriptions
- **Eliminates manual transcription**, saving hours of documentation time
- **High-accuracy speech recognition** with multi-language support
- **Structured output** with timestamps and scene descriptions

How to customize
Extend the workflow by adding:
- Speaker identification and voice separation
- Sentiment analysis and keyword extraction
- Integration with project management tools
- Email notifications for transcription completion
- Summary generation and key point extraction
- Multi-language translation capabilities
- Search indexing for transcript databases
- Integration with video editing software

This workflow transforms manual video transcription into an automated, accurate, and efficient process, making video content accessible and searchable for your business operations, educational needs, or content creation workflows.
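The "structured output with timestamps" step can be sketched as a small formatting function. The segment shape `{ start, end, text }` (times in seconds) is an assumption for illustration; the actual VLM Run community node may return a different structure.

```javascript
// Convert seconds to an HH:MM:SS label.
function toTimestamp(seconds) {
  const h = String(Math.floor(seconds / 3600)).padStart(2, '0');
  const m = String(Math.floor((seconds % 3600) / 60)).padStart(2, '0');
  const s = String(Math.floor(seconds % 60)).padStart(2, '0');
  return `${h}:${m}:${s}`;
}

// Turn hypothetical transcript segments into the lines written to Google Docs.
function formatTranscript(segments) {
  return segments
    .map(seg => `[${toTimestamp(seg.start)} - ${toTimestamp(seg.end)}] ${seg.text}`)
    .join('\n');
}

// Example:
const report = formatTranscript([
  { start: 0, end: 12.5, text: 'Welcome to the meeting.' },
  { start: 12.5, end: 65, text: 'First agenda item.' },
]);
```

In the workflow, a Code node like this would sit between the VLM Run node and the Google Docs update.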
by Abhiman G S
Short description
Transcribe Telegram voice/audio messages to text using Groq's OpenAI-compatible Whisper endpoint. Replies are delivered either as a Telegram message or as a downloadable .txt file. Plug-and-play for n8n with minimal setup.

Who's it for / Uses
- Educators, podcasters, interviewers, and support teams who need quick voice-to-text conversion
- Automating meeting notes, voice feedback, voicemail transcription, or speech logging
- Useful when you want transcripts pushed to chat or saved as files for archiving

How it works (overview)
1. Telegram Trigger: the workflow starts on an incoming message.
2. Switch (Audio/Voice): detects voice or audio messages. If neither, replies "Wrong file type" and stops.
3. Telegram Download: downloads the audio using the file_id and outputs the file path/binary.
4. Set node (credentials + options): stores Groq_API and Telegram_access_token (required) and transcript_output_format (message or file).
5. HTTP Request → Groq (Whisper): uploads the audio (multipart/form-data) to Groq's transcription endpoint and receives the text.
6. Reply Switch: routes to either:
   - Message branch: sends the transcribed text as a Telegram message.
   - File branch: converts the transcript to .txt and sends it as a document.

Requirements
- n8n instance (cloud or self-hosted) with internet access
- Telegram bot token (create via BotFather)
- Groq API key (create at https://console.groq.com/keys)
- Basic n8n nodes: Telegram Trigger, Switch, Telegram Download, Set, HTTP Request, Convert to File, Telegram Send Message/Document

Important setup & notes
- **Mandatory:** add Groq_API and Telegram_access_token in the **Set** node (or use n8n Credentials). The workflow will fail without them.
- **Do not hardcode** keys in HTTP node fields that will be exported or shared; use Set fields or n8n Credentials.
- Include sticky notes explaining each node (a yellow note with a full description is recommended); they should show setup steps and required fields.
- Before publishing: remove personal IDs and secrets, test with sample voice messages, and verify the Groq response schema to map the transcript field correctly.

Security & best practices
- Use n8n Credentials or environment variables in production.
- Rotate API keys if they become exposed.
- Keep the Set node private when sharing templates; instruct users to replace the keys with their own.
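The HTTP Request → Groq step can be sketched as building the multipart request, using Node 18+'s global FormData/Blob. The endpoint path and `whisper-large-v3` model name follow Groq's OpenAI-compatible transcription API; verify both against Groq's current docs before relying on them.

```javascript
// Build (but do not send) the transcription request for fetch().
function buildGroqTranscriptionRequest(apiKey, audioBuffer, fileName) {
  const form = new FormData();
  form.append('model', 'whisper-large-v3');
  form.append('file', new Blob([audioBuffer]), fileName);
  return {
    url: 'https://api.groq.com/openai/v1/audio/transcriptions',
    method: 'POST',
    headers: { Authorization: `Bearer ${apiKey}` },
    body: form, // pass to fetch(url, { method, headers, body })
  };
}
```

In n8n the same request is configured declaratively on the HTTP Request node (multipart/form-data body, Authorization header from the Set node); the transcript is typically in the `text` field of the JSON response.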
by Masaki Go
What it does
This workflow automates your X (Twitter) engagement by acting as an auto-responder. It runs on a schedule, searches for new tweets matching a specific query (a hashtag, keyword, or mention), and automatically sends a reply.

How it works
1. Schedule Trigger: runs the workflow automatically at your chosen interval (e.g., every 15 minutes).
2. Search Tweets (HTTP): uses the X (Twitter) API v2 to find recent tweets matching your search query.
3. Error & success handling: if the search succeeds, the workflow prepares a reply; it also handles common issues such as rate limits or no tweets found.
4. Send Reply (HTTP): posts the reply to the tweet.
5. Duplicate check: verifies whether a reply has already been sent, to avoid spamming.

How to set up
1. Credentials: you must have an X (Twitter) Developer Account (v2). Add your credentials to n8n.
2. Search node: in the "Search Tweets" node, update the query parameter with your own search terms (e.g., #n8n or from:username).
3. Reply node: in the "Prepare Reply" node, customize the text you want to send.
4. Activate: set your desired schedule in the "Schedule Trigger" node and activate the workflow.

Requirements
- An active n8n instance
- An X (Twitter) Developer Account with Elevated (v2) access
- X (Twitter) API v2 credentials

How to customize the workflow
- **Change schedule:** modify the "Schedule Trigger" to run more or less frequently.
- **Dynamic replies:** enhance the "Prepare Reply" node with an AI node (such as OpenAI) to generate unique replies instead of static text.
- **Add filters:** add an "IF" node after "Search Tweets" to filter out tweets you don't want to reply to.
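The duplicate check and the reply step can be sketched as two small functions. How the template actually stores replied-to IDs is an assumption here (e.g., n8n workflow static data); the reply payload shape matches the X API v2 create-tweet endpoint (`POST https://api.twitter.com/2/tweets`).

```javascript
// Keep only tweets we have not replied to yet.
function filterNewTweets(tweets, repliedIds) {
  const seen = new Set(repliedIds);
  return tweets.filter(t => !seen.has(t.id));
}

// Body for the v2 create-tweet endpoint, replying to a given tweet.
function buildReplyPayload(tweetId, text) {
  return { text, reply: { in_reply_to_tweet_id: tweetId } };
}
```

A Code node running `filterNewTweets` between "Search Tweets" and "Prepare Reply" is one way to implement the spam guard described above.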
by Geoffroy
🪐 Use case
Automatically surface and insert the three most relevant "Related articles" at the end of every Shopify blog post to boost session depth, SEO, and reader engagement.

⚙️ What this workflow does
1. Pulls all published articles from a selected Shopify blog using the Admin API.
2. Cleans the HTML content (removes existing .related-articles blocks) and extracts text for embeddings.
3. Generates OpenAI embeddings (text-embedding-3-small) and stores them in n8n Data Tables.
4. Calculates semantic similarity (cosine distance) between articles to identify the top matches.
5. Selects the top 3 most relevant related posts for each article (configurable).
6. Dynamically builds a "Related articles" HTML section and updates the article on Shopify.
7. Runs on a weekly schedule to keep relations fresh as new content is added.

🧩 Setup
1. Create three Data Tables: articles, article_relations, article_related_links_snapshot.
2. Add credentials: Shopify Admin API access token, OpenAI API key.
3. Set environment variables in the Workflow Configuration node: shopifyBlogId, shopifyBlogDomain, shopifyStoreName, shopApiVersion, percent_minimum_similarity (default 70).
4. (Optional) Keep or modify the Schedule Trigger (default: every week at 20:00).

🛠️ How to adjust this template
- Modify the similarity threshold or the number of related posts displayed.
- Edit the HTML snippet or CSS classes for the related section.
- Integrate a second OpenAI model to rewrite link titles or summaries for better UX.

💡 Ideal for
Shopify content teams and SEO strategists who want automated, context-aware internal linking to improve engagement and organic ranking.
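The similarity step above is plain cosine similarity between embedding vectors. A minimal sketch, assuming percent_minimum_similarity maps directly to a 0–1 cosine threshold (70 → 0.70), which is an interpretation rather than something the template states:

```javascript
// Cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Score candidates against a source article, apply the threshold, keep top 3.
const relatedArticles = (candidates, source, minPct = 70) =>
  candidates
    .map(c => ({ ...c, score: cosineSimilarity(source.embedding, c.embedding) }))
    .filter(c => c.score >= minPct / 100)
    .sort((x, y) => y.score - x.score)
    .slice(0, 3); // top 3, configurable
```

text-embedding-3-small returns 1536-dimensional vectors, so this loop runs over 1536 elements per article pair.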
by Wan Dinie
Simple Profile Picture Generator (No API Keys Needed)

Finding the perfect profile picture can be time-consuming and frustrating. Whether you need avatars for testing, placeholder images for development, or simply want to explore different styles before committing to a design, browsing multiple avatar services one by one is tedious. This workflow solves that problem by generating 12 different avatar styles instantly from a single trigger, giving you a complete gallery to choose from in seconds.

How it works
1. Trigger the workflow manually.
2. The workflow generates a unique seed (a random number or a custom keyword) and randomly selects a gender.
3. Simple JavaScript code generates 12 different avatar URLs from multiple free APIs, using the same seed for consistency.
4. All avatar URLs are passed to an HTML generator that creates a responsive gallery.
5. The final HTML displays all 12 avatar styles in a grid with metadata (seed and gender).
6. Each avatar includes a download button for easy saving.

How to set up
No API keys are required; all avatar services used are completely free and public. Simply import the workflow and click "Execute Workflow" to generate your avatar gallery. The workflow works immediately without any configuration. The JavaScript code is beginner-friendly, so you can understand it just by reading through it.

Customize
If you are an advanced user, you can:
- Use a custom seed instead of a random one: in the "Generate APIs" node (line 1), change const userInput = ''; to const userInput = 'your-name'; to generate consistent avatars based on your chosen keyword.
- Set a fixed gender: in the "Generate APIs" node (line 4), change const gender = Math.random() > 0.5 ? 'male' : 'female'; to const gender = 'female'; or const gender = 'male'; for a consistent gender across compatible APIs.
- Add or remove avatar APIs: edit the apis array in the "Generate APIs" node to include your preferred avatar services or remove unwanted styles.
- Change avatar size: replace size=200 (or &size=200) with your desired dimensions (e.g., size=300) in the API URLs within the "Generate APIs" node.
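The seed-based URL generation can be sketched like this, using DiceBear as one example of a free, keyless avatar service (URL pattern per the DiceBear v7 HTTP API). The style names and the exact services in the workflow's apis array may differ.

```javascript
// Build avatar URLs that all share one seed, so each style shows
// a consistent character for the same input.
function buildAvatarUrls(seed, size = 200) {
  const styles = ['adventurer', 'bottts', 'pixel-art', 'thumbs'];
  return styles.map(style =>
    `https://api.dicebear.com/7.x/${style}/svg?seed=${encodeURIComponent(seed)}&size=${size}`
  );
}
```

Calling `buildAvatarUrls('your-name')` returns one URL per style; the workflow's HTML generator then drops each URL into an `<img>` tag in the gallery grid.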
by Robert Breen
📖 Description
Ask natural-language questions about your Pipedrive leads. This workflow pulls live lead data from Pipedrive and has OpenAI answer questions like "leads added this week", "stuck leads by owner", or "next activities due today." Responses are grounded only in your Pipedrive data.

⚙️ Setup Instructions

1️⃣ Set up the OpenAI connection
1. Go to the OpenAI Platform.
2. Navigate to OpenAI Billing and add funds to your billing account.
3. Copy your API key into the OpenAI credentials in n8n.

2️⃣ Connect Pipedrive
1. In Pipedrive → Personal preferences → API, copy your API token. URL shortcut: https://{your-company}.pipedrive.com/settings/personal/api
2. In n8n → Credentials → New → Pipedrive API:
   - Company domain: {your-company} (the subdomain in your Pipedrive URL)
   - API token: paste the token from step 1, then save.
3. In the Pipedrive Tool node, select your Pipedrive credential and (optionally) set filters (e.g., owner, label, created time).

🗣️ Example questions you can ask
- "Summarize leads added this week by owner."
- "Which leads have no upcoming activity?"
- "Show overdue activities and who owns them."
- "Top 10 leads by value that are still open."
- "Leads created in the last 7 days with the label 'Inbound'."
- "What are the next actions due today?"
- "Which leads are stuck >14 days without updates?"
- "Give me a one-paragraph pipeline health summary."

📬 Contact
Need help extending this (e.g., posting summaries to Slack/Email or auto-creating activities)?
📧 robert@ynteractive.com
🔗 Robert Breen
🌐 ynteractive.com
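To see the kind of grounding the agent does, a question like "leads added this week" reduces to a date filter over the lead records. A hedged sketch, assuming Pipedrive's add_time field uses the "YYYY-MM-DD HH:MM:SS" UTC format; check your API response before relying on that:

```javascript
// Filter lead objects to those created on or after a given instant.
function leadsAddedSince(leads, sinceIso) {
  const since = new Date(sinceIso);
  // Convert "2024-06-10 09:30:00" to an ISO UTC string before parsing.
  return leads.filter(l => new Date(l.add_time.replace(' ', 'T') + 'Z') >= since);
}
```

The OpenAI agent performs this reasoning implicitly over the tool output, so no such Code node is required; the sketch only illustrates what "grounded in your Pipedrive data" means for a date-based question.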
by Robert Breen
This workflow lets you chat with Reddit using OpenAI and the Reddit API. The chatbot pulls posts from a subreddit and uses GPT to answer your questions.

⚙️ Setup Instructions

1️⃣ Set up the OpenAI connection
1. Go to the OpenAI Platform.
2. Navigate to OpenAI Billing and add funds to your billing account.
3. Copy your API key into the OpenAI credentials in n8n.

2️⃣ Set up the Reddit API
1. Go to Reddit Apps, click Create App, and choose the script type.
2. Fill in: Name (your choice); Redirect URI: http://localhost:8080 (or your n8n URL).
3. Save, then copy the Client ID and Secret.
4. In n8n → Credentials → New → Reddit OAuth2 API, enter the Client ID and Client Secret.
5. Log in with your Reddit account and approve access.
6. Attach this credential to the Reddit Tool node.

🧠 How it works
- **Reddit Tool node** fetches posts from the chosen subreddit (e.g., r/n8n).
- **OpenAI Agent** processes the posts and your chat question.
- **Chatbot** returns summarized answers with Reddit context.

📬 Contact
Need help customizing this (e.g., targeting multiple subreddits or filtering posts)?
📧 robert@ynteractive.com
🔗 Robert Breen
🌐 ynteractive.com
by Rodrigo
How it works
This workflow helps you supercharge your cold email campaigns by enriching leads in Google Sheets with AI-generated personalization. For every lead in your sheet, the workflow:
1. Fetches rows from Google Sheets
2. Loops through each lead one by one
3. Uses OpenAI to generate a personalized cold email icebreaker and a shortened version of the company name
4. Saves the results back into your Google Sheet

The result: a lead list that's instantly ready for highly personalized cold outreach.

Setup steps
1. Connect your Google Sheets account and select the sheet with your leads. Your sheet should have columns like first name, last name, company name, industry, and city.
2. Adjust the column mapping in the "Update Row in Sheet" node to match your sheet's structure.
3. Connect your OpenAI account in the "Message a Model" node.
4. Optionally, tweak the AI prompt to match your preferred tone of voice for icebreakers.
5. Click Execute Workflow whenever you want to enrich your sheet with new personalized content.

Requirements
- OpenAI account (API key)
- Google Sheets account with a lead list
- n8n instance (self-hosted or cloud)
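The per-lead prompt the "Message a Model" node sends can be sketched as below. The column names (first_name, company_name, and so on) are assumptions for illustration; match them to your own sheet headers and to the template's actual prompt wording.

```javascript
// Build the OpenAI prompt for one lead row from the sheet.
function buildIcebreakerPrompt(lead) {
  return [
    'Write a one-sentence personalized cold email icebreaker.',
    'Also return a short, natural version of the company name.',
    `Lead: ${lead.first_name} ${lead.last_name}, ${lead.company_name}`,
    `Industry: ${lead.industry}. City: ${lead.city}.`,
  ].join('\n');
}
```

In the loop, each Google Sheets row is mapped into this prompt, and the model's two outputs are written back to dedicated icebreaker and short-name columns.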