by David Olusola
🧠 Workflow Summary
Automates lead management by:
- 🔗 **Webhook Trigger**: Captures form data from your website.
- 🧼 **Code Node**: Standardizes the data format.
- 📄 **Google Sheets**: Appends a new row with lead info.
- 🔔 **Slack Notification**: Alerts your team instantly.

⚙️ Quick Setup
1. **Import Workflow**: In n8n, go to Workflows → + New → Import from JSON.
2. **Add Credentials**:
   - **Google Sheets**: Use OAuth2 to connect your account.
   - **Slack**: Create a Slack App → add bot scopes → connect via OAuth2.
3. **Google Sheet Prep**: Create a sheet with these columns in row 1: Full Name, Email Address, Business Name, Project Intent/Needs, Project Timeline, Budget Range, Received At.
4. **Configure Nodes**:
   - **Webhook**: Use the generated URL in your form settings.
   - **Code**: Cleans and maps form fields (see the sketch below).
   - **Google Sheets**: Set to Append and map fields using expressions like `={{ $json.email }}`.
   - **Slack**: Choose your channel and send a templated lead alert message with the form data.
5. **Activate & Test**: Click Activate, send a test POST to the Webhook, and confirm that a new row appears in the Sheet ✅ and the Slack alert is sent ✅.

🎯 Use this automation to capture leads, log data, and notify your team, all hands-free.
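The template's own Code node isn't reproduced here, but a minimal sketch of the kind of cleanup it performs might look like this; the incoming field names (`full_name`, `email`, etc.) are assumptions about your form payload and should be adapted:

```javascript
// Minimal sketch of a "standardize form data" Code node (n8n runs this as JavaScript).
// Field names below are assumptions; adjust them to match your actual form payload.
return $input.all().map((item) => {
  const body = item.json.body ?? item.json; // webhook payloads often arrive under `body`
  return {
    json: {
      "Full Name": (body.full_name ?? "").trim(),
      "Email Address": (body.email ?? "").trim().toLowerCase(),
      "Business Name": (body.business_name ?? "").trim(),
      "Project Intent/Needs": (body.project_intent ?? "").trim(),
      "Project Timeline": (body.timeline ?? "").trim(),
      "Budget Range": (body.budget_range ?? "").trim(),
      "Received At": new Date().toISOString(),
    },
  };
});
```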
by ermanatalay
Create a powerful brand/company monitoring system that fetches news headlines, performs AI-powered sentiment analysis, and delivers witty, easy-to-read reports via email. This workflow turns brand mentions into a lively "personality analysis", making your reports not only insightful but also fun to read. Perfect for teams that want to stay informed and entertained.

How it works
- **Data Collection**: A Google Sheets table captures the brand name and recipient email, which triggers the workflow.
- **News Aggregation**: The RSS Read node fetches recent news headlines from Google News based on the specified brand or company keyword.
- **Content Processing**: News headlines are aggregated and formatted for AI analysis.
- **AI Analysis**: The Gemini 2.5 Flash model plays the role of a brand analyst, writing reports as if the brand were a character in a story. It highlights strengths, quirks, and challenges in a witty, narrative-driven style while still providing sentiment scores and action points.
- **Report Generation**: JavaScript code structures the AI response into well-formatted HTML paragraphs for a smooth email reading experience (see the sketch below).
- **Automated Delivery**: Gmail integration sends the analysis report directly to the specified email address.

How to use
1. Create a Google Sheets document with sheet name "page1", cell A1 named "keyword", and cell B1 named "email". The system reads the keyword and email whenever a new row is entered.
2. Paste the URL of your Google Sheets document into the first trigger node and select trigger on "row added" in the node.
3. Enter your credentials to connect your Gemini PaLM API account in Google's "Message a model" node.
4. Enter your credentials to connect your Gmail account in the "Send a message" node.
5. The workflow runs automatically when a new row is detected. Recipients receive comprehensive sentiment analysis reports within minutes!

Requirements
- Google Sheets URL
- Google Gemini API credentials for AI analysis
- Gmail API credentials for email delivery
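The report-formatting step isn't included verbatim; as a rough sketch of what that JavaScript Code node might do (assuming the model's reply arrives as plain text in a `text` field, which depends on your node configuration):

```javascript
// Sketch of a Code node that wraps the AI reply in simple HTML paragraphs.
// The input field name (`text`) is an assumption; point it at your model node's output.
return $input.all().map((item) => {
  const raw = item.json.text ?? "";
  const html = raw
    .split(/\n{2,}/)                 // treat blank lines as paragraph breaks
    .map((p) => p.trim())
    .filter((p) => p.length > 0)
    .map((p) => `<p style="margin:0 0 12px;line-height:1.5;">${p}</p>`)
    .join("\n");
  return { json: { html } };
});
```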
by Dahiana
This template demonstrates how to build an AI-powered name generator that creates realistic names, perfect for UX/UI designers, developers, and content creators.

Use cases: User persona creation, mockup development, prototype testing, customer testimonials, team member listings, app interface examples, website content, accessibility testing, and any scenario requiring realistic placeholder names.

How it works
- **AI-Powered Generation**: Uses any LLM to generate names based on your specifications.
- **Customizable Parameters**: Accepts gender preferences, name count, and optional reference names for style matching.
- **UX/UI Optimized**: Names are specifically chosen to work well in design mockups and prototypes.
- **Smart Formatting**: Returns clean JSON arrays ready for integration with design tools and applications.
- **Reference Matching**: Can generate names similar in style to a provided reference name.

How to set up
1. Replace the "Dummy API" credentials with your preferred language model API key.
2. Update the webhook path and authentication as needed for your application.
3. Test with different parameters: gender (masculine/feminine/neutral), count (1–20), reference_name (optional).
4. Integrate the webhook URL with your design tools, Bubble apps, or other platforms.

Requirements
- LLM API access (OpenAI, Claude, or other language model)
- n8n instance (cloud or self-hosted)
- Platform capable of making HTTP POST requests

API Usage
POST to the webhook with a JSON body (`reference_name` is optional):

```json
{
  "gender": "masculine",
  "count": 5,
  "reference_name": "Alex Chen"
}
```

Response:

```json
{
  "success": true,
  "names": ["Marcus Johnson", "David Kim", "Sofia Rodriguez", "Chen Wei", "James Wilson"],
  "count": 5
}
```

How to customize
- Modify the AI prompt for specific naming styles or regions.
- Add additional parameters (age, profession, cultural background).
- Connect to databases for persistent name storage.
- Integrate with design tool APIs (Figma, Sketch, Adobe XD).
- Create batch processing for large mockup projects.
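As a usage illustration of the API Usage section above, here is a minimal sketch of calling the webhook from Node.js 18+; the URL is a placeholder for the webhook URL your n8n instance generates:

```javascript
// Sketch: calling the name-generator webhook from Node.js 18+ (global fetch).
// Replace the placeholder URL with the webhook URL n8n generates for you.
const WEBHOOK_URL = "https://your-n8n-instance.example.com/webhook/name-generator";

async function generateNames() {
  const res = await fetch(WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ gender: "feminine", count: 3, reference_name: "Alex Chen" }),
  });
  const data = await res.json();
  console.log(data.names); // e.g. ["Sofia Rodriguez", ...]
}

generateNames().catch(console.error);
```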
by Yehor EGMS
🎙️ n8n Workflow: Voice Message Transcription with Access Control

This n8n workflow enables automated transcription of voice messages in Telegram groups with built-in access control and intelligent fallback mechanisms. It's designed for teams that need to convert audio messages to text while maintaining security and handling various audio formats.

📌 Section 1: Trigger & Access Control

⚡ Receive Message (Telegram Trigger)
- Purpose: Captures incoming messages from users in your Telegram group.
- How it works: When a user sends a message (voice, audio, or text), the workflow is triggered and the sender's information is captured.
- Benefit: Serves as the entry point for the entire transcription pipeline.

🔐 Sender Verification
- Purpose: Validates whether the sender has permission to use the transcription service.
- Logic: Check the sender against the authorized users list. If authorized → proceed to the next step. If not authorized → send an "Access denied" message and stop the workflow.
- Benefit: Prevents unauthorized users from consuming AI credits and accessing the service.

📌 Section 2: Message Type Detection

🎵 Audio/Voice Recognition
- Purpose: Identifies the type of incoming message and the audio format.
- Why it's needed: Telegram handles different audio types differently: voice notes (voice messages), audio files (standard audio attachments), and text messages (no audio content).
- Process: Check whether the message contains audio/voice content. If no audio file is detected → send a "No audio file found" message. If audio is detected → assign the file ID and proceed to format detection.

🧩 File Type Determination (IF Node)
- Purpose: Identifies the specific audio format for proper processing.
- Supported formats: OGG (Telegram voice messages), MPEG/MP3, MP4/M4A, and other audio formats.
- Logic: If the format is recognized → proceed to transcription. If not → send a "File format not recognized" message.
- Benefit: Ensures compatibility with transcription services by validating file types upfront.

📌 Section 3: Primary Transcription (OpenAI)

📥 File Download
- Purpose: Downloads the audio file from Telegram for processing.

🤖 OpenAI Transcription
- Purpose: Transcribes audio to text using OpenAI's Whisper API.
- Why OpenAI: High-quality transcription with cost-effective pricing.
- Process: Send the downloaded file to the OpenAI transcription API and simultaneously send a "Transcription started" notification. If successful → assign the transcribed text to a variable and proceed. If an error occurs → trigger the fallback mechanism.
- Benefit: Fast, accurate transcription with multi-language support.

📌 Section 4: Fallback Transcription (Gemini)

🛟 Gemini Backup Transcription
- Purpose: Provides a safety net if OpenAI transcription fails.
- Process: Receives the file only if the OpenAI node returns an error, downloads and processes the same audio file, sends it to Google Gemini for transcription, and assigns the transcribed text to the same text variable.
- Benefit: Ensures high reliability: if one service fails, the other takes over automatically.

📌 Section 5: Message Length Handling

📏 Text Length Check (IF Node)
- Purpose: Determines whether the transcribed text exceeds Telegram's character limit.
- Logic: If the text is ≤ 4,000 characters → send it directly to Telegram. If it is > 4,000 characters → split it into chunks.
- Why: Telegram caps messages at 4,096 characters, so the workflow splits at 4,000 to stay safely under the limit.

✂️ Text Splitting (Code Node)
- Purpose: Breaks long transcriptions into chunks of at most 4,000 characters.
- Process: Receives text longer than 4,000 characters, splits it into chunks of ≤ 4,000 characters, maintains readability by avoiding mid-word breaks, and outputs an array of text chunks.
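The actual Code node isn't shown here; a minimal, word-boundary-aware sketch of such a splitter (assuming the transcription arrives in a `text` field) could look like this:

```javascript
// Sketch of the text-splitting Code node: chunks of ≤ 4,000 characters, breaking on spaces where possible.
// `text` is an assumed input field name; map it to your transcription node's output.
const MAX_LEN = 4000;

function splitIntoChunks(text, maxLen = MAX_LEN) {
  const chunks = [];
  let rest = text;
  while (rest.length > maxLen) {
    // Prefer the last space inside the window so we don't break mid-word.
    let cut = rest.lastIndexOf(" ", maxLen);
    if (cut <= 0) cut = maxLen; // no space found: hard cut
    chunks.push(rest.slice(0, cut).trim());
    rest = rest.slice(cut).trim();
  }
  if (rest.length > 0) chunks.push(rest);
  return chunks;
}

const text = $input.first().json.text ?? "";
return splitIntoChunks(text).map((chunk) => ({ json: { text: chunk } }));
```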
📌 Section 6: Response Delivery

💬 Send Transcription (Telegram Node)
- Purpose: Delivers the transcribed text back to the Telegram group.
- Behavior: Short messages are sent as a single message; long messages are sent as multiple sequential messages.
- Benefit: Users receive complete transcriptions regardless of length, ensuring no content is lost.

📊 Workflow Overview Table

| Section | Node Name | Purpose |
|---------|-----------|---------|
| 1. Trigger | Receive Message | Captures incoming Telegram messages |
| 2. Access Control | Sender Verification | Validates user permissions |
| 3. Detection | Audio/Voice Recognition | Identifies message type and audio format |
| 4. Validation | File Type Check | Verifies supported audio formats |
| 5. Download | File Download | Retrieves audio file from Telegram |
| 6. Primary AI | OpenAI Transcription | Main transcription service |
| 7. Fallback AI | Gemini Transcription | Backup transcription service |
| 8. Processing | Text Length Check | Determines if splitting is needed |
| 9. Splitting | Code Node | Breaks long text into chunks |
| 10. Response | Send to Telegram | Delivers transcribed text |

🎯 Key Benefits
- 🔐 Secure access control: Only authorized users can trigger transcriptions
- 💰 Cost management: Prevents unauthorized credit consumption
- 🎵 Multi-format support: Handles various Telegram audio types
- 🛡️ High reliability: Dual-AI fallback ensures transcription success
- 📱 Telegram-optimized: Automatically handles message length limits
- 🌍 Multi-language: Both AI services support numerous languages
- ⚡ Real-time notifications: Users receive status updates during processing
- 🔄 Automatic chunking: Long transcriptions are intelligently split
- 🧠 Smart routing: Files are processed through the optimal path
- 📊 Complete delivery: No content loss regardless of transcription length

🚀 Use Cases
- **Team meetings**: Transcribe voice notes from team discussions
- **Client communications**: Convert client voice messages to searchable text
- **Documentation**: Create text records of verbal communications
- **Accessibility**: Make audio content accessible to all team members
- **Multi-language teams**: Leverage AI transcription for various languages
by WhySoSerious
📬 Plex Recently Added → Responsive Email Newsletter (Tautulli Alternative)

What it is
This workflow automatically creates a weekly Plex newsletter that highlights recently added Movies & TV Shows. It's designed to be mobile-friendly and Gmail/iOS Mail compatible, making it easy to share Plex updates with friends, family, or your community.

How it works
• ⏰ Runs on a weekly schedule (customizable).
• 🎬 Fetches recently added Movies & TV Shows from the Tautulli API (see the fetch sketch below).
• 📰 Builds a responsive HTML newsletter that works in Gmail, iOS Mail, and most clients.
• 📧 Sends one personalized email per recipient via SMTP.
• 🗒️ Every node has a Sticky Note explaining its setup and purpose.

How to set up
1. Replace the placeholders in the nodes with your own details:
   • YOUR_TAUTULLI_URL
   • YOUR_API_KEY
   • YOUR_PLEX_TOKEN
   • YOUR_PLEX_SERVER_ID
2. Update the recipient list in Prepare Emails for Recipients.
3. Add your SMTP credentials in Send Newsletter Emails.
4. (Optional) Customize the HTML/CSS in Generate HTML Newsletter.

Requirements
• Plex Media Server with Tautulli installed.
• SMTP account (Gmail, custom domain, etc.).

Customization
• Change the schedule to daily/weekly as needed.
• Edit the HTML template for your own branding.
• Extend with additional nodes (Discord, Slack, etc.).

⸻

⚡ Workflow Overview:
`⏰ Schedule Trigger → 🎬 Fetch Movies → 📺 Fetch TV → 🔗 Merge → 📰 Build HTML → 📧 Prepare Recipients → 📤 Send → ✅ Finish`
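The Tautulli fetch nodes are plain HTTP calls; as a rough sketch outside n8n (the command follows Tautulli's v2 API, but verify it against your Tautulli version, and the URL/key below are placeholders matching the node settings):

```javascript
// Sketch: fetching recently added items from Tautulli (Node.js 18+, global fetch).
// YOUR_TAUTULLI_URL and YOUR_API_KEY are placeholders, mirroring the workflow's placeholders.
const TAUTULLI_URL = "http://YOUR_TAUTULLI_URL:8181";
const API_KEY = "YOUR_API_KEY";

async function fetchRecentlyAdded(count = 10) {
  const url = `${TAUTULLI_URL}/api/v2?apikey=${API_KEY}&cmd=get_recently_added&count=${count}`;
  const res = await fetch(url);
  const body = await res.json();
  // Tautulli wraps results as { response: { result, data } }; inspect the payload for your version.
  return body.response?.data?.recently_added ?? [];
}

fetchRecentlyAdded().then((items) => {
  for (const item of items) {
    console.log(item.media_type, item.title); // field names may vary slightly by Tautulli version
  }
});
```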
by spencer owen
YNAB Super Budget

Ever wish that YNAB was just a little smarter when auto-categorizing your transactions? Now you can supercharge your YNAB budget with ChatGPT! No more manual categorization.

Setup
1. Get a YNAB API key.
2. Get your YNAB Budget ID & Account ID (they are part of the URL): https://app.ynab.com/BUDGETID/accounts/ACCOUNTID

Additional information
Every transaction that this workflow modifies will be tagged with n8n and colored yellow. You can easily review all changes by selecting just that tag.

Customization
By default, the workflow pulls transactions from the last 30 days (see the API sketch below). It also posts a message in a Discord channel showing which transactions it modified and which categories it chose. Discord notifications are optional.

Considerations
YNAB allows 200 API calls per hour. If you have more than 200 uncategorized transactions, consider reducing the previous_days value.
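For reference, here is a minimal sketch of the kind of YNAB API call the workflow relies on for pulling recent transactions; the token, budget ID, and 30-day window are placeholders mirroring the defaults described above:

```javascript
// Sketch: listing the last 30 days of transactions via the YNAB API (Node.js 18+).
// YNAB_TOKEN and BUDGET_ID are placeholders; the API rate limit is 200 requests/hour.
const YNAB_TOKEN = "YOUR_YNAB_API_KEY";
const BUDGET_ID = "YOUR_BUDGET_ID";

async function fetchRecentTransactions(previousDays = 30) {
  const since = new Date(Date.now() - previousDays * 24 * 60 * 60 * 1000)
    .toISOString()
    .slice(0, 10); // YYYY-MM-DD
  const url = `https://api.ynab.com/v1/budgets/${BUDGET_ID}/transactions?since_date=${since}`;
  const res = await fetch(url, { headers: { Authorization: `Bearer ${YNAB_TOKEN}` } });
  const body = await res.json();
  // Keep only transactions that still have no category assigned.
  return (body.data?.transactions ?? []).filter((t) => !t.category_id);
}

fetchRecentTransactions().then((txns) => console.log(`${txns.length} uncategorized transactions`));
```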
by Davide
This workflow automates the process of extracting structured, usable information from unstructured email messages across multiple platforms. It connects directly to Gmail, Outlook, and IMAP accounts, retrieves incoming emails, and sends their content to an AI-powered parsing agent built on OpenAI GPT models. The AI agent analyzes each email, identifies relevant details, and returns a clean JSON structure containing key fields:
- **From** – sender's email address
- **To** – recipient's email address
- **Subject** – email subject line
- **Summary** – short AI-generated summary of the email body

The extracted information is then automatically inserted into an n8n Data Table, creating a structured database of email metadata and summaries ready for indexing, reporting, or integration with other tools.

Key Benefits
- ✅ **Full Automation**: Eliminates manual reading and data entry from incoming emails.
- ✅ **Multi-Source Integration**: Handles data from different email providers seamlessly.
- ✅ **AI-Driven Accuracy**: Uses advanced language models to interpret complex or unformatted content.
- ✅ **Structured Storage**: Creates a standardized, query-ready dataset from previously unstructured text.
- ✅ **Time Efficiency**: Processes emails in real time, improving productivity and response speed.
- ✅ **Scalability**: Easily extendable to handle additional sources or extract more data fields.

How it works
This workflow automates the transformation of unstructured email data into a structured, queryable format. It operates through a series of connected steps:
1. **Email Triggering**: The workflow is initiated by one of three email triggers (Gmail, Microsoft Outlook, or a generic IMAP account), which constantly monitor for new incoming emails.
2. **AI-Powered Parsing & Structuring**: When a new email is detected, its raw, unstructured content is passed to a central "Parsing Agent." This agent uses a specified OpenAI language model to intelligently analyze the email text.
3. **Data Extraction & Standardization**: Following a predefined system prompt, the AI agent extracts key information from the email, such as the sender, recipient, subject, and a generated summary. It then forces the output into a strict JSON structure using a "Structured Output Parser" node, ensuring data consistency.
4. **Data Storage**: Finally, the clean, structured data (the From, To, Subject, and Summary fields) is inserted as a new row into a specified n8n Data Table, creating a searchable and reportable database of email information.

Set up steps
To implement this workflow, follow these configuration steps:
1. **Prepare the Data Table**: Create a new Data Table within n8n and define string columns named From, To, Subject, and Summary.
2. **Configure Email Credentials**: Set up the credential connections for the email services you wish to use (Gmail OAuth2, Microsoft Outlook OAuth2, and/or IMAP). Ensure the accounts have the necessary permissions to read emails.
3. **Configure AI Model Credentials**: Set up the OpenAI API credential with a valid API key. The workflow comes configured with a default model, which can be changed in the respective nodes if needed.
4. **Connect the Nodes**: The workflow canvas is already correctly wired. Visually confirm that the email triggers are connected to the "Parsing Agent," which is connected to the "Insert row" (Data Table) node. Also, ensure the "OpenAI Chat Model" and "Structured Output Parser" are connected to the "Parsing Agent" as its AI model and output parser, respectively.
5. **Activate the Workflow**: Save the workflow and toggle the "Active" switch to ON. The triggers will begin polling for new emails according to their schedule (e.g., every minute), and the automation will start processing incoming messages.
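For orientation, the JSON shape the Structured Output Parser enforces might look like the following example; the values are illustrative only, and the exact schema lives inside the template's parser node:

```json
{
  "From": "sender@example.com",
  "To": "you@example.com",
  "Subject": "Quarterly report",
  "Summary": "One or two sentences summarizing the email body."
}
```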
Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
by Avkash Kakdiya
How it works
This workflow starts whenever a new lead submits a Typeform. It captures the lead's details, checks their budget, and routes them based on priority and source. High-budget leads are pushed into HubSpot with a follow-up task for sales. Facebook leads are logged in Google Sheets for marketing, while SurveyMonkey leads are stored in Airtable for campaign tracking. Finally, every lead receives an automated Gmail acknowledgment to confirm receipt and set expectations.

Step-by-step
1. **Capture Leads**: The workflow listens for new form responses from Typeform. Each lead's details (name, email, phone, budget, and message) are captured for processing.
2. **Prioritize High-Budget Leads**: The budget field is checked. If the budget is greater than $5,000, the lead is flagged as high priority, added or updated in HubSpot CRM, and a priority follow-up task is created in HubSpot for the sales team.
3. **Route by Lead Source**: If the source is Facebook, the lead is logged in a Google Sheet for marketing analysis. If the source is SurveyMonkey, the lead is stored in Airtable for structured campaign tracking. (See the routing sketch after this section.)
4. **Send Auto-Response**: After storage, every lead receives an automated Gmail reply thanking them for their interest and assuring them that the sales team will follow up within 24 hours.

Why use this?
- Captures and organizes leads from multiple channels in one workflow.
- Flags and escalates high-budget leads instantly to sales.
- Routes leads to the right system (HubSpot, Google Sheets, Airtable) based on their source.
- Automates acknowledgment emails, improving response time and customer experience.
- Saves manual effort by centralizing lead capture, qualification, and routing in one place.
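As an illustration of the prioritization and routing in steps 2–3, a Code-node sketch could look like this; the `budget` and `source` field names are assumptions about how your Typeform answers are mapped:

```javascript
// Illustrative sketch of the prioritization/routing decisions (n8n Code node).
// Field names (budget, source) are assumptions; adapt them to your Typeform answer mapping.
return $input.all().map((item) => {
  const lead = item.json;
  const budget = Number(lead.budget) || 0;
  const source = (lead.source || "").toLowerCase();

  const isHighPriority = budget > 5000;          // high-budget leads go to HubSpot + follow-up task
  let destination = "none";
  if (source === "facebook") destination = "google_sheets";
  else if (source === "surveymonkey") destination = "airtable";

  return { json: { ...lead, isHighPriority, destination } };
});
```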
by Supira Inc.
How It Works
This workflow automatically classifies incoming Gmail messages into categories such as High Priority, Inquiry, and Finance/Billing, and then generates professional draft replies using GPT-4. By combining Gmail integration with AI-powered text generation, the workflow helps business owners and freelancers reduce the time spent managing emails while ensuring that important messages are handled quickly and consistently.

When a new email arrives, the workflow:
1. Triggers via Gmail.
2. Uses an AI classifier to categorize the message.
3. Applies the appropriate Gmail label.
4. Passes the email body to GPT-4 to generate a tailored draft reply.
5. Saves the draft in Gmail, ready for review and sending.

Requirements
- A Gmail account with API access enabled.
- An OpenAI API key with GPT-4 access.
- n8n account or self-hosted instance.

Setup Instructions
1. Import this workflow into your n8n instance.
2. Under Credentials, connect your Gmail account and OpenAI API key.
3. Replace the placeholder YOUR_LABEL_ID_XXX values with your Gmail label IDs (obtainable via Gmail → List Labels; see the sketch below).
4. Execute the workflow and check that draft replies are generated in your Gmail account.

Customization
- Add or edit categories to fit your business needs (e.g., "Sales Leads" or "Support").
- Adjust the GPT-4 prompts inside each "Generate Draft" node to match your preferred tone and style.
- Combine with other workflows (e.g., CRM integration, Slack alerts) for a complete email automation system.

This template is especially useful for small businesses and freelancers who want to save time, improve response speed, and maintain professional communication without manually writing every reply.
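If you prefer to look up the label IDs outside n8n, a minimal sketch using the Gmail REST API works too; the access token below is a placeholder you must obtain yourself via OAuth2:

```javascript
// Sketch: listing Gmail labels and their IDs via the Gmail REST API (Node.js 18+).
// ACCESS_TOKEN is a placeholder for an OAuth2 token with a Gmail read scope.
const ACCESS_TOKEN = "YOUR_OAUTH2_ACCESS_TOKEN";

async function listLabels() {
  const res = await fetch("https://gmail.googleapis.com/gmail/v1/users/me/labels", {
    headers: { Authorization: `Bearer ${ACCESS_TOKEN}` },
  });
  const body = await res.json();
  for (const label of body.labels ?? []) {
    console.log(`${label.id}\t${label.name}`); // copy the id into YOUR_LABEL_ID_XXX
  }
}

listLabels().catch(console.error);
```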
by Dahiana
YouTube Transcript Extractor

This n8n template demonstrates how to extract transcripts from YouTube videos using two different approaches: automated Google Sheets monitoring and direct webhook API calls.

Use cases: Content creation, research, accessibility, meeting notes, content repurposing, SEO analysis, or building transcript databases for analysis.

How it works
- **Google Sheets Integration**: Monitor a sheet for new YouTube URLs and automatically extract transcripts.
- **Direct API Access**: Send YouTube URLs via webhook and get instant transcript responses.
- **Smart Parsing**: Extracts the video ID from various YouTube URL formats (youtube.com, youtu.be, embed); see the sketch below.
- **Rich Metadata**: Returns video title, channel, publish date, duration, and category alongside the transcript.
- **Fallback Handling**: Gracefully handles videos without available transcripts.

Two Workflow Paths
1. **Automated Sheet Processing**: Add URLs to a Google Sheet → auto-extract → save results to the sheet.
2. **Webhook API**: Send a POST request with a video URL → get an instant transcript response.

How to set up
1. Replace the "Dummy YouTube Transcript API" credentials with your YouTube Transcript API key.
2. Create your own Google Sheet with an input sheet containing a "url" column and a results sheet containing "video title" and "transcript" columns.
3. Update the Google Sheets credentials to connect your sheets.
4. Test each workflow path separately.
5. Customize the webhook path and authentication as needed.

Requirements
- YouTube Transcript API access (youtube-transcript.io or similar)
- Google Sheets API credentials (for the automated workflow)
- n8n instance (cloud or self-hosted)
- YouTube videos

How to customize
- Modify transcript processing in the Code nodes.
- Add additional metadata extraction.
- Connect to other storage solutions (databases, CMS).
- Add text analysis or summarization steps.
- Set up notifications for new transcripts.
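The "Smart Parsing" step boils down to pulling an 11-character video ID out of the URL; here is a standalone sketch (illustrative, not the template's exact Code node):

```javascript
// Sketch of the "smart parsing" step: extract a video ID from common YouTube URL formats.
// Illustrative helper; the template's own Code node may differ.
function extractVideoId(url) {
  const patterns = [
    /youtu\.be\/([A-Za-z0-9_-]{11})/,                 // https://youtu.be/VIDEOID
    /youtube\.com\/watch\?.*\bv=([A-Za-z0-9_-]{11})/, // https://www.youtube.com/watch?v=VIDEOID
    /youtube\.com\/embed\/([A-Za-z0-9_-]{11})/,       // https://www.youtube.com/embed/VIDEOID
  ];
  for (const re of patterns) {
    const match = url.match(re);
    if (match) return match[1];
  }
  return null; // unrecognized format
}

console.log(extractVideoId("https://youtu.be/dQw4w9WgXcQ")); // "dQw4w9WgXcQ"
```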
by Rakin Jakaria
Use cases are many: automate resume screening, candidate scoring, and interview communication in one seamless pipeline. Perfect for HR teams hiring at scale, startups that need quick filtering of applicants, or enterprises like Samsung running multiple roles at once.

Good to know
- At the time of writing, each Gemini request is billed per token. See Gemini Pricing for the latest info.
- The workflow automatically sends acceptance or rejection emails to candidates, so be sure to configure your Gmail account and email templates carefully.

How it works
1. **Form Submission**: Applicants fill out a custom form with their name, email, job role (Executive Assistant, IT Specialist, or Manager), and resume (PDF).
2. **Resume Processing**: The PDF resume is extracted into text using the Extract from File node.
3. **AI Evaluation**: Gemini-powered AI reviews the resume against the job role and generates a score (0–10), a status (Accepted/Rejected), and a personalized acceptance or rejection email.
4. **Information Extraction**: The AI output is structured into fields: Score, Status, Mail Subject, and Mail Body (see the sketch below).
5. **Email Sending**: The candidate automatically receives their personalized result via Gmail.
6. **Record Keeping**: All candidate details (Name, Job, Score, Status, Email, Email Status) are stored in Google Sheets for tracking.

How to use
1. Share the generated form link with applicants.
2. When they submit, the system handles scoring and sends an email instantly.
3. HR teams can review all results directly in Google Sheets.

Requirements
- Google Gemini API key (for resume evaluation)
- Gmail account with OAuth2 (for sending acceptance/rejection emails)
- Google Sheets (for candidate tracking)
- n8n form node (for application collection)

Customising this workflow
- Add more job positions to the form dropdown.
- Adjust the acceptance threshold (e.g., accept at 8/10 instead of 7/10).
- Modify the email templates for a more formal or branded tone.
- Extend the pipeline to trigger a Calendly invite for accepted candidates.
- Integrate with Slack or Teams to notify HR when a candidate is accepted.
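As a sketch of how the extracted fields and an adjustable acceptance threshold could be handled in a Code node (purely illustrative: in the template the AI already returns a Status, and the 7/10 default simply mirrors the customization note above):

```javascript
// Illustrative sketch: derive Status from Score with an adjustable threshold (n8n Code node).
// The input shape is an assumption; adapt the field names to your Information Extraction output.
const THRESHOLD = 7; // accept at 7/10 by default; raise to 8 for stricter screening

return $input.all().map((item) => {
  const { Score, "Mail Subject": mailSubject, "Mail Body": mailBody } = item.json;
  const Status = Number(Score) >= THRESHOLD ? "Accepted" : "Rejected";
  return { json: { Score, Status, "Mail Subject": mailSubject, "Mail Body": mailBody } };
});
```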
by Gabriel Santos
This workflow streamlines how employees request equipment/items and how those requests reach the Procurement team. It validates the employee by enrollment number, detects whether a manager exists, and then either requests approval (if the requester has a manager) or routes the request directly to Procurement (if the requester is a manager). All messages are written in a professional tone using an LLM, and emails are sent via Gmail with a built-in approve/deny step for managers.

Who's it for
HR/IT/Operations teams that handle equipment requests, need a lightweight approval flow, and want clean, professional emails without manual drafting.

How it works
1. An employee submits their enrollment number.
2. The workflow fetches the employee (and their manager, if any).
3. The employee describes the requested item(s).
4. If a manager exists → an approval email (double opt-in) is sent to the manager. Approved → notify the employee and forward a polished request to Procurement. Denied → notify the employee.
5. If the requester is a manager → skip approval and send directly to Procurement.
6. End pages confirm the outcome. (A sketch of the branching logic follows below.)

Requirements
- MySQL (or compatible DB) with an employees table (id, name, email, enrollment_number, manager).
- Gmail credentials (OAuth) in n8n.
- LLM provider (OpenAI or compatible) for message polishing.

How to customize
- Replace the Procurement NoOp nodes with your email, helpdesk, or ERP integration.
- Adjust the email copy and tone in the LLM prompt nodes.
- Add tracking IDs, SLA text, or CCs for auditing.
- Style the forms with your brand (CSS blocks provided).
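A minimal sketch of the manager/no-manager branching, assuming the employee row fetched from MySQL sits on the item and an empty `manager` column marks a manager-level requester (adapt to your schema):

```javascript
// Illustrative sketch of the approval-routing decision (n8n Code node).
// Assumes the MySQL employee row is on item.json; the empty-manager convention is an assumption.
return $input.all().map((item) => {
  const employee = item.json;
  const hasManager =
    employee.manager !== null && employee.manager !== undefined && employee.manager !== "";
  return {
    json: {
      ...employee,
      route: hasManager ? "manager_approval" : "direct_to_procurement",
    },
  };
});
```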