by Vitorio Magalhães
## 🎯 What this workflow does

This workflow automatically monitors Reddit subreddits for new image posts and downloads them to Google Drive. It's perfect for content creators, meme collectors, or anyone who wants to automatically archive images from their favorite subreddits without manual work. The workflow prevents duplicate downloads by checking existing files in Google Drive and sends you Telegram notifications about the download status, so you always know when new content has been saved.

## 🚀 Key Features

- **Multi-subreddit monitoring**: Configure multiple subreddits to monitor simultaneously
- **Smart duplicate detection**: Never downloads the same image twice
- **Automated scheduling**: Runs on a customizable cron schedule
- **Real-time notifications**: Get instant Telegram updates about download activity
- **Rate limit friendly**: Built-in delays to respect Reddit's API limits
- **Cloud storage integration**: Direct upload to organized Google Drive folders

## 📋 Prerequisites

Before using this workflow, you'll need:

- **Reddit Developer Account**: Create an app at reddit.com/prefs/apps
- **Google Cloud Project**: With the Drive API enabled and OAuth2 credentials
- **Telegram Bot**: Created via @BotFather, plus your chat ID
- **Basic n8n knowledge**: Understanding of credentials and node configuration

## ⚙️ Setup Instructions

### 1. Configure Reddit API Access

1. Visit reddit.com/prefs/apps and create a new "script" type application
2. Note your Client ID and Client Secret
3. Add Reddit OAuth2 credentials in n8n

### 2. Set up Google Drive Integration

1. Enable the Google Drive API in Google Cloud Console
2. Create OAuth2 credentials with the appropriate scopes
3. Configure Google Drive OAuth2 credentials in n8n
4. Update the folder ID in the workflow to your desired destination

### 3. Configure Telegram Notifications

1. Create a bot via @BotFather on Telegram
2. Get your chat ID (message @userinfobot)
3. Add Telegram API credentials in n8n

### 4. Customize Your Settings

Update the Settings node with:

- Your Telegram chat ID
- The list of subreddits to monitor (e.g., `['memes', 'funny', 'pics']`)
- Optional: adjust the wait time between requests
- Optional: modify the cron schedule

## 🔄 How it works

1. **Scheduled Trigger**: The workflow starts automatically based on your cron configuration
2. **Random Selection**: Picks a random subreddit from your configured list
3. **Fetch Posts**: Retrieves the latest 30 posts from the subreddit's "new" section
4. **Image Filtering**: Keeps only posts with i.redd.it image URLs
5. **Duplicate Check**: Searches Google Drive to avoid re-downloading existing images
6. **Download & Upload**: Downloads new images and uploads them to your Drive folder
7. **Notification**: Sends a Telegram message with the download summary

## 🛠️ Customization Options

**Scheduling**
- Modify the cron trigger to run hourly, daily, or at custom intervals
- Add timezone considerations for your location

**Content Filtering**
- Add upvote threshold filters to get only popular content
- Filter by image dimensions or file size
- Implement NSFW content filtering

**Storage & Organization**
- Create subfolders by subreddit
- Add date-based folder organization
- Implement file naming conventions

**Notifications & Monitoring**
- Add Discord webhook notifications
- Create download statistics tracking
- Log failed downloads for debugging

## 📊 Use Cases

- **Content Creators**: Automatically collect memes and trending images for social media
- **Digital Marketers**: Monitor visual trends across different communities
- **Researchers**: Archive visual content from specific subreddits for analysis
- **Personal Use**: Build a curated collection of images from your favorite subreddits

## 🎯 Best Practices

- **Respect Rate Limits**: Keep the wait time between requests to avoid being blocked
- **Monitor Storage**: Regularly check your Google Drive storage usage
- **Subreddit Selection**: Choose active subreddits with regular image posts
- **Credential Security**: Use n8n's credential system and never hardcode API keys

## 🚨 Important Notes

- This workflow only downloads images from i.redd.it (Reddit's image host)
- Some subreddits may have bot restrictions
- Reddit's API has rate limits (~60 requests per minute)
- Ensure your Google Drive has sufficient storage space
- Always comply with Reddit's Terms of Service and content policies
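The "Image Filtering" step above can be sketched as an n8n Code-node snippet. This is a minimal sketch, not the template's actual code: the `json.data` shape follows Reddit's public JSON listing format, and `items` stands in for the output of the fetch step.

```javascript
// Minimal sketch of the "Image Filtering" step for an n8n Code node.
// Assumes each incoming item holds one Reddit post object under `json.data`,
// as returned by Reddit's /r/<subreddit>/new.json listing endpoint.
function filterImagePosts(items) {
  return items.filter((item) => {
    const post = item.json.data || {};
    const url = post.url || '';
    // Keep only direct image links hosted on i.redd.it
    return url.startsWith('https://i.redd.it/');
  });
}

// Example posts (shape simplified for illustration)
const items = [
  { json: { data: { title: 'cat', url: 'https://i.redd.it/abc123.jpg' } } },
  { json: { data: { title: 'link', url: 'https://example.com/page' } } },
];
const images = filterImagePosts(items);
```

Checking the URL prefix rather than the file extension is what keeps gallery posts, crossposts, and external links out of the download queue.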
by iamvaar
**Workflow explanation:** Watch on YouTube

# Automated HVAC triage & AI diagnosis with Gemini Vision, GoHighLevel & WhatsApp

## Prerequisites & Setup

Before running or deploying this workflow, you need to configure the following services and credentials:

- **GoHighLevel Custom Fields:** You must create the corresponding custom fields in your GoHighLevel account to capture the AI's output (e.g., Property Address, Issue, Brand, Model Number, Serial Number, Manufactured Date, Refrigerant Type, Probable Issue, Repair Action, Estimated Cost Level, Repair Score, and AI Recommendation).
- **GoHighLevel Developer App (OAuth2):**
  - Create a free GoHighLevel Developer App.
  - Add the following scopes: contacts.readonly, contacts.write, locations.readonly, locations/customFields.readonly, and locations/customFields.write (note: these custom field scopes replace the standard opportunities scopes).
  - Generate your Client ID and Secret within the Developer App.
  - Enter these details into your n8n GoHighLevel OAuth2 credentials.
  - Copy the OAuth Redirect URL from n8n, paste it into the App's OAuth redirection settings, and complete the authentication process.
- **WhatsApp API Credentials:** You will need your WhatsApp Access Token and WhatsApp Business Account ID.
- **Gemini API Key:** A valid Google Gemini API key to power the AI vision and diagnostic analysis.

## Node-by-Node Explanation

### 1. When Form Submitted (Trigger)

This is the starting point of the workflow. It uses n8n's native form trigger to collect a service request. It captures the customer's Name, Email, Phone Number, Property Address, a text description of the issue, and, crucially, a file upload of the HVAC unit's nameplate or data sticker.

### 2. Extract File Content

This node takes the image uploaded in the previous form step and extracts the file content. By assigning the binary data to the image property, it makes the picture readable and ready to be sent to external APIs for analysis.

### 3. Post to Gemini API

This HTTP Request node acts as the brain of the operation. It sends the reported issue and the extracted image to the gemini-3.1-flash-lite model. The prompt instructs the AI to act as an expert HVAC diagnostic assistant to:

- Read the nameplate to extract the Brand, Model, Serial Number, Date, and Refrigerant Type.
- Analyze the text issue against the equipment data to generate a probable issue, a repair action, and an estimated cost level.
- Calculate a "Repair Score" (0 to 100) to recommend whether the unit is worth repairing or replacing.
- Format the entire response strictly as a JSON object.

### 4. Parse API Response

Because AI models can sometimes wrap JSON in markdown blocks or conversational text, this custom JavaScript Code node uses a regex to locate the exact JSON block inside the Gemini output. It safely parses it and outputs clean, structured JSON data for the rest of the workflow to use.

### 5. Create Lead in GoHighLevel

This node connects to your CRM. It maps the original client contact details (Name, Email, Phone) alongside all of the detailed AI insights (Property Address, HVAC Brand, Model, Serial, AI Repair Score, Recommended Action) directly into the GoHighLevel custom fields you set up in the prerequisites.

### 6. Prepare Binary Image

Since the workflow's primary data stream has been replaced by the parsed JSON text, this Code node "teleports" the original binary image data from the Extract File Content node and stitches it together with the current JSON data. This ensures the WhatsApp node has access to both the text details and the photo.

### 7. Send WhatsApp to Technician

The final step dispatches a WhatsApp message directly to the lead technician. It sends the original photo of the HVAC nameplate accompanied by a neatly formatted caption. The caption includes the client's name, property address, the reported issue, the AI's probable diagnosis and cost estimation, and instructions on next steps.
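The "Parse API Response" step can be sketched as follows. This is an illustrative sketch of the regex-extraction approach described above, not the template's actual Code node; the sample reply string is hypothetical.

```javascript
// Minimal sketch of the "Parse API Response" step: pull the JSON object
// out of a model reply that may wrap it in prose or markdown fences.
function extractJson(text) {
  // Greedy match from the first '{' to the last '}' in the reply
  const match = text.match(/\{[\s\S]*\}/);
  if (!match) {
    throw new Error('No JSON object found in model output');
  }
  return JSON.parse(match[0]);
}

// Hypothetical model reply with conversational wrapping
const reply = 'Sure! Here is the result: {"brand": "Carrier", "repairScore": 72} Let me know if you need more.';
const parsed = extractJson(reply);
```

Throwing when no JSON is found lets the workflow's error-handling branch catch malformed replies instead of silently passing garbage downstream.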
by Recrutei Automações
## Overview: Automated LinkedIn Job Posting with AI

This workflow automates the publication of new job vacancies on LinkedIn immediately after they are created in the Recrutei ATS (Applicant Tracking System). It leverages a Code node to pre-process the job data and a powerful AI model (GPT-4o-mini, configured via the OpenAI node) to generate compelling, marketing-ready content.

This template is designed for recruitment and marketing teams aiming to ensure consistent, timely, and high-quality job postings while saving significant operational time.

## Workflow Logic & Steps

1. **Recrutei Webhook Trigger**: The workflow is triggered the moment a new job vacancy is published in the Recrutei ATS, which sends all relevant job data via a webhook.
2. **Data Cleaning (Code Node 1)**: The first Code node standardizes boolean fields (like remote, fixed_remuneration) from 0/1 to descriptive text ('yes'/'no').
3. **Prompt Transformation (Code Node 2)**: The second, crucial Code node receives the clean job data and:
   - Maps the original data keys (e.g., title, description) to user-friendly labels (e.g., Job Title, Detailed Description).
   - Cleans and sanitizes the HTML description into readable Markdown format.
   - Generates a single, highly structured prompt containing all job details, ready for the AI model.
4. **AI Content Generation (OpenAI)**: The AI model receives the structured prompt and acts as a 'Marketing Copywriter' to create a compelling, engaging post specifically optimized for the LinkedIn platform.
5. **LinkedIn Post**: The generated text is automatically posted to the configured LinkedIn profile or Company Page.
6. **Internal Logging (Google Sheets)**: The workflow concludes by logging the event (Job Title, Confirmation Status) into a Google Sheet for internal tracking and auditing.

## Setup Instructions

To implement this workflow successfully, you must configure the following:

**Credentials:**
- Configure OpenAI (for the content generator).
- Configure LinkedIn (for the post action).
- Configure Google Sheets (for the logging).

**Node Configuration:**
- Set up the Webhook URL in your Recrutei ATS settings.
- Replace YOUR_SHEET_ID_HERE in the Google Sheets Logging node with your sheet's ID.
- Select the correct LinkedIn profile/company page in the Create a post node.
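The boolean standardization in Code Node 1 can be sketched like this. The field names (remote, fixed_remuneration) come from the description above; the function shape is illustrative, not the template's actual code.

```javascript
// Minimal sketch of "Data Cleaning (Code Node 1)": convert 0/1 boolean
// fields on a job payload into the descriptive 'yes'/'no' text the
// prompt-building step expects.
function normalizeBooleans(job, fields = ['remote', 'fixed_remuneration']) {
  const out = { ...job };
  for (const field of fields) {
    if (out[field] === 1 || out[field] === '1') out[field] = 'yes';
    else if (out[field] === 0 || out[field] === '0') out[field] = 'no';
  }
  return out;
}

const cleaned = normalizeBooleans({ title: 'Data Analyst', remote: 1, fixed_remuneration: 0 });
```

Converting to text here keeps the downstream prompt readable for the model ("Remote: yes") instead of leaking raw database flags into the generated post.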
by Oneclick AI Squad
Automatically creates complete videos from a text prompt: script, voiceover, stock footage, and subtitles, all assembled and ready.

## How it works

Send a video topic via webhook (e.g., "Create a 60-second video about morning exercise"). The workflow uses OpenAI to generate a structured script with scenes, converts the text to natural-sounding speech, searches Pexels for matching B-roll footage, and downloads everything. Finally, it merges audio with video, generates SRT subtitles, and prepares all components for final assembly.

The workflow handles parallel processing: while generating the voiceover, it simultaneously searches for and downloads stock footage to save time.

## Setup steps

1. Add OpenAI credentials for script generation and text-to-speech
2. Get a free Pexels API key from pexels.com/api for stock footage access
3. Connect Google Drive for storing the final video output
4. Install FFmpeg (optional) for automated video assembly, or manually combine the components
5. Test the webhook by sending a POST request with your video topic

**Input format:**

```json
{ "prompt": "Your video topic here", "duration": 60, "style": "motivational" }
```

## What you get

✅ AI-generated script broken into scenes
✅ Professional voiceover audio (MP3)
✅ Downloaded stock footage clips (MP4)
✅ Timed subtitles file (SRT)
✅ All components ready for final editing

**Note:** Final video assembly requires FFmpeg or a video editor. All components are prepared and organized by scene number for easy manual editing if needed.
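The webhook test in the setup steps can be sketched as follows. The `WEBHOOK_URL` is a placeholder for the URL shown on your n8n Webhook trigger node, and the builder function is illustrative.

```javascript
// Minimal sketch: build the request body in the documented input format
// and show the POST call to test the webhook. WEBHOOK_URL is a placeholder.
const WEBHOOK_URL = 'https://your-n8n-instance/webhook/create-video';

function buildVideoRequest(prompt, duration = 60, style = 'motivational') {
  return { prompt, duration, style };
}

const body = buildVideoRequest('Create a 60-second video about morning exercise');

// To actually send it (not executed here):
// await fetch(WEBHOOK_URL, {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(body),
// });
```

The same body also works from curl or any HTTP client; the workflow only cares that the three documented fields arrive as JSON.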
by Khair Ahammed
Meet Troy, your intelligent personal assistant that seamlessly manages your Google Calendar and Tasks through Telegram. This workflow combines AI-powered natural language processing with MCP (Model Context Protocol) integration to provide a conversational interface for scheduling meetings, managing tasks, and organizing your digital life.

## Key Features

**📅 Smart Calendar Management**
- Create single and recurring events with conflict detection
- Support for multiple attendees (1-2 attendee variants)
- Automatic time zone handling (Bangladesh Standard Time)
- Weekly recurring event scheduling
- Event retrieval, updates, and deletion

**✅ Task Management**
- Create, update, and delete tasks in Google Tasks
- Mark tasks as completed
- Retrieve task lists with completion status
- Task repositioning and organization
- Parent-child task relationships

**🤖 Intelligent Processing**
- Natural language understanding for scheduling requests
- Automatic conflict detection before event creation
- Context-aware responses with conversation memory
- Error handling with fallback messages

**📱 Telegram Interface**
- Real-time chat interaction
- Simple commands and natural language
- Instant confirmations and updates
- Error notifications

## Workflow Components

**Core Architecture:**
- Telegram Trigger for user messages
- AI Agent with GPT-4o-mini processing
- MCP Client Tools for Google services
- Conversation memory for context
- Error handling with backup responses

**MCP Integrations:**
- Google Calendar MCP Server (6 specialized tools)
- Google Tasks MCP Server (5 task operations)
- Custom HTTP tool for advanced task positioning

## Use Cases

**Calendar Scenarios:**
- "Schedule a meeting tomorrow at 3 PM with john@example.com"
- "Set up weekly team standup every Monday at 10 AM"
- "Check my calendar for conflicts this afternoon"
- "Delete the meeting with ID xyz123"

**Task Management:**
- "Add a task to buy groceries"
- "Mark the project report task as completed"
- "Update my presentation task due date to Friday"
- "Show me all pending tasks"

## Setup Requirements

**Required Credentials:**
- Google Calendar OAuth2
- Google Tasks OAuth2
- OpenAI API key
- Telegram Bot token

**MCP Configuration:**
- Two MCP server endpoints for Google services
- Proper webhook configurations
- SSL-enabled n8n instance for MCP triggers

## Business Benefits

- **Productivity**: Voice-to-action task and calendar management
- **Efficiency**: Eliminate app switching with a chat interface
- **Intelligence**: AI prevents scheduling conflicts automatically
- **Accessibility**: Simple Telegram commands for complex operations

## Technical Specifications

**Components:**
- 1 Telegram trigger
- 1 AI Agent with memory
- 2 MCP triggers (Calendar & Tasks)
- 13 Google service tools
- Error handling flows

**Response Time**: Sub-second for most operations
**Memory**: Session-based conversation context
**Timezone**: Automatic Bangladesh Standard Time conversion

This personal assistant transforms how you interact with Google services, making scheduling and task management as simple as sending a text message to Troy on Telegram.

Tags: personal-assistant, mcp-integration, google-calendar, google-tasks, telegram-bot, ai-agent, productivity
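The automatic timezone conversion is straightforward to reason about because Bangladesh Standard Time is a fixed UTC+6 offset with no daylight saving. The sketch below is illustrative; in the template itself, the Google Calendar tools handle this conversion.

```javascript
// Minimal sketch of BST-to-UTC conversion: BST is a fixed UTC+6 offset,
// so a local wall-clock time shifts back exactly 6 hours.
const BST_OFFSET_HOURS = 6;

function bstToUtcIso(year, month, day, hour, minute = 0) {
  // Date.UTC treats its arguments as UTC, so subtract the offset up front.
  const utcMillis = Date.UTC(year, month - 1, day, hour - BST_OFFSET_HOURS, minute);
  return new Date(utcMillis).toISOString();
}

// 3 PM in Dhaka on 2024-06-10 corresponds to 09:00 UTC
const iso = bstToUtcIso(2024, 6, 10, 15);
```

A fixed-offset zone is the easy case; for zones with daylight saving you would delegate to a timezone-aware library rather than hand-shift hours.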
by vinci-king-01
# Software Vulnerability Patent Tracker

⚠️ **COMMUNITY TEMPLATE DISCLAIMER**: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template.

This workflow automatically tracks newly published patent filings that mention software-security vulnerabilities, buffer-overflow mitigation techniques, and related technology keywords. Every week it aggregates fresh patent data from the USPTO and international patent databases, filters it by relevance, and delivers a concise JSON digest (and an optional Intercom notification) to R&D teams and patent attorneys.

## Prerequisites

- n8n instance (self-hosted or n8n cloud, v1.7.0+)
- ScrapeGraphAI community node installed
- Basic understanding of patent search syntax (for customizing keyword sets)
- Optional: Intercom account for in-app alerts

### Required Credentials

| Credential | Purpose |
|------------|---------|
| ScrapeGraphAI API Key | Enables ScrapeGraphAI nodes to fetch and parse patent-office webpages |
| Intercom Access Token (optional) | Sends weekly digests directly to an Intercom workspace |

### Additional Setup Requirements

| Setting | Recommended Value | Notes |
|---------|-------------------|-------|
| Cron schedule | `0 9 * * 1` | Triggers every Monday at 09:00 server time |
| Patent keyword matrix | See example CSV below | List of comma-separated keywords per tech focus |

Example keyword matrix (upload as keywords.csv or paste into the "Matrix" node):

```csv
topic,keywords
Buffer Overflow,"buffer overflow, stack smashing, stack buffer"
Memory Safety,"memory safety, safe memory allocation, pointer sanitization"
Code Injection,"SQL injection, command injection, injection prevention"
```

## How it works

Key steps:

1. **Schedule Trigger**: Fires weekly based on the configured cron expression.
2. **Matrix (Keyword Loader)**: Loads the CSV-based technology keyword matrix into memory.
3. **Code (Build Search Queries)**: Dynamically assembles patent-search URLs for each keyword group.
4. **ScrapeGraphAI (Fetch Results)**: Scrapes USPTO, EPO, and WIPO result pages and parses titles, abstracts, publication numbers, and dates.
5. **If (Relevance Filter)**: Removes patents older than 1 year or without vulnerability-related terms in the abstract.
6. **Set (Normalize JSON)**: Formats the remaining records into a uniform JSON schema.
7. **Intercom (Notify Team)**: Sends a summarized digest to your chosen Intercom workspace. (Skip or disable this node if you prefer to consume the raw JSON output instead.)
8. **Sticky Notes**: Contain inline documentation and customization tips for future editors.

## Set up steps

Setup time: 10-15 minutes

1. **Install Community Node**: Navigate to "Settings → Community Nodes", search for ScrapeGraphAI, and click "Install".
2. **Create Credentials**: Go to "Credentials" → "New Credential" → select ScrapeGraphAI API → paste your API key. (Optional) Add an Intercom credential with a valid access token.
3. **Import the Workflow**: Click "Import" → "Workflow JSON" and paste the template JSON, or drag-and-drop the .json file.
4. **Configure Schedule**: Open the Schedule Trigger node and adjust the cron expression if a different frequency is required.
5. **Upload / Edit Keyword Matrix**: Open the Matrix node, paste your custom CSV, or modify the existing topics and keywords.
6. **Review Search Logic**: In the Code (Build Search Queries) node, review the base URLs and adjust patent databases as needed.
7. **Define Notification Channel**: If using Intercom, select your Intercom credential in the Intercom node and choose the target channel.
8. **Execute & Activate**: Click "Execute Workflow" for a trial run and verify the output. If satisfied, switch the workflow to "Active".

## Node Descriptions

Core workflow nodes:

- **Schedule Trigger** – Initiates the workflow on a weekly cron schedule.
- **Matrix** – Holds the CSV keyword table and makes each row available as an item.
- **Code (Build Search Queries)** – Generates search URLs and attaches metadata for later nodes.
- **ScrapeGraphAI** – Scrapes patent listings and extracts structured fields (title, abstract, publication date, link).
- **If (Relevance Filter)** – Applies date and keyword relevance filters.
- **Set (Normalize JSON)** – Maps scraped fields into a clean JSON schema for downstream use.
- **Intercom** – Sends formatted patent summaries to an Intercom inbox or channel.
- **Sticky Notes** – Provide inline documentation and edit history markers.

Data flow: Schedule Trigger → Matrix → Code → ScrapeGraphAI → If → Set → Intercom

## Customization Examples

Change the data source to Google Patents:

```javascript
// In the Code node
const base = 'https://patents.google.com/?q=';
items.forEach(item => {
  item.json.searchUrl = `${base}${encodeURIComponent(item.json.keywords)}&oq=${encodeURIComponent(item.json.keywords)}`;
});
return items;
```

Send the digest via Slack instead of Intercom:

```javascript
// Replace the Intercom node with a Slack node and build its message text
{
  "text": `🚀 New Vulnerability-related Patents (${items.length})\n` +
    items.map(i => `• <${i.json.link}|${i.json.title}>`).join('\n')
}
```

## Data Output Format

The workflow outputs structured JSON data:

```json
{
  "topic": "Memory Safety",
  "keywords": "memory safety, safe memory allocation, pointer sanitization",
  "title": "Memory protection for compiled binary code",
  "publicationNumber": "US20240123456A1",
  "publicationDate": "2024-03-21",
  "abstract": "Techniques for enforcing memory safety in compiled software...",
  "link": "https://patents.google.com/patent/US20240123456A1/en",
  "source": "USPTO"
}
```
## Troubleshooting

**Common issues:**
- **Empty result set** – Ensure that the keywords are specific but not overly narrow; test queries manually on USPTO.
- **ScrapeGraphAI timeouts** – Increase the timeout parameter in the ScrapeGraphAI node or reduce concurrent requests.

**Performance tips:**
- Limit the keyword matrix to <50 rows to keep weekly runs under 2 minutes.
- Schedule the workflow during off-peak hours to reduce load on patent-office servers.

**Pro tips:**
- Combine this workflow with a vector database (e.g., Pinecone) to create a semantic patent knowledge base.
- Add a "Merge" node to correlate new patents with existing vulnerability CVE entries.
- Use a second ScrapeGraphAI node to crawl citation trees and identify emerging technology clusters.
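The If (Relevance Filter) logic described earlier (drop patents older than one year or without vulnerability-related terms in the abstract) can be sketched as follows. The term list and field names are illustrative, not the template's actual configuration.

```javascript
// Minimal sketch of the relevance filter: keep a patent only if it was
// published within the last year AND its abstract mentions at least one
// vulnerability-related term.
const TERMS = ['vulnerability', 'buffer overflow', 'memory safety', 'injection'];

function isRelevant(patent, now = new Date('2024-06-01')) {
  const published = new Date(patent.publicationDate);
  const oneYearAgo = new Date(now);
  oneYearAgo.setFullYear(oneYearAgo.getFullYear() - 1);
  if (published < oneYearAgo) return false;

  const abstract = (patent.abstract || '').toLowerCase();
  return TERMS.some(term => abstract.includes(term));
}

const keep = isRelevant({ publicationDate: '2024-03-21', abstract: 'Enforcing memory safety in compiled software' });
const drop = isRelevant({ publicationDate: '2021-01-01', abstract: 'Memory safety techniques' });
```

Simple substring matching is cheap but crude; swapping in the semantic/vector approach from the pro tips would catch paraphrased abstracts this filter misses.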
by Laiye-ADP
## Workflow Introduction

**Core competence:** Our invoice extraction workflow runs end-to-end automatically: Gmail invoice email screening → extraction of key fields from multi-format invoices → automatic archiving of results to Google Drive, replacing the repetitive manual labor of finance staff opening and entering invoices.

**Differentiated advantages:**
- **High accuracy**: The extraction accuracy of core fields is 91%+, far exceeding that of similar products in the industry.
- **Table extraction**: For invoices containing tables, the line-extract technology delivers notably strong results.
- **Multi-format compatibility**: Natively supports invoice formats such as PDF, images, Word, and Excel, with no conversion required.
- **Lightweight integration**: Seamlessly integrates with Gmail and Google Drive, out of the box.

**Company introduction:** Laiye ADP (Agentic Document Processing), based on large language models and vision-language models combined with agent technology, is a new-generation platform for end-to-end automated document processing. For more information, please visit the official website of Laiye Technology: https://laiye.com/

## Use Cases

- **Multi-supplier integration**: Efficiently process invoices from multiple suppliers and automatically extract key invoice information for archiving.
- **Accounting firms**: Batch-process large amounts of invoice data, handling the increased invoice processing requirements brought by client growth without adding staff.
- **Cross-border trade enterprises**: For multi-language or complex-layout overseas invoices, understand the documents and extract the important data without manual setup or human processing.
- **Small and medium-sized technology enterprises**: Quickly identify key information such as invoice date, invoice amount, and invoice number from employee reimbursement invoices, and say goodbye to manual data entry.

## How it works

### Step 1: Complete Gmail authorization

You need to authorize your Google email; the workflow will automatically retrieve your email attachments. To accurately obtain and identify invoice attachments, you can set up your own email filter configuration, for example:

- Emails with attachments and subjects containing keywords like "invoice"
- Emails from a supplier
- Emails under a designated label

### Step 2: Automate document filtering

We have configured the automatic document filter for you, which selects the documents that qualify for the automated processing flow; no manual operation is needed. Documents that do not meet the conditions are stored in a designated folder for quick retrieval during manual processing, so there is no need to sift through emails one by one.

The preset conditions for automatic processing are:

- **Attachment title** contains one of: invoice, receipt, expenses, fee (any one is sufficient; to match other terms common in your business, modify the `{{ $json.attachment_extension }}` field in the filter).
- **File size**: < 50 MB
- **File format**: .jpeg, .jpg, .png, .bmp, .tiff, .pdf, .doc, .docx, .xls, .xlsx

### Step 3: Complete ADP permission configuration

1. Register for free at adp.laiye.com to obtain 100 free calls per month.
2. Select the out-of-the-box agent application "Invoice/Receipt" card, and click the more menu [...] on the card to obtain your exclusive API Key, App Secret, and Access Key.
3. Fill the obtained keys into the configuration of the Laiye ADP HTTP Request node.

After ADP completes invoice extraction, it returns structured JSON data containing more than 10 text fields, such as "Invoice Name", "Invoice Number", and "Invoice Date", as well as complete invoice table fields, such as "Item Name", "Description", "Quantity", and "Unit Price".

### Step 4: Complete Google Drive authorization

- Files processed by ADP are automatically converted into binary data to ensure a smooth import into Google Drive (Sheet).
- Files that have not undergone ADP processing are saved as the original files to the [Untreated Document] folder. If all files were processed automatically, this folder is not created.
- Extracted documents can be automatically saved to any folder you specify. By default, they are stored in My Drive; to use a custom folder, modify the Parent Folder setting.

## Professional Community and Latest News

Follow us on LinkedIn for more updates: https://www.linkedin.com/company/laiye

- We share the latest updates on Laiye ADP products.
- You can share your successful invoice processing cases.

## Problems & Support

If you encounter any issues, contact us for technical support (global_product@laiye.com). To help us respond, please include:

- A description of the problem in as much detail as possible
- Your current invoice processing volume and type
- The specific supplier format or invoice layout you handle
- The target accounting software or system integration
- Any technical errors or issues with extraction accuracy
- Your current manual processing workflow and pain points

Response time: within 24 hours on working days.

Professional areas: Invoice Processing Automation, Order Processing Automation, Receipt Processing Automation
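The Step 2 attachment filter can be sketched like this. The field names (`filename`, `size`) and the combination of conditions are illustrative; the template implements this inside an n8n filter node.

```javascript
// Minimal sketch of the Step 2 document filter: a keyword in the filename,
// a file size under 50 MB, and an allowed extension.
const KEYWORDS = ['invoice', 'receipt', 'expenses', 'fee'];
const EXTENSIONS = ['.jpeg', '.jpg', '.png', '.bmp', '.tiff', '.pdf', '.doc', '.docx', '.xls', '.xlsx'];
const MAX_BYTES = 50 * 1024 * 1024;

function shouldAutoProcess(att) {
  const name = att.filename.toLowerCase();
  const hasKeyword = KEYWORDS.some(k => name.includes(k));
  const hasExtension = EXTENSIONS.some(ext => name.endsWith(ext));
  return hasKeyword && hasExtension && att.size < MAX_BYTES;
}

const ok = shouldAutoProcess({ filename: 'Invoice_2024_001.pdf', size: 120000 });
const skipped = shouldAutoProcess({ filename: 'photo.png', size: 120000 });
```

Attachments that fail the check are the ones routed to the [Untreated Document] folder for manual review.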
by Cuong Nguyen
## Description

Start your day with a personalized news podcast delivered directly to your Telegram. This workflow helps you stay informed without scrolling through endless feeds. It automatically collects news from your favorite websites and YouTube channels, filters out the noise, and uses AI to turn it into a short, listenable audio briefing. It's like having a personal news assistant that reads the most important updates to you while you commute or drink your morning coffee.

## Who is this for

This template is perfect for busy professionals, commuters, and learners who want to keep up with specific topics (like Tech, Finance, or AI) but don't have time to read dozens of articles every morning.

## How it works

1. **Collects news**: The workflow automatically checks your chosen RSS feeds (e.g., TechCrunch, BBC) and searches for trending YouTube videos on topics you care about.
2. **Filters noise**: It removes duplicate stories and filters out promotional content or spam, ensuring you only get high-quality news.
3. **Summarizes**: Google Gemini (AI) reads the collected data, picks the top stories, and rewrites them into a clear, engaging script.
4. **Creates audio**: OpenAI turns that script into a natural-sounding MP3 file (text-to-speech).
5. **Delivers**: You receive a neat text summary and the audio file in your Telegram chat, ready to play.

## Requirements

API keys:
- Google Gemini (PaLM)
- OpenAI
- YouTube Data API
- Telegram Bot Token

## How to set up

1. **Get credentials**: Sign up for the required services (Google, OpenAI, Telegram) and get your API keys.
2. **Connect nodes**: Enter your API credentials into the respective nodes in the workflow.
3. **Set chat ID**: Enter your Telegram Chat ID in the Telegram nodes (or set it as a variable) so the bot knows where to send the message.
4. **Turn on**: Activate the workflow to let it run automatically every morning at 7:00 AM (or any time you want).

## How to customize the workflow

- **Your interests**: Simply change the URLs in the RSS Feed Read nodes to follow your favorite blogs.
- **Your topics**: Update the keywords in the YouTube - Search node (e.g., change "AI" to "Football" or "Marketing") to get relevant video news.
- **Your voice**: Change the voice style (e.g., from alloy to echo) in the Code - Build TTS Payload node to suit your preference.

Contact me for consulting and support: cuongnguyen@aiops.vn
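The Code - Build TTS Payload step can be sketched like this. It targets OpenAI's text-to-speech endpoint (POST /v1/audio/speech); the model name and voice list follow OpenAI's documented options, but verify them against the node's actual configuration.

```javascript
// Minimal sketch of a "Build TTS Payload" Code node for OpenAI
// text-to-speech. Validates the voice, then returns the request body.
function buildTtsPayload(script, voice = 'alloy') {
  const allowed = ['alloy', 'echo', 'fable', 'onyx', 'nova', 'shimmer'];
  if (!allowed.includes(voice)) {
    throw new Error(`Unknown voice: ${voice}`);
  }
  return {
    model: 'tts-1',
    voice,
    input: script,
    response_format: 'mp3',
  };
}

const payload = buildTtsPayload("Good morning! Here are today's top stories.", 'echo');
```

Swapping `alloy` for `echo` (as the customization note suggests) is then a one-argument change rather than an edit to the HTTP Request node.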
by RenderIO
## Who is this for

Content creators, YouTubers, and social media managers who want to repurpose long-form videos into short clips without doing it manually. Works on self-hosted n8n instances.

## What it does

Monitors a Google Drive folder for new videos. When a video appears, the workflow downloads it, extracts the audio, transcribes it using Whisper, and sends the transcript to OpenAI to identify the best highlight moments. Each selected clip is then rendered in three formats (9:16 for TikTok, 9:16 for Reels, 1:1 for Square) using cloud-based FFmpeg through RenderIO. The finished clips are uploaded back to Google Drive, and every run is logged to a Google Sheet.

## How it works

1. **Watch Drive Folder** polls your source folder every minute and triggers when a new video file is detected.
2. **Set Config** holds all tunable settings in one place: clip count, folder IDs, sheet IDs, and LLM model.
3. The video is downloaded from Google Drive and uploaded to RenderIO for processing.
4. **Extract Audio** runs an FFmpeg command to pull the audio track from the video.
5. The audio is sent to Whisper for transcription. Both TXT and SRT transcript files are saved to Google Drive.
6. **Pick Clips** sends the transcript to OpenAI, which returns timestamped highlight suggestions.
7. **Validate Clips** checks that all timestamps and durations are valid before rendering.
8. Each clip is rendered in three formats through RenderIO, with separate FFmpeg commands for each aspect ratio.
9. All rendered clips are downloaded and uploaded to a dedicated output folder in Google Drive.
10. **Append Clip Row** logs each clip to a Google Sheet, and **Append Run Summary** records the overall processing stats.

## Requirements

- A self-hosted or cloud n8n instance (uses a community node)
- The n8n-nodes-renderio community node, installed via Settings > Community Nodes
- A free RenderIO account and API key from renderio.dev
- Google Drive and Google Sheets OAuth credentials
- An OpenAI API key

## How to set up

1. Install the n8n-nodes-renderio community node from Settings > Community Nodes.
2. Create credentials for Google Drive (OAuth2), Google Sheets (OAuth2), OpenAI, and RenderIO API.
3. Import the workflow and open the Set Config node.
4. Update the outputParentFolderId with the Google Drive folder ID where output folders should be created.
5. Update the sheetId with your Google Sheet document ID.
6. Set sheetTab and sheetRunsTab to the correct sheet tab IDs for clip logging and run summaries.
7. Configure the Watch Drive Folder trigger node to point at your source video folder.
8. Activate the workflow and drop a test video into the folder.

## How to customize

- Change clipCount in Set Config to generate more or fewer clips per video.
- Swap llmModel from gpt-4o-mini to gpt-4o or another model for different clip selection quality.
- Modify the FFmpeg commands in Build Commands for Clip to adjust resolution, bitrate, add watermarks, or change output formats.
- Replace Google Drive with S3 or another storage provider if that fits your stack.
- Add a Slack or Telegram notification node after the summary step to get alerted when processing finishes.
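The Validate Clips step can be sketched as follows. The clip shape ({ start, end } in seconds) is an assumption for illustration; the template's actual node may use a different schema.

```javascript
// Minimal sketch of "Validate Clips": keep only clips whose timestamps
// are numeric, ordered, and within the video's duration, so rendering
// never receives an impossible FFmpeg trim range.
function validateClips(clips, videoDuration) {
  return clips.filter(({ start, end }) =>
    Number.isFinite(start) &&
    Number.isFinite(end) &&
    start >= 0 &&
    end > start &&
    end <= videoDuration
  );
}

const valid = validateClips(
  [
    { start: 10, end: 35 },   // ok
    { start: 50, end: 45 },   // end before start: rejected
    { start: 80, end: 130 },  // runs past the video: rejected
  ],
  120
);
```

Validating before rendering matters because LLM-suggested timestamps occasionally overshoot the video length or come back malformed, and a bad trim range would waste a render credit.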
by Vinay Gangidi
LOB Underwriting with AI

This template ingests borrower documents from OneDrive, extracts text with OCR, classifies each file (ID, paystub, bank statement, utilities, tax forms, etc.), aggregates everything per borrower, and asks an LLM to produce a clear underwriting summary and decision (plus next steps).

Good to know

- AI and OCR usage consume credits (OpenAI + your OCR provider).
- Folder lookups by name can be ambiguous; use a fixed folderId in production.
- Scanned image quality drives OCR accuracy; bad scans yield weak text.
- This flow handles PII: mask sensitive data in logs and control access.
- Start small: batch size and pagination keep costs and memory sane.

How it works

1. Import & locate docs: a Manual Trigger kicks off a OneDrive folder search (e.g., "LOBs") and lists the files inside.
2. Per-file loop: download each file → run OCR → classify the document type using the filename plus extracted text.
3. Aggregate: combine per-file results into a borrower payload (make BorrowerName dynamic).
4. LLM analysis: feed the payload to an AI Agent (OpenAI model) to extract underwriting-relevant facts and produce a decision plus next steps.
5. Output: return a human-readable summary (and optionally structured JSON for downstream systems).

How to use

1. Start with the Manual Trigger to validate end-to-end on a tiny test folder.
2. Once stable, swap in a Schedule/Cron or Webhook trigger.
3. Review the generated underwriting summary; handle only flagged exceptions (unknown/unreadable docs, low confidence).

Setup steps

Connect accounts

- Add credentials for OneDrive, OCR, and OpenAI.

Configure inputs

- In Search a folder, point to your borrower docs (prefer folderId; otherwise tighten the name query).
- In Get items in a folder, enable pagination if the folder is large.
- In Split in Batches, set a conservative batch size to control costs.

Wire the file path

- Download a file must receive the current file's id from the folder listing.
- Make sure the OCR node receives binary input (PDFs/images).
Classification

- Update keyword rules to match your region/lenders/utilities/tax forms.
- Keep a fallback Unknown class and log it for review.

Combine

- Replace the hard-coded BorrowerName with a Set node field, a form input, or parsing from folder/file naming conventions.

AI Agent

- Set your OpenAI model/credentials.
- Ask the model to output JSON first (structured fields) and Markdown second (readable summary).
- Keep temperature low for consistent, audit-friendly results.

Optional outputs

- Persist JSON/Markdown to Notion/Docs/DB or write to storage.

Customize if needed

- Doc types: add or remove categories and keywords without touching core logic.
- Error handling: add IF paths for empty folders, failed downloads, empty OCR, or the Unknown class; retry transient API errors.
- Privacy: redact IDs/account numbers in logs; restrict execution visibility.
- Scale: add MIME/size filters, duplicate detection, and multi-borrower folder patterns (parent → subfolders).
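The keyword-based classification described above can be sketched as a Code node function. The categories and keyword lists here are illustrative assumptions; swap in rules that match your region, lenders, utilities, and tax forms.

```javascript
// Sketch of keyword-rule classification: match filename + OCR text against
// per-category keyword lists, with "Unknown" as the logged fallback class.
// Categories and keywords are illustrative, not the template's exact rules.
const RULES = {
  id: ["driver license", "passport", "identity"],
  paystub: ["pay period", "gross pay", "net pay"],
  bank_statement: ["account summary", "beginning balance", "ending balance"],
  utility: ["electric", "water", "utility bill"],
  tax_form: ["w-2", "1099", "form 1040"],
};

function classifyDocument(fileName, ocrText) {
  const haystack = `${fileName} ${ocrText}`.toLowerCase();
  for (const [docType, keywords] of Object.entries(RULES)) {
    if (keywords.some((kw) => haystack.includes(kw))) return docType;
  }
  return "Unknown"; // fallback: log these for manual review
}

console.log(classifyDocument("march_statement.pdf", "Beginning balance: $1,200")); // → bank_statement
console.log(classifyDocument("scan0001.pdf", ""));                                // → Unknown
```

Because the fallback is an explicit class rather than a thrown error, downstream nodes can route Unknown documents to a review queue instead of halting the batch.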
by Rahul Joshi
Description

Automatically extract a structured skill matrix from PDF resumes in a Google Drive folder and store the results in Google Sheets. Uses Azure OpenAI (GPT-4o-mini) to analyze predefined tech stacks and filter for relevant proficiency. Fast, consistent insights ready for review. 🔍📊

What This Template Does

- Fetches all resumes from a designated Google Drive folder ("Resume_store"). 🗂️
- Downloads each resume file securely via the Google Drive API. ⬇️
- Extracts text from PDF files for analysis. 📄➡️📝
- Analyzes skills with Azure OpenAI (GPT-4o-mini), rating each 1–5 and estimating years of experience. 🤖
- Parses and filters to include only skills with proficiency > 2, then updates Google Sheets ("Resume store" → "Sheet2"). ✅

Key Benefits

- Saves hours on manual resume screening. ⏱️
- Produces a consistent, structured skill matrix. 📐
- Focuses on intermediate-to-expert skills for faster shortlisting. 🎯
- Centralizes candidate data in Google Sheets for easy sharing. 🗃️

Features

- Predefined tech stack focus: React, Node.js, Angular, Python, Java, SQL, Docker, Kubernetes, AWS, Azure, GCP, HTML, CSS, JavaScript. 🧰
- Proficiency scoring (1–5) and estimated years of experience. 📈
- PDF-to-text extraction for robust parsing. 🧾
- JSON parsing with error handling for invalid outputs. 🛡️
- Manual Trigger to run on demand. ▶️

Requirements

- n8n instance (cloud or self-hosted).
- Google Drive access with credentials to the "Resume_store" folder.
- Google Sheets access to the "Resume store" spreadsheet and "Sheet2" tab.
- Azure OpenAI with GPT-4o-mini deployed and connected via secure credentials.
- PDF text extraction enabled within n8n.

Target Audience

- HR and Talent Acquisition teams. 👥
- Recruiters and staffing agencies. 🧑‍💼
- Operations teams managing hiring pipelines. 🧭
- Tech hiring managers seeking consistent skill insights. 💡

Step-by-Step Setup Instructions

1. Place candidate resumes (PDF) into Google Drive → "Resume_store".
2. In n8n, add Google Drive and Google Sheets credentials and authorize access.
3. In n8n, add Azure OpenAI credentials (GPT-4o-mini deployment).
4. Import the workflow, assign credentials to each node, and confirm the folder/sheet names.
5. Run the Manual Trigger to execute the flow and verify the data in "Resume store" → "Sheet2".
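The parse-and-filter step (JSON parsing with error handling, keeping only proficiency > 2) could look like this sketch. The response shape `{ skills: [{ name, proficiency, years }] }` is an assumption about the format the prompt requests, not the template's exact schema.

```javascript
// Sketch of the parse-and-filter step: safely parse the model's JSON output
// and keep only skills rated above 2. The response shape is an assumption.
function extractSkillMatrix(llmResponse) {
  let parsed;
  try {
    parsed = JSON.parse(llmResponse);
  } catch (err) {
    return []; // invalid JSON from the model: return empty rather than crash
  }
  const skills = Array.isArray(parsed.skills) ? parsed.skills : [];
  return skills.filter((s) => Number(s.proficiency) > 2);
}

const response = JSON.stringify({
  skills: [
    { name: "React", proficiency: 4, years: 3 },
    { name: "Docker", proficiency: 2, years: 1 }, // filtered out (<= 2)
    { name: "SQL", proficiency: 5, years: 6 },
  ],
});
console.log(extractSkillMatrix(response).map((s) => s.name)); // → ["React", "SQL"]
console.log(extractSkillMatrix("not json").length);           // → 0
```

Returning an empty array on malformed output keeps a single bad resume from breaking the whole batch; those rows simply never reach the sheet.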
by Lidia
Who’s it for

Teams who want to automatically generate structured meeting minutes from uploaded transcripts and instantly share them in Slack. Perfect for startups, project teams, or any company that collects meeting transcripts in Google Drive.

How it works / What it does

This workflow automatically turns raw meeting transcripts into well-structured minutes in Markdown and posts them to Slack:

1. Google Drive Trigger – watches a specific folder; any new transcript file added starts the workflow.
2. Download File – grabs the transcript.
3. Prep Transcript – converts the file into plain text and passes the transcript downstream.
4. Message a Model – sends the transcript to an OpenAI GPT model for summarization using a structured system prompt (action items, decisions, N/A placeholders).
5. Make Minutes – formats GPT’s response into a Markdown file.
6. Slack: Send a message – posts a Slack message announcing the auto-generated minutes.
7. Slack: Upload a file – uploads the full Markdown minutes file into the chosen Slack channel.

End result: your Slack channel always has clear, standardized minutes right after a meeting.

How to set up

Google Drive

1. Create a folder where you’ll drop transcript files.
2. Configure the folder ID in the Google Drive Trigger node.

OpenAI

1. Add your OpenAI API credentials in the Message a Model node.
2. Select a supported GPT model (e.g., gpt-4o-mini or gpt-4).

Slack

1. Connect your Slack account and set the target channel ID in the Slack nodes.
2. Run the workflow and drop a transcript file into Drive. Minutes will appear in Slack automatically.

Requirements

- Google Drive account (for transcript upload)
- OpenAI API key (for text summarization)
- Slack workspace (for message posting and file upload)

How to customize the workflow

- Change summary structure: adjust the system prompt inside Message a Model (e.g., shorter summaries, a language other than English).
- Different output format: modify Make Minutes to output plain text, PDF, or HTML instead of Markdown.
- New destinations: add more nodes to send minutes to email, Notion, or Confluence in parallel.
- Multiple triggers: replace the Google Drive trigger with a Webhook if you want to integrate with Zoom or MS Teams transcript exports.

Good to know

- OpenAI API calls are billed separately; see OpenAI pricing.
- Files must be text-based (.txt or .md). For PDFs or docs, add a conversion step before summarization.
- Slack requires the bot user to be a member of the target channel; otherwise you’ll see a not_in_channel error.
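The Make Minutes formatting step could be sketched as below. The section names and input shape (`title`, `date`, `summary`, `decisions`, `actionItems`) are assumptions based on the structured system prompt described above, not the workflow's exact fields.

```javascript
// Sketch of a "Make Minutes" step: assemble the model's summary sections
// into a Markdown document, using "N/A" placeholders for empty sections.
// Input field names are assumptions about the prompt's output schema.
function makeMinutes({ title, date, summary, decisions = [], actionItems = [] }) {
  const list = (items) =>
    items.length ? items.map((i) => `- ${i}`).join("\n") : "- N/A";
  return [
    `# Meeting Minutes: ${title}`,
    `**Date:** ${date}`,
    "",
    "## Summary",
    summary || "N/A",
    "",
    "## Decisions",
    list(decisions),
    "",
    "## Action Items",
    list(actionItems),
  ].join("\n");
}

const md = makeMinutes({
  title: "Weekly Sync",
  date: "2024-05-01",
  summary: "Discussed release timeline.",
  decisions: ["Ship v2 on Friday"],
  actionItems: [],
});
console.log(md.includes("- N/A")); // → true (empty action items get a placeholder)
```

Emitting "N/A" for empty sections matches the prompt's placeholder convention, so the Slack upload always has the same predictable structure regardless of meeting content.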