by Ms. Phuong Nguyen (phuongntn)
An AI Recruiter that screens, scores, and ranks candidates in minutes, directly inside n8n.

## Overview
An AI-powered recruiter workflow that compares multiple candidate CVs with a single Job Description (JD). It analyzes text content, calculates fit scores, identifies strengths and weaknesses, and provides automated recommendations.

## How it works
- **Webhook Trigger** - Upload one Job Description (JD) and multiple CVs (PDF or text)
- **File Detector** - Auto-identifies JD vs CV
- **Extract & Merge** - Reads text and builds the candidate dataset
- **AI Recruiter Agent** - Compares the JD against each CV and returns a Fit Score, Strengths, Weaknesses, and a Recommendation
- **Output Node** - Sends structured JSON or a summary table to HR dashboards or a Chat UI

Example: upload JD.pdf plus 3 candidate CVs and get an instant JSON report with the top match and recommendations (see the output sketch at the end of this description).

## Requirements
- OpenAI or a compatible AI Agent connection (no hardcoded API keys)
- Input files in PDF or text format (English and Vietnamese supported)
- n8n Cloud or Self-Hosted v1.50+ with AI Agent nodes enabled
- An OpenAI API Key or n8n AI Agent credential is required

## Customizing this workflow
- Swap the AI model for Gemini, Claude, or another LLM
- Add a Google Sheets export node to save results
- Connect to SAP HR or internal employee APIs
- Adjust the scoring logic or include additional attributes (experience, skills, etc.)

## Author
https://www.linkedin.com/in/nguyen-phuong-17a71a147/
Empowering HR through intelligent, data-driven recruitment.
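### Output sketch
A minimal sketch of what the final Output node could emit, assuming the AI Recruiter Agent returns one item per candidate with hypothetical `name`, `fitScore`, and `recommendation` fields (the field names are illustrative, not this template's actual schema):

```javascript
// n8n Code node: rank candidates by fit score (illustrative field names).
// Assumes each incoming item looks like:
// { name: "Jane Doe", fitScore: 87, strengths: [...], weaknesses: [...], recommendation: "..." }
const candidates = $input.all().map(item => item.json);

// Sort descending by score so the best match comes first.
candidates.sort((a, b) => b.fitScore - a.fitScore);

return [{
  json: {
    topMatch: candidates[0]?.name ?? null,
    ranking: candidates.map((c, i) => ({
      rank: i + 1,
      name: c.name,
      fitScore: c.fitScore,
      recommendation: c.recommendation,
    })),
  },
}];
```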
by Jay Emp0
# Ebook to Audiobook Converter

Watch the full demo video for a walkthrough.

## What It Does
Turn any PDF ebook into a professional audiobook automatically. Upload a PDF, get an MP3 audiobook in your Google Drive. Perfect for listening to books, research papers, or documents on the go.

Example: Input PDF → Output Audiobook

## Key Features
- Upload a PDF via web form → get an MP3 audiobook in Google Drive
- Natural-sounding AI voices (MiniMax Speech-02-HD)
- Automatic text extraction, chunking, and audio merging
- Customizable voice, speed, and emotion settings
- Processes long books in batches with smart rate limiting

## Perfect For
- **Students**: Turn textbooks into study audiobooks
- **Professionals**: Listen to reports and documents while commuting
- **Content Creators**: Repurpose written content as audio
- **Accessibility**: Make content accessible to visually impaired users

## Requirements
| Component | Details |
|-----------|---------|
| n8n | Self-hosted ONLY (cannot run on n8n Cloud) |
| FFmpeg | Must be installed in your n8n environment |
| Replicate API | For MiniMax TTS (sign up at replicate.com) |
| Google Drive | OAuth2 credentials + an "Audiobook" folder |

Important: this workflow does NOT work on n8n Cloud because FFmpeg must be installed in the environment.

## Quick Setup

### 1. Install FFmpeg
Docker users:

```bash
docker exec -it <n8n-container-name> /bin/bash
apt-get update && apt-get install -y ffmpeg
```

Native installation:

```bash
sudo apt-get install ffmpeg   # Linux
brew install ffmpeg           # macOS
```

### 2. Get API Keys
- **Replicate**: Sign up at replicate.com and copy your API token
- **Google Drive**: Set up OAuth2 in n8n and create an "Audiobook" folder in Drive

### 3. Import & Configure
1. Import n8n.json into your n8n instance
2. Replace the Replicate API token in the "MINIMAX TTS" node
3. Configure Google Drive credentials and select your "Audiobook" folder
4. Activate the workflow

## Cost Estimate
| Component | Cost |
|-----------|------|
| MiniMax TTS API | $0.15 per 1,000 characters ($3-5 for an average book) |
| Google Drive Storage | Free (up to 15 GB) |
| Processing Time | ~1-2 minutes per 10 pages |

## How It Works
PDF Upload → Extract Text → Split into Chunks → Convert to Speech (batches of 5) → Merge Audio Files (FFmpeg) → Upload to Google Drive

The workflow uses four main modules:
1. **Extraction**: PDF text extraction and intelligent chunking (see the sketch at the end of this description)
2. **Conversion**: MiniMax TTS processes text in batches
3. **Merging**: FFmpeg combines all audio files seamlessly
4. **Upload**: The final audiobook is saved to Google Drive

## Voice Settings (Customizable)

```json
{
  "voice_id": "Friendly_Person",
  "emotion": "happy",
  "speed": 1,
  "pitch": 0
}
```

Available emotions: happy, neutral, sad, angry, excited

## Limitations
- Self-hosted n8n ONLY (not compatible with n8n Cloud)
- PDF files only (no EPUB, MOBI, or scanned images)
- Large books (500+ pages) take longer to process
- Requires an FFmpeg installation (see setup above)

## Troubleshooting
**FFmpeg not found?**
- Docker: run `docker exec -it <container> /bin/bash`, then `apt-get install ffmpeg`
- Native: run `sudo apt-get install ffmpeg` (Linux) or `brew install ffmpeg` (macOS)

**Rate limit errors?**
- Increase the wait time in the "WAITS FOR 5 SECONDS" node to 10-15 seconds

**Google Drive upload fails?**
- Make sure you created the "Audiobook" folder in your Google Drive
- Reconfigure the OAuth2 credentials in n8n

Created by emp0 | More workflows: n8n Gallery
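### Chunking sketch
A minimal sketch of the chunking step, assuming the extracted book text arrives on a hypothetical `text` field; the 1,000-character chunk size mirrors the TTS pricing unit and is an assumption, not this template's exact setting:

```javascript
// n8n Code node: split extracted text into TTS-sized chunks (assumed size).
const text = $json.text ?? '';
const CHUNK_SIZE = 1000; // assumption: aligned with the $0.15 / 1,000-character pricing unit

const chunks = [];
for (let start = 0; start < text.length; start += CHUNK_SIZE) {
  chunks.push(text.slice(start, start + CHUNK_SIZE));
}

// Emit one item per chunk so a downstream batch node can send 5 at a time.
return chunks.map((chunk, index) => ({ json: { index, chunk } }));
```

In practice you would likely split on sentence boundaries instead of fixed offsets so the voice never breaks mid-word.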
by Julian Kaiser
# Scan Any Workout Plan into the Hevy App with AI

This workflow automates the creation of workout routines in the Hevy app by extracting exercise information from an uploaded PDF or image using AI.

## What problem does this solve?
Tired of manually typing workout plans into the Hevy app? Whether your coach sends them as Google Docs or PDFs, or you only have a screenshot of a routine, entering every single exercise, set, and rep is a tedious chore. This workflow ends the madness. It uses AI to instantly scan your workout plan from any file, intelligently extract the exercises, and automatically create the routine in your Hevy account. What used to take 15 minutes of mind-numbing typing now happens in seconds.

## How it works
1. **Trigger**: The workflow starts when a PDF (or image) file is submitted through an n8n form.
2. **Data Extraction**: The file is converted to a Base64 string and sent to an AI model to extract the raw text of the workout plan.
3. **Context Gathering**: The workflow fetches the complete list of available exercises directly from the Hevy API, then consolidates it.
4. **AI Processing**: A Google Gemini model analyzes the extracted text, compares it against the official Hevy exercise list, and transforms the raw text into a structured JSON format that matches the Hevy API requirements (a hedged sketch of this matching step follows this description).
5. **Routine Creation**: The final structured data is sent to the Hevy API to create the new workout routine in your account.

## Set up steps
**Estimated set up time:** 15 minutes.
1. Configure the On form submission trigger, or replace it with your preferred trigger (e.g., Webhook). Ensure it is set up to receive a file upload.
2. Add your API credentials for the AI service (in this case, OpenRouter.ai) and the Hevy app. You will need to create 'Hevy API' and OpenRouter API credentials in your n8n instance.
3. In the Structured Data Extraction node, review the prompt and the JSON schema in the Structured Output Parser. You may need to adjust the prompt to better suit the types of files you are uploading.
4. Activate the workflow. Test it by uploading a sample workout plan document.
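### Matching sketch
A minimal sketch of the exercise-matching idea behind the AI Processing step, assuming the extracted plan and the fetched Hevy list arrive in hypothetical `exercises` and `hevyExercises` fields; the payload fields shown are illustrative, not the Hevy API's documented schema:

```javascript
// n8n Code node: map extracted exercise names onto Hevy's exercise list
// (illustrative field names; the real Hevy API schema may differ).
const extracted = $json.exercises ?? [];    // e.g. [{ name: "Bench Press", sets: 3, reps: 8 }]
const hevyList = $json.hevyExercises ?? []; // e.g. [{ id: "abc", title: "Bench Press (Barbell)" }]

const routineExercises = extracted.map(ex => {
  // Naive match: first Hevy exercise whose title contains the extracted name.
  const match = hevyList.find(h =>
    h.title.toLowerCase().includes(ex.name.toLowerCase())
  );
  return {
    exercise_template_id: match?.id ?? null, // assumed field name
    sets: Array.from({ length: ex.sets }, () => ({ reps: ex.reps })),
  };
});

return [{ json: { title: 'Imported workout plan', exercises: routineExercises } }];
```

In the template itself this matching is done by the Gemini model against the consolidated exercise list, which handles fuzzy exercise names far better than the substring match above.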
by InfyOm Technologies
## What problem does this workflow solve?
Sending a plain PDF resume doesn't stand out anymore. This workflow lets candidates convert their resume and photo into a personalized video resume. Recruiters get a more engaging first impression, while candidates showcase their profile in a modern, impactful way.

## What does this workflow do?
- Presents a form for uploading:
  - Resume (PDF)
  - Photo (headshot)
- Extracts key details from the resume (education, experience, skills).
- Detects gender from the photo to choose a suitable voice/avatar.
- Generates a script (a spoken resume summary) based on the extracted information.
- Uploads the photo to HeyGen to create an avatar.
- Requests video generation on HeyGen, using:
  - The avatar photo
  - Gender-specific settings
  - The generated script as narration
- Monitors video generation status until completion (see the polling sketch at the end of this description).
- Stores the final video URL in a Google Sheet for easy access and tracking.

## Setup Instructions
**Google Services**
- Connect Google Sheets to n8n to store records with:
  - Candidate name
  - Resume link
  - Video link

**HeyGen Setup**
- Get an API key from HeyGen.
- Configure:
  - The avatar upload endpoint (image upload)
  - The video generation endpoint (image ID + script)

**Form Setup**
- Use the n8n Form Trigger to let candidates upload:
  - Resume (PDF)
  - Photo (JPEG/PNG)

## How it Works - Step by Step
1. **Candidate Submission**: A candidate fills out the form and uploads a resume (PDF) and a photo.
2. **Extract Resume Data**: The resume PDF is processed with OCR/AI to extract the name, experience, skills, and education highlights.
3. **Gender Detection**: The uploaded photo is analyzed to detect gender (used for voice/avatar selection).
4. **Script Generation**: A concise, natural script is generated automatically from the extracted resume info.
5. **Avatar Upload & Video Creation**: The photo is uploaded to HeyGen to create a custom avatar, and a video generation request is made using the script, the avatar (image ID), and a voice matching the detected gender.
6. **Video Status Monitoring**: The workflow polls HeyGen's API until the video is ready.
7. **Save Final Video URL**: Once complete, the video link is added to a Google Sheet alongside the candidate's details.

## Who can use this?
This workflow is ideal for:
- Students and job seekers looking to stand out
- Recruitment agencies offering modern resume services
- HR teams wanting engaging candidate submissions
- Professionals building portfolios

## Impact
Instead of a static PDF, you can now send a dynamic video resume that captures attention, adds personality, and makes a lasting impression.
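### Polling sketch
A minimal sketch of the status-monitoring decision, assuming the HeyGen status response has already been parsed into hypothetical `status` and `video_url` fields (the names and values are illustrative; check HeyGen's API docs for the real ones):

```javascript
// n8n Code node: decide whether the video is ready or the workflow should
// loop back through a Wait node and poll again (illustrative status values).
const status = $json.status;

if (status === 'completed') {
  return [{ json: { done: true, videoUrl: $json.video_url ?? null } }];
}
if (status === 'failed') {
  throw new Error('HeyGen reported a failed video generation');
}
// Anything else ('processing', 'pending', ...) means keep polling.
return [{ json: { done: false } }];
```

An IF node downstream would route `done: false` back to a Wait node and `done: true` on to the Google Sheets step.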
by Automate With Marc
# Gemini 3 Image & PDF Extractor (Google Drive → Gemini 3 → Summary)

Automatically summarize newly uploaded images or PDF reports using Google Gemini 3, triggered directly from a Google Drive folder. Perfect for anyone who needs fast AI-powered analysis of financial reports, charts, screenshots, or scanned documents.

Watch the full step-by-step video tutorial: https://www.youtube.com/watch?v=UuWYT_uXiw0

## What this template does
This workflow watches a Google Drive folder for new files and automatically:

1. **Detects new uploaded files**
   - Uses the Google Drive Trigger
   - Watches a specific folder for fileCreated events
   - Filters by MIME type: image/png, image/webp, application/pdf
2. **Downloads the file automatically**, depending on the file type:
   - Images → download via HTTP Request → send to Gemini 3 Vision
   - PDFs → download via HTTP Request → extract content → send to Gemini 3
3. **Analyzes content using Gemini 3** in two separate processing lanes (a routing sketch appears at the end of this description).

**Image Lane**
- The image is sent to Gemini 3 (Vision / Image Analyze)
- Extracts textual and visual meaning from charts, diagrams, or screenshots
- Passes structured output to an AI Analyst Agent
- The agent summarizes and highlights the top 3 findings

**PDF Lane**
- The PDF is downloaded
- Text is extracted using Extract From File
- Processed with Gemini 3 via the OpenRouter Chat Model
- An AI Analyst Agent summarizes charts/tables and extracts insights

## Why this workflow is useful
- Save hours of manually reading PDFs, charts, and screenshots
- Convert dense financial or operational documents into digestible insights
- Great for: financial analysts, operations teams, market researchers, content and reporting teams, and anyone receiving frequent reports via Drive

## Requirements
Before using this template, you will need:
- A Google Drive OAuth credential (for the Drive trigger + file download)
- A Gemini 3 / PaLM or OpenRouter API key
- (Optional) Update the folder ID to your own Google Drive target folder

No credentials are included in this template. Add them manually after importing it.

## Node Overview
- **Google Drive Trigger**: Watches a specific Drive folder for newly added files and provides metadata such as webContentLink and MIME type.
- **Filter by Type (IF node)**: Routes files to the Image lane or the PDF lane (png or webp → Image; pdf → PDF).

**Image Processing Lane**
1. Download Image (HTTP Request)
2. Analyze Image (Gemini Vision)
3. Analyzer Agent: summarizes findings and highlights actionable insights, powered by OpenRouter Gemini 3

**PDF Processing Lane**
1. Download PDF (HTTP Request)
2. Extract From File → PDF
3. Analyzer Agent (PDF): summarizes extracted chart/report information and highlights key takeaways

## Setup Guide
1. Import the template into your n8n workspace.
2. Open the Google Drive Trigger, select your Drive OAuth credential, and replace the folder ID with your target folder.
3. Open the Gemini 3 / OpenRouter AI Model nodes and add your API credentials.
4. Test by uploading a PNG/WebP chart screenshot and a multi-page PDF report.
5. Check the execution to view the summary outputs.

## Customization Ideas
- Add email delivery (send the summary to yourself daily)
- Save summaries into Google Sheets, Notion, Slack channels, or n8n Data Tables
- Add a second agent to convert summaries into weekly reports, PowerPoint slides, or Slack-ready bullet points
- Add classification logic: revenue reports, marketing analytics, product dashboards, financial charts

## Troubleshooting
- **Trigger not firing?** Confirm your Drive OAuth credential has read access to the folder.
- **Gemini errors?** Ensure your model ID matches your API provider: models/gemini-3-pro-preview or google/gemini-3-pro-preview.
- **PDF extraction empty?** Check whether the file contains selectable text or only images (you can add OCR if needed).
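### Routing sketch
For reference, the routing decision in the Filter by Type node boils down to a MIME-type check like this (shown as a Code node for clarity; the template uses an IF node, and `mimeType` comes from the Drive trigger's file metadata):

```javascript
// n8n Code node: route a new Drive file to the image or PDF lane.
const mimeType = $json.mimeType ?? '';

const IMAGE_TYPES = ['image/png', 'image/webp'];
const lane = IMAGE_TYPES.includes(mimeType) ? 'image'
  : mimeType === 'application/pdf' ? 'pdf'
  : 'ignore'; // anything else is skipped

return [{ json: { ...$json, lane } }];
```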
by Dhruv Mali
## Description
This workflow acts as your automated HR assistant, scanning for employee milestones and posting AI-generated celebration messages to Google Chat.

## How it works
- **Daily Scan**: Checks your Google Sheet every morning to identify birthdays and work anniversaries (a sketch of this matching logic follows below).
- **AI Drafting**: Uses **Google Gemini** to write unique, warm messages for each employee, ensuring wishes never sound robotic or repetitive.
- **Delivery**: Automatically posts the message to your team's **Google Chat** space and logs the activity.

## Set up steps
1. **Connect Accounts**: Set up credentials for Google Sheets and Google PaLM/Gemini.
2. **Configure Settings**: Open the SET-BIRTHDAY and SET - ANNIVERSARY nodes to enter your Agency Name and Google Chat API details (Space ID, Key, Token).
3. **Prepare Data**: Ensure your Google Sheet contains columns for employee names, dates of birth, and joining dates.
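### Matching sketch
A minimal sketch of the daily matching logic, assuming the sheet rows arrive with hypothetical `name`, `dob`, and `joinDate` columns in YYYY-MM-DD format (the column names are illustrative; use whatever your sheet defines):

```javascript
// n8n Code node: find today's birthdays and work anniversaries
// (illustrative column names; dates assumed to be YYYY-MM-DD).
// Note: toISOString() is UTC; adjust if your workflow timezone differs.
const today = new Date();
const monthDay = today.toISOString().slice(5, 10); // "MM-DD"

const rows = $input.all().map(item => item.json);

const celebrations = rows.flatMap(row => {
  const events = [];
  if ((row.dob ?? '').slice(5) === monthDay) {
    events.push({ name: row.name, type: 'birthday' });
  }
  if ((row.joinDate ?? '').slice(5) === monthDay) {
    const years = today.getFullYear() - Number(row.joinDate.slice(0, 4));
    events.push({ name: row.name, type: 'anniversary', years });
  }
  return events;
});

return celebrations.map(event => ({ json: event }));
```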
by MUHAMMAD SHAHEER
## Overview
This workflow automates the process of turning your video transcripts into platform-specific social media posts using AI. It reads any uploaded transcript file, analyzes the text, and automatically generates full-length, engaging posts with image prompts for Facebook, LinkedIn, Instagram, Reddit, and WhatsApp. Perfect for creators, marketers, and automation builders who want to repurpose long-form content into viral posts, all in one click.

## How it Works
1. The Manual Trigger starts the workflow.
2. The Read Binary File node imports your video transcript (TXT format).
3. The Move Binary Data and Set nodes convert it into a text string for processing.
4. The AI Agent (LangChain), powered by Groq AI, analyzes the transcript and generates human-like social media posts with realistic image prompts.
5. The Function node parses and structures the output by platform (see the parsing sketch at the end of this description).
6. The Google Sheets node automatically saves all content, ready for scheduling or publishing.
7. The SerpAPI integration enhances contextual awareness by referencing real-time search trends.

## Set Up Steps
Setting up this workflow typically takes 5-10 minutes.
1. Connect your Google Sheets account (OAuth2).
2. Connect your Groq AI and SerpAPI credentials.
3. Upload your transcript file (e.g., from YouTube or a podcast).
4. Run the workflow to instantly generate platform-specific posts and prompts.
5. View all results automatically saved in Google Sheets.

Detailed instructions are included as sticky notes inside the workflow.

## Use Cases
- Turn YouTube videos or podcasts into multi-platform social content
- Auto-generate daily social posts from transcripts
- Build AI-powered repurposing systems for agencies or creators
- Save creative teams hours of manual copywriting work

## Requirements
- n8n account (self-hosted or cloud)
- Groq AI API key
- SerpAPI key (for optional trend enhancement)
- Google Sheets connection
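### Parsing sketch
A minimal sketch of the Function node's parsing step, assuming the AI agent was prompted to return a JSON object keyed by platform (the structure is an assumption; adapt it to your actual prompt and output parser):

```javascript
// n8n Code node: split one AI response into one item per platform
// (assumes the agent returns JSON keyed by platform name).
const raw = $json.output ?? '{}';
const posts = typeof raw === 'string' ? JSON.parse(raw) : raw;

const platforms = ['facebook', 'linkedin', 'instagram', 'reddit', 'whatsapp'];

return platforms
  .filter(p => posts[p])
  .map(p => ({
    json: {
      platform: p,
      post: posts[p].text ?? posts[p],         // post body
      imagePrompt: posts[p].imagePrompt ?? '', // companion image prompt
    },
  }));
```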
by Philflow
This n8n template lets you run prompts against 350+ LLM models and see exactly what each request costs, using real-time pricing from OpenRouter.

Use cases are many: compare costs across different models, plan your AI budget, optimize prompts for cost efficiency, or track expenses for client billing!

## Good to know
- OpenRouter charges a platform fee on top of model costs. See OpenRouter Pricing for details.
- You need an OpenRouter account with API credits. Free signup is available, and some free models are included.
- Pricing data is fetched live from OpenRouter's API, so costs are always up to date.

## How it works
1. All available models are fetched from OpenRouter's API when you start.
2. You select a model and enter your prompt via the form (or just use the chat).
3. The prompt is sent to OpenRouter and the response is captured.
4. Token usage (input/output) is extracted from the response using a LangChain Code node.
5. Real-time pricing for your selected model is fetched from OpenRouter.
6. The exact cost is calculated and displayed alongside your AI response (the math is sketched at the end of this description).

## How to use
- **Chat interface**: Quick testing; just type a prompt and get the response with costs.
- **Form interface**: Select from all available models via a dropdown, enter your prompt, and get a detailed cost breakdown.
- Click "Show Details" on the result form to see the full breakdown (input tokens, output tokens, cost per type).

## Requirements
- OpenRouter account with an API key (get one at openrouter.ai)

## Customising this workflow
- Add a database node to log all requests and costs over time
- Connect to Google Sheets for cost tracking and reporting
- Extend with an LLM-as-Judge evaluation to also check response quality
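### Cost math sketch
The cost calculation itself is simple multiplication; a sketch assuming the token counts have been extracted from the response and the model's per-token USD prices fetched from OpenRouter (its models endpoint reports `pricing.prompt` and `pricing.completion` as strings):

```javascript
// n8n Code node: compute the exact request cost from token usage
// and OpenRouter's per-token pricing (prices arrive as strings).
const { inputTokens, outputTokens } = $json;  // extracted from the LLM response
const pricing = $json.pricing;                // e.g. { prompt: "0.000003", completion: "0.000015" }

const inputCost = inputTokens * parseFloat(pricing.prompt);
const outputCost = outputTokens * parseFloat(pricing.completion);

return [{
  json: {
    inputTokens,
    outputTokens,
    inputCost,
    outputCost,
    totalCost: inputCost + outputCost, // USD, before OpenRouter's platform fee
  },
}];
```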
by Avkash Kakdiya
## How it works
This workflow runs daily to collect the latest funding round data from Crunchbase. It retrieves up to 100 recent funding events, including company, investors, funding amount, and industry details. The data is cleaned and filtered to include only rounds announced in the last 30 days. Finally, the results are saved into both Google Sheets for reporting and Airtable for structured database management.

## Step-by-step
**Trigger & Data Fetching**
1. Schedule Trigger node: runs the workflow once a day.
2. HTTP Request node: calls the Crunchbase API to fetch the latest 100 funding rounds with relevant details.

**Data Processing**
3. Code node: parses the raw API response into clean fields such as company name, funding type, funding amount, investors, industry, and Crunchbase URL.
4. Filter node: keeps only funding rounds from the last 30 days, so the dataset remains fresh and relevant (a sketch of this filter follows below).

**Storage & Outputs**
5. Google Sheets node: appends or updates the filtered funding records in a Google Sheet for easy sharing and reporting.
6. Airtable node: stores the same records in Airtable for more structured, database-style organization and management.

## Why use this?
- Automates daily collection of startup funding data from Crunchbase.
- Keeps only the most recent and relevant records for faster insights.
- Ensures data is consistently stored in both Google Sheets and Airtable.
- Supports reporting, collaboration, and database management in one flow.
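### Filter sketch
A minimal sketch of the 30-day filter, assuming each parsed round carries a hypothetical `announcedOn` date string (the field name depends on how the Code node maps the Crunchbase response):

```javascript
// n8n Code node: keep only funding rounds announced in the last 30 days
// (illustrative field name; map it from the Crunchbase response first).
const cutoff = new Date();
cutoff.setDate(cutoff.getDate() - 30);

return $input.all().filter(item => {
  const announced = new Date(item.json.announcedOn);
  return !isNaN(announced) && announced >= cutoff;
});
```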
by satoshi
# Create FAQ articles from Slack threads in Notion and Zendesk

This workflow helps you capture "tribal knowledge" shared in Slack conversations and automatically converts it into structured documentation. By simply adding a specific reaction (default: `:book:`) to a message, the workflow aggregates the thread, uses AI to summarize it into a Q&A format, and publishes it to your knowledge base (Notion and Zendesk).

## Who is this for?
- **Customer Support Teams** who want to turn internal troubleshooting discussions into public help articles.
- **Knowledge Managers** looking to reduce the friction of documentation.
- **Development Teams** wanting to archive technical decisions made in Slack threads.

## What it does
1. **Trigger**: Watches for a specific emoji reaction (`:book:`) on a Slack message.
2. **Data Collection**: Fetches the parent message and all replies in the thread to get the full context (a sketch of this aggregation step follows this description).
3. **AI Processing**: Uses OpenAI to analyze the conversation, summarize the solution, and format it into a clear Question & Answer structure.
4. **Publishing**: Creates a new page in a Notion database with tags and summaries. (Optional) Drafts a new article in Zendesk.
5. **Notification**: Replies to the original Slack thread with links to the newly created documentation.

## Requirements
- **n8n** (self-hosted or Cloud)
- **Slack** workspace (with an App installed that has permissions to read channels and reactions)
- **OpenAI** API key
- **Notion** account with an Integration Token
- **Zendesk** account (optional; can be removed if not needed)

## How to set up
1. **Configure Credentials**: Set up authentication for Slack, OpenAI, Notion, and Zendesk in n8n.
2. **Set up Notion**: Create a database in Notion with the following properties:
   - Name (Title)
   - Summary (Text/Rich Text)
   - Tags (Multi-select)
   - Source (URL)
   - Channel (Select or Text)
3. **Update the Configuration Node**: Open the Workflow Configuration1 node (Set node) and replace the placeholder values:
   - slackWorkspaceId: your Slack Workspace ID (e.g., T01234567)
   - notionDatabaseId: the ID of your Notion database
   - zendeskSectionId: (optional) the ID of the section where articles should be created
4. **Slack App Scopes**: Ensure your Slack App has the following scopes: reactions:read, channels:history, groups:history, chat:write.

## How to customize
- **Change the trigger**: If you prefer a different emoji (e.g., a pin or a lightbulb), update the "Right Value" in the **IF - :book: Reaction Check** node.
- **Modify the prompt**: Edit the **OpenAI** node to change how the AI formats the answer (e.g., ask it to be more technical or more casual).
- **Remove Zendesk**: If you don't use Zendesk, simply delete the **Zendesk** node and remove the reference to it in the final **Slack - Notify Completion** node.
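### Aggregation sketch
A minimal sketch of the thread-aggregation step, assuming the thread replies arrive as items carrying Slack's standard `user`, `text`, and `ts` message fields:

```javascript
// n8n Code node: flatten a Slack thread into a transcript for the AI
// (Slack's conversations.replies returns user, text, and ts per message).
const messages = $input.all().map(item => item.json);

// Keep chronological order, then format as "user: text" lines.
messages.sort((a, b) => parseFloat(a.ts) - parseFloat(b.ts));

const transcript = messages.map(m => `${m.user}: ${m.text}`).join('\n');

return [{ json: { transcript } }];
```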
by RealSimple Solutions
# POML → Prompt/Messages (No-Deps)

## What this does
Turns POML markup into either a single Markdown prompt or chat-style messages[], using a zero-dependency n8n Code node. It supports variable substitution (via context), basic components (headings, lists, code, images, tables, line breaks), and optional schema-driven validation using componentSpec + attributeSpec.

## Credits
Created by Real Simple Solutions as an n8n-template-friendly POML compiler (no dependencies) aiming for full POML feature parity. View more of our templates here.

## Who's it for
Teams who author prompts in POML and want a template-safe way to turn them into either a single Markdown prompt or chat-style messages, without installing external modules. Works on n8n Cloud and self-hosted.

## What it does
This workflow converts POML into:
- **prompt** (Markdown) for single-shot models, or
- **messages[]** (system|user|assistant) for chat APIs when speakerMode is true.

It supports variable substitution via a context object ({{dot.path}}), lists, headings, code blocks, images (including base64 → data: URL), tables from JSON (records/columns), and basic message components. A small input/output sketch follows at the end of this description.

## How it works
1. **Set (Specs & Context)**: Provides componentSpec (allowed attributes per tag), attributeSpec (typing/coercion), and an optional context.
2. **Code (POML → Prompt/Messages)**: A zero-dependency compiler parses the POML and emits prompt or messages[].

## How to set up
1. Import the template.
2. Open the first Set node and paste your componentSpec, attributeSpec, and context (examples included).
3. In the Code node, choose:
   - speakerMode: true to get messages[], or false for a single prompt.
   - listStyle: dash | star | plus | decimal | latin.
4. Run, then inspect prompt/messages in the output.

## Requirements
No credentials or community nodes. Works without external libraries (template-compliant).

## How to customize
- Add message tags (<system-msg>, <user-msg>, <ai-msg>) in your POML when using speakerMode: true.
- Extend componentSpec/attributeSpec to validate or coerce additional tags/attributes.
- Preformat arrays in context (e.g., bulleted, csv) for display, or add a small Set node to build them on the fly.
- Rename nodes and keep all user-editable fields grouped in the first Set node.

## Security & best practices
- Never hardcode API keys in nodes.
- Remove any personal IDs before publishing.
- Keep your sticky notes up to date and instructional.
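### Input/output sketch
To make the moving parts concrete, here is a hedged sketch of a tiny POML input plus the rough messages[] the compiler would emit for it, expressed as the JavaScript the Code node sees (the message tags are the ones this template names; the exact output shape may differ):

```javascript
// Illustrative input for the POML -> messages compiler (not the full spec).
const context = {
  user: { name: 'Ada' },
  task: 'summarize the weekly report',
};

// With speakerMode: true, message tags become chat roles.
const poml = `
<system-msg>You are a concise assistant.</system-msg>
<user-msg>Hi, I am {{user.name}}. Please {{task}}.</user-msg>
`;

// After {{dot.path}} substitution, the compiler would emit roughly:
const messages = [
  { role: 'system', content: 'You are a concise assistant.' },
  { role: 'user', content: 'Hi, I am Ada. Please summarize the weekly report.' },
];

return [{ json: { context, poml, messages } }];
```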
by Hirokazu Kawamoto
## How it works
This workflow fetches RSS feeds daily and sends a notification to Slack when new topics are found. Since standard RSS snippets are often insufficient, the AI visits the source links, summarizes the full articles, and sends the summaries to Slack. You can then share interesting topics directly to X from Slack using the button.

## How to use
1. Open the Gemini Chat Model node (attached to the AI Agent) and set up the credential. You can obtain an API key from Google AI Studio.
2. Open the Slack node and set up the credential to allow sending messages. You can create a new Slack App on the Slack API site.
3. Finally, open the Config node and update the rssUrls parameter with the RSS feed URLs you want to follow (an example is sketched below).

## Customizing this workflow
You can adjust the number of topics fetched per RSS feed by modifying the takeCount parameter in the Config node.
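### Config sketch
For reference, the Config node's two parameters could be expressed like this (the URLs are placeholders):

```javascript
// n8n Code node equivalent of the Config (Set) node: the feeds to follow
// and how many topics to take per feed (example values only).
const config = {
  rssUrls: [
    'https://example.com/feed.xml',
    'https://blog.example.org/rss',
  ],
  takeCount: 3, // topics per feed per run
};

return [{ json: config }];
```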