by Michael A Putra
# Automated Resume & Cover Letter Generator

This project is an automation workflow that generates a personalized resume and cover letter for each job listing.

## Features

- **Automated Resume Crafting** – Generates an HTML resume from your data, hosts it live on GitHub Pages, converts it to PDF using Gotenberg, and saves it to Google Drive.
- **Automated Cover Letter Generation** – Uses an LLM to create a tailored cover letter for each job listing.
- **Simple Input Database Agent** – Stores your experience in an n8n Data Table with the following fields: role, summary, task, skills, tools, industry. The main agent pulls this data using RAG (Retrieval-Augmented Generation) to personalize the outputs.
- **One-Time GitHub Setup** – Initializes a blank GitHub repository to host HTML files online, allowing Gotenberg to access and convert them.

## Tech Stack

- **Gotenberg** – Converts HTML to PDF
- **GitHub Pages** – Hosts live HTML files
- **n8n** – Handles data tables and workflow automation
- **LLM (OpenAI / Cohere / etc.)** – Generates cover letters
- **Google Drive** – Stores the final PDFs

## Installation & Setup

1. **Create a GitHub repository.** This repo will host your HTML resume through GitHub Pages.
2. **Set the webhook URL.** In the notify-n8n.yml file, replace the placeholder webhook URL with your own n8n webhook endpoint.
3. **Create the n8n Data Table.** Add the following columns: role | summary | task | skills | tools | industry
4. **Create a Google Spreadsheet.** Add these columns: company | cover_letter | resume
5. **Install Gotenberg.** Follow the installation instructions on the Gotenberg GitHub repository: https://github.com/thecodingmachine/gotenberg
6. **Customize the HTML template.** Modify the HTML resume to your liking. You can use an LLM to locate and edit specific sections.
7. **Add authentication and link your GitHub repo.** Ensure your workflow has permission to push updates to your GitHub Pages branch.
8. **Run the workflow.** Once everything is connected, trigger the workflow to automatically generate and save personalized resumes and cover letters. A minimal sketch of the Gotenberg call appears at the end of this section.

## How to Use

- Copy and paste the job listing description into the Telegram bot.
- Wait for the "Done" notification before submitting another job; do not use the bot again until it appears. The process usually takes a few minutes to complete.

## Notes

This workflow is designed to save time and personalize your job applications efficiently. By combining n8n automation, LLMs, and open-source tools like Gotenberg, you can maintain full control over your data while generating high-quality resumes and cover letters for every job opportunity.
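For reference, here is a minimal sketch of the Gotenberg conversion call behind steps 5-8, as it might appear in an n8n Code node or a small Node.js script. Gotenberg's Chromium route (`POST /forms/chromium/convert/url`) takes the page URL as a multipart form field; the host, port, and resume URL below are placeholders you would swap for your own.

```typescript
// Minimal sketch: convert the hosted HTML resume to PDF via Gotenberg's
// Chromium URL route (POST /forms/chromium/convert/url).
// Assumes Gotenberg runs locally on port 3000 and the resume is live
// on GitHub Pages -- both values are placeholders.
const GOTENBERG_URL = "http://localhost:3000/forms/chromium/convert/url";
const RESUME_URL = "https://<your-username>.github.io/resume/index.html";

async function resumeToPdf(): Promise<Buffer> {
  const form = new FormData();
  form.append("url", RESUME_URL); // the page Gotenberg should render

  const res = await fetch(GOTENBERG_URL, { method: "POST", body: form });
  if (!res.ok) throw new Error(`Gotenberg failed: ${res.status}`);

  // The response body is the PDF binary, ready to upload to Google Drive.
  return Buffer.from(await res.arrayBuffer());
}
```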
by Navneet Singh Arora
# Automated Job Search & AI Relevance Evaluator

## Overview

This n8n template automates the entire job hunting process by cross-referencing a candidate's PDF resume with live job listings from the JSearch API. It automatically filters for fresh, unapplied roles, uses Google Gemini AI to critically evaluate each job's relevance against the candidate's specific experience, and logs highly tailored matches directly into a Notion database for seamless tracking.

## How it works

- **Context & Extraction:** The workflow fetches existing applications from your Notion database to prevent duplicate tracking, then reads and extracts plain text directly from a local PDF resume.
- **Role Discovery:** A Google Gemini node isolates the candidate's current job title to formulate a precise search query. This query is sent to the JSearch API (via RapidAPI) to pull live job listings.
- **Smart Filtering:** Natively filters out jobs posted more than 14 days ago and jobs that already exist in your Notion tracker, ensuring only fresh, unseen postings are processed (a sketch of this filter appears at the end of this section).
- **AI Evaluation:** The core of the workflow. Google Gemini acts as an expert technical recruiter, comparing the candidate's resume against each job description. It generates a "Relevance Score" (1-100) and a "Skill Match Score", extracts remote/salary info, and summarizes why the job is a good fit.
- **Notion Logging:** Structured insights for each matched role are formatted and pushed directly as a rich database page into your Notion tracking board.

## How to use

- **API Credentials:** Add your Google Gemini API key and your RapidAPI key (subscribed to the JSearch API) in their respective nodes.
- **Notion Setup:** Connect your Notion credential and update the two Notion nodes with your specific target Database ID.
- **File Path:** Update the File Selector to point to your PDF resume (e.g., /home/node/.n8n-files/My-Resume.pdf).
- **Search Customization:** Open the "Search for Jobs via RapidAPI" node to manually tweak your target location, industry keywords, or pagination limits.

## Requirements

- Google Gemini API key
- RapidAPI key (for the JSearch API)
- Notion account (with a pre-configured Job Tracker database)
- n8n environment: designed for self-hosted instances with local file access

## Use Cases

- **Automated Job Hunting:** Wake up to a pre-vetted, automatically scored list of highly relevant job openings perfectly matched to your exact resume.
- **Recruiting Pipelines:** Scale candidate sourcing by automatically comparing an inbound candidate's resume against thousands of active job board posts.
- **Freelance Lead Generation:** Independent contractors or agencies can use this to find companies actively hiring for the exact technical skills they offer.
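A minimal sketch of the Smart Filtering step as an n8n Code node, assuming the JSearch response exposes `job_id` and `job_posted_at_datetime_utc` fields and that an earlier node named "Fetch Notion Applications" returns the already-tracked IDs; verify both assumptions against your actual data.

```typescript
// Sketch of the "Smart Filtering" step: keep only jobs that are both
// fresh (posted within 14 days) and not already tracked in Notion.
const MAX_AGE_DAYS = 14;
const cutoff = Date.now() - MAX_AGE_DAYS * 24 * 60 * 60 * 1000;

// IDs already tracked in Notion, collected by an earlier node (assumed name).
const tracked = new Set(
  $("Fetch Notion Applications").all().map((i) => i.json.job_id),
);

return $input.all().filter((item) => {
  const job = item.json;
  const postedAt = new Date(job.job_posted_at_datetime_utc).getTime();
  return postedAt >= cutoff && !tracked.has(job.job_id); // fresh and unseen
});
```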
by Dahiana
## Description

**Who's it for:** Content creators, marketers, and businesses who publish on both YouTube and blog platforms.

**What it does:** Monitors your YouTube channel for new videos and automatically creates SEO-optimized blog posts using AI, then publishes to WordPress or Webflow.

**How it works:**
1. RSS Feed Trigger polls YouTube for new videos at a configurable interval
2. Extracts video metadata (title, description, thumbnail)
3. YouTube node extracts the full description for extra context
4. Uses OpenAI (you can choose any model) to generate a 600-800 word blog post
5. Publishes to WordPress and/or Webflow with error handling
6. Sends notifications to Telegram if publishing fails

**Requirements:**
- YouTube channel ID (avoid tutorial channels for better results)
- OpenAI API key (or similar)
- WordPress or Webflow credentials
- Telegram bot (optional, for error notifications)

**Setup steps:**
1. Replace YOUR_CHANNEL_ID in the RSS Feed Trigger
2. Add OpenAI credentials in the AI generation node
3. Configure WordPress and/or Webflow credentials
4. Add a Telegram bot for error notifications (optional). If you set up Telegram, you need to input your channel ID.
5. Test with a manual execution first

**Customization:**
- Modify the AI prompt for different content styles
- Adjust the polling frequency (30-60 minutes recommended)
- Add more CMS platforms
- Add content verification (is the content longer than 600 characters? If not, improve it) – a sketch of this check appears at the end of this section
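A minimal sketch of the optional content-verification idea from the customization list, written as an n8n Code node. The `content` field name and the 600-character threshold are assumptions taken from the note above; a downstream IF node can route flagged items back to the AI generation node.

```typescript
// Sketch of a content-verification gate: pass the generated post through
// only if it is long enough, otherwise flag it for regeneration.
const MIN_LENGTH = 600; // characters, per the customization suggestion

return $input.all().map((item) => {
  const content: string = item.json.content ?? ""; // assumed field name
  return {
    json: {
      ...item.json,
      needsImprovement: content.trim().length < MIN_LENGTH,
    },
  };
});
```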
by MANISH KUMAR
# Shopify AI Automation: Image-to-Product CSV Bulk Upload

This Shopify AI automation is an advanced n8n-powered workflow that converts raw product images into a Shopify-ready product CSV. It uses AI image analysis, Google Drive, Google Sheets, and the Shopify API to fully automate product onboarding, from images to structured ecommerce data. Built for scalable ecommerce automation, this workflow is especially effective for image-first catalogs such as jewelry, fashion, and accessories.

## Features

- **AI Image Analysis** – Analyzes product images one by one for higher accuracy and lower risk
- **Automatic Category Detection** – Identifies the main product category (e.g., Jewelry), easily customizable for any niche
- **AI Product Content Generation** – Creates product names, descriptions (HTML), tags, and attributes
- **Google Sheets Orchestration** – Structures data and outputs a clean Shopify-compatible CSV
- **Shopify Asset Upload** – Uploads images to Shopify and retrieves CDN URLs

## Workflow Preparation

Before running the workflow:

- Upload all product images to Google Drive.
- Name images using the format `<SKU><ColorCode>`, e.g., 12345GR (a parsing sketch is included at the end of this section).
- Place all images inside a folder named `<Brand Name>` under a root folder named `pending`, e.g., Google_Drive/pending/Manish Collection/All Images.
- Each image represents one product variant.

## How It Works

The workflow follows a 6-step automation pipeline designed for reliability and scalability.

Note: You may connect all of these steps to make the pipeline fully automatic, or schedule it at a time that suits you.

### Step-by-Step Process

**Step 1: Fetch Images from Google Drive**
- Scans the pending/<brand_name> folder
- Fetches all images
- Extracts the SKU and color code
- Stores references in Google Sheets

**Step 2: AI Image Analysis (One-by-One)**
- Images are analyzed individually
- Slower than batch processing, but far more reliable
- Reduces hallucinations and incorrect attributes
- Ideal for production-grade Shopify automation

**Step 3: Main Category Identification**
- AI determines the primary product category (example: Jewelry)
- Prompts can be modified for any ecommerce niche

**Step 4: Conditional Product Content Generation**
Based on the category:
- Product titles are generated
- Descriptions are written in Shopify-ready HTML
- Tags and attributes are created

This replaces repetitive work typically handled via Shopify Flow or manual data entry.

**Step 5: Shopify Image Upload**
- Images are uploaded to Shopify assets
- Shopify returns CDN URLs
- URLs are mapped back to the product data

**Step 6: Shopify CSV Generation**
- All enriched data is compiled into a new Google Sheet
- Output matches Shopify's product import CSV format
- The file is ready for bulk upload

## n8n Nodes Used

- Trigger Node (Manual / Schedule)
- Google Drive Node
- Google Sheets Node
- AI Agent Node (Image Analysis + Content)
- Switch Node (Category-based logic)
- Code Node (Formatting & CSV structure)
- Shopify Node / HTTP Node

## Credentials Required

Before running the workflow, configure the following credentials in n8n:

- **Shopify Access Token** – For asset uploads and API calls
- **AI Provider API Key** – For image analysis and content generation
- **Google Drive OAuth** – To access product images
- **Google Sheets OAuth** – To store and export data

## Ideal For

This workflow is ideal for:
- Shopify store owners handling bulk product uploads
- Ecommerce teams managing image-heavy catalogs
- Agencies building scalable Shopify automation systems
- Anyone exploring how to automate Shopify product onboarding

## Extensibility

This workflow is modular and easy to extend. You can add:
- Multi-language product descriptions
- Pricing and margin automation
- Shopify marketing automation triggers
- Shopify Flow integrations after product import
- Marketplace exports (Google Shopping, Meta, Amazon)

## Keywords

shopify ai, shopify flow, shopify marketing automation, shopify automation, ecommerce automation, how to automate shopify

## Notes

- No AI fine-tuning required
- No fragile prompt chaining
- Designed for accuracy over speed
- Safe for production ecommerce workflows

## Support

If you're looking to customize or extend this workflow, feel free to reach out or fork the project. Happy automating!
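As a worked example of the `<SKU><ColorCode>` naming convention from the preparation section, here is a minimal Code-node sketch of Step 1's SKU/color extraction. The regex assumes a numeric SKU followed by an alphabetic color code; adjust it to your catalog.

```typescript
// Sketch of the SKU/color-code extraction from the <SKU><ColorCode>
// filename convention (e.g., "12345GR.jpg").
const FILENAME_PATTERN = /^(\d+)([A-Za-z]+)\.(?:jpe?g|png|webp)$/;

return $input.all().map((item) => {
  const name: string = item.json.name ?? ""; // Google Drive file name
  const match = name.match(FILENAME_PATTERN);
  return {
    json: {
      ...item.json,
      sku: match?.[1] ?? null,       // e.g., "12345"
      colorCode: match?.[2] ?? null, // e.g., "GR"
    },
  };
});
```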
by Roshan Ramani
# Product Video Creator with Nano Banana & Veo 3.1 via Telegram

## Who's it for

This workflow is perfect for:
- E-commerce sellers needing quick product videos
- Social media marketers creating content at scale
- Small business owners without video editing skills
- Product photographers enhancing their offerings
- Anyone selling on Instagram, TikTok, or mobile-first platforms

## What it does

Transform basic product photos into professional marketing videos in under 2 minutes:
1. Send a product photo to your Telegram bot
2. Nano Banana analyzes and enhances your image with studio-quality lighting
3. Veo 3.1 generates an 8-second vertical video with motion and audio
4. Receive your scroll-stopping marketing video automatically

Perfect for creating engaging vertical content without expensive tools or editing expertise.

## How it works

1. **Input** – User sends a product photo via Telegram with an optional caption
2. **AI Analysis** – Nano Banana analyzes the product and generates a detailed enhancement prompt
3. **Image Enhancement** – Nano Banana creates a commercial-grade photo (9:16, studio lighting)
4. **Video Generation** – Veo 3.1 creates an 8-second 1080p video with motion and audio
5. **Delivery** – Auto-polls the generation status every 30s, then delivers the final video to Telegram (a polling sketch appears at the end of this section)

## Requirements

**Google Cloud Platform**
- Vertex AI API enabled for Veo 3.1
- Generative Language API enabled for Nano Banana
- OAuth2 credentials from the Google Cloud Console

**Telegram**
- Bot token from @BotFather

**n8n**
- Self-hosted or cloud instance

## Setup

1. Import the workflow JSON into n8n
2. Add credentials:
   - Telegram API (bot token)
   - Google OAuth2 API (client ID and secret)
   - Google PaLM API (API key)
3. Update your Project ID in both Veo 3.1 nodes
4. Activate the workflow and test with a product photo

## How to customize

- **Aspect Ratio:** Choose 9:16 (vertical) or 16:9 (horizontal) in the "Generate Enhanced Image" and "Initiate veo 3.1" nodes
- **Duration:** Set 2 to 8 seconds by adjusting durationSeconds in "Initiate veo 3.1 Video Generation"
- **Quality:** Select 720p or 1080p by changing the resolution in "Initiate veo 3.1 Video Generation"
- **Audio:** Enable or disable background music by toggling generateAudio in "Initiate veo 3.1 Video Generation"
- **Enhancement Style:** Match your brand aesthetic by editing the prompt in the "AI Design Analysis" node
- **Polling Time:** Adjust the retry interval by changing the wait time in the "Processing Delay (30s)" node

## Key Features

- **Direct Google APIs** – No third-party services. Uses Nano Banana and Veo 3.1 directly via Google Cloud for maximum reliability and privacy
- **Fully Automated** – Send a photo, receive a video. Zero manual work required
- **Studio Quality** – Nano Banana delivers professional lighting, composition, and AI-powered color grading
- **Mobile-First** – Default 9:16 vertical format optimized for Instagram Reels, TikTok, and Stories
- **Smart Retry Logic** – Automatically polls the Veo 3.1 status every 30 seconds until video generation completes
- **Audio Included** – Veo 3.1 generates background music automatically (can be disabled)
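A minimal sketch of the retry decision behind the 30-second polling loop, written as an n8n Code node placed between the status-check HTTP request and an IF node. Google's long-running operations expose a `done` flag and an `error` object, but verify the exact response shape against your Veo 3.1 payload.

```typescript
// Sketch of the status-polling decision. Returns routing flags an IF node
// can branch on: deliver the video, report failure, or loop back through
// the "Processing Delay (30s)" Wait node.
const op = $input.first().json;

const finished = op.done === true;
const failed = Boolean(op.error);

return [{
  json: {
    ...op,
    finished,
    failed,
    // If neither finished nor failed, the IF node loops back to the Wait node.
    shouldRetry: !finished && !failed,
  },
}];
```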
by Deniz
# Structured Setup Guide: Narrative Chaining with n8n + AI

## 1. Input Setup

Use a Google Sheet as the control panel. Fields required:
- Video URL (starting clip, ends with .mp4)
- Number of clips to extend (e.g., 2 extra scenes)
- Aspect ratio (horizontal, vertical, etc.)
- Model (V3 or V3 Fast)
- Narrative theme (guidance for story flow)
- Special requests (scene-by-scene instructions)
- Status column (e.g., "For Production", "Done")

Example scene inputs:
- Scene 1: Naruto walks out with ramen in his hands
- Scene 2: Joker joins with chips

## 2. Workflow in n8n

**Step 1: Fetch Input**
- Get rows in sheet – fetch the next row where status = For Production.
- Clear sheet 2 – reset the sheet that stores generated scenes.
- Edit fields (initial values): Video URL = starting clip, Step = 1, Complete = total number of scenes requested.

**Step 2: Looping Logic**
- Looper node: runs until Step = Complete.
- Carries over the current video URL, which feeds into the next generation.

**Step 3: Analyze Current Clip**
- Send the video URL to the File.AI Video Understanding API.
- Request: describe the last frame, audio, and scene details.
- Output: detailed video analysis text.

**Step 4: Generate Prompt**
The AI Agent creates the next scene prompt using:
- Context from the video analysis
- Narrative theme (from the sheet)
- Scene instructions (from the sheet)
- Aspect ratio, model preference, etc.

Output = video prompt for the next scene.

**Step 5: Extract Last Frame**
- Call the File.AI Extract Frame API.
- Parameters: input video URL, frame = last.
- Output = JPG image (last frame of the current clip).

**Step 6: Generate New Scene**
- Use Key.AI (V3 Fast) for economical video generation.
- The POST request includes: prompt (from the AI Agent), aspect ratio + model, and the image URL (last frame) to ensure seamless chaining.
- Wait for generation to complete.
- Output = new clip URL (MP4).

**Step 7: Store & Increment**
- Log the new clip URL into Sheet 2.
- Increment Step by +1.
- Replace Video URL with the new clip.
- Loop back if Step < Complete (a state-update sketch appears at the end of this section).

## 3. Output Section

Once all clips are generated:
- Gather all scene URLs from Sheet 2.
- Use the File.AI Merge Videos API to stitch the clips together: original clip + all generated scenes.
- Save the final MP4 output.
- Update the Sheet 1 row with the final video URL and Status = Done.

## 4. Costs

- Video analysis: ~$0.015 per 8s clip
- Frame extraction: ~0.002¢ (almost free)
- Clip merging: negligible (via the ffmpeg backend)
- V3 Fast video generation (Key.AI): ~$0.30 per 8s clip
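A minimal sketch of the Step 7 state update as an n8n Code node. The field names (`step`, `complete`, `videoUrl`, `newClipUrl`) are illustrative assumptions; map them to your own Edit Fields values.

```typescript
// Sketch of the store-and-increment step: advance the counter and swap in
// the freshly generated clip so the looper can chain the next scene off it.
const state = $input.first().json;

const nextState = {
  ...state,
  step: state.step + 1,                  // Increment Step by +1
  videoUrl: state.newClipUrl,            // next iteration chains off this clip
  done: state.step + 1 > state.complete, // looper exits once Step > Complete
};

return [{ json: nextState }];
```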
by Wessel Bulte
## Description

This workflow is a practical, "dirty" solution for real-world scenarios where frontline workers keep using Excel in their daily processes. Instead of forcing change, we take their spreadsheets as-is, clean and normalize the data, generate embeddings, and store everything in Supabase. The benefit: frontline staff continue with their familiar tools, while data analysts gain clean, structured, and vectorized data ready for analysis or RAG-style AI applications.

## How it works

- **Frontline workers continue with Excel** – no disruption to their daily routines.
- **Upload & trigger** – The workflow runs when a new Excel sheet is ready.
- **Read Excel rows** – Data is pulled from the specified workbook and worksheet.
- **Clean & normalize** – HTML is stripped, Excel dates are fixed, and text fields are standardized (a cleaning sketch appears at the end of this section).
- **Batch & switch** – Rows are split and routed into Question/Answer processing paths.
- **Generate embeddings** – Cleaned Questions and Answers are converted into vectors via OpenAI.
- **Merge enriched records** – Original business data is combined with embeddings.
- **Write into Supabase** – Data lands in a structured table (excel_records) with vector and FTS indexes.

## Why it's "dirty but useful"

- **No disruption** – frontline workers don't need to change how they work.
- **Analyst-ready data** – Supabase holds clean, queryable data for dashboards, reporting, or AI pipelines.
- **Bridge between old and new** – Excel remains the input, but the backend becomes modern and scalable.
- **Incremental modernization** – paves the way for future workflow upgrades without blocking current work.

## Outcome

Frontline workers keep their Excel-based workflows, while the data can immediately be structured, searchable, and vectorized in Supabase, enabling AI-powered search, reporting, and retrieval-augmented generation.

## Required setup

- **Supabase account** – Create a project and enable the pgvector extension.
- **OpenAI API key** – Required for generating embeddings (text-embedding-3-small).
- **Microsoft Excel credentials** – Needed to connect to your workbook and worksheet.

## Need Help

LinkedIn – Wessel Bulte
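A minimal sketch of the clean-and-normalize step as an n8n Code node. The column names (`question`, `answer`, `created_at`) are assumptions for illustration; the date conversion relies on Excel's serial-date epoch of 1899-12-30, under which serial 25569 corresponds to 1970-01-01.

```typescript
// Sketch: strip HTML, fix Excel serial dates, and standardize text fields.
function excelSerialToIso(serial: number): string {
  const ms = (serial - 25569) * 86_400_000; // days since Unix epoch -> ms
  return new Date(ms).toISOString();
}

function stripHtml(text: string): string {
  return text.replace(/<[^>]*>/g, " ").replace(/\s+/g, " ").trim();
}

return $input.all().map((item) => ({
  json: {
    ...item.json,
    question: stripHtml(String(item.json.question ?? "")),
    answer: stripHtml(String(item.json.answer ?? "")),
    createdAt: excelSerialToIso(Number(item.json.created_at ?? 0)),
  },
}));
```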
by Jimleuk
Cohere's new multimodal model releases make building your own Vision RAG agents a breeze.

If you're new to Multimodal RAG, for the purposes of this template it means embedding and retrieving only the document scans relevant to a query, then having a vision model read those scans to answer. The benefits are (1) the vision model doesn't need to keep all document scans in context (expensive) and (2) the ability to query graphical content such as charts, graphs, and tables.

## How it works

- Page extracts from a technology report containing graphs and charts are downloaded, converted to base64, and embedded using Cohere's Embed v4 model. This produces embedding vectors which we associate with the original page URL and store in our Qdrant vector store collection using the Qdrant community node (a base64-conversion sketch appears at the end of this section).
- Our Vision RAG agent is split into 2 parts: a regular AI agent for chat, and a second Q&A agent powered by Cohere's Command-A-Vision model, which is required to read the contents of images.
- When a query requires access to the technology report, the Q&A agent branch is activated. This branch performs a vector search on our image embeddings and returns a list of matching image URLs. These URLs are then used as input for our vision model along with the user's original query.
- The Q&A vision agent can then reply to the user using the "respond to chat" node. Because both agents share the same memory space, it reads as one continuous conversation to the user.

## How to use

- Ensure you have a Cohere account and sufficient credit to avoid rate limits or token usage restrictions.
- For embeddings, swap out the page extracts for your own. You may need to split and convert document pages to images if you want to use image embeddings.
- For chat, you may want to structure the agent(s) in another way that makes sense for your environment, e.g., using MCP servers.

## Requirements

- Cohere account for embeddings and the LLM
- Qdrant for the vector store
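A minimal sketch of the base64 conversion step ahead of embedding, written as an n8n Code node. It uses the Code node's `getBinaryDataBuffer` helper and assumes the downloaded image sits in the default `data` binary property, with a `pageUrl` field carried alongside so it can be stored next to the vector.

```typescript
// Sketch: turn each downloaded page image into a base64 data URL for
// image embedding, keeping the original page URL for the Qdrant payload.
const items = $input.all();
const out = [];

for (let i = 0; i < items.length; i++) {
  // getBinaryDataBuffer(itemIndex, propertyName) returns a Buffer
  const buffer = await this.helpers.getBinaryDataBuffer(i, "data");
  const mimeType = items[i].binary?.data?.mimeType ?? "image/png";

  out.push({
    json: {
      pageUrl: items[i].json.pageUrl, // associated with the vector later
      dataUrl: `data:${mimeType};base64,${buffer.toString("base64")}`,
    },
  });
}

return out;
```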
by Aryan Shinde
Effortlessly generate, review, and publish SEO-optimized blog posts to WordPress using AI and automation.

## How It Works

1. **AI Topic Generation:** Gemini suggests trending blog topics matching your agency's services.
2. **Content Research:** Tavily fetches recent relevant articles for each generated topic.
3. **Human Review:** Choose the preferred article for publishing through a Telegram notification.
4. **AI Rewriting:** Gemini rewrites the selected article into a polished, SEO-friendly post.
5. **Image Generation & Publishing:** The workflow creates a featured image with Gemini or OpenAI, then publishes the post (with dynamic categories and images) to WordPress (a publish-call sketch appears at the end of this section).
6. **Audit Trail:** Every published post is logged to Google Sheets, and the final details are sent to Telegram.

## Set Up Steps

Estimated setup time: 15-30 minutes (excluding API approval/wait times).

1. Connect your WordPress, Gemini (Google), Tavily, Google Sheets, and Telegram accounts.
2. Configure your preferred posting schedule in the "Schedule Trigger."
3. Adjust prompts or messages to fit your agency's niche or editorial voice if needed.

Note: Detailed customizations and advanced configuration tips are included in the sticky notes within the workflow.
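For reference, here is a minimal sketch of the publish step against the WordPress REST API (`POST /wp-json/wp/v2/posts`). The template handles this with the built-in WordPress node, so the raw request is shown only to clarify what happens under the hood; the site URL and application-password credentials are placeholders.

```typescript
// Sketch: publish a post via the WordPress REST API with Basic auth
// (an application password).
const SITE = "https://example-agency.com"; // placeholder
const auth = Buffer.from("username:application-password").toString("base64");

async function publishPost(title: string, content: string, categoryIds: number[]) {
  const res = await fetch(`${SITE}/wp-json/wp/v2/posts`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Basic ${auth}`,
    },
    body: JSON.stringify({
      title,
      content,               // HTML body of the rewritten article
      categories: categoryIds,
      status: "publish",
    }),
  });
  if (!res.ok) throw new Error(`WordPress publish failed: ${res.status}`);
  return res.json(); // includes the new post ID and link
}
```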
by Leon Kirschner
# Automatically generate and send course certificates when new participants are added to Google Sheets

This workflow creates PDF certificates using Stencil, stores them in Google Drive, and emails them to participants.

## How it works

1. A new row is added to the Google Sheets document (via form, webhook, or manual entry)
2. The workflow generates a PDF certificate using the Stencil API (a hypothetical request sketch appears at the end of this section)
3. The PDF is uploaded to a Google Drive folder for archiving
4. The certificate is sent to the participant via Outlook
5. The Google Sheet is updated with the file link and send timestamp

## Setup steps

1. Create a free account at stencilpdf.com and set up a certificate template
2. Connect your Google account and select the target Sheet and Drive folder
3. Connect your Outlook account for sending emails
4. Configure the Stencil API credentials (Bearer Auth)
5. Adjust the email template text as needed

## Prerequisites

- Free Stencil account with a certificate template
- Google account (Sheets + Drive)
- Outlook/Microsoft 365 account
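A hypothetical sketch of the certificate-generation request. The endpoint, payload shape, and template fields below are NOT taken from the Stencil documentation; they are illustrative placeholders for a bearer-auth render API, so consult stencilpdf.com for the real format.

```typescript
// Hypothetical sketch of a bearer-auth PDF render call -- endpoint and
// payload are illustrative placeholders, not Stencil's documented API.
const STENCIL_ENDPOINT = "https://api.stencilpdf.com/v1/render"; // placeholder
const API_KEY = "<your-stencil-api-key>";

async function renderCertificate(participantName: string, courseName: string) {
  const res = await fetch(STENCIL_ENDPOINT, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      template: "<your-template-id>",                       // placeholder
      data: { name: participantName, course: courseName },  // template fields
    }),
  });
  if (!res.ok) throw new Error(`Stencil render failed: ${res.status}`);
  return Buffer.from(await res.arrayBuffer()); // PDF bytes for Drive/Outlook
}
```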
by vinci-king-01
# Enterprise Knowledge Search with GPT-4 Turbo, Google Drive & Academic APIs

## How it works

This workflow provides an enterprise-grade RAG (Retrieval-Augmented Generation) system that intelligently searches multiple sources and generates AI-powered responses using GPT-4 Turbo.

### Key Steps

1. **Form Input** – Collects user queries with customizable search scope, response style, and language preferences
2. **Intelligent Search** – Routes queries to appropriate sources (web, academic papers, news, internal documents)
3. **Data Aggregation** – Unifies and processes information from multiple sources with quality scoring (a scoring sketch appears at the end of this section)
4. **AI Processing** – Uses GPT-4 Turbo to generate context-aware, source-grounded responses
5. **Response Enhancement** – Formats outputs in various styles (comprehensive, concise, technical, etc.)
6. **Multi-Channel Delivery** – Delivers results via webhook, email, Slack, and optional PDF generation

## Data Sources & AI Models

### Search Sources

- **Web Search:** Google, Bing, DuckDuckGo integration
- **Academic Papers:** arXiv, PubMed, Google Scholar via the Crossref API
- **News Articles:** News API, RSS feeds, real-time news
- **Technical Documentation:** GitHub, Stack Overflow, documentation sites
- **Internal Knowledge:** Google Drive, Confluence, Notion integration

### AI Models

- **GPT-4 Turbo:** Primary language model for response generation
- **Embedding Models:** For semantic search and similarity matching
- **Custom Prompts:** Specialized prompts for different response styles

## Set up steps

Setup time: 15-20 minutes

1. **Configure API credentials** – Set up OpenAI API, News API, Google Drive, and other service credentials
2. **Set up search sources** – Configure academic databases, news APIs, and internal knowledge sources
3. **Connect analytics** – Link Google Sheets for usage tracking and performance monitoring
4. **Configure notifications** – Set up Slack channels and email templates for automated alerts
5. **Test the workflow** – Run sample queries to verify all components are working correctly

Keep detailed configuration notes in sticky notes inside your workflow.
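A minimal sketch of the Data Aggregation step: merging hits from the search branches and attaching a simple quality score before they reach GPT-4 Turbo. The field names and scoring weights are assumptions for illustration, not part of the template itself.

```typescript
// Sketch: unify search hits from multiple branches, score them, and keep
// the top results for the GPT-4 Turbo prompt.
interface SearchHit {
  title: string;
  url: string;
  snippet: string;
  source: "web" | "academic" | "news" | "internal";
  publishedAt?: string;
}

function qualityScore(hit: SearchHit): number {
  let score = 0;
  score += Math.min(hit.snippet.length / 500, 1) * 40; // content richness
  if (hit.source === "academic" || hit.source === "internal") score += 30; // trusted sources
  if (hit.publishedAt) {
    const ageDays = (Date.now() - Date.parse(hit.publishedAt)) / 86_400_000;
    score += Math.max(0, 30 - ageDays / 10); // freshness bonus
  }
  return Math.round(score);
}

const hits: SearchHit[] = $input.all().map((i) => i.json as unknown as SearchHit);
const ranked = hits
  .map((h) => ({ ...h, score: qualityScore(h) }))
  .sort((a, b) => b.score - a.score)
  .slice(0, 10); // cap the context passed to the model

return ranked.map((h) => ({ json: h }));
```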
by Jordan Hoyle
## Description

Automate the discovery and analysis of PDF files across a deeply nested OneDrive folder structure. This workflow recursively searches folders, filters for new or updated PDFs, extracts text, and uses a Mistral AI agent to generate a concise Executive Summary, Key Findings, and Structured Metadata (date, location, etc.), storing all insights in an n8n Data Table for easy access and further automation.

## Key Features & How It Works

1. **Scheduled Trigger & Recursive Folder Search:** The workflow runs automatically (scheduled for 8 PM in this template) to monitor a specified main folder on OneDrive. It performs a deep, multi-level search (up to 8 layers) across subfolders to ensure no documents are missed (a recursion sketch appears at the end of this section).
2. **Smart Deduplication & Filtering:** It checks new files against an internal n8n Data Table using the Compare Datasets node, ensuring only new or unique PDF files are processed, saving AI credits and processing time. A size check is also included, preventing attempts to process excessively large files.
3. **AI-Powered Document Intelligence (Mistral LLM):** For each new PDF, the workflow extracts the text and passes it to a Mistral AI model for dual-stream analysis:
   - **Overview Agent:** Generates an impartial, professional Executive Summary, a list of Key Findings & Data Points, and the document's Scope/Context.
   - **Document Information Agent:** Extracts crucial metadata, including the single most relevant date, location (City/State/Country), and professional information (Name, Title, Organization).
4. **Structured Output and Archiving:** AI outputs are meticulously validated and reformatted into a clean JSON object using Structured Output Parsers. The complete analysis, along with the original file name and path, is then logged as a new row in an n8n Data Table.

## Setup Notes

- **OneDrive Folder:** You must specify the exact name of your main folder in the 'Search for Main Folder' node.
- **Data Table:** Ensure your n8n Data Table exists with the required columns: Summary, Key_Findings, Scope, Date, Location, File_Name, and Path.
- **Deep Folder Structure:** The current configuration supports up to 8 levels of subfolders. If your files go deeper, you may need to add more "Get items in a folder" and "If" nodes.
- **AI Customization:** Review the AI agent prompts and the structured output schemas to customize the fields you want to extract or the summary style you require.

## Extend This Workflow

The final output is organized data. You can easily extend this workflow to:
- Send daily/weekly digest emails with new summaries.
- Sync the extracted data to a Google Sheet, Airtable, or other database.
- Add a secondary AI agent to perform follow-up actions based on the "Key Findings."
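For intuition, here is the recursive traversal that the chained "Get items in a folder" / "If" nodes implement, expressed as a single function against the Microsoft Graph API (`GET /me/drive/items/{id}/children`). Token handling is omitted and `accessToken` is a placeholder.

```typescript
// Sketch of the recursive OneDrive search, mirroring the template's
// 8-level depth limit, returning the item IDs of discovered PDFs.
const GRAPH = "https://graph.microsoft.com/v1.0";

async function findPdfs(folderId: string, accessToken: string, depth = 0): Promise<string[]> {
  if (depth > 8) return []; // mirror the template's 8-level limit

  const res = await fetch(`${GRAPH}/me/drive/items/${folderId}/children`, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  const { value: children } = await res.json();

  const pdfs: string[] = [];
  for (const child of children) {
    if (child.folder) {
      // Recurse into subfolders one level deeper.
      pdfs.push(...(await findPdfs(child.id, accessToken, depth + 1)));
    } else if (child.name?.toLowerCase().endsWith(".pdf")) {
      pdfs.push(child.id); // collect PDF item IDs for downstream analysis
    }
  }
  return pdfs;
}
```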