by Amine ARAGRAG
This n8n template automates the collection and enrichment of Product Hunt posts using AI and Google Sheets. It fetches new tools daily, translates content, categorizes them intelligently, and saves everything into a structured spreadsheet—ideal for building directories, research dashboards, newsletters, or competitive intelligence assets.

**Good to know**
- Sticky notes inside the workflow explain each functional block and the required configuration.
- Uses cursor-based pagination to safely fetch Product Hunt data.
- An AI agent handles translation, documentation generation, tech extraction, and function-area classification.
- Category translations are synced with a Google Sheets dictionary to avoid duplicates.
- All enriched entries are stored in a clean "Tools" sheet for easy filtering or reporting.

**How it works**
- A schedule trigger starts the workflow daily.
- Product Hunt posts are retrieved via GraphQL and processed in batches.
- A Code node restructures each product into a consistent schema (see the sketch below).
- The workflow checks whether a product already exists in Google Sheets.
- For new items, the AI agent generates metadata, translations, and documentation.
- Categories are matched against, or added to, the Google Sheets dictionary.
- The final enriched product entry is appended or updated in the spreadsheet.
- Pagination continues until no next page remains.

**How to use**
- Connect Product Hunt OAuth2, Google Sheets, and OpenAI credentials.
- Adjust the schedule trigger to your preferred frequency.
- Optionally expand the enrichment fields (tags, scoring, custom classifications).
- Replace the trigger with a webhook or manual trigger if needed.

**Requirements**
- Product Hunt OAuth2 credentials
- Google Sheets account
- OpenAI (or compatible) API access

**Customising this workflow**
- Add Slack or Discord notifications for new tools.
- Push enriched data to Airtable, Notion, or a database.
- Extend AI enrichment with summaries or SEO fields.
- Use the Google Sheet as a backend for dashboards or frontend applications.
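As a rough illustration of the restructuring and cursor handling, a Code node along these lines could flatten each GraphQL edge into a consistent schema and carry the end cursor forward for the next page. This is a minimal sketch; the response shape and field names are assumptions and should be matched to the actual GraphQL query used in the workflow.

```javascript
// Hypothetical sketch of the restructuring Code node (n8n Code node, "Run Once for All Items").
// Assumes the previous HTTP Request node returned a GraphQL response shaped like
// { data: { posts: { edges: [...], pageInfo: {...} } } } (an assumption; adjust to your query).
const response = $input.first().json;
const posts = response.data?.posts ?? { edges: [], pageInfo: {} };

return posts.edges.map(({ node }) => ({
  json: {
    id: node.id,
    name: node.name,
    tagline: node.tagline,
    url: node.url,
    // Carried along so a later node can keep paginating until hasNextPage is false.
    nextCursor: posts.pageInfo.hasNextPage ? posts.pageInfo.endCursor : null,
  },
}));
```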
by Lidia
**Who’s it for**
Teams who want to automatically generate structured meeting minutes from uploaded transcripts and instantly share them in Slack. Perfect for startups, project teams, or any company that collects meeting transcripts in Google Drive.

**How it works / What it does**
This workflow automatically turns raw meeting transcripts into well-structured minutes in Markdown and posts them to Slack:
- **Google Drive Trigger** – Watches a specific folder. Any new transcript file added will start the workflow.
- **Download File** – Grabs the transcript.
- **Prep Transcript** – Converts the file into plain text and passes the transcript downstream.
- **Message a Model** – Sends the transcript to OpenAI GPT for summarization using a structured system prompt (action items, decisions, N/A placeholders).
- **Make Minutes** – Formats GPT's response into a Markdown file (see the sketch below).
- **Slack: Send a message** – Posts a Slack message announcing the auto-generated minutes.
- **Slack: Upload a file** – Uploads the full Markdown minutes file into the chosen Slack channel.

End result: your Slack channel always has clear, standardized minutes right after a meeting.

**How to set up**
- Google Drive: Create a folder where you'll drop transcript files, then configure the folder ID in the Google Drive Trigger node.
- OpenAI: Add your OpenAI API credentials in the Message a Model node and select a supported GPT model (e.g., gpt-4o-mini or gpt-4).
- Slack: Connect your Slack account and set the target channel ID in the Slack nodes.
- Run the workflow and drop a transcript file into Drive. Minutes will appear in Slack automatically.

**Requirements**
- Google Drive account (for transcript upload)
- OpenAI API key (for text summarization)
- Slack workspace (for message posting and file upload)

**How to customize the workflow**
- **Change the summary structure**: Adjust the system prompt inside **Message a Model** (e.g., shorter summaries, a language other than English).
- **Different output format**: Modify **Make Minutes** to output plain text, PDF, or HTML instead of Markdown.
- **New destinations**: Add more nodes to send minutes to email, Notion, or Confluence in parallel.
- **Multiple triggers**: Replace the Google Drive trigger with a Webhook if you want to integrate with Zoom or MS Teams transcript exports.

**Good to know**
- OpenAI API calls are billed separately. See OpenAI pricing.
- Files must be text-based (.txt or .md). For PDFs or docs, add a conversion step before summarization.
- Slack requires the bot user to be a member of the target channel, otherwise you'll see a not_in_channel error.
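For illustration, the Make Minutes step could be a Code node that wraps the model's reply in Markdown and attaches it as a binary file for the Slack upload. This is a minimal sketch; the path to the model's reply text is an assumption and should be adjusted to the actual output of the Message a Model node.

```javascript
// Hypothetical sketch of the "Make Minutes" Code node.
// Assumes the OpenAI reply text is at json.message.content or json.text (an assumption).
const reply = $input.first().json.message?.content ?? $input.first().json.text ?? '';
const markdown = `# Meeting Minutes\n\n${reply}\n`;

return [{
  json: { fileName: 'minutes.md' },
  binary: {
    data: {
      data: Buffer.from(markdown, 'utf8').toString('base64'),
      mimeType: 'text/markdown',
      fileName: 'minutes.md',
    },
  },
}];
```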
by Rapiwa
**Who is this for?**
This workflow is designed for online store owners, customer-success teams, and marketing operators who want to automatically verify customers' WhatsApp numbers and deliver order updates or invoice links via WhatsApp. It is built around the WooCommerce Trigger (order.updated event) but is easily adaptable to Shopify or other platforms that provide billing and line_items in the trigger payload.

**What this Workflow Does / Key Features**
- Listens for WooCommerce order events (for example, order.updated) via a Webhook or a WooCommerce trigger.
- Filters only orders with status "completed" and maps the payload into a normalized object: { data: { customer, products, invoice_link } } using the "Order Completed check" Code node.
- Iterates over line items using SplitInBatches to control throughput.
- Cleans phone numbers (the "Clean WhatsApp Number" Code node) by removing all non-digit characters (see the sketch below).
- Verifies whether the cleaned phone number is registered on WhatsApp using Rapiwa's verify endpoint (POST https://app.rapiwa.com/api/verify-whatsapp).
- If verified, sends a templated WhatsApp message via Rapiwa (POST https://app.rapiwa.com/api/send-message).
- Appends an audit row to a "Verified & Sent" Google Sheet for successful sends, or to an "Unverified & Not Sent" sheet for unverified numbers.
- Uses Wait and batching to throttle requests and avoid API rate limits.

**Requirements**
- HTTP Bearer credential for Rapiwa (example name in the flow: Rapiwa Bearer Auth).
- WooCommerce API credential for the trigger (example: WooCommerce (get customer)).
- Running n8n instance with these nodes: WooCommerce Trigger, Code, SplitInBatches, HTTP Request, IF, Google Sheets, Wait.
- Rapiwa account and a valid Bearer token.
- Google account with Sheets access and OAuth2 credentials configured in n8n.
- WooCommerce store (or any trigger source) that provides billing and line_items in the payload.

**How to Use — step-by-step Setup**
1) Credentials
- Rapiwa: Create an HTTP Bearer credential in n8n and paste your token (flow example name: Rapiwa Bearer Auth).
- Google Sheets: Add an OAuth2 credential (flow example name: Google Sheets).
- WooCommerce: Add the WooCommerce API credential or configure a Webhook on your store.
2) Configure Google Sheets
- The exported flow uses spreadsheet ID 1S3RtGt5xxxxxxxXmQi_s (Sheet gid=0) as an example. Replace it with your spreadsheet ID and sheet gid.
- Ensure your sheet column headers exactly match the mapping keys listed below (case and trailing spaces must match, or be corrected in the mapping).
3) Verify the HTTP Request nodes
- Verify endpoint: POST https://app.rapiwa.com/api/verify-whatsapp — sends { number } (uses the HTTP Bearer credential).
- Send endpoint: POST https://app.rapiwa.com/api/send-message — sends number, message_type=text, and a templated message that uses fields from the Clean WhatsApp Number output.
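For illustration, the phone-number cleaning step could look roughly like this. A minimal sketch; the field path to the customer's phone is an assumption and should be adjusted to the normalized object produced earlier in the flow.

```javascript
// Hypothetical sketch of the "Clean WhatsApp Number" Code node.
// Assumes each item carries the normalized { data: { customer, ... } } object and that the
// customer's phone lives at data.customer.phone (an assumption; adjust to your payload).
return $input.all().map((item) => {
  const raw = item.json.data?.customer?.phone ?? '';
  const cleaned = String(raw).replace(/\D/g, ''); // keep digits only
  return {
    json: {
      ...item.json,
      number: cleaned, // consumed by the Rapiwa verify and send HTTP Request nodes
    },
  };
});
```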
**Google Sheet Column Structure**
The Google Sheets nodes in the flow append rows with the column keys below. Make sure the spreadsheet headers match. A Google Sheet formatted like this ➤ sample:

| Name | Number | Email | Address | Product Title | Product ID | Total Price | Invoice Link | Delivery Status | Validity | Status |
|------|--------|-------|---------|---------------|------------|-------------|--------------|-----------------|----------|--------|
| Abdul Mannan | 8801322827799 | contact@spagreen.net | mirpur dohs | Air force 1 Fossil 1:1 - 44 | 238 | BDT 5500.00 | Invoice link | completed | verified | sent |
| Abdul Mannan | 8801322827799 | contact@spagreen.net | mirpur dohs h#1168 rd#10 av#10 mirpur dohs dhaka | Air force 1 Fossil 1:1 - 44 | 238 | BDT 5500.00 | Invoice link | completed | unverified | not sent |

**Important Notes**
- Do not hard-code API keys or tokens; always use n8n credentials.
- Google Sheets column header names must match the mapping keys used in the nodes. Trailing spaces are a common accidental problem — trim them in the spreadsheet or adjust the mapping.
- The IF node in the exported flow compares against the string "true". If the verify endpoint returns boolean true/false, convert to string or boolean consistently before the IF.
- Message templates in the flow reference $('Clean WhatsApp Number').item.json.data.products[0] — update the templates if you need multiple-product support.

**Useful Links**
- **Dashboard:** https://app.rapiwa.com
- **Official Website:** https://rapiwa.com
- **Documentation:** https://docs.rapiwa.com

**Support & Help**
- **WhatsApp**: Chat on WhatsApp
- **Discord**: SpaGreen Community
- **Facebook Group**: SpaGreen Support
- **Website**: https://spagreen.net
- **Developer Portfolio**: Codecanyon SpaGreen
by iamvaar
Youtube Video: https://youtu.be/dEtV7OYuMFQ?si=fOAlZWz4aDuFFovH

**Workflow Pre-requisites**

**Step 1: Supabase Setup**
First, replace the keys in the "Save the embedding in DB" and "Search Embeddings" nodes with your new Supabase keys. After that, run the following snippets in your Supabase SQL editor.

Create the table to store chunks and embeddings:

```sql
CREATE TABLE public."RAG" (
  id bigserial PRIMARY KEY,
  chunk text NULL,
  embeddings vector(1024) NULL
) TABLESPACE pg_default;
```

Create a function to match embeddings:

```sql
DROP FUNCTION IF EXISTS public.matchembeddings1(integer, vector);

CREATE OR REPLACE FUNCTION public.matchembeddings1(
  match_count integer,
  query_embedding vector
)
RETURNS TABLE (
  chunk text,
  similarity float
)
LANGUAGE plpgsql
AS $$
BEGIN
  RETURN QUERY
  SELECT
    R.chunk,
    1 - (R.embeddings <=> query_embedding) AS similarity
  FROM public."RAG" AS R
  ORDER BY R.embeddings <=> query_embedding
  LIMIT match_count;
END;
$$;
```

**Step 2: Create a Jotform with these fields**
- Your full name
- Email address
- Upload PDF Document (the field where you upload the knowledge base as a PDF)

**Step 3: Get a Together AI API Key**
Get a Together AI API key and paste it into the "Embedding Uploaded document" node and the "Embed User Message" node.

Here is a detailed, node-by-node explanation of the n8n workflow, which is divided into two main parts.

**Part 1: Ingesting Knowledge from a PDF**
This first sequence of nodes runs when you submit a PDF through a Jotform. Its purpose is to read the document, process its content, and save it in a specialized database for the AI to use later.

- **JotForm Trigger** (Trigger): Starts the entire workflow. It listens for new submissions on a specific Jotform; when someone uploads a file and submits the form, this node activates and passes the submission data to the next step.
- **Grab New knowledgebase** (HTTP Request): The initial trigger from Jotform only contains basic information. This node makes a follow-up call to the Jotform API using the submissionID to get the complete details of that submission, including the link to the uploaded file.
- **Grab the uploaded knowledgebase file link** (HTTP Request): Using the file link obtained from the previous node, this step downloads the actual PDF file. It is set to receive the response as a file, not as text.
- **Extract Text from PDF File** (Extract From File): Takes the binary PDF downloaded in the previous step and extracts all readable text content from it. The output is a single block of plain text.
- **Splitting into Chunks** (Code): Runs a small JavaScript snippet that chops the large block of text into smaller, more manageable "chunks" of a predefined length (see the sketch below). This is critical because AI models work more effectively with smaller, focused pieces of text.
- **Embedding Uploaded document** (HTTP Request): A key AI step. It sends each text chunk to an embeddings API, where a specified AI model converts the semantic meaning of the chunk into a numerical list called an embedding, or vector. This vector is like a mathematical fingerprint of the text's meaning.
- **Save the embedding in DB** (Supabase): Connects to your Supabase database. For every chunk, it creates a new row in the specified table and stores two pieces of information: the original text chunk and its corresponding numerical embedding (its "fingerprint") from the previous step.
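The chunking logic is not specified beyond "a predefined length", but a minimal sketch of such a Code node might look like this. Chunk size and overlap are assumptions, and the name of the text field depends on the Extract From File output.

```javascript
// Hypothetical sketch of the "Splitting into Chunks" Code node.
// Assumes the extracted PDF text sits in the "text" field of the incoming item (an assumption).
const fullText = $input.first().json.text ?? '';
const CHUNK_SIZE = 1000; // characters per chunk (assumption)
const OVERLAP = 100;     // overlap between consecutive chunks to preserve context (assumption)

const chunks = [];
for (let start = 0; start < fullText.length; start += CHUNK_SIZE - OVERLAP) {
  chunks.push(fullText.slice(start, start + CHUNK_SIZE));
}

// One n8n item per chunk, so the embeddings HTTP Request node runs once per chunk.
return chunks.map((chunk) => ({ json: { chunk } }));
```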
**Part 2: Answering Questions via Chat**
This second sequence starts when a user sends a message. It uses the knowledge stored in the database to find relevant information and generate an intelligent answer.

- **When chat message received** (Chat Trigger): Starts the second part of the workflow. It listens for any incoming message from a user in a connected chat application.
- **Embed User Message** (HTTP Request): Takes the user's question and sends it to the exact same embeddings API and model used in Part 1, converting the question's meaning into the same kind of numerical vector, or "fingerprint."
- **Search Embeddings** (HTTP Request): The "retrieval" step. It calls the custom database function in Supabase, passing the question's embedding and asking it to search the knowledge base table for the top text chunks whose embeddings are mathematically most similar to the question's embedding (see the sketch below).
- **Aggregate** (Aggregate): The search from the previous step returns multiple separate items. This utility node bundles those items into a single, combined piece of data, which makes it easier to feed all the context into the final AI model at once.
- **AI Agent & Google Gemini Chat Model** (LangChain Agent & AI Model): The "generation" step where the final answer is created. The AI Agent node is given a detailed prompt that tells the Google Gemini Chat Model to act as a professional support agent. Crucially, it provides the AI with the user's original question and the aggregated text chunks from the Aggregate node as its only source of truth. It then instructs the AI to formulate an answer based only on that provided context, format it for a specific chat style, and say "I don't know" if the answer cannot be found in the chunks. This prevents the AI from making things up.
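For reference, the Search Embeddings call maps onto Supabase's standard RPC endpoint for Postgres functions. A rough code equivalent of what the HTTP Request node sends is shown below; the URL and key placeholders and the match count are assumptions, and the function name matches the matchembeddings1 function defined above.

```javascript
// Hypothetical sketch of the "Search Embeddings" request.
// supabaseUrl and supabaseKey are placeholders for your project values; queryEmbedding is
// the vector produced by the "Embed User Message" node.
async function searchEmbeddings(supabaseUrl, supabaseKey, queryEmbedding, matchCount = 5) {
  const response = await fetch(`${supabaseUrl}/rest/v1/rpc/matchembeddings1`, {
    method: 'POST',
    headers: {
      apikey: supabaseKey,
      Authorization: `Bearer ${supabaseKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      match_count: matchCount,
      query_embedding: queryEmbedding,
    }),
  });
  return response.json(); // expected shape: [{ chunk, similarity }, ...]
}
```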
by Anna Bui
Automatically analyze n8n workflow errors with AI, create support tickets, and send detailed Slack notifications. Perfect for development teams and businesses that need intelligent error handling with automated support workflows. Never miss critical workflow failures again!

**How it works**
- Error Trigger captures any workflow failure in your n8n instance.
- AI Debugger analyzes the error using structured reasoning to identify root causes.
- Clean Data transforms the AI analysis into organized, actionable information (see the sketch below).
- Create Support Ticket automatically generates a detailed ticket in FreshDesk.
- Merge combines the ticket data with the AI analysis for comprehensive reporting.
- Generate Slack Alert creates rich, formatted notifications with all context.
- Send to Team delivers instant alerts to your designated Slack channel.

**How to use**
- Replace the FreshDesk credentials with your helpdesk system's API.
- Configure the Slack channel for your team notifications.
- Customize the AI analysis prompts for your specific error types.
- Set it up as the global error handler for all your critical workflows.

**Requirements**
- FreshDesk account (or compatible ticketing system)
- Slack workspace with bot permissions
- OpenAI API access for AI analysis
- n8n Cloud or self-hosted with AI nodes enabled

**Good to know**
- OpenAI API calls cost approximately $0.01-0.03 per error analysis.
- Works with any ticketing system that supports a REST API.
- Can be triggered by webhooks from external monitoring tools.
- Slack messages use rich formatting for mobile-friendly alerts.

Need Help? Join the Discord or ask in the Forum! Happy Monitoring!
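As an illustration of the kind of data the Clean Data step works with, a Code node downstream of the Error Trigger might pull out the fields most useful for a ticket. The field paths here are assumptions based on typical Error Trigger output; verify them against the actual output in your instance.

```javascript
// Hypothetical sketch of a "Clean Data"-style Code node after the Error Trigger.
// Field paths are assumptions; adjust to the Error Trigger payload you actually receive.
const input = $input.first().json;

return [{
  json: {
    workflowName: input.workflow?.name ?? 'unknown workflow',
    executionId: input.execution?.id ?? null,
    executionUrl: input.execution?.url ?? null,
    failedNode: input.execution?.lastNodeExecuted ?? 'unknown node',
    errorMessage: input.execution?.error?.message ?? 'no error message captured',
    occurredAt: new Date().toISOString(),
  },
}];
```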
by Guillaume Duvernay
Move beyond generic AI-generated content and create articles that are high-quality, factually reliable, and aligned with your unique expertise. This template orchestrates a sophisticated "research-first" content creation process. Instead of simply asking an AI to write an article from scratch, it first uses an AI planner to break your topic down into logical sub-questions. It then queries a Lookio assistant—which you've connected to your own trusted knowledge base of uploaded documents—to build a comprehensive research brief. Only then is this fact-checked brief handed to a powerful AI writer to compose the final article, complete with source links. This is the ultimate workflow for scaling expert-level content creation.

**Who is this for?**
- **Content marketers & SEO specialists:** Scale the creation of authoritative, expert-level blog posts that are grounded in factual, source-based information.
- **Technical writers & subject matter experts:** Transform your complex internal documentation into accessible public-facing articles, tutorials, and guides.
- **Marketing agencies:** Quickly generate high-quality, well-researched drafts for clients by connecting the workflow to their provided brand and product materials.

**What problem does this solve?**
- **Reduces AI "hallucinations":** By grounding the entire writing process in your own trusted knowledge base, the AI generates content based on facts you provide, not on potentially incorrect information from its general training data.
- **Ensures comprehensive topic coverage:** The initial AI-powered "topic breakdown" step acts like an expert outliner, ensuring the final article is well-structured and covers all key sub-topics.
- **Automates source citation:** The workflow is designed to preserve and integrate source URLs from your knowledge base directly into the final article as hyperlinks, boosting credibility and saving you manual effort.
- **Scales expert content creation:** It effectively mimics the workflow of a human expert (outline, research, consolidate, write) in an automated, scalable, and incredibly fast way.

**How it works**
This workflow follows a multi-step process to ensure the highest-quality output:
1. **Decomposition:** You provide an article title and guidelines via the built-in form. An initial AI call acts as a "planner," breaking down the main topic into an array of 5-8 logical sub-questions.
2. **Fact-based research (RAG):** The workflow loops through each sub-question and queries your Lookio assistant. This assistant, which you have pre-configured by uploading your own documents, finds the relevant information and source links for each point.
3. **Consolidation:** All the retrieved question-and-answer pairs are compiled into a single, comprehensive research brief (see the sketch below).
4. **Final article generation:** This complete, fact-checked brief is handed to a final, powerful AI writer (e.g., GPT-4o). Its instructions are clear: write a high-quality article using only the provided information and integrate the source links as hyperlinks where appropriate.
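The consolidation step is essentially string assembly. A Code node along these lines could merge the looped question-and-answer items into one research brief for the writer; the field names (question, answer, sources) are assumptions and depend on how the Lookio response is mapped in the loop.

```javascript
// Hypothetical sketch of the consolidation step.
// Assumes each incoming item carries { question, answer, sources } populated during the
// research loop (an assumption; rename to match the Lookio query node's output).
const sections = $input.all().map(({ json }, index) => {
  const sources = Array.isArray(json.sources) ? json.sources.join(', ') : (json.sources ?? '');
  return [
    `## ${index + 1}. ${json.question}`,
    json.answer,
    sources ? `Sources: ${sources}` : '',
  ].filter(Boolean).join('\n\n');
});

return [{ json: { researchBrief: sections.join('\n\n') } }];
```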
**Building your own RAG pipeline vs. using Lookio or alternative tools**
Building a RAG system natively within n8n offers deep customization, but it requires managing a toolchain for data processing, text chunking, and retrieval optimization. An alternative is to use a managed service like Lookio, which provides RAG functionality through an API. This approach abstracts the backend infrastructure for document ingestion and querying, trading the granular control of a native build for a reduction in development and maintenance tasks.

**Implementing the template**
1. **Set up your Lookio assistant (prerequisite):** Lookio is a platform for building intelligent assistants that leverage your organization's documents as a dedicated knowledge base.
- First, sign up at Lookio. You'll get 50 free credits to get started.
- Upload the documents you want to use as your knowledge base.
- Create a new assistant and then generate an API key.
- Copy your Assistant ID and your API key for the next step.
2. **Configure the workflow:**
- Connect your AI provider (e.g., OpenAI) credentials to the two Language Model nodes.
- In the Query Lookio Assistant (HTTP Request) node, paste your Assistant ID in the body and add your Lookio API key for authentication (we recommend using a Bearer Token credential).
3. **Activate the workflow:** Toggle the workflow to "Active" and use the built-in form to generate your first fact-checked article!

**Taking it further**
- **Automate publishing:** Connect the final **Article result** node to a **Webflow** or **WordPress** node to automatically create a draft post in your CMS.
- **Generate content in bulk:** Replace the **Form Trigger** with an **Airtable** or **Google Sheet** trigger to automatically generate a whole batch of articles from your content calendar.
- **Customize the writing style:** Tweak the system prompt in the final **New content - Generate the AI output** node to match your brand's specific tone of voice, add SEO keywords, or include specific calls-to-action.
by David Olusola
🎥 Auto-Save Zoom Recordings to Google Drive + Log Meetings in Airtable

This workflow automatically saves Zoom meeting recordings to Google Drive and logs all important details into Airtable for easy tracking. Perfect for teams that want a searchable meeting archive.

**⚙️ How It Works**
- **Zoom Recording Webhook**: Listens for recording.completed events from Zoom and captures metadata (Meeting ID, Topic, Host, File Type, File Size, etc.).
- **Normalize Recording Data**: A Code node extracts and formats the Zoom payload into clean JSON (see the sketch below).
- **Download Recording**: Uses an HTTP Request to download the recording file.
- **Upload to Google Drive**: Saves the recording into your chosen Google Drive folder and returns the file ID and share link.
- **Log Result**: Combines the Zoom metadata with the Google Drive file info.
- **Save to Airtable**: Logs all details into your Meeting Logs table: Meeting ID, Topic, Host, File Type, File Size, Google Drive Saved (Yes/No), Drive Link, Timestamp.

**🛠️ Setup Steps**
1. Zoom: Create a Zoom App, enable the recording.completed event, and add the workflow's Webhook URL as your Zoom Event Subscription endpoint.
2. Google Drive: Connect OAuth in n8n and replace YOUR_FOLDER_ID with your destination Drive folder.
3. Airtable: Create a base with a Meeting Logs table and add these columns: Meeting ID, Topic, Host, File Type, File Size, Google Drive Saved, Drive Link, Timestamp. Replace YOUR_AIRTABLE_BASE_ID in the node.

**📊 Example Airtable Output**

| Meeting ID | Topic | Host | File Type | File Size | Google Drive Saved | Drive Link | Timestamp |
|------------|-------|------|-----------|-----------|--------------------|------------|-----------|
| 987654321 | Team Sync | host@email.com | MP4 | 104 MB | Yes | 🔗 Link | 2025-08-30 15:02:10 |

⚡ With this workflow, every Zoom recording is safely archived in Google Drive and logged in Airtable for quick search, reporting, and compliance tracking.
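For reference, the normalization step could be sketched roughly like this. The payload paths follow Zoom's typical recording.completed event shape and the n8n Webhook node's body wrapper; treat them as assumptions and verify against a real event before relying on them.

```javascript
// Hypothetical sketch of the "Normalize Recording Data" Code node.
// Assumes the Webhook node exposes the Zoom event under json.body and that the event
// follows the usual recording.completed shape (assumptions; verify with an actual payload).
const event = $input.first().json.body ?? $input.first().json;
const meeting = event.payload?.object ?? {};
const recording = (meeting.recording_files ?? [])[0] ?? {};

return [{
  json: {
    meetingId: meeting.id,
    topic: meeting.topic,
    host: meeting.host_email,
    fileType: recording.file_type,
    fileSize: recording.file_size,
    downloadUrl: recording.download_url, // consumed by the "Download Recording" HTTP Request node
    timestamp: new Date().toISOString(),
  },
}];
```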
by Shayan Ali Bakhsh
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**Try It Out!**
Automatically generate a LinkedIn carousel and upload it to LinkedIn.

Use case: LinkedIn content creation, specifically carousels, but it could be adjusted for many other content types as well.

**How it works**
- It runs automatically every day at 6:00 AM.
- Gets the latest news from TechRadar.
- Parses it into readable JSON (see the sketch below).
- AI decides which news item resonates with your profile.
- The title and description of that news item are used to generate the final LinkedIn carousel content. This step can also be triggered by the Form trigger.
- After carousel generation, the content is handed to Post Nitro to create images from it; Post Nitro returns a PDF file.
- The PDF file is uploaded to LinkedIn to get a file ID, which is used in the next step.
- Finally, the post description is created and the post is published to LinkedIn.

**How to use**
- It runs every day at 6:00 AM automatically. Just make it live.
- Submit the form with a correct title and description (no input validation is included, so make sure they are correct 😅).

**Requirements**
- Install the Post Nitro community node: @postnitro/n8n-nodes-postnitro-ai
- The following API keys are needed to make it work:
  - Google Gemini (for Gemini 2.5-Flash usage); Docs: Google Gemini Key
  - Post Nitro credentials (API key + Template ID + Brand ID); Docs: Post Nitro
  - LinkedIn API key; Docs: LinkedIn API

**Need Help?**
Message me on LinkedIn. Happy Automation!
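The "parse into readable JSON" step is essentially field mapping. A minimal sketch of such a Code node is shown below; the field names (title, contentSnippet, link, isoDate) follow the typical output of an RSS read node and are assumptions to adjust to your feed.

```javascript
// Hypothetical sketch of the "parse into readable JSON" Code node.
// Field names are assumptions based on typical RSS node output; adjust to your feed items.
return $input.all().map(({ json }) => ({
  json: {
    title: json.title ?? '',
    description: json.contentSnippet ?? json.content ?? '',
    link: json.link ?? '',
    publishedAt: json.isoDate ?? null,
  },
}));
```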
by Anurag Patil
**Geekhack Discord Updater**

**How It Works**
This n8n workflow automatically monitors GeekHack forum RSS feeds every hour for new keyboard posts in the Interest Checks and Group Buys sections. When it finds a new thread (not a reply), it:

- **Monitors RSS Feeds**: Checks two GeekHack RSS feeds for new posts (50 items each).
- **Filters New Threads**: Removes reply posts by checking for the "Re:" prefix in titles.
- **Prevents Duplicates**: Queries the PostgreSQL database to skip already-processed threads.
- **Scrapes Content**: Fetches the full thread page and extracts the original post.
- **Extracts Images**: Uses a regex to find all images in the post content.
- **Creates Discord Embed**: Formats the post data into a rich Discord embed with up to 4 images (see the sketch below).
- **Sends to Multiple Webhooks**: Retrieves all webhook URLs from the database and sends to each one.
- **Logs Processing**: Records the thread as processed to prevent duplicates.

The workflow includes a webhook management system with a web form to add/remove Discord webhooks dynamically, allowing you to send notifications to multiple Discord servers or channels.

**Steps to Set Up**

**Prerequisites**
- n8n instance running
- PostgreSQL database
- Discord webhook URL(s)

**1. Database Setup**
Create the PostgreSQL tables.

Processed threads table:

```sql
CREATE TABLE processed_threads (
  topic_id VARCHAR PRIMARY KEY,
  title TEXT,
  processed_at TIMESTAMP DEFAULT NOW()
);
```

Webhooks table:

```sql
CREATE TABLE webhooks (
  id SERIAL PRIMARY KEY,
  url TEXT NOT NULL,
  created_at TIMESTAMP DEFAULT NOW()
);
```

**2. n8n Configuration**
- Import the workflow: copy the workflow JSON, go to n8n → Workflows → Import from JSON, paste the JSON, and import.
- Configure credentials: create a new PostgreSQL credential with your database connection details. All PostgreSQL nodes should use the same credential.

**3. Node Configuration**
- Schedule Trigger: already configured for 1-hour intervals; modify if different timing is needed.
- PostgreSQL nodes: ensure "Check if Processed", "Update entry", "Insert rows in a table", and "Select rows from a table" all use your PostgreSQL credential. The database schema should be "public" and the table names "processed_threads" and "webhooks".
- RSS feed limits: both RSS feeds are set to limit=50 items; adjust if you need more or fewer items per check.

**4. Webhook Management**
- Adding webhooks via the web form: the workflow creates a form trigger for adding webhooks. Access the form URL from the "On form submission" node and submit Discord webhook URLs through the form; they are stored in the database automatically.
- Manual webhook addition: alternatively, insert webhooks directly into the database:

```sql
INSERT INTO webhooks (url) VALUES ('https://discord.com/api/webhooks/YOUR_WEBHOOK_URL');
```

**5. Testing**
- Test the main workflow: ensure you have at least one webhook in the database, activate the workflow, use "Execute Workflow" to test manually, and check the Discord channels for test messages.
- Test the webhook form: get the form URL from the "On form submission" node, submit a test webhook URL, and verify it appears in the webhooks table.

**6. Monitoring**
- Check the execution history for errors.
- Monitor both database tables for entries.
- Verify that all registered webhooks receive notifications.
- Adjust the schedule timing if needed.

**7. Managing Webhooks**
- Use the web form to add new webhook URLs.
- Remove webhooks by deleting them from the database:

```sql
DELETE FROM webhooks WHERE url = 'webhook_url_to_remove';
```

The workflow will now automatically post new GeekHack threads to all registered Discord webhooks every hour, with the ability to dynamically manage webhook destinations through the web form interface.
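As a sketch of the filtering, image extraction, and embed-building logic: the thread field names below are assumptions, and the multi-image approach relies on Discord rendering embeds that share the same url as one post with an image gallery, which is one common way to attach up to four images.

```javascript
// Hypothetical sketch of the reply filter, image regex, and Discord embed payload.
// Assumes each item carries { title, link, content } from the RSS and scrape steps (assumption).
const { title, link, content } = $input.first().json;

// Skip replies: GeekHack reply titles start with "Re:".
if (title.startsWith('Re:')) {
  return [];
}

// Pull image URLs out of the post HTML with a simple regex.
const imageUrls = [...content.matchAll(/<img[^>]+src="([^"]+)"/g)]
  .map((match) => match[1])
  .slice(0, 4); // at most 4 images per post

// Embeds sharing the same "url" are grouped into a single post with an image gallery.
const embeds = imageUrls.length
  ? imageUrls.map((url, i) => ({
      ...(i === 0 ? { title, description: 'New GeekHack thread' } : {}),
      url: link,
      image: { url },
    }))
  : [{ title, url: link, description: 'New GeekHack thread' }];

return [{ json: { body: { embeds } } }];
```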
by Recrutei Automações
**Overview: Automated LinkedIn Job Posting with AI**
This workflow automates the publication of new job vacancies on LinkedIn immediately after they are created in the Recrutei ATS (Applicant Tracking System). It leverages a Code node to pre-process the job data and a powerful AI model (GPT-4o-mini, configured via the OpenAI node) to generate compelling, marketing-ready content. This template is designed for Recruitment and Marketing teams aiming to ensure consistent, timely, and high-quality job postings while saving significant operational time.

**Workflow Logic & Steps**
1. **Recrutei Webhook Trigger**: The workflow is instantly triggered when a new job vacancy is published in the Recrutei ATS, sending all relevant job data via a webhook.
2. **Data Cleaning (Code Node 1)**: The first Code node standardizes boolean fields (like remote, fixed_remuneration) from 0/1 to descriptive text ('yes'/'no'); see the sketch below.
3. **Prompt Transformation (Code Node 2)**: The second, crucial Code node receives the clean job data and: maps the original data keys (e.g., title, description) to user-friendly labels (e.g., Job Title, Detailed Description); cleans and sanitizes the HTML description into readable Markdown; and generates a single, highly structured prompt containing all job details, ready for the AI model.
4. **AI Content Generation (OpenAI)**: The AI model receives the structured prompt and acts as a "Marketing Copywriter" to create a compelling, engaging post specifically optimized for LinkedIn.
5. **LinkedIn Post**: The generated text is automatically posted to the configured LinkedIn profile or Company Page.
6. **Internal Logging (Google Sheets)**: The workflow concludes by logging the event (Job Title, Confirmation Status) into a Google Sheet for internal tracking and auditing.

**Setup Instructions**
To implement this workflow successfully, configure the following.

Credentials:
- OpenAI (for the content generator).
- LinkedIn (for the post action).
- Google Sheets (for the logging).

Node configuration:
- Set up the Webhook URL in your Recrutei ATS settings.
- Replace YOUR_SHEET_ID_HERE in the Google Sheets Logging node with your sheet's ID.
- Select the correct LinkedIn profile/company page in the "Create a post" node.
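A minimal sketch of the first Code node is shown below, assuming the webhook delivers the vacancy fields at the top level of the item; any field names beyond remote and fixed_remuneration are illustrative.

```javascript
// Hypothetical sketch of "Data Cleaning (Code Node 1)".
// Converts 0/1 boolean flags into 'yes'/'no' so the downstream AI prompt reads naturally.
const BOOLEAN_FIELDS = ['remote', 'fixed_remuneration']; // extend with other 0/1 fields as needed

return $input.all().map((item) => {
  const job = { ...item.json };
  for (const field of BOOLEAN_FIELDS) {
    if (field in job) {
      job[field] = Number(job[field]) === 1 ? 'yes' : 'no';
    }
  }
  return { json: job };
});
```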
by Rahul Joshi
**Description**
Automatically extract a structured skill matrix from PDF resumes in a Google Drive folder and store the results in Google Sheets. Uses Azure OpenAI (GPT-4o-mini) to analyze predefined tech stacks and filter for relevant proficiency. Fast, consistent insights ready for review. 🔍📊

**What This Template Does**
- Fetches all resumes from a designated Google Drive folder ("Resume_store"). 🗂️
- Downloads each resume file securely via the Google Drive API. ⬇️
- Extracts text from the PDF files for analysis. 📄➡️📝
- Analyzes skills with Azure OpenAI (GPT-4o-mini), rating each 1–5 and estimating years of experience. 🤖
- Parses and filters to include only skills with proficiency > 2 (see the sketch below), then updates Google Sheets ("Resume store" → "Sheet2"). ✅

**Key Benefits**
- Saves hours on manual resume screening. ⏱️
- Produces a consistent, structured skill matrix. 📐
- Focuses on intermediate to expert skills for faster shortlisting. 🎯
- Centralizes candidate data in Google Sheets for easy sharing. 🗃️

**Features**
- Predefined tech stack focus: React, Node.js, Angular, Python, Java, SQL, Docker, Kubernetes, AWS, Azure, GCP, HTML, CSS, JavaScript. 🧰
- Proficiency scoring (1–5) and estimated years of experience. 📈
- PDF-to-text extraction for robust parsing. 🧾
- JSON parsing with error handling for invalid outputs. 🛡️
- Manual Trigger to run on demand. ▶️

**Requirements**
- n8n instance (cloud or self-hosted).
- Google Drive access with credentials for the "Resume_store" folder.
- Google Sheets access to the "Resume store" spreadsheet and "Sheet2" tab.
- Azure OpenAI with GPT-4o-mini deployed and connected via secure credentials.
- PDF text extraction enabled within n8n.

**Target Audience**
- HR and Talent Acquisition teams. 👥
- Recruiters and staffing agencies. 🧑💼
- Operations teams managing hiring pipelines. 🧭
- Tech hiring managers seeking consistent skill insights. 💡

**Step-by-Step Setup Instructions**
1. Place candidate resumes (PDF) into Google Drive → "Resume_store".
2. In n8n, add Google Drive and Google Sheets credentials and authorize access.
3. In n8n, add Azure OpenAI credentials (GPT-4o-mini deployment).
4. Import the workflow, assign credentials to each node, and confirm the folder/sheet names.
5. Run the Manual Trigger to execute the flow and verify the data in "Resume store" → "Sheet2".
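The parse-and-filter step could be sketched as follows. This is a minimal sketch assuming the model returns a JSON array of { skill, proficiency, years } objects, which is an assumption about the prompt's output format.

```javascript
// Hypothetical sketch of the "parse and filter" Code node.
// Assumes the AI node's text output is a JSON array like
// [{ "skill": "React", "proficiency": 4, "years": 3 }, ...] (an assumption about the prompt).
const raw = $input.first().json.message?.content ?? $input.first().json.text ?? '[]';

let skills = [];
try {
  skills = JSON.parse(raw);
} catch (error) {
  // Error handling for invalid model output: surface the problem instead of breaking the run.
  return [{ json: { error: 'Invalid JSON from model', raw } }];
}

// Keep only intermediate-to-expert skills (proficiency > 2) for the sheet.
return skills
  .filter((s) => Number(s.proficiency) > 2)
  .map((s) => ({ json: { skill: s.skill, proficiency: s.proficiency, years: s.years } }));
```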
by Sridevi Edupuganti
**Try It Out!**
Use n8n to extract medical test data from diagnostic reports uploaded to Google Drive, automatically detect abnormal values, and generate personalized health advice.

**How it works**
- Upload a medical report (PDF or image) to a monitored Google Drive folder.
- Mistral AI extracts the text using OCR while preserving the document structure.
- GPT-4 parses the extracted text into structured JSON (patient info, test names, results, units, reference ranges).
- All test results are saved to the "All Values" sheet in Google Sheets.
- JavaScript code compares each result against its reference range to detect abnormalities (see the sketch below).
- For out-of-range values, GPT-4 generates personalized dietary, lifestyle, and exercise advice based on patient age and gender.
- Abnormal results with recommendations are saved to the "Out of Range Values" sheet.

**How to use**
- Set up Google Drive folder monitoring and a Google Sheet with two tabs: "All Values" and "Out of Range Values".
- Configure API credentials for Google Drive, Mistral AI, and OpenAI (GPT-4).
- Upload medical reports to your monitored folder.
- Review the extracted data and personalized health advice in Google Sheets.

**Requirements**
- Google Drive and Sheets with OAuth2 authentication
- Mistral AI API key for OCR
- OpenAI API key (GPT-4 access required) for intelligent extraction and advice generation

**Need Help?**
- See the detailed Read Me file at https://drive.google.com/file/d/1Wv7dfcBLsHZlPcy1QWPYk6XSyrS3H534/view?usp=sharing
- Join the n8n community forum for support
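The abnormality check could be sketched like this. A minimal sketch assuming each item carries a numeric result and a "low - high" style reference range string, which is an assumption about the structured JSON produced upstream.

```javascript
// Hypothetical sketch of the reference-range comparison Code node.
// Assumes items look like { testName, result, unit, referenceRange: "low - high" } (assumption).
return $input.all().map((item) => {
  const range = String(item.json.referenceRange ?? '');
  const value = parseFloat(item.json.result);
  const match = range.match(/([\d.]+)\s*-\s*([\d.]+)/);
  let status = 'unknown';

  if (match && !Number.isNaN(value)) {
    const low = parseFloat(match[1]);
    const high = parseFloat(match[2]);
    status = value < low ? 'low' : value > high ? 'high' : 'normal';
  }

  return { json: { ...item.json, status, outOfRange: status === 'low' || status === 'high' } };
});
```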