by Yasir
**🧠 Workflow Overview — AI-Powered Jobs Scraper & Relevancy Evaluator**

This workflow automates the process of finding highly relevant job listings based on a user’s resume, career preferences, and custom filters. It scrapes fresh job data, evaluates relevance using OpenAI GPT models, and automatically appends the results to your Google Sheet tracker — while skipping any jobs already in your sheet, so you don’t have to worry about duplicates. Perfect for recruiters, job seekers, or virtual assistants who want to automate job research and filtering.

**⚙️ What the Workflow Does**
- Takes user input through a form — including resume, preferences, target score, and Google Sheet link.
- Fetches job listings via an Apify LinkedIn Jobs API actor.
- Filters and deduplicates results (removes duplicates and blacklisted companies).
- Evaluates job relevancy using GPT-4o-mini, scoring each job (0–100) against the user’s resume & preferences.
- Applies a relevancy threshold to keep only top-matching jobs.
- Checks your Google Sheet for existing jobs and prevents duplicates.
- Appends new, relevant jobs directly into your provided Google Sheet.

**📋 What You’ll Get**
- A personal Job Scraper Form (public URL you can share or embed).
- Automatic job collection & filtering based on your inputs.
- **Relevance scoring** (0–100) for each job using your resume and preferences.
- A real-time job tracking Google Sheet that includes:
  - Job Title
  - Company Name & Profile
  - Job URLs
  - Location, Salary, HR Contact (if available)
  - Relevancy Score

**🪄 Setup Instructions**

**1. Required Accounts**
You’ll need:
- ✅ n8n account (self-hosted or Cloud)
- ✅ Google account (for Sheets integration)
- ✅ OpenAI account (for GPT API access)
- ✅ Apify account (to fetch job data)

**2. Connect Credentials**
In your n8n instance, go to Credentials → Add New:
- **Google Sheets OAuth2 API** – Connect your Google account.
- **OpenAI API** – Add your OpenAI API key.
- **Apify API** – Replace `<your_apify_api>` with your Apify API key.
**Set Up Apify API**
- Get your Apify API key: visit https://console.apify.com/settings/integrations and copy your API key.
- Rent the required Apify actor before running this workflow: go to https://console.apify.com/actors/BHzefUZlZRKWxkTck/input and click “Rent Actor”. Once rented, it can be used by your Apify account to fetch job listings.

**3. Set Up Your Google Sheet**
- Make a copy of this template: 📄 Google Sheet Template
- Enable Edit Access for anyone with the link.
- Copy your sheet’s URL — you’ll provide this when submitting the workflow form.

**4. Deploy & Run**
- Import this workflow (jobs_scraper.json) into your n8n workspace.
- Activate the workflow.
- Visit your form trigger endpoint (e.g. https://your-n8n-domain/webhook/jobs-scraper).
- Fill out the form with: job title(s), location, contract type, experience level, working mode, date posted, target relevancy score, Google Sheet link, resume text, and job preferences or ranking criteria.
- Submit — within minutes, new high-relevance job listings will appear in your Google Sheet automatically.

**🧩 Example Use Cases**
- Automate daily job scraping for clients or yourself.
- Filter jobs by AI-based relevance instead of keywords.
- Build a smart job board or job alert system.
- Support a career agency offering done-for-you job search services.

**💡 Tips**
- Adjust the “Target Relevancy Score” (e.g., 70–85) to control how strict the filtering is.
- You can add your own blacklisted companies in the Filter & Dedup Jobs node.
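The filter-and-dedup step described above can be sketched as a small Code-node snippet. This is a minimal illustration only; the field names (`jobUrl`, `companyName`, `relevancyScore`) are assumptions, not the workflow's actual schema:

```javascript
// Minimal sketch of the "Filter & Dedup Jobs" + threshold logic.
// Field names (jobUrl, companyName, relevancyScore) are illustrative
// assumptions, not necessarily the template's actual item schema.
const BLACKLIST = new Set(['acme corp', 'example inc']); // your blacklisted companies

function filterJobs(jobs, targetScore) {
  const seen = new Set();
  return jobs.filter((job) => {
    const company = (job.companyName || '').toLowerCase();
    if (BLACKLIST.has(company)) return false; // drop blacklisted companies
    if (seen.has(job.jobUrl)) return false;   // drop duplicates by URL
    seen.add(job.jobUrl);
    return job.relevancyScore >= targetScore; // keep only top matches
  });
}
```

Raising `targetScore` (e.g., from 70 to 85) makes the filter stricter, exactly as the “Target Relevancy Score” tip describes.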
by Lidia
**Who’s it for**
Teams who want to automatically generate structured meeting minutes from uploaded transcripts and instantly share them in Slack. Perfect for startups, project teams, or any company that collects meeting transcripts in Google Drive.

**How it works / What it does**
This workflow automatically turns raw meeting transcripts into well-structured minutes in Markdown and posts them to Slack:
1. **Google Drive Trigger** – Watches a specific folder. Any new transcript file added will start the workflow.
2. **Download File** – Grabs the transcript.
3. **Prep Transcript** – Converts the file into plain text and passes the transcript downstream.
4. **Message a Model** – Sends the transcript to OpenAI GPT for summarization using a structured system prompt (action items, decisions, N/A placeholders).
5. **Make Minutes** – Formats GPT’s response into a Markdown file.
6. **Slack: Send a message** – Posts a Slack message announcing the auto-generated minutes.
7. **Slack: Upload a file** – Uploads the full Markdown minutes file into the chosen Slack channel.

End result: your Slack channel always has clear, standardized minutes right after a meeting.

**How to set up**
1. **Google Drive** – Create a folder where you’ll drop transcript files, then configure the folder ID in the Google Drive Trigger node.
2. **OpenAI** – Add your OpenAI API credentials in the Message a Model node and select a supported GPT model (e.g., gpt-4o-mini or gpt-4).
3. **Slack** – Connect your Slack account and set the target channel ID in the Slack nodes.
4. Run the workflow and drop a transcript file into Drive. Minutes will appear in Slack automatically.

**Requirements**
- Google Drive account (for transcript upload)
- OpenAI API key (for text summarization)
- Slack workspace (for message posting and file upload)

**How to customize the workflow**
- **Change summary structure**: Adjust the system prompt inside **Message a Model** (e.g., shorter summaries, a language other than English).
- **Different output format**: Modify **Make Minutes** to output plain text, PDF, or HTML instead of Markdown.
- **New destinations**: Add more nodes to send minutes to email, Notion, or Confluence in parallel.
- **Multiple triggers**: Replace the Google Drive trigger with a Webhook if you want to integrate with Zoom or MS Teams transcript exports.

**Good to know**
- OpenAI API calls are billed separately. See OpenAI pricing.
- Files must be text-based (.txt or .md). For PDFs or docs, add a conversion step before summarization.
- Slack requires the bot user to be a member of the target channel; otherwise you’ll see a `not_in_channel` error.
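The Make Minutes formatting step can be sketched as a small Code-node snippet that turns the model's structured response into a Markdown document. The input shape (`summary`, `decisions`, `actionItems`) is an assumption for illustration, not the template's exact schema:

```javascript
// Sketch of the "Make Minutes" step: build a Markdown minutes file from
// a structured summary. Input field names are illustrative assumptions.
function makeMinutes({ title, summary, decisions, actionItems }) {
  // Render a bullet list, or the N/A placeholder the prompt asks for.
  const list = (items) =>
    items && items.length ? items.map((i) => `- ${i}`).join('\n') : '- N/A';
  return [
    `# Minutes: ${title}`,
    '## Summary', summary || 'N/A',
    '## Decisions', list(decisions),
    '## Action Items', list(actionItems),
  ].join('\n\n');
}
```
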
by iamvaar
**Youtube Video:** https://youtu.be/dEtV7OYuMFQ?si=fOAlZWz4aDuFFovH

**Workflow Pre-requisites**

**Step 1: Supabase Setup**
First, replace the keys in the "Save the embedding in DB" & "Search Embeddings" nodes with your new Supabase keys. After that, run the following code snippets in your Supabase SQL editor.

Create the table to store chunks and embeddings:

```sql
CREATE TABLE public."RAG" (
  id bigserial PRIMARY KEY,
  chunk text NULL,
  embeddings vector(1024) NULL
) TABLESPACE pg_default;
```

Create a function to match embeddings:

```sql
DROP FUNCTION IF EXISTS public.matchembeddings1(integer, vector);

CREATE OR REPLACE FUNCTION public.matchembeddings1(
  match_count integer,
  query_embedding vector
)
RETURNS TABLE (
  chunk text,
  similarity float
)
LANGUAGE plpgsql
AS $$
BEGIN
  RETURN QUERY
  SELECT
    R.chunk,
    1 - (R.embeddings <=> query_embedding) AS similarity
  FROM public."RAG" AS R
  ORDER BY R.embeddings <=> query_embedding
  LIMIT match_count;
END;
$$;
```

**Step 2: Create a Jotform with these fields**
- Your full name
- Email address
- Upload PDF Document (the field where you upload the knowledge base in PDF)

**Step 3: Get a Together AI API Key**
Get a Together AI API key and paste it into the "Embedding Uploaded document" node and the "Embed User Message" node.

Here is a detailed, node-by-node explanation of the n8n workflow, which is divided into two main parts.

**Part 1: Ingesting Knowledge from a PDF**
This first sequence of nodes runs when you submit a PDF through a Jotform. Its purpose is to read the document, process its content, and save it in a specialized database for the AI to use later.

- **JotForm Trigger** (Trigger): Starts the entire workflow. It's configured to listen for new submissions on a specific Jotform. When someone uploads a file and submits the form, this node activates and passes the submission data to the next step.
- **Grab New knowledgebase** (HTTP Request): The initial trigger from Jotform only contains basic information. This node makes a follow-up call to the Jotform API using the submissionID to get the complete details of that submission, including the specific link to the uploaded file.
- **Grab the uploaded knowledgebase file link** (HTTP Request): Using the file link obtained from the previous node, this step downloads the actual PDF file. It's set to receive the response as a file, not as text.
- **Extract Text from PDF File** (Extract From File): This utility node takes the binary PDF file downloaded in the previous step and extracts all the readable text content from it. The output is a single block of plain text.
- **Splitting into Chunks** (Code): This node runs a small JavaScript snippet. It takes the large block of text from the PDF and chops it into smaller, more manageable pieces, or "chunks," each of a predefined length. This is critical because AI models work more effectively with smaller, focused pieces of text.
- **Embedding Uploaded document** (HTTP Request): This is a key AI step. It sends each individual text chunk to an embeddings API. A specified AI model converts the semantic meaning of the chunk into a numerical list called an embedding or vector. This vector is like a mathematical fingerprint of the text's meaning.
- **Save the embedding in DB** (Supabase): This node connects to your Supabase database. For every chunk, it creates a new row in a specified table and stores two important pieces of information: the original text chunk and its corresponding numerical embedding (its "fingerprint") from the previous step.

**Part 2: Answering Questions via Chat**
This second sequence starts when a user sends a message. It uses the knowledge stored in the database to find relevant information and generate an intelligent answer.

- **When chat message received** (Chat Trigger): Starts the second part of the workflow. It listens for any incoming message from a user in a connected chat application.
- **Embed User Message** (HTTP Request): Takes the user's question and sends it to the exact same embeddings API and model used in Part 1. This converts the question's meaning into the same kind of numerical vector or "fingerprint."
- **Search Embeddings** (HTTP Request): This is the "retrieval" step. It calls a custom database function in Supabase, sending the question's embedding and asking it to search the knowledge base table for a specified number of top text chunks whose embeddings are mathematically most similar to the question's embedding.
- **Aggregate** (Aggregate): The search from the previous step returns multiple separate items. This utility node simply bundles those items into a single, combined piece of data, making it easier to feed all the context into the final AI model at once.
- **AI Agent & Google Gemini Chat Model** (LangChain Agent & AI Model): This is the "generation" step where the final answer is created. The AI Agent node is given a detailed set of instructions (a prompt). The prompt tells the Google Gemini Chat Model to act as a professional support agent. Crucially, it provides the AI with the user's original question and the aggregated text chunks from the Aggregate node as its only source of truth. It then instructs the AI to formulate an answer based only on that provided context, format it for a specific chat style, and say "I don't know" if the answer cannot be found in the chunks. This prevents the AI from making things up.
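The "Splitting into Chunks" Code node described above can be sketched like this. The 1000-character chunk size is an illustrative default, not necessarily the template's exact setting:

```javascript
// Sketch of the "Splitting into Chunks" Code node: slice the extracted
// PDF text into fixed-size chunks for embedding. Chunk size of 1000
// characters is an assumed default for illustration.
function splitIntoChunks(text, chunkSize = 1000) {
  const chunks = [];
  for (let i = 0; i < text.length; i += chunkSize) {
    chunks.push(text.slice(i, i + chunkSize));
  }
  return chunks;
}
```
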
by Bhavy Shekhaliya
**Overview**
This n8n template demonstrates how to use AI to automatically analyze WordPress blog content and generate relevant, SEO-optimized tags for WordPress posts.

**Use cases**
Automate content tagging for WordPress blogs, maintain consistent taxonomy across large content libraries, save hours of manual tagging work, or improve SEO by ensuring every post has relevant, searchable tags!

**Good to know**
- The workflow creates new tags automatically if they don't exist in WordPress.
- Tag generation is intelligent – it avoids duplicates by mapping to existing tag IDs.

**How it works**
1. We fetch a WordPress blog post using the WordPress node with sticky data enabled for testing.
2. The post content is sent to GPT-4.1-mini, which analyzes it and generates 5–10 relevant tags using a structured output parser.
3. All existing WordPress tags are fetched via HTTP Request to check for matches.
4. A smart loop processes each AI-generated tag: if the tag already exists, it maps to the existing tag ID; if it's new, it creates the tag via the WordPress API.
5. All tag IDs are aggregated and the WordPress post is updated with the complete tag list.

**How to use**
- The manual trigger node is used as an example, but feel free to replace it with other triggers such as a webhook, schedule, or WordPress webhook for new posts.
- Modify the "Fetch One WordPress Blog" node to fetch multiple posts or integrate with your publishing workflow.

**Requirements**
- WordPress site with REST API enabled
- OpenAI API

**Customising this workflow**
- Adjust the AI prompt to generate tags specific to your industry or SEO strategy
- Change the tag count (currently 5–10) based on your needs
- Add filtering logic to only tag posts in specific categories
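The tag-mapping loop above can be sketched as follows. Tag objects follow the shape returned by the WordPress REST API's `/wp/v2/tags` endpoint (`{id, name}`); the case-insensitive matching is a simplifying assumption:

```javascript
// Sketch of the tag-mapping loop: map AI-generated tag names to
// existing WordPress tag IDs, and collect the names that still need
// to be created via POST /wp/v2/tags.
function mapTags(aiTags, existingTags) {
  const byName = new Map(existingTags.map((t) => [t.name.toLowerCase(), t.id]));
  const matchedIds = [];
  const toCreate = [];
  for (const name of aiTags) {
    const id = byName.get(name.toLowerCase());
    if (id !== undefined) matchedIds.push(id); // reuse existing tag ID
    else toCreate.push(name);                  // new tag: create it, then use its ID
  }
  return { matchedIds, toCreate };
}
```
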
by Jadai kongolo
**🚀 n8n Local AI Agentic RAG Template**
Author: Jadai kongolo

**What is this?**
This template provides an entirely local implementation of an Agentic RAG (Retrieval Augmented Generation) system in n8n that can be extended easily for your specific use case and knowledge base. Unlike standard RAG, which only performs simple lookups, this agent can reason about your knowledge base, self-improve retrieval, and dynamically switch between different tools based on the specific question.

**Why Agentic RAG?**
Standard RAG has significant limitations:
- Poor analysis of numerical/tabular data
- Missing context due to document chunking
- Inability to connect information across documents
- No dynamic tool selection based on question type

**What makes this template powerful:**
- **Intelligent tool selection**: Switches between RAG lookups, SQL queries, or full document retrieval based on the question
- **Complete document context**: Accesses entire documents when needed instead of just chunks
- **Accurate numerical analysis**: Uses SQL for precise calculations on spreadsheet/tabular data
- **Cross-document insights**: Connects information across your entire knowledge base
- **Multi-file processing**: Handles multiple documents in a single workflow loop
- **Efficient storage**: Uses JSONB in Supabase to store tabular data without creating new tables for each CSV

**Getting Started**
1. Run the table creation nodes first to set up your database tables in Supabase.
2. Upload your documents to the folder on your computer that is mounted to /data/shared in the n8n container. By default this is the "shared" folder in the local AI package.
3. The agent will process them automatically (chunking text, storing tabular data in Supabase).
4. Start asking questions that leverage the agent's multiple reasoning approaches.

**Customization**
This template provides a solid foundation that you can extend by:
- Tuning the system prompt for your specific use case
- Adding document metadata like summaries
- Implementing more advanced RAG techniques
- Optimizing for larger knowledge bases

The non-local ("cloud") version of this Agentic RAG agent can be found here.
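The "efficient storage" idea — one generic table with a JSONB column rather than a new table per CSV — can be sketched as the transform that turns CSV text into row objects ready for insertion. This is a simplified assumption: the naive comma split ignores quoting, which a real CSV parser must handle:

```javascript
// Sketch of preparing CSV rows for a single JSONB column: each row
// becomes one object keyed by header, so any CSV fits the same table.
// Naive split on commas; assumes no quoted fields (a real parser
// must handle quoting/escaping).
function csvToJsonbRows(csvText) {
  const [headerLine, ...lines] = csvText.trim().split('\n');
  const headers = headerLine.split(',');
  return lines.map((line) => {
    const cells = line.split(',');
    return Object.fromEntries(headers.map((h, i) => [h, cells[i]]));
  });
}
```

Each returned object would be stored as one JSONB value, which is why no per-CSV schema is ever needed.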
by Anna Bui
Automatically analyze n8n workflow errors with AI, create support tickets, and send detailed Slack notifications. Perfect for development teams and businesses that need intelligent error handling with automated support workflows. Never miss critical workflow failures again!

**How it works**
1. **Error Trigger** captures any workflow failure in your n8n instance
2. **AI Debugger** analyzes the error using structured reasoning to identify root causes
3. **Clean Data** transforms AI analysis into organized, actionable information
4. **Create Support Ticket** automatically generates a detailed ticket in FreshDesk
5. **Merge** combines ticket data with AI analysis for comprehensive reporting
6. **Generate Slack Alert** creates rich, formatted notifications with all context
7. **Send to Team** delivers instant alerts to your designated Slack channel

**How to use**
- Replace FreshDesk credentials with your helpdesk system API
- Configure the Slack channel for your team notifications
- Customize AI analysis prompts for your specific error types
- Set up as a global error handler for all your critical workflows

**Requirements**
- FreshDesk account (or compatible ticketing system)
- Slack workspace with bot permissions
- OpenAI API access for AI analysis
- n8n Cloud or self-hosted with AI nodes enabled

**Good to know**
- OpenAI API calls cost approximately $0.01–0.03 per error analysis
- Works with any ticketing system that supports a REST API
- Can be triggered by webhooks from external monitoring tools
- Slack messages use rich formatting for mobile-friendly alerts

Need help? Join the Discord or ask in the Forum! Happy Monitoring!
by Guillaume Duvernay
Move beyond generic AI-generated content and create articles that are high-quality, factually reliable, and aligned with your unique expertise. This template orchestrates a sophisticated "research-first" content creation process. Instead of simply asking an AI to write an article from scratch, it first uses an AI planner to break your topic down into logical sub-questions. It then queries a Lookio assistant—which you've connected to your own trusted knowledge base of uploaded documents—to build a comprehensive research brief. Only then is this fact-checked brief handed to a powerful AI writer to compose the final article, complete with source links. This is the ultimate workflow for scaling expert-level content creation.

**Who is this for?**
- **Content marketers & SEO specialists:** Scale the creation of authoritative, expert-level blog posts that are grounded in factual, source-based information.
- **Technical writers & subject matter experts:** Transform your complex internal documentation into accessible public-facing articles, tutorials, and guides.
- **Marketing agencies:** Quickly generate high-quality, well-researched drafts for clients by connecting the workflow to their provided brand and product materials.

**What problem does this solve?**
- **Reduces AI "hallucinations":** By grounding the entire writing process in your own trusted knowledge base, the AI generates content based on facts you provide, not on potentially incorrect information from its general training data.
- **Ensures comprehensive topic coverage:** The initial AI-powered "topic breakdown" step acts like an expert outliner, ensuring the final article is well-structured and covers all key sub-topics.
- **Automates source citation:** The workflow is designed to preserve and integrate source URLs from your knowledge base directly into the final article as hyperlinks, boosting credibility and saving you manual effort.
- **Scales expert content creation:** It effectively mimics the workflow of a human expert (outline, research, consolidate, write) in an automated, scalable, and incredibly fast way.

**How it works**
This workflow follows a sophisticated, multi-step process to ensure the highest quality output:
1. **Decomposition:** You provide an article title and guidelines via the built-in form. An initial AI call then acts as a "planner," breaking down the main topic into an array of 5–8 logical sub-questions.
2. **Fact-based research (RAG):** The workflow loops through each of these sub-questions and queries your Lookio assistant. This assistant, which you have pre-configured by uploading your own documents, finds the relevant information and source links for each point.
3. **Consolidation:** All the retrieved question-and-answer pairs are compiled into a single, comprehensive research brief.
4. **Final article generation:** This complete, fact-checked brief is handed to a final, powerful AI writer (e.g., GPT-4o). Its instructions are clear: write a high-quality article using only the provided information and integrate the source links as hyperlinks where appropriate.

**Building your own RAG pipeline vs. using Lookio or alternative tools**
Building a RAG system natively within n8n offers deep customization, but it requires managing a toolchain for data processing, text chunking, and retrieval optimization. An alternative is to use a managed service like Lookio, which provides RAG functionality through an API. This approach abstracts the backend infrastructure for document ingestion and querying, trading the granular control of a native build for a reduction in development and maintenance tasks.

**Implementing the template**
1. **Set up your Lookio assistant (prerequisite):** Lookio is a platform for building intelligent assistants that leverage your organization's documents as a dedicated knowledge base. First, sign up at Lookio. You'll get 50 free credits to get started. Upload the documents you want to use as your knowledge base, create a new assistant, and then generate an API key. Copy your Assistant ID and your API Key for the next step.
2. **Configure the workflow:** Connect your AI provider (e.g., OpenAI) credentials to the two Language Model nodes. In the Query Lookio Assistant (HTTP Request) node, paste your Assistant ID in the body and add your Lookio API Key for authentication (we recommend using a Bearer Token credential).
3. **Activate the workflow:** Toggle the workflow to "Active" and use the built-in form to generate your first fact-checked article!

**Taking it further**
- **Automate publishing:** Connect the final **Article result** node to a **Webflow** or **WordPress** node to automatically create a draft post in your CMS.
- **Generate content in bulk:** Replace the **Form Trigger** with an **Airtable** or **Google Sheet** trigger to automatically generate a whole batch of articles from your content calendar.
- **Customize the writing style:** Tweak the system prompt in the final **New content - Generate the AI output** node to match your brand's specific tone of voice, add SEO keywords, or include specific calls-to-action.
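The consolidation step — compiling the retrieved question-and-answer pairs into one research brief — can be sketched as a small snippet. The pair shape (`question`, `answer`, `sources`) is an illustrative assumption, not Lookio's actual response format:

```javascript
// Sketch of the "Consolidation" step: merge Q&A pairs into a single
// research brief for the writer model. The {question, answer, sources}
// shape is an assumed structure for illustration.
function buildResearchBrief(pairs) {
  return pairs
    .map(({ question, answer, sources = [] }, i) => {
      const cites = sources.length ? `\nSources: ${sources.join(', ')}` : '';
      return `### ${i + 1}. ${question}\n${answer}${cites}`;
    })
    .join('\n\n');
}
```

Keeping the source URLs inline in the brief is what lets the final writer integrate them as hyperlinks.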
by David Olusola
**🎥 Auto-Save Zoom Recordings to Google Drive + Log Meetings in Airtable**

This workflow automatically saves Zoom meeting recordings to Google Drive and logs all important details into Airtable for easy tracking. Perfect for teams that want a searchable meeting archive.

**⚙️ How It Works**
1. **Zoom Recording Webhook** – Listens for `recording.completed` events from Zoom and captures metadata (Meeting ID, Topic, Host, File Type, File Size, etc.).
2. **Normalize Recording Data** – A Code node extracts and formats the Zoom payload into clean JSON.
3. **Download Recording** – Uses HTTP Request to download the recording file.
4. **Upload to Google Drive** – Saves the recording into your chosen Google Drive folder and returns the file ID and share link.
5. **Log Result** – Combines Zoom metadata with Google Drive file info.
6. **Save to Airtable** – Logs all details into your Meeting Logs table: Meeting ID, Topic, Host, File Type, File Size, Google Drive Saved (Yes/No), Drive Link, Timestamp.

**🛠️ Setup Steps**
1. **Zoom** – Create a Zoom App → enable the `recording.completed` event. Add the workflow’s Webhook URL as your Zoom Event Subscription endpoint.
2. **Google Drive** – Connect OAuth in n8n. Replace `YOUR_FOLDER_ID` with your destination Drive folder.
3. **Airtable** – Create a base with a table named Meeting Logs. Add columns: Meeting ID, Topic, Host, File Type, File Size, Google Drive Saved, Drive Link, Timestamp. Replace `YOUR_AIRTABLE_BASE_ID` in the node.

**📊 Example Airtable Output**

| Meeting ID | Topic | Host | File Type | File Size | Google Drive Saved | Drive Link | Timestamp |
|------------|-----------|----------------|-----------|-----------|--------------------|------------|---------------------|
| 987654321 | Team Sync | host@email.com | MP4 | 104 MB | Yes | 🔗 Link | 2025-08-30 15:02:10 |

⚡ With this workflow, every Zoom recording is safely archived in Google Drive and logged in Airtable for quick search, reporting, and compliance tracking.
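The Normalize Recording Data Code node can be sketched like this. The input follows the general shape of Zoom's `recording.completed` webhook payload (`payload.object`, `recording_files`), but which fields are kept is an illustrative choice:

```javascript
// Sketch of the "Normalize Recording Data" Code node: flatten Zoom's
// recording.completed payload into the fields the workflow logs.
// Taking only the first recording file is a simplifying assumption.
function normalizeRecording(event) {
  const obj = event.payload.object;
  const file = (obj.recording_files || [])[0] || {};
  return {
    meetingId: obj.id,
    topic: obj.topic,
    host: obj.host_email,
    fileType: file.file_type,
    fileSize: file.file_size,
    downloadUrl: file.download_url,
  };
}
```
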
by Shayan Ali Bakhsh
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**Try It Out!**
Automatically generate a LinkedIn carousel and upload it to LinkedIn.

**Use case:** LinkedIn content creation, specifically carousels, but it could be adjusted for many other creations as well.

**How it works**
1. It runs automatically every day at 6:00 AM.
2. Gets the latest news from TechRadar.
3. Parses it into readable JSON.
4. AI decides which news item resonates with your profile.
5. The title and description of that news item are used to generate the final LinkedIn carousel content. This step can also be triggered by the Form trigger.
6. After carousel generation, the content is handed to Post Nitro to create images; Post Nitro provides a PDF file.
7. We upload the PDF file to LinkedIn and get the file ID, which is used in the next step.
8. Finally, it creates the post description and posts it to LinkedIn.

**How to use**
- It runs every day at 6:00 AM automatically; just make it Live.
- Or submit the form with a correct title and description (no input validation is included, so make sure they're correct 😅).

**Requirements**
- Install the Post Nitro community node: `@postnitro/n8n-nodes-postnitro-ai`
- We need the following API keys to make it work:
  - Google Gemini (for Gemini 2.5 Flash usage) – Docs: Google Gemini Key
  - Post Nitro credentials (API key + Template ID + Brand ID) – Docs: Post Nitro
  - LinkedIn API key – Docs: LinkedIn API

**Need Help?**
Message me on LinkedIn. Happy Automation!
by Omer Fayyaz
An automated PDF download and management system that collects PDFs from URLs, uploads them to Google Drive, extracts metadata, and maintains a searchable library with comprehensive error handling and status tracking.

**What Makes This Different:**
- **Intelligent URL Validation** – Validates PDF URLs before attempting download, extracting filenames from URLs and generating fallback names when needed, preventing wasted processing time
- **Binary File Handling** – Properly handles PDF downloads as binary files with appropriate headers (User-Agent, Accept), ensuring compatibility with various PDF hosting services
- **Comprehensive Error Handling** – Three-tier error handling: invalid URLs are marked immediately, failed downloads are logged with error messages, and all errors are tracked in a dedicated Error Log sheet
- **Metadata Extraction** – Automatically extracts file ID, size, MIME type, Drive view links, and download URLs from Google Drive responses, creating a complete file record
- **Multiple Trigger Options** – Supports manual execution, scheduled runs (every 12 hours), and workflow-to-workflow calls, making it flexible for different automation scenarios
- **Status Tracking** – Updates the source spreadsheet with processing status (Downloaded, Failed, Invalid), enabling easy monitoring and retry logic for failed downloads

**Key Benefits of Automated PDF Management:**
- **Centralized Storage** – All PDFs are automatically organized in a Google Drive folder, making them easy to find and share across your organization
- **Searchable Library** – Metadata is stored in Google Sheets with file links, titles, sources, and download dates, enabling quick searches and filtering
- **Error Recovery** – Failed downloads are logged with error messages, allowing you to identify and fix issues (broken links, access permissions, etc.) and retry
- **Automated Processing** – Schedule-based execution keeps your PDF library updated without manual intervention, perfect for monitoring research sources
- **Integration Ready** – Can be called by other workflows, enabling complex automation chains (e.g., scrape URLs → download PDFs → process content)
- **Bulk Processing** – Processes multiple PDFs in sequence from a spreadsheet, handling large batches efficiently with proper error isolation

**Who's it for**
This template is designed for researchers, academic institutions, market research teams, legal professionals, compliance officers, and anyone who needs to systematically collect and organize PDF documents from multiple sources. It's perfect for organizations that need to build research libraries, archive regulatory documents, collect industry reports, maintain compliance documentation, or aggregate academic papers without manually downloading and organizing each file.

**How it works / What it does**
This workflow creates a PDF collection and management system that reads PDF URLs from Google Sheets, downloads the files, uploads them to Google Drive, extracts metadata, and maintains a searchable library.
The system: Reads Pending PDF URLs - Fetches PDF URLs from Google Sheets "PDF URLs" sheet, processing entries that need to be downloaded Loops Through PDFs - Processes PDFs one at a time using Split in Batches, ensuring proper error isolation and preventing batch failures Prepares Download Info - Extracts filename from URL, decodes URL-encoded characters, validates PDF URL format, and generates fallback filenames with timestamps if needed Validates URL - Checks if URL is valid before attempting download, skipping invalid entries immediately Downloads PDF - Makes HTTP request with proper browser headers, downloads PDF as binary file with 60-second timeout, handles download errors gracefully Verifies Download - Checks if binary data was successfully received, routing to error handling if download failed Uploads to Google Drive - Uploads PDF file to specified Google Drive folder, preserving original filename or using generated name Extracts File Metadata - Extracts file ID, name, MIME type, file size, Drive view link, and download link from Google Drive API response Saves to PDF Library - Appends file metadata to Google Sheets "PDF Library" sheet with title, source, file links, and download timestamp Updates Source Status - Marks processed URLs as "Downloaded", "Failed", or "Invalid" in source sheet for tracking Logs Errors - Records failed downloads and invalid URLs in "Error Log" sheet with error messages for troubleshooting Tracks Completion - Generates completion summary with processing statistics and timestamp Key Innovation: Error-Resilient Processing - Unlike simple download scripts that fail on the first error, this workflow isolates failures, continues processing remaining PDFs, and provides detailed error logging. This ensures maximum success rate and makes troubleshooting straightforward. How to set up 1. 
1. Prepare Google Sheets
- Create a Google Sheet with three tabs: "PDF URLs", "PDF Library", and "Error Log".
- In the "PDF URLs" sheet, create columns: PDF_URL (or pdf_url), Title (optional), Source (optional), Status (optional; updated by the workflow).
- Add sample PDF URLs in the PDF_URL column (e.g., direct links to PDF files).
- The "PDF Library" sheet will be populated automatically with columns: pdfUrl, title, source, fileName, fileId, mimeType, fileSize, driveUrl, downloadUrl, downloadedAt, status.
- The "Error Log" sheet will record: status, errorMessage, pdfUrl, title (for failed downloads).
- Verify your Google Sheets credentials are set up in n8n (OAuth2 recommended).

2. Configure Google Sheets Nodes
- Open the "Read Pending PDF URLs" node, select your spreadsheet from the document dropdown, and set the sheet name to "PDF URLs".
- Configure the "Save to PDF Library" node: same spreadsheet, sheet name "PDF Library", operation "Append or Update".
- Configure the "Update Source Status" node: same spreadsheet, "PDF URLs" sheet, operation "Update".
- Configure the "Log Error" node: same spreadsheet, "Error Log" sheet, operation "Append or Update".
- Test the connection by running the "Read Pending PDF URLs" node manually to verify it can access your sheet.

3. Set Up Google Drive Folder
- Create a folder in Google Drive where you want PDFs stored (e.g., "PDF Reports" or "Research Library").
- Open the "Upload to Google Drive" node, select your Google Drive account (OAuth2 credentials), choose the drive (usually "My Drive"), and select the folder you created from the folder dropdown.
- The filename is extracted automatically from the URL, or generated with a timestamp.
- Verify folder permissions allow the connected account to upload files, and test by manually uploading a file to ensure access works.

4. Configure Download Settings
- The "Download PDF" node is pre-configured with appropriate headers and a 60-second timeout. If you encounter timeouts with large PDFs, increase the timeout in the node options.
- The User-Agent header is set to mimic a browser to avoid blocking, and the Accept header is set to application/pdf,application/octet-stream,*/* for maximum compatibility.
- For sites requiring authentication, you may need to add additional headers or use cookies.
- Test with a sample PDF URL to verify the download works correctly.

5. Set Up Scheduling & Test
- The workflow includes a Manual Trigger (for testing), a Schedule Trigger (runs every 12 hours), and an Execute Workflow Trigger (for calling from other workflows).
- To customize the schedule, open the "Schedule (Every 12 Hours)" node and adjust the interval (e.g., daily, weekly).
- For initial testing, use the Manual Trigger and add 2-3 test PDF URLs to your "PDF URLs" sheet.
- Verify execution: check that PDFs are downloaded, uploaded to Drive, and metadata is saved to "PDF Library".
- Monitor execution logs for download failures, timeout issues, or Drive upload errors, and review the Error Log sheet to verify failed downloads are logged with error messages.
- Common issues: invalid URLs (check URL format), access denied (check file permissions), timeouts (increase the timeout for large files), Drive quota (check Google Drive storage).

Requirements
**Google Sheets Account** - Active Google account with OAuth2 credentials configured in n8n for reading and writing spreadsheet data
**Google Drive Account** - Same Google account with OAuth2 credentials and sufficient storage space for PDF files
**Source Spreadsheet** - Google Sheet with "PDF URLs", "PDF Library", and "Error Log" tabs, properly formatted with the required columns
**Valid PDF URLs** - Direct links to PDF files (not HTML pages that link to PDFs); URLs should end in .pdf or point directly to PDF content
**n8n Instance** - Self-hosted or cloud n8n instance with access to external websites (the HTTP Request node needs internet connectivity to download PDFs)
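The filename behavior described in step 3 (extract from the URL, otherwise fall back to a timestamped name) could be sketched as a small helper like the one below. This is an illustration of that behavior, not the workflow's exact code; in n8n it would live inside a Code node mapping over `$input.all()`.

```javascript
// Derive a file name from the PDF URL, falling back to a generated
// timestamped name when the URL has no usable .pdf path segment.
function pdfFileName(pdfUrl) {
  try {
    const { pathname } = new URL(pdfUrl);
    const last = decodeURIComponent(pathname.split('/').pop() || '');
    if (last.toLowerCase().endsWith('.pdf')) return last;
  } catch (e) {
    // Invalid URL: fall through to the timestamp fallback below.
  }
  // Fallback: generated name such as "pdf-2024-01-01T00-00-00-000Z.pdf"
  return `pdf-${new Date().toISOString().replace(/[:.]/g, '-')}.pdf`;
}

console.log(pdfFileName('https://example.com/reports/annual%20report.pdf'));
// → annual report.pdf
```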
by Anurag Patil
Geekhack Discord Updater

How It Works
This n8n workflow automatically monitors GeekHack forum RSS feeds every hour for new keyboard posts in the Interest Checks and Group Buys sections. When it finds a new thread (not a reply), it:
- Monitors RSS Feeds: checks two GeekHack RSS feeds for new posts (50 items each)
- Filters New Threads: removes reply posts by checking for the "Re:" prefix in titles
- Prevents Duplicates: queries a PostgreSQL database to skip already-processed threads
- Scrapes Content: fetches the full thread page and extracts the original post
- Extracts Images: uses a regex to find all images in the post content
- Creates Discord Embed: formats the post data into a rich Discord embed with up to 4 images
- Sends to Multiple Webhooks: retrieves all webhook URLs from the database and sends to each one
- Logs Processing: records the thread as processed to prevent duplicates

The workflow includes a webhook management system with a web form to add/remove Discord webhooks dynamically, allowing you to send notifications to multiple Discord servers or channels.

Steps to Set Up

Prerequisites
- n8n instance running
- PostgreSQL database
- Discord webhook URL(s)

1. Database Setup
Create the PostgreSQL tables.

Processed threads table:
CREATE TABLE processed_threads (
  topic_id VARCHAR PRIMARY KEY,
  title TEXT,
  processed_at TIMESTAMP DEFAULT NOW()
);

Webhooks table:
CREATE TABLE webhooks (
  id SERIAL PRIMARY KEY,
  url TEXT NOT NULL,
  created_at TIMESTAMP DEFAULT NOW()
);

2. n8n Configuration
- Import the workflow: copy the workflow JSON, go to n8n → Workflows → Import from JSON, paste the JSON, and import.
- Configure credentials: create a new PostgreSQL credential with your database connection details; all PostgreSQL nodes should use the same credential.

3. Node Configuration
- Schedule Trigger: already configured for 1-hour intervals; modify if you need different timing.
- PostgreSQL nodes: ensure "Check if Processed", "Update entry", "Insert rows in a table", and "Select rows from a table" all use your PostgreSQL credential. The database schema should be "public" and the table names "processed_threads" and "webhooks".
- RSS feed limits: both RSS feeds are set to limit=50 items; adjust if you need more or fewer items per check.

4. Webhook Management
Adding webhooks via the web form:
- The workflow creates a form trigger for adding webhooks.
- Access the form URL from the "On form submission" node.
- Submit Discord webhook URLs through the form; they are stored in the database automatically.

Manual webhook addition: alternatively, insert webhooks directly into the database:
INSERT INTO webhooks (url) VALUES ('https://discord.com/api/webhooks/YOUR_WEBHOOK_URL');

5. Testing
Test the main workflow:
- Ensure you have at least one webhook in the database.
- Activate the workflow and use "Execute Workflow" to test manually.
- Check your Discord channels for test messages.

Test the webhook form:
- Get the form URL from the "On form submission" node.
- Submit a test webhook URL and verify it appears in the webhooks table.

6. Monitoring
- Check the execution history for errors.
- Monitor both database tables for entries.
- Verify all registered webhooks receive notifications.
- Adjust the schedule timing if needed.

7. Managing Webhooks
- Use the web form to add new webhook URLs.
- Remove webhooks by deleting from the database:
DELETE FROM webhooks WHERE url = 'webhook_url_to_remove';

The workflow will now automatically post new GeekHack threads to all registered Discord webhooks every hour, with webhook destinations managed dynamically through the web form interface.
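Two of the steps above, reply filtering and image extraction, could look roughly like this in an n8n Code node. The function names and the exact regex are assumptions for illustration, not the workflow's actual code.

```javascript
// Reply posts appear in the GeekHack RSS feed with a "Re:" title prefix;
// only titles without it are treated as new threads.
function isNewThread(title) {
  return !/^\s*Re:/i.test(title);
}

// Collect src attributes from <img> tags in the scraped post HTML; the
// Discord embed uses up to 4 images.
function extractImages(postHtml, max = 4) {
  const urls = [];
  const imgRe = /<img[^>]+src=["']([^"']+)["']/gi;
  let match;
  while ((match = imgRe.exec(postHtml)) !== null && urls.length < max) {
    urls.push(match[1]);
  }
  return urls;
}

console.log(isNewThread('Re: [IC] Keycap Set')); // → false
console.log(extractImages('<p><img src="https://i.imgur.com/a.png"></p>'));
```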
by Recrutei Automações
Overview: Automated LinkedIn Job Posting with AI

This workflow automates the publication of new job vacancies on LinkedIn immediately after they are created in the Recrutei ATS (Applicant Tracking System). It leverages a Code node to pre-process the job data and a powerful AI model (GPT-4o-mini, configured via the OpenAI node) to generate compelling, marketing-ready content. This template is designed for Recruitment and Marketing teams aiming to ensure consistent, timely, and high-quality job postings while saving significant operational time.

Workflow Logic & Steps
1. Recrutei Webhook Trigger: the workflow is triggered instantly when a new job vacancy is published in the Recrutei ATS, which sends all relevant job data via a webhook.
2. Data Cleaning (Code Node 1): the first Code node standardizes boolean fields (like remote and fixed_remuneration) from 0/1 to descriptive text ('yes'/'no').
3. Prompt Transformation (Code Node 2): the second, crucial Code node receives the clean job data and:
   - maps the original data keys (e.g., title, description) to user-friendly labels (e.g., Job Title, Detailed Description);
   - cleans and sanitizes the HTML description into readable Markdown;
   - generates a single, highly structured prompt containing all job details, ready for the AI model.
4. AI Content Generation (OpenAI): the AI model receives the structured prompt and acts as a 'Marketing Copywriter' to create a compelling, engaging post optimized for the LinkedIn platform.
5. LinkedIn Post: the generated text is posted automatically to the configured LinkedIn profile or Company Page.
6. Internal Logging (Google Sheets): the workflow concludes by logging the event (Job Title, Confirmation Status) into a Google Sheet for internal tracking and auditing.

Setup Instructions
To implement this workflow successfully, configure the following credentials:
- OpenAI (for the Content Generator)
- LinkedIn (for the Post action)
- Google Sheets (for the logging)
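The Data Cleaning step (Code Node 1) can be sketched as a small transformation like the one below. It is a minimal illustration of the described behavior, assuming plain 0/1 values on the incoming job object; it is not the template's exact code.

```javascript
// Normalize 0/1 boolean fields (e.g., remote, fixed_remuneration) to the
// descriptive 'yes'/'no' text the later prompt expects.
function cleanBooleans(job, fields = ['remote', 'fixed_remuneration']) {
  const out = { ...job };
  for (const field of fields) {
    if (out[field] === 1 || out[field] === '1') out[field] = 'yes';
    else if (out[field] === 0 || out[field] === '0') out[field] = 'no';
  }
  return out;
}

console.log(cleanBooleans({ title: 'Data Engineer', remote: 1, fixed_remuneration: 0 }));
// → { title: 'Data Engineer', remote: 'yes', fixed_remuneration: 'no' }
```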
Node Configuration:
- Set up the Webhook URL in your Recrutei ATS settings.
- Replace YOUR_SHEET_ID_HERE in the Google Sheets Logging node with your sheet's ID.
- Select the correct LinkedIn profile/company page in the Create a post node.
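The Prompt Transformation step (Code Node 2) maps raw keys to friendly labels and assembles one structured prompt for the AI model. A rough sketch follows; the label map, the crude HTML stripping (the real node produces proper Markdown), and the prompt layout are illustrative assumptions, not the template's code.

```javascript
// Map raw job keys to the user-friendly labels used in the prompt.
const LABELS = {
  title: 'Job Title',
  description: 'Detailed Description',
  remote: 'Remote Work',
};

// Very rough HTML-to-text cleanup; the real node converts HTML to Markdown.
function stripHtml(html) {
  return html.replace(/<[^>]+>/g, ' ').replace(/\s+/g, ' ').trim();
}

// Build one structured prompt line per labeled field.
function buildPrompt(job) {
  return Object.entries(job)
    .filter(([key]) => key in LABELS)
    .map(([key, value]) =>
      `${LABELS[key]}: ${key === 'description' ? stripHtml(String(value)) : value}`)
    .join('\n');
}

console.log(buildPrompt({
  title: 'Data Engineer',
  description: '<p>Build <b>pipelines</b>.</p>',
  remote: 'yes',
}));
```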