by Jay Emp0
MCP Tool — Replicate (Flux) Image Generator → WordPress/Twitter

Generates images via Replicate Flux models and uploads them to WordPress (and optionally Twitter/X). Built to act as an MCP module that other agents/workflows call for on-demand image creation.

Models configured in this workflow: black-forest-labs/flux-schnell, black-forest-labs/flux-dev, black-forest-labs/flux-1.1-pro

Switch rationale: lower cost 💰, broader model choice 🎯, full control of parameters ⚙️. Leonardo API credits cannot be used in the web UI 🙅‍♂️, so API and UI spend are separate.

Links:
📜 Prior Leonardo-based workflow: https://n8n.io/workflows/6363-generate-and-upload-images-with-leonardo-ai-wordpress-and-twitter/
📰 Blog automation consuming these images: https://n8n.io/workflows/6734-ai-blog-automation-publish-hourly-seo-articles-to-wordpress-and-twitter-v3/

📥 Inputs

| Field  | Type   | Description                       |
| ------ | ------ | --------------------------------- |
| prompt | string | Text description for the image    |
| slug   | string | Filename slug for WP media        |
| model  | string | One of the configured Flux models |

Example:

{ "prompt": "Joker watching a Batman movie on his laptop", "slug": "joker-watching-batman", "model": "black-forest-labs/flux-dev" }

📤 Output

{ "public_image_url": "https://your-wp.com/wp-content/uploads/2025/08/img-joker-watching-batman.webp", "wordpress": {...}, "twitter": {...} }

🔄 Flow

Trigger with prompt, slug, model
Build model payload (quality/steps/ratio/output format)
Call Replicate: POST /v1/models/{model}/predictions (Prefer: wait)
Download the generated image URL
Upload to WordPress (returns public URL)
Optional: upload to Twitter/X
Return URL + metadata

🤖 MCP Use at Scale (emp0.com)

Operational pattern: I currently use this setup for my blog, where I generate 300 posts/month, each with 4 images (banner + 2 to 3 inline images) → roughly 1,000 images/month produced by this MCP.

💡 Hybrid Cost-Optimized Setup:
High-priority images (banners, main visuals): generated with Flux Dev on Leonardo for slightly better prompt adherence.
Low-priority images (inline blog visuals): generated with Flux Schnell on Replicate for maximum cost efficiency.

💰 Pricing Comparison (per image)

Leonardo per-image cost uses the API Basic plan's math: $9 / 3,500 credits = $0.0025714 per credit.
Flux Schnell (Leonardo) = 7 credits
Flux Dev (Leonardo) = 7 credits
Flux 1.1 Pro equivalent in Leonardo = Leonardo Phoenix (based on my experience) = 10 credits

| Flux Model             | Replicate           | Leonardo API*             |
| ---------------------- | ------------------- | ------------------------- |
| flux-schnell           | $0.0030 (=$3/1,000) | $0.0180 (7 × $0.0025714)  |
| flux-dev               | $0.0250             | $0.0180 (7 × $0.0025714)  |
| flux-1.1-pro / Phoenix | $0.0400             | $0.0257 (10 × $0.0025714) |

Replicate pricing: https://replicate.com/pricing
Leonardo pricing: https://leonardo.ai/pricing/
Leonardo API usage: https://docs.leonardo.ai/docs/commonly-used-api-values

📊 Monthly Cost Example (1,000 images/month)

Mix: 300 × flux-dev on Leonardo, 700 × flux-schnell on Replicate.

| Platform/Model         | Images | Price per Image | Total |
| ---------------------- | ------ | --------------- | ----- |
| Leonardo flux-dev      | 300    | $0.0180         | $5.40 |
| Replicate flux-schnell | 700    | $0.0030         | $2.10 |
| Total Monthly Spend    | 1,000  | —               | $7.50 |

💵 If using Leonardo for both:
300 × $0.0180 = $5.40
700 × $0.0180 = $12.60
Total = $18.00
Savings: $10.50/month (≈58% lower) with the hybrid setup.

📌 Notes

More Replicate models can be added in the Code1 node.
Parameters tuned for aspect ratio, inference steps, quality, guidance.
Leonardo's credit model is API-only; credits are not spendable in Leonardo's web UI.
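For reference, here is a minimal TypeScript sketch of the Replicate call made in the Flow step above, written as standalone code rather than an n8n HTTP Request node. The endpoint and the Prefer: wait header come from the flow description; the exact Flux input parameters (aspect_ratio, output_format) are assumptions to check against the model's schema on Replicate.

```typescript
// Minimal sketch: call a Replicate Flux model synchronously (Prefer: wait).
// Input parameter names (aspect_ratio, output_format) are assumptions to verify.
interface FluxRequest {
  prompt: string;
  slug: string;
  model: string; // e.g. "black-forest-labs/flux-dev"
}

async function generateImage({ prompt, model }: FluxRequest): Promise<string> {
  const res = await fetch(`https://api.replicate.com/v1/models/${model}/predictions`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.REPLICATE_API_TOKEN}`,
      "Content-Type": "application/json",
      Prefer: "wait", // block until the prediction finishes, as in the Flow step above
    },
    body: JSON.stringify({
      input: { prompt, aspect_ratio: "16:9", output_format: "webp" }, // assumed Flux params
    }),
  });
  if (!res.ok) throw new Error(`Replicate error ${res.status}: ${await res.text()}`);
  const prediction = await res.json();
  // Flux models may return either a single URL or an array of URLs in `output`.
  return Array.isArray(prediction.output) ? prediction.output[0] : prediction.output;
}
```

The returned URL is what the next step downloads and pushes to WordPress as the public image.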
by Ranjan Dailata
Who this is for

Extract & Summarize Indeed Company Info is an automated workflow that extracts Indeed company profile information using Bright Data Web Unlocker, transforms it with Google Gemini's LLM, and forwards the transformed response with the summary to a specified webhook for downstream use.

This workflow is tailored for:
Recruiters and HR teams looking to assess companies quickly during talent sourcing.
Job seekers researching potential employers and needing summarized company insights.
Market researchers and analysts monitoring competitor or industry players.

What problem is this workflow solving?

Searching and evaluating company profiles on Indeed manually is time-consuming and inefficient, especially when dealing with large volumes of companies. Manually browsing, copying, and summarizing company descriptions, reviews, and ratings from Indeed hinders productivity and limits real-time insights.

This workflow solves this by:
Automating the extraction of company details from Indeed using Bright Data Web Unlocker.
Summarizing the raw data using Google Gemini's language model for a quick, human-readable overview.
Sending the transformed response with the summary to a chosen endpoint, like Slack, Notion, Airtable, or a custom webhook.

What this workflow does

This automated pipeline does the following:
Scrapes Indeed company profile pages (e.g., ratings, description, reviews) using Bright Data's Web Unlocker.
Transforms the scraped content into structured JSON using n8n's built-in tools.
Summarizes and extracts meaningful insights using Google Gemini's large language model.
Forwards the summarized, formatted response to a specified webhook or app for real-time access, storage, or analysis.

Setup

Sign up at Bright Data.
Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication).
In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or proxy).
Update the search query and Bright Data zone by navigating to the Set Indeed Search Query node.
Update the Webhook Notifier with the webhook endpoint of your choice.

How to customize this workflow to your needs

This workflow is built to be flexible - whether you're a company, a market researcher, an entrepreneur, or a data analyst. Here's how you can adapt it to fit your specific use case:
Changing the data source: replace the Indeed search input with other job or business listing platforms if needed (e.g., Glassdoor, Crunchbase).
Refining the LLM prompt: tailor the Gemini prompt to transform or summarize the Indeed company information in a specific format.
Routing the output to different destinations: send summaries or the transformed response to Google Sheets, Airtable, or CRMs like HubSpot or Salesforce.
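As a rough illustration of the scraping step, the sketch below shows one way to request an Indeed page through Bright Data's Web Unlocker from plain TypeScript. The api.brightdata.com/request endpoint, the body fields, and the zone name are assumptions to verify against your own Bright Data zone settings; in the workflow itself this is an HTTP Request node with the Header Auth credential.

```typescript
// Hedged sketch: fetch an Indeed company page via Bright Data Web Unlocker.
// Endpoint, body fields, and the zone name "web_unlocker1" are assumptions to verify.
async function fetchIndeedPage(companyUrl: string): Promise<string> {
  const res = await fetch("https://api.brightdata.com/request", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.BRIGHT_DATA_API_KEY}`, // same key as the n8n Header Auth credential
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      zone: "web_unlocker1", // your Web Unlocker zone name
      url: companyUrl,       // e.g. an Indeed company profile URL
      format: "raw",         // return raw HTML for downstream parsing
    }),
  });
  if (!res.ok) throw new Error(`Web Unlocker request failed: ${res.status}`);
  return res.text(); // HTML handed to the Gemini summarization step
}
```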
by Custom Workflows AI
Introduction

This workflow offers a streamlined solution for uploading multiple files to a GitHub repository simultaneously using GitHub's REST API. It addresses a significant limitation of n8n's native GitHub node, which only supports single-file uploads at a time. By leveraging GitHub's Git Data API, this workflow creates a new Git tree containing multiple files, commits this tree, and updates the target branch—all in a single automated process.

The workflow is particularly valuable for automation scenarios that require batch file operations, such as deploying website updates, publishing documentation, or maintaining configuration files across repositories. It eliminates the need for multiple separate API calls when working with multiple files, making your automation more efficient and less prone to partial update issues. By abstracting the complexities of GitHub's Git Data API into a reusable workflow, it provides a practical solution for developers, content managers, and DevOps professionals who need to programmatically manage repository content at scale.

Who is this for?

This workflow is designed for:
Developers and DevOps engineers who need to automate file updates in GitHub repositories
Content managers who regularly publish multiple files to GitHub-hosted websites or documentation
Automation specialists looking to integrate GitHub operations into larger workflows
Teams using n8n for CI/CD processes who need to push code or configuration changes

Users should have basic familiarity with GitHub concepts (repositories, branches, commits) and should be comfortable obtaining and using GitHub Personal Access Tokens. While the workflow handles the API complexity, users should understand the fundamentals of version control to effectively utilize and customize it.

What problem is this workflow solving?

This workflow addresses several key challenges:
Limited batch operations: n8n's native GitHub node only supports uploading one file at a time, making multi-file operations cumbersome and inefficient.
API complexity: GitHub's Git Data API requires multiple sequential calls with interdependent data to create commits with multiple files, which is complex to implement manually.
Automation bottlenecks: Without this workflow, automating multi-file updates would require either multiple separate API calls (risking partial updates) or custom scripting outside of n8n.
Consistency issues: When files need to be updated together (e.g., code and corresponding documentation), this workflow ensures they're committed in a single atomic operation.

By solving these issues, the workflow enables reliable, atomic updates of multiple files, maintaining repository consistency and simplifying automation processes.

What this workflow does

Overview

This workflow uses GitHub's REST API to push multiple files to a repository in a single operation. It follows Git's internal model by:
Retrieving the current state of the repository
Creating a new tree with the files to be added or updated
Creating a new commit with this tree
Updating the branch reference to point to the new commit

Process

Initialization: The workflow starts with a manual trigger and sets up GitHub credentials and repository information.
File Content Definition: Two "Set" nodes define the content for the files to be uploaded.
Repository State Retrieval: The workflow fetches the latest commit SHA for the specified branch, then retrieves the base tree SHA from this commit.
Tree Creation: A new Git tree is created that includes both files (file1.txt and file2.txt), specifying their paths and content.
Commit Creation: A new commit is created with the specified commit message, referencing the new tree and the parent commit.
Branch Update: Finally, the branch reference is updated to point to the new commit, making the changes visible in the repository.

Setup

To use this workflow:
Import the workflow: Download the workflow JSON and import it into your n8n instance.
Create a GitHub Personal Access Token: Go to GitHub Settings → Developer Settings → Personal Access Tokens → Fine-grained tokens, and create a new token with "Contents" permission (Read and write) for your target repository.
Configure the workflow: Update the "Set Github Info" node with your GitHub Personal Access Token, your GitHub username, your repository name, the target branch (default is "main"), and a commit message.
Define file content: Modify the "File 1" and "File 2" nodes with the content you want to upload.
Adjust file paths if needed: In the "Create new tree" node, update the file paths if you want to change where the files are stored in the repository.
Save and run the workflow: Click "Test workflow" to execute the process.

How to customize this workflow to your needs

This workflow can be adapted in several ways:
Add more files: Create additional "Set" nodes for more file content, and in the "Create new tree" node, add more tree entries following the same pattern (path, mode, type, content).
Change file locations: Modify the "path" parameters in the "Create new tree" node to place files in different directories.
Dynamic file content: Replace the static content in the "File" nodes with data from other sources; use previous nodes or HTTP requests to generate file content dynamically.
Conditional file updates: Add IF nodes to determine which files should be updated based on certain conditions, or create separate branches in your workflow for different update scenarios.
Scheduled updates: Replace the manual trigger with a Schedule node to run the workflow at specific intervals, or combine it with other triggers like Webhook or database events to push files when certain events occur.
Error handling: Add Error Trigger nodes to handle potential API failures and implement notification nodes to alert you of successful pushes or failures.
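To make the API sequence concrete, here is a condensed TypeScript sketch of the same four Git Data API calls the workflow's HTTP Request nodes perform (get the branch head, create a tree, create a commit, update the ref). Owner, repo, branch, and file contents are placeholders; the routes themselves are GitHub's documented REST endpoints.

```typescript
// Sketch of the multi-file push this workflow performs via GitHub's Git Data API.
// Replace owner/repo/branch/token placeholders with your own values.
const GH = "https://api.github.com";
const headers = {
  Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
  Accept: "application/vnd.github+json",
  "Content-Type": "application/json",
};

async function gh(path: string, init: { method?: string; body?: string } = {}): Promise<any> {
  const res = await fetch(`${GH}${path}`, { headers, ...init });
  if (!res.ok) throw new Error(`${path}: ${res.status} ${await res.text()}`);
  return res.json();
}

async function pushFiles(owner: string, repo: string, branch: string) {
  // 1. Latest commit SHA on the branch, then its base tree SHA.
  const ref = await gh(`/repos/${owner}/${repo}/git/ref/heads/${branch}`);
  const parentSha = ref.object.sha;
  const parentCommit = await gh(`/repos/${owner}/${repo}/git/commits/${parentSha}`);

  // 2. New tree containing both files (mode 100644 = regular file).
  const tree = await gh(`/repos/${owner}/${repo}/git/trees`, {
    method: "POST",
    body: JSON.stringify({
      base_tree: parentCommit.tree.sha,
      tree: [
        { path: "file1.txt", mode: "100644", type: "blob", content: "Content of file 1" },
        { path: "file2.txt", mode: "100644", type: "blob", content: "Content of file 2" },
      ],
    }),
  });

  // 3. New commit pointing at the tree, with the old head as parent.
  const commit = await gh(`/repos/${owner}/${repo}/git/commits`, {
    method: "POST",
    body: JSON.stringify({ message: "Add two files", tree: tree.sha, parents: [parentSha] }),
  });

  // 4. Move the branch reference to the new commit — both files land atomically.
  await gh(`/repos/${owner}/${repo}/git/refs/heads/${branch}`, {
    method: "PATCH",
    body: JSON.stringify({ sha: commit.sha }),
  });
}
```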
by Agent Studio
Overview

This workflow helps you compare Claude 3.5 Sonnet and Gemini 2.0 Flash when extracting data from a PDF. It extracts and processes the data within a PDF in a single step, instead of calling an OCR service and then an LLM.

How it works

The initial 2 steps download the PDF and convert it to base64. This base64 string is then sent to both Claude 3.5 Sonnet and Gemini 2.0 Flash to extract information. This workflow is made to let you compare results, latency, and cost (in their dedicated dashboards).

How to use it

Set up your Google Drive if not already done.
Select a document on your Google Drive.
Modify the prompt in "Define Prompt" to extract the information you need and transform it as wanted.
Get a Claude API key and/or Gemini API key. Note that you can deactivate one of the 2 API calls if you don't want to try both.
Test the workflow.
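For orientation, here is a hedged TypeScript sketch of sending the same base64 PDF to both providers outside n8n. The request shapes follow Anthropic's document content blocks and Gemini's inline_data parts as commonly documented; treat the header names, API versions, and model IDs as assumptions to check against each provider's current docs.

```typescript
// Hedged sketch: one base64-encoded PDF sent to both Claude and Gemini for extraction.
// Model IDs, header names, and API versions are assumptions to verify.
async function askClaude(pdfBase64: string, prompt: string) {
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": process.env.ANTHROPIC_API_KEY!,
      "anthropic-version": "2023-06-01",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "claude-3-5-sonnet-latest",
      max_tokens: 1024,
      messages: [{
        role: "user",
        content: [
          { type: "document", source: { type: "base64", media_type: "application/pdf", data: pdfBase64 } },
          { type: "text", text: prompt },
        ],
      }],
    }),
  });
  return (await res.json()).content?.[0]?.text;
}

async function askGemini(pdfBase64: string, prompt: string) {
  const url = `https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=${process.env.GEMINI_API_KEY}`;
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      contents: [{
        parts: [
          { inline_data: { mime_type: "application/pdf", data: pdfBase64 } },
          { text: prompt },
        ],
      }],
    }),
  });
  return (await res.json()).candidates?.[0]?.content?.parts?.[0]?.text;
}
```

Running both with the same prompt is what lets you compare output quality side by side, while latency and cost show up in each provider's dashboard.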
by n8n Team
This workflow syncs your GitHub issues to your Notion database. Whenever a new issue is opened in your GitHub repository, it will be shown in your Notion database, syncing the status property (opened/edited/closed/deleted). In case there's no Notion database existing yet, a new one will be created automatically.

Prerequisites

Notion account and Notion credentials
GitHub account and GitHub credentials

How it works

The GitHub Trigger starts the workflow when a new issue is created in a GitHub repository.
The If node splits the workflow conditionally, depending on whether the issue is new or an update to an existing issue.
If the data is new, the Notion node creates a new database page in Notion.
If the data is not new, the Function node creates a Notion filter that finds its specific database page by issue ID.
The Switch node then conditionally routes the data into the appropriate Notion page, based on the update made upon it.
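The Function node's lookup can be pictured as a Notion database query filtered on the issue ID. Below is a hedged TypeScript sketch of that query; the "Issue ID" property name and its number type are assumptions that depend on how the database was created.

```typescript
// Hedged sketch: find the Notion page that tracks a given GitHub issue.
// The "Issue ID" property name/type is an assumption based on the workflow description.
async function findIssuePage(databaseId: string, issueId: number) {
  const res = await fetch(`https://api.notion.com/v1/databases/${databaseId}/query`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.NOTION_TOKEN}`,
      "Notion-Version": "2022-06-28",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      filter: { property: "Issue ID", number: { equals: issueId } },
    }),
  });
  const data = await res.json();
  return data.results?.[0]; // the existing page to update, if any
}
```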
by Ibrahim
Overview

This n8n workflow is designed to extract specific interests from messages in a Telegram chat and retrieve related information using the Facebook Graph API. It aims to provide a streamlined solution for parsing and analyzing user-provided interests within the Telegram platform.

Features

Interest Extraction: Automatically identifies and extracts interests from messages that start with the hashtag "#interest".
Data Retrieval: Utilizes the Facebook Graph API to retrieve information related to the extracted interests.
Structured Outputs: Presents the retrieved data in an organized format for further analysis and review.

Requirements

Operational instance of n8n (self-hosted or cloud version).
Basic understanding of n8n workflows and nodes.

Setup and Configuration

Import Workflow: Load the provided JSON workflow into your n8n instance.
Configure Telegram Trigger Node: Ensure the Telegram trigger node is set up with the appropriate credentials and webhook ID.
Configure and Test Nodes: Adjust node parameters as necessary and test the workflow to ensure proper functionality.

How it Works

Telegram Trigger: Listens for incoming messages in a specified Telegram chat.
Check Message Contents: Verifies if the message begins with the specified hashtag and is from the designated chat ID.
Extract Message: Extracts the content of the message for further processing.
Split Message: Splits the extracted message to identify the interest and remaining content.
Connect to Graph API: Utilizes the Facebook Graph API to search for information related to the extracted interest.
Split Interests into a Table: Organizes the retrieved data into a structured table format.
Get Variables: Maps the retrieved data to create a new JSON object containing specific fields related to the interest.
Create a Spreadsheet: Generates a spreadsheet file in CSV format based on the retrieved and formatted data.
Send the Spreadsheet File: Sends the generated spreadsheet file back to the original Telegram chat.

Customization

Modify the filtering conditions and fields to suit specific requirements.
Adjust the frequency of the trigger node based on preference.

Best Practices

Regularly test the workflow to ensure consistent performance.
Stay informed about any changes to external APIs that might affect the workflow's functionality.

Contributing

Your feedback and contributions are highly valued. Feel free to adapt, modify, and share enhancements with the n8n community.
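The parsing steps (Check Message Contents, Extract Message, Split Message) amount to a small piece of string handling. Here is a hedged TypeScript sketch of that logic as it might look in a Code node; the exact message shape and chat ID check are assumptions based on the description.

```typescript
// Hedged sketch of the "#interest" parsing performed by the Check/Extract/Split steps.
// Field names follow Telegram's update format; the chat ID check is an assumption.
interface TelegramMessage { chat: { id: number }; text?: string }

const ALLOWED_CHAT_ID = -1001234567890; // your designated chat (placeholder)

function extractInterest(msg: TelegramMessage): { interest: string; rest: string } | null {
  if (msg.chat.id !== ALLOWED_CHAT_ID) return null;      // designated chat only
  const text = (msg.text ?? "").trim();
  if (!text.startsWith("#interest")) return null;        // must start with the hashtag
  const [, interest = "", ...rest] = text.split(/\s+/);  // e.g. "#interest travel some note"
  return { interest, rest: rest.join(" ") };             // interest feeds the Graph API search
}
```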
by Joey D’Anna
This template is a set of building blocks to access Monday.com in ways not supported by the official Monday node.

Prerequisites

Monday account and Monday credentials.

Included are setups to:
Find a column value by the column's name (instead of a numerical index, which can change when the board structure is changed)
Find a column value by the column's ID (again, instead of using a numerical index)
Pull a board relation column and get all the related pulses
Pull an item's subitems and split them out
Upload a file to an item's files field

Setup

Create a Monday.com credential.
Update the nodes in the template to use your credential.
Copy/paste the nodes you need from this template into any other workflow.

To retrieve a column by name:
Route a Monday.com node that gets an item to the COLUMN BY NAME node.
Edit the COLUMN BY NAME node, and enter the name in the first line of code.

To retrieve a column by its ID:
Follow Monday.com's instructions to locate the column's ID.
Route a Monday.com node that gets an item to the COLUMN BY ID node.
Edit the COLUMN BY ID node, and enter the ID in the first line of code.

To retrieve all linked pulses from a Board Relation column:
Route a Monday.com node that gets an item to the GET BOARD RELATION node.
Edit the GET BOARD RELATION node to specify the column name.
All linked pulses will be retrieved by the subsequent PULL LINKEDPULSE node.

To pull all subitems from an item:
Route a Monday.com node that gets an item to the PULL SUBITEMS node.
All subitems will be retrieved by the subsequent GET EACH SUBITEM node.

To upload a file:
Replace the Convert to File node with whatever node you are using to output your binary file data.
Enable the MONDAY UPLOAD node.
If the destination column is named anything other than the default of "file", edit the MONDAY UPLOAD node and change column_id:"file" in the first Value field to match the name of your file column.
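As an illustration of what the COLUMN BY NAME and COLUMN BY ID Code nodes do, here is a hedged TypeScript sketch of looking a column value up on a Monday.com item. The column_values shape shown (id, title, text) is a common form of Monday's API response, but treat the exact field names as assumptions to verify against your API version.

```typescript
// Hedged sketch of the column lookup done by the COLUMN BY NAME / COLUMN BY ID nodes.
// Field names (id, title, text) are assumptions to check against your Monday API version.
interface MondayColumnValue { id: string; title: string; text: string | null }
interface MondayItem { id: string; name: string; column_values: MondayColumnValue[] }

function columnByName(item: MondayItem, columnName: string): MondayColumnValue | undefined {
  // Matching on the human-readable title survives board reordering, unlike numeric indexes.
  return item.column_values.find((c) => c.title === columnName);
}

function columnById(item: MondayItem, columnId: string): MondayColumnValue | undefined {
  // Matching on the stable column ID also survives column renames.
  return item.column_values.find((c) => c.id === columnId);
}
```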
by Atta
What it does

Customer support calls contain a wealth of valuable feedback and urgent issues, but manually reviewing audio files is inefficient. This workflow acts as an AI assistant for your call log, transforming unstructured audio recordings into structured, actionable data. It provides a clean summary, sentiment analysis, and a list of required actions for every call, eliminating the need for manual listening and ensuring key insights are never missed.

How it works

The workflow runs on a schedule to fully automate the call analysis process from start to finish.
Fetch New Recordings: The workflow triggers on a schedule (e.g., every 5 minutes), searches a designated Google Drive folder for new call recordings, and downloads any new files it finds.
Transcribe Audio: Each audio file is sent to the ElevenLabs API to be converted from speech to a text transcript. The result is then formatted into a conversational, multi-speaker format.
AI-Powered Analysis: The transcript is passed to a Google Gemini node, which is prompted to return a structured JSON object. This JSON contains a complete analysis of the call, including speaker identification (agent_name, client_name), a summary, the client_sentiment, a call_topic, a department_tag, and a list of action_items.
Log the Results: The complete, structured analysis output from Gemini is appended as a new row in a Google Sheet, creating a centralized log with all the extracted call details and the full transcript.
Take Action: The workflow uses conditional logic based on the detected sentiment:
Negative Sentiment: If a call was negative, an immediate alert containing the call summary and action items is sent to a manager's group on Telegram.
Positive Sentiment: If a call was positive, a kudos message is sent to the support team's Telegram channel to celebrate good work.
File Management: After processing, the original audio file is automatically moved to a separate "Processed" folder in Google Drive to ensure it isn't analyzed again.

Setup Instructions

To configure this workflow, you will need to set up your file storage in Google Drive, create a Google Sheet for logging, and configure credentials for all connected services.

Required Credentials

Google: You will need Google OAuth2 credentials that have permission for Google Drive, Google Sheets, and the Google AI (Gemini) APIs.
ElevenLabs: Sign up for an account at ElevenLabs and get your API Key. You will add this directly into the HTTP Request node for transcription.
Telegram: Create a bot using the BotFather in Telegram to get your Bot Token. You will also need the specific Chat ID for the managers' channel and the team's channel.

Step-by-Step Configuration

Google Drive: Create two folders in your Google Drive: one named "Company - Support Call Recordings" and another named "Processed Recordings". Copy the unique Folder ID from the URL for each and paste it into the respective Google Drive nodes.
Google Sheets: Create a new Google Sheet to log the results. In the first row, create the following headers exactly as written: Recording File, Sentiment, Department, Topic, Agent, Client, Summary, Actions, and Fulltext. Copy the Sheet ID from the URL and paste it into the "Log Recording Analysis" (Google Sheets) node.
ElevenLabs Node: In the "Convert Speech To Text" (HTTP Request) node, make sure the URL is set to the correct ElevenLabs API endpoint for speech-to-text. Add your ElevenLabs API Key to the authentication header.
Telegram Nodes: In the "Send Alert To Managers" node, enter the Chat ID for your managers' group. In the "Send Kudos to Team" node, enter the Chat ID for the main team channel.

How to Adapt the Template

This workflow is a powerful starting point. Based on your specific needs, you can customize the inputs, the AI analysis, the logging method, and the final actions.

Input Method

Change File Source: Instead of Google Drive, you can adapt the workflow to fetch recordings from other services like Dropbox, OneDrive, or a custom FTP server.
Use a Webhook: Replace the Schedule Trigger with a Webhook Trigger to process calls in real-time as they are added from your call software (if it supports webhooks).

Final Actions

Create Service Tickets: This is a key area for customization. Replace the Telegram nodes with nodes for ticketing systems. For a negative call, you can automatically create a high-priority ticket in Jira, Zendesk, or ServiceNow.
Create Tasks: For calls with specific action items, use a node like Asana, Trello, or Todoist to automatically create a task and assign it to the correct team member.
Send Email Notifications: Use the Send Email node to dispatch summaries and alerts to stakeholders who are not on Telegram.

Logging and Analysis

Log to a Database: Instead of Google Sheets, you can use a Postgres, MySQL, or data warehouse node to log the structured data for more advanced business intelligence and dashboarding.
Customize the AI Prompt: The prompt in the Google Gemini node is the "brain" of the operation. It specifically instructs the AI to return a JSON object with a predefined structure. To change what data is extracted, you can modify this structure in the prompt. For example, you could add a new key-value pair like "competitor_mentioned": "Name of competitor if mentioned, otherwise null" to the JSON structure. The current workflow asks the AI to populate a JSON object like this:

{
  "speaker_identification": {
    "agent": "speaker_id",
    "agent_name": "The agent's name",
    "client": "client_id",
    "client_name": "The client's name"
  },
  "summary": "A concise summary.",
  "client_sentiment": "Positive, Negative, or Neutral",
  "call_topic": "A brief phrase for the topic.",
  "department_tag": "The most relevant department.",
  "action_items": [
    "A list of actionable tasks."
  ]
}

Change AI or STT Service: You can swap out the Google Gemini node for an OpenAI node, or change the HTTP Request node to use a different transcription service like AssemblyAI or Deepgram.
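For the transcription step, a hedged TypeScript sketch of the ElevenLabs speech-to-text call is shown below. The /v1/speech-to-text path, the xi-api-key header, and the model_id and diarize values are assumptions to check against ElevenLabs' current API docs before mirroring them in the HTTP Request node.

```typescript
// Hedged sketch: send a downloaded recording to ElevenLabs for transcription.
// Endpoint path, header name, model_id, and diarize flag are assumptions to verify.
import { readFile } from "node:fs/promises";

async function transcribeRecording(filePath: string): Promise<string> {
  const audio = await readFile(filePath);
  const form = new FormData();
  form.append("file", new Blob([audio]), "call.mp3");
  form.append("model_id", "scribe_v1"); // assumed speech-to-text model name
  form.append("diarize", "true");       // assumed flag for multi-speaker output

  const res = await fetch("https://api.elevenlabs.io/v1/speech-to-text", {
    method: "POST",
    headers: { "xi-api-key": process.env.ELEVENLABS_API_KEY! },
    body: form,
  });
  if (!res.ok) throw new Error(`Transcription failed: ${res.status}`);
  const data = await res.json();
  return data.text; // handed to the Gemini analysis prompt
}
```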
by Sona Labs
Automatically identify ICP matches by enriching basic company records with Sona Enrich data—combining web scraping, AI analysis, and the structured attributes that define your ideal customer. Import company domains from a Google Sheet, automatically analyze their websites with AI, enrich them with firmographic data via Sona Enrich, and sync the results to HubSpot—so you can quickly discover and target your ideal customers.

How it works

Step 1: Data Input & Web Scraping
Reads company domains from your Google Sheet
Scrapes each website's content via HTTP requests
Extracts and cleans HTML content
Removes navigation, footers, and noise

Step 2: AI Analysis
Sends cleaned content to the OpenAI Chat Model
Extracts structured company intelligence (industry, positioning, features, personas)
Captures and analyzes pricing, pros/cons, and value propositions
Aggregates all AI results into a standardized format
Advanced users: you can modify the data that's generated and then add custom fields to HubSpot

Step 3: HubSpot Preparation
Creates custom fields in HubSpot CRM
Prepares AI-extracted data for import
Splits aggregated data into individual company records
Ready for batch processing

Step 4: Enrich & Sync to HubSpot
Loops through each company one by one
Enriches with the Sona API (firmographics, revenue, employees, funding, and more)
Creates a company record in HubSpot
Formats and populates all custom fields
Combines AI insights + Sona data in one complete profile

What you'll get

The workflow enriches each company record with:
Web-Scraped Intelligence: Business descriptions, features, and positioning directly from their website
AI-Analyzed Insights: Value propositions, target personas, pricing models, and competitive advantages interpreted by AI
Firmographic Data: Company size, employee count, revenue estimates, headquarters location, and more via Sona Enrich
Technographic Data: Technology stack, platforms, and tools the company uses
Industry Classification: Precise industry categorization and market type (B2B/B2C)
Funding & Growth: Investment rounds, funding status, and growth indicators
Custom HubSpot Properties: All data automatically mapped and synced to your CRM for immediate use

Why use this

Complete intelligence gathering: Combines three powerful data sources (web scraping, AI, and Sona enrichment) for maximum insight depth
Personalize at scale: Leverage actual company intelligence to craft relevant, informed outreach that resonates
Intelligent segmentation: Build precise account lists by industry, tech stack, business model, or company size
Accelerate research: Eliminate hours of manual company investigation—save 15-30 minutes per prospect
Improve conversion: Engage prospects with context-rich conversations that demonstrate deep understanding
Enhanced lead scoring: Build sophisticated scoring models with comprehensive firmographic and technographic signals
Automated updates: Keep HubSpot records current with scheduled enrichment runs (daily/weekly)

Setup instructions

Before you start, you'll need:
A Google Sheet with company websites (column named "Website Domain")
An OpenAI API key for AI analysis (sign up here)
Sona API credentials (get access here)
An app token from HubSpot, obtained by creating a legacy app:
Go to HubSpot Settings > Integrations > Legacy Apps
Click Create Legacy App
Select Private (for one account)
In the scopes section, enable the following permissions: crm.schemas.companies.write, crm.objects.companies.write, crm.schemas.companies.read
Click Create
Copy the access token from the Auth tab
An n8n cloud or self-hosted instance

Configuration steps:
Prepare your data: Create a Google Sheet with a "Website Domain" column and add 2-3 test companies (e.g., example.com)
Connect Google Sheets: In the "Get row(s) in sheet" node, authenticate and select your spreadsheet and sheet name
Configure web scraping: Update the HTTP Request node with your preferred scraping method or data source URL
Set up AI Agent: Add your OpenAI API key and customize the extraction prompt to define which company fields you want (industry, personas, features, etc.)
Create HubSpot custom fields: Review the "Create Custom HubSpot Fields" node and adjust property names to match your CRM structure
Add Sona credentials: In the "Sona Enrich" node within the loop, authenticate with your Sona API key
Connect HubSpot: Authenticate in both "Create a Company" nodes using your HubSpot API key or OAuth2
Map enriched data: In the "Format Custom Properties" node, configure how Sona and AI data maps to your HubSpot fields
Test with sample data: Run the workflow with 2-3 test companies and verify records appear correctly in HubSpot with all custom properties populated
Add error handling: Configure notifications for failed enrichments or API errors (optional but recommended)
Scale and automate: Process your full company list, then optionally add a Schedule Trigger for automatic daily or weekly enrichment
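As a reference for the "Create a Company" step, here is a hedged TypeScript sketch using HubSpot's CRM v3 companies endpoint with the legacy-app access token described above. The custom property names shown (e.g., ai_value_proposition) are illustrative placeholders and must match the properties you actually create in the "Create Custom HubSpot Fields" step.

```typescript
// Hedged sketch: create a HubSpot company with enriched + AI-derived properties.
// The custom property names below are illustrative placeholders, not real defaults.
interface EnrichedCompany {
  domain: string;
  name: string;
  industry?: string;
  valueProposition?: string; // from the AI analysis step
  employeeCount?: number;    // from Sona Enrich
}

async function createHubSpotCompany(c: EnrichedCompany) {
  const res = await fetch("https://api.hubapi.com/crm/v3/objects/companies", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.HUBSPOT_ACCESS_TOKEN}`, // legacy app access token
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      properties: {
        domain: c.domain,
        name: c.name,
        industry: c.industry ?? "",
        ai_value_proposition: c.valueProposition ?? "", // hypothetical custom property
        numberofemployees: c.employeeCount?.toString() ?? "",
      },
    }),
  });
  if (!res.ok) throw new Error(`HubSpot create failed: ${res.status} ${await res.text()}`);
  return res.json();
}
```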
by Lucas Peyrin
How it works

This workflow automates your initial hiring pipeline by creating an AI-powered CV scanner. It collects job applications through a web form, uses AI to analyze the candidate's CV against your job description, and neatly organizes the results in a Google Sheet.

Here's the step-by-step process:
The Application Form: A Form Trigger provides a public web form for candidates to submit their name, email, and CV (as a PDF).
Initial Logging: As soon as an application is submitted, the candidate's name and email are added to a Google Sheet. This ensures every applicant is logged, even if a later step fails.
CV Text Extraction: The workflow uses Mistral's OCR model to accurately extract all the text from the uploaded CV PDF.
AI Analysis: The extracted text is sent to Google Gemini. A detailed prompt instructs the AI to act as a hiring assistant, scoring the CV against the specific requirements of your job role and providing a detailed explanation for its score.
Structured Output: A JSON Output Parser ensures the AI's analysis is returned in a clean, structured format, making the data reliable.
Final Record: The AI-generated qualification score and explanation are added to the candidate's row in the Google Sheet, giving you a complete, analyzed list of applicants.

Set up steps

Setup time: ~15 minutes. You'll need API keys for Mistral and Google AI, and to connect your Google account.

Get Your Mistral API Key:
Visit the Mistral Platform at console.mistral.ai/api-keys.
Create and copy your API key.
In the workflow, go to the Extract CV Text node, click the Credential dropdown, and select + Create New Credential. Paste your key into the API Key field and Save.

Get Your Google AI API Key:
Visit Google AI Studio at aistudio.google.com/app/apikey.
Click "Create API key in new project" and copy the key.
In the workflow, go to the Gemini 2.5 Flash Lite node, click the Credential dropdown, and select + Create New Credential. Paste your key into the API Key field and Save.

Connect Your Google Account:
Select the Create 'CVs' Spreadsheet node. Click the Credential dropdown and select + Create New Credential to connect your Google account.
Repeat this for the Log Candidate Submission and Add CV Analysis nodes, selecting the credential you just created.

Create Your Spreadsheet:
Click the "play" icon on the Start Here node to run it. This will create a new Google Sheet in your Google Drive named "CVs" with the correct columns.

Customize the Job Role:
Go to the AI Qualification node. In the Text parameter, find the job_requirements section and replace the example job description with your own. Be as detailed as possible for the best results.

Start Screening!
Activate the workflow using the toggle at the top right.
Go to the Application Form node and click the "Open Form URL" button.
Fill out the form with a test application and upload a sample CV.
Check your Google Sheet to see the AI's analysis appear within moments.
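To give a sense of what the JSON Output Parser is enforcing, here is a hedged TypeScript sketch of the kind of schema validation the AI Qualification step relies on. The field names (score, explanation) are assumptions based on the description; match them to the actual output parser configuration in the workflow.

```typescript
// Hedged sketch: the kind of structured result the JSON Output Parser enforces.
// Field names are assumptions based on the template description.
interface CvAnalysis {
  score: number;        // e.g. 0-100 qualification score against the job requirements
  explanation: string;  // the AI's reasoning behind the score
}

function parseCvAnalysis(raw: string): CvAnalysis {
  const data = JSON.parse(raw);
  if (typeof data.score !== "number" || typeof data.explanation !== "string") {
    throw new Error("AI response did not match the expected schema");
  }
  // Clamp to a sane range before writing the row to the Google Sheet.
  return { score: Math.max(0, Math.min(100, data.score)), explanation: data.explanation };
}
```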
by Khairul Muhtadin
Automatically extract job listings from any website URL, format them with AI, and publish directly to WordPress. Just send a URL via Telegram, and watch as the workflow scrapes the job details, enhances the content with GPT, and creates a polished post on your site.

💡 Why Use Job Repost?

⏰ Save countless hours: Automatically extract, process, and publish job offers from any website, freeing your time from repetitive tasks.
✅ Eliminate human errors: Say goodbye to typos and missed fields — every job post is validated before going live.
📈 Boost engagement: Fresh, well-structured job listings attract more candidates, improving your site's reach and authority.
🚀 Stay ahead: Leveraging AI with GPT means your content is not just automated but polished and SEO-friendly — the digital assistant you never knew you needed.

⚡ Perfect For

Job board managers: Want to aggregate listings from multiple sources with minimal effort
Recruiters & HR teams: Who need to streamline job posting workflows without technical hassles
Content creators & marketers: Looking to automate publishing while maintaining style and SEO standards

🔧 How It Works

| Step | Process | Description |
|------|---------|-------------|
| 📱 | Trigger | Send a job URL via Telegram bot to initiate the process |
| 🔥 | Extract | Firecrawl API scrapes and extracts clean content from the provided URL |
| 📎 | Process | Job data is extracted via AI, text split and cleaned, job categories and types mapped to your system |
| 🤖 | Smart Logic | GPT crafts formatted job posts, intelligent validation ensures all key data is present, default values fill in the blanks if necessary |
| 💌 | Output | Posts automatically published to WordPress with company logos uploaded, and success or error notifications sent via Telegram |
| 🗂 | Storage | Uses Supabase vector store for managing document embeddings, ensuring quick lookup and reference compliance |

🔐 Quick Setup

Import the provided JSON file into your n8n instance
Add credentials: Firecrawl API key, Google Drive OAuth2 (for RAG storage), OpenAI API, WordPress API, Telegram API, Supabase
Customize: Telegram bot token, WordPress URLs, default images and category mappings if needed
Update: URLs and API tokens where placeholders are used
Test: Send a job URL to your Telegram bot to verify accurate extraction and posting

🧩 You'll Need

✅ Active n8n instance
✅ Firecrawl account with API access
✅ Google Drive account for RAG document storage
✅ OpenAI account with GPT API access
✅ WordPress site with the autojob plugin and API enabled
✅ Telegram bot for URL submission and notifications
✅ Supabase account for vector store management

🛠️ Level Up Ideas

🌍 Add multi-language support to expand global reach
🔗 Support batch URL processing for multiple jobs at once
💬 Integrate Slack or email notifications for wider team alerts
🎯 Use more AI nodes to summarize or rate job offers for quality control
🔄 Schedule periodic cleanup of the vector store for performance optimization
📊 Add analytics tracking for published job performance

🧠 Nodes Used

Core Components:
Firecrawl HTTP Request (web scraping and content extraction)
Google Drive (RAG document storage)
Supabase Vector Store
OpenAI (embeddings, GPT extraction)
Code nodes for mapping categories
Telegram Trigger & Message
HTTP Request (for the WordPress API and image uploads)

Made by: Khaisa Studio
Tags: automation, recruitment, job-posting, wordpress, AI, web-scraping, firecrawl
Category: Human Resources, Recruitment, WordPress, Scraping
Need a custom workflow? Contact me on LinkedIn or Web.
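For the publishing step, here is a hedged TypeScript sketch of creating a post through WordPress's standard REST API with an application password. The site URL and credentials are placeholders, and the workflow's own WordPress node (and the autojob plugin's endpoints, if any) may differ from this plain wp/v2 call.

```typescript
// Hedged sketch: publish a GPT-formatted job listing via the WordPress REST API.
// Site URL and application-password credentials are placeholders to replace.
interface JobPost { title: string; html: string; categoryIds: number[] }

async function publishJob(post: JobPost) {
  const auth = Buffer.from(`${process.env.WP_USER}:${process.env.WP_APP_PASSWORD}`).toString("base64");
  const res = await fetch("https://your-site.example/wp-json/wp/v2/posts", {
    method: "POST",
    headers: {
      Authorization: `Basic ${auth}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      title: post.title,
      content: post.html,           // GPT-formatted job description
      status: "publish",
      categories: post.categoryIds, // mapped by the workflow's Code node
    }),
  });
  if (!res.ok) throw new Error(`WordPress publish failed: ${res.status}`);
  return res.json(); // contains the new post ID and link for the Telegram notification
}
```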
by Trung Tran
Decodo Scraper API Workflow Template (n8n Automation — Amazon Book Purchase Report)

Watch the demo video below:

> This workflow demos how to use the Decodo Scraper API to crawl any public web page (headless JS, device emulation: mobile/desktop/tablet), extract structured product data from the returned HTML, generate a purchase-ready report, and automatically deliver it as a Google Doc + PDF to Slack/Drive.

Who's it for

Creators / Analysts who need quick product lists (books, gadgets, etc.) with prices/ratings.
Ops & Marketing teams building weekly "top picks" reports.
Engineers validating the Decodo Scraper API + LLM extraction pattern before scaling.

How it works / What it does

Trigger – Manually run the workflow.
Edit Fields (manual) – Provide inputs: targetUrl (e.g., an Amazon category/search/listing page), deviceType (desktop | mobile | tablet), and optionally maxItems, notes, reportTitle, reportOwner.
Scraper API Request (HTTP Request → POST) – Calls the Decodo Scraper API with the URL to crawl, headless JS enabled, device emulation (UA + viewport), and optional waitFor / executeJS to ensure late-loading content is captured.
HTML Response Parser (Code/Function or HTML node) – Pulls the HTML string from the Decodo response and normalizes it (strip scripts/styles, collapse whitespace).
Product Analyzer Agent (LLM + Structured Output Parser) – Prompts an LLM to extract structured "book" objects from the HTML. The Structured Output Parser enforces a strict JSON schema and drops malformed items.
Build 📚 Book Purchase Report (Code/LLM) – Converts the JSON array into a Markdown (or HTML) report with an executive summary (top picks, average price/rating), a table of items (rank, title, author, price, rating, link), a "Recommended to buy" shortlist (rules configurable), and notes / owner / timestamp.
Configure Google Drive Folder (manual) – Choose/create a Drive folder for output artifacts.
Create Document File (Google Docs API) – Creates a Doc from the generated Markdown/HTML.
Convert Document to PDF (Google Drive export) – Exports the Doc to PDF.
Upload report to Slack – Sends the PDF (and/or Doc link) to a chosen Slack channel with a short summary.

How to set up

1. Prerequisites
n8n (self-hosted or Cloud)
Decodo Scraper API key
OpenAI (or compatible) API key for the Analyzer Agent
Google Drive/Docs credentials (OAuth2)
Slack Bot/User token (files:write, chat:write)

2. Environment variables (recommended)
DECODO_API_KEY
OPENAI_API_KEY
DRIVE_FOLDER_ID (optional default)
SLACK_CHANNEL_ID

3. Nodes configuration (high level)
Edit Fields (Set node)
Scraper API Request (HTTP Request → POST)
HTML Response Parser (Code node)
Product Analyzer Agent
Build Book Purchase Report (Code/LLM)
Create Document File
Convert to PDF
Upload to Slack

Requirements

Decodo: Active API key and endpoint access. Be mindful of concurrency/rate limits.
Model: GPT-4o/4.1-mini or similar for reliable structured extraction.
Google: OAuth client (Docs/Drive scopes). Ensure n8n can write to the target folder.
Slack: Bot token with files:write + chat:write.

How to customize the workflow

Target site: Change targetUrl to any public page (category, search, or listing). For other domains (not Amazon), tweak the LLM guidance (e.g., price/label patterns).
Device emulation: Switch deviceType to mobile to fetch mobile-optimized markup (often simpler DOMs).
Late-loading pages: Adjust waitFor.selector or use waitUntil: "networkidle" (if supported) to ensure full content loads.
Client-side JS: Extend executeJS if you need to interact (scroll, click "next", expand sections). You can also loop over pagination by iterating URLs.
Extraction schema: Add fields (e.g., discount_percent, bestseller_badge, prime_eligible) and update the Structured Output schema accordingly.
Filtering rules: Modify the recommendation logic (e.g., min ratings count, price bands, languages).
Report branding: Add a logo, cover page, and footer with company info; switch to HTML + inline CSS for richer Docs formatting.
Destinations: Besides Slack & Drive, add Email, Notion, Confluence, or a database sink.
Scheduling: Add a Cron trigger for weekly/monthly auto-reports.
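To illustrate the Structured Output Parser and recommendation steps, here is a hedged TypeScript sketch of the kind of schema validation and shortlist rule the Product Analyzer Agent relies on. The field names mirror the report columns listed above (title, author, price, rating, link); the exact schema and thresholds in the workflow may differ.

```typescript
// Hedged sketch: validate the "book" objects the LLM extracts from the scraped HTML.
// Field names mirror the report columns described above; adjust to your actual schema.
interface Book {
  title: string;
  author: string;
  price: number;          // numeric price in the page's currency
  rating: number;         // e.g. 4.6 out of 5
  ratings_count?: number;
  link: string;           // product URL
}

function keepWellFormedBooks(raw: unknown[]): Book[] {
  // Drop malformed items, as the Structured Output Parser does in the workflow.
  return raw.filter((b): b is Book => {
    const x = b as Partial<Book>;
    return typeof x.title === "string" &&
           typeof x.author === "string" &&
           typeof x.price === "number" && x.price > 0 &&
           typeof x.rating === "number" && x.rating >= 0 && x.rating <= 5 &&
           typeof x.link === "string";
  });
}

// Example "Recommended to buy" shortlist rule (configurable, as noted above).
const recommend = (books: Book[]) =>
  books.filter((b) => b.rating >= 4.3 && (b.ratings_count ?? 0) >= 100).slice(0, 5);
```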