by Joseph LePage
This n8n workflow integrates Tavily's search and extract APIs with AI summarization capabilities to process web content efficiently.

**Quick Setup**
- Get your Tavily API key from https://app.tavily.com/home
- Replace tvly-YOUR_API_KEY in the "Tavily API Key" node
- Connect your OpenAI credentials to the "OpenAI Chat Model" node
- Deploy the workflow and start the chat trigger

**Core Features**

Search & Extract
- Intelligent web searching with relevance filtering
- Automated content extraction from top results
- AI-powered content summarization in markdown format

User Interaction
- Chat-based search topic input
- Real-time processing pipeline
- Structured markdown output

The workflow demonstrates a practical implementation of Tavily's API endpoints, handling the complete process from search to summarization in a single automated pipeline.
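For orientation, here is a minimal sketch of the two Tavily calls the workflow's HTTP Request nodes make, written as plain Node.js (18+). The endpoint paths and body fields follow Tavily's public REST docs, but verify them against your API version; newer versions accept the key via a Bearer authorization header instead of the api_key body field.

```javascript
// Sketch of the search -> extract pipeline behind this workflow's HTTP Request nodes.
const TAVILY_API_KEY = "tvly-YOUR_API_KEY"; // placeholder, as in the template

async function searchAndExtract(topic) {
  // 1) Search: returns ranked results with relevance scores
  const searchRes = await fetch("https://api.tavily.com/search", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ api_key: TAVILY_API_KEY, query: topic, max_results: 3 }),
  }).then((r) => r.json());

  // 2) Extract: pull full page content from the top result URLs
  const urls = searchRes.results.map((r) => r.url);
  const extractRes = await fetch("https://api.tavily.com/extract", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ api_key: TAVILY_API_KEY, urls }),
  }).then((r) => r.json());

  return extractRes; // raw page content, handed to the OpenAI node for summarization
}
```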
by Danger
**How it works**
This meta-workflow intelligently scans all your active workflows in n8n, identifies those that contain Webhook nodes, and automatically generates a Swagger (OpenAPI) specification from them. The resulting Swagger document reflects all endpoints exposed by your Webhook nodes, making it easier to:
- Visualize your API structure
- Share your endpoints
- Integrate with tools like Postman or Swagger UI

**Enhanced Parameter Support**
If you want the Swagger document to reflect request parameters (e.g., query or body fields), you can annotate your Webhook nodes using the Note section. When configured properly, these annotations enrich your Swagger documentation with parameter names, types, and descriptions.

**Setup Steps**
1. Add WebhookDocs to n8n: import the WebhookDocs JSON file into your n8n instance.
2. Activate WebhookDocs (you can also use the test endpoint).
3. Annotate Webhook nodes (optional but recommended): to enable parameter documentation, open the Note section of each Webhook node and add annotations in the following format (a parsing sketch follows this section):
   //@body field_name string description
   //@query field_name string description
4. Open the page https://n8n.youristance.com/webhook/swagger (replacing the domain with your own n8n instance URL).
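To make the annotation format concrete, here is a hedged sketch of how a note could map to Swagger parameter objects. The regex and output shape are illustrative only; the actual WebhookDocs parser and its output may differ.

```javascript
// Illustrative parser: turn //@body and //@query note lines into Swagger-style
// parameter objects. Swagger 2.0 uses in:"body"/"query"; OpenAPI 3 moves body
// fields into requestBody, so adjust for your target spec version.
const note = `
//@query limit number maximum rows to return
//@body customer_id string internal customer identifier
`;

const params = note
  .split("\n")
  .map((line) => line.match(/^\/\/@(body|query)\s+(\S+)\s+(\S+)\s+(.*)$/))
  .filter(Boolean)
  .map(([, location, name, type, description]) => ({
    name,
    in: location,
    type,
    description,
  }));

console.log(JSON.stringify(params, null, 2));
// => [{ "name": "limit", "in": "query", "type": "number", ... }, ...]
```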
by Solomon
This n8n template demonstrates how to obtain token usage from AI Agents and place the data into a spreadsheet that calculates the estimated cost of each execution. Obtaining token usage from AI Agents is tricky because the agent's output doesn't include all the data from tool calls. This workflow taps into the workflow execution metadata to extract token usage information. It works well with OpenAI, Google, and Anthropic; other LLM providers might need small tweaks.

**How it works**
- The AI Agent executes and then calls a subworkflow to calculate the token usage (see the sketch below).
- The data is stored in Google Sheets.
- The spreadsheet has formulas that calculate the estimated cost of the execution.

**How to use**
- The AI Agent is used as an example. Feel free to replace it with other agents you have.
- Call the subworkflow AFTER all the other branches have finished executing.

**Requirements**
- LLM account (OpenAI, Gemini, ...) for API usage
- Google Drive and Sheets credentials
- n8n API key for your instance
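As a hedged sketch of the subworkflow's core idea, the standalone Node.js (18+) script below fetches the finished execution through the n8n public REST API and walks the run data for every tokenUsage block the LLM nodes emitted. Exact property names vary by n8n version and provider, hence the generic recursive walk; the instance URL and API key are placeholders.

```javascript
// Fetch the parent execution, including its run data, via the n8n public API.
const executionId = process.argv[2]; // the parent execution's id
const res = await fetch(
  `https://your-n8n-instance/api/v1/executions/${executionId}?includeData=true`,
  { headers: { "X-N8N-API-KEY": "YOUR_N8N_API_KEY" } }
);
const execution = await res.json();

// Recursively collect every tokenUsage object, wherever the provider nested it.
function collectUsage(node, acc = []) {
  if (node && typeof node === "object") {
    if (node.tokenUsage) acc.push(node.tokenUsage);
    for (const value of Object.values(node)) collectUsage(value, acc);
  }
  return acc;
}

const totals = collectUsage(execution.data?.resultData?.runData).reduce(
  (sum, u) => ({
    promptTokens: sum.promptTokens + (u.promptTokens ?? 0),
    completionTokens: sum.completionTokens + (u.completionTokens ?? 0),
  }),
  { promptTokens: 0, completionTokens: 0 }
);
console.log(totals); // one row per execution, appended to the Google Sheet
```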
by Richard Uren
**Task**
Read a list of customers from a Google Sheet and create them in Shopify using Shopify's Admin API (GraphQL).

**Why?**
- Generate test users for development stores.
- Migrate customers from other platforms.
- Easy intro to Shopify's GraphQL API.

**Setup**

Setting up Google Sheets access
Follow the instructions in the n8n docs for granting OAuth2 access to Google services. You'll need to grant API access to Google Sheets and Google Drive (the latter to list available sheets).

Setting up Shopify access
Shopify's Admin API uses header auth with a key of X-Shopify-Access-Token and, as the value, your Shopify access token, which starts with shpat_.

How to generate a Shopify access token
To generate a Shopify access token, create an app, grant the app the necessary scopes, then generate a token. From inside a store:
1. Click Settings (nav link).
2. Click Apps and sales channels (nav link).
3. Click Develop apps (button).
4. Click Create app (button) and give the app a name.
5. Click Configure Admin API scopes (button). At a minimum grant the read_customers and write_customers scopes; grant additional scopes if you plan on accessing other parts of the API.
6. Click Save.

To generate the token:
1. Click Install app (button), then click Install in the dialog that pops up.
2. Click Reveal token once (button) and copy the token into a password vault or somewhere secure.

**Template Updates**
To test this out you'll need to make the following changes:
1. Create a header credential where the key is X-Shopify-Access-Token and the value is your Shopify access token (it starts with shpat_).
2. In the GraphQL node, change the endpoint URL to your store, e.g. https://{your store goes here}.myshopify.com/admin/api/2025-04/graphql.json (a mutation sketch follows this section).

**Google Sheet Structure**
Columns can be in any order, because the rows are mapped to fields in a JSON object. n8n treats the first row in the sheet as column names, so at a minimum use the column names below in row 1 of your sheet:
- first_name: any string
- last_name: any string
- email: valid email
- mobile_phone: international mobile phone format with no spaces, e.g. +61414708406 (Shopify will reject anything else)

Example CSV
"first_name","last_name","email","mobile_phone"
"Bob","Smith","bob@example.com","+61414999999"
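For reference, here is a hedged Node.js (18+) sketch of the call the workflow's GraphQL node performs per row. The customerCreate mutation and CustomerInput fields follow Shopify's Admin API docs; double-check them against the 2025-04 API reference. STORE and TOKEN are placeholders.

```javascript
const STORE = "your-store";   // your myshopify.com subdomain
const TOKEN = "shpat_...";    // your Shopify access token

const mutation = `
  mutation customerCreate($input: CustomerInput!) {
    customerCreate(input: $input) {
      customer { id email }
      userErrors { field message }
    }
  }
`;

// One row from the Google Sheet, mapped to CustomerInput fields
const row = {
  first_name: "Bob",
  last_name: "Smith",
  email: "bob@example.com",
  mobile_phone: "+61414999999",
};

const res = await fetch(
  `https://${STORE}.myshopify.com/admin/api/2025-04/graphql.json`,
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Shopify-Access-Token": TOKEN,
    },
    body: JSON.stringify({
      query: mutation,
      variables: {
        input: {
          firstName: row.first_name,
          lastName: row.last_name,
          email: row.email,
          phone: row.mobile_phone,
        },
      },
    }),
  }
);
console.log(await res.json()); // check userErrors for rejected rows (e.g., bad phone format)
```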
by Jan Willem Altink
**Supabase Storage File Upload Workflow** (works with self-hosted Supabase)

**How it works**
- Accepts file data (MIME type, filename, base64 content) from other workflows
- Automatically routes files to the appropriate storage bucket based on file type (images, audio, video, documents)
- Uploads files to Supabase Storage using the REST API
- Generates secure signed URLs for file access with 30-day expiration
- Returns structured success/error responses for downstream processing

**Set up steps**
- Configure Supabase API credentials in n8n
- Create storage buckets in your Supabase project (image-files, audio-files, video-files, document-files), or choose your own structuring system
- Replace URL paths with your own
- Test the workflow using the included form trigger
- Remove the test form and integrate with your main workflows

Reference: Supabase Storage Documentation
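As a hedged sketch (Node.js 18+), the two Supabase Storage REST calls the workflow makes look roughly like this: an upload into the routed bucket, then a signed-URL request. Endpoint paths follow Supabase's storage API docs; verify them against your self-hosted version. SUPABASE_URL and SERVICE_KEY are placeholders.

```javascript
const SUPABASE_URL = "https://your-project.supabase.co";
const SERVICE_KEY = "YOUR_SERVICE_ROLE_KEY";

async function uploadAndSign(bucket, filename, base64Content, mimeType) {
  // 1) Upload the decoded file into the routed bucket
  const upload = await fetch(
    `${SUPABASE_URL}/storage/v1/object/${bucket}/${filename}`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${SERVICE_KEY}`,
        "Content-Type": mimeType,
      },
      body: Buffer.from(base64Content, "base64"),
    }
  );
  if (!upload.ok) return { success: false, error: await upload.text() };

  // 2) Create a signed URL valid for 30 days (expressed in seconds)
  const sign = await fetch(
    `${SUPABASE_URL}/storage/v1/object/sign/${bucket}/${filename}`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${SERVICE_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ expiresIn: 60 * 60 * 24 * 30 }),
    }
  );
  const { signedURL } = await sign.json(); // returned as a relative path
  return { success: true, url: `${SUPABASE_URL}/storage/v1${signedURL}` };
}
```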
by WeblineIndia
**Smart Document Parser for Invoices, Logs or Sensor Reports (PDF/Image to Google Sheets)**

This n8n workflow automatically parses documents such as invoices, sensor logs, or structured PDFs/images (including scanned docs or CSVs), extracts key fields like totals, dates, and customer/vendor info using OCR and AI, and writes the structured output into Google Sheets.

**Who's it for**
- Finance or Ops teams automating invoice processing
- SaaS platforms parsing uploaded reports or documents
- Anyone needing a no-code backend for PDF/image/CSV document parsing
- AI-powered data capture pipelines

**How it works**
1. A Webhook Trigger receives file uploads (/uploadDoc).
2. A Switch node checks the file type: images go through Tesseract OCR, PDFs through a PDF parser, and CSVs are extracted as-is.
3. The extracted text is passed to a Google Gemini (or Gemini Flash) model, where a prompt extracts fields like invoice_id, total, customer_name, etc.
4. The JSON string is parsed and cleaned (see the sketch below).
5. The data is appended to Google Sheets using appendOrUpdate.

**How to set up**
1. Create a Google Sheet with columns like: invoice_id, invoice_date, due_date, customer_name, vendor_name, subtotal, tax_total, total, currency.
2. Connect Google Sheets OAuth and Google Gemini (PaLM API key) for LLM parsing.
3. Deploy the webhook endpoint: /uploadDoc.
4. Upload sample files (PDFs, images, CSVs) to test.
5. Review and map sheet columns in the Invoice Data node.

**Requirements**

| Tool | Purpose |
| --- | --- |
| n8n | Automation framework |
| Google Sheets | To store structured output |
| Tesseract OCR | For scanned image text extraction |
| Google Gemini | For natural language parsing |

**How to customize**
- Add extraction for line items using structured prompts.
- Change the prompt to extract sensor readings, log types, or custom keys.
- Add support for other file types (e.g., XLSX, DOCX).
- Add Slack/email notifications on success/failure.
- Swap Gemini with OpenAI or Hugging Face if preferred.

**Add-ons**
- Save uploaded files to Google Drive or S3
- Add auth for secure uploads
- Use charting/dashboard nodes to visualize extracted data
- Integrate with billing/accounting software

**Use Case Examples**

| Scenario | What Happens |
| --- | --- |
| Invoice upload (PDF) | Extracts totals, customer, tax data into a Google Sheet |
| Scanned receipt (image) | OCR + LLM extracts structured data |
| Log file (CSV) | Parses and logs entries into Sheets |

**Common troubleshooting**

| Issue | Possible Cause | Solution |
| --- | --- | --- |
| Webhook not triggered | URL or method mismatch | Use the correct POST URL /uploadDoc |
| Text is blank | OCR failed | Check image quality or Tesseract config |
| Gemini model not returning JSON | Prompt formatting issue | Ensure the prompt ends with a valid JSON schema |
| Sheet not updated | Invalid sheet ID or tab | Double-check sheet credentials and tab name |

**Need Help?**
Need help fine-tuning the Gemini prompt for better field accuracy? Want to extract full tables, multi-page invoices, or convert PDFs to JSON lines? Our automation team at WeblineIndia can help you extend this into a full-blown document automation pipeline.
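To illustrate the "parse and clean" step, here is a hedged n8n Code-node sketch: Gemini often wraps its JSON reply in markdown fences, so strip them before parsing. The $json.text input property is an assumption about how the Gemini node's output is named in your workflow; the field list mirrors the sheet columns from the setup section.

```javascript
const raw = $json.text ?? ""; // the model's reply from the Gemini node (assumed property)

const cleaned = raw
  .replace(/^```(?:json)?\s*/i, "") // leading fence, if any
  .replace(/\s*```$/, "")           // trailing fence
  .trim();

let invoice;
try {
  invoice = JSON.parse(cleaned);
} catch (e) {
  // Surface a clear error instead of writing a broken row to the sheet
  throw new Error(`Gemini did not return valid JSON: ${e.message}`);
}

// Whitelist exactly the columns the Google Sheet expects
const fields = [
  "invoice_id", "invoice_date", "due_date", "customer_name",
  "vendor_name", "subtotal", "tax_total", "total", "currency",
];
return [{ json: Object.fromEntries(fields.map((f) => [f, invoice[f] ?? null])) }];
```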
by Corentin Ribeyre
This template can be used to scan a domain/company with Icypeas. Be sure to have an active account to use this template.

**How it works**
This workflow can be divided into three steps:
1. The workflow initiates with a manual trigger (on clicking 'execute').
2. It connects to your Icypeas account.
3. It performs an HTTP request to scan a domain/company name (see the sketch below).

**Set up steps**
- You will need a working Icypeas account to run the workflow and to get your API key, API secret, and user ID.
- You will need a domain/company name to perform the search.
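As a heavily hedged sketch: Icypeas authenticates API calls with an HMAC signature built from your API key, API secret, and a timestamp. The endpoint path and payload fields below are assumptions for illustration; copy the exact signing scheme and route from your Icypeas dashboard/docs rather than from this sketch.

```javascript
import { createHmac } from "node:crypto";

const API_KEY = "YOUR_ICYPEAS_API_KEY";
const API_SECRET = "YOUR_ICYPEAS_API_SECRET";
const USER_ID = "YOUR_ICYPEAS_USER_ID";

const path = "/api/domain-scan"; // hypothetical path: check the Icypeas docs
const timestamp = new Date().toISOString();

// Sign method + path + timestamp with the API secret (per Icypeas' HMAC scheme)
const signature = createHmac("sha1", API_SECRET)
  .update(`POST${path}${timestamp}`.toLowerCase())
  .digest("hex");

const res = await fetch(`https://app.icypeas.com${path}`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": `${API_KEY}:${signature}`,
    "X-ROCK-TIMESTAMP": timestamp,
  },
  body: JSON.stringify({ domainOrCompany: "example.com", user: USER_ID }), // illustrative fields
});
console.log(await res.json());
```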
by Atta
**What it does**
Customer support calls contain a wealth of valuable feedback and urgent issues, but manually reviewing audio files is inefficient. This workflow acts as an AI assistant for your call log, transforming unstructured audio recordings into structured, actionable data. It provides a clean summary, sentiment analysis, and a list of required actions for every call, eliminating the need for manual listening and ensuring key insights are never missed.

**How it works**
The workflow runs on a schedule to fully automate the call analysis process from start to finish.
1. Fetch new recordings: the workflow triggers on a schedule (e.g., every 5 minutes), searches a designated Google Drive folder for new call recordings, and downloads any new files it finds.
2. Transcribe audio: each audio file is sent to the ElevenLabs API to be converted from speech to a text transcript. The result is then formatted into a conversational, multi-speaker format.
3. AI-powered analysis: the transcript is passed to a Google Gemini node, which is prompted to return a structured JSON object. This JSON contains a complete analysis of the call, including speaker identification (agent_name, client_name), a summary, the client_sentiment, a call_topic, a department_tag, and a list of action_items.
4. Log the results: the complete, structured analysis output from Gemini is appended as a new row in a Google Sheet, creating a centralized log with all the extracted call details and the full transcript.
5. Take action: the workflow uses conditional logic based on the detected sentiment. If a call was negative, an immediate alert containing the call summary and action items is sent to a manager's group on Telegram. If a call was positive, a kudos message is sent to the support team's Telegram channel to celebrate good work.
6. File management: after processing, the original audio file is automatically moved to a separate "Processed" folder in Google Drive to ensure it isn't analyzed again.

**Setup Instructions**
To configure this workflow, you will need to set up your file storage in Google Drive, create a Google Sheet for logging, and configure credentials for all connected services.

Required credentials
- Google: OAuth2 credentials with permission for Google Drive, Google Sheets, and the Google AI (Gemini) APIs.
- ElevenLabs: sign up for an account at ElevenLabs and get your API key. You will add this directly into the HTTP Request node for transcription.
- Telegram: create a bot using the BotFather in Telegram to get your bot token. You will also need the specific chat ID for the managers' channel and the team's channel.

Step-by-step configuration
1. Google Drive: create two folders in your Google Drive: one named "Company - Support Call Recordings" and another named "Processed Recordings". Copy the unique folder ID from the URL of each and paste it into the respective Google Drive nodes.
2. Google Sheets: create a new Google Sheet to log the results. In the first row, create the following headers exactly as written: Recording File, Sentiment, Department, Topic, Agent, Client, Summary, Actions, and Fulltext. Copy the sheet ID from the URL and paste it into the "Log Recording Analysis" (Google Sheets) node.
3. ElevenLabs node: in the "Convert Speech To Text" (HTTP Request) node, make sure the URL is set to the correct ElevenLabs API endpoint for speech-to-text, and add your ElevenLabs API key to the authentication header (a request sketch follows this section).
4. Telegram nodes: in the "Send Alert To Managers" node, enter the chat ID for your managers' group. In the "Send Kudos to Team" node, enter the chat ID for the main team channel.

**How to Adapt the Template**
This workflow is a powerful starting point. Based on your specific needs, you can customize the inputs, the AI analysis, the logging method, and the final actions.

Input method
- **Change file source:** instead of Google Drive, you can adapt the workflow to fetch recordings from other services like Dropbox, OneDrive, or a custom FTP server.
- **Use a webhook:** replace the Schedule Trigger with a Webhook Trigger to process calls in real time as they are added from your call software (if it supports webhooks).

Final actions
- **Create service tickets:** this is a key area for customization. Replace the Telegram nodes with nodes for ticketing systems; for a negative call, you can automatically create a high-priority ticket in Jira, Zendesk, or ServiceNow.
- **Create tasks:** for calls with specific action items, use a node like Asana, Trello, or Todoist to automatically create a task and assign it to the correct team member.
- **Send email notifications:** use the Send Email node to dispatch summaries and alerts to stakeholders who are not on Telegram.

Logging and analysis
- **Log to a database:** instead of Google Sheets, you can use a Postgres, MySQL, or data warehouse node to log the structured data for more advanced business intelligence and dashboarding.
- **Customize the AI prompt:** the prompt in the Google Gemini node is the "brain" of the operation. It specifically instructs the AI to return a JSON object with a predefined structure. To change what data is extracted, you can modify this structure in the prompt. For example, you could add a new key-value pair like "competitor_mentioned": "Name of competitor if mentioned, otherwise null". The current workflow asks the AI to populate a JSON object like this:

    {
      "speaker_identification": {
        "agent": "speaker_id",
        "agent_name": "The agent's name",
        "client": "client_id",
        "client_name": "The client's name"
      },
      "summary": "A concise summary.",
      "client_sentiment": "Positive, Negative, or Neutral",
      "call_topic": "A brief phrase for the topic.",
      "department_tag": "The most relevant department.",
      "action_items": ["A list of actionable tasks."]
    }

- **Change AI or STT service:** you can swap out the Google Gemini node for an OpenAI node, or change the HTTP Request node to use a different transcription service like AssemblyAI or Deepgram.
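For reference, the transcription call from step 2 might look roughly like the hedged Node.js (18+) sketch below. The /v1/speech-to-text endpoint, xi-api-key header, and scribe_v1 model id follow ElevenLabs' public docs at the time of writing; confirm them against the current API reference before relying on this.

```javascript
import { readFile } from "node:fs/promises";

const audio = await readFile("call-recording.mp3");
const form = new FormData();
form.append("file", new Blob([audio], { type: "audio/mpeg" }), "call-recording.mp3");
form.append("model_id", "scribe_v1");
form.append("diarize", "true"); // per-speaker labels, used to build the multi-speaker transcript

const res = await fetch("https://api.elevenlabs.io/v1/speech-to-text", {
  method: "POST",
  headers: { "xi-api-key": "YOUR_ELEVENLABS_API_KEY" },
  body: form,
});
const { text, words } = await res.json();
// `words` carries speaker ids; group consecutive words by speaker to produce
// the "Speaker 1: ... / Speaker 2: ..." format the Gemini prompt expects.
```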
by gotoHuman
**Lead Outreach Agent**

This AI workflow helps you react quickly to new leads with an initial personalized outreach, a great start to your lead nurturing sequence that avoids losing precious leads that could turn into paying customers. Most importantly, it uses gotoHuman so you can review the AI analysis and the AI-generated, editable email draft before it is sent out in your name.

**How it works**
1. We receive a new form submission including the email address and company name of the prospect, and extract the website URL from the email address (see the sketch below). We proceed only for company email addresses.
2. We scrape the website using Firecrawl and summarize it with OpenAI.
3. Our AI agent runs an analysis based on the lead information and documents describing our own company and the defined Ideal Customer Profiles. It also fetches previously approved examples from gotoHuman, so you're effectively creating a self-learning agent. It responds with the analysis and the drafted outreach email.
4. Human approval in gotoHuman, which allows editing the drafted email.
5. We can now send our email, including any edits made during the review, and be sure that we are using high-quality content instead of AI slop.

**How to set up**
- Most importantly, install the gotoHuman node before importing this template! (Just add the node to a blank canvas before importing.)
- Set up your credentials for the different services.
- In gotoHuman, select and create the pre-built review template "Lead Outreach Agent" or import the ID: T873fI1Xli5nt3eh33Rj.
- Select this template in the gotoHuman node.

**Requirements**
You need accounts for:
- gotoHuman (human supervision)
- OpenAI (AI agent)
- Typeform (lead form submissions)
- Firecrawl (website scraping)
- Gmail
- Google Docs (company wiki)

**How to customize**
- Replace the Typeform trigger with any other way you might receive or find new leads.
- Provide the AI sales agent with more context to properly analyze the lead and create better personalized emails. Consider adding tools that allow the agent to fetch more info about the prospect's company or personal profile, or to find out more about your specific product/service offerings and what your sales pitches look like.
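A minimal n8n Code-node sketch of the "extract website and gate on company email" step from how-it-works step 1. The free-provider list is illustrative; extend it to suit your lead sources.

```javascript
const email = ($json.email ?? "").toLowerCase().trim();
const domain = email.split("@")[1] ?? "";

// Skip personal mailboxes: we only proceed for company addresses
const freeProviders = ["gmail.com", "yahoo.com", "outlook.com", "hotmail.com", "icloud.com"];
const isCompanyEmail = domain !== "" && !freeProviders.includes(domain);

return [{
  json: {
    ...$json,
    websiteUrl: isCompanyEmail ? `https://${domain}` : null, // handed to Firecrawl
    isCompanyEmail,
  },
}];
```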
by Lucas Peyrin
**How it works**
This workflow automates your initial hiring pipeline by creating an AI-powered CV scanner. It collects job applications through a web form, uses AI to analyze the candidate's CV against your job description, and neatly organizes the results in a Google Sheet.

Here's the step-by-step process:
1. **The application form:** a Form Trigger provides a public web form for candidates to submit their name, email, and CV (as a PDF).
2. **Initial logging:** as soon as an application is submitted, the candidate's name and email are added to a Google Sheet. This ensures every applicant is logged, even if a later step fails.
3. **CV text extraction:** the workflow uses Mistral's OCR model to accurately extract all the text from the uploaded CV PDF.
4. **AI analysis:** the extracted text is sent to Google Gemini. A detailed prompt instructs the AI to act as a hiring assistant, scoring the CV against the specific requirements of your job role and providing a detailed explanation for its score.
5. **Structured output:** a JSON output parser ensures the AI's analysis is returned in a clean, structured format, making the data reliable (see the sketch below).
6. **Final record:** the AI-generated qualification score and explanation are added to the candidate's row in the Google Sheet, giving you a complete, analyzed list of applicants.

**Set up steps**
Setup time: ~15 minutes. You'll need API keys for Mistral and Google AI, and to connect your Google account.

1. Get your Mistral API key: visit the Mistral platform at console.mistral.ai/api-keys and create and copy your API key. In the workflow, go to the Extract CV Text node, click the Credential dropdown, select + Create New Credential, paste your key into the API Key field, and save.
2. Get your Google AI API key: visit Google AI Studio at aistudio.google.com/app/apikey, click "Create API key in new project", and copy the key. In the workflow, go to the Gemini 2.5 Flash Lite node, click the Credential dropdown, select + Create New Credential, paste your key into the API Key field, and save.
3. Connect your Google account: select the Create 'CVs' Spreadsheet node, click the Credential dropdown, and select + Create New Credential to connect your Google account. Repeat this for the Log Candidate Submission and Add CV Analysis nodes, selecting the credential you just created.
4. Create your spreadsheet: click the "play" icon on the Start Here node to run it. This creates a new Google Sheet in your Google Drive named "CVs" with the correct columns.
5. Customize the job role: go to the AI Qualification node. In the Text parameter, find the job_requirements section and replace the example job description with your own. Be as detailed as possible for the best results.
6. Start screening! Activate the workflow using the toggle at the top right, go to the Application Form node, and click the "Open Form URL" button. Fill out the form with a test application and upload a sample CV. Check your Google Sheet to see the AI's analysis appear within moments.
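To make the structured-output step concrete, here is a hedged sketch of a JSON schema the output parser could enforce. The field names (score, explanation) are illustrative assumptions; mirror whatever columns your "CVs" sheet actually uses.

```javascript
// A schema like this makes a malformed model reply fail parsing
// instead of silently writing junk into the candidate's row.
const cvAnalysisSchema = {
  type: "object",
  properties: {
    score: {
      type: "integer",
      minimum: 0,
      maximum: 100,
      description: "How well the CV matches the job_requirements",
    },
    explanation: {
      type: "string",
      description: "Detailed reasoning behind the score, citing the CV",
    },
  },
  required: ["score", "explanation"],
};

export default cvAnalysisSchema;
```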
by Trung Tran
**Decodo Scraper API Workflow Template (n8n Automation: Amazon Book Purchase Report)**

Watch the demo video below:

> This workflow demos how to use the Decodo Scraper API to crawl any public web page (headless JS, device emulation: mobile/desktop/tablet), extract structured product data from the returned HTML, generate a purchase-ready report, and automatically deliver it as a Google Doc + PDF to Slack/Drive.

**Who's it for**
- **Creators / Analysts** who need quick product lists (books, gadgets, etc.) with prices/ratings.
- **Ops & Marketing teams** building weekly "top picks" reports.
- **Engineers** validating the Decodo Scraper API + LLM extraction pattern before scaling.

**How it works / What it does**
1. Trigger: manually run the workflow.
2. Edit Fields (manual): provide inputs: targetUrl (e.g., an Amazon category/search/listing page), deviceType (desktop | mobile | tablet), and optionally maxItems, notes, reportTitle, reportOwner.
3. Scraper API Request (HTTP Request, POST): calls the Decodo Scraper API with the URL to crawl, headless JS enabled, device emulation (UA + viewport), and optional waitFor / executeJS to ensure late-loading content is captured.
4. HTML Response Parser (Code/Function or HTML node): pulls the HTML string from the Decodo response and normalizes it (strips scripts/styles, collapses whitespace); see the sketch after this section.
5. Product Analyzer Agent (LLM + Structured Output Parser): prompts an LLM to extract structured "book" objects from the HTML. The Structured Output Parser enforces a strict JSON schema and drops malformed items.
6. Build Book Purchase Report (Code/LLM): converts the JSON array into a Markdown (or HTML) report with an executive summary (top picks, average price/rating), a table of items (rank, title, author, price, rating, link), a "recommended to buy" shortlist (rules configurable), and notes/owner/timestamp.
7. Configure Google Drive folder (manual): choose/create a Drive folder for output artifacts.
8. Create document file (Google Docs API): creates a Doc from the generated Markdown/HTML.
9. Convert document to PDF (Google Drive export): exports the Doc to PDF.
10. Upload report to Slack: sends the PDF (and/or Doc link) to a chosen Slack channel with a short summary.

**How to set up**

Prerequisites
- **n8n** (self-hosted or Cloud)
- **Decodo Scraper API** key
- **OpenAI (or compatible) API key** for the Analyzer Agent
- **Google Drive/Docs** credentials (OAuth2)
- **Slack** bot/user token (files:write, chat:write)

Environment variables (recommended)
- DECODO_API_KEY
- OPENAI_API_KEY
- DRIVE_FOLDER_ID (optional default)
- SLACK_CHANNEL_ID

Nodes configuration (high level)
- Edit Fields (Set node)
- Scraper API Request (HTTP Request, POST)
- HTML Response Parser (Code node)
- Product Analyzer Agent
- Build Book Purchase Report (Code/LLM)
- Create Document File
- Convert to PDF
- Upload to Slack

**Requirements**
- **Decodo:** active API key and endpoint access. Be mindful of concurrency/rate limits.
- **Model:** GPT-4o/4.1-mini or similar for reliable structured extraction.
- **Google:** OAuth client (Docs/Drive scopes). Ensure n8n can write to the target folder.
- **Slack:** bot token with files:write + chat:write.

**How to customize the workflow**
- **Target site:** change targetUrl to any public page (category, search, or listing). For other domains (not Amazon), tweak the LLM guidance (e.g., price/label patterns).
- **Device emulation:** switch deviceType to mobile to fetch mobile-optimized markup (often simpler DOMs).
- **Late-loading pages:** adjust waitFor.selector or use waitUntil: "networkidle" (if supported) to ensure full content loads.
- **Client-side JS:** extend executeJS if you need to interact (scroll, click "next", expand sections). You can also loop over pagination by iterating URLs.
- **Extraction schema:** add fields (e.g., discount_percent, bestseller_badge, prime_eligible) and update the Structured Output schema accordingly.
- **Filtering rules:** modify the recommendation logic (e.g., minimum ratings count, price bands, languages).
- **Report branding:** add a logo, cover page, and footer with company info; switch to HTML + inline CSS for richer Docs formatting.
- **Destinations:** besides Slack and Drive, add Email, Notion, Confluence, or a database sink.
- **Scheduling:** add a Cron trigger for weekly/monthly auto-reports.
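Here is a hedged n8n Code-node sketch of the HTML Response Parser step (step 4 above): strip scripts/styles and collapse whitespace so the analyzer model sees a smaller, cleaner document. The response property names ($json.html / $json.data) are assumptions; inspect the HTTP Request node's output to find where Decodo actually puts the HTML.

```javascript
const html = $json.html ?? $json.data ?? ""; // assumed response property names

const normalized = html
  .replace(/<script[\s\S]*?<\/script>/gi, "") // drop inline JS
  .replace(/<style[\s\S]*?<\/style>/gi, "")   // drop inline CSS
  .replace(/<!--[\s\S]*?-->/g, "")            // drop HTML comments
  .replace(/\s+/g, " ")                       // collapse whitespace
  .trim();

// Cap the payload so it fits the analyzer model's context window
return [{ json: { html: normalized.slice(0, 150_000) } }];
```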
by Khairul Muhtadin
Automatically extract job listings from any website URL, format them with AI, and publish directly to WordPress. Just send a URL via Telegram, and watch as the workflow scrapes the job details, enhances the content with GPT, and creates a polished post on your site.

**Why Use Job Repost?**
- **Save countless hours:** automatically extract, process, and publish job offers from any website, freeing your time from repetitive tasks.
- **Eliminate human errors:** say goodbye to typos and missed fields; every job post is validated before going live.
- **Boost engagement:** fresh, well-structured job listings attract more candidates, improving your site's reach and authority.
- **Stay ahead:** leveraging AI with GPT means your content is not just automated but polished and SEO-friendly, the digital assistant you never knew you needed.

**Perfect For**
- **Job board managers** who want to aggregate listings from multiple sources with minimal effort
- **Recruiters & HR teams** who need to streamline job posting workflows without technical hassles
- **Content creators & marketers** looking to automate publishing while maintaining style and SEO standards

**How It Works**

| Step | Process | Description |
|------|---------|-------------|
| 1 | Trigger | Send a job URL via Telegram bot to initiate the process |
| 2 | Extract | Firecrawl API scrapes and extracts clean content from the provided URL |
| 3 | Process | Job data is extracted via AI, text is split and cleaned, and job categories and types are mapped to your system (see the sketch below) |
| 4 | Smart Logic | GPT crafts formatted job posts; intelligent validation ensures all key data is present, and default values fill in the blanks if necessary |
| 5 | Output | Posts are automatically published to WordPress with company logos uploaded, and success or error notifications are sent via Telegram |
| 6 | Storage | Uses the Supabase vector store for managing document embeddings, ensuring quick lookup and reference compliance |

**Quick Setup**
1. Import the provided JSON file into your n8n instance.
2. Add credentials: Firecrawl API key, Google Drive OAuth2 (for RAG storage), OpenAI API, WordPress API, Telegram API, Supabase.
3. Customize: Telegram bot token, WordPress URLs, default images and category mappings if needed.
4. Update: URLs and API tokens where placeholders are used.
5. Test: send a job URL to your Telegram bot to verify accurate extraction and posting.

**You'll Need**
- An active n8n instance
- Firecrawl account with API access
- Google Drive account for RAG document storage
- OpenAI account with GPT API access
- WordPress site with the autojob plugin and API enabled
- Telegram bot for URL submission and notifications
- Supabase account for vector store management

**Level Up Ideas**
- Add multi-language support to expand global reach
- Support batch URL processing for multiple jobs at once
- Integrate Slack or email notifications for wider team alerts
- Use more AI nodes to summarize or rate job offers for quality control
- Schedule periodic cleanup of the vector store for performance optimization
- Add analytics tracking for published job performance

**Nodes Used**
Core components:
- **Firecrawl HTTP Request** (web scraping and content extraction)
- **Google Drive** (RAG document storage)
- **Supabase Vector Store**
- **OpenAI** (embeddings, GPT extraction)
- **Code nodes** for mapping categories
- **Telegram Trigger & Message**
- **HTTP Request** (for the WordPress API and image uploads)

Made by: Khaisa Studio
Tags: automation, recruitment, job-posting, wordpress, AI, web-scraping, firecrawl
Category: Human Resources, Recruitment, WordPress, Scraping
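As a hedged sketch of the category-mapping Code node (step 3 in the table): translate the free-text category the AI extracted into your WordPress term IDs. The mapping table, the $json.job_category input property, and the fallback ID are placeholder assumptions; replace them with your site's real taxonomy.

```javascript
const CATEGORY_MAP = {
  engineering: 12,
  marketing: 15,
  design: 18,
  sales: 21,
};
const DEFAULT_CATEGORY_ID = 1; // used when the AI's label doesn't match anything

const extracted = ($json.job_category ?? "").toLowerCase().trim();
const categoryId = CATEGORY_MAP[extracted] ?? DEFAULT_CATEGORY_ID;

return [{ json: { ...$json, wp_category_id: categoryId } }];
```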
Need a custom? Contact me on LinkedIn or the web.