by ConvertAPI
**Who is this for?**
Developers and organizations that need to convert PDF files to PDF/A for long-term archiving.

**What problem is this workflow solving?**
The file format conversion problem.

**What this workflow does**
Downloads the PDF file from the web.
Converts the PDF file to PDF/A.
Stores the PDF/A file in the local file system.

**How to customize this workflow to your needs**
Open the HTTP Request node.
Adjust the URL parameter (all endpoints can be found here).
Add your secret to the Query Auth account parameter. Please create a ConvertAPI account to get an authentication secret.
Optionally, additional Body Parameters can be added for the converter (see the sketch below).
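As a rough illustration of what the HTTP Request node sends, here is a minimal JavaScript sketch. The `pdf/to/pdfa` endpoint path, the `Secret` query parameter, and the request/response shapes are assumptions to verify against the ConvertAPI documentation.

```javascript
// Minimal sketch of the ConvertAPI call made by the HTTP Request node.
// Endpoint path and parameter names are assumptions -- verify them in the ConvertAPI docs.
const CONVERTAPI_SECRET = 'your-secret';              // from your ConvertAPI account
const sourcePdfUrl = 'https://example.com/file.pdf';  // hypothetical source file

async function convertPdfToPdfA() {
  const response = await fetch(
    `https://v2.convertapi.com/convert/pdf/to/pdfa?Secret=${CONVERTAPI_SECRET}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      // Body Parameters map to the converter options mentioned above.
      body: JSON.stringify({
        Parameters: [
          { Name: 'File', FileValue: { Url: sourcePdfUrl } },
        ],
      }),
    }
  );
  const result = await response.json();
  // result.Files[0].Url would point at the generated PDF/A file (assumed response shape).
  return result;
}
```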
by Yaron Been
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow automatically tracks local search trends and geographic-specific search patterns to optimize local SEO and marketing strategies. It saves you time by eliminating the need to manually research local search behavior and provides location-based insights for targeted marketing campaigns.

**Overview**
This workflow automatically scrapes local search results, geographic search trends, and location-based query data to understand regional search behavior and local market opportunities. It uses Bright Data to access location-specific search data and AI to intelligently analyze local trends and optimization opportunities.

**Tools Used**
**n8n**: The automation platform that orchestrates the workflow
**Bright Data**: For scraping location-based search data without being blocked
**OpenAI**: AI agent for intelligent local search trend analysis
**Google Sheets**: For storing local search trend data and geographic insights

**How to Install**
Import the Workflow: Download the .json file and import it into your n8n instance
Configure Bright Data: Add your Bright Data credentials to the MCP Client node
Set Up OpenAI: Configure your OpenAI API credentials
Configure Google Sheets: Connect your Google Sheets account and set up your local trends tracking spreadsheet
Customize: Define target locations and local search monitoring parameters

**Use Cases**
**Local SEO**: Optimize for location-specific search queries and trends
**Regional Marketing**: Tailor campaigns to local search behavior and preferences
**Multi-location Businesses**: Track search trends across different geographic markets
**Market Expansion**: Identify new geographic opportunities based on search trends

**Connect with Me**
**Website**: https://www.nofluff.online
**YouTube**: https://www.youtube.com/@YaronBeen/videos
**LinkedIn**: https://www.linkedin.com/in/yaronbeen/
**Get Bright Data**: https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)

#n8n #automation #localsearch #localseo #searchtrends #brightdata #webscraping #geographictrends #n8nworkflow #workflow #nocode #localmarketing #regionalseo #locationbased #localbusiness #searchgeography #localtrends #geoseo #localdata #regionalmarketing #localanalytics #geographicseo #localsearchdata #localoptimization #regionalsearch #locationmarketing #localsearchtrends #geomarketing #localinsights #regionalsearch
by Angel Menendez
Introducing the Qualys Scan Slack Report Subworkflow—a robust solution designed to automate the generation and retrieval of security reports from the Qualys API. This workflow is a subworkflow of the Qualys Slack Shortcut Bot workflow. It is triggered when someone fills out the modal popup in Slack generated by the Qualys Slack Shortcut Bot.

When deploying this workflow, use the Demo Data node to simulate the data that is input via the Execute Workflow Trigger. That data flows into the Global Variables node, which is then referenced by the rest of the workflow. The workflow includes nodes to fetch the report IDs, launch a report, check the report status periodically, and download the completed report, which is then posted to Slack for easy access.

For Security Operations Centers (SOCs), this workflow provides significant benefits by automating tedious tasks, ensuring timely updates, and facilitating efficient data handling.

**How It Works**
**Fetch Report Templates:** The "Fetch Report IDs" node retrieves a list of available report templates from Qualys. This automated retrieval saves time and ensures that the latest templates are used, enhancing the accuracy and relevance of reports.
**Convert XML to JSON:** The response is converted to JSON format for easier manipulation. This step simplifies data handling, making it easier for SOC analysts to work with the data and integrate it into other tools or processes.
**Launch Report:** A POST request is sent to Qualys to initiate report generation using specified parameters like template ID and report title. Automating this step ensures consistency and reduces the chance of human error, improving the reliability of the reports generated.
**Loop and Check Status:** The workflow loops every minute to check if the report generation is complete (see the sketch after this section). Continuous monitoring automates the waiting process, freeing up SOC analysts to focus on higher-priority tasks while ensuring they are promptly notified when reports are ready.
**Download Report:** Once the report is ready, it is downloaded from Qualys. Automated downloading ensures that the latest data is always available without manual intervention, improving efficiency.
**Post to Slack:** The final report is posted to a designated Slack channel for quick access. This integration with Slack ensures that the team can promptly access and review the reports, facilitating swift action and decision-making.

**Get Started**
Ensure your Slack and Qualys integrations are properly set up.
Customize the workflow to fit your specific reporting needs.
Link to parent workflow
Link to Vulnerability Scan Trigger

Need help? Join the discussion on our Forum or check out resources on Discord!

Deploy this workflow to streamline your security report generation process, improve response times, and enhance the efficiency of your security operations.
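The following JavaScript sketch shows the launch → poll → download pattern the workflow automates. The `/api/2.0/fo/report/` endpoint, `action` parameters, and XML element names are assumptions based on common Qualys API usage; confirm them against your Qualys API guide before relying on them.

```javascript
// Sketch of the launch -> poll -> download pattern behind the Qualys nodes.
// Endpoint paths, parameters, and XML fields are assumptions -- check your Qualys API guide.
const QUALYS_BASE = 'https://qualysapi.qualys.com/api/2.0/fo/report/'; // hypothetical API host
const AUTH = 'Basic ' + Buffer.from('user:password').toString('base64');
const headers = { Authorization: AUTH, 'X-Requested-With': 'n8n' };

async function launchAndFetchReport(templateId, title) {
  // 1. Launch the report (the "Launch Report" step).
  const launch = await fetch(
    `${QUALYS_BASE}?action=launch&template_id=${templateId}` +
      `&report_title=${encodeURIComponent(title)}&output_format=pdf`,
    { method: 'POST', headers }
  );
  const launchXml = await launch.text(); // Qualys answers in XML; the workflow converts it to JSON
  const reportId = /<VALUE>(\d+)<\/VALUE>/.exec(launchXml)?.[1];

  // 2. Poll every minute until the report is finished (the "Loop and Check Status" step).
  for (let attempt = 0; attempt < 60; attempt++) {
    const status = await fetch(`${QUALYS_BASE}?action=list&id=${reportId}`, { headers });
    const statusXml = await status.text();
    if (/<STATE>Finished<\/STATE>/.test(statusXml)) break;
    await new Promise((resolve) => setTimeout(resolve, 60_000));
  }

  // 3. Download the finished report (the "Download Report" step), ready to post to Slack.
  const report = await fetch(`${QUALYS_BASE}?action=fetch&id=${reportId}`, { headers });
  return Buffer.from(await report.arrayBuffer());
}
```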
by Jenny
Vector Database as a Big Data Analysis Tool for AI Agents

Workflows from the webinar "Build production-ready AI Agents with Qdrant and n8n". This series of workflows shows how to build big data analysis tools for production-ready AI agents with the help of vector databases. These pipelines are adaptable to any dataset of images, and hence to many production use cases:

Uploading (image) datasets to Qdrant
Set up meta-variables for anomaly detection in Qdrant
Anomaly detection tool
KNN classifier tool

**For anomaly detection**
1. This is the first pipeline, to upload an image dataset to Qdrant.
2. The second pipeline sets up the cluster (class) centres and cluster (class) threshold scores needed for anomaly detection.
3. The third is the anomaly detection tool, which takes any image as input and uses all the preparatory work done with Qdrant to detect whether it is an anomaly relative to the uploaded dataset.

**For KNN (k nearest neighbours) classification**
1. This is the first pipeline, to upload an image dataset to Qdrant.
2. The second is the KNN classifier tool, which takes any image as input and classifies it against the dataset uploaded to Qdrant.

**To recreate both**
You'll have to upload the crops and lands datasets from Kaggle to your own Google Cloud Storage bucket, and re-create the APIs/connections to Qdrant Cloud (you can use a Free Tier cluster), the Voyage AI API and Google Cloud Storage.

**[This workflow] Batch Uploading Images Dataset to Qdrant**
This template imports dataset images from Google Cloud Storage, creates Voyage AI embeddings for them in batches, and uploads them to Qdrant, also in batches. In this particular template, we work with the crops dataset. However, it's analogous to uploading the lands dataset, and in general it's adaptable to any dataset consisting of image URLs (as the following pipelines are).

First, check for an existing Qdrant collection to use; otherwise, create it here. Additionally, when creating the collection, we'll create a payload index, which is required for a particular type of Qdrant request we will use later.
Next, import all (dataset) images from Google Cloud Storage but keep only non-tomato-related ones (for anomaly detection testing).
Create (per batch) embeddings for all imported images using the Voyage AI multimodal embeddings API.
Finally, upload the resulting embeddings and image descriptors to Qdrant via batch upload (see the sketch after this section).
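Below is a minimal JavaScript sketch of one batch iteration: embedding a batch of image URLs with Voyage AI and upserting the results into Qdrant. The Voyage endpoint, model name, payload field names, and cluster URL are assumptions; check the Voyage AI and Qdrant documentation and your own collection setup.

```javascript
// Sketch of one batch iteration: embed image URLs with Voyage AI, then batch-upsert to Qdrant.
// Endpoint paths, model name, and payload fields are assumptions -- verify in the Voyage AI / Qdrant docs.
const { randomUUID } = require('crypto');
const VOYAGE_KEY = 'your-voyage-key';
const QDRANT_URL = 'https://your-cluster.qdrant.io:6333'; // hypothetical Qdrant Cloud URL
const QDRANT_KEY = 'your-qdrant-key';
const COLLECTION = 'crops';                                // collection created in the first step

async function uploadBatch(imageUrls) {
  // 1. Multimodal embeddings for the whole batch.
  const embedRes = await fetch('https://api.voyageai.com/v1/multimodalembeddings', {
    method: 'POST',
    headers: { Authorization: `Bearer ${VOYAGE_KEY}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'voyage-multimodal-3', // assumed model name
      inputs: imageUrls.map((url) => ({ content: [{ type: 'image_url', image_url: url }] })),
    }),
  });
  const { data } = await embedRes.json();

  // 2. Batch upsert into Qdrant, keeping the image URL as the payload (the "image descriptor").
  const points = data.map((d, i) => ({
    id: randomUUID(),
    vector: d.embedding,
    payload: { image_url: imageUrls[i] }, // field that the payload index could be built on (assumed)
  }));
  await fetch(`${QDRANT_URL}/collections/${COLLECTION}/points?wait=true`, {
    method: 'PUT',
    headers: { 'api-key': QDRANT_KEY, 'Content-Type': 'application/json' },
    body: JSON.stringify({ points }),
  });
}
```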
by darrell_tw
This workflow automates the process of fetching agricultural transaction data from the Taiwan Agricultural Products Open Data Platform and storing it in a Google Sheets document for further analysis.

**Key Features**
Manual Trigger: Allows manual execution of the workflow to control when data is fetched.
HTTP Request: Sends a request to the Open Data Platform's API to retrieve detailed transaction data, including:
- Pricing (Upper, Middle, Lower, Average)
- Transaction quantities
- Crop and market details
Split Out Node: Processes each record individually, ensuring accurate handling of every data entry.
Google Sheets Integration: Appends the data into a structured Google Sheets document for easy access and analysis.

**Node Configurations**
1. Manual Trigger
   Purpose: Start the workflow manually.
   Configuration: No setup needed.
2. HTTP Request
   Purpose: Fetch agricultural data (see the sketch after this section).
   Configuration:
   - URL: https://data.moa.gov.tw/api/v1/SheepQuotation
   - Query Parameters: Start_time: 2024/12/01, End_time: 2024/12/31, MarketName: 台北二, api_key: <your_api_key>
   - Headers: accept: application/json
3. Split Out
   Purpose: Split the API response data array into individual items.
   Configuration: Field to Split Out: Data
4. Google Sheets
   Purpose: Append the data to Google Sheets.
   Configuration:
   - Operation: Append
   - Document ID: <your_document_id>
   - Sheet Name: Sheet1
   - Mapped Fields: TransDate, TcType, CropCode, CropName, MarketCode, MarketName, Upper_Price, Middle_Price, Lower_Price, Avg_Price, Trans_Quantity

Tip: take advantage of the Curl Import feature, for example:
curl -X GET "https://data.moa.gov.tw/api/v1/AgriProductsTransType/?Start_time=114.01.01&End_time=114.01.01&MarketName=%E5%8F%B0%E5%8C%97%E4%BA%8C" -H "accept: application/json"

See the Agricultural Open Data Platform documentation for details.
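The following JavaScript sketch (in the style of an n8n Code node) mirrors what the HTTP Request and Split Out nodes do with the parameters listed above. The exact response field names are assumptions taken from the mapped columns; verify them against the API response.

```javascript
// Sketch of the HTTP Request + Split Out steps: fetch quotations, then emit one item per record.
// The response field names mirror the mapped columns above; treat them as assumptions to verify.
const API_KEY = 'your-api-key';
const url = new URL('https://data.moa.gov.tw/api/v1/SheepQuotation');
url.search = new URLSearchParams({
  Start_time: '2024/12/01',
  End_time: '2024/12/31',
  MarketName: '台北二',
  api_key: API_KEY,
}).toString();

const res = await fetch(url, { headers: { accept: 'application/json' } });
const body = await res.json();

// "Split Out" on the Data field: one n8n item per transaction record.
const items = (body.Data ?? []).map((record) => ({
  json: {
    TransDate: record.TransDate,
    CropName: record.CropName,
    MarketName: record.MarketName,
    Avg_Price: record.Avg_Price,
    Trans_Quantity: record.Trans_Quantity,
  },
}));
return items; // returning an array of { json } objects yields multiple items in n8n
```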
by Davide
**Workflow Overview**
This workflow automates the process of scraping Trustpilot reviews, extracting key details, analyzing sentiment, and saving the results to Google Sheets. It uses OpenAI for sentiment analysis and HTML parsing for review extraction.

**How It Works**

1. Scrape Trustpilot Reviews
**HTTP Request**: Fetches review pages from Trustpilot (https://it.trustpilot.com/review/{{company_id}}) and paginates through pages (up to the max_page limit); see the sketch after this section.
**HTML Parsing**: Extracts review URLs using CSS selectors and splits the URLs into individual review links.

2. Extract Review Details
**Information Extractor**: Uses DeepSeek to extract structured data from the review:
- Author: Name of the reviewer.
- Rating: Numeric rating (1-5).
- Date: Review date in YYYY-MM-DD format.
- Title: Review title.
- Text: Full review text.
- Total Reviews: Number of reviews by the user.
- Country: Reviewer's country (2-letter code).

3. Sentiment Analysis
**Sentiment Analysis Node**: Uses OpenAI to classify the review text as Positive, Neutral, or Negative. Example output:
{ "category": "Positive", "confidence": 0.95 }

4. Save to Google Sheets
**Google Sheets Node**: Appends or updates the extracted data in a Google Sheet.

**Set Up Steps**

1. Configure Trustpilot Scraping
**Edit Fields1 Node**: Set company_id to the Trustpilot company name and max_page to limit the number of pages scraped.

2. Configure Google Sheets
**Google Sheets Node**: Update the documentId with your Google Sheet ID and ensure the sheet has the required columns (Id, Data, Nome, etc.).

3. Configure OpenAI
**OpenAI Chat Model Node**: Add your OpenAI API key.
**Sentiment Analysis Node**: Ensure the categories match your desired sentiment labels (Positive, Neutral, Negative).

**Key Components**
**Nodes**:
- HTTP Request/HTML: Scrape and parse Trustpilot reviews.
- Information Extractor: Extract structured review data using DeepSeek.
- Sentiment Analysis: Classify review sentiment.
- Google Sheets: Save and update review data.
**Credentials**:
- OpenAI API key.
- DeepSeek API key.
- Google Sheets OAuth2.
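Here is a minimal JavaScript sketch of the scrape-and-paginate step. The workflow itself uses an HTML node with CSS selectors; the regex, the `company_id` slug, and the `href` pattern below are illustrative assumptions only.

```javascript
// Sketch of pagination + review-link extraction (the workflow uses an HTML node with CSS selectors;
// the regex and href pattern here are assumptions for illustration).
const company_id = 'example.com'; // hypothetical Trustpilot company slug
const max_page = 3;

const reviewUrls = [];
for (let page = 1; page <= max_page; page++) {
  const res = await fetch(`https://it.trustpilot.com/review/${company_id}?page=${page}`);
  const html = await res.text();
  // Review permalinks look like /reviews/<id>; collect them into individual links.
  for (const match of html.matchAll(/href="(\/reviews\/[a-f0-9]+)"/g)) {
    reviewUrls.push(`https://it.trustpilot.com${match[1]}`);
  }
}
return [...new Set(reviewUrls)].map((url) => ({ json: { url } })); // one n8n item per review link
```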
by tanaypant
This workflow automatically follows the steps in a custom incident response playbook: it manages incidents in PagerDuty, creates Jira tickets, and notifies the on-call team in Mattermost. It consists of three sub-workflows, each automating specific steps in the playbook.

Read more about this use case and learn how to set up the workflows step-by-step in the blog tutorial How to automate every step of an incident response workflow.

**Prerequisites**
A PagerDuty account and credentials
A Mattermost account and credentials
A Jira account and credentials

**Nodes**
Webhook nodes trigger the workflows when an incident is created in PagerDuty, and when the incident is acknowledged and resolved (see the sketch after this section).
Mattermost nodes create an auxiliary channel for the on-call team to discuss the incident, with buttons to acknowledge the incident and mark it as resolved.
PagerDuty nodes update the status of the incident.
Jira nodes create an issue about the incident and update its status when it's resolved.
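As a rough illustration, this n8n Code-node-style sketch routes an incoming PagerDuty webhook event to the matching sub-workflow. The event type names and field paths follow PagerDuty's v3 webhook conventions and are assumptions to verify against your own webhook subscription.

```javascript
// Sketch of routing a Webhook node's payload to the right sub-workflow.
// Event type names and field paths are assumptions based on PagerDuty v3 webhooks.
const payload = $json.body ?? $json;          // Webhook node output (assumed shape)
const eventType = payload.event?.event_type;  // e.g. "incident.triggered"
const incident = payload.event?.data;

switch (eventType) {
  case 'incident.triggered':
    // -> sub-workflow 1: create the Mattermost channel and the Jira issue
    return [{ json: { route: 'created', incidentId: incident?.id, title: incident?.title } }];
  case 'incident.acknowledged':
    // -> sub-workflow 2: update the PagerDuty/Jira status, post the acknowledgement to Mattermost
    return [{ json: { route: 'acknowledged', incidentId: incident?.id } }];
  case 'incident.resolved':
    // -> sub-workflow 3: resolve the Jira issue and close out the channel
    return [{ json: { route: 'resolved', incidentId: incident?.id } }];
  default:
    return [{ json: { route: 'ignored', eventType } }];
}
```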
by Jonathan
This is the first of 4 workflows for a Mattermost Standup Bot. This workflow creates a default configuration file. You can set the default configuration in the Set node (Use Default Config); the values are:

config.slashCmdToken - The token Mattermost provides when you make a new Slash Command
config.mattermostBaseUrl - The base URL for your Mattermost instance
config.botUserToken - The User token for your Mattermost bot
config.n8nWebhookUrl - The URL for your "Action from MM" webhook in the "Standup Bot - Worker" workflow
config.botUserId - The UserID for your Mattermost Bot user

The config file is saved under /home/node/.n8n/standup-bot-config.json. This workflow only needs to be run once manually as part of the setup.
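For orientation, here is a JavaScript sketch of what ends up on disk. Only the key names come from the list above; the sample values are placeholders, and whether the keys are nested under a top-level `config` property depends on the workflow's own file-writing setup.

```javascript
// Sketch of the saved configuration file; values are placeholders, key names come from the list above.
const fs = require('fs');

const config = {
  slashCmdToken: 'token-from-mattermost-slash-command',
  mattermostBaseUrl: 'https://mattermost.example.com',
  botUserToken: 'bot-user-access-token',
  n8nWebhookUrl: 'https://n8n.example.com/webhook/action-from-mm',
  botUserId: 'bot-user-id',
};

// Whether the file nests these keys under a top-level "config" property depends on the
// Set node / file-write setup -- check the workflow before relying on this exact shape.
fs.writeFileSync('/home/node/.n8n/standup-bot-config.json', JSON.stringify(config, null, 2));
```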
by Don Jayamaha Jr
Instantly access NFT metadata, collections, traits, contracts, and ownership details from OpenSea! This workflow integrates GPT-4o-mini AI, the OpenSea API, and n8n automation to provide structured NFT data for traders, collectors, and investors.

**How It Works**
Receives user queries via Telegram, webhooks, or another connected interface.
Determines the correct API tool based on the request (e.g., user profile, NFT metadata, contract details).
Retrieves data from the OpenSea API (requires an API key); see the sketch after this section.
Processes the information using an AI-powered NFT insights engine.
Returns structured insights in an easy-to-read format for quick decision-making.

**What You Can Do with This Agent**
🔹 Retrieve OpenSea User Profiles → Get user bio, links, and profile info.
🔹 Fetch NFT Collection Details → Get collection metadata, traits, fees, and contract info.
🔹 Analyze NFT Metadata → Retrieve ownership, rarity, and trait-based pricing.
🔹 Monitor NFTs Owned by a Wallet → Track all NFTs under a specific account.
🔹 Retrieve Smart Contract Data → Get blockchain contract details for an NFT collection.
🔹 Identify Valuable Traits → Fetch NFT trait insights and rarity scores.

**Example Queries You Can Use**
✅ "Get OpenSea profile for 0xA5f49655E6814d9262fb656d92f17D7874d5Ac7E."
✅ "Retrieve details for the 'Azuki' NFT collection."
✅ "Fetch metadata for NFT #5678 from 'Bored Ape Yacht Club'."
✅ "Show all NFTs owned by 0x123... on Ethereum."
✅ "Get contract details for NFT collection 'CloneX'."

**Available API Tools & Endpoints**
1️⃣ Get OpenSea Account Profile → /api/v2/accounts/{address_or_username} (Retrieve user bio, links, and image)
2️⃣ Get NFT Collection Details → /api/v2/collections/{collection_slug} (Get collection-wide metadata)
3️⃣ Get NFT Metadata → /api/v2/chain/{chain}/contract/{address}/nfts/{identifier} (Retrieve individual NFT details)
4️⃣ Get NFTs Owned by Account → /api/v2/chain/{chain}/account/{address}/nfts (List all NFTs owned by a wallet)
5️⃣ Get NFTs by Collection → /api/v2/collection/{collection_slug}/nfts (Retrieve all NFTs from a specific collection)
6️⃣ Get NFTs by Contract → /api/v2/chain/{chain}/contract/{address}/nfts (Retrieve all NFTs under a contract)
7️⃣ Get Payment Token Details → /api/v2/chain/{chain}/payment_token/{address} (Fetch info on payment tokens used in NFT transactions)
8️⃣ Get NFT Traits → /api/v2/traits/{collection_slug} (Retrieve collection-wide trait data)

**Set Up Steps**
Get an OpenSea API Key: Sign up at OpenSea API and request an API key.
Configure API Credentials in n8n: Add your OpenSea API key under HTTP Header Authentication.
Connect the Workflow to Telegram, Slack, or a Database (Optional): Use n8n integrations to send alerts to Telegram or Slack, or save results to Google Sheets, Notion, etc.
Deploy and Test: Send a query (e.g., "Azuki latest sales") and receive instant NFT market insights!

Unlock powerful NFT analytics with AI-powered OpenSea insights—start now!
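Here is a minimal JavaScript sketch of one of the tool calls above, using the collection-details endpoint from the list. The base URL and the `x-api-key` header follow common OpenSea usage but should be verified in the OpenSea API documentation.

```javascript
// Sketch of one tool call: fetch collection details via the endpoint listed above.
// Base URL and x-api-key header are assumptions -- verify them in the OpenSea API docs.
const OPENSEA_API_KEY = 'your-opensea-key';

async function getCollection(collectionSlug) {
  const res = await fetch(`https://api.opensea.io/api/v2/collections/${collectionSlug}`, {
    headers: { accept: 'application/json', 'x-api-key': OPENSEA_API_KEY },
  });
  if (!res.ok) throw new Error(`OpenSea request failed: ${res.status}`);
  return res.json(); // collection metadata, traits, fees, contract info
}

// Example: data the agent would summarize for "Retrieve details for the 'Azuki' NFT collection."
const azuki = await getCollection('azuki');
```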
by WeblineIndia
This workflow streamlines the process of creating events in Google Calendar using event data stored in a Google Sheet.

The process begins by retrieving the latest event entry from Google Sheets, ensuring only the most recent event details are processed. Once fetched, a Function node formats the event date to align with Google Calendar's required format, ensuring consistency and preventing date-related errors. After formatting, the structured event details are sent to Google Calendar, where an event is created with essential information such as the event title (summary), description, date, and location. Additionally, the workflow allows customization by setting the event's status as either "Busy" or "Available", helping attendees manage their schedules. A background color can also be assigned for better visibility and categorization.

By automating this process, you eliminate the need for manual event creation, ensuring seamless synchronization between Google Sheets and Google Calendar. This improves efficiency, accuracy, and productivity, making event management effortless.

**Prerequisites**
Before setting up this workflow, ensure the following:
You have an active Google account connected to Google Sheets and Google Calendar.
The Google Sheets API and Google Calendar API are enabled in the Google Cloud Console.
n8n has the necessary OAuth2 authentication configured for both Google Sheets and Google Calendar.
Your Google Sheet has columns for event details (event name, description, location, date, etc.), for example:

| Event Name  | Event Description | Event Start Date | Location |
|-------------|-------------------|------------------|----------|
| Birthday    | Celebration       | 27-Mar-1989      | City     |
| Anniversary | Celebration       | 10-Jun-2015      | City     |

**Customization Options**
Modify the Google Sheets trigger to track updates in specific columns.
Adjust the data formatting function to support:
- Different date/time formats
- Time zone settings
- Custom event colors
- Attendee invitations

**Steps**

Step 1: Add the Google Sheets Trigger Node
Click "Add Node" and search for Google Sheets.
Select "Google Sheets Trigger" and add it to the workflow.
Authenticate using your Google account (select an existing account if already authenticated).
Select the Spreadsheet and Sheet Name to monitor.
Set the Trigger Event to "Row Added".
Click "Execute Node" to test the connection.
Click "Save".

Step 2: Process Data with Function Node
Click "Add Node" and search for Function.
Add the Function Node and connect it to the Google Sheets Trigger Node.
In the function editor, write a script to extract and format the data (see the sketch after this section).
Ensure the required fields (title, location, date) are properly structured.
Click "Execute Node" to verify the formatted output.
Click "Save".

Step 3: Add the Google Calendar Node
Click "Add Node" and search for Google Calendar.
Select the "Create Event" operation.
Authenticate with Google Calendar.
Map the required fields: Title, Description, Location, Start time.
Optional: Set Event Status and Event Colors.
Click "Execute Node" to test event creation.
Click "Save".

Step 4: Final Steps
Connect all nodes in sequence (Google Sheets Trigger → Function Node → Google Calendar Node).
Test the workflow by adding a sample row in Google Sheets.
Verify that the event is created in Google Calendar with the correct title, description, date, and location.

**About WeblineIndia**
This workflow was built by the AI development team at WeblineIndia. We help businesses automate processes, reduce repetitive work, and scale faster. Need something custom? You can hire AI developers to build workflows tailored to your needs.
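The JavaScript sketch below illustrates the Function node from Step 2: it reshapes a sheet row like the sample table into the fields the Google Calendar node expects. The column names come from the table above; the start time, one-hour duration, and UTC time zone are assumptions you would adjust.

```javascript
// Sketch of the Step 2 Function node: reshape a sheet row into Google Calendar event fields.
// Column names match the sample table above; time, duration, and time zone are assumptions.
const row = $json; // e.g. { "Event Name": "Birthday", "Event Description": "Celebration",
                   //        "Event Start Date": "27-Mar-1989", "Location": "City" }

const months = { Jan: 0, Feb: 1, Mar: 2, Apr: 3, May: 4, Jun: 5,
                 Jul: 6, Aug: 7, Sep: 8, Oct: 9, Nov: 10, Dec: 11 };

// "27-Mar-1989" -> Date -> ISO strings that the Google Calendar node can consume.
const [day, mon, year] = row['Event Start Date'].split('-');
const start = new Date(Date.UTC(Number(year), months[mon], Number(day), 9, 0)); // 09:00 UTC assumed
const end = new Date(start.getTime() + 60 * 60 * 1000);                          // 1-hour event assumed

return [{
  json: {
    summary: row['Event Name'],
    description: row['Event Description'],
    location: row['Location'],
    startDateTime: start.toISOString(),
    endDateTime: end.toISOString(),
  },
}];
```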
by Jakkrapat Ampring
**Main Use Case**
This workflow enables automated, AI-assisted replies to users messaging a LINE Official Account, while storing and referencing chat history from Google Sheets to maintain context. It is ideal for businesses or support teams that want to provide smart, personalized customer interactions using AI with memory.

**How It Works (Step-by-Step)**
Connect to the LINE Official Account API: A Webhook listens for incoming messages from users on LINE. When a message is received, it triggers the workflow.
Prepare the Data: An Edit Fields module structures the incoming data (e.g. extracts the user ID and message content). This ensures the data is clean and usable downstream.
Retrieve Chat History: The user's previous conversations are fetched from a Google Sheet. This ensures the AI has memory and can continue conversations contextually.
Prepare Prompt: The retrieved chat history is combined with the new message to form a complete prompt for the AI. Example format: "User previously said X. Now they said Y. How should we respond?"
AI Agent (Google Gemini): The formatted prompt is passed to an AI Agent using the Google Gemini Chat Model. The AI generates a response based on the message plus history, using the agent's Chat Model, Memory, and Output Parser for accurate replies.
Split & Clean History: The conversation history is split into smaller chunks for cleaning and storage. This ensures the Google Sheet remains readable and manageable over time.
Save Chat History: The cleaned new message and AI reply are saved to Google Sheets. This updates the chat history for future context.
Send Reply to LINE: The AI-generated reply is sent back to the user via a POST HTTP Request to the LINE Messaging API (see the sketch after this section).

**How to Set Up**
Prerequisites:
- LINE Official Account
- Google Sheet to store chat history
- Google Gemini API or another AI agent with context memory
- Automation platform (n8n)

Step-by-Step:
Create a Webhook on LINE: Set the webhook URL to your automation service and enable webhook events.
Design Your Google Sheet: Create a sheet with columns: User ID, Timestamp, Message, AI Reply.
Set Up Modules in the Automation Platform:
- Webhook: receives user messages.
- Edit Fields: extract the user ID and message.
- Google Sheets Read: fetch message history.
- Prompt Composer: format the prompt using past history plus the new message.
- AI Agent: connect to Google Gemini for smart replies.
- Split & Clean: clean and chunk history if needed.
- Google Sheets Write: save the updated conversation.
- HTTP Request: send the reply to LINE via the Messaging API.
Test Your Workflow: Send a message from LINE and watch the full loop: receive → process → AI → store → reply.
Deploy & Monitor: Ensure error handling is in place (e.g., for blank messages or failed API calls). Regularly check your Google Sheets for storage limits (if limits are reached, you can increase the number of history rows).

📦 **Benefits**
Maintains context in conversations
Personalized, AI-driven responses
Easy history tracking via Google Sheets
Fully automated and scalable
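As a reference for the final step, here is a minimal JavaScript sketch of the "Send Reply to LINE" HTTP Request using the Messaging API reply endpoint. The channel access token and the field names carrying the reply token and AI output are placeholders to map to your own workflow data.

```javascript
// Sketch of the final "Send Reply to LINE" HTTP Request via the Messaging API reply endpoint.
// The channel access token and the input field names are placeholders.
const LINE_CHANNEL_ACCESS_TOKEN = 'your-channel-access-token';
const replyToken = $json.replyToken; // captured from the original webhook event (assumed field name)
const aiReply = $json.output;        // text produced by the AI Agent node (assumed field name)

await fetch('https://api.line.me/v2/bot/message/reply', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${LINE_CHANNEL_ACCESS_TOKEN}`,
  },
  body: JSON.stringify({
    replyToken,
    messages: [{ type: 'text', text: aiReply }],
  }),
});
return [{ json: { sent: true } }];
```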
by PiAPI
**What does the workflow do?**
This workflow is designed to generate high-quality short videos. It primarily uses the GPT-4o-mini (unofficial), Midjourney (unofficial), and Kling (unofficial) APIs from PiAPI together with the Creatomate API, mainly for content creators, social media bloggers, and short-form video creators. Through this short video workflow, users can quickly validate their creative ideas and focus more on enhancing the quality of their video concepts.

**Who is the workflow for?**
Social Media Influencers: produce content videos based on inspiration efficiently.
Vloggers: generate vlogs based on inspiration.
Educational Creators: explain specific topics via animated short videos or demonstrate a specific imagined scenario to students for enhanced educational impact.
Advertising Agencies: generate short videos based on specific products.
AI Tool Developers: automatically generate product demo videos.

**Step-by-step Instructions**
Fill in the X-API-Key of your PiAPI account in the Basic Params node.
Fill in the scenario for the image and video prompts.
Set up a video template on Creatomate and make an API call in the final node with the core and processing modules provided by Creatomate (see the sketch after this section). Before full video generation, you can first use basic assets in Creatomate for a prototype demo, then integrate with n8n after verifying the expected results.
Fill in your Creatomate account settings following the image guideline.
Click Test Workflow and wait for the generation (within 10-20 min).

In this workflow, we've established a basic structure for image-to-video generation with subtitle integration. You can further enhance it by adding music nodes using either PiAPI's audio models or your preferred music solution. All video elements will ultimately be composited through Creatomate. For best practice, please refer to PiAPI's official API documentation or Creatomate's API documentation to explore more use cases.

**Use Case Params Settings**
style: a children's book cover, ages 6-10. --s 500 --sref 4028286908 --niji 6
character: A gentle girl and a fluffy rabbit explore a sunlit forest together, playing by a sparkling stream
situational_keywords: Butterflies flutter around them as golden sunlight filters through green leaves. Warm and peaceful atmosphere

**Output Video**
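For orientation, here is a JavaScript sketch of the final Creatomate render call that composites the generated assets. The endpoint, template ID, and modification keys are assumptions; match them to your own Creatomate template and the Creatomate API documentation.

```javascript
// Sketch of the final Creatomate call that composites the generated video and subtitles.
// Endpoint, template_id, and modification keys are assumptions -- align them with your template.
const CREATOMATE_API_KEY = 'your-creatomate-key';

const res = await fetch('https://api.creatomate.com/v1/renders', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${CREATOMATE_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    template_id: 'your-template-id', // template set up in Creatomate
    modifications: {
      // Hypothetical element names defined in the template:
      'Video.source': $json.kling_video_url, // video produced via PiAPI's Kling model (assumed field)
      'Subtitle.text': $json.subtitle_text,  // subtitle text generated earlier in the workflow (assumed field)
    },
  }),
});
return [{ json: await res.json() }]; // render job details, including the output URL when finished
```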