by AppStoneLab Technologies LLP
Automated AI Research Assistant: From Query to Polished Report with Jina & Gemini

Turn a single research question into a comprehensive, multi-source report with proper citations. This workflow automates the entire research process by leveraging the web-crawling power of Jina AI and the advanced reasoning capabilities of Google's Gemini models. Simply input your query, and this AI-powered assembly line will search the web, scrape relevant sources, summarize the content, draft a structured research paper, and finally evaluate and polish the report for accuracy and formatting.

✨ Key Features
- 🔎 **Dynamic Web Search**: Kicks off by searching the web with Jina AI based on your initial query.
- 📚 **Multi-Source Content Scraping**: Automatically reads and extracts content from the top 10 search results.
- 🧠 **AI-Powered Summarization**: Uses a Gemini agent to intelligently summarize each webpage, retaining the core information.
- ✍️ **Automated Report Generation**: A specialized "Generator Agent" synthesizes the summarized data into a structured research paper, complete with an executive summary, introduction, discussion, and conclusion.
- ✅ **Citation & Quality Verification**: A final "Evaluator Agent" meticulously checks the generated report for citation accuracy, logical flow, and markdown formatting, delivering a polished final document.
- 📈 **Rate-Limit Ready**: Includes a configurable Wait node to ensure stable execution when dealing with multiple API calls.

📝 What This Workflow Does
This workflow is designed to be your personal research assistant. It addresses the time-consuming process of gathering, reading, and synthesizing information from multiple online sources. Instead of spending hours manually searching, reading, and citing, you can delegate the entire task to this workflow and receive a well-structured, cited report as the final output. It's perfect for students, researchers, content creators, and analysts who need to quickly compile information on any given topic.

⚙️ How It Works (Step-by-Step)
1. Initiate with a Query: The workflow starts when you send your research question or topic to the Chat Trigger node.
2. Search the Web: The user's query is passed to the Jina AI node, which performs a web search and returns the top 10 most relevant URLs.
3. Scrape, Summarize, Repeat: The workflow then loops through each URL:
   - Read Content: The Jina AI node scrapes the full text content from the URL.
   - Summarize: A Summarizer Agent powered by Google Gemini reads the scraped content and the original user query, then generates a concise summary.
   - Wait: A one-second pause helps to avoid hitting API rate limits before processing the next URL.
4. Aggregate the Knowledge: Once the loop is complete, a Code node gathers all 10 individual summaries into a single, neatly structured list (see the sketch below).
5. Draft the Research Report: The aggregated data is fed to the Generator Agent. Following a detailed prompt, this Gemini-powered agent writes a full research report, structuring it with headings and adding inline citations for every piece of information it uses.
6. Evaluate and Finalize: The generated draft is passed to the final Evaluator Chain. This agent acts as a quality control supervisor. It verifies that all claims are correctly cited, refines the content for clarity and academic tone, and polishes the markdown formatting to produce the final, ready-to-use report.
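For a feel of what the aggregation step does, here is a minimal standalone Python sketch of the Code node's logic; the field names (`summary`, `url`, `source_id`) are illustrative assumptions, not the template's actual keys.

```python
# A minimal standalone sketch of the aggregation Code node's logic;
# field names are illustrative, not the template's actual keys.
loop_items = [
    {"url": "https://example.com/a", "summary": "Summary of source A."},
    {"url": "https://example.com/b", "summary": "Summary of source B."},
]

summaries = [
    {"source_id": i, "url": item["url"], "summary": item["summary"]}
    for i, item in enumerate(loop_items, start=1)
]

# The Generator Agent receives one structured list instead of 10 loose items,
# making it easy to cite each source by its source_id.
print({"summaries": summaries})
```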
🚀 How to Use This Workflow
1. Credentials: Click on Use template, then configure your credentials for the following nodes:
   - Jina AI: You will need a Jina AI API key for the Search web and Read URL content nodes. Get your key from here: JinaAI API Key
   - Google Gemini: You will need a Google Gemini API key for the Summarizer Model, Generator Model, and Evaluator Model nodes. Get your key from here: Gemini API Key
2. Activate Workflow: Make sure the workflow is active in your n8n instance.
3. Start Research: Send a chat message with your research topic to the webhook URL provided in the When chat message received node.
4. Get Your Report: Check the output of the final node, Evaluator Chain, to find your completed and polished research report.

Nodes Used
- Chat Trigger
- Jina AI
- Code (Python)
- Split in Batches (Looping)
- Wait
- AI Agent
- Basic LLM Chain
- Google Gemini Chat Model
by Harshil Agrawal
This workflow allows you to receive updates about the position of the ISS and add them to a table in TimescaleDB.

- Cron node: The Cron node triggers the workflow every minute. You can configure the time based on your use case.
- HTTP Request node: This node makes an HTTP request to an API that returns the position of the ISS. Based on your use case, you may want to fetch data from a different URL. Enter the URL in the URL field.
- Set node: In the Set node we set the information that we need in the workflow. Since we only need the timestamp, latitude, and longitude, we set these in the node. If you need other information, you can set it in this node (see the sketch below).
- TimescaleDB node: This node stores the information in a table named iss. You can use a different table as well.
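A standalone Python sketch of the HTTP Request + Set steps is below. The template doesn't pin a specific URL, so the freely available wheretheiss.at API is an assumption here.

```python
# A minimal sketch of fetching the ISS position and keeping only the
# three fields the TimescaleDB `iss` table expects. The API endpoint is
# an assumption; the template lets you use any URL you like.
import requests

resp = requests.get("https://api.wheretheiss.at/v1/satellites/25544", timeout=10)
resp.raise_for_status()
data = resp.json()

row = {
    "timestamp": data["timestamp"],
    "latitude": data["latitude"],
    "longitude": data["longitude"],
}
print(row)
```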
by ConvertAPI
Who is this for?
For developers and organizations that need to convert web pages to PDF.

What problem is this workflow solving?
Converting a web page to PDF.

What this workflow does
- Converts a web page to PDF.
- Stores the PDF file in the local file system.

How to customize this workflow to your needs
1. Open the HTTP Request node.
2. Adjust the URL parameter (all endpoints can be found here).
3. Add your secret to the Query Auth account parameter. Please create a ConvertAPI account to get an authentication secret.
4. Change the parameter url to the web page you want to convert to PDF (see the sketch below).
5. Optionally, additional Body Parameters can be added for the converter.
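A rough standalone sketch of what the HTTP Request node does, assuming ConvertAPI's v2 REST conventions; verify the exact parameter names against the endpoint docs linked above.

```python
# A rough sketch of a web-to-PDF conversion call, assuming ConvertAPI's
# v2 REST conventions (check their docs for authoritative parameter names).
import base64
import requests

SECRET = "your-convertapi-secret"  # from your ConvertAPI account

resp = requests.post(
    f"https://v2.convertapi.com/convert/web/to/pdf?Secret={SECRET}",
    json={"Parameters": [{"Name": "Url", "Value": "https://example.com"}]},
    timeout=60,
)
resp.raise_for_status()

# The response carries the PDF as base64; write it to the local file system.
for f in resp.json()["Files"]:
    with open(f["FileName"], "wb") as out:
        out.write(base64.b64decode(f["FileData"]))
```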
by Miquel Colomer
Do you want to avoid communication problems when launching phone calls? This workflow verifies landline and mobile phone numbers using the uProc Get Parsed and Validated Phone tool, with worldwide coverage.

You need to add your credentials (the Email and API Key found in the Integration section of your uProc account) to n8n.

The "Create Phone Item" node can be replaced by any other supported service that provides phone values, like databases (MySQL, Postgres) or Typeform.

The "uProc" node returns the following fields for every parsed and validated phone number:
- country_prefix: contains the international country phone prefix number.
- country_code: contains the 2-digit ISO country code of the phone number.
- local_number: contains the phone number without the international prefix.
- formatted: contains a formatted version of the phone number, according to the detected country.
- valid: indicates whether the phone number has a valid format and prefix.
- type: the phone number type (mobile, landline, or something else).

The "If" node checks if the phone number is valid (see the sketch below). You can use the result to mark invalid phone numbers in your database or discard them from future telemarketing campaigns.
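An illustrative Python sketch of the "If" branch, using a uProc-style response dict; the sample values below are made up for demonstration.

```python
# Route a parsed phone number the way the "If" node does; the sample
# values are invented for illustration.
phone = {
    "country_prefix": "34",
    "country_code": "ES",
    "local_number": "600000000",
    "formatted": "+34 600 00 00 00",
    "valid": True,
    "type": "mobile",
}

if phone["valid"]:
    print(f"Keep {phone['formatted']} ({phone['type']}) for the campaign")
else:
    print(f"Discard or flag {phone['local_number']} as invalid")
```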
by Jenny
Vector Database as a Big Data Analysis Tool for AI Agents

Workflows from the webinar "Build production-ready AI Agents with Qdrant and n8n". This series of workflows shows how to build big data analysis tools for production-ready AI agents with the help of vector databases. These pipelines are adaptable to any dataset of images, and hence to many production use cases:
1. Uploading (image) datasets to Qdrant
2. Setting up meta-variables for anomaly detection in Qdrant
3. Anomaly detection tool
4. KNN classifier tool

For anomaly detection
1. This is the first pipeline, which uploads an image dataset to Qdrant.
2. The second pipeline sets up cluster (class) centres and cluster (class) threshold scores needed for anomaly detection.
3. The third is the anomaly detection tool, which takes any image as input and uses all the preparatory work done with Qdrant to detect whether it is an anomaly relative to the uploaded dataset.

For KNN (k nearest neighbours) classification
1. This is the first pipeline, which uploads an image dataset to Qdrant.
2. The second is the KNN classifier tool, which takes any image as input and classifies it against the dataset uploaded to Qdrant.

To recreate both
You'll have to upload the crops and lands datasets from Kaggle to your own Google Cloud Storage bucket, and re-create the APIs/connections to Qdrant Cloud (you can use a Free Tier cluster), the Voyage AI API, and Google Cloud Storage.

[This workflow] Batch Uploading an Image Dataset to Qdrant
This template imports dataset images from Google Cloud Storage, creates Voyage AI embeddings for them in batches, and uploads them to Qdrant, also in batches. In this particular template, we work with the crops dataset; however, it's analogous to uploading the lands dataset, and in general it's adaptable to any dataset consisting of image URLs (as the following pipelines are).
1. First, check for an existing Qdrant collection to use; otherwise, create it here. Additionally, when creating the collection, we'll create a payload index, which is required for a particular type of Qdrant request we will use later.
2. Next, import all (dataset) images from Google Cloud Storage but keep only non-tomato-related ones (for anomaly detection testing).
3. Create (per batch) embeddings for all imported images using the Voyage AI multimodal embeddings API.
4. Finally, upload the resulting embeddings and image descriptors to Qdrant via batch upload (a sketch of this step follows below).
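A minimal sketch of the final batch-upload step using the qdrant-client library; the collection name, payload field, and vector values are illustrative assumptions.

```python
# Batch-upsert image embeddings into Qdrant; names and values here are
# placeholders, not the template's actual configuration.
from qdrant_client import QdrantClient
from qdrant_client.models import PointStruct

client = QdrantClient(url="https://your-cluster.qdrant.io", api_key="...")

# One point per image: the Voyage AI embedding as the vector,
# the image URL (and any labels) as payload.
embeddings = [[0.01] * 1024]                      # placeholder vectors
image_urls = ["gs://bucket/crops/0001.jpg"]       # placeholder URLs

points = [
    PointStruct(id=i, vector=vec, payload={"image_url": url})
    for i, (vec, url) in enumerate(zip(embeddings, image_urls))
]
client.upsert(collection_name="crops", points=points)
```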
by darrell_tw
This workflow automates the process of fetching agricultural transaction data from the Taiwan Agricultural Products Open Data Platform and storing it in a Google Sheets document for further analysis.

Key Features
- Manual Trigger: Allows manual execution of the workflow to control when data is fetched.
- HTTP Request: Sends a request to the Open Data Platform's API to retrieve detailed transaction data, including:
  - Pricing (Upper, Middle, Lower, Average)
  - Transaction quantities
  - Crop and market details
- Split Out Node: Processes each record individually, ensuring accurate handling of every data entry.
- Google Sheets Integration: Appends the data into a structured Google Sheets document for easy access and analysis.

Node Configurations
1. Manual Trigger
   - Purpose: Start the workflow manually.
   - Configuration: No setup needed.
2. HTTP Request
   - Purpose: Fetch agricultural data.
   - Configuration:
     - URL: https://data.moa.gov.tw/api/v1/SheepQuotation
     - Query Parameters: Start_time: 2024/12/01, End_time: 2024/12/31, MarketName: 台北二, api_key: <your_api_key>
     - Headers: accept: application/json
3. Split Out
   - Purpose: Split the API response data array into individual items.
   - Configuration: Field to Split Out: Data
4. Google Sheets
   - Purpose: Append the data to Google Sheets.
   - Configuration:
     - Operation: Append
     - Document ID: <your_document_id>
     - Sheet Name: Sheet1
     - Mapped Fields: TransDate, TcType, CropCode, CropName, MarketCode, MarketName, Upper_Price, Middle_Price, Lower_Price, Avg_Price, Trans_Quantity

A sketch of the HTTP Request step appears below. Tip: make use of n8n's Curl Import feature, for example:
curl -X GET "https://data.moa.gov.tw/api/v1/AgriProductsTransType/?Start_time=114.01.01&End_time=114.01.01&MarketName=%E5%8F%B0%E5%8C%97%E4%BA%8C" -H "accept: application/json"
See the Agricultural Open Data Platform (農業資料開放平台) documentation for details.
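A standalone Python sketch mirroring the HTTP Request and Split Out steps, using the query parameters listed above (the api_key value is a placeholder).

```python
# Fetch agricultural transaction data and split the `Data` array into
# individual records, as the Split Out node does.
import requests

resp = requests.get(
    "https://data.moa.gov.tw/api/v1/SheepQuotation",
    params={
        "Start_time": "2024/12/01",
        "End_time": "2024/12/31",
        "MarketName": "台北二",
        "api_key": "<your_api_key>",
    },
    headers={"accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()

for record in resp.json().get("Data", []):
    print(record.get("TransDate"), record.get("CropName"), record.get("Avg_Price"))
```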
by Davide
Workflow Overview
This workflow automates the process of scraping Trustpilot reviews, extracting key details, analyzing sentiment, and saving the results to Google Sheets. It uses OpenAI for sentiment analysis and HTML parsing for review extraction.

How It Works
1. Scrape Trustpilot Reviews
   - HTTP Request: Fetches review pages from Trustpilot (https://it.trustpilot.com/review/{{company_id}}). Paginates through pages (up to the max_page limit).
   - HTML Parsing: Extracts review URLs using CSS selectors and splits the URLs into individual review links (see the sketch below).
2. Extract Review Details
   - Information Extractor: Uses DeepSeek to extract structured data from the review:
     - Author: Name of the reviewer.
     - Rating: Numeric rating (1-5).
     - Date: Review date in YYYY-MM-DD format.
     - Title: Review title.
     - Text: Full review text.
     - Total Reviews: Number of reviews by the user.
     - Country: Reviewer's country (2-letter code).
3. Sentiment Analysis
   - Sentiment Analysis Node: Uses OpenAI to classify the review text as Positive, Neutral, or Negative. Example output: { "category": "Positive", "confidence": 0.95 }
4. Save to Google Sheets
   - Google Sheets Node: Appends or updates the extracted data in a Google Sheet.

Set Up Steps
1. Configure Trustpilot Scraping
   - Edit Fields1 Node: Set company_id to the Trustpilot company name. Set max_page to limit the number of pages scraped.
2. Configure Google Sheets
   - Google Sheets Node: Update the documentId with your Google Sheet ID. Ensure the sheet has the required columns (Id, Data, Nome, etc.).
3. Configure OpenAI
   - OpenAI Chat Model Node: Add your OpenAI API key.
   - Sentiment Analysis Node: Ensure the categories match your desired sentiment labels (Positive, Neutral, Negative).

Key Components
- Nodes:
  - HTTP Request/HTML: Scrape and parse Trustpilot reviews.
  - Information Extractor: Extract structured review data using DeepSeek.
  - Sentiment Analysis: Classify review sentiment.
  - Google Sheets: Save and update review data.
- Credentials:
  - OpenAI API key.
  - DeepSeek API key.
  - Google Sheets OAuth2.
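An illustrative Python sketch of the scrape-and-parse steps; the CSS selector is a placeholder, since Trustpilot's actual markup may differ and should be adapted.

```python
# Paginate through a company's Trustpilot review pages and collect links
# to individual reviews. The selector is an assumption to adapt.
import requests
from bs4 import BeautifulSoup

company_id = "example.com"
max_page = 2
review_links = []

for page in range(1, max_page + 1):
    resp = requests.get(
        f"https://it.trustpilot.com/review/{company_id}",
        params={"page": page},
        timeout=30,
    )
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    # Placeholder selector: anchors pointing at individual reviews.
    for a in soup.select("a[href*='/reviews/']"):
        review_links.append("https://it.trustpilot.com" + a["href"])

print(len(review_links), "review links collected")
```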
by Davide
This workflow implements a Retrieval-Augmented Generation (RAG) system that:
- Stores vectorized documents in Qdrant,
- Retrieves relevant content based on user input,
- Generates AI answers using Google Gemini,
- Automatically cites the document sources (from Google Drive).

Workflow Steps
1. Create Qdrant Collection: A REST API node creates a new collection in Qdrant with the specified vector size (1536) and cosine similarity (a sketch of this call follows below).
2. Load Files from Google Drive: The workflow lists all files in a Google Drive folder, downloads them as plain text, and loops through each.
3. Text Preprocessing & Embedding: Documents are split into chunks (500 characters, with 50-character overlap). Embeddings are created using OpenAI embeddings (text-embedding-3-small assumed). Metadata (file name and ID) is attached to each chunk.
4. Store in Qdrant: All vectors, along with metadata, are inserted into the Qdrant collection.
5. Chat Input & Retrieval: When a chat message is received, the question is embedded and matched against Qdrant. The top 5 relevant document chunks are retrieved. A Gemini model is used to generate the answer based on those sources.
6. Source Aggregation & Response: File IDs and names are deduplicated. The AI response is combined with a list of cited documents (filenames). Final output: AI Response, Sources: ["Document1", "Document2"]

Main Advantages
- End-to-end Automation: From document ingestion to chat response generation, fully automated with no manual steps.
- Scalable Knowledge Base: Easy to expand by simply adding files to the Google Drive folder.
- Traceable Responses: Each answer includes its source files, increasing transparency and trustworthiness.
- Modular Design: Each step (embedding, storage, retrieval, response) is isolated and reusable.
- Multi-provider AI: Combines OpenAI (for embeddings) and Google Gemini (for chat), optimizing performance and flexibility.
- Secure & Customizable: Uses API credentials and a configurable chunk size, collection name, etc.

How It Works
1. Document Processing & Vectorization: The workflow retrieves documents from a specified Google Drive folder. Each file is downloaded, split into chunks (using a recursive text splitter), and converted into embeddings via OpenAI. The embeddings, along with metadata (file ID and name), are stored in a Qdrant vector database under the collection negozio-emporio-verde.
2. Query Handling & Response Generation: When a user submits a chat message, the workflow:
   - Embeds the query using OpenAI.
   - Retrieves the top 5 relevant document chunks from Qdrant.
   - Uses Google Gemini to generate a response based on the retrieved context.
   - Aggregates and deduplicates the source file names from the retrieved chunks.
   The final output includes both the AI-generated response and a list of source documents (e.g., Sources: ["FAQ.pdf", "Policy.txt"]).

Set Up Steps
1. Configure Qdrant Collection: Replace QDRANTURL and COLLECTION in the "Create collection" HTTP node to initialize the Qdrant collection with:
   - Vector size: 1536 (OpenAI embedding dimension).
   - Distance metric: Cosine.
   Ensure the "Clear collection" node is configured to reset the collection if needed.
2. Google Drive & OpenAI Integration: Link the Google Drive node to the target folder (Test Negozio in this example). Verify that the OpenAI and Google Gemini API credentials are correctly set in their respective nodes.
3. Metadata & Output Customization: Adjust the "Aggregate" and "Response" nodes if additional metadata fields are needed. Modify the "Output" node to format the response (e.g., changing Sources: {{...}} to match your preferred style).
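A minimal Python sketch of the "Create collection" REST call, mirroring the settings above; QDRANTURL, COLLECTION, and the API key are placeholders you replace.

```python
# Create a Qdrant collection with vector size 1536 and cosine distance,
# matching what the "Create collection" HTTP node sends.
import requests

QDRANTURL = "https://your-cluster.qdrant.io"
COLLECTION = "negozio-emporio-verde"

resp = requests.put(
    f"{QDRANTURL}/collections/{COLLECTION}",
    json={"vectors": {"size": 1536, "distance": "Cosine"}},
    headers={"api-key": "<your_qdrant_api_key>"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```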
Testing
- Trigger the workflow manually to test document ingestion.
- Use the chat interface to verify that responses include accurate source attribution.

Note: Replace placeholder values (e.g., QDRANTURL) with actual endpoints before deployment.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
by Don Jayamaha Jr
Instantly access NFT metadata, collections, traits, contracts, and ownership details from OpenSea! This workflow integrates GPT-4o-mini AI, the OpenSea API, and n8n automation to provide structured NFT data for traders, collectors, and investors.

How It Works
1. Receives user queries via Telegram, webhooks, or another connected interface.
2. Determines the correct API tool based on the request (e.g., user profile, NFT metadata, contract details).
3. Retrieves data from the OpenSea API (requires an API key); see the sketch below.
4. Processes the information using an AI-powered NFT insights engine.
5. Returns structured insights in an easy-to-read format for quick decision-making.

What You Can Do with This Agent
🔹 Retrieve OpenSea User Profiles → Get user bio, links, and profile info.
🔹 Fetch NFT Collection Details → Get collection metadata, traits, fees, and contract info.
🔹 Analyze NFT Metadata → Retrieve ownership, rarity, and trait-based pricing.
🔹 Monitor NFTs Owned by a Wallet → Track all NFTs under a specific account.
🔹 Retrieve Smart Contract Data → Get blockchain contract details for an NFT collection.
🔹 Identify Valuable Traits → Fetch NFT trait insights and rarity scores.

Example Queries You Can Use
✅ "Get OpenSea profile for 0xA5f49655E6814d9262fb656d92f17D7874d5Ac7E."
✅ "Retrieve details for the 'Azuki' NFT collection."
✅ "Fetch metadata for NFT #5678 from 'Bored Ape Yacht Club'."
✅ "Show all NFTs owned by 0x123... on Ethereum."
✅ "Get contract details for NFT collection 'CloneX'."

Available API Tools & Endpoints
1️⃣ Get OpenSea Account Profile → /api/v2/accounts/{address_or_username} (Retrieve user bio, links, and image)
2️⃣ Get NFT Collection Details → /api/v2/collections/{collection_slug} (Get collection-wide metadata)
3️⃣ Get NFT Metadata → /api/v2/chain/{chain}/contract/{address}/nfts/{identifier} (Retrieve individual NFT details)
4️⃣ Get NFTs Owned by Account → /api/v2/chain/{chain}/account/{address}/nfts (List all NFTs owned by a wallet)
5️⃣ Get NFTs by Collection → /api/v2/collection/{collection_slug}/nfts (Retrieve all NFTs from a specific collection)
6️⃣ Get NFTs by Contract → /api/v2/chain/{chain}/contract/{address}/nfts (Retrieve all NFTs under a contract)
7️⃣ Get Payment Token Details → /api/v2/chain/{chain}/payment_token/{address} (Fetch info on payment tokens used in NFT transactions)
8️⃣ Get NFT Traits → /api/v2/traits/{collection_slug} (Retrieve collection-wide trait data)

Set Up Steps
1. Get an OpenSea API Key: Sign up at OpenSea API and request an API key.
2. Configure API Credentials in n8n: Add your OpenSea API key under HTTP Header Authentication.
3. Connect the Workflow to Telegram, Slack, or a Database (Optional): Use n8n integrations to send alerts to Telegram or Slack, or save results to Google Sheets, Notion, etc.
4. Deploy and Test: Send a query (e.g., "Azuki latest sales") and receive instant NFT market insights!

Unlock powerful NFT analytics with AI-powered OpenSea insights. Start now!
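A minimal Python sketch of one of the tools above: fetching collection details via the OpenSea v2 API with header authentication (tool 2️⃣ in the list).

```python
# Fetch collection-wide metadata for a collection slug, authenticating
# with the X-API-KEY header as the workflow's credential setup does.
import requests

API_KEY = "<your_opensea_api_key>"
collection_slug = "azuki"

resp = requests.get(
    f"https://api.opensea.io/api/v2/collections/{collection_slug}",
    headers={"X-API-KEY": API_KEY, "accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()
collection = resp.json()
print(collection.get("name"), collection.get("description", "")[:100])
```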
by tanaypant
This workflow automatically follows the steps in a custom incident response playbook: it manages incidents in PagerDuty, creates Jira tickets, and notifies the on-call team in Mattermost. This workflow consists of three sub-workflows, each automating specific steps in the playbook. Read more about this use case and learn how to set up the workflows step-by-step in the blog tutorial How to automate every step of an incident response workflow.

Prerequisites
- A PagerDuty account and credentials
- A Mattermost account and credentials
- A Jira account and credentials

Nodes
- Webhook nodes trigger the workflows when an incident is created in PagerDuty, and when the incident is acknowledged and resolved (see the sketch below).
- Mattermost nodes create an auxiliary channel for the on-call team to discuss the incident, with buttons to acknowledge the incident and mark it as resolved.
- PagerDuty nodes update the status of the incident.
- Jira nodes create an issue about the incident and update its status when it's resolved.
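An illustrative sketch of how the Webhook nodes route incoming events, assuming a PagerDuty-style payload; the exact schema depends on your PagerDuty webhook version, so treat the field names as assumptions.

```python
# Route a PagerDuty-style webhook event to the matching playbook step.
# Field names are assumptions for illustration, not a confirmed schema.
def route_incident_event(payload: dict) -> str:
    event_type = payload.get("event", {}).get("event_type", "")
    if event_type == "incident.triggered":
        return "create Mattermost channel + Jira issue"
    if event_type == "incident.acknowledged":
        return "post acknowledgement to Mattermost"
    if event_type == "incident.resolved":
        return "close Jira issue + archive channel"
    return "ignore"

# Example usage with a made-up payload:
print(route_incident_event({"event": {"event_type": "incident.triggered"}}))
```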
by WeblineIndia
This workflow streamlines the process of creating events in Google Calendar using event data stored in a Google Sheet.

The process begins by retrieving the latest event entry from Google Sheets, ensuring only the most recent event details are processed. Once fetched, a Function node formats the event date to align with Google Calendar's required format, ensuring consistency and preventing date-related errors (a sketch of this formatting step appears at the end of this section). After formatting, the structured event details are sent to Google Calendar, where an event is created with essential information such as the event title (summary), description, date, and location. Additionally, the workflow allows customization by setting the event's status as either "Busy" or "Available", helping attendees manage their schedules. A background color can also be assigned for better visibility and categorization.

By automating this process, you eliminate the need for manual event creation, ensuring seamless synchronization between Google Sheets and Google Calendar. This improves efficiency, accuracy, and productivity, making event management effortless.

Prerequisites
Before setting up this workflow, ensure the following:
- You have an active Google account connected to Google Sheets and Google Calendar.
- The Google Sheets API and Google Calendar API are enabled in the Google Cloud Console.
- n8n has the necessary OAuth2 authentication configured for both Google Sheets and Google Calendar.
- Your Google Sheet has columns for event details (event name, description, location, date, etc.), for example:

|Event Name|Event Description|Event Start Date|Location|
|-|-|-|-|
|Birthday|Celebration|27-Mar-1989|City|
|Anniversary|Celebration|10-Jun-2015|City|

Customization Options
- Modify the Google Sheets trigger to track updates in specific columns.
- Adjust the data formatting function to support:
  - Different date/time formats
  - Time zone settings
  - Custom event colors
  - Attendee invitations

Steps

Step 1: Add the Google Sheets Trigger Node
1. Click "Add Node" and search for Google Sheets.
2. Select "Google Sheets Trigger" and add it to the workflow.
3. Authenticate using your Google account (select an existing account if already authenticated).
4. Select the Spreadsheet and Sheet Name to monitor.
5. Set the Trigger Event to "Row Added".
6. Click "Execute Node" to test the connection, then click "Save".

Step 2: Process Data with the Function Node
1. Click "Add Node" and search for Function.
2. Add the Function Node and connect it to the Google Sheets Trigger Node.
3. In the function editor, write a script to extract and format the data.
4. Ensure the required fields (title, location, date) are properly structured.
5. Click "Execute Node" to verify the formatted output, then click "Save".

Step 3: Add the Google Calendar Node
1. Click "Add Node" and search for Google Calendar.
2. Select the "Create Event" operation.
3. Authenticate with Google Calendar.
4. Map the required fields: Title, Description, Location, Start time.
5. Optional: Set Event Status and Event Colors.
6. Click "Execute Node" to test event creation, then click "Save".

Step 4: Final Steps
1. Connect all nodes in sequence (Google Sheets Trigger → Function Node → Google Calendar Node).
2. Test the workflow by adding a sample row in Google Sheets.
3. Verify that the event is created in Google Calendar with the correct title, description, date, and location.

About WeblineIndia
This workflow was built by the AI development team at WeblineIndia. We help businesses automate processes, reduce repetitive work, and scale faster. Need something custom? You can hire AI developers to build workflows tailored to your needs.
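A minimal Python sketch of the Function node's date formatting, converting the sheet's "27-Mar-1989" style dates into the RFC 3339 form Google Calendar expects; the one-hour duration and time zone offset are illustrative assumptions.

```python
# Convert a "DD-Mon-YYYY" sheet date into start/end timestamps for a
# Google Calendar event. Duration and offset are assumptions to adjust.
from datetime import datetime, timedelta

def to_calendar_times(sheet_date: str) -> dict:
    start = datetime.strptime(sheet_date, "%d-%b-%Y")
    end = start + timedelta(hours=1)  # assume a one-hour event
    return {
        "start": start.isoformat() + "+05:30",  # adjust to your time zone
        "end": end.isoformat() + "+05:30",
    }

print(to_calendar_times("27-Mar-1989"))
# {'start': '1989-03-27T00:00:00+05:30', 'end': '1989-03-27T01:00:00+05:30'}
```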
by CustomJS
This n8n template shows how to extract selected pages from a PDF with the PDF Toolkit by www.customjs.space.

@custom-js/n8n-nodes-pdf-toolkit

Notice
Community nodes can only be installed on self-hosted instances of n8n.

What this workflow does
- Downloads each PDF using an HTTP Request.
- Extracts pages from the PDF file as needed (an illustrative sketch appears at the end of this section).

Requirements
- Self-hosted n8n instance
- CustomJS API key for extracting pages from PDF files
- PDF files whose pages will be extracted

Workflow Steps
1. Manual Trigger: Runs with user interaction.
2. Download PDF File: Pass URLs for the PDF files to process.
3. Extract Pages from PDF: Extract selected pages from the downloaded PDF.

Usage
1. Get an API key from CustomJS:
   - Sign up to the CustomJS platform.
   - Navigate to your profile page.
   - Press the "Show" button to get your API key.
2. Set credentials for the CustomJS API in n8n: copy and paste the API key generated from CustomJS.
3. Design the workflow:
   - A Manual Trigger for starting the workflow.
   - HTTP Request nodes for downloading PDF files.
   - Extract Pages from PDF.

You can replace the logic for triggering and returning results. For example, you can trigger this workflow by calling a webhook and get the result as a response from the webhook. Simply replace the Manual Trigger and Write to Disk nodes.

Perfect for
- Taking note of specific pages from PDF files.
- Splitting a PDF file into multiple parts.
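The CustomJS node's internals aren't public, but the same download-and-extract operation can be illustrated with the pypdf library; this is a swapped-in technique for illustration, not what the template itself runs, and the URL and page indices are placeholders.

```python
# Download a PDF and keep only selected pages, mirroring the workflow's
# HTTP Request + Extract Pages steps with pypdf instead of CustomJS.
from io import BytesIO

import requests
from pypdf import PdfReader, PdfWriter

resp = requests.get("https://example.com/sample.pdf", timeout=60)
resp.raise_for_status()

reader = PdfReader(BytesIO(resp.content))
writer = PdfWriter()

# Keep only the selected pages (0-indexed here: pages 1 and 3).
for page_index in [0, 2]:
    writer.add_page(reader.pages[page_index])

with open("extracted.pdf", "wb") as f:
    writer.write(f)
```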