by Nexio_2000
This n8n template demonstrates how to export all icon metadata from an Iconfinder account into an organized format with previews, names, iconset names, and tags. It generates HTML and CSV outputs.

## Good to know

- Iconfinder does not provide a built-in feature for contributors to export all icon data at once, which motivated the creation of this workflow.
- The workflow exports all iconsets for the selected user account and can handle large collections.
- Preview image URLs are extracted in a consistent size (e.g., 128x128) for easy viewing.
- Basic icon metadata, including tags and iconset names, is included for reference or further automation.

## How it works

1. The workflow fetches all iconsets from your Iconfinder account.
2. It loops through all your iconsets, handling pagination automatically if an iconset contains more than 100 icons (a sketch of this pagination step appears at the end of this section).
3. Each icon is processed to retrieve its metadata, including name, tags, preview image URLs, and the iconset it belongs to.
4. An HTML file with a preview table and a CSV file with all icon details are generated.

## How to use

1. **Retrieve your User ID** – A dedicated node in the workflow fetches your Iconfinder user ID. This ensures the workflow knows which contributor account to access.
2. **Set up API access** – The workflow includes a setup node where you provide your Iconfinder API key. This node passes the authorization token to all subsequent HTTP Request nodes, so you don't need to enter it manually multiple times.
3. **Trigger the workflow** – You can start it manually or attach it to a different trigger, such as a webhook or schedule.
4. **Export outputs** – The workflow generates an HTML file with preview images and a CSV file containing all metadata. Both files are ready for download or further processing.

## Requirements

- Iconfinder account with an API key.

## Customising this workflow

- Adjust the preview size or choose which metadata to include in the HTML and CSV outputs.
- Combine with other workflows to automate asset cataloging.
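As a rough illustration of the pagination step, here is a minimal JavaScript sketch of how an n8n Code node might page through an iconset's icons. It assumes a recent n8n where global `fetch` is available in Code nodes, and it assumes the Iconfinder v4 "icons in iconset" endpoint with `count`/`offset` paging; verify both against the Iconfinder API docs before relying on it.

```javascript
// Hypothetical pagination helper for an n8n Code node.
// Endpoint path and count/offset paging are assumptions.
const apiKey = $json.apiKey;        // provided by the setup node
const iconsetId = $json.iconsetId;  // current iconset from the loop

const icons = [];
const pageSize = 100;
let offset = 0;

while (true) {
  const res = await fetch(
    `https://api.iconfinder.com/v4/iconsets/${iconsetId}/icons?count=${pageSize}&offset=${offset}`,
    { headers: { Authorization: `Bearer ${apiKey}` } }
  );
  const page = await res.json();
  const batch = page.icons ?? [];
  icons.push(...batch);
  if (batch.length < pageSize) break; // last page reached
  offset += pageSize;
}

// One n8n item per icon, keeping only the fields the export needs.
return icons.map(icon => ({
  json: {
    name: icon.icon_id,
    iconset: iconsetId,
    tags: (icon.tags ?? []).join(', '),
    preview: icon.raster_sizes?.find(s => s.size === 128)?.formats?.[0]?.preview_url,
  },
}));
```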
by Paul
# Agent XML System Message Engineering: Enabling Robust Enterprise Integration and Automation

## Why Creating System Messages in XML Is Important

XML (Extensible Markup Language) engineering is a foundational technique in modern software and system architecture. It enables the structured creation, storage, and exchange of messages—such as system instructions, configuration, or logs—by providing a human-readable, platform-independent, and machine-processable format. Here's why this matters and how big tech companies leverage it.

## Importance of XML in Engineering

- **Standardization & Interoperability:** XML provides a consistent way to model and exchange data between different software components, no matter the underlying technology. This enables seamless integration of diverse systems, both internally within companies and externally across partners or clients.
- **Traceability & Accountability:** By capturing not only the data but also its context (e.g., source, format, transformation steps), XML enables engineers to trace logic, troubleshoot issues, and ensure regulatory compliance. This is particularly crucial in sectors like finance, healthcare, and engineering, where audit trails and documentation are mandatory.
- **Configuration & Flexibility:** XML files are widely used for application settings. The clear hierarchical structure allows easy updates, quick testing of setups, and management of complex configurations—without deep developer intervention.
- **Reusability & Automation:** Automating the creation of system messages or logs in XML allows organizations to reuse and adapt those messages for various systems or processes, reducing manual effort and errors while improving scalability.

## How Big Tech Companies Use XML

- **System Integration and Messaging:** Large enterprises including Amazon, Google, Microsoft, and SAP use XML for encoding, transporting, and processing data between distributed systems via web services (such as SOAP and REST APIs), often at web scale.
- **Business Process Automation:** In supply chain management, e-commerce, and transactional processing, XML enables rapid, secure, and traceable information exchange—helping automate operations that cross organizational and geographical borders.
- **Content Management & Transformation:** Companies use XML to manage and deliver dynamic content—such as translations, different document layouts, or multi-channel publishing—by separating data from its presentation and enabling real-time transformations through XSLT or similar technologies.
- **Data Storage, Validation, and Big Data:** XML's schema definitions (XSD) and well-defined structure are used by enterprises for validating and storing data models, supporting compatibility and quality across complex systems, including big data applications.

## Why XML System Message Engineering Remains Relevant

> "XML is currently the most sophisticated format for distributed data — the World Wide Web can be seen as one huge XML database... Rapid adoption by industry [reinforces] that XML is no longer optional."

XML brings consistency, scalability, and reliability to how software communicates, making development faster and systems more robust. Enterprises continue to use XML alongside newer formats (like JSON) wherever rich validation, structured messaging, and backward compatibility with legacy systems are required.
In summary: XML engineering empowers organizations, especially tech giants, to build, scale, and manage complex digital ecosystems by facilitating integration, automation, traceability, and standardization of data and messages across their platforms, operations, and partners.
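To make this concrete, here is a small illustrative JavaScript sketch of an agent system message expressed as XML. The element names (`role`, `constraints`, `context`) are hypothetical, chosen only to show how instruction content can be separated from its metadata; they are not a standard schema.

```javascript
// Illustrative only: a hypothetical XML system message for an agent.
// Element names are invented for this example, not a standard schema.
const systemMessage = `
<systemMessage version="1.0">
  <role>You are a support agent for ACME Corp.</role>
  <constraints>
    <constraint>Answer only from the provided knowledge base.</constraint>
    <constraint>Escalate billing disputes to a human.</constraint>
  </constraints>
  <context source="crm" format="json">
    <field name="customerTier">enterprise</field>
  </context>
</systemMessage>`.trim();

// Because the structure is explicit, downstream systems can validate it
// (e.g., against an XSD) and trace which constraints were in effect.
console.log(systemMessage);
```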
by Shahrear
# Automatically process Construction Blueprints into structured Excel entries with VLM extraction

> Disclaimer: This template uses community nodes, including the VLM Run node. It requires a self-hosted n8n instance and will not run on n8n Cloud.

## What this workflow does

- Monitors OneDrive for new blueprints in a target folder
- Downloads the file inside n8n for processing
- Sends the file to VLM Run for vision-language model (VLM) analysis
- Fetches details from the construction.blueprint domain as JSON
- Appends normalized fields to an Excel sheet as a new row (see the mapping sketch at the end of this section)

## Setup

**Prerequisites:** Microsoft account, VLM Run API credentials, OneDrive access, Excel Online, n8n.

Install the verified VLM Run node by searching for VLM Run in the node list, then click Install. Once installed, you can start using it in your workflows.

**Quick Setup:**

1. Create the OneDrive folder you want to watch and copy its Folder ID.
   - OneDrive web: open the folder in your browser, then copy the value of the `id=` URL parameter. It is URL-encoded.
   - Alternative in n8n: use a OneDrive node with the operation set to List to browse folders and copy the `id` field from the response.
2. Create an Excel sheet with headers like: timestamp, file_name, file_id, mime_type, size_bytes, uploader_email, document_type, document_number, issue_date, author_name, drawing_title_numbers, revision_history, job_name, address, drawing_number, revision, drawn_by, checked_by, scale_information, agency_name, document_title, blueprint_id, blueprint_status, blueprint_owner, blueprint_url
3. Configure OneDrive OAuth2 for the trigger and download nodes. Use Microsoft OAuth2 in n8n, approve the requested scopes for file access and offline access when prompted, and test the connection by listing a known folder.
4. Add VLM Run API credentials from https://app.vlm.run/dashboard to the VLM Run node.
5. Configure Excel Online OAuth2 and set the Spreadsheet ID and target sheet tab.
6. Test by uploading a sample file to the watched OneDrive folder, then activate the workflow.

## Perfect for

- Converting uploaded construction blueprint documents into clean text
- Organizing extracted blueprint details into structured sheets
- Quickly accessing key attributes from technical files
- Centralized archive of blueprint-to-text conversions

## Key Benefits

- **End-to-end automation** from OneDrive upload to structured Excel entry
- **Accurate text extraction** of construction blueprint documents
- **Organized attribute mapping** for consistent records
- **Searchable archives** directly in Excel
- **Hands-free processing** after setup

## How to customize

Extend by adding:

- Version control that links revisions of the same drawing and highlights superseded rows
- Confidence scores per extracted field with threshold-based routing to manual or AI review
- An auto-generated, human-readable summary column for quick scanning of blueprint details
- Splitting large multi-sheet PDFs into per-drawing rows with individual attributes
- Cross-system sync to Procore, Autodesk Construction Cloud, or BIM 360 for project-wide visibility
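As a rough sketch of the row-mapping step, an n8n Code node between the VLM Run node and the Excel node might flatten the extraction result into the headers listed above. The VLM Run response field names and the upstream node name used here are assumptions for illustration; adjust them to the actual workflow.

```javascript
// Hypothetical n8n Code node: flatten a VLM Run construction.blueprint
// response into one row matching the Excel headers listed above.
// Response field names and the 'OneDrive Download' node name are assumed.
const file = $('OneDrive Download').first().json; // upstream file metadata
const blueprint = $json.response ?? $json;        // VLM Run extraction result

return [{
  json: {
    timestamp: new Date().toISOString(),
    file_name: file.name,
    file_id: file.id,
    mime_type: file.mimeType,
    size_bytes: file.size,
    document_type: blueprint.document_type ?? '',
    drawing_number: blueprint.drawing_number ?? '',
    revision: blueprint.revision ?? '',
    drawn_by: blueprint.drawn_by ?? '',
    job_name: blueprint.job_name ?? '',
    // Collapse structured sub-objects into readable cell values.
    revision_history: JSON.stringify(blueprint.revision_history ?? []),
  },
}];
```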
by Harshil Agrawal
This workflow allows you to create, update, and get a webinar in GoToWebinar.

- **GoToWebinar node:** Creates a new webinar in GoToWebinar.
- **GoToWebinar1 node:** Updates the description of the webinar created in the previous node.
- **GoToWebinar2 node:** Gets the information about the webinar created earlier.
by Saumil Diwaker
## Overview

This n8n workflow template enables querying Excel data stored in an Oracle Database using natural language, powered by Oracle Select AI. The solution consists of two workflows:

- **Workflow A** uploads an Excel file, creates a table in Oracle Database, inserts the data, and registers the table with Oracle Select AI.
- **Workflow B** provides a chat interface that allows users to query the uploaded data using natural language. User questions are translated into SQL by Oracle Select AI, executed directly in the database, and returned as query results.

## Features

- **Natural language queries** – Ask questions in plain English
- **AI-powered SQL generation** – Automatically converts questions to SQL using Oracle Select AI
- **Real-time responses** – Queries run directly on Oracle Database
- **Secure by design** – Data never leaves Oracle Database
- **Chat UI support** – Built-in chat interface when the workflow is public
- **API-ready** – Can be used as a REST API endpoint

## Prerequisites

### Oracle Database

- **Oracle Database 23ai or later** (required for Select AI)
- Database user with the following privileges:
  - CREATE TABLE
  - INSERT
  - EXECUTE on DBMS_CLOUD_AI

Oracle Select AI supports the following AI providers:

- OCI Generative AI
- OpenAI
- **Azure OpenAI (used in this template)**
- Google Gemini

### Azure OpenAI

- Active Azure OpenAI resource
- Deployed GPT model (for example: gpt-4, gpt-4o-mini)
- Azure resource name and deployment name

### Oracle Credential for Azure OpenAI

Create an Oracle credential to securely store the Azure OpenAI API key:

```sql
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'AZURE_OPENAI_CRED',
    username        => 'azure_openai',
    password        => 'YOUR_AZURE_OPENAI_API_KEY'
  );
END;
/
```

### Select AI Configuration

In Workflow A, update the Select AI configuration node with your environment details:

```json
{
  "profile_name": "EXCEL_AI",
  "provider": "azure",
  "azure_resource_name": "YOUR_RESOURCE_NAME",
  "azure_deployment_name": "YOUR_MODEL_DEPLOYMENT",
  "credential_name": "AZURE_OPENAI_CRED",
  "table_name": "AUTO_GENERATED"
}
```

Note: The table name is generated automatically during upload and should not be modified manually.

## Usage

### Upload Excel File (Workflow A)

Upload an Excel file using the webhook endpoint:

```bash
curl -X POST \
  -F "file=@your-file.xlsx" \
  https://your-n8n-instance.com/webhook/upload-excel
```

Expected response:

```json
{
  "success": true,
  "tableName": "UPLOAD_EXCEL_20260209123456789",
  "columns": ["ID", "NAME", "AGE", "CITY", "SALARY"],
  "rowCount": 150,
  "selectAIProfile": "EXCEL_AI",
  "message": "Excel file successfully ingested and registered with Oracle Select AI",
  "nextSteps": [
    "Query your data using: SELECT AI EXCEL_AI <your question>",
    "Example: SELECT AI EXCEL_AI show me the top 10 records by salary"
  ]
}
```

The returned `tableName` is used internally by Workflow B to scope chat queries to the correct dataset.

### Query Your Data with Select AI (Workflow B)

After the Excel file is uploaded and registered, you can query the data using natural language through Workflow B. A sketch of how such a query might be assembled is shown below.
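As a rough sketch of what Workflow B does per chat message, a Code node might assemble a Select AI statement before handing it to the Oracle execution step. The exact statement shape and the assumption that the profile is set via `DBMS_CLOUD_AI.SET_PROFILE` should be verified against the Oracle Select AI documentation linked in the References below.

```javascript
// Hypothetical n8n Code node in Workflow B: turn the incoming chat
// message into a Select AI statement scoped to the uploaded table.
// The statement shape is an assumption; verify against the Select AI docs.
const question = $json.chatInput;   // from the n8n chat trigger
const tableName = $json.tableName;  // returned by Workflow A's upload step

// Mention the table explicitly so the generated SQL targets this upload.
const prompt = `${question} from ${tableName}`;

return [{
  json: {
    // Executed by the downstream Oracle node after the session profile
    // has been set with DBMS_CLOUD_AI.SET_PROFILE('EXCEL_AI').
    statement: `SELECT AI ${prompt.replace(/'/g, "''")}`,
  },
}];
```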
### Example Select AI Queries

- Show me the top 10 records by salary from EXCEL_UPLOAD_COMPANY_TABLE
- Count records grouped by city in EXCEL_UPLOAD_TRAVEL_TABLE
- What is the average salary per department from EXCEL_UPLOAD_DEPT_TABLE

Workflow B must be used together with Workflow A and must be configured with the same:

- Oracle Database
- Select AI profile name
- Azure OpenAI configuration

## References

- Oracle Select AI Documentation: https://docs.oracle.com/en-us/iaas/autonomous-database-serverless/doc/select-ai-examples.html
- Azure OpenAI Documentation: https://learn.microsoft.com/azure/ai-services/openai/
- Oracle LiveLabs – Select AI Workshop: https://livelabs.oracle.com/ords/r/dbpm/livelabs/run-workshop?p210_wid=3831
by MSiddhant
# 🚀 Instagram Leads Scraper (Perfect for Cold Outreach)

This workflow automates Instagram lead generation by scraping targeted Instagram profiles based on a niche + location query (example: "dentist in New York"). It uses Apify's Google Search Scraper to find relevant Instagram pages, extracts public emails from the results, validates them with an Email Verification API, and stores only verified leads in Airtable. The final output is a clean lead sheet you can use immediately for cold outreach, CRM imports, or agency prospecting.

## ✅ How it works

1. You click Execute Workflow and enter a niche + location query (example: dentist in New York).
2. The workflow triggers the Apify Google Search Scraper to search for Instagram profiles matching your query.
3. Results are processed and split into individual lead items.
4. Emails are extracted and filtered (Gmail addresses only); a sketch of this step appears at the end of this section.
5. Each email is checked with an Email Verification API to confirm whether it is valid.
6. Only valid emails are saved into Airtable via an upsert, so duplicates are avoided.

## 🎯 Use cases

- 📣 Agencies targeting local businesses (dentists, gyms, salons, cafes, realtors, etc.)
- 🧑💻 Freelancers offering web development, SEO, ads, branding
- 🏢 B2B service providers building niche prospect lists
- 🚀 Startups validating markets quickly
- 📩 Anyone doing cold outreach who needs verified emails

## 🌟 Why this workflow is valuable

This workflow replaces hours of manual searching and copy-pasting with a fully automated lead pipeline. Instead of collecting random leads, you can generate highly targeted lists based on niche + city, with email verification included to improve deliverability and response rates.

## 🛠 Setup instructions

1. Import the workflow JSON into your n8n instance.
2. Create an Apify account and open the Google Search Scraper actor.
3. Copy the API endpoint "Run actor synchronously and get dataset items".
4. Paste the endpoint into the Scraping Data node and add your Apify token.
5. Create an Airtable base/table with these fields: Username, Contact Details, URL, Followers, Email Verifier. You can also clone the Airtable sheet here: Sheet Link
6. Connect your Airtable credentials in the Airtable DB node.
7. Execute the workflow, enter your query, and leads will appear in Airtable ✅

## ⚡ Notes

- This workflow is designed for speed + accuracy (email verification prevents junk leads).
- Airtable storage uses upsert, so running it multiple times won't create duplicates.
- You can change the query structure inside the Apify node to target different platforms or keywords.

Thanks for using this workflow! MSiddhant
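A minimal sketch of the extraction-and-filter step, assuming each scraped item carries `title` and `description` fields from the Apify dataset (those field names are assumptions, not taken from the workflow):

```javascript
// Hypothetical n8n Code node: pull public emails out of scraped profile
// text and keep only Gmail addresses, emitting one n8n item per lead.
const emailPattern = /[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/g;

const leads = [];
for (const item of $input.all()) {
  const text = `${item.json.title ?? ''} ${item.json.description ?? ''}`;
  const emails = text.match(emailPattern) ?? [];
  for (const email of emails) {
    if (!email.toLowerCase().endsWith('@gmail.com')) continue; // Gmail filter
    leads.push({
      json: {
        username: item.json.username ?? item.json.url,
        url: item.json.url,
        email: email.toLowerCase(),
      },
    });
  }
}
return leads;
```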
by Samuel Heredia
# Data Extraction from MongoDB

## Overview

This workflow exposes a public HTTP GET endpoint to read all documents from a MongoDB collection, with:

- Strict validation of the collection name
- Error handling with proper 4xx codes
- Response formatting (e.g., `_id` → `id`) and a consistent 2xx JSON envelope

## Workflow Steps

### Webhook Trigger

*A public GET endpoint receives requests with the collection name as a parameter.*

The workflow begins with a webhook that listens for incoming HTTP GET requests. The endpoint follows this pattern:

`https://{{your-n8n-instance}}/webhook-test/{{uuid}}/:nameCollection`

The `:nameCollection` parameter is passed directly in the URL and specifies the MongoDB collection to be queried. Example: `https://yourdomain.com/webhook-test/abcd1234/orders` would attempt to fetch all documents from the `orders` collection.

### Validation

*The collection name is checked against a set of rules to prevent invalid or unsafe queries.*

Before querying the database, the collection name undergoes validation using a regular expression:

`^(?!system\.)[a-zA-Z0-9._]{1,120}$`

Purpose of validation:

- Blocks access to MongoDB's reserved `system.*` collections.
- Prevents injection attacks by ensuring only alphanumeric characters, underscores, and dots are allowed.
- Enforces MongoDB's length restriction (max 120 characters).

This step ensures the workflow cannot be exploited with malicious input.

### Conditional Check

*If the validation fails, the workflow stops and returns an error message. If it succeeds, it continues.*

The workflow checks whether the collection name passes validation.

- If valid ✅: proceeds to query MongoDB.
- If invalid ❌: immediately returns a structured HTTP 400 response, adhering to RESTful standards:

```json
{ "code": 400, "message": "{{ $json.message }}" }
```

### MongoDB Query

*The workflow connects to MongoDB and retrieves all documents from the specified collection.*

To use the MongoDB node, a proper database connection must be configured in n8n through MongoDB credentials in the node settings.

Create MongoDB credentials in n8n:

1. Go to n8n → Credentials → New.
2. Select MongoDB and fill in the following fields:
   - Host: the MongoDB server hostname or IP (e.g., cluster0.mongodb.net)
   - Port: default is 27017 for local deployments
   - Database: name of the database (e.g., myDatabase)
   - User: MongoDB username with read permissions
   - Password: corresponding password
   - Connection Type: Standard for most cases, or Connection String if using a full URI
   - Replica Set / SRV Record: enable if using MongoDB Atlas or a replica cluster

Using a connection string (recommended for MongoDB Atlas), example URI:

`mongodb+srv://<username>:<password>@cluster0.mongodb.net/myDatabase?retryWrites=true&w=majority`

Paste this into the Connection String field when selecting "Connection String" as the type. After saving, test the credentials to confirm n8n can connect successfully to your MongoDB instance.

Configure the MongoDB node in the workflow:

- Operation: Find (to fetch documents)
- Collection: dynamic value passed from the workflow (e.g., `{{$json["nameCollection"]}}`)
- Query: leave empty to fetch all documents, or define filters if needed

Result: the MongoDB node retrieves all documents from the specified collection and passes the dataset as JSON to the next node for processing.

### Data Formatting

*The retrieved documents are processed to adjust field names.*

By default, MongoDB returns its unique identifier as `_id`. To align with common API conventions, this step renames `_id` → `id`.
This small transformation simplifies downstream usage, making responses more intuitive for client applications. A sketch of the validation and formatting Code nodes appears at the end of this section.

### Response

*The cleaned dataset is returned as a structured JSON response to the original request.*

The processed dataset is returned as the response to the original HTTP request. Clients receive a clean JSON payload with the expected format and renamed identifiers. Example response:

```json
[
  { "id": "64f13c1e2f1a5e34d9b3e7f0", "name": "John Doe", "email": "john@example.com" },
  { "id": "64f13c1e2f1a5e34d9b3e7f1", "name": "Jane Smith", "email": "jane@example.com" }
]
```

## Workflow Summary

Webhook (GET) → Code (Validation) → IF (Validation Check) → MongoDB (Query) → Code (Transform IDs) → Respond to Webhook

## Key Benefits

- ✅ Security-first design: prevents unauthorized access and injection attacks.
- ✅ Standards compliance: uses HTTP status codes (400) for invalid requests.
- ✅ Clean API response: transforms MongoDB's native `_id` into a more user-friendly `id`.
- ✅ Scalability: ready for integration with any frontend, third-party service, or analytics pipeline.
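Minimal sketches of the two Code nodes described above, assuming the collection name arrives on the webhook item as a path parameter named `nameCollection` (the exact input shape depends on the Webhook node configuration). First, the validation node, using the same regular expression:

```javascript
// Validation Code node: same regex as in the description above.
const name = $json.params?.nameCollection ?? '';
const valid = /^(?!system\.)[a-zA-Z0-9._]{1,120}$/.test(name);

if (!valid) {
  return [{ json: { valid: false, message: `Invalid collection name: ${name}` } }];
}
return [{ json: { valid: true, nameCollection: name } }];
```

And the formatting node that renames `_id` to `id` on every retrieved document:

```javascript
// Transform Code node: rename MongoDB's _id to id on every document.
return $input.all().map(({ json }) => {
  const { _id, ...rest } = json;
  return { json: { id: String(_id), ...rest } };
});
```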
by vinci-king-01
> This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## How it works

This automated workflow monitors stock prices by scraping real-time data from Yahoo Finance. It uses a scheduled trigger to run at specified intervals, extracts key stock metrics using AI-powered extraction, formats the data through a custom Code node, and automatically saves the structured information to Google Sheets for tracking and analysis.

Key steps:

1. **Scheduled Trigger**: Runs automatically at specified intervals to collect fresh stock data
2. **AI-Powered Scraping**: Uses ScrapeGraphAI to intelligently extract stock information (symbol, current price, price change, change percentage, volume, and market cap) from Yahoo Finance
3. **Data Processing**: Formats extracted data through a custom Code node for spreadsheet compatibility and handles both single- and multiple-stock formats (a sketch of this step appears at the end of this section)
4. **Automated Storage**: Saves all stock data to Google Sheets with proper column mapping for easy filtering, analysis, and historical tracking

## Set up steps

Setup time: 5-10 minutes

1. Configure credentials: set up your ScrapeGraphAI API key and Google Sheets OAuth2 credentials
2. Customize target: update the website URL in the ScrapeGraphAI node to your desired stock symbol (currently set to AAPL)
3. Configure schedule: set your preferred trigger frequency (daily, hourly, etc.) for stock price monitoring
4. Map spreadsheet: connect to your Google Sheets document and configure column mapping for the stock data fields

Pro tips:

- Keep detailed configuration notes in the sticky notes within the workflow
- Test with a single stock first before scaling to multiple stocks
- Consider modifying the Code node to handle different stock symbols or add additional data fields
- Perfect for building a historical database of stock performance over time
- Can be extended to track multiple stocks by modifying the ScrapeGraphAI prompt
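A minimal sketch of the data-processing step, assuming the ScrapeGraphAI node returns its extraction under a `result` key with the metric names listed above (both assumptions for illustration):

```javascript
// Hypothetical n8n Code node: normalize ScrapeGraphAI output so both a
// single-stock object and an array of stocks map to one row per stock.
const extracted = $json.result ?? $json;
const stocks = Array.isArray(extracted) ? extracted : [extracted];

return stocks.map(stock => ({
  json: {
    symbol: stock.symbol,
    current_price: stock.current_price,
    change: stock.change,
    change_percent: stock.change_percent,
    volume: stock.volume,
    market_cap: stock.market_cap,
    timestamp: new Date().toISOString(), // when this row was collected
  },
}));
```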
by vinci-king-01
# AI-Powered Stock Tracker with Yahoo Finance & Google Sheets

⚠️ COMMUNITY TEMPLATE DISCLAIMER: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template.

This automated workflow monitors stock prices by scraping real-time data from Yahoo Finance. It uses a scheduled trigger to run at specified intervals, extracts key stock metrics using AI-powered extraction, formats the data through a custom Code node, and automatically saves the structured information to Google Sheets for tracking and analysis.

## Pre-conditions / Requirements

Prerequisites:

- n8n instance (self-hosted or cloud)
- ScrapeGraphAI community node installed
- Google Sheets API access
- Yahoo Finance access (no API key required)

Required credentials:

- **ScrapeGraphAI API Key** – for web scraping capabilities
- **Google Sheets OAuth2** – for spreadsheet integration

## Google Sheets Setup

Create a Google Sheets document with the following column structure:

| Column A | Column B | Column C | Column D | Column E | Column F | Column G |
|----------|----------|----------|----------|----------|----------|----------|
| symbol | current_price | change | change_percent | volume | market_cap | timestamp |
| AAPL | 225.50 | +2.15 | +0.96% | 45,234,567 | 3.45T | 2024-01-15 14:30:00 |

## How it works

Key steps:

1. **Scheduled Trigger**: Runs automatically at specified intervals to collect fresh stock data
2. **AI-Powered Scraping**: Uses ScrapeGraphAI to intelligently extract stock information (symbol, current price, price change, change percentage, volume, and market cap) from Yahoo Finance
3. **Data Processing**: Formats extracted data through a custom Code node for spreadsheet compatibility and handles both single- and multiple-stock formats
4. **Automated Storage**: Saves all stock data to Google Sheets with proper column mapping for easy filtering, analysis, and historical tracking

## Set up steps

Setup time: 5-10 minutes

1. Configure credentials: set up your ScrapeGraphAI API key and Google Sheets OAuth2 credentials
2. Customize target: update the website URL in the ScrapeGraphAI node to your desired stock symbol (currently set to AAPL)
3. Configure schedule: set your preferred trigger frequency (daily, hourly, etc.) for stock price monitoring
4. Map spreadsheet: connect to your Google Sheets document and configure column mapping for the stock data fields

## Node Descriptions

Core workflow nodes:

- **Schedule Trigger** – Initiates the workflow at specified intervals
- **Yahoo Finance Stock Scraper** – Extracts real-time stock data using ScrapeGraphAI
- **Stock Data Formatter** – Processes and formats extracted data for spreadsheet compatibility
- **Google Sheets Stock Logger** – Saves formatted stock data to your spreadsheet

Data flow: Trigger → Scraper → Formatter → Logger

## Customization Examples

Track multiple stocks:

```javascript
// In the ScrapeGraphAI node, modify the URL to track different stocks:
const stockSymbols = ['AAPL', 'GOOGL', 'MSFT', 'TSLA'];
const baseUrl = 'https://finance.yahoo.com/quote/';
```

Add additional data fields:

```javascript
// In the Code node, extend the data structure:
const extendedData = {
  ...stockData,
  pe_ratio: extractedData.pe_ratio,
  dividend_yield: extractedData.dividend_yield,
  day_range: extractedData.day_range
};
```

Custom scheduling:

```javascript
// Modify the Schedule Trigger for different frequencies:
// Daily at 9:30 AM (market open): "0 30 9 * * *"
// Every 15 minutes during market hours: "0 */15 9-16 * * 1-5"
// Weekly on Monday: "0 0 9 * * 1"
```

## Data Output Format

The workflow outputs structured JSON data with the following fields:

```json
{
  "symbol": "AAPL",
  "current_price": "225.50",
  "change": "+2.15",
  "change_percent": "+0.96%",
  "volume": "45,234,567",
  "market_cap": "3.45T",
  "timestamp": "2024-01-15T14:30:00Z"
}
```

## Troubleshooting

Common issues:

1. ScrapeGraphAI rate limits – implement delays between requests
2. Yahoo Finance structure changes – update scraping prompts
3. Google Sheets permission errors – verify OAuth2 credentials and document permissions

Performance tips:

- Use appropriate trigger intervals (avoid excessive scraping)
- Implement error handling for network issues
- Consider validating data before saving to Sheets

Pro tips:

- Keep detailed configuration notes in the sticky notes within the workflow
- Test with a single stock first before scaling to multiple stocks
- Consider modifying the Code node to handle different stock symbols or add additional data fields
- Perfect for building a historical database of stock performance over time
- Can be extended to track multiple stocks by modifying the ScrapeGraphAI prompt
by Cai Yongji
# GitHub Trending to Supabase (Daily, Weekly, Monthly)

## Who is this for?

This workflow is for developers, researchers, founders, and data analysts who want a historical dataset of GitHub Trending repositories without manual scraping. It's ideal for building dashboards, newsletters, or trend analytics on top of a clean Supabase table.

## What problem is this workflow solving?

Checking GitHub Trending by hand (daily/weekly/monthly) is repetitive and error-prone. This workflow automates collection, parsing, and storage so you can reliably track changes over time and query them from Supabase.

## What this workflow does

- Scrapes GitHub Trending across Daily, Weekly, and Monthly timeframes using FireCrawl.
- Extracts per-project fields: name, url, description, language, stars (see the parsing sketch at the end of this section).
- Adds a type dimension (daily / weekly / monthly) to each row.
- Inserts structured results into a Supabase table for long-term storage.

## Setup

1. Ensure you have an n8n instance (Cloud or self-hosted).
2. Create credentials:
   - FireCrawl API credential (no hardcoded keys in nodes).
   - Supabase credential (URL + Service Role / insert-capable key).
3. Prepare a Supabase table (example):

```sql
CREATE TABLE public.githubtrending (
  id bigint GENERATED ALWAYS AS IDENTITY NOT NULL,
  created_at timestamp with time zone NOT NULL DEFAULT now(),
  data_date date DEFAULT now(),
  url text,
  project_id text,
  project_desc text,
  code_language text,
  stars bigint DEFAULT '0'::bigint,
  type text,
  CONSTRAINT githubtrending_pkey PRIMARY KEY (id)
);
```

4. Import this workflow JSON into n8n.
5. Run once to validate, then schedule (e.g., daily at 08:00).
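As a rough sketch of the parsing step, an n8n Code node might map FireCrawl extraction results onto the Supabase columns defined above. The scraped item shape (`projects` array with `name`, `url`, `description`, `language`, `stars`) is assumed from the field list in the description, not taken from the workflow itself.

```javascript
// Hypothetical n8n Code node: map FireCrawl results onto the
// githubtrending columns defined in the SQL above.
function parseStars(raw) {
  // Accept "12,345" or "12.3k" style strings as well as plain numbers.
  const s = String(raw ?? '0').replace(/,/g, '').trim();
  return /k$/i.test(s) ? Math.round(parseFloat(s) * 1000) : parseInt(s, 10) || 0;
}

const timeframe = $json.type; // 'daily' | 'weekly' | 'monthly'

return ($json.projects ?? []).map(repo => ({
  json: {
    project_id: repo.name,           // e.g. "owner/repo"
    url: repo.url,
    project_desc: repo.description ?? '',
    code_language: repo.language ?? '',
    stars: parseStars(repo.stars),   // the bigint column expects an integer
    type: timeframe,                 // the added timeframe dimension
  },
}));
```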
by vinci-king-01
## How it works

This workflow automatically analyzes real estate market sentiment by scraping investment forums and news sources, then provides AI-powered market predictions and investment recommendations.

Key steps:

1. **Scheduled Trigger** – Runs on a cron schedule to regularly monitor market sentiment.
2. **Multi-Source Scraping** – Uses ScrapeGraphAI to extract discussions from BiggerPockets forums and real estate news articles.
3. **Sentiment Analysis** – JavaScript nodes analyze text content for bullish/bearish keywords and calculate sentiment scores (a sketch of this scoring appears at the end of this section).
4. **Market Prediction** – Generates investment recommendations (buy/sell/hold) based on sentiment analysis, with confidence levels.
5. **Timing Optimization** – Provides optimal timing recommendations considering seasonal factors and market urgency.
6. **Investment Advisor Alerts** – Formats comprehensive reports with actionable investment advice.
7. **Telegram Notifications** – Sends formatted alerts directly to your Telegram channel for instant access.

## Set up steps

Setup time: 10-15 minutes

1. Configure ScrapeGraphAI credentials – add your ScrapeGraphAI API key for web scraping.
2. Set up a Telegram bot – create a bot via @BotFather and add your bot token and chat ID.
3. Customize data sources – update the URLs to target specific real estate forums or news sources.
4. Adjust schedule frequency – modify the cron expression based on how often you want sentiment updates.
5. Test sentiment analysis – run manually first to ensure the analysis logic works for your market.
6. Configure alert preferences – customize the alert formatting and priority levels as needed.

## Technologies Used

- **ScrapeGraphAI** – for extracting structured data from real estate forums and news sites
- **JavaScript Code nodes** – for sentiment analysis, market prediction, and timing optimization
- **Schedule Trigger** – for automated execution using cron expressions
- **Telegram integration** – for instant mobile notifications and team alerts
- **JSON data processing** – for structured sentiment analysis and market intelligence
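A minimal sketch of keyword-based sentiment scoring, assuming the scraped posts arrive as items with a `text` field. The keyword lists here are illustrative placeholders, not the template's actual lists.

```javascript
// Hypothetical n8n Code node: score each scraped post for sentiment.
const BULLISH = ['appreciation', 'rising', 'hot market', 'bidding war', "seller's market"];
const BEARISH = ['crash', 'bubble', 'correction', 'foreclosure', 'declining'];

return $input.all().map(({ json }) => {
  const text = (json.text ?? '').toLowerCase();
  const bulls = BULLISH.filter(k => text.includes(k)).length;
  const bears = BEARISH.filter(k => text.includes(k)).length;
  const total = bulls + bears;

  // Score in [-1, 1]: positive = bullish, negative = bearish.
  const score = total === 0 ? 0 : (bulls - bears) / total;

  return {
    json: {
      ...json,
      sentiment_score: score,
      signal: score > 0.2 ? 'buy' : score < -0.2 ? 'sell' : 'hold',
    },
  };
});
```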
by PDF Vector
## Overview

Organizations struggle to make their document repositories searchable and accessible. Users waste time searching through lengthy PDFs, manuals, and documentation to find specific answers. This workflow creates a powerful API service that instantly answers questions about any document or image, perfect for building customer support chatbots, internal knowledge bases, or interactive documentation systems.

## What You Can Do

This workflow creates a RESTful webhook API that accepts questions about documents and returns intelligent, contextual answers. It processes various document formats, including PDFs, Word documents, text files, and images, using OCR when needed. The system maintains conversation context through session management, caches responses for performance, provides source references with page numbers, handles multiple concurrent requests, and integrates seamlessly with chatbots, support systems, or custom applications.

## Who It's For

Perfect for developer teams building conversational interfaces, customer support departments creating self-service solutions, technical writers making documentation interactive, organizations with extensive knowledge bases, and SaaS companies wanting to add document Q&A features. Ideal for anyone who needs to make large document repositories instantly searchable through natural language queries.

## The Problem It Solves

Traditional document search returns entire pages or sections, forcing users to read through irrelevant content to find answers. Support teams repeatedly answer the same questions that are already documented. This template creates an intelligent Q&A system that provides precise, contextual answers to specific questions, reducing support tickets by up to 60% and improving user satisfaction.

## Setup Instructions

1. Install the PDF Vector community node from the n8n marketplace
2. Configure your PDF Vector API key
3. Set up the webhook URL for your API endpoint
4. Configure Redis or a database for session management
5. Set response caching parameters
6. Test the API with sample documents and questions

## Key Features

- **RESTful API Interface**: Easy integration with any application
- **Multi-Format Support**: Handle PDFs, Word docs, text files, and images
- **OCR Processing**: Extract text from scanned documents and screenshots
- **Contextual Answers**: Provide relevant responses with source citations
- **Session Management**: Enable conversational follow-up questions
- **Response Caching**: Improve performance for frequently asked questions
- **Analytics Tracking**: Monitor usage patterns and popular queries
- **Error Handling**: Graceful fallbacks for unsupported documents

## API Usage Example

```
POST https://your-n8n-instance.com/webhook/doc-qa
Content-Type: application/json

{
  "documentUrl": "https://example.com/user-manual.pdf",
  "question": "How do I reset my password?",
  "sessionId": "user-123",
  "includePageNumbers": true
}
```

A sketch of calling this endpoint from code appears at the end of this listing.

## Customization Options

Add authentication and rate limiting for production use, implement multi-document search across entire repositories, create specialized prompts for technical documentation or legal documents, add automatic language detection and translation, build conversation history tracking for better context, integrate with Zendesk, Intercom, or other support systems, and enable direct file upload support for local documents.

Note: This workflow uses the PDF Vector community node. Make sure to install it from the n8n community nodes collection before using this template.
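A minimal client-side sketch, in Node.js, of calling the Q&A webhook shown in the API usage example. The response fields (`answer`, `sources`) are assumptions about what the workflow's Respond to Webhook node returns; adjust them to the actual payload.

```javascript
// Hypothetical client for the doc-qa webhook above (Node.js, ESM).
const res = await fetch('https://your-n8n-instance.com/webhook/doc-qa', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    documentUrl: 'https://example.com/user-manual.pdf',
    question: 'How do I reset my password?',
    sessionId: 'user-123',      // reuse for conversational follow-ups
    includePageNumbers: true,
  }),
});

const { answer, sources } = await res.json();
console.log(answer);            // the contextual answer text
console.log(sources);           // e.g. page references for citations
```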