by Davide
Create a RAG with Qdrant and update single files

This workflow automates the creation and management of a Retrieval-Augmented Generation (RAG) system using Qdrant as a vector store and Google Drive as the document source. It enables full or incremental updates to documents in the Qdrant vector database and integrates with a chatbot that uses Google Gemini for question answering.

Benefits

- **Efficient RAG Setup**: Seamlessly integrates OpenAI, Qdrant, and Google Drive into a scalable RAG pipeline.
- **Single File Update**: Replace the vector representation of a single file without reprocessing the entire collection, ideal for keeping documents fresh.
- **Flexible File Source**: Works with Google Drive, allowing document management and updates from a familiar interface.

How It Works

The workflow consists of four main phases:

1. **Collection Setup**: Creates or clears a Qdrant collection to store vectorized documents, configuring it with cosine distance metrics and other parameters.
2. **Document Processing**: Retrieves files from a specified Google Drive folder, downloads and processes each one (text extraction, chunking, and embedding with OpenAI), and stores the embeddings in Qdrant for vector search.
3. **Single-File Update**: Updates or deletes a specific file in the Qdrant collection by referencing its Google Drive ID, then re-embeds the file and updates the vector store (see the sketch at the end of this description).
4. **RAG Querying**: Uses a chat trigger to receive user questions, retrieves relevant documents from Qdrant via vector similarity, and generates answers with Google Gemini as the language model.

Set Up Steps

1. Configure Qdrant: Replace QDRANTURL and COLLECTION in the "Create collection" and "Clear collection" HTTP nodes, and make sure the Qdrant API credentials are set in the credentials section.
2. Google Drive Integration: Specify the Google Drive folder ID in the "Get files" node and configure Google Drive OAuth credentials.
3. OpenAI and Gemini Keys: Add OpenAI API credentials for embeddings (used in the "Embeddings OpenAI" nodes) and configure Google Gemini credentials for the chat model.
4. Single-File Update: Set the file_id in the "Edit Fields3" node to target a specific Google Drive file for updates.
5. Testing: Trigger the workflow manually to populate the Qdrant collection, then use the chat interface to test RAG responses.

Need help customizing? Contact me for consulting and support or add me on Linkedin.
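For orientation, here is a minimal sketch of what the single-file update amounts to under the hood: a delete-by-filter call against Qdrant's points API before the file is re-embedded and upserted. The `metadata.file_id` payload key, the variable names, and the use of `fetch` are assumptions; match them to how your ingestion step actually stores Drive metadata.

```javascript
// Hypothetical sketch of the "Single-File Update" phase: drop every vector
// that came from one Google Drive file, so the re-embedded version can be
// upserted cleanly. Assumes points carry the Drive file ID in their payload
// under metadata.file_id (adjust the key to your ingestion setup).
const QDRANT_URL = "https://your-qdrant-host:6333"; // same value as QDRANTURL
const COLLECTION = "my_collection";                 // same value as COLLECTION
const fileId = "your-google-drive-file-id";

const res = await fetch(`${QDRANT_URL}/collections/${COLLECTION}/points/delete`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "api-key": process.env.QDRANT_API_KEY, // Qdrant API key credential
  },
  body: JSON.stringify({
    filter: {
      must: [{ key: "metadata.file_id", match: { value: fileId } }],
    },
  }),
});
console.log(await res.json()); // status "ok" once the old vectors are removed
```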
by Alex Gurinovich
AI-powered Automated Crypto Insights with Chart-img and BrowserAI

Tired of paying for costly crypto updates? Or reading long analyses? This n8n workflow automates the delivery of personalized crypto insights, using Chart-img to capture coin graphs of BTC, ETH, SOL, and XRP as base64 images, and BrowserAI for web scraping and gathering news and articles. This setup ensures thorough market coverage and timely updates, without breaking the bank.

Overview

Designed for crypto enthusiasts, traders, and analysts, this workflow automates the process of collecting and distributing valuable crypto information. It's perfect for anyone who wants consistent, accurate updates delivered conveniently.

Setup Instructions

Pre-conditions

- Chart-img Account: Register for a Chart-img account and obtain an API key here.
- BrowserAI Account: Sign up for BrowserAI and get your API key from your BrowserAI dashboard.

Step-by-Step Setup

🗓️ Schedule and Date Calculation
- Triggers twice daily at 8 AM and 8 PM to ensure up-to-date insights; the schedule can be changed to your liking.
- Calculates yesterday's date dynamically for accurate data retrieval (see the sketch at the end of this description).

📊 Coin Graph Capture with Chart-img
- Uses the Chart-img API to capture 24-hour graphs for BTC, ETH, SOL, and XRP.
- Converts images to base64 strings for easy integration into the analysis.

🌐 Web Scraping with BrowserAI
- Creates tasks in BrowserAI to gather the latest crypto news and insights.
- Automates data extraction for comprehensive market analysis.

⌛ Monitor and Complete Tasks
- Incorporates status checks to ensure BrowserAI tasks complete successfully before proceeding.

✏️ Analyze and Synthesize Information
- Combines graph data with web-scraped insights for an enriched summary.
- Uses AI to generate simple, informative descriptions under 60 words so you aren't overloaded.

📩 Deliver Insights Efficiently
- Sends the compiled analysis to your Telegram, with easy options to switch to WhatsApp, email, or any other communication channel.

Customization Guidance

- **Content Personalization**: Customize the datasets and keywords for tailored updates.
- **Modify Schedule**: Adjust triggering times to your needs using n8n's scheduling options.

This workflow delivers a seamless and cost-effective approach to staying informed about crypto market trends, combining the latest technology for superior insights.

**WARNING:** This template is intended for personal use only and does not constitute financial advice. Any actions taken using this tool are solely the user's responsibility.
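As a reference for the date-calculation step, here is a minimal n8n Code-node sketch, assuming ISO `YYYY-MM-DD` strings downstream; the output field names are illustrative, not the template's actual ones.

```javascript
// Sketch of the "yesterday's date" Code node. Output field names are
// illustrative; point your Chart-img/BrowserAI nodes at whatever you choose.
const now = new Date();
const yesterday = new Date(now);
yesterday.setDate(now.getDate() - 1);

const toIsoDay = (d) => d.toISOString().slice(0, 10); // YYYY-MM-DD

return [
  {
    json: {
      runDate: toIsoDay(now),
      targetDate: toIsoDay(yesterday), // the day the news and graphs should cover
    },
  },
];
```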
by Boriwat Chanruang
Who is this for?

This workflow is designed for:
- **Content creators**, artists, or hobbyists looking to experiment with AI-generated art.
- **Small business owners** or **marketers** using LEGO-style designs for branding or promotions.
- **Developers** or **AI enthusiasts** wanting to automate image transformations through messaging platforms like LINE.

What problem is this workflow solving?

- Simplifies the process of creating custom AI-generated LEGO-style images.
- Automates the manual effort of transforming user-uploaded images into AI-generated artwork.
- Bridges the gap between messaging platforms (LINE) and advanced AI tools (DALL·E).
- Provides a seamless system for users to upload an image and receive an AI-transformed output without technical expertise.

What this workflow does

1. Image Upload via LINE: Users send an image to the LINE chatbot.
2. AI-Powered Prompt Creation: GPT generates a prompt describing the uploaded image for LEGO-style conversion.
3. AI Image Generation: DALL·E 3 processes the prompt and creates a LEGO-style isometric image.
4. Image Delivery: The generated image is returned to the user in LINE (see the sketch at the end of this description).

Setup

Prerequisites
- **LINE Developer Account** with API credentials.
- Access to the OpenAI API with DALL·E and GPT-4 capabilities.
- A configured n8n instance to run this workflow.

Steps
1. Environment Setup: Add your LINE API token and OpenAI credentials as environment variables (LINE_API_TOKEN, OPENAI_API_KEY) in n8n.
2. Configure LINE Webhook: Point the LINE webhook to your n8n instance.
3. Connect OpenAI: Set up OpenAI API credentials in the workflow nodes for GPT-4 and DALL·E.
4. Test Workflow: Upload a sample image in LINE and ensure it returns the LEGO-style AI image.

How to customize this workflow to your needs

- **Localization**: Modify response messages in LINE to fit your audience's language and tone.
- **Integration**: Add nodes to send notifications through other platforms like Slack or email.
- **Image Style**: Replace the LEGO-style image prompt with other artistic styles or themes.

Advanced Use Cases

- Art Contests: Users upload images and receive AI-enhanced outputs for community voting or branding.
- Marketing Campaigns: Quickly generate creative visual content for ads and promotions using customer-submitted photos.
- Education: Use the workflow to teach students about AI-generated art and automation through a hands-on approach.

Tips for Optimization

- **Error Handling**: Add fallback nodes to handle invalid images or API errors gracefully.
- **Logging**: Implement a logging mechanism to track requests and outputs for debugging and analytics.
- **Scalability**: Use queue-based systems or cloud scaling to handle large volumes of image requests.

Enhancements

- Add sticky notes in n8n to provide inline instructions for configuring each node.
- Create a tutorial video or documentation for first-time users to set up and customize the workflow.
- Include advanced filters to allow users to select from multiple styles beyond LEGO (e.g., pixel art, watercolor).

This workflow enables seamless interaction between messaging platforms and advanced AI capabilities, making it highly versatile for various creative and business applications.
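For the image-delivery step, a hedged sketch of the LINE reply call is shown below. The endpoint and message shape follow LINE's Messaging API; the image URL and reply token are placeholders, and LINE requires HTTPS URLs for both image fields.

```javascript
// Sketch of returning the DALL·E output to the user via LINE's reply API.
// replyToken comes from the incoming webhook event and expires quickly.
const generatedImageUrl = "https://example.com/lego-style.png"; // DALL·E 3 output (placeholder)

await fetch("https://api.line.me/v2/bot/message/reply", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.LINE_API_TOKEN}`,
  },
  body: JSON.stringify({
    replyToken: "reply-token-from-webhook-event",
    messages: [
      {
        type: "image",
        originalContentUrl: generatedImageUrl, // full-size image
        previewImageUrl: generatedImageUrl,    // thumbnail shown in chat
      },
    ],
  }),
});
```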
by lin@davoy.tech
This workflow template, "Personal Assistant to Note Messages and Extract Namecard Information", is designed to streamline the processing of incoming messages on the LINE messaging platform. It integrates with powerful tools like Microsoft Teams, Microsoft To Do, OneDrive, and OpenRouter.ai to handle tasks such as saving notes, extracting namecard information, and organizing images. Whether you're managing personal productivity or automating workflows for teams, this template offers a versatile and customizable solution. By leveraging this workflow, you can automate repetitive tasks, improve collaboration, and enhance efficiency in handling LINE messages.

Who Is This Template For?

- Professionals who want to save important messages, extract data from namecards, or organize images automatically.
- Teams looking to integrate LINE messages into tools like Microsoft Teams and Microsoft To Do for better collaboration.
- Developers seeking to build intelligent workflows that process text, images, and other inputs from LINE.
- Business owners who need to manage customer interactions, follow-ups, and task tracking efficiently.

What Problem Does This Workflow Solve?

Managing incoming messages on LINE can be time-consuming, especially when dealing with diverse input types like text, images, and namecards. This workflow solves that problem by:
- Automatically identifying and routing different message types (text, images, namecards) to appropriate actions.
- Extracting structured data from namecards and saving it for follow-up tasks.
- Uploading images to OneDrive and saving text messages to Microsoft Teams or Microsoft To Do for easy access.
- Sending real-time feedback to users via LINE to confirm that their messages have been processed.

What This Workflow Does

1. Receive Messages via LINE Webhook: The workflow is triggered whenever a user sends a message (text, image, or other types) to the LINE bot.
2. Display Loading Animation: A loading animation reassures the user that their request is being processed.
3. Route Input Types: A Switch node determines the type of input (a sketch of this routing logic follows this description):
   - Text starting with "T": Added as a task in Microsoft To Do.
   - Plain text: Saved in Microsoft Teams under a designated channel (e.g., "Notes").
   - Images: Identified as a namecard, handwritten note, or other content, then processed accordingly.
   - Unsupported formats trigger a polite response indicating the limitation.
4. Process Namecard Images: If the image is identified as a namecard, the workflow extracts structured data (e.g., name, email, phone number) using OpenRouter.ai and saves it to Microsoft To Do for follow-up tasks.
5. Save Images to OneDrive: Images are uploaded to OneDrive, renamed based on their unique message ID, and linked in Microsoft Teams for reference.
6. Send Feedback via LINE: The workflow replies to the user with confirmation messages, such as "[ Task Created ]" or "[ Message Saved ]".

Setup Guide

Pre-Requisites
- Access to the LINE Developers Console to configure your webhook and bot.
- Accounts for Microsoft Teams, Microsoft To Do, and OneDrive with API access.
- An OpenRouter.ai account with credentials to access models like GPT-4o.
- Basic knowledge of APIs, webhooks, and JSON formatting.

Step-by-Step Setup
1. Configure the LINE Webhook: Go to the LINE Developers Console and set up a webhook to receive incoming messages. Copy the Webhook URL from the Line Webhook node and paste it into the LINE Console. Remove any "test" configurations when moving to production.
2. Set Up Microsoft Integrations: Connect your Microsoft Teams, Microsoft To Do, and OneDrive accounts to the respective nodes in the workflow.
3. Set Up OpenRouter.ai: Create an account on OpenRouter.ai, obtain your API credentials, and connect them to the OpenRouter nodes in the workflow.
4. Test the Workflow: Simulate sending text, images, and namecards to the LINE bot to verify that all actions are processed correctly.

How to Customize This Workflow to Your Needs

- Add More Actions: Extend the workflow to handle additional input types or integrate with other tools.
- Enhance Image Processing: Use advanced OCR tools to improve text extraction from complex images.
- Customize Feedback Messages: Modify the reply format to include emojis, links, or other formatting options.
- Expand Use Cases: Adapt the workflow for specific industries, such as sales or customer support, by tailoring the actions to relevant tasks.

Why Use This Template?

- Versatile Automation: Handles multiple input types (text, images, namecards) with ease.
- Seamless Integration: Connects LINE messages to popular productivity tools like Microsoft Teams and To Do.
- Structured Data Extraction: Extracts and organizes data from namecards, saving time and effort.
- Real-Time Feedback: Keeps users informed about the status of their requests with instant notifications.
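Here is the routing decision from step 3 written out as plain logic: a minimal sketch assuming the webhook event shape LINE sends. The branch names are illustrative labels, while the "T" prefix convention comes from the template itself.

```javascript
// Sketch of what the Switch node decides for each incoming LINE event.
// Branch names are illustrative, not the template's actual node names.
function routeLineEvent(event) {
  const msg = event.message;
  if (msg.type === "text") {
    // Text starting with "T" becomes a Microsoft To Do task; anything
    // else is saved as a note in Microsoft Teams.
    return msg.text.startsWith("T") ? "todo-task" : "teams-note";
  }
  if (msg.type === "image") {
    return "image-classification"; // namecard vs. handwritten note vs. other
  }
  return "unsupported"; // triggers the polite limitation reply
}

// e.g. routeLineEvent({ message: { type: "text", text: "T buy milk" } }) -> "todo-task"
```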
by Dr. Firas
Convert YouTube videos to viral Shorts with Klap and auto-post with Blotato

> ⚠️ Disclaimer: This workflow uses Community Nodes and requires a self-hosted n8n instance.

Who is this for?

This workflow is perfect for content creators, YouTubers, marketing teams, and entrepreneurs who want to effortlessly convert long YouTube videos into short, viral-ready clips and publish them automatically on TikTok, Instagram, YouTube Shorts, and other platforms.

What problem is this workflow solving?

Manually creating short, engaging clips from YouTube videos takes hours:
- Selecting highlights
- Adding subtitles and effects
- Downloading and editing
- Posting individually on each platform

This workflow eliminates all of that:
- AI-powered Shorts generation with Klap
- Smart scheduling based on your posting calendar
- Full automation of uploads to multiple platforms

What this workflow does

From a simple YouTube link sent via Telegram, the workflow:
1. Extracts the YouTube URL and the number of Shorts requested (see the sketch at the end of this description)
2. Sends the video to Klap for AI-powered Shorts generation
3. Checks when the Shorts are ready
4. Schedules publication times based on your custom settings
5. Uploads the Shorts to Blotato
6. Auto-posts on TikTok, YouTube Shorts, Instagram, and more
7. Sends a confirmation recap to Telegram

Setup

1. Connect your Telegram bot to the trigger node
2. Add your Klap API key for video processing
3. Link your Google Sheets with your scheduling preferences
4. Add your Blotato API key and social platform IDs
5. Adjust the number of Shorts generated if needed
6. Modify the scheduling logic or time windows in the Google Sheets

How to customize this workflow to your needs

- Change AI video settings in the Klap API request
- Adjust time windows and frequency in the scheduling nodes
- Limit the workflow to specific platforms (e.g., TikTok only)
- Add a manual approval step before publishing
- Modify the Telegram recap message content

📄 Documentation: Notion Guide

Need help customizing? Contact me for consulting and support: Linkedin / Youtube
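As an illustration of the extraction step, here is a Code-node sketch for pulling the YouTube URL and the requested number of Shorts out of the Telegram message. The assumed message format ("<youtube-url> <count>") and the output field names are assumptions; adapt the parsing to your bot's conventions.

```javascript
// Sketch of extracting the YouTube URL and Shorts count from the Telegram
// message text, e.g. "https://youtu.be/abc123 3". Format is an assumption.
const text = $json.message.text;

const urlMatch = text.match(/https?:\/\/(?:www\.)?(?:youtube\.com|youtu\.be)\/\S+/);
const rest = urlMatch ? text.replace(urlMatch[0], "") : text;
const countMatch = rest.match(/\d+/);

return [
  {
    json: {
      videoUrl: urlMatch ? urlMatch[0] : null,
      shortsCount: countMatch ? Number(countMatch[0]) : 1, // default: 1 Short
    },
  },
];
```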
by Gleb D
This n8n workflow automates the enrichment of a company list by discovering and extracting each company's official LinkedIn URL, using Bright Data's search capabilities and Google Gemini AI for HTML parsing and result interpretation.

Who is this template for?

This workflow is ideal for sales, business development, and data research professionals who need to collect official LinkedIn company profiles for multiple organizations, starting from a list of company names in Google Sheets. It's especially useful for teams who want to automate sourcing LinkedIn URLs, enrich their prospect database, or validate company data at scale.

How it works

1. Manual Trigger: The workflow is started manually (useful for controlled batch runs and testing).
2. Read Company Names: Company names are loaded from a specified Google Sheets table.
3. Loop Over Each Company: Each company is processed one by one:
   - A custom Google Search URL is generated for each name (see the sketch at the end of this description).
   - A Bright Data Web Unlocker request is sent to fetch Google search results for "site:linkedin.com [company name]".
4. Parse the LinkedIn Profile URL Using AI: Google Gemini (or your specified LLM) analyzes the fetched search page and extracts the most likely official LinkedIn company profile.
5. Result Handling: If a profile is found, it's stored in the results. If not, an empty result is created, but you can add custom logic (notifications, retries, etc.).
6. Batch Data Enrichment: All found company URLs are bundled into a single request for further enrichment from a Bright Data dataset.
7. Export: The workflow appends the final, structured data for each company to another sheet in your Google Sheets file.

Setup instructions

1. Replace API Keys: Insert your Bright Data API key in these nodes:
   - Bright Data Web Request - Google Search for Company LinkedIn URL
   - HTTP Request - Post API call to Bright Data
   - Snapshot Progress
   - HTTP Request - Getting data from Bright Data
2. Connect Google Sheets: Set up your Google Sheets credentials and specify the sheet for reading input and writing output.
3. Customize the Output Structure: Adjust the Python code node (see the sticky note in the template) if you want to include additional or fewer fields in your output.
4. Adjust for Scale or Error Handling: You can modify the logic for "not found" results (e.g., to notify a Slack channel or retry failed companies).
5. Run the Workflow: Start manually, monitor the run, and check your Google Sheet for results.

Customization guidance

- Change Input/Output Sheets: Update the sheet names or columns if your source/target spreadsheet has a different structure.
- Use Another AI Model: Replace the Google Gemini node with another LLM node if preferred.
- Integrate Alerts: Add Slack or email nodes to notify your team when a LinkedIn profile is not found or when the process is complete.
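For the loop body, here is a minimal Code-node sketch of the search-URL generation; the `company_name` column name is an assumption based on a typical input sheet.

```javascript
// Sketch of building the Google search URL that Bright Data's Web Unlocker
// will fetch for each company. The input column name is an assumption.
const companyName = $json.company_name; // from the Google Sheets row

const query = `site:linkedin.com ${companyName}`;
const searchUrl = `https://www.google.com/search?q=${encodeURIComponent(query)}`;

return [{ json: { company_name: companyName, searchUrl } }];
```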
by Ezema Kingsley Chibuzo
🧠 What It Does

This n8n workflow turns your Telegram bot into a smart, multi-modal AI assistant that accepts text, documents, images, and audio messages, interprets them using OpenAI models, and responds instantly with context-aware answers. It integrates a Supabase vector database to store document embeddings and retrieve relevant information before sending a prompt to OpenAI, enabling a full RAG experience.

💡 Why This Workflow?

Most support bots can only handle basic text input. This workflow:
- Supports multiple input formats (voice, documents, images, text)
- Dynamically extracts and processes data from uploaded files
- Implements RAG by combining user input with relevant memory or vector-based context
- Delivers more accurate, relevant, and human-like AI responses

👤 Who It's For

- Businesses looking to automate support using Telegram
- Freelancers or solopreneurs offering AI chatbots for businesses
- Creators building AI-powered bots for real use cases: it's great for customer support knowledge, legal or policy documents, long FAQs, project documentation, and product information retrieval
- Devs or analysts exploring AI + multi-format input + vector memory

⚙️ How It Works

🗂️ Knowledge Base Setup
Run the "Add to Supabase Vector DB" workflow manually to upload a document from your Google Drive and embed it into your vector database. This powers the Telegram chatbot's ability to answer questions using your content.

🔁 Telegram Message Routing
- The Telegram Trigger captures the user message (text, image, voice, document).
- The Message Router routes input by type using a Switch node.
- Each type is handled separately:
  - Voice → recording transcribed to text (.ogg, .mp3)
  - Image → analyzed and described as text
  - Text → sent directly to the AI Agent (.txt)
  - Document → parsed (e.g., .docx to .txt) accordingly

📎 Document Type Routing
Before routing documents by type, the Supported Document File Types node first checks whether the file extension is allowed. If not, it exits early with an error message, preventing unnecessary processing (a sketch of this check follows this description). Supported documents are then routed using the Document Router node and converted to text for further processing.

Supported document file types: .jpg, .jpeg, .png, .webp, .pdf, .doc, .docx, .xls, .xlsx, .json, .xml

The text content is combined with stored memory and embedded knowledge using a RAG approach, enabling the AI to respond based on real uploaded data.

🧠 RAG via Supabase
- Uploaded documents are vectorized using OpenAI Embeddings.
- Embeddings are stored in Supabase with metadata.
- On new questions, the chatbot:
  1. Extracts the question intent
  2. Queries Supabase for semantically similar chunks
  3. Ranks retrieved chunks to find the most relevant match
  4. Injects them into the prompt for OpenAI
- OpenAI generates a grounded response based on actual document content.
- The response is sent to the Telegram user with content awareness.

🛠 How to Set It Up

1. Open n8n or your local/self-hosted instance.
2. Import the `.json` workflow file.
3. Set up these credentials:
   - Google Drive API key
   - Telegram API (Bot Token) Guide
   - OpenAI API
   - Supabase API key + environment
   - ConvertAPI API key
   - Postgres API key
   - Cohere API key
4. Add a custom AI Agent prompt suited to your business that reflects your domain, tone, and purpose. This is very important: without it, your agent won't know how best to respond.
5. Activate the workflow.
6. Start testing by sending a message or document to your Telegram bot.
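Below is a minimal sketch of the extension gate described under Document Type Routing, assuming the Telegram document payload; the `file_name` field follows Telegram's Bot API, while the reply text is illustrative.

```javascript
// Sketch of the "Supported Document File Types" check: bail out before any
// download/conversion work if the extension isn't on the allow-list.
const SUPPORTED = new Set([
  "jpg", "jpeg", "png", "webp", "pdf", "doc",
  "docx", "xls", "xlsx", "json", "xml",
]);

const fileName = $json.message?.document?.file_name ?? "";
const ext = fileName.split(".").pop()?.toLowerCase() ?? "";

if (!SUPPORTED.has(ext)) {
  // Early exit: this branch replies with the unsupported-format message.
  return [{ json: { supported: false, reply: `Sorry, ".${ext}" files aren't supported.` } }];
}
return [{ json: { supported: true, ext } }];
```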
by Marcelo Abreu
What this workflow does

- Runs automatically every Monday morning at 8 AM
- Collects your Google Search Console data for a given URL from the last month and the month before that (the date range is configurable)
- Formats the data, aggregating it by date, query, page, device, and country
- Generates AI-driven analysis and insights on your results, providing actionable recommendations
- Renders the report as a visually appealing PDF with charts and tables
- Sends the report via Slack (you can also add email or WhatsApp)

A sample of the first page of the report:

Setup Guide

1. Create an account on pdforge and use the pre-made Meta Ads template.
2. Connect Google OAuth2 (guide on the template), OpenAI, and Slack to n8n.
3. Set your site URL and date range (optional); a sketch of the date-range logic follows this description.
4. Customize the scheduling date and time.

Requirements

- Google OAuth2 (via Google Search Console): Documentation
- pdforge access: Create an account
- AI API access (e.g., via OpenAI, Anthropic, Google, or Ollama)
- Slack access (via OAuth2): Documentation

Feel free to contact me via Linkedin if you have any questions! 👋🏻
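For reference, here is a sketch of the two comparison windows shaped as Search Console Search Analytics query bodies. The dimensions mirror the aggregation above; the `rowLimit` value and the output field names are assumptions.

```javascript
// Sketch of the date-range step: last full month vs. the month before,
// shaped like Search Console searchAnalytics/query request bodies.
const now = new Date();
const firstOfMonth = (offset) =>
  new Date(Date.UTC(now.getUTCFullYear(), now.getUTCMonth() - offset, 1));
const lastOfMonth = (offset) =>
  new Date(Date.UTC(now.getUTCFullYear(), now.getUTCMonth() - offset + 1, 0));
const fmt = (d) => d.toISOString().slice(0, 10); // YYYY-MM-DD

const queryBody = (offset) => ({
  startDate: fmt(firstOfMonth(offset)),
  endDate: fmt(lastOfMonth(offset)),
  dimensions: ["date", "query", "page", "device", "country"],
  rowLimit: 25000, // assumption; page through rows if you need more
});

return [{ json: { lastMonth: queryBody(1), monthBefore: queryBody(2) } }];
```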
by Andrey
⚠️ DISCLAIMER: This workflow uses the HDW LinkedIn community node, which is only available on self-hosted n8n instances. It will not work on n8n.cloud.

Overview

This workflow automates the entire LinkedIn lead generation process, from finding prospects that match your Ideal Customer Profile (ICP) to sending personalized messages. It uses AI to analyze lead data, score potential clients, and prioritize your outreach efforts.

Key Features

- **AI-Driven Lead Generation**: Convert ICP descriptions into LinkedIn search parameters
- **Comprehensive Data Enrichment**: Analyze company websites, LinkedIn posts, and news
- **Intelligent Lead Scoring**: Prioritize leads based on AI analysis of intent signals
- **Automated Outreach**: Connect with prospects and send personalized messages

Requirements

- Self-hosted n8n instance with the HDW LinkedIn community node installed
- OpenAI API access (for GPT-4o)
- Google Sheets access
- HDW API key (available at app.horizondatawave.ai)
- LinkedIn account

Setup Instructions

1. Install Required Nodes
   - Ensure the HDW LinkedIn community node is installed on your n8n instance
   - Command: npm install n8n-nodes-hdw (or use this instruction)
2. Configure Credentials
   - **OpenAI**: Add your OpenAI API key
   - **Google Sheets**: Set up Google account access
   - **HDW LinkedIn**: Configure your API key from horizondatawave.ai
3. Set Up a Google Sheet
   - Create a new Google Sheet with the following columns (or copy the template): Name, URN, URL, Headline, Location, Current company, Industry, etc.
   - The workflow will populate these columns automatically
4. Customize Your ICP
   - Use chat to provide the AI Agent with your Ideal Customer Profile
   - Example: "Target marketing directors at SaaS companies with 50-200 employees"
5. Adjust the Scoring Criteria
   - Modify the lead scoring prompt in the "Company Score Analysis" node to match your specific product/service
   - Tune the evaluation criteria based on your unique business needs
6. Configure Message Templates
   - Update the HDW LinkedIn Send Message node with your custom message

How It Works

1. ICP Translation: AI converts your ICP description into LinkedIn search parameters
2. Lead Discovery: The workflow searches LinkedIn using these parameters
3. Data Collection: Results are saved to Google Sheets
4. Enrichment: The system collects additional data about each lead: company website analysis, the lead's LinkedIn posts, the company's LinkedIn posts, and recent company news
5. Intent Analysis: AI analyzes all data to identify buying signals
6. Lead Scoring: Leads are scored on a 1-10 scale based on likelihood of interest (a sketch of the prioritization step follows this description)
7. Connection Requests: Top-scoring leads receive connection requests
8. Follow-Up: When connections are accepted, automated messages are sent

Customization

- **Search Parameters**: Adjust the AI Agent prompt to refine your target audience
- **Scoring Criteria**: Modify scoring prompts to highlight indicators relevant to your product
- **Message Content**: Update message templates for personalized outreach
- **Schedule**: Configure when connection requests and messages are sent

Rate Limits & Best Practices

- LinkedIn has connection request limits (approximately 100-200 per week)
- The workflow includes safeguards to avoid exceeding these limits
- Consider spacing your outreach for better response rates

Note: Always use automation tools responsibly and in accordance with LinkedIn's terms of service.
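To make the scoring step concrete, here is a sketch of the prioritization filter that could sit after "Company Score Analysis"; the `lead_score` field name and the threshold of 8 are assumptions to tune against your own criteria.

```javascript
// Sketch of keeping only high-scoring leads for outreach. Field name and
// threshold are assumptions; align them with your scoring prompt's output.
const MIN_SCORE = 8; // only contact leads scored 8-10

const qualified = $input.all().filter((item) => {
  const score = Number(item.json.lead_score); // 1-10 from the AI analysis
  return Number.isFinite(score) && score >= MIN_SCORE;
});

// Highest-priority leads first.
return qualified.sort(
  (a, b) => Number(b.json.lead_score) - Number(a.json.lead_score),
);
```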
by Vincent LE ROUX
Sync Dartagnan Email Templates to Braze

Why Use This Workflow

Email marketing demands consistency across platforms. This workflow automatically synchronizes your email templates from Dartagnan to Braze, eliminating manual transfers and ensuring brand consistency. Perfect for marketing teams who need to maintain a unified email experience while leveraging the strengths of both platforms.

Business Benefits

- **Save Time**: Eliminate hours of manual template copying and formatting between platforms
- **Maintain Consistency**: Ensure your email templates look identical across Dartagnan and Braze
- **Reduce Errors**: Automated synchronization prevents human error in template transfers
- **Streamline Workflows**: Create once in Dartagnan, use everywhere through Braze's distribution power
- **Preserve Image Assets**: Keep images hosted on Dartagnan while properly formatting them for Braze

How It Works

This workflow keeps your Braze templates in sync with Dartagnan. It intelligently handles:

1. Template Updates: Automatically updates existing templates in Braze when they are modified in Dartagnan
2. New Template Creation: Creates new templates in Braze when they are added to Dartagnan
3. Image URL Transformation: Properly embeds and formats image URLs to meet Braze requirements while keeping assets on Dartagnan infrastructure

Technical Implementation

The workflow uses a scheduled trigger to check for template changes and then processes them in batches:

1. Authentication: Securely connects to both the Dartagnan and Braze APIs
2. Template Retrieval: Fetches current templates from Dartagnan
3. Comparison Logic: Determines which templates need updating or creation in Braze
4. Content Transformation: Processes HTML content and image URLs to ensure compatibility
5. API Integration: Pushes changes to Braze through their Content Blocks API (see the sketch at the end of this description)

Customization Options

- **Sync Frequency**: Adjust the schedule to run hourly, daily, or on any custom schedule
- **Template Filtering**: Add conditions to sync only specific templates based on tags or categories
- **Error Handling**: Configure notification emails when synchronization issues occur
- **Logging**: Enable detailed logs for troubleshooting and auditing

Setup Requirements

Setting up this workflow takes approximately 20-30 minutes and requires:

Dartagnan Requirements
- API Client ID
- API Client Secret
- Template access permissions

Braze Requirements
- Braze Instance URL
- API Key with content block permissions
- Appropriate rate limits configured

Common Use Cases

- **Email Campaign Coordination**: Maintain consistent templates across platforms for multi-channel campaigns
- **Agency Work**: Design in Dartagnan, deploy through the client's Braze instance
- **Rebranding Projects**: Update templates once and propagate changes automatically
- **International Marketing**: Maintain language variants across platforms with automatic synchronization

Get Started

Once installed, configure your API credentials, set your desired synchronization schedule, and let the workflow handle the rest. The initial sync will create all your templates in Braze, with subsequent runs only updating what's changed.
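Here is a hedged sketch of the final API-integration step, using Braze's Content Blocks update endpoint; the instance URL, block ID, and HTML are placeholders, and the content transformation itself is elided.

```javascript
// Sketch of pushing one transformed Dartagnan template into Braze as a
// Content Block. Instance URL, block ID, and HTML are placeholders.
const BRAZE_URL = "https://rest.iad-01.braze.com"; // your Braze instance URL
const transformedHtml = "<html>...template with rewritten image URLs...</html>";

const res = await fetch(`${BRAZE_URL}/content_blocks/update`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.BRAZE_API_KEY}`, // key with content-block permissions
  },
  body: JSON.stringify({
    content_block_id: "existing-block-id", // from the comparison step
    content: transformedHtml,
  }),
});
console.log(await res.json()); // inspect for success or error details
```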
by Julian Kaiser
Startup Funding Research Automation with Claude, Perplexity AI, and Airtable

How it works

This intelligent workflow automatically discovers and analyzes recently funded startups by:
1. Monitoring multiple news sources (TechCrunch and VentureBeat) for funding announcements
2. Using AI to extract key funding details (company name, amount raised, investors)
3. Conducting automated deep research on each company through Perplexity deep research or Jina deep search
4. Organizing all findings into a structured Airtable database for easy access and analysis

Set up steps (10-15 minutes)

1. Connect your news feed sources (TechCrunch and VentureBeat). These were easy to scrape, and this data can be expensive; the list could be extended.
2. Set up your AI service credentials (Claude, and Perplexity or Jina, which has a generous free tier).
3. Connect your Airtable account and create a base with the appropriate fields (it can be imported from my base), or see the structure below.

Airtable Base Structure

Funding Round Base

| Field Name | Data Type | Description |
|------------|-----------|-------------|
| website_url | String | URL of the company website |
| company_name | String | Name of the company |
| funding_round | String | The funding stage or round (e.g., Series A, Seed, etc.) |
| funding_amount | Number | The amount of funding received |
| lead_investor | String | The primary investor leading the funding round |
| market | String | The market or industry sector the company operates in |
| participating_investors | String | List of other investors participating in the funding round |
| press_release_url | String | URL to the press release about the funding |
| evaluation | Number | The company's valuation |

Company Deep Research Base Structure

| Field Name | Data Type | Description |
|------------|-----------|-------------|
| website_url | String | URL of the company website |
| company_name | String | Name of the company |
| funding_round | String | The funding stage or round (e.g., Series A, Seed, etc.) |
| funding_amount | Number | The amount of funding received |
| currency | String | Currency of the funding amount |
| announcement_date | String | Date when the funding was announced |
| lead_investor | String | The primary investor leading the funding round |
| participating_investors | String | List of other investors participating in the funding round |
| industry | String | The industry sectors the company operates in |
| company_description | String | Description of the company's business |
| hq_location | String | Company headquarters location |
| founding_year | Number | Year the company was founded |
| founder_names | String | Names of the company founders |
| ceo_name | String | Name of the company CEO |
| employee_count | Number | Number of employees at the company |
| total_funding | Number | Total funding amount received to date |
| total_funding_currency | String | Currency of total funding |
| funding_purpose | String | Purpose or use of the funding |
| business_model | String | Company's business model |
| valuation | Object | Company valuation information |
| previous_rounds | Object | Information about previous funding rounds |
| source_urls | String | Source URLs for the funding information |
| original_report | String | Original report text about the funding |
| market | String | The market the company operates in |
| press_release_url | String | URL to the press release about the funding |
| evaluation | Number | The company's valuation |

Notes

I found that by using Perplexity via OpenRouter, we lose access to the sources, as they are not stored in the same location as the report itself, so I opted to use the Perplexity API via the HTTP node (a sketch follows below). For using Perplexity and/or Jina, you have to configure header auth as described in Header Auth - n8n Docs.

What you can learn

- How to scrape data using sitemaps
- How to extract structured data from unstructured text
- How to execute parts of the workflow as a subworkflow
- How to use deep research in a practical scenario
- How to define more complex JSON schemas
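Here is a sketch of that direct Perplexity call as the HTTP node would make it. The model name and prompt are illustrative and may need updating; the `citations` array is exactly what gets lost when going through OpenRouter.

```javascript
// Sketch of the deep-research request made directly against the Perplexity
// API so the source URLs stay attached to the answer. Model name and prompt
// are illustrative.
const companyName = "Acme AI"; // placeholder; taken from the extracted funding data

const res = await fetch("https://api.perplexity.ai/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.PERPLEXITY_API_KEY}`, // header auth
  },
  body: JSON.stringify({
    model: "sonar-deep-research",
    messages: [
      {
        role: "user",
        content: `Research ${companyName}: founders, HQ location, employee count, total funding, business model.`,
      },
    ],
  }),
});

const data = await res.json();
const report = data.choices[0].message.content; // maps to original_report
const sources = data.citations;                 // maps to source_urls
console.log(report, sources);
```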
by ist00dent
This n8n template lets you automatically pull market data for the top cryptocurrencies from CoinGecko every hour, calculate custom volatility and market-health metrics, classify each coin's price action into buy/sell/hold/neutral signals with risk ratings, and expose both individual analyses and a portfolio summary via a webhook. It's perfect for crypto analysts, DeFi builders, or portfolio managers who want on-demand insights without writing a single line of backend code.

🔧 How it works

1. A Schedule Trigger fires every hour (or whatever interval you choose).
2. An HTTP Request (CoinGecko) fetches the top 10 coins by market cap, including 24 h, 7 d, and 30 d price change percentages.
3. Split In Batches ensures each coin is processed sequentially.
4. A Function node (Calculate Market Metrics) computes:
   - A weighted volatility score
   - Market-cap-to-volume ratio
   - Price-to-ATH ratio
   - Composite market score
5. IF & Switch nodes categorize each coin's 24 h price action (up >5%, down >5%, high volatility, or stable) and append:
   - signal (BUY/SELL/HOLD/NEUTRAL)
   - riskRating (High/Medium/Low/Unknown)
   - recommendation & investmentStrategy guidance
6. NoOp & Merge nodes consolidate each branch back into a single data stream.
7. A Function node (Generate Portfolio Summary) aggregates all analyses into:
   - A Markdown portfolioSummary
   - Counts of buy/sell/hold/neutral signals
   - Risk distribution
8. The Webhook Response returns the full JSON payload with individual analyses and the summary for downstream consumers.

👤 Who is it for?

This workflow is ideal for:
- Crypto researchers and analysts who need scheduled market insights
- DeFi and trading bot developers looking to automate signal generation
- Portfolio managers seeking a no-code overview of top assets
- Automation engineers exploring API integration and data enrichment

📑 Data Structure

When you trigger the webhook, you'll receive a JSON object containing:
- individualAnalyses: Array of { coin, symbol, currentPrice, priceChanges, marketMetrics, signal, riskRating, recommendation }
- portfolioSummary: Markdown report summarizing signals, risk distribution, and top opportunity
- marketSignals: Counts of each signal type
- riskDistribution: Counts of each risk rating
- timestamp: ISO string of the analysis time

⚙️ Setup Instructions

1. Import: In the n8n Editor, click "Import from JSON" and paste this workflow JSON.
2. Configure Schedule: Double-click the Schedule Trigger and set your desired interval (default: every hour).
3. Webhook Path: Open the Webhook node, choose a unique path (e.g., /crypto-analysis), and select "POST".
4. Activate: Save and activate the workflow.
5. Test: Open the webhook URL in another tab, or use cURL: curl -X POST https://<your-n8n-host>/webhook/<path>. You'll get back a JSON payload with both portfolioSummary and individualAnalyses.

📝 Tips

- Rate-Limit Handling: If CoinGecko returns 429, insert a Delay node (e.g., 500 ms) after the HTTP Request.
- Batch Size: The default is 1 coin at a time; you can bump it up to parallelize.
- Customization: Tweak the volatility weightings or add new metrics directly in the "Calculate Market Metrics" Function node; a sketch of the request and metric follows this description.
- Extension: Swap CoinGecko for another API by updating the HTTP Request URL and field mappings.
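To illustrate the fetch and the volatility metric together, here is a compact sketch; the endpoint and query parameters are CoinGecko's public /coins/markets API, while the weights are assumptions you'd tune in the Function node.

```javascript
// Sketch of the CoinGecko fetch plus a weighted volatility score. The
// weights (0.5 / 0.3 / 0.2) are assumptions; tune them to taste.
const url =
  "https://api.coingecko.com/api/v3/coins/markets" +
  "?vs_currency=usd&order=market_cap_desc&per_page=10&page=1" +
  "&price_change_percentage=24h,7d,30d";

const coins = await (await fetch(url)).json();

const analyses = coins.map((c) => {
  // Recent moves count more than older ones.
  const volatility =
    0.5 * Math.abs(c.price_change_percentage_24h_in_currency ?? 0) +
    0.3 * Math.abs(c.price_change_percentage_7d_in_currency ?? 0) +
    0.2 * Math.abs(c.price_change_percentage_30d_in_currency ?? 0);
  return {
    coin: c.name,
    symbol: c.symbol,
    currentPrice: c.current_price,
    volatility: Number(volatility.toFixed(2)),
    mcapToVolumeRatio: c.total_volume ? c.market_cap / c.total_volume : null,
  };
});

console.log(analyses); // feeds the signal classification (IF & Switch nodes)
```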