by Yaron Been
Automated pipeline to collect and analyze investor data from Crunchbase, tracking investment patterns, funding history, and portfolio companies for market analysis and lead generation.

**What It Does**
- **Investor Profiling**: Collects comprehensive data on investors and VC firms
- **Investment Pattern Analysis**: Tracks funding history and investment preferences
- **Portfolio Monitoring**: Keeps tabs on investor portfolios and new investments
- **Data Enrichment**: Enhances raw data with additional context and metrics

**Perfect For**
- Startup founders seeking investors
- Market research analysts
- Investment professionals
- Business development teams
- Competitive intelligence

**Key Benefits**
- Comprehensive investor profiles
- Real-time investment tracking
- Market trend analysis
- Data-driven investment decisions
- Time-saving automation

**What You Need**
- Crunchbase API access
- n8n instance
- Storage solution (database or spreadsheet)

**Data Points Collected**
- Investor/firm details
- Investment history
- Portfolio companies
- Funding rounds participated in
- Investment focus areas
- Contact information (when available)

**Setup & Support**
Deploy in 30 minutes with our step-by-step configuration guide. Watch the tutorial or reach out for expert support and direct help.

Transform your investor research with automated data collection and analysis. Spend less time gathering data and more time making strategic decisions.
by Yulia
This n8n workflow demonstrates how to create an agent using LangChain and SQLite. The agent can understand natural language queries and interact with a SQLite database to provide accurate answers.

**Setup**
Run the top part of the workflow once. It downloads the example SQLite database, extracts it from the ZIP archive, and saves it locally (chinook.db).

**Chatting with Your Data**
1. Send a message in the chat window.
2. The locally saved SQLite database loads automatically.
3. The user's chat input is combined with the binary data.
4. The LangChain Agent node receives both and begins to work.

The AI Agent processes the user's message, performs the necessary SQL queries, and generates a response based on the database information.

**Example Queries**
Try these sample queries to see the AI Agent in action:
- "Please describe the database" - Get a high-level overview of the database structure; only one or two queries are needed.
- "What are the revenues by genre?" - Retrieve revenue information grouped by genre; the LangChain agent iterates several times before producing the answer (the sketch below reproduces this query directly).

The AI Agent stores the final answer in its memory, allowing for context-aware conversations.

Read the full article: https://blog.n8n.io/ai-agents/
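If you want to sanity-check the agent's answers outside n8n, a query like the following reproduces the "revenues by genre" result directly against chinook.db. This is a minimal sketch assuming Node.js with the better-sqlite3 package installed; table and column names follow the standard Chinook schema.

```typescript
// Standalone check of the kind of SQL the agent generates for
// "What are the revenues by genre?" against chinook.db.
import Database from "better-sqlite3";

const db = new Database("chinook.db", { readonly: true });

const rows = db.prepare(`
  SELECT g.Name AS genre,
         ROUND(SUM(il.UnitPrice * il.Quantity), 2) AS revenue
  FROM InvoiceLine il
  JOIN Track t ON t.TrackId = il.TrackId
  JOIN Genre g ON g.GenreId = t.GenreId
  GROUP BY g.Name
  ORDER BY revenue DESC
`).all();

console.table(rows); // e.g. Rock, Latin, Metal near the top
db.close();
```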
by Jimleuk
This n8n workflow demonstrates how we can use multimodal LLMs to parse and extract data from PDF documents in n8n. In this particular scenario, we're passing a candidate's CV/resume to an AI which filters out unqualified applications. However, this sneaky candidate has added a hidden prompt to bypass our bot! Whatever will we do? No fret - using AI vision is one approach to solve this problem... read on!

**How it works**
- Our candidate's CV/resume is a PDF downloaded via Google Drive for this demonstration.
- The PDF is then converted into a PNG image using a tool called Stirling PDF. Since the hidden prompt has a white font color, it is invisible in the converted image.
- The image is then forwarded to a Basic LLM node and processed by our multimodal model - in this example, Google's Gemini 1.5 Pro.
- In the Basic LLM node, we set a User Message of type Binary. This allows us to send the image file directly in our request.
- The LLM is now immune to the hidden prompt, and its response is as expected. A sketch of this step follows below.

The example CV/resume with the hidden prompt can be found here: https://drive.google.com/file/d/1MORAdeev6cMcTJBV2EYALAwll8gCDRav/view?usp=sharing

**Requirements**
- Google Gemini API key. Alternatively, GPT-4 will also work for this use case.
- Stirling PDF or another service that can convert PDFs into images. Note: for data privacy, this example uses a public API; it is recommended that you self-host and use a private instance of Stirling PDF instead.

**Customising the workflow**
- Swap out the manual trigger for another trigger, such as a webhook, to integrate into your existing services.
- This example demonstrates a validation use case, i.e. "does the candidate look qualified?". You could additionally extract data points instead, such as years of experience, previous companies, etc.
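For illustration, here is roughly what the image step looks like outside n8n, mirroring what the Basic LLM node does with a Binary user message. This is a hedged sketch assuming the Gemini REST API's inline-data format; cv.png stands in for the Stirling PDF output, and GEMINI_API_KEY is your own key.

```typescript
// Send the converted PNG to Gemini 1.5 Pro with the screening question.
import { readFileSync } from "node:fs";

const image = readFileSync("cv.png").toString("base64");

const res = await fetch(
  `https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro:generateContent?key=${process.env.GEMINI_API_KEY}`,
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      contents: [{
        parts: [
          { text: "Does this candidate look qualified? Answer YES or NO with a one-line reason." },
          { inline_data: { mime_type: "image/png", data: image } },
        ],
      }],
    }),
  },
);

const data = await res.json();
// The white-on-white prompt injection never survives rasterization,
// so the model only "sees" the legitimate CV content.
console.log(data.candidates?.[0]?.content?.parts?.[0]?.text);
```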
by Pavel Duchovny
**Who is this for?**
This workflow is designed for:
- Database administrators and developers working with MongoDB
- Content managers handling movie databases
- Organizations looking to implement AI-powered search and recommendation systems
- Developers interested in combining LangChain, OpenAI, and MongoDB capabilities

**What problem does this workflow solve?**
Traditional database queries can be complex and require specific MongoDB syntax knowledge. This workflow addresses:
- The complexity of writing MongoDB aggregation pipelines
- The need for natural language interaction with movie databases
- The challenge of maintaining user preferences and favorites
- The gap between AI language models and database operations

**What this workflow does**
This workflow creates an intelligent agent that:
- Accepts natural language queries about movies
- Translates user requests into MongoDB aggregation pipelines (see the sketch below)
- Queries a movie database containing detailed information including plot summaries, genre classifications, cast and director information, runtime and release dates, and ratings and awards
- Provides contextual responses using OpenAI's language model
- Allows users to save favorite movies to the database
- Maintains conversation context using a window buffer memory

**Setup**
1. Required credentials:
   - OpenAI API credentials
   - MongoDB connection details
2. Node configuration:
   - Configure the MongoDB connection in the MongoDBAggregate node
   - Set up the OpenAI Chat Model with your API key
   - Ensure the webhook trigger is properly configured for receiving chat messages
3. Database requirements:
   - A MongoDB collection named "movies" with the specified document structure
   - Proper indexes for efficient querying
   - Appropriate user permissions for read/write operations

**How to customize this workflow**
- Modify the document structure: update the tool description in the MongoDBAggregate node to match your collection schema, and adjust the aggregation pipeline templates for your specific use case.
- Enhance the AI agent: customize the prompt in the "AI Agent - Movie Recommendation" node, modify the window buffer memory size based on your context needs, and add additional tools for more functionality.
- Extend functionality: add more MongoDB operations beyond aggregation, implement additional workflows for different types of queries, create custom error handling and validation, and add user authentication and rate limiting.
- Integration options: connect to external APIs for additional movie data, add webhook endpoints for different platforms, implement caching mechanisms for frequent queries, and add data transformation nodes for specific output formats.

This workflow serves as a foundation that can be adapted to various use cases beyond movie recommendations, such as e-commerce product search, content management systems, or any scenario requiring intelligent database interaction.
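To make the agent's job concrete, here is an illustrative sketch of the kind of aggregation pipeline it generates for a request like "top-rated sci-fi movies of the 1990s". It assumes Node.js with the official mongodb driver and a "movies" collection shaped like MongoDB's sample_mflix dataset; adapt the field names to your own schema.

```typescript
import { MongoClient } from "mongodb";

const client = new MongoClient(process.env.MONGODB_URI ?? "mongodb://localhost:27017");
await client.connect();

const pipeline = [
  { $match: { genres: "Sci-Fi", year: { $gte: 1990, $lt: 2000 } } },
  { $sort: { "imdb.rating": -1 } },
  { $limit: 5 },
  { $project: { _id: 0, title: 1, year: 1, "imdb.rating": 1, plot: 1 } },
];

const movies = await client.db("sample_mflix")
  .collection("movies")
  .aggregate(pipeline)
  .toArray();

console.log(movies);
await client.close();
```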
by Yaron Been
Automated pipeline that exports technology stack data from BuiltWith to Google Sheets for analysis, reporting, and team collaboration.

**What It Does**
- Extracts technology stack data
- Organizes data in Google Sheets
- Updates automatically on schedule
- Supports multiple company tracking
- Enables easy data sharing

**Perfect For**
- Sales teams
- Market researchers
- Business analysts
- Competitive intelligence
- Technology consultants

**Key Benefits**
- Centralized technology database
- Easy data analysis
- Team collaboration
- Historical tracking
- Custom reporting

**What You Need**
- BuiltWith API access
- Google account
- n8n instance
- Google Sheets setup

**Data Exported**
- Company information
- Web technologies
- Hosting details
- Analytics tools
- Marketing technologies
- Contact information

**Setup & Support**
Start exporting in 15 minutes with our step-by-step guide. Watch the tutorial or reach out for expert support and direct help.

Transform raw technology data into actionable business intelligence with automated exports.
by Marketing Canopy
**Automate Sports Betting Data with TheOddsAPI**

This workflow enables you to create and update a table using TheOddsAPI for sports betting data. It automatically pulls upcoming ice hockey games at the start of the day and updates the table with results at the end of the day. You can modify it to retrieve odds and game data for any sport. This setup is particularly useful for sports betting applications, such as tracking the results of a predictive model. It leverages scheduled triggers to activate HTTP requests, which then create or update fields in Airtable by matching on the game ID.

**Prerequisites**
Before implementing this workflow, ensure you have the following:
- **TheOddsAPI account & API key**: Sign up at TheOddsAPI and obtain an API key. Ensure you have the correct API permissions to access sports odds and results.
- **Airtable account & API key**: Create an account at Airtable and set up a database. Obtain an API key from the Account Settings page.
- **API access & rate limits**: Review TheOddsAPI's rate limits and ensure your account tier allows for scheduled API calls. Confirm that Airtable API limits align with your expected data retrieval frequency.

**Step-by-Step Guide to Integrating TheOddsAPI**
1. **Schedule API requests**: Set up a trigger to automatically pull upcoming ice hockey games at the start of each day.
2. **Fetch data from TheOddsAPI**: Retrieve the latest sports betting data, including game details and odds, using TheOddsAPI (see the request sketch below).
3. **Store data in Airtable**: Insert or update records in Airtable by matching game IDs, ensuring data accuracy. A sample Airtable column setup for ice hockey (adjust depending on the sport and data needs; reference TheOddsAPI documentation):
   - **Game ID**
   - **Sport**
   - **League**
   - **Game Date (UTC)**
   - **Home Team**
   - **Away Team**
   - **Completed** (Boolean: TRUE/FALSE for game completion status)
   - **Scores** (JSON or string for final scores)
   - **Last Update** (timestamp of the latest update)
4. **Schedule an end-of-day update**: Configure another trigger to fetch final game results at the end of the day.
5. **Update records in Airtable**: Modify existing Airtable records with final scores and game outcomes for complete tracking.
6. **Customize for other sports**: Adjust API parameters to retrieve data for different sports and betting odds, making the system flexible for multiple use cases.

This structured workflow automates sports betting data collection and updates, ensuring accurate and real-time tracking of odds and game results. By integrating TheOddsAPI with Airtable, you can build scalable applications for predictive sports analytics and betting insights.
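As a rough sketch, the two HTTP Request nodes map to TheOddsAPI's v4 odds and scores endpoints. The sport key icehockey_nhl is one example; swap it per TheOddsAPI's sports list, and treat parameter choices (regions, markets, daysFrom) as starting points to adjust.

```typescript
const API_KEY = process.env.ODDS_API_KEY ?? "";
const BASE = "https://api.the-odds-api.com/v4/sports/icehockey_nhl";

// Morning run: upcoming games with head-to-head odds
const odds = await fetch(
  `${BASE}/odds?apiKey=${API_KEY}&regions=us&markets=h2h&oddsFormat=decimal`,
).then(r => r.json());

// Evening run: final scores for games from the last day
const scores = await fetch(
  `${BASE}/scores?apiKey=${API_KEY}&daysFrom=1`,
).then(r => r.json());

// Each record carries a stable `id` - the game ID used to upsert into Airtable
for (const game of scores) {
  console.log(game.id, game.home_team, game.away_team, game.completed, game.scores);
}
```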
by Jaruphat J.
**Who is this for?**
This workflow is ideal for businesses, accountants, and finance teams who receive bank slip images via LINE and want to automate the extraction of transaction details. It eliminates manual data entry and speeds up financial tracking.

**What problem does this workflow solve?**
Many businesses receive bank transfer slips via LINE from customers, but manually recording transaction details into spreadsheets is time-consuming and error-prone. This workflow automates the entire process, extracting structured data from the bank slips and storing it in Google Sheets for seamless record-keeping.

**What this workflow does:**
- Receives bank slip images from a LINE BOT
- Extracts transaction details (sender, receiver, amount, transaction ID) using SpaceOCR
- Automatically logs extracted data into Google Sheets
- Works with standard bank slips and PromptPay transactions
- Eliminates manual data entry and reduces errors

**Setup Instructions:**

1. Prerequisites
- A LINE BOT with Messaging API enabled
- A SpaceOCR API key (get one from https://spaceocr.com/)
- A Google Sheets account to store extracted data
- An n8n instance running (Cloud or self-hosted)

2. Set up Google Sheets
Create a Google Sheet with the following structure:
A (Date) | B (Time) | C (Sender) | D (Receiver) | E (Bank Name) | F (Amount) | G (Transaction ID)
Ensure your Google Sheets API is enabled and connected to n8n. For an example of the required format, check this Google Sheets template: Google Sheets Template

3. Configure the n8n workflow
- Webhook node (receives the bank slip from the LINE BOT): set the method and path.
- HTTP Request node (downloads the image from the LINE message): retrieves the image content from the LINE message payload (see the sketch below).
- SpaceOCR node (extracts text from the bank slip): set your API key and pass in the downloaded image.
- Google Sheets node (saves transaction data): select your Google Sheet and map the extracted data (sender, receiver, amount, etc.) to the respective columns.

4. Deploy & test
- Activate the workflow in n8n
- Set the Webhook URL in the LINE Developers Console
- Send a test bank slip image to the LINE BOT
- Check Google Sheets for the extracted transaction data
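For reference, the image-download step boils down to one call against the LINE Messaging API's content endpoint. A minimal sketch, assuming a standard webhook event where the message ID arrives at events[0].message.id and the channel access token comes from the LINE Developers Console:

```typescript
const CHANNEL_ACCESS_TOKEN = process.env.LINE_CHANNEL_ACCESS_TOKEN ?? "";

async function downloadSlipImage(messageId: string): Promise<Buffer> {
  // Message content is served from the api-data.line.me host
  const res = await fetch(
    `https://api-data.line.me/v2/bot/message/${messageId}/content`,
    { headers: { Authorization: `Bearer ${CHANNEL_ACCESS_TOKEN}` } },
  );
  if (!res.ok) throw new Error(`LINE content fetch failed: ${res.status}`);
  return Buffer.from(await res.arrayBuffer());
}

// Example: const image = await downloadSlipImage(webhookBody.events[0].message.id);
```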
by n8n custom workflows
**Introduction**
The Namesilo Bulk Domain Availability workflow is a powerful automation solution designed to check the registration status of multiple domains simultaneously using the Namesilo API. The workflow efficiently processes large lists of domains by splitting them into manageable batches, adhering to API rate limits, and compiling the results into a convenient Excel spreadsheet. It eliminates the tedious process of manually checking domains one by one, saving significant time for domain investors, web developers, and digital marketers.

The workflow is particularly valuable during brainstorming sessions for new projects, when conducting domain portfolio audits, or when preparing domain acquisition strategies. By automating the domain availability check, users can quickly identify available domains for registration without navigating multiple web interfaces.

**Who is this for?**
This workflow is ideal for:
- Domain investors and flippers who need to check multiple domains quickly
- Web developers and agencies evaluating domain options for client projects
- Digital marketers researching domain availability for campaigns
- Business owners exploring domain options for new ventures
- IT professionals managing domain portfolios

Users should have basic familiarity with n8n workflow concepts and a Namesilo account to obtain an API key. No coding knowledge is required, though an understanding of domain name systems is beneficial.

**What problem is this workflow solving?**
Checking domain availability one by one is a time-consuming and tedious process, especially when dealing with dozens or hundreds of potential domains. This workflow solves several key challenges:
- Manual inefficiency: eliminates the need to search for each domain individually through registrar websites.
- Rate limiting: handles API rate limits automatically with built-in waiting periods.
- Data organization: compiles availability results into a structured Excel file rather than scattered notes or multiple browser tabs.
- Bulk processing: processes up to 200 domains per batch, with the ability to handle unlimited domains across multiple batches.
- Time management: frees up valuable time that would otherwise be spent on repetitive manual checks.

**What this workflow does**
The workflow takes a list of domains, processes them in batches of up to 200 domains per request (to comply with API limitations), checks their availability using the Namesilo API, and compiles the results into an Excel spreadsheet showing which domains are available for registration and which are already taken.

Process:
1. Input setup: the workflow begins with a manual trigger and uses the "Set Data" node to collect the list of domains to check and your Namesilo API key.
2. Domain processing: the "Convert & Split Domains" node transforms the input list into batches of up to 200 domains to comply with API limitations (see the batching sketch below).
3. Batch processing: the workflow loops through each batch of domains.
4. API integration: for each batch, the "Namesilo Requests" node sends a request to the Namesilo API to check domain availability.
5. Data parsing: the "Parse Data" node processes the API response, extracting which domains are available and which are taken.
6. Rate limit management: a 5-minute wait is enforced between batches to respect Namesilo's API rate limits.
7. Data compilation: the "Merge Results" node combines all the availability data.
8. Output generation: finally, the "Convert to Excel" node creates an Excel file with two columns, Domain and Availability (showing "Available" or "Unavailable" for each domain).

**Setup**
1. Import the workflow: download the workflow JSON file and import it into your n8n instance.
2. Get a Namesilo API key: create a free account at Namesilo and obtain your API key from https://www.namesilo.com/account/api-manager
3. Configure the workflow: open the "Set Data" node, enter your Namesilo API key in the "Namesilo API Key" field, and enter your list of domains (one per line) in the "Domains" field.
4. Save and activate: save the workflow and run it using the manual trigger.

**How to customize this workflow to your needs**
- **Modify domain input format**: adjust the code in the "Convert & Split Domains" node if your domain list comes in a different format.
- **Change batch size**: modify the batch size (currently set to 200) in the "Convert & Split Domains" node to accommodate different API limitations.
- **Adjust wait time**: if you have a premium API account with different rate limits, modify the wait time in the "Wait" node.
- **Enhance output format**: customize the "Convert to Excel" node to add additional columns or formatting to the output file.
- **Add domain filtering**: add a node before the API request to filter domains based on specific criteria (length, keywords, TLDs).
- **Integrate with other services**: connect this workflow to domain registrars to automatically register available domains that meet your criteria.
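As a rough illustration, the batching logic in "Convert & Split Domains" can be sketched as an n8n Code node like this. The Domains field name matches the "Set Data" node described above; adjust it if your setup differs.

```typescript
// Turn the newline-separated domain list into comma-joined batches of 200,
// ready for Namesilo's checkRegisterAvailability `domains` parameter.
const BATCH_SIZE = 200;

const raw = String($input.first().json.Domains ?? "");
const domains = raw
  .split("\n")
  .map(d => d.trim().toLowerCase())
  .filter(d => d.length > 0);

const batches = [];
for (let i = 0; i < domains.length; i += BATCH_SIZE) {
  batches.push({ json: { domains: domains.slice(i, i + BATCH_SIZE).join(",") } });
}

return batches;
```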
by phil
This workflow automates the backup of your n8n workflow data to Google Drive every day. It ensures that important configurations and execution logs are securely stored, reducing the risk of data loss and improving workflow resilience.

**Why Use This?**
- Automates routine backups effortlessly.
- Reduces manual intervention and potential data loss.
- Securely stores critical workflow configurations in Google Drive.

With this workflow, you can focus on innovation while n8n takes care of your backups.

**How It Works**
This workflow operates seamlessly with a combination of scheduled triggers, JSON data transformation, and secure cloud storage.

**Setup Steps**
1. Trigger the backup - choose between manual execution or automated scheduling at 1:30 AM daily.
2. Data preparation - your workflow parameters define the backup location and organize files effectively.
3. Transformation & encoding - the data is processed and converted into a base64-encoded JSON file (see the sketch below).
4. Cloud storage - the backup is securely uploaded to your designated Google Drive folder.

**Customization Options**
You can modify various aspects of the backup workflow to better suit your needs:
1. Adjusting backup frequency. By default, the workflow runs daily at 1:30 AM. To change this, open the Trigger node in n8n and modify the cron expression or select a different frequency (e.g., hourly, weekly, or custom intervals).
2. Selecting specific workflows to back up. Instead of backing up all workflows, add a Filter node before exporting data and define specific workflow IDs or names to include in the backup.
3. Changing the backup destination. The default destination is Google Drive, but you can replace the Google Drive node with a different storage provider (e.g., Dropbox, AWS S3, or local storage via FTP/SFTP) and configure authentication for the new destination.
4. Modifying the data format. By default, the workflow stores data in JSON format. If you need a different format, convert JSON to CSV using the Spreadsheet File node, or store backups in a compressed format (ZIP) by adding a Compression node.
5. Encrypting the backup for extra security. For added protection, use the Crypto node to encrypt the JSON file before uploading, and set up an access-controlled folder in Google Drive with limited permissions.

**Verify That Your Backup Works**
Before relying on this workflow for your automated backups, make sure it works correctly by performing a quick test:
- Manually trigger the workflow in n8n and check if the backup file appears in your Google Drive.
- Open Google Drive, navigate to the backup folder, and download the JSON file.
- Verify its content by checking that the data matches your workflow's execution logs.
- Try importing the JSON file back into n8n using the "Import File" function to ensure the workflow structure is intact.
- Alternatively, copy and paste a test file into Google Drive and confirm that it appears correctly in your workflow logs.

This quick test will confirm that your backup is running smoothly and that your data is retrievable whenever needed.

**How to Find Your Google Drive Directory ID**
To ensure that the backup is uploaded to the correct folder, you need to retrieve your Google Drive directory ID. Follow these simple steps:
1. Open Google Drive.
2. Navigate to the folder where you want to store your backups.
3. Click on the folder and check the URL in your browser.
4. The directory ID is the long string of characters at the end of the URL after /folders/.

Example: if your folder URL is https://drive.google.com/drive/folders/14oUlH_LW_NT0Xb2woZWvuzRncV-bhla, then your directory ID is 14oUlH_LW_NT0Xb2woZWvuzRncV-bhla. Copy this directory ID and use it in the workflow's parameters to ensure the backup is saved in the correct location.

Phil | Inforeole
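For reference, the transformation and encoding step (step 3 above) can be sketched as an n8n Code node like this: each workflow object coming out of the export step is emitted as a base64-encoded JSON binary, ready for the Google Drive upload node. The name and id fields are assumptions based on what the n8n node returns for workflows; adjust to your data.

```typescript
// Convert each incoming workflow into a downloadable JSON file.
const results = [];

for (const item of items) {
  const fileName = `${item.json.name ?? "workflow"}-${item.json.id}.json`;
  const content = JSON.stringify(item.json, null, 2);

  results.push({
    json: { fileName },
    binary: {
      data: {
        data: Buffer.from(content).toString("base64"),
        mimeType: "application/json",
        fileName,
      },
    },
  });
}

return results;
```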
by Aditya Gaur
**Who is this template for?**
This template is designed for developers, DevOps engineers, and automation enthusiasts who want to streamline their GitLab merge request process using n8n, a low-code workflow automation tool. It eliminates manual intervention by automating the merging of GitLab branches through API calls.

**How it works**
1. Trigger the workflow: the workflow can be triggered by a webhook, a scheduled event, or a GitLab event (e.g., a new merge request is created or approved).
2. Fetch merge request details: n8n makes an API call to GitLab to retrieve merge request details.
3. Check merge conditions: the workflow validates whether the merge request meets predefined conditions (e.g., approvals met, CI/CD pipelines passed).
4. Perform the merge: if all conditions are met, n8n sends a request to the GitLab API to merge the branch automatically.

**Setup Steps**

1. Prerequisites
- An n8n instance (self-hosted or Cloud)
- A GitLab personal access token with API access
- A GitLab repository with merge requests enabled

2. Create the n8n workflow
- Set up a trigger: choose a trigger node (Webhook, Cron, or GitLab Trigger).
- Fetch merge request details: add an HTTP Request node to call GET /merge_requests/:id on the GitLab API.
- Validate conditions: check that the merge request has the necessary approvals and that CI/CD pipelines have passed.
- Merge the request: use an HTTP Request node to call the PUT /merge_requests/:id/merge API (see the sketch below).

3. Test the workflow
- Create a test merge request.
- Check whether the workflow triggers and merges automatically.
- Debug using n8n logs if needed.

4. Deploy and monitor
- Deploy the workflow in production.
- Use n8n's monitoring features to track execution.

This template enables seamless GitLab merge automation, improving efficiency and reducing manual work!

Note: never hard-code API tokens or secrets in your HTTP requests; use n8n credentials instead.
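A minimal sketch of the two GitLab API calls behind the HTTP Request nodes. GITLAB_TOKEN is a personal access token with api scope; projectId and mrIid identify the merge request (the iid shown in the GitLab UI). The detailed_merge_status field is what recent GitLab versions report; older versions expose merge_status instead, so verify against your instance.

```typescript
const GITLAB = "https://gitlab.com/api/v4";
const headers = { "PRIVATE-TOKEN": process.env.GITLAB_TOKEN ?? "" };

async function mergeIfReady(projectId: number, mrIid: number) {
  // 1. Fetch merge request details
  const mr = await fetch(
    `${GITLAB}/projects/${projectId}/merge_requests/${mrIid}`,
    { headers },
  ).then(r => r.json());

  // 2. Check the merge conditions reported by GitLab
  if (mr.detailed_merge_status !== "mergeable") {
    console.log(`Not mergeable yet: ${mr.detailed_merge_status}`);
    return;
  }

  // 3. Perform the merge, waiting for the pipeline if one is still running
  const res = await fetch(
    `${GITLAB}/projects/${projectId}/merge_requests/${mrIid}/merge`,
    {
      method: "PUT",
      headers: { ...headers, "Content-Type": "application/json" },
      body: JSON.stringify({ merge_when_pipeline_succeeds: true }),
    },
  );
  console.log(res.ok ? "Merged (or queued)" : `Merge failed: ${res.status}`);
}
```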
by Agent Studio
**Automatically store Retell transcripts in Google Sheets/Airtable/Notion from a webhook**

**Overview**
This workflow stores the results of a Retell voice call (transcript, analysis, etc.) once it has ended and been analyzed. It listens for call_analyzed webhook events from Retell and stores the data in Airtable, Google Sheets, or Notion (choose based on your stack). Useful for anyone building Retell agents who wants to keep a detailed history of analyzed calls in structured tools.

**Who is it for**
Builders of Retell voice agents who want to store call history and essential analytics data.

**Prerequisites**
- A Retell AI account
- A Retell agent
- A phone number associated with your Retell agent
- One of the following set up:
  - An Airtable base and table (example: "Transcripts")
  - A Google Sheet with a "Transcripts" tab
  - A Notion database with columns matching the transcript fields
- Templates: Airtable, Google Sheets, Notion

**How it works**
1. Receives a webhook POST request from Retell when a call has been analyzed.
2. Filters out any event that is not call_analyzed (Retell also sends webhooks for call_started and call_ended).
3. Extracts useful fields such as call ID, start/end time, duration, total cost, transcript, summary, and sentiment (see the sketch below).
4. Stores this data in your preferred tool: Airtable, Google Sheets, or Notion.

**How to use it**
1. Copy the webhook URL (e.g., https://your-instance.app.n8n.cloud/webhook/poc-retell-analysis) and paste it into your Retell agent under "Webhook settings" > "Agent Level Webhook URL".
2. Make sure your Airtable, Google Sheets, or Notion databases are correctly configured to receive the fields.
3. After each call, once Retell finishes the analysis, this workflow automatically logs the results.

**Extension**
If you use any "Post-Call Analysis" fields, you can add columns to your Airtable, Google Sheets, or Notion database, then fetch the data from the call.call_analysis.custom_analysis_data object.

**Additional Notes**
- Phone numbers are extracted depending on the call direction (from_number or to_number).
- Cost is converted from cents to dollars before saving.
- Dates are converted from timestamps to local ISO strings.
- You can remove any of the outputs (Airtable, Google Sheets, Notion) if you're only using one.

Reach out to us if you're interested in analysing your Retell agent conversations.
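A sketch of the filter-and-extract step as an n8n Code node. The field names here (call_id, start_timestamp, call_analysis, call_cost, etc.) are assumptions based on Retell's documented call object; verify them against the actual webhook payloads your agent produces.

```typescript
// Keep only call_analyzed events and flatten the fields we store.
const { event, call } = $json.body ?? $json;

if (event !== "call_analyzed") {
  return []; // ignore call_started / call_ended events
}

const phone = call.direction === "inbound" ? call.from_number : call.to_number;

return [{
  json: {
    callId: call.call_id,
    phone,
    startedAt: new Date(call.start_timestamp).toISOString(),
    endedAt: new Date(call.end_timestamp).toISOString(),
    durationSec: Math.round((call.end_timestamp - call.start_timestamp) / 1000),
    costUsd: (call.call_cost?.combined_cost ?? 0) / 100, // cents -> dollars
    transcript: call.transcript,
    summary: call.call_analysis?.call_summary,
    sentiment: call.call_analysis?.user_sentiment,
  },
}];
```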
by Batu รztรผrk
**Transform LinkedIn Post Reactions into Content Ideas with Airtable**

**Description**
This workflow helps you turn your LinkedIn activity into a powerful content ideation engine. It automatically captures your most recent post reactions on LinkedIn, filters them by recency, and structures the content into Airtable - ready for brainstorming, inspiration, or publication planning.

**What It Does**
- **Fetches** the latest liked posts from LinkedIn via a public API (rapidapi.com, "Real-Time LinkedIn Scraper").
- **Filters** posts to include only those marked with your chosen reaction and posted in the last 7 days (see the filter sketch below).
- **Extracts** the post text, author, links, and more.
- **Formats** the data into a database-friendly structure.
- **Saves** the output in Airtable for easy tracking, tagging, or team collaboration.

**Use Cases**
- Build a content idea vault from posts you admire.
- Capture inspiration from thought leaders.
- Identify trends based on what you find insightful.
- Supercharge your personal brand or newsletter by turning likes into learning.

**Prerequisites**
Before using this template, make sure you have:
- A RapidAPI account and access to the linkedin-api8 endpoint.
- Your RapidAPI key and the target LinkedIn username.
- An Airtable account with a base/table set up.

**Setup Instructions**
1. Clone this template into your n8n instance.
2. Open the Fetch LinkedIn Likes node and enter your LinkedIn username and your RapidAPI key in the headers.
3. Open the Save to Airtable node, connect your Airtable account, and link the correct base (Content Hub) and table (Ideas).
4. Set your desired schedule in the Trigger node.
5. Activate the workflow and you're done!

**Airtable Setup**
Create a base called Content Hub and a table named Ideas with the following columns:

| Column Name | Type | Required | Notes |
|-------------|------|----------|----------------------------|
| Title | Single line text | Yes | Generated from author info |
| Description | Long text | Yes | Contains post content |
| Source | URL | Yes | Link to the original post |
| Type | Single select | Yes | Value: Linkedin |
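The 7-day recency filter mentioned above can be sketched as an n8n Code node like this. The postedDate field name is an assumption about the scraper API's response; adjust it to whatever date field the Fetch LinkedIn Likes node actually returns.

```typescript
// Keep only reactions on posts from the last 7 days.
const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;
const cutoff = Date.now() - SEVEN_DAYS_MS;

return items.filter(item => {
  const postedAt = new Date(item.json.postedDate).getTime();
  return Number.isFinite(postedAt) && postedAt >= cutoff;
});
```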