by iamvaar
This workflow automates the process of analyzing a contract submitted via a web form. It extracts the text from an uploaded PDF, uses AI to identify potential red flags, and sends a summary report to a Telegram chat.

## Prerequisites

Before you can use this workflow, you'll need a few things set up.

### 1. JotForm Form

You need to create a form in JotForm with at least two specific fields:

- **Email Address**: A standard field to collect the user's email.
- **File Upload**: This field will be used to upload the contract or NDA. Make sure to configure it to allow .pdf files.

### 2. API Keys and IDs

- **JotForm API Key**: You can generate this from your JotForm account settings under the "API" section.
- **Gemini API Key**: You'll need an API key from Google AI Studio to use the Gemini model.
- **Telegram Bot Token**: Create a new bot by talking to @BotFather on Telegram. It will give you a unique token.
- **Telegram Chat ID**: This is the ID of the user, group, or channel you want the bot to send messages to. You can get this by using a bot like @userinfobot.

## Node-by-Node Explanation

Here is a breakdown of what each node in the workflow does, in the order they execute.

### 1. JotForm Trigger

- **What it does**: This node kicks off the entire workflow. It actively listens for new submissions on the specific JotForm you select.
- **How it works**: When someone fills out your form and hits "Submit," JotForm sends the submission data (including the email and a link to the uploaded file) to this node.

### 2. Grab Attachment Details (HTTP Request)

- **What it does**: The initial data from JotForm doesn't contain a direct download link for the file. This node takes the submissionID from the trigger and makes a request to the JotForm API to get the full details of that submission.
- **How it works**: It constructs a URL using the submissionID and your JotForm API key to fetch the submission data, which includes the proper download URL for the uploaded contract.

### 3. Grab the Attached Contract (HTTP Request)

- **What it does**: Now that it has the direct download link, this node fetches the actual PDF file.
- **How it works**: It uses the file URL obtained from the previous node to download the contract. The node is set to expect a "file" as the response, so it saves the PDF data in binary format for the next step.

### 4. Extract Text from PDF File

- **What it does**: This node takes the binary PDF data from the previous step and extracts all the readable text from it.
- **How it works**: It processes the PDF and outputs plain text, stripping away any formatting or images. This raw text is now ready to be analyzed by the AI.

### 5. AI Agent (with Google Gemini Chat Model)

- **What it does**: This is the core analysis engine of the workflow. It takes the extracted text from the PDF and uses a powerful prompt to analyze it. The "Google Gemini Chat Model" node is connected as its "brain."
- **How it works**: It sends the contract text to the Gemini model. The prompt instructs Gemini to act as an expert contract analyst, asks it to identify major red flags and hidden or unfair clauses, and tells it to format the output as a clean report using Telegram's MarkdownV2 style while keeping the response under 1500 characters.

### 6. Send a text message (Telegram)

- **What it does**: This is the final step. It takes the formatted analysis report generated by the AI Agent and sends it to your specified Telegram chat.
- **How it works**: It connects to your Telegram bot using your Bot Token and sends the AI's output ($json.output) to the Chat ID you've provided. Because the AI was instructed to format the text in MarkdownV2, the message will appear well-structured in Telegram with bolding and bullet points.
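Outside of n8n, step 2's lookup can be sketched in a few lines. This is a hedged Python sketch: it assumes JotForm's `GET /submission/{id}?apiKey=…` endpoint and a response whose `content.answers` values carry uploaded-file URLs as a list; the exact answer shape depends on your form.

```python
JOTFORM_API = "https://api.jotform.com"

def submission_url(submission_id: str, api_key: str) -> str:
    """Build the submission-details URL the 'Grab Attachment Details' node requests."""
    return f"{JOTFORM_API}/submission/{submission_id}?apiKey={api_key}"

def get_download_url(submission_id: str, api_key: str, fetch) -> str:
    """Fetch submission details and pull out the uploaded PDF's URL.

    `fetch` is any HTTP GET callable returning a response with .json(),
    e.g. requests.get in a real script. The answers structure assumed
    here is illustrative; inspect your own submission payload.
    """
    data = fetch(submission_url(submission_id, api_key)).json()
    for answer in data["content"]["answers"].values():
        value = answer.get("answer")
        if isinstance(value, list) and value and str(value[0]).endswith(".pdf"):
            return value[0]
    raise ValueError("no PDF upload found in submission")
```

The returned URL is what node 3 then downloads as binary data.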
by Anderson Adelino
# Voice Assistant Interface with n8n and OpenAI

This workflow creates a voice-activated AI assistant interface that runs directly in your browser. Users can click on a glowing orb to speak with the AI, which responds with voice using OpenAI's text-to-speech capabilities.

## Who is it for?

This template is perfect for:

- Developers looking to add voice interfaces to their applications
- Customer service teams wanting to create voice-enabled support systems
- Content creators building interactive voice experiences
- Anyone interested in creating their own "Alexa-like" assistant

## How it works

The workflow consists of two main parts:

- **Frontend Interface**: A beautiful animated orb that users click to activate voice recording
- **Backend Processing**: Receives the audio transcription, processes it through an AI agent with memory, and returns voice responses

The system uses:

- Web Speech API for voice recognition (browser-based)
- OpenAI GPT-4o-mini for intelligent responses
- OpenAI Text-to-Speech for voice synthesis
- Session memory to maintain conversation context

## Setup requirements

- n8n instance (self-hosted or cloud)
- OpenAI API key with access to the GPT-4o-mini model and the Text-to-Speech API
- Modern web browser with Web Speech API support (Chrome, Edge, Safari)

## How to set up

1. Import the workflow into your n8n instance
2. Add your OpenAI credentials to both OpenAI nodes
3. Copy the webhook URL from the "Audio Processing Endpoint" node
4. Edit the "Voice Assistant UI" node and replace YOUR_WEBHOOK_URL_HERE with your webhook URL
5. Access the "Voice Interface Endpoint" webhook URL in your browser
6. Click the orb and start talking!
## How to customize the workflow

- **Change the AI personality**: Edit the system message in the "Process User Query" node
- **Modify the visual style**: Customize the CSS in the "Voice Assistant UI" node
- **Add more capabilities**: Connect additional tools to the AI Agent
- **Change the voice**: Select a different voice in the "Generate Voice Response" node
- **Adjust memory**: Modify the context window length in the "Conversation Memory" node

## Demo

Watch the template in action: https://youtu.be/0bMdJcRMnZY
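The session-memory piece can be pictured as a rolling buffer of recent messages per session. This is a minimal sketch of the idea, not the Conversation Memory node's actual code; the window size of 10 is an arbitrary example.

```python
from collections import defaultdict, deque

WINDOW = 10  # messages kept per session (illustrative default)

# One bounded deque per session id; old messages fall off automatically.
_sessions = defaultdict(lambda: deque(maxlen=WINDOW))

def remember(session_id: str, role: str, text: str) -> list:
    """Append a message and return the rolling context to send to the model."""
    _sessions[session_id].append({"role": role, "content": text})
    return list(_sessions[session_id])
```

Each webhook call would append the user's transcription and the agent's reply, then send the returned context to the model so follow-up questions resolve correctly.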
by PDF Vector
## Overview

Researchers and academic institutions need efficient ways to process and analyze large volumes of research papers and academic documents, including scanned PDFs and image-based materials (JPG, PNG). Manual review of academic literature is time-consuming and makes it difficult to identify trends, track citations, and synthesize findings across multiple papers. This workflow automates the extraction and analysis of research papers and scanned documents using OCR technology, creating a searchable knowledge base of academic insights from both digital and image-based sources.

## What You Can Do

- Extract key information from research papers automatically, including methodologies, findings, and citations
- Build a searchable database of academic insights from both digital and image-based sources
- Track citations and identify research trends across multiple papers
- Synthesize findings from large volumes of academic literature efficiently

## Who It's For

Research institutions, university libraries, R&D departments, academic researchers, literature review teams, and organizations tracking scientific developments in their field.

## The Problem It Solves

Literature reviews require reading hundreds of papers to identify relevant findings and methodologies. This template automates the extraction of key information from research papers, including methodologies, findings, and citations. It builds a searchable database that helps researchers quickly find relevant studies and identify research gaps.
## Setup Instructions

1. Install the PDF Vector community node with academic features
2. Configure the PDF Vector API with academic search enabled
3. Configure Google Drive credentials for document access
4. Set up a database for storing extracted research data
5. Configure citation tracking preferences
6. Set up automated paper ingestion from sources
7. Configure summary generation parameters

## Key Features

- Google Drive integration for research paper retrieval (PDFs, JPGs, PNGs)
- OCR processing for scanned documents and images
- Automatic extraction of paper metadata and structure from any format
- Methodology and findings summarization from PDFs and images
- Citation network analysis and metrics
- Multi-paper trend identification
- Searchable research database creation
- Integration with academic search engines

## Customization Options

- Add field-specific extraction templates
- Configure automated paper discovery from arXiv, PubMed, etc.
- Implement citation alert systems
- Create research trend visualizations
- Add collaboration features for research teams
- Build API endpoints for research queries
- Integrate with reference management tools

## Implementation Details

The workflow uses PDF Vector's academic features to understand research paper structure and extract meaningful insights. It processes papers from various sources, identifies key contributions, and creates structured summaries. The system tracks citations to measure impact and identifies emerging research trends by analyzing multiple papers in a field.

Note: This workflow uses the PDF Vector community node. Make sure to install it from the n8n community nodes collection before using this template.
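A searchable database needs a consistent row shape. The sketch below shows one hypothetical schema for an extracted-paper record; the field names are illustrative only and are not PDF Vector's actual output format.

```python
def normalize_paper(raw: dict) -> dict:
    """Coerce a raw extraction result into a consistent database row.

    Every key here (title, authors, methodology, findings, citation_count)
    is a hypothetical schema choice for illustration.
    """
    return {
        "title": (raw.get("title") or "").strip(),
        "authors": [a.strip() for a in raw.get("authors", []) if a.strip()],
        "methodology": raw.get("methodology", ""),
        "findings": raw.get("findings", []),
        "citation_count": int(raw.get("citation_count") or 0),
    }
```

Normalizing up front keeps queries simple: every row has the same fields and types regardless of whether the source was a digital PDF or an OCR'd scan.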
by Avkash Kakdiya
## How it works

This workflow automatically generates and publishes marketing blog posts to WordPress using AI. It begins by checking your PostgreSQL database for unprocessed records, then uses OpenAI to create SEO-friendly, structured blog content. The content is formatted for WordPress, including categories, tags, and meta descriptions, before being published. After publishing, the workflow updates the original database record to track processing status and WordPress post details.

## Step-by-step

1. **Trigger workflow**
   - Schedule Trigger – Runs the workflow at defined intervals.
2. **Fetch unprocessed record**
   - PostgreSQL Trigger – Retrieves the latest unprocessed record from the database.
   - Check Record Exists – Confirms the record is valid and ready for processing.
3. **Generate AI blog content**
   - OpenAI Chat Model – Processes the record to generate blog content based on the title.
   - Blog Post Agent – Structures AI output into JSON with title, content, excerpt, and meta description.
4. **Format and safeguard content**
   - Code Node – Prepares structured data for WordPress, ensuring categories, tags, and error handling.
5. **Publish content and update database**
   - WordPress Publisher – Publishes content to WordPress with proper categories, tags, and meta.
   - Update Database – Marks the record as processed and stores the WordPress post ID, URL, and processing timestamp.

## Why use this?

- Automates end-to-end blog content generation and publishing.
- Ensures SEO-friendly and marketing-optimized posts.
- Maintains database integrity by tracking published content.
- Reduces manual effort and accelerates content workflow.
- Integrates PostgreSQL, OpenAI, and WordPress seamlessly for scalable marketing automation.
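For orientation, the WordPress Publisher step corresponds to the standard WordPress REST API call `POST /wp-json/wp/v2/posts`. The sketch below shows the field mapping; the `record` keys mirror the structured JSON described above, and note that `categories` and `tags` must be numeric term IDs in WordPress, not names.

```python
def build_post_payload(record: dict) -> dict:
    """Map the Blog Post Agent's structured JSON onto WordPress REST fields."""
    return {
        "title": record["title"],
        "content": record["content"],
        "excerpt": record.get("excerpt", ""),
        "status": "publish",                         # use "draft" to review first
        "categories": record.get("categories", []),  # numeric term IDs
        "tags": record.get("tags", []),              # numeric term IDs
    }

def publish(site: str, auth, record: dict, post) -> dict:
    """POST the payload; `post` is an HTTP callable such as requests.post.

    `auth` would typically be (username, application_password).
    """
    resp = post(f"{site}/wp-json/wp/v2/posts", json=build_post_payload(record), auth=auth)
    resp.raise_for_status()
    return resp.json()  # includes the new post's id and link, saved back to PostgreSQL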
by IranServer.com
# Automated Blog Content Generation from Google Trends to WordPress

This n8n workflow automatically generates SEO-friendly blog content based on trending topics from Google Trends and publishes it to WordPress. Perfect for content creators, bloggers, and digital marketers who want to stay on top of trending topics with minimal manual effort.

## Who's it for

- **Content creators** who need fresh, trending topic ideas
- **Bloggers** looking to automate their content pipeline
- **Digital marketers** wanting to capitalize on trending searches
- **WordPress site owners** seeking automated content generation
- **SEO professionals** who want to target trending keywords

## How it works

The workflow operates on a scheduled basis (daily at 8:45 PM by default) and follows this process:

1. **Trend Discovery**: Fetches the latest trending searches from Google Trends for a specific country (Iran by default)
2. **Content Research**: Performs Google searches on the top 3 trending topics to gather detailed information
3. **AI Content Generation**: Uses OpenAI's GPT-4o model to create SEO-friendly blog posts based on the trending topics and search results
4. **Structured Output**: Ensures the generated content has a proper title and content structure
5. **Auto-Publishing**: Automatically creates draft posts in WordPress

The AI is specifically prompted to create engaging, SEO-optimized content without revealing the automated sources, ensuring natural-sounding blog posts.
## How to set up

1. Install the required community node: n8n-nodes-serpapi for Google Trends and Search functionality
2. Configure credentials:
   - SerpApi: Sign up at serpapi.com and add your API key
   - OpenAI: Add your OpenAI API key for GPT-4o access
   - WordPress: Configure your WordPress site credentials
3. Customize the country code: Change the "Country" field in the "Edit Fields" node (currently set to "IR" for Iran)
4. Adjust the schedule: Modify the "Schedule Trigger" to run at your preferred time
5. Test the workflow: Run it manually first to ensure all connections work properly

## Requirements

- **SerpApi account** (for Google Trends and Search data)
- **OpenAI API access** (for content generation using GPT-4o)
- **WordPress site** with API access enabled

## How to customize the workflow

- **Change target country**: Modify the country code in the "Edit Fields" node to target different regions
- **Adjust content quantity**: Change the limit in the "Limit" node to process more or fewer trending topics
- **Modify AI prompt**: Edit the prompt in the "Basic LLM Chain" node to change the writing style or focus
- **Schedule frequency**: Adjust the "Schedule Trigger" for different posting frequencies
- **Content status**: Change from "draft" to "publish" in the WordPress node for immediate publishing
- **Add content filtering**: Insert additional nodes to filter topics by category or keywords
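Under the hood, the trend-discovery step boils down to a single SerpApi request. This is a hedged sketch: the `google_trends_trending_now` engine name and the `trending_searches` response key are assumptions based on SerpApi's documentation and should be verified against the current API.

```python
def trending_params(country: str, api_key: str) -> dict:
    """Query parameters for SerpApi's trending-searches endpoint (assumed engine name)."""
    return {"engine": "google_trends_trending_now", "geo": country, "api_key": api_key}

def top_trends(country: str, api_key: str, fetch, limit: int = 3) -> list:
    """Return the top trending queries for a country.

    `fetch` is an HTTP GET callable like requests.get; the response shape
    (a `trending_searches` list of {"query": ...} items) is an assumption.
    """
    data = fetch("https://serpapi.com/search.json",
                 params=trending_params(country, api_key)).json()
    return [s.get("query", "") for s in data.get("trending_searches", [])[:limit]]
```

The `limit` of 3 mirrors the workflow's "top 3 trending topics" step; the Limit node plays the same role in n8n.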
by AI/ML API | D1m7asis
# 📲 AI Multi-Model Telegram Chatbot (n8n + AIMLAPI)

This n8n workflow enables Telegram users to interact with multiple AI models dynamically using #model_id commands. It also supports a /models command to list all available models. Each user has a daily usage limit, tracked via Google Sheets.

## 🚀 Key Features

- **Dynamic Model Selection:** Users choose models on-the-fly via #model_id (e.g., #openai/gpt-4o).
- **/models Command:** Lists all available models grouped by provider.
- **Daily Limit Per User:** Enforced using Google Sheets.
- **Prompt Parsing:** Extracts the model and message from user input.
- **Logging:** Logs every request and result to Google Sheets for usage tracking.
- **Seamless Telegram Delivery:** Responses are sent directly back to the chat.

## 🛠 Setup Guide

### 1. 📲 Create a Telegram Bot

1. Go to @BotFather.
2. Use /newbot → set the name and username.
3. Copy the generated API token.

### 2. 🔐 Add Telegram Credentials to n8n

1. Go to n8n > Credentials > Telegram API.
2. Create a new credential with the BotFather token.

### 3. 📗 Google Sheets Setup

1. Create a Google Sheet named Sheet1.
2. Add columns: user_id | date | query | result
3. Share the sheet with your Service Account or OAuth email (depending on the auth method).

### 4. 🔌 Connect AIMLAPI

1. Get your API key from AIMLAPI.
2. In n8n > Credentials, add AI/ML API: API Key: your_key_here.

### 5. ⚙️ Customize Limits & Enhancements

- Adjust daily limits in the Set Daily Limit node.
- Optional: Add NSFW content filtering.
- Implement alias commands.
- Extend with /help, /usage, /history.
- Add inline button UX (advanced).

## 💡 How It Works

➡️ Command examples:

- Start a chat with a specific model: #openai/gpt-4o Write a motivational quote.
- Request the list of available models: /models

➡️ Workflow logic:

1. Receives a Telegram message.
2. A Switch node checks whether the message is /models or a prompt.
3. For /models, it fetches and sends a grouped list of models.
4. For prompts, it:
   - Checks usage limits.
   - Parses the #model_id and prompt text.
   - Dynamically routes the request to the chosen model.
   - Sends the AI's response back to the user.
   - Logs the query and result to Google Sheets.
5. If the daily limit is exceeded, it sends a limit-exceeded message.

## 🧪 Testing & Debugging Tips

- Test via a separate Telegram chat.
- Use Console/Set nodes to debug payloads.
- Always test commands by messaging the bot (not via "Execute Node").
- Validate cases: missing #model_id, invalid model_id, limit-exceeded handling.
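The prompt-parsing step is the heart of the dynamic routing. One way to split `#model_id` from the message is a regular expression; messages without a valid `provider/model` tag fall through unchanged, which is also how the "missing #model_id" case can be detected. A sketch of the idea, not the template's exact node code:

```python
import re

# Matches "#provider/model  the rest of the prompt", prompt may span lines.
MODEL_CMD = re.compile(r"^#(?P<model>[\w.\-]+/[\w.\-]+)\s+(?P<prompt>.+)$", re.DOTALL)

def parse_message(text: str):
    """Return (model_id, prompt); model_id is None when no valid tag is present."""
    m = MODEL_CMD.match(text.strip())
    if not m:
        return None, text.strip()
    return m.group("model"), m.group("prompt").strip()
```

A `None` model id can route to a default model or trigger a "please specify a model" reply.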
by Evgeny Agronsky
## What it does

Automates code review by listening for a comment trigger on GitLab merge requests, summarising the diff, and using an LLM to post constructive, line-specific feedback. If a JIRA ticket ID is found in the MR description, the ticket's summary is used to inform the AI review.

## Use cases

- Quickly obtain high-quality feedback on MRs without waiting for peers.
- Highlight logic, security or performance issues that might slip through cursory reviews.
- Incorporate project context by pulling in related JIRA ticket summaries.

## Good to know

- Triggered by commenting ai-review on a merge request.
- The LLM returns only high-value findings; if nothing critical is detected, the workflow posts an "all clear" message.
- You can swap out the LLM (Gemini, OpenAI, etc.) or adjust the prompt to fit your team's guidelines.
- AI usage may incur costs or be geo-restricted depending on your provider.

## How it works

1. **Webhook listener:** A Webhook node captures GitLab note events and filters for the trigger phrase.
2. **Fetch & parse:** The workflow retrieves MR details and diffs, splitting each change into "original" and "new" code blocks.
3. **Optional JIRA context:** If your MR description includes a JIRA key (e.g., PROJ-123), the workflow fetches the ticket (and the parent ticket for subtasks) and composes a brief context summary.
4. **LLM review:** The parsed diff and optional context are sent to an LLM with instructions to identify logic, security or performance issues and suggest improvements.
5. **Post results:** Inline comments are posted back to the MR at the appropriate file/line positions; if no issues are found, a single "all clear" note is posted.

## How to use

1. Import the template JSON and open the Webhook node.
2. Replace the REPLACE_WITH_UNIQUE_PATH placeholder with your desired path and configure a GitLab project webhook to send MR comments to that URL.
3. Select your LLM credentials in the Gemini (or other LLM) node, and optionally add JIRA credentials in the JIRA nodes.
4. Activate the workflow and comment ai-review on any merge request to test it. For each review, the workflow posts status updates ("AI review initiated…") and final comments.

## Requirements

- A GitLab project with a generated Personal Access Token (PAT) stored as an environment variable (GITLAB_TOKEN).
- LLM credentials (e.g., Google Gemini) and optional JIRA credentials.

## Customising this workflow

- Change the trigger phrase in the Trigger Phrase Filter node.
- Modify the LLM prompt to focus on different aspects (e.g., style, documentation).
- Filter out certain file types or directories before sending diffs to the LLM.
- Integrate other services (Slack, email) to notify teams when reviews are complete.
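Step 2 ("Fetch & parse") mentions splitting each change into "original" and "new" code blocks. One way to do that from GitLab's unified-diff text is sketched below; this illustrates the idea, not the template's exact Code node:

```python
def split_diff(diff: str):
    """Split a unified-diff hunk into the 'original' and 'new' sides.

    Context lines (prefix ' ') go to both sides; '-' lines only to the
    original, '+' lines only to the new side. Headers are skipped.
    """
    original, new = [], []
    for line in diff.splitlines():
        if line[:3] in ("+++", "---") or line.startswith("@@") or line.startswith("diff "):
            continue
        if line.startswith("+"):
            new.append(line[1:])
        elif line.startswith("-"):
            original.append(line[1:])
        elif line.startswith(" "):
            original.append(line[1:])
            new.append(line[1:])
    return original, new
```

Sending both sides (rather than the raw diff) gives the LLM enough context to anchor its comments to specific lines.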
by Supira Inc.
## 💡 How It Works

This workflow automatically detects new YouTube uploads, retrieves their transcripts, summarizes them in Japanese using GPT-4o mini, and posts the results to a selected Slack channel. It's ideal for teams who follow multiple creators, internal training playlists, or corporate webinars and want concise Japanese summaries in Slack without manual work.

Here's the flow at a glance:

1. **YouTube RSS Trigger** — monitors a specific channel's RSS feed.
2. **HTTP Request via RapidAPI** — fetches the video transcript (supports both English and Japanese).
3. **Code Node** — merges segmented transcript text into one clean string.
4. **OpenAI (GPT-4o-mini)** — generates a natural-sounding, 3-line Japanese summary.
5. **Slack Message** — posts the title, link, and generated summary to #youtube-summary.

## ⚙️ Requirements

- n8n (v1.60 or later)
- RapidAPI account + youtube-transcript3 API key
- OpenAI API key (GPT-4o-mini recommended)
- Slack workspace with OAuth connection

## 🧩 Setup Instructions

1. Replace YOUR_RAPIDAPI_KEY_HERE with your own RapidAPI key.
2. Add your OpenAI credential under Credentials → OpenAI.
3. Set your target Slack channel (e.g., #youtube-summary).
4. Enter the YouTube channel ID in the RSS Trigger node.
5. Activate the workflow and test with a recent video.

## 🎛️ Customization Tips

- Modify the OpenAI prompt to change the summary length or tone.
- Duplicate the RSS Trigger for multiple channels → merge before summarization.
- Localize Slack messages using Japanese or English templates.

## 🚀 Use Case

Perfect for marketing teams, content curators, and knowledge managers who want to stay updated on YouTube content in Japanese without leaving Slack.
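The Code Node in step 3 only has to concatenate transcript segments and normalize whitespace. A sketch of that merge, assuming the RapidAPI response yields a list of `{"text": ...}` segments (the actual key name depends on the youtube-transcript3 API):

```python
import re

def merge_transcript(segments: list) -> str:
    """Join segmented transcript entries into one clean string.

    Collapses runs of whitespace (including the newlines that transcript
    APIs often embed mid-sentence) into single spaces.
    """
    text = " ".join(seg.get("text", "") for seg in segments)
    return re.sub(r"\s+", " ", text).strip()
```

The merged string is what gets handed to GPT-4o mini for the 3-line Japanese summary.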
by Robert Breen
Automatically research new leads in your target area, structure the results with AI, and append them into Google Sheets — all orchestrated in n8n.

## ✅ What this template does

- Uses Perplexity to research businesses (coffee shops in this example) with company name + email
- Cleans and structures the output into proper JSON using OpenAI
- Appends the new leads directly into Google Sheets, skipping duplicates

> Trigger: Manual — "Start Workflow"

## 👤 Who's it for

- **Sales & marketing teams** who need to prospect local businesses
- **Agencies** running outreach campaigns
- **Freelancers** and consultants looking to automate lead research

## ⚙️ How it works

1. **Set Location** → define your target area (e.g., Hershey PA)
2. **Get Current Leads** → pull existing data from your Google Sheet to avoid duplicates
3. **Research Leads** → query Perplexity for 20 businesses, excluding already-scraped ones
4. **Write JSON** → OpenAI converts Perplexity output into structured Company/Email arrays
5. **Split & Merge** → align Companies with Emails row-by-row
6. **Send Leads to Google Sheets** → append or update leads in your sheet

## 🛠️ Setup instructions

Follow these sticky-note setup steps (already included in the workflow):

### 1) Connect Google Sheets (OAuth2)

- In n8n → Credentials → New → Google Sheets (OAuth2)
- Sign in with your Google account and grant access
- In the Google Sheets node, select your Spreadsheet and Worksheet
- Example sheet: https://docs.google.com/spreadsheets/d/1MnaU8hSi8PleDNVcNnyJ5CgmDYJSUTsr7X5HIwa-MLk/edit#gid=0

### 2) Connect Perplexity (API Key)

- Sign in at https://www.perplexity.ai/account
- Generate an API key: https://docs.perplexity.ai/guides/getting-started
- In n8n → Credentials → New → Perplexity API, paste your key

### 3) Connect OpenAI (API Key)

- In n8n → Credentials → New → OpenAI API
- Paste your OpenAI API key
- In the OpenAI Chat Model node, select your credential and a model (e.g., gpt-4o-mini, gpt-4o)

## 🔧 Requirements

- A free Google account
- An OpenAI API key (https://platform.openai.com)
- A Perplexity API key (https://docs.perplexity.ai)
- n8n self-hosted or cloud instance

## 🎨 How to customize

- Change the Search Area in the Set Location node
- Modify the Perplexity system prompt to target different business types (e.g., gyms, salons, restaurants)
- Expand the Google Sheet schema to include more fields (phone, website, etc.)

## 📬 Contact

Need help customizing this (e.g., filtering by campaign, sending reports by email, or formatting your Google Sheet)?

📧 robert@ynteractive.com
🔗 Robert Breen
🌐 ynteractive.com
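The "Split & Merge" step pairs the two arrays row-by-row, and the duplicate check compares against companies already in the sheet. A sketch of both, with field names matching the Company/Email columns:

```python
from itertools import zip_longest

def align_leads(companies: list, emails: list) -> list:
    """Pair companies with emails row-by-row, padding the shorter list with ''."""
    return [{"Company": c, "Email": e}
            for c, e in zip_longest(companies, emails, fillvalue="")]

def new_leads(rows: list, existing: list) -> list:
    """Drop rows whose company already exists in the sheet (case-insensitive)."""
    seen = {c.strip().lower() for c in existing}
    return [r for r in rows if r["Company"].strip().lower() not in seen]
```

`zip_longest` avoids silently dropping a company whose email Perplexity failed to find, which a plain `zip` would do.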
by Robert Breen
## 🧑‍💻 Description

This workflow integrates Slack with an OpenAI Chat Agent to create a fully interactive chatbot inside your Slack workspace. It works in a bidirectional loop:

1. A user sends a message in Slack.
2. The workflow captures the message and logs it back into Slack (so you can monitor what's being passed into the agent).
3. The message is sent to an OpenAI-powered agent (e.g., GPT-4o).
4. The agent generates a response.
5. The response is formatted and posted back to Slack in the same channel or DM thread.

This allows you to monitor, test, and interact with the agent directly from Slack.

## 📌 Use Cases

- **Team Support Bot**: Provide quick AI-generated answers to FAQs in Slack.
- **E-commerce Example**: The default prompt makes the bot act like a store assistant, but you can swap in your own domain knowledge.
- **Conversation Monitoring**: Log both user and agent messages in Slack for visibility and review.
- **Custom AI Agents**: Extend with RAG, external APIs, or workflow automations for specialized tasks.

## ⚙️ Setup Instructions

### 1️⃣ OpenAI Setup

1. Sign up at OpenAI.
2. Generate an API key from the API Keys page.
3. In n8n → Credentials → New → OpenAI → paste your key and save.
4. In the OpenAI Chat node, select your credential and configure the system prompt. Example included: "You are an ecommerce bot. Help the user as if you were working for a mock store." You can edit this prompt to fit your use case (support bot, HR assistant, knowledge retriever, etc.).

### 2️⃣ Slack Setup

1. Go to Slack API Apps → click Create New App.
2. Under OAuth & Permissions, add the following scopes:
   - Read: channels:history, groups:history, im:history, mpim:history, channels:read, groups:read, users:read
   - Write: chat:write
3. Install the app to your workspace → copy the Bot User OAuth Token.
4. In n8n → Credentials → New → Slack OAuth2 API → paste the token and save.
5. In the Slack nodes (e.g., Send User Message in Slack, Send Agent's Response in Slack), select your credential and specify the Channel ID or User ID to send/receive messages.

## 🎛️ Customization Guidance

- **Change Agent Behavior**: Update the system message in the Chat Agent node.
- **Filter Channels**: Limit listening to a specific channel by adjusting the Slack node's Channel ID.
- **Format Responses**: The Format Response node shows how to structure agent replies before posting back to Slack.
- **Extend Workflows**: Add integrations with databases, CRMs, or APIs for dynamic data-driven responses.

## 🔄 Workflow Flow (Simplified)

Slack User Message → Send User Message in Slack → Chat Agent → Format Response → Send Agent Response in Slack

## 📬 Contact

Need help customizing this workflow (e.g., multi-channel listening, advanced AI logic, or external integrations)?

📧 robert@ynteractive.com
🔗 Robert Breen
🌐 ynteractive.com
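Both Slack send nodes ultimately call Slack's `chat.postMessage` Web API method. A minimal Python sketch of that call with an injectable HTTP function (pass `requests.post` in a real script); setting `thread_ts` keeps replies in the same thread:

```python
def post_message(token: str, channel: str, text: str, post, thread_ts=None) -> dict:
    """Call Slack's chat.postMessage; `post` is an HTTP callable like requests.post."""
    payload = {"channel": channel, "text": text}
    if thread_ts:
        payload["thread_ts"] = thread_ts  # reply inside an existing thread
    resp = post("https://slack.com/api/chat.postMessage",
                headers={"Authorization": f"Bearer {token}"},
                json=payload)
    return resp.json()  # Slack returns {"ok": true, ...} on success
```

This requires the `chat:write` scope listed in the setup above; the bot must also be invited to the target channel.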
by Cong Nguyen
## 📄 What this workflow does

This workflow automatically turns a topic and a reference image URL into a finished, branded article image. It uses GPT-4o to generate a short, detailed image prompt, sends it to FAL Flux image-to-image for rendering, polls until the job is completed, downloads and resizes the image, overlays your company logo, and finally saves the branded result into a specified Google Drive folder.

## 👤 Who is this for

- Content teams who need consistent, on-brand article images.
- Marketing teams looking to scale blog and landing page visuals.
- Designers who want to automate repetitive resizing and branding tasks.
- Anyone who needs a pipeline from topic → AI illustration → Google Drive asset.

## ✅ Requirements

- OpenAI (GPT-4o) API credentials (for image prompt generation).
- FAL API key for Flux image-to-image generation.
- Google Drive OAuth2 connection + target folder ID for saving images.
- A company logo file/URL (direct download link from Google Drive or any public URL).

## ⚙️ How to set up

1. Connect OpenAI GPT-4o in the "Create prompt" node.
2. Add your FAL API key to all HTTP Request nodes (generate image, check image finish, Get image link).
3. Replace the logo link in "Get company's logo" with your own logo URL.
4. Configure the Google Drive node with your OAuth2 credentials and set the correct Folder ID.
5. Update the image_url in "Link image" (or pass it from upstream data).
6. Test the workflow end-to-end with a sample subject and image.

## 🔁 How it works

1. **Form/Manual Trigger** → Input the subject + reference image URL.
2. **GPT-4o** → Generates a sharp, detailed prompt under 70 words (no text/logos).
3. **FAL Flux (HTTP Request)** → Submits the job for image-to-image generation.
4. **Polling Loop** → Wait + check status until COMPLETED.
5. **Download Image** → Retrieves the generated image link.
6. **Resize Image** → Standardizes to 800×500 pixels.
7. **Get & Resize Logo** → Fetches the company logo and resizes it for branding.
8. **Composite** → Overlays the logo onto the article image.
9. **Save to Google Drive** → The final branded image is saved in the target folder.
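The polling loop in step 4 can be sketched as a small helper: call the status endpoint, wait, and repeat until the job reports COMPLETED or a timeout hits. The status string follows the workflow's description above; `check` stands in for the "check image finish" HTTP request, and the interval/timeout values are arbitrary examples.

```python
import time

def poll_until_complete(check, interval=2.0, timeout=120.0, sleep=time.sleep):
    """Repeatedly call `check()` until it returns {'status': 'COMPLETED', ...}.

    `check` is any callable returning the job-status dict; `sleep` is
    injectable so the loop can be tested without real waiting.
    """
    waited = 0.0
    while waited < timeout:
        status = check()
        if status.get("status") == "COMPLETED":
            return status
        sleep(interval)
        waited += interval
    raise TimeoutError("image job did not complete in time")
```

In n8n the same shape is built from a Wait node plus an IF node looping back; a timeout guard like the one above prevents a stuck job from looping forever.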
💡 About Margin AI Margin AI is your AI Service Companion. We help organizations design intelligent, human-centric automation — from content pipelines and branding workflows to customer insights and sales enablement. Our tailored AI solutions scale marketing, operations, and creative processes with ease.
by Olivier
This template qualifies and segments B2B prospects in ProspectPro using live web data and AI. It retrieves website content and search snippets, processes them with an LLM, and updates the prospect record in ProspectPro with qualification labels and tags. The workflow ensures each prospect is processed once and can be reused as a sub-flow or direct trigger.

## ✨ Features

- Automatically qualify B2B companies based on website and search content
- Flexible business logic: qualify and segment prospects by your own criteria
- Updates ProspectPro records with labels and tags
- Live data retrieval via Bedrijfsdata.nl RAG API nodes
- Easy customization through a flexible AI setup
- Extendable and modular: use as a trigger workflow or callable sub-flow

## ⚙ Requirements

- n8n instance or cloud workspace
- The Bedrijfsdata.nl Verified Community Node installed
- Bedrijfsdata.nl developer account (14-day free trial, 500 credits)
- The ProspectPro Verified Community Node installed
- ProspectPro account & API credentials (14-day free trial)
- OpenAI API credentials (or another LLM)

## 🔧 Setup Instructions

1. Import the template and set your credentials (Bedrijfsdata.nl, ProspectPro, OpenAI).
2. Connect to a trigger (e.g., ProspectPro "New website visitor") or call as a sub-workflow.
3. Adjust the qualification logic in the Qualify & Tag Prospect node to match your ICP.
4. Optional: extend tags, integrate with Slack/CRM, or add error logging.

## 🔐 Security Notes

- Prevents re-processing of the same prospect using tags
- Error branches included for invalid input or API failures
- LLM output validated via a structured parser

## 🧪 Testing

1. Run with a ProspectPro ID of a company with a known domain.
2. Check the execution history and ProspectPro for enrichment results.
3. Verify the updated tags and qualification label in ProspectPro.

## 📌 About Bedrijfsdata.nl

Bedrijfsdata.nl operates the most comprehensive company database in the Netherlands. With real-time data on 3.7M+ businesses and AI-ready APIs, they help Dutch SMEs enrich CRM, workflows, and marketing automation.

- Website: https://www.bedrijfsdata.nl
- Developer Platform: https://developers.bedrijfsdata.nl
- API docs: docs.bedrijfsdata.nl
- Support: https://www.bedrijfsdata.nl/klantenservice
- Support hours: Monday–Friday, 09:00–17:00 CET

## 📌 About ProspectPro

ProspectPro is a B2B prospecting platform for Dutch B2B SMEs. It helps sales teams identify prospects, identify website visitors, and more.

- Website: https://www.prospectpro.nl
- Platform: https://mijn.prospectpro.nl
- API docs: https://www.docs.bedrijfsdata.nl
- Support: https://www.prospectpro.nl/klantenservice
- Support hours: Monday–Friday, 09:00–17:00 CET
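The "structured parser" validation mentioned under Security Notes can be pictured as a strict JSON check before anything is written back to ProspectPro. The field names below are hypothetical, for illustration only; the template's own parser schema may differ.

```python
import json

# Hypothetical output schema: each field name maps to its required type.
REQUIRED = {"qualified": bool, "label": str, "tags": list}

def parse_qualification(raw: str) -> dict:
    """Validate the LLM's JSON output before updating the prospect record."""
    data = json.loads(raw)
    for field, typ in REQUIRED.items():
        if not isinstance(data.get(field), typ):
            raise ValueError(f"missing or mistyped field: {field}")
    # Normalize tags: stringify, trim, drop empties.
    data["tags"] = [str(t).strip() for t in data["tags"] if str(t).strip()]
    return data
```

Rejecting malformed output here (rather than after the update call) keeps bad LLM responses out of your CRM data and routes them to the workflow's error branch instead.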